Kidney
The kidneys are two bean-shaped organs found in vertebrates. They are located on the left and right in the retroperitoneal space, and in adult humans are about 10–12 centimetres (4–5 in) in length. They receive blood from the paired renal arteries; blood exits into the paired renal veins. Each kidney is attached to a ureter, a tube that carries excreted urine to the bladder.
The nephron is the structural and functional unit of the kidney. Each human adult kidney contains around 1 million nephrons, while a mouse kidney contains only about 12,500 nephrons. The kidney participates in the control of the volume of various body fluids, fluid osmolality, acid-base balance, various electrolyte concentrations, and removal of toxins. Filtration occurs in the glomerulus: one-fifth of the blood volume that enters the kidneys is filtered. Examples of substances reabsorbed are solute-free water, sodium, bicarbonate, glucose, and amino acids. Examples of substances secreted are hydrogen, ammonium, potassium and uric acid. The kidneys also carry out functions independent of the nephron. For example, they convert a precursor of vitamin D to its active form, calcitriol; and synthesize the hormones erythropoietin and renin.
Renal physiology is the study of kidney function. Nephrology is the medical specialty which addresses diseases of kidney "function": these include chronic kidney disease, nephritic and nephrotic syndromes, acute kidney injury, and pyelonephritis. Urology addresses diseases of kidney (and urinary tract) "anatomy": these include cancer, renal cysts, kidney stones and ureteral stones, and urinary tract obstruction.
Procedures used in the management of kidney disease include chemical and microscopic examination of the urine (urinalysis), measurement of kidney function by calculating the estimated glomerular filtration rate (eGFR) from the serum creatinine, and kidney biopsy and CT scan to evaluate for abnormal anatomy. Dialysis and kidney transplantation are used to treat kidney failure; one (or both sequentially) of these is almost always used when renal function drops below 15%. Nephrectomy is frequently used to cure renal cell carcinoma.
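To illustrate how an eGFR is derived from the serum creatinine mentioned above, the following Python sketch implements the 4-variable MDRD study equation. This is only one of several equations in clinical use (the CKD-EPI equation is now generally preferred), the function name and example values are illustrative, and the sketch is not intended for clinical use.

```python
def egfr_mdrd(serum_creatinine_mg_dl: float, age_years: float,
              is_female: bool, is_black: bool) -> float:
    """Estimate GFR (mL/min/1.73 m^2) with the 4-variable MDRD study equation.

    One of several equations used clinically; shown only to illustrate how an
    eGFR is derived from serum creatinine plus demographic factors.
    """
    egfr = 175.0 * (serum_creatinine_mg_dl ** -1.154) * (age_years ** -0.203)
    if is_female:
        egfr *= 0.742
    if is_black:
        egfr *= 1.212
    return egfr

# Illustrative values only: a serum creatinine of 1.0 mg/dL in a 40-year-old
# non-Black male gives an eGFR of roughly 83 mL/min/1.73 m^2.
print(round(egfr_mdrd(1.0, 40, is_female=False, is_black=False)))
```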
In humans, the kidneys are located high in the abdominal cavity, one on each side of the spine, and lie in a retroperitoneal position at a slightly oblique angle. The asymmetry within the abdominal cavity, caused by the position of the liver, typically results in the right kidney being slightly lower and smaller than the left, and being placed slightly more to the middle than the left kidney. The left kidney is approximately at the vertebral level T12 to L3, and the right is slightly lower. The right kidney sits just below the diaphragm and posterior to the liver. The left kidney sits below the diaphragm and posterior to the spleen. On top of each kidney is an adrenal gland. The upper parts of the kidneys are partially protected by the 11th and 12th ribs. Each kidney, with its adrenal gland, is surrounded by two layers of fat: the perirenal fat, which lies between the renal fascia and the renal capsule, and the pararenal fat, which lies external to the renal fascia.
The kidney is a bean-shaped structure with a convex and a concave border. A recessed area on the concave border is the renal hilum, where the renal artery enters the kidney and the renal vein and ureter leave. The kidney is surrounded by tough fibrous tissue, the renal capsule, which is itself surrounded by perirenal fat, renal fascia, and pararenal fat. The anterior (front) surface of these tissues is the peritoneum, while the posterior (rear) surface is the transversalis fascia.
The superior pole of the right kidney is adjacent to the liver. For the left kidney, it is next to the spleen. Both, therefore, move down upon inhalation.
A Danish study of adults found that the median renal length and volume are slightly greater on the left side than on the right.
The functional substance, or parenchyma, of the kidney is divided into two major structures: the outer renal cortex and the inner renal medulla. Grossly, these structures take the shape of eight to 18 cone-shaped renal lobes, each containing renal cortex surrounding a portion of medulla called a renal pyramid. Between the renal pyramids are projections of cortex called renal columns. Nephrons, the urine-producing functional structures of the kidney, span the cortex and medulla. The initial filtering portion of a nephron is the renal corpuscle, which is located in the cortex. This is followed by a renal tubule that passes from the cortex deep into the medullary pyramids. Part of the renal cortex, a medullary ray is a collection of renal tubules that drain into a single collecting duct.
The tip, or papilla, of each pyramid empties urine into a minor calyx; minor calyces empty into major calyces, and major calyces empty into the renal pelvis. This becomes the ureter. At the hilum, the ureter and renal vein exit the kidney and the renal artery enters. Hilar fat and lymphatic tissue with lymph nodes surround these structures. The hilar fat is contiguous with a fat-filled cavity called the renal sinus. The renal sinus collectively contains the renal pelvis and calyces and separates these structures from the renal medullary tissue.
The kidneys possess no overtly moving structures.
The kidneys receive blood from the renal arteries, left and right, which branch directly from the abdominal aorta. Despite their relatively small size, the kidneys receive approximately 20% of the cardiac output. Each renal artery branches into segmental arteries, dividing further into interlobar arteries, which penetrate the renal capsule and extend through the renal columns between the renal pyramids. The interlobar arteries then supply blood to the arcuate arteries that run through the boundary of the cortex and the medulla. Each arcuate artery supplies several interlobular arteries that feed into the afferent arterioles that supply the glomeruli.
Blood drains from the kidneys, ultimately into the inferior vena cava. After filtration occurs, the blood moves through a small network of venules that converge into interlobular veins. The venous drainage mirrors the arterial supply: the interlobular veins drain into the arcuate veins, then into the interlobar veins, which merge to form the renal veins that exit the kidney.
The kidney and nervous system communicate via the renal plexus, whose fibers course along the renal arteries to reach each kidney. Input from the sympathetic nervous system triggers vasoconstriction in the kidney, thereby reducing renal blood flow. The kidney also receives input from the parasympathetic nervous system, by way of the renal branches of the vagus nerve; the function of this input is not yet clear. Sensory input from the kidney travels to the T10-11 levels of the spinal cord and is sensed in the corresponding dermatome. Thus, pain in the flank region may be referred from the corresponding kidney.
Renal histology is the study of the microscopic structure of the kidney. The kidney contains a number of distinct cell types, including the podocytes and mesangial cells of the glomerulus and the epithelial cells of the proximal and distal tubules and collecting ducts.
About 20,000 protein coding genes are expressed in human cells and almost 70% of these genes are expressed in normal, adult kidneys. Just over 300 genes are more specifically expressed in the kidney, with only some 50 genes being highly specific for the kidney. Many of the corresponding kidney-specific proteins are expressed in the cell membrane and function as transporter proteins. The most highly expressed kidney-specific protein is uromodulin, the most abundant protein in urine, with functions that prevent calcification and growth of bacteria. Specific proteins are expressed in the different compartments of the kidney, with podocin and nephrin expressed in the glomeruli, the solute carrier family protein SLC22A8 in the proximal tubules, calbindin in the distal tubules, and aquaporin 2 in the collecting duct cells.
The mammalian kidney develops from intermediate mesoderm. Kidney development, also called "nephrogenesis", proceeds through a series of three successive developmental phases: the pronephros, mesonephros, and metanephros. The metanephros are primordia of the permanent kidney.
The kidneys excrete a variety of waste products produced by metabolism into the urine; these include the nitrogenous wastes urea, from protein catabolism, and uric acid, from nucleic acid metabolism. The microscopic structural and functional unit of the kidney is the nephron, which processes the blood supplied to it by filtration, reabsorption, secretion and excretion; the consequence of those processes is the production of urine. The ability of mammals and some birds to concentrate wastes into a volume of urine much smaller than the volume of blood from which the wastes were extracted depends on an elaborate countercurrent multiplication mechanism. This requires several independent nephron characteristics to operate: a tight hairpin configuration of the tubules, water and ion permeability in the descending limb of the loop, water impermeability in the ascending limb, and active ion transport out of most of the ascending limb. In addition, passive countercurrent exchange by the vessels carrying the blood supply to the nephron is essential for enabling this function.
The kidney participates in whole-body homeostasis, regulating acid-base balance, electrolyte concentrations, extracellular fluid volume, and blood pressure. The kidney accomplishes these homeostatic functions both independently and in concert with other organs, particularly those of the endocrine system. A variety of hormones coordinate these functions, including renin, angiotensin II, aldosterone, antidiuretic hormone, and atrial natriuretic peptide, among others.
Filtration, which takes place at the renal corpuscle, is the process by which cells and large proteins are retained while materials of smaller molecular weights are filtered from the blood to make an ultrafiltrate that eventually becomes urine. The kidney generates 180 liters of filtrate a day. The process is also known as hydrostatic filtration due to the hydrostatic pressure exerted on the capillary walls.
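The figure of 180 litres corresponds to a typical glomerular filtration rate of about 125 mL/min, a standard textbook value rather than one stated in this article:

$$125\ \text{mL/min} \times 1440\ \text{min/day} = 180{,}000\ \text{mL/day} = 180\ \text{L/day}.$$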
Reabsorption is the transport of molecules from this ultrafiltrate into the peritubular capillary. It is accomplished via selective receptors on the luminal cell membrane. Water is 55% reabsorbed in the proximal tubule. Glucose at normal plasma levels is completely reabsorbed in the proximal tubule; the mechanism for this is the Na+/glucose cotransporter. A plasma level of 350 mg/dL will fully saturate the transporters and glucose will be lost in the urine. A plasma glucose level of approximately 160 mg/dL is sufficient to allow glucosuria, which is an important clinical clue to diabetes mellitus.
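As a rough worked example, using the commonly quoted tubular transport maximum for glucose of about 375 mg/min and a GFR of 125 mL/min (textbook values, not figures given in this article), the filtered load of glucose at a plasma concentration of 350 mg/dL (3.5 mg/mL) is

$$\text{filtered load} = 125\ \text{mL/min} \times 3.5\ \text{mg/mL} \approx 438\ \text{mg/min} > T_m \approx 375\ \text{mg/min},$$

so the excess escapes reabsorption and appears in the urine. Glucosuria begins at lower plasma levels because individual nephrons saturate at different loads, an effect known as splay.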
Amino acids are reabsorbed by sodium dependent transporters in the proximal tubule. Hartnup disease is a deficiency of the tryptophan amino acid transporter, which results in pellagra.
Secretion is the reverse of reabsorption: molecules are transported from the peritubular capillary through the interstitial fluid, then through the renal tubular cell and into the ultrafiltrate.
The last step in the processing of the ultrafiltrate is "excretion": the ultrafiltrate passes out of the nephron and travels through a tube called the "collecting duct", which is part of the collecting duct system, and then to the ureters where it is renamed "urine". In addition to transporting the ultrafiltrate, the collecting duct also takes part in reabsorption.
The kidneys secrete a variety of hormones, including erythropoietin, calcitriol, and renin. Erythropoietin is released in response to hypoxia (low levels of oxygen at tissue level) in the renal circulation. It stimulates erythropoiesis (production of red blood cells) in the bone marrow. Calcitriol, the activated form of vitamin D, promotes intestinal absorption of calcium and the renal reabsorption of phosphate. Renin is an enzyme which regulates angiotensin and aldosterone levels.
Although the kidney cannot directly sense blood pressure, long-term regulation of blood pressure predominantly depends upon the kidney. This primarily occurs through maintenance of the extracellular fluid compartment, the size of which depends on the plasma sodium concentration. Renin is the first in a series of important chemical messengers that make up the renin–angiotensin system. Changes in renin ultimately alter the output of this system, principally the hormones angiotensin II and aldosterone. Each hormone acts via multiple mechanisms, but both increase the kidney's reabsorption of sodium chloride, thereby expanding the extracellular fluid compartment and raising blood pressure. Conversely, when renin levels are low, angiotensin II and aldosterone levels decrease, the extracellular fluid compartment contracts, and blood pressure falls.
Two organ systems, the kidneys and lungs, maintain acid-base homeostasis, which is the maintenance of pH around a relatively stable value. The lungs contribute to acid-base homeostasis by regulating carbon dioxide (CO2) concentration. The kidneys have two very important roles in maintaining the acid-base balance: to reabsorb and regenerate bicarbonate from urine, and to excrete hydrogen ions and fixed acids (anions of acids) into urine.
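The division of labour between the two organs can be summarized by the Henderson–Hasselbalch relation for the bicarbonate buffer system (a standard physiology formula, included here only for illustration): the lungs control the carbon dioxide term in the denominator, while the kidneys control the bicarbonate term in the numerator.

$$\mathrm{pH} = 6.1 + \log_{10}\!\left(\frac{[\mathrm{HCO_3^-}]}{0.03 \times P_{\mathrm{CO_2}}}\right),$$

with bicarbonate in mmol/L and the partial pressure of carbon dioxide in mmHg.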
The kidneys help maintain the water and salt level of the body. Any significant rise in plasma osmolality is detected by the hypothalamus, which communicates directly with the posterior pituitary gland. An increase in osmolality causes the gland to secrete antidiuretic hormone (ADH), resulting in water reabsorption by the kidney and an increase in urine concentration. Working together, these responses return the plasma osmolality to its normal level.
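Plasma osmolality is dominated by the plasma sodium concentration, as the commonly used clinical estimate illustrates (a standard approximation, not a formula given in this article):

$$P_{\mathrm{osm}} \approx 2[\mathrm{Na^+}] + \frac{[\text{glucose}]}{18} + \frac{[\text{BUN}]}{2.8}\ \ \text{mOsm/kg},$$

with sodium in mmol/L and glucose and blood urea nitrogen (BUN) in mg/dL; normal values are roughly 275–295 mOsm/kg.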
Various calculations and methods are used to try to measure kidney function. Renal clearance is the volume of plasma from which a substance is completely cleared per unit time. The filtration fraction is the proportion of the plasma flowing through the kidney that is actually filtered at the glomerulus, and can be defined by the equation shown below. The kidney is a very complex organ, and mathematical modelling has been used to better understand kidney function at several scales, including fluid uptake and secretion.
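In symbols (standard definitions; the notation is not taken from this article), the renal clearance of a substance x and the filtration fraction are

$$C_x = \frac{U_x \, \dot V}{P_x}, \qquad \mathrm{FF} = \frac{\mathrm{GFR}}{\mathrm{RPF}},$$

where U_x and P_x are the urine and plasma concentrations of the substance, \(\dot V\) is the urine flow rate, and RPF is the renal plasma flow. The clearance of a marker that is freely filtered but neither reabsorbed nor secreted, such as inulin (or, approximately, creatinine), therefore estimates the GFR.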
Nephrology is the subspecialty of internal medicine that deals with kidney function, disease states related to renal malfunction, and their management, including dialysis and kidney transplantation. Urology is the surgical specialty that deals with structural abnormalities of the kidney, such as kidney cancer and cysts, and with problems of the urinary tract. Nephrologists are internists and urologists are surgeons, though both are often called "kidney doctors". There are overlapping areas in which both nephrologists and urologists provide care, such as kidney stones and kidney-related infections.
There are many causes of kidney disease. Some are acquired over the course of life, such as diabetic nephropathy, whereas others are congenital, such as polycystic kidney disease.
Medical terms related to the kidneys commonly use terms such as "renal" and the prefix "nephro-". The adjective "renal", meaning related to the kidney, is from the Latin "rēnēs", meaning kidneys; the prefix "nephro-" is from the Ancient Greek word for kidney, "nephros (νεφρός)". For example, surgical removal of the kidney is a "nephrectomy", while a reduction in kidney function is called "renal dysfunction".
Generally, humans can live normally with just one kidney, as one has more functioning renal tissue than is needed to survive. Only when the amount of functioning kidney tissue is greatly diminished does one develop chronic kidney disease. Renal replacement therapy, in the form of dialysis or kidney transplantation, is indicated when the glomerular filtration rate has fallen very low or if the renal dysfunction leads to severe symptoms.
Dialysis is a treatment that substitutes for the function of normal kidneys. Dialysis may be instituted when approximately 85%–90% of kidney function is lost, as indicated by a glomerular filtration rate (GFR) of less than 15 mL/min/1.73 m2. Dialysis removes metabolic waste products as well as excess water and sodium (thereby contributing to regulating blood pressure), and maintains many chemical levels within the body. Life expectancy is 5–10 years for those on dialysis; some live up to 30 years. Dialysis can occur via the blood (through a catheter or arteriovenous fistula) or through the peritoneum (peritoneal dialysis). Dialysis is typically administered three times a week for several hours at free-standing dialysis centers, allowing recipients to lead an otherwise essentially normal life.
Many renal diseases are diagnosed on the basis of a detailed medical history, and physical examination. The medical history takes into account present and past symptoms, especially those of kidney disease; recent infections; exposure to substances toxic to the kidney; and family history of kidney disease.
Kidney function is assessed using blood tests and urine tests. A common blood test measures urea and electrolytes, known as a "U and E"; creatinine is also measured. Urine tests such as urinalysis can evaluate pH, protein, glucose, and the presence of blood. Microscopic analysis can also identify the presence of urinary casts and crystals. The glomerular filtration rate (GFR) can be calculated.
Renal ultrasonography is essential in the diagnosis and management of kidney-related diseases. Other modalities, such as CT and MRI, should always be considered as supplementary imaging modalities in the assessment of renal disease.
The role of the renal biopsy is to diagnose renal disease in which the etiology is not clear based upon noninvasive means (clinical history, past medical history, medication history, physical exam, laboratory studies, imaging studies). In general, a renal pathologist will perform a detailed morphological evaluation and integrate the morphologic findings with the clinical history and laboratory data, ultimately arriving at a pathological diagnosis. A renal pathologist is a physician who has undergone general training in anatomic pathology and additional specialty training in the interpretation of renal biopsy specimens.
Ideally, multiple core sections are obtained and evaluated for adequacy (presence of glomeruli) intraoperatively. A pathologist/pathology assistant divides the specimen(s) for submission for light microscopy, immunofluorescence microscopy and electron microscopy.
The pathologist will examine the specimen using light microscopy with multiple staining techniques (hematoxylin and eosin/H&E, PAS, trichrome, silver stain) on multiple level sections. Multiple immunofluorescence stains are performed to evaluate for antibody, protein and complement deposition. Finally, ultra-structural examination is performed with electron microscopy and may reveal the presence of electron-dense deposits or other characteristic abnormalities that may suggest an etiology for the patient's renal disease.
In the majority of vertebrates, the mesonephros persists into the adult, albeit usually fused with the more advanced metanephros; only in amniotes is the mesonephros restricted to the embryo. The kidneys of fish and amphibians are typically narrow, elongated organs, occupying a significant portion of the trunk. The collecting ducts from each cluster of nephrons usually drain into an "archinephric duct", which is homologous with the vas deferens of amniotes. However, the situation is not always so simple; in cartilaginous fish and some amphibians, there is also a shorter duct, similar to the amniote ureter, which drains the posterior (metanephric) parts of the kidney, and joins with the archinephric duct at the bladder or cloaca. Indeed, in many cartilaginous fish, the anterior portion of the kidney may degenerate or cease to function altogether in the adult.
In the most primitive vertebrates, the hagfish and lampreys, the kidney is unusually simple: it consists of a row of nephrons, each emptying directly into the archinephric duct. Invertebrates may possess excretory organs that are sometimes referred to as "kidneys", but, even in "Amphioxus", these are never homologous with the kidneys of vertebrates, and are more accurately referred to by other names, such as nephridia. In amphibians, kidneys and the urinary bladder harbour specialized parasites, monogeneans of the family Polystomatidae.
The kidneys of reptiles consist of a number of lobules arranged in a broadly linear pattern. Each lobule contains a single branch of the ureter in its centre, into which the collecting ducts empty. Reptiles have relatively few nephrons compared with other amniotes of a similar size, possibly because of their lower metabolic rate.
Birds have relatively large, elongated kidneys, each of which is divided into three or more distinct lobes. Each lobe consists of several small, irregularly arranged lobules, each centred on a branch of the ureter. Birds have small glomeruli, but about twice as many nephrons as similarly sized mammals.
The human kidney is fairly typical of that of mammals. Distinctive features of the mammalian kidney, in comparison with that of other vertebrates, include the presence of the renal pelvis and renal pyramids and a clearly distinguishable cortex and medulla. The latter feature is due to the presence of elongated loops of Henle; these are much shorter in birds, and not truly present in other vertebrates (although the nephron often has a short "intermediate segment" between the convoluted tubules). It is only in mammals that the kidney takes on its classical "kidney" shape, although there are some exceptions, such as the multilobed reniculate kidneys of pinnipeds and cetaceans.
Kidneys of various animals show evidence of evolutionary adaptation and have long been studied in ecophysiology and comparative physiology. Kidney morphology, often indexed as the relative medullary thickness, is associated with habitat aridity among species of mammals and diet (e.g., carnivores have only long loops of Henle).
In ancient Egypt, the kidneys, like the heart, were left inside the mummified bodies, unlike other organs which were removed. Comparing this to the biblical statements, and to drawings of human body with the heart and two kidneys portraying a set of scales for weighing justice, it seems that the Egyptian beliefs had also connected the kidneys with judgement and perhaps with moral decisions.
According to studies in modern and ancient Hebrew, various body organs in humans and animals served also an emotional or logical role, today mostly attributed to the brain and the endocrine system. The kidney is mentioned in several biblical verses in conjunction with the heart, much as the bowels were understood to be the "seat" of emotion – grief, joy and pain. Similarly, the Talmud ("Berakhoth" 61.a) states that one of the two kidneys counsels what is good, and the other evil.
In the sacrifices offered at the biblical Tabernacle and later on at the temple in Jerusalem, the priests were instructed to remove the kidneys and the adrenal gland covering the kidneys of the sheep, goat and cattle offerings, and to burn them on the altar, as the holy part of the "offering for God" never to be eaten.
In ancient India, according to the Ayurvedic medical systems, the kidneys were considered the beginning of the excursion channels system, the 'head' of the "Mutra Srota"s, receiving from all other systems, and therefore important in determining a person's health balance and temperament by the balance and mixture of the three 'Dosha's – the three health elements: Vatha (or Vata) – air, Pitta – bile, and Kapha – mucus. The temperament and health of a person can then be seen in the resulting color of the urine.
Modern Ayurveda practitioners, a practice which is characterized as pseudoscience, have attempted to revive these methods in medical procedures as part of Ayurveda Urine therapy. These procedures have been called "nonsensical" by skeptics.
The Latin term "renes" is related to the English word "reins", a synonym for the kidneys in Shakespearean English (e.g. "Merry Wives of Windsor" 3.5), which was also the time when the King James Version of the Bible was translated. Kidneys were once popularly regarded as the seat of the conscience and reflection, and a number of verses in the Bible (e.g. Ps. 7:9, Rev. 2:23) state that God searches out and inspects the kidneys, or "reins", of humans, together with the heart.
The kidneys, like other offal, can be cooked and eaten.
Kidneys are usually grilled or sautéed, but in more complex dishes they are stewed with a sauce that will improve their flavor. In many preparations, kidneys are combined with pieces of meat or liver, as in mixed grill. Dishes include the British steak and kidney pie, the Swedish "hökarpanna" (pork and kidney stew), the French "rognons de veau sauce moutarde" (veal kidneys in mustard sauce) and the Spanish "riñones al Jerez" (kidneys stewed in sherry sauce).
Kidney stones have been identified and recorded for about as long as written historical records exist. The urinary tract, including the ureters, and its function of draining urine from the kidneys, was described by Galen in the second century AD.
The first to examine the ureter through an internal approach, called ureteroscopy, rather than surgery was Hampton Young, in 1929. This was improved upon by V. F. Marshall, who published the first use of a flexible endoscope based on fiber optics in 1964. The insertion of a drainage tube into the renal pelvis, bypassing the ureters and urinary tract, called nephrostomy, was first described in 1941. Such approaches differed greatly from the open surgical approaches to the urinary system employed during the preceding two millennia.
https://en.wikipedia.org/wiki?curid=17025
Kermit (protocol)
Kermit is a computer file transfer/management protocol and a set of communications software tools primarily used in the early years of personal computing in the 1980s. It provides a consistent approach to file transfer, terminal emulation, script programming, and character set conversion across many different computer hardware and operating system platforms.
The Kermit protocol supports text and binary file transfers on both full-duplex and half-duplex 8-bit and 7-bit serial connections in a system- and medium-independent fashion, and is implemented on hundreds of different computer and operating system platforms. On full-duplex connections, a Sliding Window Protocol is used with selective retransmission which provides excellent performance and error recovery characteristics. On 7-bit connections, locking shifts provide efficient transfer of 8-bit data. When properly implemented, as in the Columbia University Kermit Software collection, its authors claim performance is equal to or better than other protocols such as ZMODEM, YMODEM, and XMODEM, especially on poor connections. On connections over RS-232 Statistical Multiplexers where some control characters cannot be transmitted, Kermit can be configured to work, unlike protocols like XMODEM that require the connection to be transparent (i.e. all 256 possible values of a byte to be transferable).
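To illustrate how Kermit keeps packet data within the printable range, here is a minimal Python sketch based on the encoding functions described in the public Kermit protocol documentation (tochar, ctl, control-character prefixing, and the single-character block check). It is a simplified illustration, not taken from any actual Kermit implementation, and it ignores 8th-bit prefixing, repeat-count compression and parameter negotiation.

```python
PREFIX = "#"  # default Kermit control-prefix character

def tochar(n: int) -> str:
    """Map a small integer (0..94) to a printable ASCII character (Kermit 'tochar')."""
    return chr(n + 32)

def ctl(byte: int) -> int:
    """Toggle bit 6, turning a control byte into a printable one and back (Kermit 'ctl')."""
    return byte ^ 64

def encode_data(data: bytes) -> str:
    """Encode raw bytes into the printable form used inside Kermit packet data fields.

    Control characters are sent as the prefix followed by the ctl()-transformed
    byte; a literal prefix character is doubled; everything else passes through.
    """
    out = []
    for b in data:
        if b < 32 or b == 127:          # control character: prefix + transform
            out.append(PREFIX + chr(ctl(b)))
        elif chr(b) == PREFIX:          # literal prefix character must itself be prefixed
            out.append(PREFIX + PREFIX)
        else:
            out.append(chr(b))
    return "".join(out)

def block_check_1(packet_body: str) -> str:
    """Single-character (type 1) Kermit block check over the packet body."""
    s = sum(ord(c) for c in packet_body)
    return tochar((s + ((s & 192) >> 6)) & 63)

# Example: the carriage return/line feed pair becomes "#M#J" on the wire.
print(encode_data(b"hello\r\n"))
```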
Kermit can be used as a means to bootstrap other software, even itself. To distribute Kermit through networks that were not 8-bit clean, Columbia developed .boo, a binary-to-text encoding system similar to BinHex. After a mainframe computer receives MS-DOS Kermit in .boo format, users can type in a "baby Kermit" in BASIC on their personal computers that downloads Kermit and converts it back into binary.
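The following Python sketch shows only the general idea behind such binary-to-text encodings, spreading three bytes across four printable characters. It is explicitly not the real .boo format (whose exact details, such as its run-length compression of nulls, are not covered here); the function name and character offset are illustrative assumptions.

```python
def encode_printable(data: bytes) -> str:
    """Pack each 3 input bytes into 4 printable characters (offset '0').

    NOT the actual .boo file format, only an illustration of the idea:
    24 bits of binary data are split into four 6-bit groups, each mapped into
    the printable ASCII range so the result survives 7-bit, text-only links.
    """
    out = []
    padded = data + b"\0" * (-len(data) % 3)   # pad to a multiple of 3 bytes
    for i in range(0, len(padded), 3):
        n = int.from_bytes(padded[i:i + 3], "big")   # 24 bits
        for shift in (18, 12, 6, 0):                  # four 6-bit groups
            out.append(chr(((n >> shift) & 0x3F) + ord("0")))
    return "".join(out)

# The encoded text uses only characters '0'..'o' and can be keyed in or mailed
# over links that would mangle raw binary.
print(encode_printable(b"Kermit"))
```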
Similarly, CP/M machines use many different floppy disk formats, which means that one machine often cannot read disks from another CP/M machine, and Kermit is used as part of a process to transfer applications and data between CP/M machines and other machines with different operating systems. The CP/M file-copy program PIP can usually access a computer's serial (RS-232) port, and if configured to use a very low baud rate (because it has no built-in error correction) can be used to transfer a small, simple version of Kermit from one machine to another over a null modem cable, or failing that, a very simple version of the Kermit protocol can be hand coded in binary in less than 2K using DDT, the CP/M Dynamic Debugging Tool. Once done, the simple version of Kermit can be used to download a fully functional version. That version can then be used to transfer any CP/M application or data.
Newer versions of Kermit included scripting language and automation of commands. The Kermit scripting language evolved from its TOPS-20 EXEC-inspired command language and was influenced syntactically and semantically by ALGOL 60, C, BLISS-10, PL/I, SNOBOL, and LISP.
The correctness of the Kermit protocol has been verified with formal methods.
In the late 1970s, users of Columbia University's mainframe computers had only 35 kilobytes of storage per person. Kermit was developed at the university so students could move files between the mainframes, such as IBM systems and the DEC DECSYSTEM-20, and floppy disks on microcomputers around campus, such as Intertec Superbrains running CP/M. IBM mainframes used the EBCDIC character set while CP/M and DEC machines used ASCII, so conversion between the two character sets was one of the early functions built into Kermit. The first file transfer with Kermit occurred in April 1981. The protocol was originally designed in 1981 by Frank da Cruz and Bill Catchings.
Columbia University coordinated development of versions of Kermit for many different computers at the university and elsewhere, and distributed the software for free; Kermit for the new IBM Personal Computer became especially popular. In 1986 the university founded the Kermit Project, which took over development and started charging fees for commercial use while keeping the software free for non-commercial use; the project was financially self-sufficient.
By 1988 Kermit was available on more than 300 computers and operating systems. The protocol became a de facto data communications standard for transferring files between dissimilar computer systems, and by the early 1990s it could convert multilingual character encodings. Kermit software has been used in many countries, for tasks ranging from simple student assignments to solving compatibility problems aboard the International Space Station. It was ported to a wide variety of mainframe, minicomputer and microcomputer systems down to handhelds and electronic pocket calculators. Most versions had a user interface based on the original TOPS-20 Kermit. Later versions of some Kermit implementations also support network as well as serial connections.
Implementations that are presently supported include C-Kermit (for Unix and OpenVMS) and Kermit 95 (for versions of Microsoft Windows from Windows 95 onwards and OS/2), but other versions remain available as well.
As of 1 July 2011, Columbia University ceased to host this project and released it to open source. In June 2011, the Kermit Project released a beta version of C-Kermit v9.0 under an Open Source Revised 3-Clause BSD License.
As well as the implementations developed and/or distributed by Columbia University, the Kermit protocol was implemented in a number of third-party communications software packages, among others ProComm and ProComm Plus. The term "SuperKermit" was coined by third-party vendors to refer to higher speed Kermit implementations offering features such as full duplex operation, sliding windows, and long packets; however, that term was deprecated by the original Kermit team at Columbia University, who saw these as simply features of the core Kermit protocol.
Kermit was named after Kermit the Frog from The Muppets, with permission from Henson Associates. The program's icon in the Apple Macintosh version was a depiction of Kermit the Frog. A backronym was nevertheless created, perhaps to avoid trademark issues, "KL10 Error-Free Reciprocal Microprocessor Interchange over TTY lines."
Kermit is an open protocol—anybody can base their own program on it, but some Kermit software and source code is copyright by Columbia University. As of version 9.0 (starting with the first test release after Alpha.09), C-Kermit has an Open Source license, the Revised 3-Clause BSD License. Everybody can use it as they wish for any purpose, including redistribution and resale. It may be included with any operating system where it works or can be made to work, including both free and commercial versions of Unix and Hewlett-Packard (formerly DEC) VMS (OpenVMS). Technical support was available from Columbia University through 30 June 2011.
https://en.wikipedia.org/wiki?curid=17027
Kermit the Frog
Kermit the Frog is a Muppet character created and originally performed by Jim Henson. Introduced in 1955, Kermit serves as the straight man protagonist of numerous Muppet productions, most notably "Sesame Street" and "The Muppet Show", as well as in other television series, feature films, specials, and public service announcements through the years. Henson performed Kermit until his death in 1990; Steve Whitmire performed Kermit from that time until his dismissal in 2016. Kermit is currently performed by Matt Vogel. He was also voiced by Frank Welker in "Muppet Babies" and occasionally in other animation projects, and is voiced by Matt Danner in the 2018 reboot of "Muppet Babies".
Kermit performed the hit singles "Bein' Green" in 1970 and "Rainbow Connection" in 1979 for "The Muppet Movie", the first feature-length film featuring the Muppets. "The Rainbow Connection" reached No. 25 on the "Billboard" Hot 100. Kermit's iconic look and voice have been recognizable in popular culture worldwide since, and in 2006, the character was credited as the author of "Before You Leap: A Frog's Eye View of Life's Greatest Lessons", an "autobiography" told from the perspective of the character himself.
Kermit first appeared on May 9, 1955, in the premiere of WRC-TV's "Sam and Friends". This prototype Kermit was created from a discarded spring coat belonging to Henson's mother and two ping pong ball halves for eyes.
Initially, Kermit was a lizard-like creature. He subsequently made a number of television appearances before his status as a frog was established. His collar was added at the time to make him seem more frog-like and to conceal the seam between his head and body.
The origin of Kermit's name is a subject of some debate. It is often claimed that Kermit was named after Henson's childhood friend Kermit Scott, from Leland, Mississippi. However, Karen Falk, head archivist and board of directors member for the Jim Henson Legacy organization, has denied this claim on the Jim Henson Company's website.
Joy DiMenna, the only daughter of Kermit Kalman Cohen, who worked as a sound engineer at WBAL-TV during Jim Henson's time with "Sam and Friends", recalls that the puppet was named after her father, an account supported by Kermit Cohen's obituary and by Lenny Levin, a colleague of Cohen's at WBAL.
Another common belief is that Kermit was named for Kermit Love, who worked with Henson in designing and constructing Muppets, particularly on "Sesame Street"; but Love's association with Henson did not begin until well after Kermit's creation and naming, and he always denied any connection between his name and that of the character.
As "Sesame Street" is localized for some different markets that speak languages other than English, Kermit is often renamed. In Portugal, he is called "Cocas, o Sapo" ("sapo" means "toad"), and in Brazil, his name is similar: "Caco, o Sapo". In most of Hispanic America, his name is "la rana René" ("René the Frog"), while in Spain, he is named "Gustavo". In the Arabic version, he is known as "Kamel", which is a common Arabic male name that means "perfect". In Hungary, he is called "Breki" (onomatopoetic).
Jim Henson originated the character in 1955 on his local television series, "Sam and Friends". Brian Henson described his father's performance as Kermit as "coming out of his own personality—was a wry intelligence, a little bit of a naughtiness, but Kermit always loved everyone around and also loved a good prank." He continued to perform the character until his death in 1990. Henson's last known performance as Kermit was for an appearance on "The Arsenio Hall Show" to promote "The Muppets at Walt Disney World". Henson died twelve days after that appearance.
Following Henson's death, veteran Muppet performer Steve Whitmire was named Kermit's new performer. In 2017, Whitmire seemed to imply in a blog post that Henson had asked him to assume the role before he died, though Henson's daughter Cheryl claimed Brian had selected him after her father's death. Whitmire's first public performance as Kermit was at the end of the television special "The Muppets Celebrate Jim Henson" in 1990. He remained Kermit's principal performer until 2016. Whitmire later revealed that he had not chosen to voluntarily leave the role, but rather had been dismissed by The Muppets Studio in October 2016. In an interview with "The Hollywood Reporter" later in July 2017, Whitmire elaborated he was fired for two reasons: long-term creative disagreements over Kermit's characterization and prolonged labor union negotiations that delayed his involvement in Muppet-related productions. Disney announced that Matt Vogel would be taking over as the performer of Kermit on July 10, 2017.
For a brief demonstration at "MuppetFest" (a 2001 Muppet fan convention), Muppet performer John Kennedy performed Kermit opposite Whitmire's performance of young Kermit (from "Kermit's Swamp Years"). Kennedy also performed Kermit for "Muppets Ahoy!", a 2006 Disney Cruise Line stage show (though Whitmire performed Kermit for the first few shows). Muppet performer Artie Esposito briefly performed Kermit in 2009 for a few personal appearances (an appearance on "America's Got Talent", the "MTV Video Music Awards", and at the 2009 D23 Expo).
Voice actor Frank Welker provided the voice of Baby Kermit on the animated Saturday morning cartoon, "Muppet Babies". He also provided the voice of an adult Kermit for a short-lived spin-off, "Little Muppet Monsters". Matt Danner voices Baby Kermit on the 2018 reboot of "Muppet Babies".
A biography has been developed for Kermit the Frog as if he was an actual living performer rather than a puppet character. According to this fictional biography, he was born in Leland, Mississippi, alongside approximately 2,353 siblings, though a 2011 "interview" on "The Ellen DeGeneres Show" has him state that he was from the swamps of Louisiana.
As portrayed in the 2002 film "Kermit's Swamp Years", at the age of 12, he was the first of his siblings to leave the swamp, and one of the first frogs to talk to humans. He is shown in the film encountering a 12-year-old Jim Henson (played by Christian Kriebel) for the first time.
According to "The Muppet Movie," Kermit returned to the swamp, where a passing agent (Dom DeLuise) noted he had talent and, thus inspired, he headed to Hollywood, encountering the rest of the Muppets along the way. Together, they were given a standard "rich and famous" contract by Lew Lord (Orson Welles) of Wide World Studios and began their showbiz careers. In "Before You Leap", Kermit again references encountering Jim Henson sometime after the events depicted in the course of "The Muppet Movie" and details their friendship and their partnership in the entertainment industry, crediting Henson as being the individual to whom he owes his fame. At some point after the events of "The Muppet Movie", Kermit and the other Muppets begin "The Muppet Show", and the characters remain together as a group, before starring in the other Muppet films and "Muppets Tonight", with Kermit usually at the core of the stories as the lead protagonist. Kermit is shown in "The Muppet Movie" as stating that the events of the film are "approximately how it happened" when asked by his nephew Robin about how the Muppets got started.
Fozzie Bear is portrayed as Kermit's best friend—a fact reiterated by Kermit in "Before You Leap"—and the two were frequently seen together during sketches on "The Muppet Show" and in other Muppet-related media and merchandise.
On August 4, 2015, Kermit the Frog and Miss Piggy "announced" that they had ended their romantic relationship. On September 2, 2015, Kermit was stated to have found a new girlfriend, a pig named Denise, but around February 2016, Denise supposedly broke up with Kermit after almost six months together.
Kermit has been featured prominently on both "The Muppet Show" and "Sesame Street". However, he had a prominent career before "Sesame Street" debuted in 1969: he starred in "Sam and Friends", and numerous Muppets made guest appearances on "Today" from 1961 and "The Ed Sullivan Show" from 1966.
Kermit was one of the original main Muppet characters on "Sesame Street". Closely identified with the show, Kermit usually appeared as a lecturer on simple topics, a straight man to another Muppet foil (usually Grover, Herry Monster or Cookie Monster), or a news reporter interviewing storybook characters for Sesame Street News. He sang many songs on the show, including "Bein' Green", and appeared in the 1998 video "The Best of Kermit on Sesame Street".
Unlike the rest of the show's Muppets, Kermit was never any property of Sesame Workshop and has rarely been a part of the show's merchandise. When Sesame Workshop bought full ownership of its characters from The Jim Henson Company for $180 million, Kermit was not included in the deal. The character now belongs to The Muppets Studio, a division of The Walt Disney Company. His first "Sesame Street" appearance since Disney ownership was in an "Elmo's World" segment in the show's 40th-season premiere on November 10, 2009. His most recent appearance was in the 2019 television special "Sesame Street's 50th Anniversary Celebration", where he performed "Bein' Green" with Elvis Costello.
In "The Muppet Show" television series, Kermit was the central character, the showrunner, and the long-suffering stage manager of the theater show, trying to keep order amidst the chaos created by the other Muppets. Henson once claimed that Kermit's job on the "Muppet Show" was much like his own: "trying to get a bunch of crazies to actually get the job done." It was on this show that the running gag of Kermit being pursued by leading lady Miss Piggy developed.
On "Muppets Tonight", Kermit was still a main character, although he was the producer rather than frontman. He appeared in many parody sketches such as "NYPD Green", "City Schtickers", "Flippers", and "The Muppet Odd Squad", as well as in the "Psychiatrist's Office" sketch.
Kermit also served as the mascot for The Jim Henson Company, until the sale of the Muppet characters to Disney. A Kermit puppet can be seen at the National Museum of American History.
Kermit appears in "Muppet*Vision 3D", an attraction that opened in 1991 at Disney's Hollywood Studios at Walt Disney World in Lake Buena Vista, Florida. The character was also formerly featured in the aforementioned attraction in Disney California Adventure Park at the Disneyland Resort in Anaheim, California until its closure in 2014. Kermit also appears at the Magic Kingdom at "The Muppets Present...Great Moments in American History". He also appeared in two parades; Disney Stars and Motor Cars Parade which ran at Disney's Hollywood Studios from 2001 to 2008 and Disney's Honorary VoluntEars Cavalcade which was held during 2010 at the Magic Kingdom and Disneyland.
Kermit the Frog has appeared in almost every Muppet production, as well as making guest appearances in other shows and movies. Some of his more notable appearances and honors are described below.
Kermit was awarded an honorary doctorate of Amphibious Letters on May 19, 1996, at Southampton College, New York, where he also gave a commencement speech. He is also the only "amphibian" to have had the honor of addressing the Oxford Union. A statue of Henson and Kermit was erected on the campus of Henson's alma mater, the University of Maryland, College Park in 2003.
Kermit was also given the honor of being the Grand Marshal of the Tournament of Roses Parade in 1996. The Macy's Thanksgiving Day Parade has featured a Kermit balloon since 1977.
On November 14, 2002, Kermit the Frog received a star on the Hollywood Walk of Fame. The star is located at 6801 Hollywood Blvd.
On Kermit's 50th birthday in 2005, the United States Postal Service released a set of new stamps with photos of Kermit and some of his fellow Muppets on them. The background of the stamp sheet features a photo of a silhouetted Henson sitting in a window well, with Kermit sitting in his lap looking at him.
Kermit was also the grand marshal for Michigan State University's homecoming parade in 2006.
In 2013, the original Kermit puppet from "Sam and Friends" was donated to the Smithsonian Institution in Washington, D.C. for display in the pop culture gallery. In 2015, the Leland Chamber of Commerce in Leland, Mississippi opened a small museum containing puppets and memorabilia dedicated to Kermit.
Kermit's legacy is also deeply entrenched in the science community. One of the famous WP-3D "Orion" research platforms flown by the NOAA Hurricane Hunters is named after Kermit. The other is named after Miss Piggy. In 2015, the discovery of the Costa Rican glass frog "Hyalinobatrachium dianae" also attracted viral media attention due to the creature's perceived resemblance to Kermit, with researcher Brian Kubicki quoted as saying "I am glad that this species has ended up getting so much international attention, and in doing so it is highlighting the amazing amphibians that are native to Costa Rica and the need to continue exploring and studying the country's amazing tropical forests".
Kermit has made numerous guest appearances on popular television shows, including co-hosting individual episodes of a number of long-running talk shows. On April 2, 1979, Kermit guest-hosted "The Tonight Show Starring Johnny Carson" to promote "The Muppet Movie". From 1983 to 1995, the French political satire show "Le Bébête Show" used copies of various Muppets to parody key political figures; Kermit, renamed "Kermitterrand", embodied President François Mitterrand. On May 21, 2018, Kermit and contestant Maddie Poppe performed "Rainbow Connection" live on American Idol.
As an April Fool's joke, Kermit hosted CNN's "Larry King Live" in 1994 and interviewed Hulk Hogan. Kermit was also a semi-regular during various incarnations of "Hollywood Squares", with other Muppets such as Big Bird and Oscar the Grouch also making appearances on the original "Hollywood Squares".
Jim Henson's characters, including the Muppets, have inspired merchandise internationally, with Chris Bensch, chief curator of Rochester, New York's The Strong National Museum of Play, reporting "There seems to have been a particular craze for Kermit the Frog in Japan," likely due to the "cuteness appeal". Baby Kermit plush toys became popular in the 1980s after the success of "Muppet Babies". In 1991, one year after Jim Henson died, merchandise featuring Kermit and other Muppet characters was being sold at Disney theme parks, causing Henson Associates to file a lawsuit against Disney for copyright infringement. Henson alleged that the "counterfeit merchandise" falsely indicated that the characters belonged to Disney, although the latter company had the right to exercise use of the characters due to an earlier licensing agreement. Henson Associates highlighted a T-shirt displaying Kermit, the Disney brand, and a copyright symbol. Disney representative Erwin Okun said the lawsuit was "outrageous" and "an unfortunate break with the legacy of a fine relationship with Disney that Jim Henson left behind". Disney later acquired the Muppets, and thus clothes, toys and souvenirs depicting Kermit and the Muppets continued to be sold at Disney theme parks and stores.
The Leland Chamber of Commerce's small Kermit-themed museum set out to preserve some of the dolls and merchandise. In 2016, "The New Zealand Herald" reported a hat featuring Kermit sipping Lipton tea, associated with the "But That's None of My Business" Internet meme, became a popular seller after basketball player LeBron James drew attention for wearing one.
In March 2007, "Sad Kermit", an unofficial parody, was uploaded to the website YouTube, showing a store-bought Kermit puppet performing a version of the Nine Inch Nails song "Hurt" in a style similar to Johnny Cash's famous cover version. In contrast to the real Kermit character's usual family-friendly antics, the video shows the puppet engaging in drug abuse, smoking, alcoholism, performing oral sex on Rowlf the Dog, smashing a picture of Miss Piggy (with a breast exposed) and attempting suicide. The video became an Internet meme. The "Victoria Times Colonist" called it an "online sensation". The "Chicago Sun-Times" said it "puts the high in 'Hi-ho!'" The "London Free Press" said "Sad Kermit is in a world of pain". The "Houston Press" described it as the "world's most revolting web phenomenon". SF Weekly described the unauthorized video as "ironic slandering". Clips have been featured on the Canadian television series "The Hour", where host George Stroumboulopoulos speculated that the Kermit version of "Hurt" was inspired by the Cash version rather than that of Nine Inch Nails.
Kermit has also appeared in a popular meme in which he is shown sipping tea, "one used when you sassily point something out, and then slyly back away, claiming that it's not [your] business". The photo is taken from "Be More Kermit," a Lipton advertisement that aired in 2014, and was adapted into the "But That's None of My Business" meme by African American comedians on the Tumblr blog Kermit the Snitch, making appearances on Twitter, Instagram and Facebook. Charles Pulliam-Moore of the TV station "Fusion" praised "But That's None of My Business" as "a symbol for the comedic brilliance born out of black communities on the internet," but Stephanie Hayes of "Bustle" magazine criticized the memes as racist and obscene.
In 2016, a "Good Morning America" post on Twitter referred to "But That's None of My Business" as "Tea Lizard," becoming the subject of viral online derision. "New York" magazine replied that, "Kermit is a frog. A frog is an amphibian. A lizard is a reptile. It's just so insulting. Beyond a frog and a lizard both being clearly ectothermic, they couldn't be any more different. Not all green things are the same, you ignorant bastards". "Popular Science" also addressed the misnomer, writing "Frogs, which are amphibians, have quite a few significant differences from reptiles in how they breathe, their life cycles, whether they have scales or not... there's a lot to absorb here".
In November 2016, a new meme surfaced of Kermit talking to a hooded version of himself which represents the self and its dark inner thoughts. It involves captioning of a screenshot taken from the "Muppets Most Wanted" movie of Kermit and Constantine looking at each other.
https://en.wikipedia.org/wiki?curid=17029
KHAD
Khadamat-e Aetla'at-e Dawlati, translated literally as "State Intelligence Agency" and more precisely as "State Information Services", and almost always known by its acronym KHAD (or KhAD), was the main security and intelligence agency of Afghanistan, and also served as the secret police during the Soviet occupation. The successor to AGSA (Department for Safeguarding the Interests of Afghanistan) and KAM (Security and Intelligence Organization), KHAD was nominally part of the Afghan state, but it was firmly under the control of the Soviet KGB until 1989. In January 1986 its status was upgraded and it was thereafter officially known as the Ministry of State Security (Wizarat-i Amaniyyat-i Dawlati, or WAD).
After the December 1979 Soviet invasion, the KAM was renamed KhAD and came under the control of the KGB. It was an agency specifically created for the suppression of the Democratic Republic's internal opponents. KHAD continued to operate after the fall of the Soviet-backed government in 1992, acting as the intelligence arm of the United Front ("Northern Alliance") during the civil war in Afghanistan (1992–1996).
Little is known of its internal organization and most of its records were either destroyed by the Taliban (along with its headquarters) or were taken to Moscow by the KGB (particularly ones which outlined membership, informants, and assignments with Soviet or KGB personnel) where they remain classified to this day. KHAD's system of informers and operatives extended into virtually every aspect of Afghan life, especially in the government-controlled urban areas. Aside from its secret police work, KHAD supervised ideological education at schools and colleges, ran a special school for war orphans, and recruited young men for the militia.
Its importance to Moscow was reflected in the fact that it was chiefly responsible for the training of a new generation of Afghans who would be loyal to the Soviet Union. Another important area was work with tribes and ethnic minorities. KHAD collaborated with the Ministry of Nationalities and Tribal Affairs to foster support for the regime in the countryside. KHAD also directed its attention to Afghanistan's Hindu and Sikh religious minorities.
KHAD was also responsible for co-opting religious leaders. It funded an official body known as the Religious Affairs Directorate and recruited pro-regime "ulama" and mosque attendants to spy on worshipers.
Some sources give 60% of the People's Democratic Party of Afghanistan's membership as belonging to the armed forces, "Sarandoy", or KHAD.
KHAD also had a political role that was clearly unintended by the Soviets. It was initially headed by Mohammad Najibullah, until he became President of Afghanistan in 1986. Najibullah and other high officials were Parchamis. As head of the KHAD apparatus, Najibullah was also extremely powerful.
Consequently, KHAD evolved into a Parchami stronghold, as zealous in suppressing Khalqis in the government and the armed forces as it was in suppressing enemies of the revolution.
There was a bitter rivalry between Najibullah and Sayed Muhammad Gulabzoi. Gulabzoi, a Khalq sympathizer, was Minister of the Interior and commander of "Sarandoy" ("Defenders of the Revolution"), the national gendarmerie. Gulabzoi was one of the few prominent Khalqis remaining in office in a Parcham-dominated regime.
In late 1985, Najibullah was promoted to be a secretary on the PDPA Central Committee; in this capacity he could exercise party authority over all security organs, including those attached to the Khalq-dominated defense and interior ministries. The promotion was assumed to be a reward for the efficiency and ruthlessness of the secret police, which stood in sharp contrast to the performance of the poorly trained and demoralized armed forces.
In the mid-1980s, KHAD enjoyed a formidable measure of autonomy in relation to other Afghan state institutions.
KHAD reportedly had some success in penetrating the leadership councils of several resistance groups, most of which were headquartered in Pakistan. By the mid-1980s KHAD had gained a fearsome reputation as the eyes, ears, and scourge of the regime. Its influence was pervasive and its methods lawless. KHAD's activities reached beyond the borders of Afghanistan to neighboring Pakistan and Iran.
After the establishment of the Karzai government in 2001, KHAD was re-established, and General Arif of the Northern Alliance became its chief. KHAD was directly controlled by the defense minister Mohammed Fahim, who had previously controlled it from 1992 until the Taliban took Kabul in 1996. There were complaints that the Northern Alliance used KHAD as a tool against its opponents.
KHAD was also accused of human rights abuses in the mid-1980s. These included the use of torture, the use of predetermined "show trials" to dispose of political prisoners, and widespread arbitrary arrest and detention. Secret trials and the execution of prisoners without trial were also common.
It was especially active and aggressive in the urban centers, especially in Kabul. Organizations such as Amnesty International continued to publish detailed reports of KHAD's use of torture and of inhumane conditions in the country's prisons and jails.
KHAD also operated eight detention centers in the capital, which were located at KHAD headquarters, at the Ministry of the Interior headquarters, and at a location known as the Central Interrogation Office. The most notorious of the Communist-run detention centers was Pul-e-Charkhi prison, where 27,000 political prisoners are thought to have been murdered. Mass graves of executed prisoners dating back to the Soviet era have recently been uncovered.
On 29 February 2000, when the Netherlands had no diplomatic mission in Afghanistan, the Dutch Ministry of Foreign Affairs published a disputed report on the involvement of KHAD in human rights abuses. The report was partly based on secret sources, alleged to be biased political partisans aligned with the Taliban and the Pakistani intelligence agency ISI. Some of its conclusions had already appeared in the Dutch press before the official publication of the full report. The report, frequently quoted in cases of Afghan asylum seekers to support the exclusion ground of article 1F of the Convention Relating to the Status of Refugees under Dutch national refugee policy, was also published in an English translation on 26 April 2001. In 2008 the UNHCR published another report on the matter, contesting some of the Dutch report's conclusions.
On 14 October 2005, the District Court in The Hague convicted two high-ranking KhAD officers who had sought asylum in the Netherlands in the 1990s. Hesamuddin Hesam and Habibullah Jalalzoy were found guilty of complicity in torture and violations of the laws and customs of war, committed in Afghanistan in the 1980s. Hesam, who had been head of the military intelligence service (KhAD-e-Nezamy) and deputy minister of the Ministry of State Security (WAD), was sentenced to 12 years' imprisonment. Jalalzoy, who had headed the investigations and interrogations unit within KhAD's military intelligence, was sentenced to 9 years' imprisonment. On 29 January 2007, the Dutch appeal court upheld the sentences. The judgements were confirmed by the Dutch Supreme Court on 10 July 2008.
On 25 June 2007, the District Court in The Hague acquitted another senior KhAD officer, General Abdullah Faqirzada, who had been one of the deputy heads of the KhAD-e-Nezamy from 1980 until 1987. Although the court held it plausible that Faqirzada was closely involved with the human rights abuses in the military branch of the KhAD, it concluded there was no evidence of his individual involvement in, or command responsibility for, the specific crimes on which the charges were based. On 16 July 2009, the Dutch appeal court upheld the acquittal.
|
https://en.wikipedia.org/wiki?curid=17031
|
Kantele
A kantele () is a traditional Finnish and Karelian plucked string instrument (chordophone) belonging to the south east Baltic box zither family known as the Baltic psaltery along with Estonian kannel, Latvian kokles, Lithuanian kanklės and Russian gusli.
Modern instruments with 15 or fewer strings are generally more closely modeled on traditional shapes, and form a category of instrument known as small kantele, in contrast to the modern concert kantele.
The oldest forms of kantele have 5 or 6 horsehair strings and a wooden body carved from one piece; more modern instruments have metal strings and often a body made from several pieces. The traditional kantele has neither bridge nor nut, the strings run directly from the tuning pegs to a metal bar ("varras") set into wooden brackets ("ponsi"). Though not acoustically efficient, this construction is part of the distinctive sound of the kantele.
The most typical and traditional tuning of the five-string small kantele is just intonation arrived at via five-limit tuning, often in D major or D minor, especially when the kantele is played as a solo instrument or as part of a folk music ensemble. The major triad is then formed by D1–F♯1–A1. Modern variants of the small kantele often have semitone levers for some strings; the most typical, on a five-string kantele, switches between F1 and F♯1, which allows most folk music to be played without retuning. Larger small kanteles very often have other semitone levers as well, allowing a more varied selection of music to be played without retuning.
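As a rough illustration of what such a five-limit just intonation means in practice, the Python sketch below computes string frequencies for a five-string kantele. It assumes the strings cover the first five degrees of a D major scale (D–E–F♯–G–A) and uses 293.66 Hz (equal-tempered D4) purely as an illustrative reference pitch for the lowest string; actual string layouts and reference pitches vary from instrument to instrument.

```python
from fractions import Fraction

# Five-limit just intonation ratios for the first five degrees of a major
# scale: tonic, major second, major third, perfect fourth, perfect fifth.
RATIOS = {
    "D":  Fraction(1, 1),
    "E":  Fraction(9, 8),
    "F#": Fraction(5, 4),
    "G":  Fraction(4, 3),
    "A":  Fraction(3, 2),
}

def kantele_frequencies(tonic_hz=293.66):
    """Return string frequencies (Hz) for a five-string kantele in D major.

    tonic_hz is the frequency chosen for the lowest (D) string; 293.66 Hz
    (equal-tempered D4) is only an illustrative reference.
    """
    return {note: float(tonic_hz * ratio) for note, ratio in RATIOS.items()}

if __name__ == "__main__":
    for note, freq in kantele_frequencies().items():
        print(f"{note:>2}: {freq:7.2f} Hz")
```

Lowering the F♯ string by a semitone (to a 6/5 ratio above the tonic) gives the corresponding D minor tuning, which is what the semitone lever described above accomplishes mechanically.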
Modern concert kantele can have up to 40 strings. The playing positions of the concert kantele and small kantele are reversed: to the player of a small kantele the longest, lowest-pitched strings are farthest from the body, whilst on a concert kantele this side of the instrument is nearest and the short, high-pitched strings farthest away. Concert versions have a switch mechanism (similar to the semitone levers on a modern folk harp) for making sharps and flats, an innovation introduced by Paul Salminen in the 1920s.
The kantele has a distinctive bell-like sound. The Finnish kantele generally has a diatonic tuning, though small kanteles with between 5 and 15 strings are often tuned to a gapped mode missing a seventh, with the lowest-pitched strings tuned a fourth below the tonic as a drone. Players hold the kantele in their laps or on a small table. There are two main playing techniques: plucking the strings with the fingers, or strumming across unstopped strings (sometimes with a matchstick). The small and concert kanteles have different, though related, repertoires.
There have been strong developments for the kantele in Finland since the mid-20th century, starting with the efforts of modern players such as Martti Pokela in the 1950s and 1960s. Education in the instrument extends from schools and music institutes up to conservatories and the Sibelius Academy, the only music university in Finland. Artistic doctoral studies combining traditional, Western classical and electronic music have even been completed at the Academy. A Finnish luthiery, Koistinen Kantele, has also developed an electric kantele, which employs pick-ups similar to those on electric guitars. It has gained popularity amongst Finnish heavy metal musicians such as Amorphis.
In Finland's national epic, Kalevala, the mage Väinämöinen makes the first kantele from the jawbone of a giant pike and a few hairs from Hiisi's stallion. The music it makes draws all the forest creatures near to wonder at its beauty.
Later, after losing and greatly grieving over his kantele, Väinämöinen makes another one from a birch, strung with the hair of a willing maiden, and its magic proves equally profound. It is the gift the eternal mage leaves behind when he departs Kaleva at the advent of Christianity.
|
https://en.wikipedia.org/wiki?curid=17035
|
Kumquat
Kumquats (or cumquats in Australian English, ; Citrus japonica) are a group of small fruit-bearing trees in the flowering plant family Rutaceae. They were previously classified as forming the now-historical genus "Fortunella", or placed within "Citrus sensu lato".
The edible fruit closely resembles the orange ("Citrus sinensis") in color and shape but is much smaller, being approximately the size of a large olive. Kumquat is a fairly cold-hardy citrus.
The English name "kumquat" derives from the Cantonese "kamkwat" ("kam" meaning "golden"; "kwat" meaning "orange").
The kumquat plant is native to China. The earliest historical reference to kumquats appears in Imperial literature from the 12th century. They have long been cultivated in India, Japan, Taiwan, the Philippines, and Southeast Asia. They were introduced to Europe in 1846 by Robert Fortune, collector for the London Horticultural Society, and shortly thereafter were taken to North America.
They are slow-growing evergreen shrubs or short trees that stand tall, with dense branches, sometimes bearing small thorns. The leaves are dark glossy green, and the flowers are white, similar to other citrus flowers, and can be borne singly or clustered within the leaves' axils. Depending on size, the kumquat tree can produce hundreds or even thousands of fruits each year.
Citrus taxonomy is complicated and controversial. Different systems place different types of kumquat in different species, or unite them into as few as two species. Historically they were viewed as falling within the genus "Citrus", but the Swingle system of citrus taxonomy elevated them to their own genus, "Fortunella". Recent phylogenetic analysis suggests they do indeed fall within "Citrus". Swingle divided the kumquats into two subgenera: "Protocitrus", containing the primitive Hong Kong kumquat, and "Eufortunella", comprising the round, oval and Meiwa kumquats; to these Tanaka added two others, the Malayan kumquat and the Jiangsu kumquat. Chromosomal analysis suggested that Swingle's "Eufortunella" represent a single 'true' species, while Tanaka's additional species were revealed to be likely hybrids of "Fortunella" with other "Citrus", so-called "Citrofortunella". Recent genomic analysis concluded there was only one true species of kumquat, but the analysis did not include the Hong Kong variety seen as a distinct species in all earlier analyses.
The round kumquat, Marumi kumquat or Morgani kumquat (retaining the name "Citrus japonica" or "Fortunella japonica" when kumquats are divided into multiple species), is an evergreen tree that produces edible golden-yellow fruit. The fruit is small and usually spherical but can be oval shaped. The peel has a sweet flavor, but the fruit has a distinctly sour center. The fruit can be eaten cooked but is mainly used to make marmalades, jellies, and other spreads. It is grown in Luxembourg and can be used in bonsai cultivation. The plant symbolizes good luck in China and other Asian countries, where it is often kept as a houseplant and given as a gift during the Lunar New Year. Round kumquats are more commonly cultivated than other species due to their high cold tolerance.
The oval kumquat or Nagami kumquat ("Citrus margarita" or "Fortunella margarita" if dividing "Eufortunella" kumquats into separate species) is ovoid in shape and typically eaten whole, skin and all. The inside is still quite sour, but the skin has a very sweet flavour, so when eaten together an unusual tart-sweet, refreshing flavour is produced. The fruit ripens mid- to late winter and always crops very heavily, creating a spectacular display against the dark green foliage. The tree tends to be much smaller and dwarf in nature, making it ideal for pots and occasionally bonsai cultivation.
The 'Centennial Variegated' kumquat cultivar arose spontaneously from the oval kumquat. It produces a greater proportion of fruit to peel than the oval kumquat, and the fruit are rounder and sometimes necked. Fruit are distinguishable by their variegation in color, exhibiting bright green and yellow stripes, and by its lack of thorns.
The Meiwa kumquat ("Citrus crassifolia" or "Fortunella crassifolia") was brought to Japan from China at the end of the 19th century. It has seedy oval fruits and thick leaves, and was characterized by Swingle as a distinct species. Its fruit is typically eaten skin and all.
The Hong Kong kumquat ("Citrus hindsii" or "Fortunella hindsii") produces only pea-sized bitter and acidic fruit with very little pulp and large seeds. It is primarily grown as an ornamental plant, though it is also found in southern China growing in the wild. Not only is it the most primitive of the kumquats, but with the kumquats being the most primitive citrus, Swingle described it as the closest to the ancestral species from which all citrus evolved. While the wild Hong Kong kumquat is tetraploid, there is a commercial diploid variety, the Golden Bean kumquat with slightly larger fruit.
The Jiangsu kumquat or Fukushu kumquat ("Citrus obovata" or "Fortunella obovata") bears edible fruit that can be eaten raw, as well as made into jelly and marmalade. The fruit can be round or bell-shaped and is bright orange when fully ripe. The plant can be distinguished from other kumquats by its distinctly round leaves. It is typically grown for its edible fruit and as an ornamental plant; it cannot withstand frost, however, unlike the round kumquat which has a high cold tolerance. These kumquats are often seen near the Yuvraj section of the Nayak Province. Chromosomal analysis showed this variety to be a likely hybrid.
The Malayan kumquat ("Fortunella polyandra" or Tanaka's "Fortunella swinglei" - in "Citrus" it would be "C. x swinglei"), from the Malay Peninsula where it is known as the "hedge lime" ("limau pagar"), is another hybrid, perhaps a limequat. It has a thin peel on larger fruit compared to other kumquats.
Kumquats are much hardier than other citrus plants such as oranges. The Nagami kumquat requires a hot summer, ranging from 25 °C to 38 °C (77 °F to 100 °F), but can withstand frost down to about without injury.
The fruit is usually consumed whole with its peel and is sometimes used in fruit salads.
In cultivation in the UK, "Citrus japonica" has gained the Royal Horticultural Society’s Award of Garden Merit (confirmed 2017).
Kumquats do not grow well from seeds and so are vegetatively propagated by using rootstock of another citrus fruit, air layering, or cuttings.
The essential oil of the kumquat peel contains much of the aroma of the fruit, and is composed principally of limonene, which makes up around 93% of the total. Besides limonene and alpha-pinene (0.34%), both of which are considered monoterpenes, the oil is unusually rich (0.38% total) in sesquiterpenes such as α-bergamotene (0.021%), caryophyllene (0.18%), α-humulene (0.07%) and α-muurolene (0.06%), and these contribute to the spicy and woody flavor of the fruit. Carbonyl compounds make up much of the remainder, and these are responsible for much of the distinctive flavor. These compounds include esters such as isopropyl propanoate (1.8%) and terpinyl acetate (1.26%); ketones such as carvone (0.175%); and a range of aldehydes such as citronellal (0.6%) and 2-methylundecanal. Other oxygenated compounds include nerol (0.22%) and trans-linalool oxide (0.15%).
Hybrid forms of the kumquat include the following:
|
https://en.wikipedia.org/wiki?curid=17037
|
Kyanite
Kyanite is typically a blue aluminosilicate mineral, usually found in aluminium-rich metamorphic pegmatites and/or sedimentary rock. Kyanite in metamorphic rocks generally indicates pressures higher than four kilobars. It is commonly found in quartz.
Although potentially stable at lower pressure and low temperature, the activity of water is usually high enough under such conditions that it is replaced by hydrous aluminosilicates such as muscovite, pyrophyllite, or kaolinite. Kyanite is also known as disthene, rhaeticite and cyanite.
Kyanite is a member of the aluminosilicate series, which also includes the polymorphs andalusite and sillimanite. Kyanite is strongly anisotropic, in that its hardness varies depending on crystallographic direction; this anisotropism can be considered an identifying characteristic of the mineral.
At temperatures above 1100 °C kyanite decomposes into mullite and vitreous silica via the following reaction: 3(Al2O3·SiO2) → 3Al2O3·2SiO2 + SiO2. This transformation results in an expansion.
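Writing the minerals with molecular formulas makes the mass balance of this decomposition explicit: kyanite is Al2SiO5 (i.e. Al2O3·SiO2) and mullite (3Al2O3·2SiO2) corresponds to Al6Si2O13. The LaTeX rendering below is simply a restatement of the reaction given above:

```latex
3\,\mathrm{Al_2SiO_5} \;\longrightarrow\; \mathrm{Al_6Si_2O_{13}} + \mathrm{SiO_2}
\qquad \text{(kyanite} \rightarrow \text{mullite} + \text{vitreous silica)}
```

Both sides balance at 6 Al, 3 Si and 15 O atoms.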
Its name comes from the same origin as that of the color cyan, being derived from the Ancient Greek word κύανος. This is generally rendered into English as "kyanos" or "kuanos" and means "dark blue".
Kyanite is used primarily in refractory and ceramic products, including porcelain plumbing and dishware. It is also used in electronics, electrical insulators and abrasives.
Kyanite has been used as a semiprecious gemstone, which may display cat's eye chatoyancy, though this use is limited by its anisotropism and perfect cleavage. Color varieties include recently discovered orange kyanite from Tanzania. The orange color is due to inclusion of small amounts of manganese (Mn3+) in the structure.
Kyanite is one of the index minerals that are used to estimate the temperature, depth, and pressure at which a rock undergoes metamorphism.
Kyanite's elongated, columnar crystals are usually a good first indication of the mineral, as is its color (when the specimen is blue). Associated minerals are useful as well, especially the presence of kyanite's polymorphs or of staurolite, which occur frequently with it. However, the most useful characteristic in identifying kyanite is its anisotropism. If one suspects a specimen to be kyanite, verifying that it has two distinctly different hardness values on perpendicular axes is key to identification; it has a hardness of 5.5 parallel to {001} and 7 parallel to {100}.
Kyanite occurs in gneiss, schist, pegmatite, and quartz veins resulting from high pressure regional metamorphism of principally pelitic rocks. It occurs as detrital grains in sedimentary rocks.
It occurs associated with staurolite, andalusite, sillimanite, talc, hornblende, gedrite, mullite and corundum.
Kyanite occurs in Manhattan schist, which formed under extreme pressure during the collision of the two landmasses that formed the supercontinent Pangaea.
|
https://en.wikipedia.org/wiki?curid=17038
|
Kansas–Nebraska Act
The Kansas–Nebraska Act of 1854 was a territorial organic act that created the territories of Kansas and Nebraska. It was drafted by Democratic Senator Stephen A. Douglas, passed by the 33rd United States Congress, and signed into law by President Franklin Pierce. Douglas introduced the bill with the goal of opening up new lands to development and facilitating construction of a transcontinental railroad, but the Kansas–Nebraska Act is most notable for effectively repealing the Missouri Compromise, stoking national tensions over slavery, and contributing to a series of armed conflicts known as "Bleeding Kansas".
The United States had acquired vast amounts of sparsely settled land in the 1803 Louisiana Purchase, and since the 1840s Douglas had sought to establish a territorial government in a portion of the Louisiana Purchase that was still unorganized. Douglas's efforts were stymied by Senator David Rice Atchison and other Southern leaders who refused to allow the creation of territories that banned slavery; slavery would have been banned because the Missouri Compromise outlawed slavery in territory north of latitude 36°30' north. To win the support of Southerners like Atchison, Pierce and Douglas agreed to back the repeal of the Missouri Compromise, with the status of slavery instead decided on the basis of "popular sovereignty." Under popular sovereignty, the citizens of each territory, rather than Congress, would determine whether or not slavery would be allowed.
Douglas's bill to repeal the Missouri Compromise and organize Kansas Territory and Nebraska Territory won approval by a wide margin in the Senate, but faced stronger opposition in the House of Representatives. Though Northern Whigs strongly opposed the bill, the bill passed the House with the support of almost all Southerners and some Northern Democrats. After the passage of the act, pro- and anti-slavery elements flooded into Kansas with the goal of establishing a population that would vote for or against slavery, resulting in a series of armed conflicts known as "Bleeding Kansas". Douglas and Pierce hoped that popular sovereignty would help bring an end to the national debate over slavery, but the Kansas–Nebraska Act outraged many Northerners, giving rise to the anti-slavery Republican Party. Ongoing tensions over slavery would eventually lead to the American Civil War.
In his 1853 inaugural address, President Franklin Pierce expressed hope that the Compromise of 1850 had settled the debate over the issue of slavery in the territories. The compromise had allowed slavery in Utah Territory and New Mexico Territory, which had been acquired in the Mexican–American War. The Missouri Compromise, which banned slavery in territories north of the 36°30′ parallel, remained in place for the other U.S. territories acquired in the Louisiana Purchase, including a vast unorganized territory often referred to as "Nebraska". As settlers poured into the unorganized territory, and commercial and political interests called for a transcontinental railroad through the region, pressure mounted for the organization of the eastern parts of the unorganized territory. Though organization of the territory was required to develop the region, an organization bill threatened to re-open the contentious debates over slavery in the territories that had taken place during and after the Mexican–American War.
The topic of a transcontinental railroad had been discussed since the 1840s. While there were debates over the specifics, especially the route to be taken, there was a public consensus that such a railroad should be built by private interests, financed by public land grants. In 1845, Stephen A. Douglas, then serving in his first term in the U.S. House of Representatives, had submitted an unsuccessful plan to organize the Nebraska Territory formally, as the first step in building a railroad with its eastern terminus in Chicago. Railroad proposals were debated in all subsequent sessions of Congress with cities such as Chicago, St. Louis, Quincy, Memphis, and New Orleans competing to be the jumping-off point for the construction.
Several proposals in late 1852 and early 1853 had strong support, but they failed because of disputes over whether the railroad would follow a northern or a southern route. In early 1853, the House of Representatives passed a bill 107 to 49 to organize the Nebraska Territory in the land west of Iowa and Missouri. In March, the bill moved to the Senate Committee on Territories, which was headed by Douglas. Missouri Senator David Atchison announced that he would support the Nebraska proposal only if slavery were to be permitted. While the bill was silent on this issue, slavery would have been prohibited under the Missouri Compromise in territory north of 36°30' latitude and west of the Mississippi River. Other Southern senators were as inflexible as Atchison. By a vote of 23 to 17, the Senate voted to table the motion, with every senator from the states south of Missouri voting to table.
During the Senate adjournment, the issues of the railroad and the repeal of the Missouri Compromise became entangled in Missouri politics, as Atchison campaigned for re-election against the forces of Thomas Hart Benton. Atchison was maneuvered into choosing between antagonizing the state's railroad interests or its slaveholders. Finally, he took the position that he would rather see Nebraska "sink in hell" before he would allow it to be overrun by free soilers.
Members of Congress then generally found lodging in boarding houses when they were in the nation's capital to perform their legislative duties. Atchison lodged in an F Street house shared by the leading Southerners in Congress; he himself was the Senate's President pro tempore. His housemates included Robert T. Hunter (from Virginia, chairman of the Finance Committee), James Mason (from Virginia, chairman of the Foreign Affairs Committee) and Andrew P. Butler (from South Carolina, chairman of the Judiciary Committee). When Congress reconvened on December 5, 1853, the group, termed the F Street Mess, along with Virginian William O. Goode, formed the nucleus that would insist on slaveholder equality in Nebraska. Douglas was aware of the group's opinions and power and knew that he needed to address its concerns. Douglas was also a fervent believer in popular sovereignty – the policy of letting the voters, almost exclusively white males, of a territory decide whether or not slavery should exist in it.
Iowa Senator Augustus C. Dodge immediately reintroduced the same legislation to organize Nebraska that had stalled in the previous session; it was referred to Douglas's committee on December 14. Douglas, hoping to achieve the support of the Southerners, publicly announced that the same principle that had been established in the Compromise of 1850 should apply in Nebraska.
In the Compromise of 1850, Utah and New Mexico Territories had been organized without any restrictions on slavery, and many supporters of Douglas argued that the compromise had already superseded the Missouri Compromise. The territories had, however, been given the authority to decide for themselves whether they would apply for statehood as free or slave states whenever they chose to apply. Unlike Nebraska, though, the two territories had not been part of the Louisiana Purchase and had arguably never been subject to the Missouri Compromise.
The bill was reported to the main body of the Senate on January 4, 1854. It had been modified by Douglas, who had also authored the New Mexico Territory and Utah Territory Acts, to mirror the language from the Compromise of 1850. In the bill, a vast new Nebraska Territory was created to extend from Kansas north all the way to the 49th parallel, the US–Canada border. A large portion of Nebraska Territory would soon be split off into Dakota Territory (1861), and smaller portions transferred to Colorado Territory (1861) and Idaho Territory (1863) before the balance of the land became the State of Nebraska in 1867.
Furthermore, any decisions on slavery in the new lands were to be made "when admitted as a state or states, the said territory, or any portion of the same, shall be received into the Union, with or without slavery, as their constitution may prescribe at the time of their admission." In a report accompanying the bill, Douglas's committee wrote that the Utah and New Mexico Acts:
The report compared the situation in New Mexico and Utah with the situation in Nebraska. In the first instance, many had argued that slavery had previously been prohibited under Mexican law, just as it was prohibited in Nebraska under the Missouri Compromise. Just as the creation of New Mexico and Utah territories had not ruled on the validity of Mexican law on the acquired territory, the Nebraska bill was neither "affirming or repealing ... the Missouri act." In other words, popular sovereignty was being established by ignoring, rather than addressing, the problem presented by the Missouri Compromise.
Douglas's attempt to finesse his way around the Missouri Compromise did not work. Kentucky Whig Archibald Dixon believed that unless the Missouri Compromise was explicitly repealed, slaveholders would be reluctant to move to the new territory until slavery was actually approved by the settlers, who would most likely oppose slavery. On January 16 Dixon surprised Douglas by introducing an amendment that would repeal the section of the Missouri Compromise that prohibited slavery north of the 36°30' parallel. Douglas met privately with Dixon and in the end, despite his misgivings on Northern reaction, agreed to accept Dixon's arguments.
From a political standpoint, the Whig Party had been in decline in the South because of the effectiveness with which it had been hammered by the Democratic Party over slavery. The Southern Whigs hoped that by seizing the initiative on this issue, they would be identified as strong defenders of slavery. Many Northern Whigs broke with them over the Act, and the party was eventually destroyed by the division over the issue.
A similar amendment was offered in the House by Philip Phillips of Alabama. With the encouragement of the "F Street Mess", Douglas met with them and Phillips to ensure that the momentum for passing the bill remained with the Democratic Party. They arranged to meet with President Franklin Pierce to ensure that the issue would be declared a test of party loyalty within the Democratic Party.
Pierce had barely mentioned Nebraska in his State of the Union message the previous month and was not enthusiastic about the implications of repealing the Missouri Compromise. Close advisors Senator Lewis Cass, a proponent of popular sovereignty as far back as 1848 as an alternative to the Wilmot Proviso, and Secretary of State William L. Marcy both told Pierce that repeal would create serious political problems. The full cabinet met and only Secretary of War Jefferson Davis and Secretary of Navy James C. Dobbin supported repeal. Instead the president and cabinet submitted to Douglas an alternative plan that would have sought out a judicial ruling on the constitutionality of the Missouri Compromise. Both Pierce and Attorney General Caleb Cushing believed that the Supreme Court would find it unconstitutional.
Douglas's committee met later that night. Douglas was agreeable to the proposal, but the Atchison group was not. Determined to offer the repeal to Congress on January 23 but reluctant to act without Pierce's commitment, Douglas arranged through Davis to meet with Pierce on January 22 even though it was a Sunday, when Pierce generally refrained from conducting any business. Douglas was accompanied at the meeting by Atchison, Hunter, Phillips, and John C. Breckinridge of Kentucky.
Douglas and Atchison first met alone with Pierce before the whole group convened. Pierce was persuaded to support repeal, and at Douglas' insistence, Pierce provided a written draft, asserting that the Missouri Compromise had been made inoperative by the principles of the Compromise of 1850. Pierce later informed his cabinet, which concurred in the change of direction. The "Washington Union", the communications organ for the administration, wrote on January 24 that support for the bill would be "a test of Democratic orthodoxy."
On January 23, a revised bill was introduced in the Senate that repealed the Missouri Compromise and divided the territory into two territories: Kansas and Nebraska. The division was the result of concerns expressed by settlers already in Nebraska as well as the senators from Iowa, who were concerned with the location of the territory's seat of government if such a large territory were created. Existing language to affirm the application of all other laws of the United States in the new territory was supplemented by the language agreed on with Pierce: "except the eighth section of the act preparatory to the admission of Missouri into the Union, approved March 6, 1820, which was superseded by the legislation of 1850, commonly called the compromise measures, and is declared inoperative." Identical legislation was soon introduced in the House.
Historian Allan Nevins wrote that "two interconnected battles began to rage, one in Congress and one in the country at large: each fought with a pertinacity, bitterness, and rancor unknown even in Wilmot Proviso days." In Congress, the freesoilers were at a distinct disadvantage. The Democrats held large majorities in each house, and Douglas, "a ferocious fighter, the fiercest, most ruthless, and most unscrupulous that Congress had perhaps ever known" led a tightly disciplined party. It was in the nation at large that the opponents of Nebraska hoped to achieve a moral victory. "The New York Times", which had earlier supported Pierce, predicted that this would be the final straw for Northern supporters of the slavery forces and would "create a deep-seated, intense, and ineradicable hatred of the institution which will crush its political power, at all hazards, and at any cost."
The day after the bill was reintroduced, two Ohioans, Representative Joshua Giddings and Senator Salmon P. Chase, published a free-soil response, the "Appeal of the Independent Democrats in Congress to the People of the United States".
Douglas took the appeal personally and responded in Congress, when the debate was opened on January 30 before a full House and packed gallery. Douglas biographer Robert W. Johanssen described part of the speech:
The debate would continue for four months, as many Anti-Nebraska political rallies were held across the north. Douglas remained the main advocate for the bill while Chase, William Seward, of New York, and Charles Sumner, of Massachusetts, led the opposition. The "New-York Tribune" wrote on March 2:
The debate in the Senate concluded on March 4, 1854, when Douglas, beginning near midnight on March 3, made a five-and-a-half-hour speech. The final vote in favor of passage was 37 to 14. Free-state senators voted 14 to 12 in favor, and slave-state senators supported the bill 23 to 2.
On March 21, 1854, as a delaying tactic in the House of Representatives, the legislation was referred by a vote of 110 to 95 to the Committee of the Whole, where it was the last item on the calendar. Realizing from the vote to stall that the act faced an uphill struggle, the Pierce administration made it clear to all Democrats that passage of the bill was essential to the party and would dictate how federal patronage would be handled. Davis and Cushing, from Massachusetts, along with Douglas, spearheaded the partisan efforts. By the end of April, Douglas believed that there were enough votes to pass the bill. The House leadership then began a series of roll call votes in which legislation ahead of the Kansas–Nebraska Act was called to the floor and tabled without debate.
Thomas Hart Benton was among those speaking forcefully against the measure. On April 25, in a House speech that biographer William Nisbet Chambers called "long, passionate, historical, [and] polemical," Benton attacked the repeal of the Missouri Compromise, which he "had stood upon ... above thirty years, and intended to stand upon it to the end—solitary and alone, if need be; but preferring company." The speech was distributed afterwards as a pamphlet when opposition to the act moved outside the walls of Congress.
It was not until May 8 that the debate began in the House. The debate was even more intense than in the Senate. While it seemed to be a foregone conclusion that the bill would pass, the opponents went all out to fight it. Historian Michael Morrison wrote:
The floor debate was handled by Alexander Stephens, of Georgia, who insisted that the Missouri Compromise had never been a true compromise but had been imposed on the South. He argued that the issue was whether republican principles, "that the citizens of every distinct community or State should have the right to govern themselves in their domestic matters as they please," would be honored.
The final House vote in favor of the bill was 113 to 100. Northern Democrats supported the bill 44 to 42, but all 45 northern Whigs opposed it. Southern Democrats voted in favor by 57 to 2, and southern Whigs supported it by 12 to 7.
President Franklin Pierce signed the Kansas–Nebraska Act into law on May 30, 1854.
Immediate responses to the passage of the Kansas–Nebraska Act fell into two classes. The less common response was held by Douglas's supporters, who believed that the bill would withdraw "the question of slavery from the halls of Congress and the political arena, committing it to the arbitration of those who were immediately interested in, and alone responsible for, its consequences." In other words, they believed that the Act would leave decisions about slavery in the hands of the people, rather than under the carefully balanced jurisdiction of the Federal government. The far more common response was one of outrage, interpreting Douglas's actions as part of "an atrocious plot." Especially in the eyes of northerners, the Kansas–Nebraska Act was aggression and an attack on the power and beliefs of free states. The response led to calls for public action against the South, as seen in broadsides that advertised gatherings in northern states to discuss publicly what to do about the presumption of the Act.
Douglas and former Illinois Representative Abraham Lincoln aired their disagreement over the Kansas–Nebraska Act in seven public speeches during September and October 1854. Lincoln gave his most comprehensive argument against slavery and the provisions of the act in Peoria, Illinois, on October 16, in the Peoria Speech. He and Douglas both spoke to the large audience, Douglas first and Lincoln in response, two hours later. Lincoln's three-hour speech presented thorough moral, legal, and economic arguments against slavery and raised Lincoln's political profile for the first time. The speeches set the stage for the Lincoln-Douglas debates four years later, when Lincoln sought Douglas's Senate seat.
Bleeding Kansas, Bloody Kansas, or the Border War was a series of violent political confrontations in the United States between 1854 and 1861 involving anti-slavery "Free-Staters" and pro-slavery "Border Ruffian", or "Southern" elements in Kansas. At the heart of the conflict was the question of whether Kansas would allow or outlaw slavery, and thus enter the Union as a slave state or a free state.
Pro-slavery settlers came to Kansas mainly from neighboring Missouri. Their influence in territorial elections was often bolstered by resident Missourians who crossed into Kansas solely for the purpose of voting in such ballots. They formed groups such as the Blue Lodges and were dubbed "border ruffians", a term coined by opponent and abolitionist Horace Greeley. Abolitionist settlers, known as "jayhawkers," moved from the East expressly to make Kansas a free state. A clash between the opposing sides was inevitable.
Successive territorial governors, usually sympathetic to slavery, attempted to maintain the peace. The territorial capital of Lecompton, the target of much agitation, became such a hostile environment for Free-Staters that they set up their own, unofficial legislature at Topeka.
John Brown and his sons gained notoriety in the fight against slavery by murdering five pro-slavery farmers with a broadsword in the Pottawatomie massacre. Brown also helped defend a few dozen Free-State supporters from several hundred angry pro-slavery supporters at Osawatomie.
Prior to the organization of the Kansas–Nebraska territory in 1854, the Kansas and Nebraska Territories were consolidated as part of the Indian Territory. Throughout the 1830s, large-scale relocations of Native American tribes to the Indian Territory took place, with many Southeastern nations removed to present-day Oklahoma, a process ordered by the Indian Removal Act of 1830 and known as the Trail of Tears, and many Midwestern nations removed by way of treaty to present-day Kansas. Among the latter were the Shawnee, Delaware, Kickapoo, Kaskaskia and Peoria, Ioway, and Miami. The passing of the Kansas–Nebraska Act came into direct conflict with the relocations. White American settlers from both the free-soil North and pro-slavery South flooded the Northern Indian Territory, hoping to influence the vote on slavery that would come following the admittance of Kansas and, to a lesser extent, Nebraska to the United States.
In order to avoid and/or alleviate the reservation-settlement problem, further treaty negotiations were attempted with the tribes of Kansas and Nebraska. In 1854 alone, the U.S. agreed to acquire lands in Kansas or Nebraska from several tribes including the Kickapoo, Delaware, Omaha, Shawnee, Otoe and Missouri, Miami, and Kaskaskia and Peoria. In exchange for their land cessions, the tribes largely received small reservations in the Indian Territory of Oklahoma or Kansas in some cases.
For the nations that remained in Kansas beyond 1854, the Kansas–Nebraska Act introduced a host of other problems. In 1855, white "squatters" built the city of Leavenworth on the Delaware reservation without the consent of either the Delaware or the US government. When Commissioner of Indian Affairs George Manypenny called for military support in removing the squatters, both the military and the squatters refused to comply, undermining both Federal authority and the treaties in place with the Delaware. In addition to the violations of treaty agreements, other promises made were not being kept. Construction and infrastructure improvement projects promised in nearly every treaty, for example, took a great deal longer than expected. Beyond that, however, the most damaging violation by White American settlers was the mistreatment of Native Americans and their properties. Personal maltreatment, stolen property, and deforestation have all been cited. Furthermore, the squatters' premature and illegal settlement of the Kansas Territory jeopardized the value of the land and, with it, the future of the Indian tribes living on it. Because treaties were land cessions and purchases, the value of the land handed over to the Federal government was critical to the payment received by a given Native nation. Deforestation, destruction of property, and other general injuries to the land lowered the value of the territories that were ceded by the Kansas Territory tribes.
Manypenny's 1856 "Report on Indian Affairs" explained the devastating effect on Indian populations of the diseases White settlers brought to Kansas. Without providing statistics, Colonel Alfred Cumming, the Indian Affairs superintendent for the area, reported more deaths than births in most tribes of the region. While noting intemperance, or alcoholism, as a leading cause of death, Cumming specifically cited cholera, smallpox, and measles, none of which the Native Americans were able to treat. The Osage people exemplify the epidemics' disastrous effects: they lost an estimated 1300 lives to scurvy, measles, smallpox, and scrofula between 1852 and 1856, contributing, in part, to a massive decline in population from 8000 in 1850 to just 3500 in 1860. The Osage had already encountered epidemics associated with relocation and white settlement. The initial removal acts in the 1830s brought both White American settlers and foreign Native American tribes to the Great Plains and into contact with the Osage people. Between 1829 and 1843, influenza, cholera, and smallpox killed an estimated 1242 Osage Indians, resulting in a population recession of roughly 20 percent between 1830 and 1850.
The Kansas–Nebraska Act divided the nation and pointed it toward civil war. Congressional Democrats suffered huge losses in the mid-term elections of 1854, as voters provided support to a wide array of new parties opposed to the Democrats and the Kansas-Nebraska Act. By 1855, opponents of the Kansas–Nebraska Act had coalesced into the Republican Party, which replaced the Whigs as the main opposition to the Democrats in the Northern states, although some Democratic opponents instead joined the nativist American Party. Pierce declared his full opposition to the Republican Party, decrying what he saw as its anti-southern stance, but his perceived pro-Southern actions in Kansas continued to inflame Northern anger.
Partly due to the unpopularity of the Kansas–Nebraska Act, Pierce lost his bid for re-nomination at the 1856 Democratic National Convention to James Buchanan. Pierce remains the only elected president who actively sought reelection but was denied his party's nomination for a second term. Republicans nominated John C. Frémont in the 1856 presidential election and campaigned on "Bleeding Kansas" and the unpopularity of the Kansas–Nebraska Act. Buchanan won the election, but Frémont carried a majority of the free states. Two days after Buchanan's inauguration, Chief Justice Roger Taney delivered the "Dred Scott" decision, which asserted that Congress had no constitutional power to exclude slavery in the territories. Douglas continued to support the doctrine of popular sovereignty, but Buchanan insisted that Democrats respect the "Dred Scott" decision and its repudiation of federal interference with slavery in the territories.
Guerrilla warfare in Kansas continued throughout Buchanan's presidency and extended into the 1860s. Buchanan attempted to admit Kansas as a state under the pro-slavery Lecompton Constitution, but Kansas voters rejected that constitution in an August 1858 referendum. Anti-slavery delegates won a majority of the elections to the 1859 Kansas constitutional convention, and Kansas won admission as a free state under the anti-slavery Wyandotte Constitution in the final months of Buchanan's presidency.
|
https://en.wikipedia.org/wiki?curid=17042
|
Kaleidoscope
A kaleidoscope is an optical instrument with two or more reflecting surfaces tilted to one another at an angle, so that one or more (parts of) objects on one end of the mirrors are seen as a regular symmetrical pattern when viewed from the other end, due to repeated reflection. The reflectors (or mirrors) are usually enclosed in a tube, often containing on one end a cell with loose, colored pieces of glass or other transparent (and/or opaque) materials to be reflected into the viewed pattern. Rotation of the cell causes motion of the materials, resulting in an ever-changing view being presented.
Coined by its Scottish inventor David Brewster, "kaleidoscope" is derived from the Ancient Greek words καλός ("kalos"), "beautiful, beauty"; εἶδος ("eidos"), "that which is seen: form, shape"; and σκοπέω ("skopeō"), "to look to, to examine"; hence "observation of beautiful forms." The name was first published in the patent granted on July 10, 1817.
Multiple reflection by two or more reflecting surfaces has been known since antiquity and was described as such by Giambattista della Porta in his "Magia Naturalis" (1558–1589). In 1646 Athanasius Kircher described an experiment with a construction of two mirrors, which could be opened and closed like a book and positioned in various angles, showing regular polygon figures consisting of reflected aliquot sectors of 360°. Mr. Bradley's "New Improvements in Planting and Gardening" (1717) described a similar construction to be placed on geometrical drawings to show an image with multiplied reflection. However, an optimal configuration that produces the full effects of the kaleidoscope was not recorded before 1815.
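Kircher's "aliquot sectors of 360°" can be stated as simple arithmetic: two mirrors meeting at an angle that divides 360° evenly show the object plus its reflections as 360° divided by that angle wedge-shaped sectors. The short Python sketch below, which assumes ideal lossless mirrors, illustrates the count:

```python
def kaleidoscope_sectors(mirror_angle_deg):
    """Number of wedge-shaped sectors (object plus reflections) produced by
    two mirrors meeting at the given angle, assuming the angle divides 360
    degrees evenly and the mirrors are ideal."""
    sectors = 360 / mirror_angle_deg
    if abs(sectors - round(sectors)) > 1e-9:
        raise ValueError("angle must divide 360 evenly for a regular pattern")
    return round(sectors)

if __name__ == "__main__":
    for angle in (90, 60, 45, 30):
        print(f"{angle:>3} degree mirrors -> {kaleidoscope_sectors(angle)} sectors")
```

For example, mirrors set at 60° yield six sectors, the familiar six-fold pattern.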
In 1814 Sir David Brewster conducted experiments on light polarization by successive reflections between plates of glass and first noted "the circular arrangement of the images of a candle round a center, and the multiplication of the sectors formed by the extremities of the plates of glass". He forgot about it, but noticed a more impressive version of the effect during further experiments in February 1815. A while later he was impressed by the multiplied reflection of a bit of cement that was pressed through at the end of a triangular glass trough, which appeared more regular and almost perfectly symmetrical in comparison to the reflected objects that had been situated further away from the reflecting plates in earlier experiments. This triggered more experiments to find the conditions that produced the most beautiful and perfectly symmetrical forms. An early version had pieces of colored glass and other irregular objects fixed permanently and was admired by some Members of the Royal Society of Edinburgh, including Sir George Mackenzie, who predicted its popularity. A version followed in which some of the objects and pieces of glass could move when the tube was rotated. The last step, regarded as most important by Brewster, was to place the reflecting panes in a draw tube with a concave lens to distinctly introduce surrounding objects into the reflected pattern.
Brewster thought his instrument to be of great value in "all the ornamental arts" as a device that creates an "infinity of patterns". Artists could accurately delineate the produced figures of the kaleidoscope by means of the solar microscope (a type of camera obscura device), magic lantern or camera lucida. Brewster believed it would at the same time become a popular instrument "for the purposes of rational amusement". He decided to apply for a patent. British patent no. 4136 "for a new Optical Instrument called "The Kaleidoscope" for exhibiting and creating beautiful Forms and Patterns of great use in all the ornamental Arts" was granted in July 1817. Unfortunately the manufacturer originally engaged to produce the product had shown one of the patent instruments to some of the London opticians to see if he could get orders from them. Soon the instrument was copied and marketed before the manufacturer had prepared any number of kaleidoscopes for sale. An estimated two hundred thousand kaleidoscopes sold in London and Paris in just three months. Brewster figured at most a thousand of these were authorized copies that were constructed correctly, while the majority of the others did not give a correct impression of his invention. Because so relatively few people had experienced a proper kaleidoscope or knew how to apply it to ornamental arts, he decided to publicize a treatise on the principles and the correct construction of the kaleidoscope.
It was thought that the patent was reduced (invalidated) in a court of law since its principles were supposedly already known. Brewster stated that the kaleidoscope was different because the particular positions of the object and of the eye played a very important role in producing the beautiful symmetrical forms. Brewster's opinion was shared by several scientists, including James Watt.
Philip Carpenter originally tried to produce his own imitation of the kaleidoscope, but was not satisfied with the results. He decided to offer his services to Brewster as manufacturer. Brewster agreed and Carpenter's models were stamped "sole maker". Realizing that the company could not meet the level of demand, Brewster gained permission from Carpenter in 1818 for the device to be made by other manufacturers. In his 1819 "Treatise on the Kaleidoscope" Brewster listed more than a dozen manufacturers/sellers of patent kaleidoscopes. Carpenter's company would keep on selling kaleidoscopes for 60 years. H.M. Quackenbush Co. based in upstate New York in the United States was another authorized manufacturer.
In 1987, kaleidoscope artist Thea Marshall, working with the Willamette Science and Technology Center, a science museum located in Eugene, Oregon, designed and constructed a 1,000-square-foot traveling mathematics and science exhibition, "Kaleidoscopes: Reflections of Science and Art." With funding from the National Science Foundation, and circulated under the auspices of the Smithsonian Institution Traveling Exhibition Service (SITES), the exhibition appeared in 15 science museums over a three-year period, reaching more than one million visitors in the United States and Canada. Interactive exhibit modules enabled visitors to better understand and appreciate how kaleidoscopes function.
David Brewster defined several variables in his patent and publications:
In his patent Brewster perceived two forms for the kaleidoscope:
In his "Treatise on the Kaleidoscope" (1819) he described the basic form with an object cell:
Brewster also developed several variations:
Brewster also imagined another application for the kaleidoscope:
Manufacturers and artists have created kaleidoscopes with a wide variety of materials and in many shapes. A few of these added elements that were not previously described by inventor David Brewster:
Cozy Baker (d. October 19, 2010), founder of the Brewster Kaleidoscope Society, collected kaleidoscopes and wrote books about many of the artists making them from the 1970s through 2001. Her book "Kaleidoscope Artistry" is a limited compendium of kaleidoscope makers, containing pictures of the interior and exterior views of contemporary artworks. Baker is credited with energizing a renaissance in kaleidoscope-making in the US; she spent her life putting kaleidoscope artists and galleries in touch with one another so they would know and encourage each other.
In 1999 a short-lived magazine dedicated to kaleidoscopes, "Kaleidoscope Review", was published, covering artists, collectors, dealers, and events, and including how-to articles. The magazine was created and edited by Brett Bensley, at that time a well-known kaleidoscope artist and resource on kaleidoscope information. It later changed its name to "The New Kaleidoscope Review" and then switched to a video presentation on YouTube, "The Kaleidoscope Maker."
Most kaleidoscopes are mass-produced from inexpensive materials, and intended as children's toys. At the other extreme are handmade pieces that display fine craftsmanship. Craft galleries often carry a few kaleidoscopes, while other enterprises specialize in them, carrying dozens of different types from different artists and craftspeople. Most handmade kaleidoscopes are now made in India, Bangladesh, Japan, the USA, Russia and Italy, following a long tradition of glass craftsmanship in those countries.
|
https://en.wikipedia.org/wiki?curid=17043
|
Common kestrel
The common kestrel ("Falco tinnunculus") is a bird of prey species belonging to the kestrel group of the falcon family Falconidae. It is also known as the European kestrel, Eurasian kestrel, or Old World kestrel. In Britain, where no other kestrel species occurs, it is generally just called "the kestrel".
This species occurs over a large range. It is widespread in Europe, Asia, and Africa, as well as occasionally reaching the east coast of North America. It has colonized a few oceanic islands, but vagrant individuals are generally rare; in the whole of Micronesia for example, the species was only recorded twice each on Guam and Saipan in the Marianas.
Common kestrels measure from head to tail, with a wingspan of . Females are noticeably larger, with the adult male weighing , around on average; the adult female weighs , around on average. They are thus small compared with other birds of prey, but larger than most songbirds. Like the other "Falco" species, they have long wings as well as a distinctive long tail.
Their plumage is mainly light chestnut brown with blackish spots on the upperside and buff with narrow blackish streaks on the underside; the remiges are also blackish. Unlike most raptors, they display sexual colour dimorphism with the male having fewer black spots and streaks, as well as a blue-grey cap and tail. The tail is brown with black bars in females, and has a black tip with a narrow white rim in both sexes. All common kestrels have a prominent black malar stripe like their closest relatives.
The cere, feet, and a narrow ring around the eye are bright yellow; the toenails, bill and iris are dark. Juveniles look like adult females, but the underside streaks are wider; the yellow of their bare parts is paler. Hatchlings are covered in white down feathers, changing to a buff-grey second down coat before they grow their first true plumage.
In the cool-temperate parts of its range, the common kestrel migrates south in winter; otherwise it is sedentary, though juveniles may wander around in search for a good place to settle down as they become mature. It is a diurnal animal of the lowlands and prefers open habitat such as fields, heaths, shrubland and marshland. It does not require woodland to be present as long as there are alternative perching and nesting sites like rocks or buildings. It will thrive in treeless steppe where there are abundant herbaceous plants and shrubs to support a population of prey animals. The common kestrel readily adapts to human settlement, as long as sufficient swathes of vegetation are available, and may even be found in wetlands, moorlands and arid savanna. It is found from the sea to the lower mountain ranges, reaching up to ASL in the hottest tropical parts of its range but only to about in the subtropical climate of the Himalayan foothills.
Globally, this species is not considered threatened by the IUCN. Its stocks were affected by the indiscriminate use of organochlorines and other pesticides in the mid-20th century, but being something of an r-strategist able to multiply quickly under good conditions it was less affected than other birds of prey. The global population has been fluctuating considerably over the years but remains generally stable; it is roughly estimated at 1–2 million pairs or so, about 20% of which are found in Europe. There has been a recent decline in parts of Western Europe such as Ireland. Subspecies "dacotiae" is quite rare, numbering less than 1000 adult birds in 1990, when the ancient western Canarian subspecies "canariensis" numbered about ten times as many birds.
When hunting, the common kestrel characteristically hovers about above the ground, searching for prey, either by flying into the wind or by soaring using ridge lift. Like most birds of prey, common kestrels have keen eyesight enabling them to spot small prey from a distance. Once prey is sighted, the bird makes a short, steep dive toward the target. It can often be found hunting along the sides of roads and motorways. This species is able to see near ultraviolet light, allowing the birds to detect the urine trails around rodent burrows as they shine in an ultraviolet colour in the sunlight. Another favourite (but less conspicuous) hunting technique is to perch a bit above the ground cover, surveying the area; when the bird spots prey animals moving by, it will pounce on them. Common kestrels also prowl a patch of hunting ground in a ground-hugging flight, ambushing prey as they happen across it.
Common kestrels eat almost exclusively mouse-sized mammals: voles, shrews and true mice supply three-quarters or more of the biomass most individuals ingest. On oceanic islands (where mammals are often scarce), small birds (mainly passerines) may make up the bulk of the diet. Elsewhere, birds are an important food only during a few weeks each summer when inexperienced fledglings abound. Other suitably sized vertebrates such as bats, swifts, frogs and lizards are eaten only on rare occasions, although kestrels are more likely to prey on lizards in southern latitudes. In northern latitudes, kestrels have been observed to deliver lizards to their nestlings mostly around midday, and more often with increasing ambient temperature. Seasonally, arthropods may be a main prey item; the invertebrates taken are mainly sizeable insects such as beetles, orthopterans and winged termites, though camel spiders and even earthworms are also eaten.
"F. tinnunculus" requires the equivalent of 4–8 voles a day, depending on energy expenditure (time of the year, amount of hovering, etc.). They have been known to catch several voles in succession and cache some for later consumption. An individual nestling consumes on average 4.2 g/h, equivalent to 67.8 g/d (3–4 voles per day).
The common kestrel starts breeding in spring (or at the start of the dry season in the tropics), i.e. April or May in temperate Eurasia and some time between August and December in the tropics and southern Africa. It is a cavity nester, preferring holes in cliffs, trees or buildings; in built-up areas, common kestrels will often nest on buildings and will reuse the old nests of corvids. The diminutive subspecies "dacotiae", the "sarnicolo" of the eastern Canary Islands, is notable for occasionally nesting in the dried fronds below the tops of palm trees, apparently coexisting with small songbirds which also make their home there. In general, common kestrels will usually tolerate conspecifics nesting nearby, and sometimes a few dozen pairs may be found nesting in a loose colony.
The clutch is normally 3–7 eggs; more eggs may be laid in total, but some will be removed during the laying period, which lasts about 2 days per egg. The eggs are abundantly patterned with brown spots, ranging from a wash that tinges the entire surface buffish white to large, almost-black blotches. Incubation lasts about four weeks, and only the female incubates the eggs. The male is responsible for providing her with food, and continues to do so for some time after hatching. Later, both parents share brooding and hunting duties until the young fledge, after 4–5 weeks. The family stays close together for a few weeks, during which time the young learn how to fend for themselves and hunt prey. The young become sexually mature by the next breeding season. Female kestrel chicks with blacker plumage have been found to have bolder personalities, indicating that even in juvenile birds plumage coloration can act as a status signal.
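Taken together, these figures imply a rough breeding timeline. The short sketch below works it out under the simplifying assumption that incubation begins only once the clutch is complete; real kestrels do not always follow this pattern, so treat the totals as approximate.

```python
# Rough breeding timeline implied by the figures above, assuming incubation
# starts after the last egg is laid (a simplification for illustration).

EGGS = (3, 7)              # clutch size range, from the text
DAYS_PER_EGG = 2           # laying interval, from the text
INCUBATION_DAYS = 28       # "about four weeks"
FLEDGING_DAYS = (28, 35)   # "after 4-5 weeks"

for eggs, fledge in zip(EGGS, FLEDGING_DAYS):
    total = eggs * DAYS_PER_EGG + INCUBATION_DAYS + fledge
    print(f"{eggs} eggs: about {total} days from first egg to fledging")
```

Under these assumptions, a full cycle from the first egg to fledging spans roughly two to two and a half months.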
Data from Britain show nesting pairs bringing up about 2–3 chicks on average, though this includes a considerable rate of total brood failures; in fact, few pairs that do manage to fledge offspring raise fewer than 3 or 4. Compared with their siblings, first-hatched chicks have greater survival and recruitment probability, thought to be because first-hatched chicks attain better body condition while in the nest. Population cycles of prey, particularly voles, have a considerable influence on breeding success. Most common kestrels die before they reach 2 years of age; mortality up until the first birthday may be as high as 70%. Females generally breed at one year of age; some males may take a year longer to reach maturity, as they do in related species. The maximum lifespan, to death from senescence, can nevertheless be 16 years or more; one bird was recorded to have lived almost 24 years.
This species is part of a clade that contains the kestrel species with black malar stripes, a feature which apparently was not present in the most ancestral kestrels. They seem to have radiated in the Gelasian (Late Pliocene, roughly 2.5–2 mya), probably starting in tropical East Africa, as indicated by mtDNA cytochrome "b" sequence data analysis and considerations of biogeography. The common kestrel's closest living relative is apparently the nankeen or Australian kestrel ("F. cenchroides"), which probably derived from ancestral common kestrels settling in Australia and adapting to local conditions less than one million years ago, during the Middle Pleistocene.
The rock kestrel ("F. rupicolus"), previously considered a subspecies, is now treated as a distinct species.
The lesser kestrel ("F. naumanni"), which much resembles a small common kestrel with no black on the upperside except wing and tail tips, is probably not very closely related to the present species, and the American kestrel ("F. sparverius") is apparently not a true kestrel at all. In both of those species, males have much grey in their wings, which does not occur in the common kestrel or its close living relatives but does in almost all other falcons.
A number of subspecies of the common kestrel are known, though some are hardly distinct and may be invalid. Most differ only slightly, mainly in accordance with Bergmann's and Gloger's rules. Tropical African forms have less grey in the male plumage.
The common kestrels of Europe living during cold periods of the Quaternary glaciation differed slightly in size from the current population; they are sometimes referred to as the paleosubspecies "F. t. atavus" ("see also" Bergmann's Rule). The remains of these birds, which presumably were the direct ancestors of the living "F. t. tinnunculus" (and perhaps other subspecies), are found throughout the then-unglaciated parts of Europe, from the Late Pliocene (ELMA Villanyian/ICS Piacenzian, MN16) about 3 million years ago to the Middle Pleistocene Saalian glaciation which ended about 130,000 years ago, when they finally gave way to birds indistinguishable from those living today. Some of the voles the Ice Age common kestrels ate—such as European pine voles ("Microtus subterraneus")—were indistinguishable from those alive today. Other prey species of that time evolved more rapidly (like "M. malei", the presumed ancestor of today's tundra vole "M. oeconomus"), while still others seem to have gone entirely extinct without leaving any living descendants—for example "Pliomys lenki", which apparently fell victim to the Weichselian glaciation about 100,000 years ago.
The kestrel is sometimes seen, like other birds of prey, as a symbol of the power and vitality of nature. In "Into Battle" (1915), the war poet Julian Grenfell invokes the superhuman characteristics of the kestrel among several birds, when hoping for prowess in battle:
"The kestrel hovering by day,
And the little owls that call by night,
Bid him be swift and keen as they,
As keen of ear, as swift of sight."
Gerard Manley Hopkins (1844–1889) writes of the kestrel in his poem "The Windhover", exulting in its mastery of flight and its majesty in the sky.
"I caught this morning morning's minion, king-
dom of daylight's dauphin, dapple-dawn-drawn Falcon, in his riding"
A kestrel is also one of the main characters in "The Animals of Farthing Wood".
Barry Hines' novel "A Kestrel for a Knave", together with the 1969 film based on it, Ken Loach's "Kes", is about a working-class boy in England who befriends a kestrel.
The Pathan name for the kestrel, Bād Khurak, means "wind hover" and in Punjab it is called Larzānak or "little hoverer". It was once used as a decoy to capture other birds of prey in Persia and Arabia. It was also used to train greyhounds meant for hunting gazelles in parts of Arabia. Young greyhounds would be set after jerboa-rats which would also be distracted and forced to make twists and turns by the dives of a kestrel.
The name "kestrel" is derived from the French crécerelle which is diminutive for crécelle, which also referred to a bell used by lepers. The word first appears in 1678 in the work of Francis Willughby. The kestrel was once used to drive and keep away pigeons. Archaic names for the kestrel include "windhover" and "windfucker," due to its habit of beating the wind (hovering in air).
The Late Latin "falco" derives from "falx", "falcis", a sickle, referencing the claws of the bird. The species name "tinnunculus" is Latin for "kestrel" from "tinnulus", "shrill".
|
https://en.wikipedia.org/wiki?curid=17044
|
Kalmia latifolia
Kalmia latifolia, commonly called mountain laurel, calico-bush, or spoonwood, is a broadleaved evergreen shrub in the heather family, Ericaceae, that is native to the eastern United States. Its range stretches from southern Maine south to northern Florida, and west to Indiana and Louisiana. Mountain laurel is the state flower of Connecticut and Pennsylvania. It is the namesake of Laurel County in Kentucky, the city of Laurel, Mississippi, and the Laurel Highlands in southwestern Pennsylvania.
"Kalmia latifolia" is an evergreen shrub growing to 3–9 m tall. The leaves are 3–12 cm long and 1–4 cm wide. Its flowers are round, ranging from light pink to white, and occur in clusters. There are several named cultivars today that have darker shades of pink, near red and maroon pigment. It blooms in May and June. All parts of the plant are poisonous. Roots are fibrous and matted.
The plant is naturally found on rocky slopes and mountainous forest areas. It thrives in acidic soil, preferring a soil pH in the 4.5 to 5.5 range. The plant often grows in large thickets, covering great areas of forest floor. In the Appalachians, it can become tree-sized but is a shrub farther north. The species is a frequent component of oak-heath forests. In low, wet areas, it grows densely, but in dry uplands has a more sparse form. In the southern Appalachians, laurel thickets are referred to as "laurel hells" because they are nearly impossible to pass through.
"Kalmia latifolia" is notable for its unusual method of dispensing its pollen. As the flower grows, the filaments of its stamens are bent and brought into tension. When an insect lands on the flower, the tension is released, catapulting the pollen forcefully onto the insect. Experiments have shown the flower capable of flinging its pollen up to 15 cm. Physicist Lyman J. Briggs became fascinated with this phenomenon in the 1950s after his retirement from the National Bureau of Standards and conducted a series of experiments in order to explain it.
"Kalmia latifolia" is also known as ivybush or spoonwood (because Native Americans used to make their spoons out of it).
The plant was first recorded in America in 1624, but it was named after the Finnish explorer and botanist Pehr Kalm (1716–1779), who sent samples to Linnaeus.
The Latin specific epithet "latifolia" means "with broad leaves" – as opposed to its sister species "Kalmia angustifolia", "with narrow leaves".
The plant was originally brought to Europe as an ornamental plant during the 18th century. It is still widely grown for its attractive flowers and year-round evergreen leaves. Elliptic, alternate, leathery, glossy evergreen leaves (to 5" long) are dark green above and yellow green beneath and reminiscent of the leaves of rhododendrons. All parts of this plant are toxic if ingested. Numerous cultivars have been selected with varying flower color. Many of the cultivars have originated from the Connecticut Experiment Station in Hamden and from the plant breeding of Dr. Richard Jaynes. Jaynes has created numerous named varieties and is considered the world's authority on "Kalmia latifolia".
In the UK the following cultivars have gained the Royal Horticultural Society's Award of Garden Merit:
The wood of the mountain laurel is heavy and strong but brittle, with a close, straight grain. It has never been a viable commercial crop as it does not grow large enough, yet it is suitable for wreaths, furniture, bowls and other household items. It was used in the early 19th century in wooden-works clocks. Root burls were used for pipe bowls in place of imported briar burls, which were unobtainable during World War II. It can be used for handrails or guard rails.
Mountain laurel is poisonous to several animals, including horses, goats, cattle, deer, monkeys, and humans, due to grayanotoxin and arbutin. The green parts of the plant, flowers, twigs, and pollen are all toxic, as are food products made from them, such as toxic honey that may produce neurotoxic and gastrointestinal symptoms in humans who eat more than a modest amount. Symptoms of toxicity begin to appear about 6 hours following ingestion. Symptoms include irregular or difficult breathing, anorexia, repeated swallowing, profuse salivation, watering of the eyes and nose, cardiac distress, incoordination, depression, vomiting, frequent defecation, weakness, convulsions, paralysis, coma, and eventually death. Necropsy of animals that have died from spoonwood poisoning shows gastrointestinal hemorrhage.
The Cherokee use the plant as an analgesic, placing an infusion of leaves on scratches made over the location of the pain. They also rub the bristly edges of ten to twelve leaves over the skin for rheumatism, crush the leaves to rub on brier scratches, use an infusion as a wash "to get rid of pests", use a compound as a liniment, rub leaf ooze into the scratched skin of ball players to prevent cramps, and use a leaf salve for healing. They also use the wood for carving.
The Hudson Bay Cree use a decoction of the leaves for diarrhea, but consider the plant to be poisonous.
|
https://en.wikipedia.org/wiki?curid=17045
|
Khmer Rouge
The Khmer Rouge (, ; ; "Red Khmers") is the name which was popularly given to members of the Communist Party of Kampuchea (CPK) and by extension to the regime through which the CPK ruled Cambodia between 1975 and 1979. The name had been coined in the 1960s by Norodom Sihanouk to describe his country's then heterogeneous, communist-led dissidents, with whom he allied after his 1970 overthrow.
The Khmer Rouge army was slowly built up in the jungles of Eastern Cambodia during the late 1960s, supported by the North Vietnamese army, the Viet Cong, the Pathet Lao, and the Communist Party of China (CPC). Although the Khmer Rouge originally fought against Sihanouk, on the advice of the CPC it changed its position and supported Sihanouk after he was overthrown in a 1970 coup by Lon Nol, who established the pro-United States Khmer Republic. Despite a massive American bombing campaign against them, the Khmer Rouge won the Cambodian Civil War when they captured the Cambodian capital and overthrew the Khmer Republic in 1975. Following their victory, the Khmer Rouge, who were led by Pol Pot, Nuon Chea, Ieng Sary, Son Sen, and Khieu Samphan, immediately set about forcibly evacuating the country's major cities, and in 1976 they renamed the country Democratic Kampuchea.
The Khmer Rouge regime was highly autocratic, xenophobic, paranoid, and repressive. Many deaths resulted from the regime's social engineering policies and the "Maha Lout Ploh", an imitation of China's Great Leap Forward, which caused the Great Chinese Famine. The Khmer Rouge's attempts at agricultural reform through collectivisation similarly led to widespread famine, while its insistence on absolute self-sufficiency even in the supply of medicine led to the death of many thousands from treatable diseases such as malaria. The Khmer Rouge regime murdered hundreds of thousands of their perceived political opponents, and its racist emphasis on national purity resulted in the genocide of Cambodian minorities. Arbitrary executions and torture were carried out by its cadres against perceived subversive elements, or during genocidal purges of its own ranks between 1975 and 1978. Ultimately, the "Cambodian genocide" led to the death of 1.5 to 2 million people, around 25% of Cambodia's population.
In the 1970s, the Khmer Rouge were largely supported and funded by the CPC, receiving approval from Mao Zedong himself; it is estimated that at least 90% of the foreign aid which was provided to the Khmer Rouge came from China. The regime was removed from power in 1979 when Vietnam invaded Cambodia and quickly destroyed most of the Khmer Rouge's forces. The Khmer Rouge then fled to Thailand, whose government saw them as a buffer force against the Communist Vietnamese. During the 1980s, China and the United States – together with their allies – backed Pol Pot in exile in order to counter the power of the Soviet Union and Vietnam, providing the Khmer Rouge with intelligence, food, weapons, and military training. The extent of US support is disputed. The Khmer Rouge continued to fight against the Vietnamese and the new People's Republic of Kampuchea government during the Cambodian–Vietnamese War which ended in 1989. The Cambodian governments-in-exile (including the Khmer Rouge) held onto Cambodia's United Nations seat (with considerable international support) until 1993, when the monarchy was restored and the name of the Cambodian state was changed from Democratic Kampuchea to the Kingdom of Cambodia. A year later, thousands of Khmer Rouge guerrillas surrendered themselves in a government amnesty. In 1996, a new political party called the Democratic National Union Movement was formed by Ieng Sary, who was granted amnesty for his role as the deputy leader of the Khmer Rouge. The organisation was largely dissolved by the mid-1990s and finally surrendered completely in 1999. In 2014, two Khmer Rouge leaders, Nuon Chea and Khieu Samphan, were jailed for life by a United Nations-backed court which found them guilty of crimes against humanity for their roles in the Khmer Rouge's genocidal campaign.
The term "Khmers rouges", French for red Khmers, was coined by King Norodom Sihanouk and later adopted by English speakers (in the form of the corrupted version Khmer Rouge). It was used to refer to a succession of communist parties in Cambodia which evolved into the Communist Party of Kampuchea (CPK) and later the Party of Democratic Kampuchea. Its military was known successively as the Kampuchean Revolutionary Army and the National Army of Democratic Kampuchea.
In power, the movement's ideology was shaped by a power struggle during 1976 in which the so-called Party Centre led by Pol Pot defeated other regional elements of its leadership. The Party Centre's ideology combined elements of Marxism with a strongly xenophobic form of Khmer nationalism. Due in part to its secrecy and changes in how it presented itself, academic interpretations of its political position vary widely, ranging from interpreting it as the "purest" Marxist–Leninist movement to characterising it as an anti-Marxist "peasant revolution".
Its leaders and theorists, most of whom had been exposed to the heavily Stalinist outlook of the French Communist Party during the 1950s, developed a distinctive and eclectic "post-Leninist" ideology that drew on elements of Stalinism, Maoism and the postcolonial theory of Frantz Fanon. In the early 1970s, the Khmer Rouge looked to the model of Enver Hoxha's Albania which they believed was the most advanced communist state then in existence. Many of the regime's characteristics such as its focus on the rural peasantry rather than the urban proletariat as the bulwark of revolution, its emphasis on Great Leap Forward-type initiatives, its desire to abolish personal interest in human behaviour, its promotion of communal living and eating and its focus on perceived common sense over technical knowledge appear to have been heavily influenced by Maoist ideology. However, the Khmer Rouge displayed these characteristics in a more extreme form.
While the CPK described itself as the "number 1 Communist state" once it was in power, some communist regimes, such as Vietnam, saw it as a Maoist deviation from orthodox Marxism. The Maoist and Khmer Rouge belief that human willpower could overcome material and historical conditions was strongly at odds with mainstream Marxism, which emphasised materialism and the idea of history as inevitable progression toward communism.
In 1981, during the Cambodian–Vietnamese War, in which they were supported by the United States, the Khmer Rouge officially renounced communism.
One of the regime's defining characteristics was its Khmer ultranationalism, which combined an idealisation of the Angkor Empire (802–1431) with an existential fear for the survival of the Cambodian state, which had historically been liquidated during periods of Vietnamese and Siamese intervention. The spillover of Vietnamese fighters from the Vietnamese–American War further aggravated anti-Vietnamese sentiments as the 1960s went on: the Khmer Republic under Lon Nol, overthrown by the Khmer Rouge, had itself promoted Mon-Khmer nationalism and was responsible for several anti-Vietnamese pogroms during the 1970s. Some historians such as Ben Kiernan have stated that the importance the regime gave to race overshadowed its conceptions of class.
Once in power, the Khmer Rouge explicitly targeted the Chinese, the Vietnamese, the Cham minority and even their partially Khmer offspring. The same attitude extended to the party's own ranks, as senior CPK figures of non-Khmer ethnicity were removed from the leadership despite extensive revolutionary experience and were often killed.
The Khmer Rouge's economic policy, which was largely based on the plans of Khieu Samphan, focused on the achievement of national self-reliance through an initial phase of agricultural collectivism. This would then be used as a route to achieve rapid social transformation and industrial and technological development without assistance from foreign powers, a process which the party characterised as a "Super Great Leap Forward". The strong emphasis on autarky in Khmer Rouge planning was probably influenced by the early writings of Samir Amin, which were cited in Khieu Samphan's PhD thesis.
The party's General Secretary Pol Pot strongly influenced the propagation of this policy. He was reportedly impressed with the self-sufficient manner in which the mountain tribes of Cambodia lived, which the party believed was a form of primitive communism. Khmer Rouge theory developed the concept that the nation should take "agriculture as the basic factor and use the fruits of agriculture to build industry". In 1975, Khmer Rouge representatives to China said that Pol Pot's belief was that the collectivisation of agriculture was capable of "[creating] a complete Communist society without wasting time on the intermediate steps". Society was accordingly classified into peasant "base people", who would be the bulwark of the transformation; and urban "new people", who were to be reeducated or liquidated. The focus of the Khmer Rouge leadership on the peasantry as the base of the revolution was, according to Michael Vickery, a product of their status as "petty-bourgeois radicals who had been overcome by peasantist romanticism". The opposition of the peasantry and the urban population in Khmer Rouge ideology was heightened by the structure of the Cambodian rural economy, where small farmers and peasants had historically suffered from indebtedness to urban money-lenders rather than to landlords. The policy of evacuating major towns, as well as providing a reserve of easily exploitable agricultural labour, was likely viewed positively by the Khmer Rouge's peasant supporters as removing the source of their debts.
Democratic Kampuchea is sometimes described as an atheist state, although its constitution stated that everyone had freedom of religion, or not to hold a religion. However, it specified that what it termed "reactionary religion" would not be permitted. While in practice religious activity was not tolerated, the relationship of the CPK to the majority Cambodian Theravada Buddhism was complex; several key figures in its history such as Tou Samouth and Ta Mok were former monks, along with many lower level cadres, who often proved some of the strictest disciplinarians. While there was extreme harassment of Buddhist institutions, there was a tendency for the CPK regime to internalise and reconfigure the symbolism and language of Cambodian Buddhism so that many revolutionary slogans mimicked the formulae learned by young monks during their training. Some cadres who had previously been monks interpreted their change of vocation as a simple movement from a lower to a higher religion, mirroring attitudes around the growth of Cao Dai in the 1920s.
The repression of Islam, practised by the country's Cham minority, and of Christianity was extensive. Islamic religious leaders were executed, although some Cham Muslims appear to have been told they could continue devotions in private as long as this did not interfere with work quotas. Nevertheless, Mat Ly, a Cham who served as the deputy minister of agriculture under the People's Republic of Kampuchea, stated that Khmer Rouge troops had perpetrated a number of massacres in Cham villages in the Central and Eastern zones where the residents had refused to give up Islamic customs.
Buddhist laity seem not to have been singled out for persecution, although traditional belief in the tutelary spirits, or "neak ta", rapidly eroded as people were forcibly moved from their home areas. The position with Buddhist monks was more complicated: as with Islam, many religious leaders were killed whereas many ordinary monks were sent to remote monasteries where they were subjected to hard physical labour. The same division between rural and urban populations was seen in the regime's treatment of monks. For instance, those from urban monasteries were classified as "new monks" and sent to rural areas to live alongside "base monks" of peasant background, who were classified as "proper and revolutionary". Monks were not ordered to defrock until as late as 1977 in Kratié Province, where many monks found that they reverted to the status of lay peasantry as the agricultural work they were allocated involved regular breaches of monastic rules. While there is evidence of widespread vandalism of Buddhist monasteries, many more than was initially thought survived the Khmer Rouge years in fair condition, as did most Khmer historical monuments, and it is possible that stories of their near total destruction were propaganda issued by the successor People's Republic of Kampuchea. Nevertheless, it has been estimated that nearly 25,000 Buddhist monks were killed by the regime.
While François Ponchaud stated that Christians were invariably taken away and killed with the accusation of having links with the CIA, at least some cadres appear to have regarded it as preferable to the "feudal" class-based Buddhism. Nevertheless, it remained deeply suspect to the regime thanks to its close links to French colonialism; Phnom Penh cathedral was razed along with other places of worship.
The history of the communist movement in Cambodia can be divided into six phases, namely the emergence before World War II of the Indochinese Communist Party (ICP), whose members were almost exclusively Vietnamese; the 10-year struggle for independence from the French, when a separate Cambodian communist party, the Kampuchean (or Khmer) People's Revolutionary Party (KPRP), was established under Vietnamese auspices; the period following the Second Party Congress of the KPRP in 1960, when Saloth Sar (Pol Pot after 1976) and other future Khmer Rouge leaders gained control of its apparatus; the revolutionary struggle from the initiation of the Khmer Rouge insurgency in 1967–1968 to the fall of the Lon Nol government in April 1975; the Democratic Kampuchea regime from April 1975 to January 1979; and the period following the Third Party Congress of the KPRP in January 1979, when Hanoi effectively assumed control over Cambodia's government and communist party.
In 1930, Ho Chi Minh founded the Communist Party of Vietnam by unifying three smaller communist movements that had emerged in northern, central and southern Vietnam during the late 1920s. Almost immediately, the party was renamed the Indochinese Communist Party, ostensibly so it could include revolutionaries from Cambodia and Laos. Almost without exception, all of the earliest party members were Vietnamese. By the end of World War II, a handful of Cambodians had joined its ranks, but their influence on the Indochinese communist movement as well as their influence on developments within Cambodia was negligible.
Viet Minh units occasionally made forays into Cambodian bases during their war against the French and, in conjunction with the leftist government that ruled Thailand until 1947, the Viet Minh encouraged the formation of armed, left-wing Khmer Issarak bands. On April 17, 1950 (25 years to the day before the Khmer Rouge captured Phnom Penh), the first nationwide congress of the Khmer Issarak groups convened and the United Issarak Front was established. Its leader was Son Ngoc Minh and a third of its leadership consisted of members of the ICP. According to the historian David P. Chandler, the leftist Issarak groups aided by the Viet Minh occupied a sixth of Cambodia's territory by 1952 and on the eve of the Geneva Conference controlled as much as one half of the country.
In 1951, the ICP was reorganized into three national units, namely the Vietnam Workers' Party (VWP), the Lao Issara and the Kampuchean or Khmer People's Revolutionary Party (KPRP). According to a document issued after the reorganization, the VWP would continue to "supervise" the smaller Laotian and Cambodian movements. Most KPRP leaders and rank-and-file seem to have been either Khmer Krom or ethnic Vietnamese living in Cambodia.
According to Democratic Kampuchea's perspective of party history, the Viet Minh's failure to negotiate a political role for the KPRP at the 1954 Geneva Conference represented a betrayal of the Cambodian movement, which still controlled large areas of the countryside and which commanded at least 5,000 armed men. Following the conference, about 1,000 members of the KPRP, including Son Ngoc Minh, made a Long March into North Vietnam, where they remained in exile. In late 1954, those who stayed in Cambodia founded a legal political party, the Pracheachon Party, which participated in the 1955 and the 1958 National Assembly elections. In the September 1955 election, it won about four percent of the vote, but did not secure a seat in the legislature. Members of the Pracheachon were subject to constant harassment and to arrests because the party remained outside Sihanouk's political organization, Sangkum. Government attacks prevented it from participating in the 1962 election and drove it underground. Sihanouk habitually labelled local leftists the Khmer Rouge, a term that later came to signify the party and the state headed by Pol Pot, Ieng Sary, Khieu Samphan and their associates.
During the mid-1950s, KPRP factions, the "urban committee" (headed by Tou Samouth) and the "rural committee" (headed by Sieu Heng), emerged. In very general terms, these groups espoused divergent revolutionary lines. The prevalent "urban" line endorsed by North Vietnam recognized that Sihanouk by virtue of his success in winning independence from the French was a genuine national leader whose neutralism and deep distrust of the United States made him a valuable asset in Hanoi's struggle to "liberate" South Vietnam. Advocates of this line hoped that the prince could be persuaded to distance himself from the right-wing and to adopt leftist policies. The other line, supported for the most part by rural cadres who were familiar with the harsh realities of the countryside, advocated an immediate struggle to overthrow the "feudalist" Sihanouk.
During the 1950s, Khmer students in Paris organized their own communist movement which had little, if any, connection to the hard-pressed party in their homeland. From their ranks came the men and women who returned home and took command of the party apparatus during the 1960s, led an effective insurgency against Lon Nol from 1968 until 1975 and established the regime of Democratic Kampuchea.
Pol Pot, who rose to the leadership of the communist movement in the 1960s, was born in 1928 (some sources say 1925) in Kampong Thum Province, northeast of Phnom Penh. He attended a technical high school in the capital and then went to Paris in 1949 to study radio electronics (other sources say he attended a school for fax machines and also studied civil engineering). Described by one source as a "determined, rather plodding organizer", Pol Pot failed to obtain a degree, but according to Jesuit priest Father François Ponchaud he acquired a taste for the classics of French literature as well as an interest in the writings of Karl Marx.
Another member of the Paris student group was Ieng Sary, a Chinese-Khmer born in 1925 in South Vietnam. He attended the elite Lycée Sisowath in Phnom Penh before beginning courses in commerce and politics at the Paris Institute of Political Science (more widely known as Sciences Po) in France. Khieu Samphan was born in 1931 and specialized in economics and politics during his time in Paris. Hou Yuon (born in 1930) studied economics and law, Son Sen (born in 1930) studied education and literature and Hu Nim (born in 1932) studied law.
Two members of the group, Khieu Samphan and Hou Yuon, earned doctorates from the University of Paris while Hu Nim obtained his degree from the University of Phnom Penh in 1965. Most came from landowner or civil servant families. Pol Pot and Hou Yuon may have been related to the royal family as an older sister of Pol Pot had been a concubine at the court of King Monivong. Pol Pot and Ieng Sary married Khieu Ponnary and Khieu Thirith, also known as Ieng Thirith, purportedly relatives of Khieu Samphan. These two well-educated women also played a central role in the regime of Democratic Kampuchea.
A number turned to orthodox Marxism–Leninism. At some time between 1949 and 1951, Pol Pot and Ieng Sary joined the French Communist Party. In 1951, the two men went to East Berlin to participate in a youth festival. This experience is considered to have been a turning point in their ideological development. Meeting Khmers who were fighting with the Viet Minh (whom they subsequently judged to be too subservient to the Vietnamese), they became convinced that only a tightly disciplined party organization and a readiness for armed struggle could achieve revolution. They transformed the Khmer Students Association (KSA), to which most of the 200 or so Khmer students in Paris belonged, into an organization for nationalist and leftist ideas.
Inside the KSA and its successor organizations, there was a secret organization known as the Cercle Marxiste (Marxist circle). The organization was composed of cells of three to six members with most members knowing nothing about the overall structure of the organization. In 1952, Pol Pot, Hou Yuon, Ieng Sary and other leftists gained notoriety by sending an open letter to Sihanouk calling him the "strangler of infant democracy". A year later, the French authorities closed down the KSA, but Hou Yuon and Khieu Samphan helped to establish in 1956 a new group, the Khmer Students Union. Inside, the group was still run by the Cercle Marxiste.
The doctoral dissertations which were written by Hou Yuon and Khieu Samphan express basic themes that would later become the cornerstones of the policy that was adopted by Democratic Kampuchea. The central role of the peasants in national development was espoused by Hou Yuon in his 1955 thesis, "The Cambodian Peasants and Their Prospects for Modernization", which challenged the conventional view that urbanization and industrialization are necessary precursors of development.
The major argument in Khieu Samphan's 1959 thesis, "Cambodia's Economy and Industrial Development", was that the country had to become self-reliant and end its economic dependency on the developed world. In its general contours, Samphan's work reflected the influence of a branch of the dependency theory school which blamed lack of development in the Third World on the economic domination of the industrialized nations.
After returning to Cambodia in 1953, Pol Pot threw himself into party work. At first, he went to join with forces allied to the Viet Minh operating in the rural areas of Kampong Cham Province (Kompong Cham). After the end of the war, he moved to Phnom Penh under Tou Samouth's "urban committee", where he became an important point of contact between above-ground parties of the left and the underground secret communist movement.
His comrades Ieng Sary and Hou Yuon became teachers at a new private high school, the Lycée Kambuboth, which Hou Yuon helped to establish. Khieu Samphan returned from Paris in 1959, taught as a member of the law faculty of the University of Phnom Penh and started a left-wing French-language publication, "L'Observateur". The paper soon acquired a reputation in Phnom Penh's small academic circle. The following year, the government closed the paper and Sihanouk's police publicly humiliated Samphan by beating, undressing and photographing him in public; as Shawcross notes, "not the sort of humiliation that men forgive or forget".
Yet the experience did not prevent Samphan from advocating cooperation with Sihanouk in order to promote a united front against United States activities in South Vietnam. Khieu Samphan, Hou Yuon and Hu Nim were forced to "work through the system" by joining the Sangkum and by accepting posts in the prince's government.
In late September 1960, twenty-one leaders of the KPRP held a secret congress in a vacant room of the Phnom Penh railroad station. This pivotal event remains shrouded in mystery because its outcome has become an object of contention and considerable historical rewriting between pro-Vietnamese and anti-Vietnamese Khmer communist factions.
The question of cooperation with, or resistance to, Sihanouk was thoroughly discussed. Tou Samouth, who advocated a policy of cooperation, was elected general secretary of the KPRP that was renamed the Workers' Party of Kampuchea (WPK). His ally Nuon Chea, also known as Long Reth, became deputy general secretary, but Pol Pot and Ieng Sary were named to the Political Bureau to occupy the third and the fifth highest positions in the renamed party's hierarchy. The name change is significant. By calling itself a workers' party, the Cambodian movement claimed equal status with the Vietnam Workers' Party. The pro-Vietnamese regime of the People's Republic of Kampuchea (PRK) implied in the 1980s that the September 1960 meeting was nothing more than the second congress of the KPRP.
On July 20, 1962, Tou Samouth was murdered by the Cambodian government. At the WPK's second congress in February 1963, Pol Pot was chosen to succeed Tou Samouth as the party's general secretary. Samouth's allies Nuon Chea and Keo Meas were removed from the Central Committee and replaced by Son Sen and Vorn Vet. From then on, Pol Pot and loyal comrades from his Paris student days controlled the party centre, edging out older veterans whom they considered excessively pro-Vietnamese.
In July 1963, Pol Pot and most of the central committee left Phnom Penh to establish an insurgent base in Ratanakiri Province in the northeast. Pol Pot had shortly before been put on a list of 34 leftists who were summoned by Sihanouk to join the government and sign statements saying Sihanouk was the only possible leader for the country. Pol Pot and Chou Chet were the only people on the list who escaped. All the others agreed to cooperate with the government and were afterward under 24-hour watch by the police.
The region to which Pol Pot and the others moved was inhabited by tribal minorities, the Khmer Loeu, whose rough treatment (including resettlement and forced assimilation) at the hands of the central government made them willing recruits for a guerrilla struggle. In 1965, Pol Pot made a visit of several months to North Vietnam and China. Since the 1950s, Pol Pot had made frequent visits to the People's Republic of China, receiving political and military training, especially on the theory of the "dictatorship of the proletariat", from personnel of the Communist Party of China (CPC). From November 1965 to February 1966, Pol Pot received training from high-ranking CPC officials such as Chen Boda and Zhang Chunqiao on topics such as the communist revolution in China, class conflicts and the Communist International. Pol Pot was particularly impressed by the lecture on "political purge" given by Kang Sheng. This experience enhanced his prestige when he returned to the WPK's "liberated areas". Despite friendly relations between Sihanouk and the Chinese, the latter kept Pol Pot's visit a secret from Sihanouk.
In September 1966, the WPK changed its name to the Communist Party of Kampuchea (CPK). The change in the name of the party was a closely guarded secret. Lower-ranking members of the party and even the Vietnamese were not told of it, and neither was the membership until many years later. The party leadership endorsed armed struggle against the government, then led by Sihanouk. In 1967, several small-scale attempts at insurgency were made by the CPK, but they had little success. In 1968, the Khmer Rouge was officially formed and its forces launched a national insurgency across Cambodia. Though North Vietnam had not been informed of the decision, its forces provided shelter and weapons to the Khmer Rouge after the insurgency started. Vietnamese support for the insurgency made it impossible for the Cambodian military to effectively counter it. For the next two years, the insurgency grew as Sihanouk did very little to stop it. As the insurgency grew stronger, the party finally openly declared itself to be the Communist Party of Kampuchea.
The political appeal of the Khmer Rouge was increased as a result of the situation created by the removal of Sihanouk as head of state in 1970. Premier Lon Nol deposed Sihanouk with the support of the National Assembly. Sihanouk, who was in exile in Beijing, made an alliance with the Khmer Rouge on the advice of CPC, and became the nominal head of a Khmer Rouge–dominated government-in-exile (known by its French acronym GRUNK) backed by China. In 1970 alone, the Chinese reportedly gave 400 tons of military aid to the United Front. Although thoroughly aware of the weakness of Lon Nol's forces and loath to commit American military force to the new conflict in any form other than air power, the Nixon administration supported the newly proclaimed Khmer Republic.
On 29 March 1970, the North Vietnamese launched an offensive against the Cambodian army. Documents uncovered from the Soviet Union archives revealed that the invasion was launched at the explicit request of the Khmer Rouge following negotiations with Nuon Chea. A force of North Vietnamese quickly overran large parts of eastern Cambodia reaching to within of Phnom Penh before being pushed back. By June, three months after the removal of Sihanouk, they had swept government forces from the entire northeastern third of the country. After defeating those forces, the North Vietnamese turned the newly won territories over to the local insurgents. The Khmer Rouge also established "liberated" areas in the south and the southwestern parts of the country, where they operated independently of the North Vietnamese.
After Sihanouk showed his support for the Khmer Rouge by visiting them in the field, their ranks swelled from 6,000 to 50,000 fighters. Many of the new recruits for the Khmer Rouge were apolitical peasants who fought in support of the King, not for communism, of which they had little understanding. Sihanouk's popular support in rural Cambodia allowed the Khmer Rouge to extend its power and influence to the point that by 1973 it exercised "de facto" control over the majority of Cambodian territory, although only a minority of its population. Many people in Cambodia who helped the Khmer Rouge against the Lon Nol government thought they were fighting for the restoration of Sihanouk.
By 1975, with the Lon Nol government running out of ammunition, it was clear that it was only a matter of time before the government would collapse. On 17 April 1975, the Khmer Rouge captured Phnom Penh.
The relationship between the massive carpet bombing of Cambodia by the United States and the growth of the Khmer Rouge, in terms of recruitment and popular support, has been a matter of interest to historians. Some scholars, including Michael Ignatieff, Adam Jones and Greg Grandin, have cited the United States intervention and bombing campaign (spanning 1965–1973) as a significant factor which led to increased support for the Khmer Rouge among the Cambodian peasantry. According to Ben Kiernan, the Khmer Rouge "would not have won power without U.S. economic and military destabilization of Cambodia. ... It used the bombing's devastation and massacre of civilians as recruitment propaganda and as an excuse for its brutal, radical policies and its purge of moderate communists and Sihanoukists." Pol Pot biographer David P. Chandler writes that the bombing "had the effect the Americans wanted – it broke the Communist encirclement of Phnom Penh", but it also accelerated the collapse of rural society and increased social polarization. Peter Rodman and Michael Lind claimed that the United States intervention saved the Lon Nol regime from collapse in 1970 and 1973. Craig Etcheson acknowledged that U.S. intervention increased recruitment for the Khmer Rouge but disputed that it was a primary cause of the Khmer Rouge victory. William Shawcross wrote that the United States bombing and ground incursion plunged Cambodia into the chaos that Sihanouk had worked for years to avoid.
By 1973, Vietnamese support of the Khmer Rouge had largely disappeared. On the other hand, China led by its Communist Party (CPC) largely "armed and trained" the Khmer Rouge, including Pol Pot himself, both during the Cambodian civil war and the years afterward. In 1970 alone, the Chinese reportedly gave 400 tons of military aid to the National United Front of Kampuchea formed by Sihanouk and the Khmer Rouge.
In April 1975, the Khmer Rouge seized power in Cambodia, and in January 1976 Democratic Kampuchea was established. During the Cambodian genocide, the CPC was the main international patron of the Khmer Rouge, supplying "more than 15,000 military advisers" and most of its external aid. It is estimated that at least 90% of the foreign aid to the Khmer Rouge came from China, with 1975 alone seeing US$1 billion in interest-free economic and military aid and a US$20 million gift, which was "the biggest aid ever given to any one country by China". In June 1975, Pol Pot and other officials of the Khmer Rouge met with Mao Zedong in Beijing, receiving Mao's approval and advice; in addition, Mao also taught Pol Pot his "Theory of Continuing Revolution under the Dictatorship of the Proletariat" (无产阶级专政下继续革命理论). High-ranking CPC officials such as Zhang Chunqiao later visited Cambodia to offer help.
Democratic Kampuchea was overthrown by the Vietnamese army in January 1979, and the Khmer Rouge fled to Thailand. However, to counter the power of the Soviet Union and Vietnam, a group of countries including China, the United States, Thailand as well as some Western countries supported the Khmer Rouge-dominated Coalition Government of Democratic Kampuchea (CGDK) to continue holding Cambodia's seat in the United Nations, which was held until 1993, long after the Cold War had ended. China has defended its ties with the Khmer Rouge. Chinese Foreign Ministry spokeswoman Jiang Yu said that "the government of Democratic Kampuchea had a legal seat at the United Nations, and had established broad foreign relations with more than 70 countries".
The governing structure of Democratic Kampuchea was split between the state presidium, headed by Khieu Samphan; the cabinet, led by Pol Pot, who was also Democratic Kampuchea's prime minister; and the party's own Politburo and Central Committee. All were complicated by a number of political factions which existed in 1975. The leadership of the Party Centre, the faction which was headed by Pol Pot, remained largely unchanged from the early 1960s to the mid-1990s. Its leaders were mostly from middle-class families and had been educated at French universities. The second significant faction was made up of men who had been active in the pre-1960 party and had stronger links to Vietnam as a result. However, government documents show that there were several major shifts in power between factions during the period in which the regime was in control.
In 1975–1976, there were several powerful zonal Khmer Rouge leaders who maintained their own armies and had different party backgrounds from the members of the Pol Pot clique, particularly So Phim and Nhim Ros, both of whom were vice presidents of the state presidium and members of the Politburo and Central Committee respectively. A possible military coup attempt was made in May 1976, led by a senior Eastern Zone cadre named Chan Chakrey, who had been appointed deputy secretary of the army's General Staff. A reorganisation which occurred in September 1976, during which Pol Pot was demoted in the state presidium, was later presented as an attempted pro-Vietnamese coup by the Party Centre. Over the next two years, So Phim, Nhim Ros, Vorn Vet and many other figures who had been associated with the pre-1960 party would be arrested and executed. So Phim's execution would be followed by that of the majority of the cadres and much of the population of the Eastern Zone that he had controlled. The Party Centre, lacking much in the way of its own military resources, accomplished its seizure of power by forming an alliance with Southwestern Zone leader Ta Mok and Pok, head of the North Zone's troops. Both men were of a purely peasant background and were therefore natural allies of the strongly peasantist ideology of the Pol Pot faction.
The Standing Committee of the Khmer Rouge's Central Committee during its period of power consisted of the following:
In power, the Khmer Rouge carried out a radical program that included isolating the country from all foreign influences, closing schools, hospitals and some factories, abolishing banking, finance and currency, and collectivising agriculture. Khmer Rouge theorists, who developed the ideas of Hou Yuon and Khieu Samphan, believed that an initial period of self-imposed economic isolation and national self-sufficiency would stimulate the rebirth of the crafts as well as the rebirth of the country's latent industrial capability.
In Phnom Penh and other cities, the Khmer Rouge told residents that they would only be moved about "two or three kilometers" away from the city and would return in "two or three days". Some witnesses said they were told that the evacuation was because of the "threat of American bombing" and they were also told that they did not have to lock their houses since the Khmer Rouge would "take care of everything" until they returned. If people refused to evacuate, they would immediately be killed and their homes would be burned to the ground. The evacuees were sent on long marches to the countryside, which killed thousands of children, elderly people and sick people. These were not the first evacuations of civilian populations by the Khmer Rouge because similar evacuations of populations without possessions had been occurring on a smaller scale since the early 1970s.
On arrival at the villages to which they had been assigned, evacuees were required to write brief autobiographical essays. The essay's content, particularly with regard to the subject's activity during the Khmer Republic regime, was used to determine their fate. Military officers and those occupying elite professional roles were usually sent for reeducation, which in practice meant immediate execution or confinement in a labour camp. Those with specialist technical skills often found themselves sent back to cities to restart production in factories which had been interrupted by the takeover. The remaining displaced urban population ("new people"), as part of the regime's drive to increase food production, were placed into agricultural communes alongside the peasant "base people" or "old people". The latter's holdings were collectivised. Cambodians were expected to produce three tons of rice per hectare, whereas before the Khmer Rouge era the average was only one ton per hectare. The total lack of agricultural knowledge on the part of the former city dwellers made famine inevitable, and the rural peasantry were often unsympathetic to them, or too frightened to assist them. Such acts as picking wild fruit or berries were seen as "private enterprise" and punished with death. Labourers were forced to work long shifts without adequate rest or food, resulting in many deaths through exhaustion, illness and starvation. Workers were executed for attempting to escape from the communes, for breaching minor rules, or after being denounced by colleagues. If caught, offenders were quietly taken off to a distant forest or field after sunset and killed. Unwilling to import Western medicines, the regime turned to traditional medicine instead and placed medical care in the hands of cadres who were only given rudimentary training. The famine, forced labour and lack of access to appropriate services led to a high number of deaths.
Khmer Rouge economic policies took a similarly extreme course. Officially, trade was restricted to bartering between communes, a policy which the regime developed in order to enforce self-reliance. Banks were raided and all currency and records were destroyed by fire, thus eliminating any claim to funds. After 1976, once the disastrous effects of its planning began to become apparent, the regime reinstated discussion of exports.
Commercial fishing was said to have been banned by the Khmer Rouge in 1976.
The regulations made by the Angkar also had effects on the traditional Cambodian family unit. The regime was primarily interested in increasing the young population, and one of the strictest regulations prohibited sex outside marriage, which was punishable by execution. In this, as in some other respects, the Khmer Rouge followed a morality based on an idealised conception of the attitudes of prewar rural Cambodia. Marriage required permission from the authorities, and the Khmer Rouge were strict in giving permission only for people of the same class and level of education to marry. Such rules were applied even more strictly to party cadres. While some refugees spoke of families being deliberately broken up, this appears to have referred mainly to the traditional Cambodian extended family unit, which the regime actively sought to destroy in favour of small nuclear units of parents and children.
The regime promoted arranged marriages, particularly between party cadres. While some academics such as Michael Vickery have noted that arranged marriages were also a feature of rural Cambodia prior to 1975, those conducted by the Khmer Rouge regime often involved people unfamiliar to each other. As well as reflecting the Khmer Rouge obsession with production and reproduction, such marriages were designed to increase people's dependency on the regime by undermining existing family and other loyalties.
It is often claimed that the Khmer Rouge regime promoted functional illiteracy. This claim is not completely wrong, but it is imprecise. The Khmer Rouge wanted to "eliminate all traces of Cambodia's imperialist past", and the country's previous culture was among those traces. The Khmer Rouge did not want the Cambodian people to be completely ignorant, and primary education was provided to them. Nevertheless, the Khmer Rouge's policies dramatically reduced the Cambodian population's cultural inflow as well as its knowledge and creativity. The Khmer Rouge's goal was to gain full control of all of the information the Cambodian people received and to spread revolutionary culture among the masses.
Education came to a "virtual standstill" in Democratic Kampuchea. Irrespective of central policies, most local cadres considered higher education useless and as a result, they were suspicious of those who had received it. The regime abolished all literary schooling above primary grades, ostensibly focusing on basic literacy instead. In practice, primary schools were not set up in many areas due to the extreme disruptions which had been caused by the regime's takeover, and most ordinary people, especially "new people", felt that their children were taught nothing worthwhile in those schools which still existed. The exception was the Eastern Zone, which until 1976 was run by cadres who were closely connected with Vietnam rather than the Party Centre, where a more organised system seems to have existed under which children were given extra rations, taught by teachers who were drawn from the "base people" and given a limited number of official textbooks.
Beyond primary education, a number of technical courses were taught in factories to students drawn from the favoured "base people". However, there was a general reluctance to increase people's education in Democratic Kampuchea: in some districts cadres were known to kill people who boasted of their educational accomplishments, and it was considered bad form to allude to any special technical training. Based on a speech which Pol Pot made in 1978, it appears that he may ultimately have envisaged that illiterate students from approved poor peasant backgrounds could become trained engineers within ten years through intensive targeted study combined with practical work.
The Khmer language has a complex system of usages to define speakers' rank and social status. During the rule of the Khmer Rouge, these usages were abolished. People were encouraged to call each other "friend" (មិត្ត; "mitt") and to avoid traditional signs of deference such as bowing or folding the hands in salutation, known as sampeah.
Language was also transformed in other ways. The Khmer Rouge invented new terms. In keeping with the regime's theories on Khmer identity, the majority of new words were coined with reference to Pali or Sanskrit terms while Chinese and Vietnamese-language borrowings were discouraged. People were told to "forge" ("lot dam") a new revolutionary character, that they were the "instruments" (ឧបករណ៍; "opokar") of the ruling body known as Angkar (អង្គការ, The Organisation) and that nostalgia for pre-revolutionary times ("chheu satek arom", or "memory sickness") could result in execution. Rural terms like "Mae" (ម៉ែ; mother) replaced urban terms like "Mak" (ម៉ាក់; mother).
Many Cambodians crossed the border into Thailand to seek asylum. From there, they were transported to refugee camps such as Sa Kaeo or Khao-I-Dang, the only camp allowing resettlement in countries such as the United States, France, Canada and Australia. In some refugee camps, such as Site 8, Phnom Chat, or Ta Prik, the Khmer Rouge cadres controlled food distribution and restricted the activities of international aid agencies.
Acting through the Santebal, the Khmer Rouge arrested, tortured and eventually executed anyone who was suspected of belonging to several categories of supposed "enemies", including the following:
The Santebal established over 150 prisons for political opponents. Tuol Sleng, a former high school, was turned into the Santebal headquarters and interrogation center for the highest-value political prisoners. Tuol Sleng was operated by the Santebal commander Khang Khek Ieu, more commonly known as Comrade Duch, together with his subordinates Mam Nai and Tang Sin Hean. According to Ben Kiernan, "all but seven of the twenty thousand Tuol Sleng prisoners" were executed. The buildings of Tuol Sleng have been preserved as they were left when the Khmer Rouge were driven out in 1979. Several of the rooms are now lined with thousands of black-and-white photographs of prisoners that were taken by the Khmer Rouge.
On 7 August 2014, when sentencing two former Khmer Rouge leaders to life imprisonment, Cambodian judge Nil Nonn said there was evidence of "a widespread and systematic attack against the civilian population of Cambodia". He said the leaders, Nuon Chea, the regime's chief ideologue and former deputy to the late leader Pol Pot, and Khieu Samphan, the former head of state, had together been involved in a "joint criminal enterprise" encompassing murder, extermination, political persecution and other inhumane acts related to the mass eviction of city-dwellers and executions of enemy soldiers. In November 2018, the tribunal convicted Nuon Chea and Khieu Samphan of crimes against humanity and genocide against the Vietnamese, while Nuon Chea was also found guilty of genocide relating to the Chams.
According to a 2001 academic source, the most widely accepted estimates of excess deaths under the Khmer Rouge range from 1.5 million to 2 million, although figures as low as 1 million and as high as 3 million have been cited; conventionally accepted estimates of deaths due to Khmer Rouge executions range from 500,000 to 1 million, "a third to one half of excess mortality during the period". However, a 2013 academic source (citing research from 2009) indicates that execution may have accounted for as much as 60% of the total, with 23,745 mass graves containing approximately 1.3 million suspected victims of execution.
Ben Kiernan estimates that 1.671 million to 1.871 million Cambodians died as a result of Khmer Rouge policy, or between 21% and 24% of Cambodia's 1975 population. A study by French demographer Marek Sliwinski calculated slightly fewer than 2 million unnatural deaths under the Khmer Rouge out of a 1975 Cambodian population of 7.8 million; 33.5% of Cambodian men died under the Khmer Rouge compared to 15.7% of Cambodian women. Researcher Craig Etcheson of the Documentation Center of Cambodia (DC-Cam) suggests that the death toll was between 2 million and 2.5 million, with a "most likely" figure of 2.2 million. After five years of researching mass grave sites, he estimated that they contained 1.38 million suspected victims of execution. Although considerably higher than earlier and more widely accepted estimates of Khmer Rouge executions, "Etcheson argues that these numbers are plausible, given the nature of the mass grave and DC-Cam's methods, which are more likely to produce an under-count of bodies rather than an over-estimate." Demographer Patrick Heuveline estimated that between 1.17 million and 3.42 million Cambodians died unnatural deaths between 1970 and 1979, with between 150,000 and 300,000 of those deaths occurring during the civil war. Heuveline's central estimate is 2.52 million excess deaths, of which 1.4 million were the direct result of violence. "As best as can now be estimated, over two million Cambodians died during the 1970s because of the political events of the decade, the vast majority of them during the mere four years of the 'Khmer Rouge' regime. This number of deaths is even more staggering when related to the size of the Cambodian population, then less than eight million. ... Subsequent reevaluations of the demographic data situated the death toll for the [civil war] in the order of 300,000 or less." Despite being based on a house-to-house survey of Cambodians, the estimate of 3.3 million deaths promulgated by the Khmer Rouge's successor regime, the People's Republic of Kampuchea (PRK), is generally considered to be an exaggeration; among other methodological errors, the PRK authorities added the estimated number of victims that had been found in the partially-exhumed mass graves to the raw survey results, meaning that some victims would have been double-counted. An additional 300,000 Cambodians starved to death between 1979 and 1980, largely as a result of the after-effects of Khmer Rouge policy.
Genocide
While the period 1975–79 is commonly associated with the phrase 'the Cambodian genocide', scholars debate whether the legal definition of the crime can be applied generally. While the ECCC has convicted two former leaders of genocide, this was for the treatment of ethnic and religious minorities, the Vietnamese and Cham. The death toll of these two groups, approximately 100,000 people, is roughly 5 percent of the generally accepted total of two million. The treatment of these groups can be seen to fall under the legal definition of genocide, as they were targeted on the basis of their religion or ethnicity. The vast majority of deaths were of the Khmer ethnic group, which was not a target of the Khmer Rouge as such. The deaths resulting from the targeting of these Khmer, whether as 'new people' or simply as 'enemies' of the regime, were based on political distinctions rather than ethnic or religious ones. Historian David Chandler, in an interview conducted in 2018, stated that crimes against humanity was the term that best fit the atrocities of the regime and that some attempts to characterise the majority of the killings as genocide were flawed and at times politicised.
Hou Yuon was one of the first senior leaders to be purged. The Khmer Rouge originally reported that he had been killed in the final battles for Phnom Penh but he was apparently executed in late 1975 or early 1976.
In late 1975 numerous Cambodian intellectuals, professionals and students returned from overseas to support the revolution. These returnees were treated with suspicion and made to undergo reeducation, while some were sent straight to Tuol Sleng.
In 1976 the center announced the start of the socialist revolution and ordered the elimination of class enemies; this resulted in the expulsion and execution of numerous people within the party and army who were deemed to be of the wrong class. In mid-1976 Ieng Thirith, now minister of social affairs, inspected the northwestern zone. On her return to Phnom Penh she reported that the zone's cadres were deliberately disobeying orders from the center, blaming enemy agents who were trying to undermine the revolution. During 1976 troops formerly from the eastern zone demanded the right to marry without the party's approval. They were arrested and under interrogation implicated their commander, who then implicated eastern zone cadres, who were in turn arrested and executed.
In September Keo Meas, who had been tasked with writing a history of the party, was arrested as a result of disputes over the foundation date of the party and its reliance on Vietnamese support. Under torture at Tuol Sleng he confessed that the date chosen was part of a plot to undermine the party's legitimacy and was then executed.
In late 1976, with the Kampuchean economy underperforming, Pol Pot ordered a purge of the ministry of commerce; Khoy Thoun and the subordinates whom he had brought from the northern zone were arrested and tortured at Tuol Sleng before being executed. Khoy Thoun confessed to having been recruited by the CIA in 1958. The center also ordered troops from the eastern and central zones to purge the northern zone, killing or arresting numerous cadres.
At the end of 1976, following disappointing rice harvests in the northwestern zone, the party center ordered a purge of the zone. Troops from the western and southwestern zone were ordered into the northwestern zone and over the next year killed at least 40 senior cadre and numerous lower ranking leaders. The chaos caused by this purge allowed many peasants to escape the zone and seek refuge in Thailand.
In 1977 the center began purging the returnees, sending 148 to Tuol Sleng and continuing a purge of the ministry of foreign affairs where many returnees and intellectuals were suspected of spying for foreign powers.
In January the center ordered eastern and southeastern zone troops to conduct cross-border raids into Vietnam. In March 1977 the center ordered So Phim, the eastern zone commander, to send his troops to the border; however, with class-warfare purges underway in the eastern zone, many units mutinied and fled into Vietnam. Among the troops defecting in this period was Hun Sen. On 10 April 1977 Hu Nim and his wife were arrested. After three months of interrogation at Tuol Sleng he confessed to working with the CIA to undermine the revolution, after which he and his wife were executed.
In July 1977 Pol Pot and Duch sent So Phim a list of "traitors" in the eastern zone, many of whom were So Phim's trusted subordinates. So Phim disputed the list and refused to execute those listed; to the center, this implicated So Phim himself as a traitor. In October 1977, in order to secure the Thai border while focusing on the confrontation with Vietnam, the center blamed Nhim Ros, the northwestern zone leader, for clashes on the Thai border, accusing him of acting on behalf of both the Vietnamese and the CIA.
In December 1977 the Vietnamese launched a punitive attack into eastern Cambodia, quickly routing the eastern zone troops, including Heng Samrin's Division 4, and further convincing Pol Pot of So Phim's treachery. Son Sen was sent to the eastern zone with center zone troops to aid the defense. In January 1978, following the Vietnamese withdrawal, a purge of the eastern zone began. In March So Phim called a secret meeting of his closest subordinates, advising them that those who had been purged were not traitors and warning them to be wary. During the next month more than 400 eastern zone cadres were sent to Tuol Sleng, while two eastern zone division commanders were replaced. During May eastern zone military leaders were called to meetings where they were arrested or killed. So Phim was called to a meeting by Son Sen but refused to attend, instead sending four messengers, who failed to return. On 25 May Son Sen sent two brigades of troops to attack the eastern zone and capture So Phim. Unable to believe he was being purged, So Phim went into hiding and attempted to contact Pol Pot by radio. A meeting was arranged, but instead of Pol Pot a group of center soldiers arrived; So Phim committed suicide, and the soldiers then killed his family.
Many of the surviving eastern zone leaders fled into the jungle, where they hid from and fought center zone troops. In October Chea Sim led a group of 300 people across the border into Vietnam, and the Vietnamese then launched a raid into the eastern zone that allowed Heng Samrin and his group of 2,000–3,000 soldiers and followers to seek refuge in Vietnam. Meanwhile, the center decided that the entire eastern zone was full of traitors and embarked on a large-scale purge of the area, with over 10,000 killed by July 1978, while thousands were evacuated to other zones to prevent them from defecting to the Vietnamese. The center also stepped up purges nationwide, killing cadres and their families, "old people" and eastern zone evacuees who were regarded as having dubious loyalty.
In September 1978 a purge of the ministry of industry was begun, and in November Pol Pot ordered the arrest of Vorn Vet, the deputy premier for the economy, followed by his supporters. Vorn Vet had previously served as the secretary of the zone around Phnom Penh, had established the Santebal and had been Duch's immediate superior. Under torture Vorn Vet admitted to being an agent of the CIA and the Vietnamese. Unable to reach the borders, ministry of industry personnel who could escape the purge went into hiding in Phnom Penh.
Fearing a Vietnamese attack, Pol Pot ordered a pre-emptive invasion of Vietnam on 18 April 1978. His Cambodian forces crossed the border and looted nearby villages, mostly in the border town of Ba Chúc. Of the 3,157 civilians who had lived in Ba Chúc, only two survived the massacre. These Cambodian forces were repelled by the Vietnamese.
Due to several years of border conflict and the flood of refugees fleeing Kampuchea, relations between Kampuchea and Vietnam collapsed by December 1978. On 25 December 1978, the Vietnamese armed forces along with the Kampuchean United Front for National Salvation, an organization founded by Heng Samrin that included many dissatisfied former Khmer Rouge members, invaded Cambodia and captured Phnom Penh on 7 January 1979. Despite a traditional Cambodian fear of Vietnamese domination, defecting Khmer Rouge activists assisted the Vietnamese and with Vietnam's approval became the core of the new People's Republic of Kampuchea. The new government was quickly dismissed by the Khmer Rouge and China as a "puppet government".
At the same time, the Khmer Rouge retreated west and continued to control certain areas near the Thai border for the next decade. These included Phnom Malai, the mountainous areas near Pailin in the Cardamom Mountains, and Anlong Veng in the Dângrêk Mountains.
These Khmer Rouge bases were not self-sufficient and were funded by diamond and timber smuggling, by military assistance from China channeled by means of the Thai military and by food smuggled from markets across the border in Thailand.
Despite its deposal, the Khmer Rouge retained its United Nations seat, which was occupied by Thiounn Prasith, an old compatriot of Pol Pot and Ieng Sary from their student days in Paris and one of the 21 attendees at the 1960 KPRP Second Congress. The seat was retained under the name Democratic Kampuchea until 1982 and then under the name Coalition Government of Democratic Kampuchea. Western governments voted in favor of the Coalition Government of Democratic Kampuchea retaining Cambodia's seat in the organization over the newly installed Vietnamese-backed People's Republic of Kampuchea, even though it included the Khmer Rouge. In 1988 Margaret Thatcher stated: "So, you'll find that the more reasonable ones of the Khmer Rouge will have to play some part in the future government, but only a minority part. I share your utter horror that these terrible things went on in Kampuchea". By contrast, Sweden changed its vote in the United Nations and withdrew its support for the Khmer Rouge after many Swedish citizens wrote letters to their elected representatives demanding a policy change towards Pol Pot's regime.
Vietnam's victory was supported by the Soviet Union and had significant ramifications for the region. The People's Republic of China launched a punitive invasion of northern Vietnam but then retreated, with both sides claiming victory. China, the United States and the ASEAN countries sponsored the creation and the military operations of a Cambodian government-in-exile, known as the Coalition Government of Democratic Kampuchea, which included the Khmer Rouge, the republican KPNLF and the royalist ANS.
Eastern and central Cambodia were firmly under the control of Vietnam and its Cambodian allies by 1980 while the western part of the country continued to be a battlefield throughout the 1980s, and millions of land mines were sown across the countryside. The Khmer Rouge, still led by Pol Pot, was the strongest of the three rebel groups in the Coalition Government, which received extensive military aid from China, Britain and the United States and intelligence from the Thai military. Britain and the United States in particular gave aid to the two non-Khmer Rouge members of the coalition.
In an attempt to broaden its support base, the Khmer Rouge formed the Patriotic and Democratic Front of the Great National Union of Kampuchea in 1979. In 1981, the Khmer Rouge went so far as to officially renounce communism, shifting its ideological emphasis somewhat toward nationalism and anti-Vietnamese rhetoric instead. However, some analysts argue that this change meant little in practice because, according to historian Kelvin Rowley, "CPK propaganda had always relied on nationalist rather than revolutionary appeals".
Although Pol Pot relinquished the Khmer Rouge leadership to Khieu Samphan in 1985, he continued to be the driving force behind the Khmer Rouge insurgency, giving speeches to his followers. Journalist Nate Thayer, who spent some time with the Khmer Rouge during that period, commented that despite the international community's near-universal condemnation of the Khmer Rouge's brutal rule a considerable number of Cambodians in Khmer Rouge-controlled areas seemed genuinely to support Pol Pot.
While Vietnam proposed to withdraw from Cambodia in return for a political settlement that would exclude the Khmer Rouge from power, the rebel coalition government as well as ASEAN, China and the United States, insisted that such a condition was unacceptable. Nevertheless, Vietnam declared in 1985 that it would complete the withdrawal of its forces from Cambodia by 1990 and it did so in 1989, having allowed the Cambodian People's Party government that it had installed there to consolidate its rule and gain sufficient military strength.
After a decade of inconclusive conflict, the pro-Vietnamese Cambodian government and the rebel coalition signed a treaty in 1991 calling for elections and disarmament. However, the Khmer Rouge resumed fighting in 1992, boycotted the election and in the following year rejected its results. It now fought the new Cambodian coalition government, which included the former Vietnamese-backed communists (headed by Hun Sen) as well as the Khmer Rouge's former non-communist and monarchist allies (notably Prince Ranariddh).
Ieng Sary led a mass defection from the Khmer Rouge in 1996, with half of its remaining soldiers (about 4,000) switching to the government side and Ieng Sary becoming leader of Pailin province. In 1997, a conflict between the two main participants in the ruling coalition caused Prince Ranariddh to seek support from some of the Khmer Rouge leaders, while refusing to have any dealings with Pol Pot. This resulted in bloody factional fighting among the Khmer Rouge leaders, ultimately leading to Pol Pot's trial and imprisonment by the Khmer Rouge. Pol Pot died in April 1998. Khieu Samphan surrendered in December.
On 29 December 1998, leaders of the Khmer Rouge apologised for the 1970s genocide. By 1999, most members had surrendered or been captured. In December 1999, Ta Mok and the remaining leaders surrendered and the Khmer Rouge effectively ceased to exist. Most of the surviving Khmer Rouge leaders live in the Pailin area or are hiding in Phnom Penh.
Cambodia has gradually recovered demographically and economically from the Khmer Rouge regime, although the psychological scars affect many Cambodian families and émigré communities. Cambodia has a very young population, and by 2003 three-quarters of Cambodians were too young to remember the Khmer Rouge era. Nonetheless, their generation is affected by the traumas of the past.
Members of this younger generation may know of the Khmer Rouge only through word of mouth from parents and elders. In part, this is because the government does not require that educators teach children about Khmer Rouge atrocities in the schools. However, Cambodia's Education Ministry started to teach Khmer Rouge history in high schools beginning in 2009.
The Extraordinary Chambers in the Courts of Cambodia (ECCC) was established as a Cambodian court with international participation and assistance to bring to trial senior leaders and those most responsible for crimes committed during the Khmer Rouge regime. It has been handling four cases since 2007. The ECCC's outreach efforts toward both national and international audiences include public trial hearings, study tours, video screenings, school lectures and video archives on its website. As of May 2018, cases against the former leadership of the Khmer Rouge regime for crimes including genocide and crimes against humanity remain ongoing.
After claiming to feel great remorse for his part in Khmer Rouge atrocities, Kaing Guek Eav (alias Duch), head of a torture centre from which 16,000 men, women and children were sent to their deaths, surprised the court in his trial on 27 November 2009 with a plea for his freedom. His Cambodian lawyer, Kar Savuth, stunned the tribunal further by issuing the trial's first call for an acquittal of his client, even after his French lawyer denied seeking such a verdict. On 26 July 2010, Duch was convicted and sentenced to thirty years. Theary Seng responded: "We hoped this tribunal would strike hard at impunity, but if you can kill 14,000 people and serve only 19 years – 11 hours per life taken – what is that? It's a joke", voicing concerns about political interference. In February 2012, Duch's sentence was increased to life imprisonment following appeals by both the prosecution and defence. In dismissing the defence's appeal, Judge Kong Srim stated that Duch's crimes were "undoubtedly among the worst in recorded human history" and deserved "the highest penalty available".
Public trial hearings in Phnom Penh are open to the people of Cambodia over the age of 18, as well as to foreigners. To encourage participation in the public hearings, the court provides free bus transportation for groups of Cambodians who want to visit the court. From the commencement of the Case 001 trial in 2009 through the end of 2011, 53,287 people participated in the public hearings. The ECCC has also hosted a Study Tour Program to help villagers in rural areas understand the history of the Khmer Rouge regime. The court provides free transport for them to visit the court and meet with court officials to learn about its work, in addition to visits to the genocide museum and the killing fields. The ECCC has also gone from village to village, providing video screenings and school lectures to promote understanding of the trial proceedings. Furthermore, trials and transcripts are partially available with English translation on the ECCC's website.
The Tuol Sleng Museum of Genocide and the Choeung Ek Killing Fields are the two major museums for learning about the history of the Khmer Rouge.
The Tuol Sleng Museum of Genocide is housed in a former high school building, which was transformed into a torture, interrogation and execution center between 1976 and 1979. The Khmer Rouge called the center S-21. Of the estimated 15,000 to 30,000 prisoners held there, only seven survived. The Khmer Rouge photographed the vast majority of the inmates and left a photographic archive, which enables visitors to see almost 6,000 S-21 portraits on the walls. Visitors can also learn how the inmates were tortured from the equipment and facilities exhibited in the buildings. In addition, one of the seven survivors shares his story with visitors at the museum.
The Choeung Ek Killing Fields are located about 15 kilometers outside of Phnom Penh. Most of the prisoners who were held captive at S-21 were taken to the fields to be executed and deposited in one of the approximately 129 mass graves. It is estimated that the graves contain the remains of over 20,000 victims. After the discovery of the site in 1979, the Vietnamese transformed the site into a memorial and stored skulls and bones in an open-walled wooden memorial pavilion. Eventually, these remains were showcased in the memorial's centerpiece stupa, or Buddhist shrine.
The Documentation Center of Cambodia (DC-Cam), an independent research institute, published "A History of Democratic Kampuchea 1975–1979", the nation's first textbook on the history of the Khmer Rouge. The 74-page textbook was approved by the government as a supplementary text in 2007. The textbook aims to standardise and improve the information students receive about the Khmer Rouge years, since the government-issued social studies textbook devotes only eight or nine pages to the period. The publication was part of DC-Cam's genocide education project, which includes leading the design of a national genocide studies curriculum with the Ministry of Education, training thousands of teachers at 1,700 high schools in how to teach about genocide, and working with universities across Cambodia.
Youth for Peace, a Cambodian non-governmental organization (NGO) that offers education in peace, leadership, conflict resolution and reconciliation to Cambodia's youth, published a book titled "Behind the Darkness: Taking Responsibility or Acting Under Orders?" in 2011. The book is unusual in that, instead of focusing on the victims as most books do, it collects the stories of former Khmer Rouge members, giving insights into the functioning of the regime and approaching the question of how such a regime could come to power.
While the tribunal contributes to the memorialization process at national level, some civil society groups promote memorialization at community level. The International Center for Conciliation (ICfC) began working in Cambodia in 2004 as a branch of the ICfC in Boston. ICfC launched the Justice and History Outreach (JHO) project in 2007 and has worked in villages in rural Cambodia with the goal of creating mutual understanding and empathy between victims and former members of the Khmer Rouge. Following the dialogues, villagers identify their own ways of memorialization such as collecting stories to be transmitted to the younger generations or building a memorial. Through the process, some villagers are beginning to accept the possibility of an alternative viewpoint to the traditional notions of evil associated with anyone who worked for the Khmer Rouge regime.
Radio National Kampuchea (RNK) as well as private and NGO radio stations broadcast programmes on the Khmer Rouge and the trials. The ECCC has its own weekly radio programme on RNK, which provides an opportunity for the public to interact with court officials and deepen their understanding of the cases.
Youth for Peace, a Cambodian NGO that offers education in peace, leadership, conflict resolution and reconciliation to Cambodia's youth, has broadcast the weekly radio program "You Also Have A Chance" since 2009. Aiming to prevent the passing on of hatred and violence to future generations, the program allows former Khmer Rouge members to talk anonymously about their past experiences.
All Cambodian television stations include regular coverage of the progress of the trials. The following stations feature special programming:
International television stations such as the BBC, Al Jazeera, CNN, NHK and Channel News Asia also cover the development of the trials.
https://en.wikipedia.org/wiki?curid=17049
Kenji Sahara
Kenji Sahara (佐原 健二 "Sahara Kenji") (born 14 May 1932) is a Japanese actor. He was born in Kawasaki City, Kanagawa. His real name is Masayoshi Kato (加藤 正好 "Katō Masayoshi"). Initially he used the name Tadashi Ishihara before changing it when he secured the lead role in "Rodan" (1956).
Sahara did a lot of work for the Toho Company, the studio that has so far produced twenty-eight "Godzilla" movies. He appeared in more entries of the "Godzilla" series than any other actor.
He was also an actor frequently relied upon in films by directors Ishiro Honda and Eiji Tsuburaya.
Because director Tsuburaya could not afford to fail in the earliest days of Tsuburaya Productions, he entrusted much of the work to Sahara. The result was a great success, and Tsuburaya Productions went on to make the "Ultraman" series, which continues to the present day.
He has appeared in many supporting roles.
Sahara is famous as a mainstay of Toho special-effects movies and the "Ultraman" series.
Sahara was also the lead in the first of the "Ultra" series, Ultra Q. He also appeared in a number of subsequent "Ultra" series, including:
His latest Ultraman appearance was in the 2008 Ultraman movie, "Superior Ultraman 8 Brothers". He also made a cameo in episodes 47 and 48 of Sonic X, being the voice of Dr. Atsumi.
https://en.wikipedia.org/wiki?curid=17051
Kiyoshi Atsumi
Kiyoshi Atsumi (渥美 清 "Atsumi Kiyoshi"), born Yasuo Tadokoro (田所 康雄 "Tadokoro Yasuo", 10 March 1928 in Tokyo – 4 August 1996 in Tokyo), was a Japanese film actor. He started his career in 1951 as a comedian at a strip-show theater in Asakusa. After two years of fighting pulmonary tuberculosis, he made his debut on TV in 1956 and on film in 1957. His vivid performance of a lovable, innocent man in the film “Dear Mr. Emperor” ("Haikei Tenno-Heika-Sama") in 1963 established his reputation as an actor.
Later he became the star of the highly popular "Tora-san" series of films. His portrayal of the main character lasted from the original "Otoko wa Tsurai yo" (translated into English as "It's Tough Being a Man") in 1969 to the 48th film released in 1995, the year before his death. The enduring success of the series made him synonymous with the Tora-san character, and many Japanese regarded his death as the death of the character Tora-san, not the death of the actor Yasuo Tadokoro or Kiyoshi Atsumi.
https://en.wikipedia.org/wiki?curid=17056
Karel Hynek Mácha
Karel Hynek Mácha (16 November 1810 – 5 November 1836) was a Czech romantic poet.
Mácha grew up in Prague, the son of a foreman at a mill. He learned Latin and German in school. He went on to study law at Prague University; during that time he also became involved in theatre (as an actor he first appeared in Jan Nepomuk Štěpánek's play "Czech and German" in July 1832 in Benešov), where he met Eleonora Šomková, with whom he had a son out of wedlock. He was fond of travel, enjoying trips into the mountains, and was an avid walker. Eventually he moved to Litoměřice, a quiet town some 60 km from Prague, to prepare for law school exams and to write poetry. Three days before he was to be married to Šomková, just a few weeks after he had begun working as a legal assistant, Mácha overexerted himself while helping to extinguish a fire and soon thereafter died of pneumonia. The day after his death had been scheduled as his wedding day in Prague.
Mácha was buried in Litoměřice in a pauper's grave. Recognition came after his death: in 1939, his remains were exhumed and given a formal state burial at the Vyšehrad cemetery in Prague. A statue was erected in his honor in Petřín Park, Prague. In 1937 a biographical film, "Karel Hynek Mácha", was made by Zet Molas (a pen name of Zdena Smolová). Lake Mácha was named after him in 1961.
Mácha was honored on a 50-haléř and a 1-koruna stamp issued on 30 April 1936, Scott Catalog #213–214. The stamp depicts a statue of Mácha found in Prague and was issued by the postal agency of Czechoslovakia ('Československo'). He was again honored on a 43-koruna postage stamp issued by the postal agency of the Czech Republic ('Česká Pošta') on 10 March 2010. This 43-koruna stamp is presented on a miniature souvenir sheet; its Scott catalog number is #3446.
Karel Mácha was appointed patron saint of the youth collective "De Barries" in 2019.
His lyrical epic poem "Máj" (May), published in 1836 shortly before his death, was judged by his contemporaries as confusing, too individualistic, and not in harmony with the national ideas. Czech playwright Josef Kajetán Tyl even wrote a parody of Mácha's style, "Rozervanec" (The Chaotic). "Máj" was rejected by publishers and was published by a vanity press at Mácha's own expense, not long before his early death. Josef Bohuslav Foerster set "Máj" for choir and orchestra as his Op. 159.
Mácha's genius was discovered and glorified much later by the poets and novelists of the 1850s (e.g., Jan Neruda, Vítězslav Hálek, and Karolina Světlá) and "Máj" is now regarded as the classic work of Czech Romanticism and is considered one of the best Czech poems ever written. It contains forebodings of many of the tendencies of 20th-century literature: existentialism, alienation, isolation, surrealism, and so on.
Mácha also authored a collection of autobiographical sketches titled "Pictures From My Life", the 1835–36 novel "Cikáni" (Gypsies), and several individual poems, as well as a journal in which, among other things, he detailed his sexual encounters with Šomková. The "Diary of Travel to Italy" describes his journey to Venice, Trieste, and Ljubljana (where he met the Slovene national poet France Prešeren) in 1834. The "Secret Diary" describes his daily life in autumn 1835 with cipher passages concerning his relationship with Eleonora Šomková.
https://en.wikipedia.org/wiki?curid=17059
Krupp
The Krupp family, a prominent 400-year-old German dynasty from Essen, is famous for its production of steel, artillery, ammunition and other armaments. The family business, known as Friedrich Krupp AG (Friedrich Krupp AG Hoesch-Krupp after acquiring Hoesch AG in 1991 and lasting until 1999), was the largest company in Europe at the beginning of the 20th century, and was the premier weapons manufacturer for Germany in both world wars. From the Thirty Years' War until the end of the Second World War, it produced battleships, U-boats, tanks, howitzers, guns, utilities, and hundreds of other commodities.
The dynasty began in 1587 when a trader named Arndt Krupp moved to Essen and joined the merchants' guild. He began buying vacated real estate from families who fled the city due to the Black Death, and became one of the city's richest men. His descendants produced small guns during the Thirty Years' War and eventually acquired fulling mills, coal mines, and an iron forge. During the Napoleonic Wars, Friedrich Krupp founded the Gusstahlfabrik (Cast Steel Works) and started smelted steel production in 1816. This led to the company becoming a major industrial power and laid the foundation for the steel empire that would come to dominate the world for nearly a century under his son Alfred. Krupp became the arms manufacturer for the Kingdom of Prussia in 1859, and later the German Empire.
During the 19th century, Alfred Krupp pioneered various benefit and social programs for the workers of the Krupp company. These included on-site technical and manual training, insurance (accident and life), low cost housing, hospitals, recreational facilities, parks, schools, bath houses, saloons, and department stores. Widows and orphans were guaranteed an income if their husbands or fathers were killed or injured at work.
The company produced steel used to build railroads in the United States and to cap the Chrysler Building. During the time of the Third Reich, the Krupp company supported the Nazi regime and used slave labour, profiting economically from a practice that formed part of the machinery of the Holocaust. The company had a workshop near the Auschwitz death camp. After the war, Krupp was rebuilt from scratch and again became one of the wealthiest companies in Europe. However, this growth did not last indefinitely. In 1967, an economic recession resulted in significant financial loss for the company. In 1999, it merged with Thyssen AG to form the industrial conglomerate ThyssenKrupp AG.
Controversy has not eluded the Krupp company. Being a major weapons supplier to multiple sides throughout various conflicts, the Krupps were sometimes blamed for the wars themselves or the degree of carnage that ensued.
Friedrich Krupp (1787–1826) launched the family's metal-based activities, building a pioneering steel foundry in Essen in 1810. His son Alfred (1812–87), known as "the Cannon King" or as "Alfred the Great", invested heavily in new technology to become a significant manufacturer of steel rollers (used to make eating utensils) and railway tyres. He also invested in fluidized hotbed technologies (notably the Bessemer process) and acquired many mines in Germany and France. Unusual for the era, he provided social services for his workers, including subsidized housing and health and retirement benefits.
The company began to make steel cannons in the 1840s—especially for the Russian, Turkish, and Prussian armies. Low non-military demand and government subsidies meant that the company specialized more and more in weapons: by the late 1880s the manufacture of armaments represented around 50% of Krupp's total output. When Alfred started with the firm, it had five employees. At his death twenty thousand people worked for Krupp—making it the world's largest industrial company and the largest private company in the German empire.
Krupp exhibited guns in the Great Krupp Building at the Columbian Exposition in 1893.
In the 20th century the company was headed by Gustav Krupp von Bohlen und Halbach (1870–1950), who assumed the surname of Krupp when he married the Krupp heiress, Bertha Krupp. After Adolf Hitler came to power in Germany in 1933, the Krupp works became the center for German rearmament. In 1943, by a special order from Hitler, the company reverted to a sole-proprietorship, with Gustav and Bertha's eldest son Alfried Krupp von Bohlen und Halbach (1907–67) as proprietor. After Germany's defeat, Gustav was senile and incapable of standing trial, and the Nuremberg Military Tribunal convicted Alfried as a war criminal in the Krupp Trial for "plunder" and for his company's use of slave labor. It sentenced him to 12 years in prison and ordered him to sell 75% of his holdings. In 1951, as the Cold War developed and no buyer came forward, the U.S. occupation authorities released him, and in 1953 he resumed control of the firm.
In 1968, the company became a corporation. In 1999, the Krupp Group merged with its largest competitor, Thyssen AG; the combined company, ThyssenKrupp, became Germany's fifth-largest firm and one of the largest steel producers in the world.
The Krupp family first appeared in the historical record in 1587, when Arndt Krupp joined the merchants' guild in Essen. Arndt, a trader, arrived in town just before an epidemic of the Black Death plague and became one of the city's wealthiest men by purchasing the property of families who fled the epidemic. After he died in 1624, his son Anton took over the family business; Anton oversaw a gunsmithing operation during the Thirty Years' War (1618–48), which was the first instance of the family's long association with arms manufacturing.
For the next century the Krupps continued to acquire property and became involved in municipal politics in Essen. By the mid-18th-century, Friedrich Jodocus Krupp, Arndt's great-great-grandson, headed the Krupp family. In 1751, he married Helene Amalie Ascherfeld (another of Arndt's great-great-grandchildren); Jodocus died six years later, which left his widow to run the business: a family first. The Widow Krupp greatly expanded the family's holdings over the decades, acquiring a fulling mill, shares in four coal mines, and (in 1800) an iron forge located on a stream near Essen.
In 1807 the progenitor of the modern Krupp firm, Friedrich Krupp, began his commercial career at age 19 when the Widow Krupp appointed him manager of the forge. Friedrich's father, the widow's son, had died 11 years previously; since that time, the widow had tutored the boy in the ways of commerce, as he seemed the logical family heir. Friedrich, however, proved a poor manager and quickly ran the formerly profitable forge into the ground; the widow soon had to sell it.
In 1810, the widow died, and in what would prove a disastrous move, left virtually all the Krupp fortune and property to Friedrich. Newly enriched, Friedrich decided to discover the secret of cast (crucible) steel. Benjamin Huntsman, a clockmaker from Sheffield, had pioneered a process to make crucible steel in 1740, but the British had managed to keep it secret, forcing others to import steel. When Napoleon began his blockade of the British Empire (see Continental System), British steel became unavailable, and Napoleon offered a prize of four thousand francs to anyone who could replicate the British process. This prize piqued Friedrich's interest.
Thus, in 1811 Friedrich founded the Krupp Gusstahlfabrik (Cast Steel Works). He realized he would need a large facility with a power source for success, and so he built a mill and foundry on the Ruhr River, which unfortunately proved an unreliable stream. Friedrich spent a significant amount of time and money in the small, waterwheel-powered facility, neglecting other Krupp business, but in 1816 he was able to produce smelted steel. He died in Essen on 8 October 1826, aged 39.
Alfred Krupp (born Alfried Felix Alwyn Krupp), son of Friedrich Carl, was born in Essen in 1812. His father's death forced him to leave school at the age of fourteen and take on responsibility for the steel works together with his mother, Therese Krupp. Prospects were daunting: his father had spent a considerable fortune in the attempt to cast steel in large ingots, and to keep the works going the widow and family lived in extreme frugality. The young director laboured alongside the workmen by day and carried on his father's experiments at night, while occasionally touring Europe trying to promote Krupp products and make sales. It was during a stay in England that young Alfried became enamored of the country and adopted the English spelling of his name.
For years, the works made barely enough money to cover the workmen's wages. Then, in 1841, Alfred's brother Hermann invented the spoon-roller, which Alfred patented, bringing in enough money to enlarge the factory, expand steel production, and cast larger steel blocks. In 1847 Krupp made his first cannon of cast steel. At the Great Exhibition (London) of 1851, he exhibited a 6-pounder made entirely from cast steel, and a solid flawless ingot of steel weighing , more than twice as much as any previously cast. He surpassed this with a ingot for the Paris Exposition in 1855. Krupp's exhibits caused a sensation in the engineering world, and the Essen works became famous.
In 1851, another successful innovation, no-weld railway tyres, began the company's primary revenue stream, from sales to railways in the United States. Alfred enlarged the factory and fulfilled his long-cherished scheme to construct a breech-loading cannon of cast steel. He strongly believed in the superiority of breech-loaders on account of improved accuracy and speed, but this view did not win general acceptance among military officers, who remained loyal to tried-and-true muzzle-loaded bronze cannon. Alfred soon began producing breech-loading howitzers, one of which he gifted to the Prussian court.
Indeed, unable to sell his steel cannon, Krupp gave it to the King of Prussia, who used it as a decorative piece. The king's brother Wilhelm, however, realized the significance of the innovation. After he became regent in 1859, Prussia bought its first 312 steel cannon from Krupp, which became the main arms manufacturer for the Prussian military.
Prussia used the advanced technology of Krupp to defeat both Austria and France in the German Wars of Unification. The French high command refused to purchase Krupp guns despite Napoleon III's support. The Franco-Prussian war was in part a contest of "Kruppstahl" versus bronze cannon. The success of German artillery spurred the first international arms race, against Schneider-Creusot in France and Armstrong in England. Krupp was able to sell, alternately, improved artillery and improved steel shielding to countries from Russia to Chile to Siam.
In the Panic of 1873, Alfred continued to expand, including the purchase of Spanish mines and Dutch shipping, making Krupp the biggest and richest company in Europe but nearly bankrupting it. He was bailed out with a 30 million Mark loan from a consortium of banks arranged by the Prussian State Bank.
In 1878 and 1879 Krupp held competitions known as "Völkerschiessen", which were firing demonstrations of cannon for international buyers. These were held in Meppen at the largest proving ground in the world, privately owned by Krupp. He took on 46 nations as customers. At the time of his death in 1887, he had 75,000 employees, including 20,200 in Essen. In his lifetime, Krupp manufactured a total of 24,576 guns: 10,666 for the German government and 13,910 for export.
Krupp established the "Generalregulativ" as the firm's basic constitution. The company was a sole proprietorship, inherited by primogeniture, with strict control of workers. Krupp demanded a loyalty oath, required workers to obtain written permission from their foremen when they needed to use the toilet and issued proclamations telling his workers not to concern themselves with national politics. In return, Krupp provided social services that were unusually liberal for the era, including "colonies" with parks, schools and recreation grounds - while the widows' and orphans' and other benefit schemes insured the men and their families in case of illness or death. Essen became a large company town and Krupp became a de facto state within a state, with "Kruppianer" as loyal to the company and the Krupp family as to the nation and the Hohenzollern family. Krupp's paternalist strategy was adopted by Bismarck as government policy, as a preventive against Social Democratic tendencies, and later influenced the development and adoption of "Führerprinzip" by Adolf Hitler.
The Krupp social services program began about 1861, when it was found that there were not sufficient houses in the town for firm employees, and the firm began building dwellings. By 1862 ten houses were ready for foremen, and in 1863 the first houses for workingmen were built in Alt Westend. Neu Westend was built in 1871 and 1872. By 1905, 400 houses were provided, many being given rent free to widows of former workers. A cooperative society was founded in 1868 which became the Consum-Anstalt. Profits were divided according to amounts purchased. A boarding house for single men, the Ménage, was started in 1865 with 200 boarders and by 1905 accommodated 1000. Bath houses were provided and employees received free medical services. Accident, life, and sickness insurance societies were formed, and the firm contributed to their support. Technical and manual training schools were provided.
Krupp was also held in high esteem by the kaiser, who dismissed Julius von Verdy du Vernois and his successor Hans von Kaltenborn for rejecting Krupp's design of the C-96 field gun, quipping, "I’ve canned three War Ministers because of Krupp, and still they don’t catch on!"
Krupp proclaimed he wished to have "a man come and start a counter-revolution" against Jews, socialists and liberals. In some of his odder moods, he considered taking the role himself. According to historian William Manchester, his great grandson, Krupp would interpret these outbursts as a prophecy fulfilled by the coming of Hitler.
Krupp's marriage was not a happy one. His wife Bertha (not to be confused with their granddaughter), was unwilling to remain in polluted Essen in Villa Hügel, the mansion which Krupp designed. She spent most of their married years in resorts and spas, with their only child, a son.
After Krupp's death in 1887, his only son, Friedrich Alfred, carried on the work. The father had been a hard man, known as "Herr Krupp" since his early teens. Friedrich Alfred was called "Fritz" all his life, and was strikingly dissimilar to his father in appearance and personality. He was a philanthropist, a rarity amongst Ruhr industrial leaders. Part of his philanthropy supported the study of eugenics.
Fritz was a skilled businessman, though of a different sort from his father. Fritz was a master of the subtle sell, and cultivated a close rapport with the Kaiser, Wilhelm II. Under Fritz's management, the firm's business blossomed further and further afield, spreading across the globe. He focused on arms manufacturing, as the US railroad market purchased from its own growing steel industry.
Fritz Krupp authorized many new products that would do much to change history. In 1890 Krupp developed nickel steel, which was hard enough to allow thin battleship armor and cannon using Nobel's improved gunpowder. In 1892, Krupp bought Gruson in a hostile takeover. It became Krupp-Panzer and manufactured armor plate and ships' turrets. In 1893 Rudolf Diesel brought his new engine to Krupp to construct. In 1896 Krupp bought Germaniawerft in Kiel, which became Germany's main warship builder and built the first German U-boat in 1906.
Fritz married Magda and they had two daughters: Bertha (1886–1957) and Barbara (1887–1972); the latter married Tilo Freiherr von Wilmowsky (1878–1966) in 1907.
Fritz was arrested on 15 October 1902 by Italian police at his retreat on the Mediterranean island of Capri, where he enjoyed the companionship of forty or so adolescent Italian boys. The ensuing publicity was disastrous, and he was found dead in his chambers not long afterwards. Suicide was alleged, but foul play was suspected, and details of the event were vague. His wife was institutionalized for insanity.
Upon Fritz's death, his teenage daughter Bertha inherited the firm. In 1903, the firm formally incorporated as a joint-stock company, Fried. Krupp Grusonwerk AG. However, Bertha owned all but four shares. Kaiser Wilhelm II felt it was unthinkable for the Krupp firm to be run by a woman. He arranged for Bertha to marry Gustav von Bohlen und Halbach, a Prussian courtier to the Vatican and grandson of American Civil War General Henry Bohlen. By imperial proclamation at the wedding, Gustav was given the additional surname "Krupp," which was to be inherited by primogeniture along with the company.
In 1911, Gustav bought Hamm Wireworks to manufacture barbed wire. In 1912, Krupp began manufacturing stainless steel. At this time 50% of Krupp's armaments were sold to Germany, and the rest to 52 other nations. The company had invested worldwide, including in cartels with other international companies. Essen was the company headquarters. In 1913 Germany jailed a number of military officers for selling secrets to Krupp, in what was known as the "Kornwalzer scandal." Gustav was not himself penalized and fired only a single director, Otto Eccius.
After Archduke Franz Ferdinand was assassinated in 1914, Krupp bought his , in Werfen in the Austrian Alps, and which was a former residence of the Archbishops of Salzburg.
Gustav led the firm through World War I, concentrating almost entirely on artillery manufacturing, particularly following the loss of overseas markets as a result of the Allied blockade. Vickers of England naturally suspended royalty payments during the war (Krupp held the patent on shell fuses, but back-payment was made in 1926).
In 1916, the German government seized Belgian industry and conscripted Belgian civilians for forced labor in the Ruhr. These were novelties in modern warfare and in violation of the Hague Conventions, to which Germany was a signatory. During the war, Friedrich Krupp Germaniawerft produced 84 U-boats for the German navy, as well as the Deutschland submarine freighter, intended to ship raw material to Germany despite the blockade. In 1918 the Allies named Gustav a war criminal, but the trials never proceeded.
After the war, the firm was forced to renounce arms manufacturing. Gustav attempted to reorient to consumer products, under the slogan "Wir machen alles!" (we make everything!), but operated at a loss for years. The company laid off 70,000 workers but was able to stave off Socialist unrest by continuing severance pay and its famous social services for workers. The company opened a dental hospital to provide steel teeth and jaws for wounded veterans. It received its first contract from the Prussian State railway, and manufactured its first locomotive.
In 1920, the Ruhr Uprising occurred in reaction to the Kapp Putsch. The Ruhr Red Army, or Rote Soldatenbund, took over much of the demilitarized Rhineland unopposed. Krupp's factory in Essen was occupied, and independent republics were declared, but the German Reichswehr invaded from Westphalia and quickly restored order. Later in the year, Britain oversaw the dismantling of much of Krupp's factory, reducing capacity by half and shipping industrial equipment to France as war reparations.
In the hyperinflation of 1923, the firm printed Kruppmarks for use in Essen, where they were the only stable currency. France and Belgium occupied the Ruhr and established martial law. French soldiers inspecting Krupp's factory in Essen, cornered by workers in a garage, opened fire with a machine gun and killed thirteen of them. This incident spurred reprisal killings and sabotage across the Rhineland, and when Krupp held a large, public funeral for the workers, he was fined and jailed by the French. This made him a national hero, and he was granted an amnesty by the French after seven months.
Although Krupp was a monarchist at heart, he cooperated with the Weimar Republic; as a munitions manufacturer his first loyalty was to the government in power. He was deeply involved with the Reichswehr's evasion of the Treaty of Versailles, and secretly engaged in arms design and manufacture. In 1921 Krupp bought Bofors in Sweden as a front company and sold arms to neutral nations including the Netherlands and Denmark. In 1922, Krupp established Suderius AG in the Netherlands, as a front company for shipbuilding, and sold submarine designs to neutrals including the Netherlands, Spain, Turkey, Finland, and Japan. German Chancellor Wirth arranged for Krupp to secretly continue designing artillery and tanks, coordinating with army chief von Seeckt and navy chief Paul Behncke. Krupp was able to hide this activity from Allied inspectors for five years, and kept up his engineers' skills by hiring them out to Eastern European governments including Russia.
In 1924, the Raw Steel Association (Rohstahlgemeinschaft) was established in Luxembourg, as a quota-fixing cartel for coal and steel, by France, Britain, Belgium, Luxembourg, Austria, Czechoslovakia, and Germany. Germany, however, chose to violate quotas and pay fines, in order to monopolize the Ruhr's output and continue making high-grade steel. In 1926, Krupp began the manufacture of Widia ("Wie Diamant") cobalt-tungsten carbide. In 1928, German industry under Krupp leadership put down a general strike, locking out 250,000 workers, and encouraging the government to cut wages 15%. In 1929, the Chrysler Building was capped with Krupp steel.
Gustav and especially Bertha were initially skeptical of Hitler, who was not of their class. Gustav's skepticism toward the Nazis waned when Hitler dropped plans to nationalize business, the Communists gained seats in the 6 November elections, and Chancellor Kurt von Schleicher suggested a planned economy with price controls. Despite this, as late as the day before President Paul von Hindenburg appointed Hitler Chancellor, Gustav warned him not to do so. However, after Hitler won power, Gustav became enamoured with the Nazis (Fritz Thyssen described him as "a super-Nazi") to a degree his wife and subordinates found bizarre.
In 1933, Hitler made Gustav chairman of the Reich Federation of German Industry. Gustav ousted Jews from the organization and disbanded the board, establishing himself as the sole decision-maker. Hitler visited Gustav just before the Röhm purge in 1934, which among other things eliminated many of those who actually believed in the "socialism" of "National Socialism." Gustav supported the "Adolf Hitler Endowment Fund of German Industry", administered by Bormann, who used it to collect millions of Marks from German businessmen. As part of Hitler's secret rearmament program, Krupp expanded from 35,000 to 112,000 employees.
Gustav was alarmed at Hitler's aggressive foreign policy after the Munich Agreement, but by then he was fast succumbing to senility and was effectively displaced by his son Alfried. He was indicted at the Nuremberg Trials but never tried, due to his advanced dementia. He was thus the only German to be accused of being a war criminal after both world wars. He was nursed by his wife in a roadside inn near Blühnbach until his death in 1950, and then cremated and interred quietly, since his adopted name was at that time one of the most notorious in the American Zone.
As the eldest son of Bertha Krupp, Alfried was destined by family tradition to become the sole heir of the Krupp concern. An amateur photographer and Olympic sailor, he was an early supporter of Nazism among German industrialists, joining the SS in 1931, and never disavowing his allegiance to Hitler.
His father's health began to decline in 1939, and after a stroke in 1941, Alfried took over full control of the firm, continuing its role as main arms supplier to Germany at war. In 1943, Hitler decreed the Lex Krupp, authorizing the transfer of all Bertha's shares to Alfried, giving him the name "Krupp" and dispossessing his siblings.
During the war, Krupp was allowed to take over many industries in occupied nations, including the Arthur Krupp steel works in Berndorf, Austria, the Alsatian Corporation for Mechanical Construction (Elsaessische Maschinenfabrik AG, or ELMAG), Robert Rothschild's tractor factory in France, the Škoda Works in Czechoslovakia, and Deutsche Schiff- und Maschinenbau AG (Deschimag) in Bremen. This activity became the basis for the charge of "plunder" at the war crimes trial of Krupp executives after the war.
As another war crime, Krupp used slave labor, both POWs and civilians from occupied countries, and Krupp representatives were sent to concentration camps to select laborers. Treatment of Slavic and Jewish slaves was particularly harsh, since they were considered sub-human in Nazi Germany, and Jews were targeted for "extermination through labor". The number of slaves cannot be calculated due to constant fluctuation but is estimated at 100,000, at a time when the free employees of Krupp numbered 278,000. The highest number of Jewish slave laborers at any one time was about 25,000 in January 1943.
In 1942–1943, Krupp built the Berthawerk factory (named for his mother), near the Markstadt forced labour camp, for production of artillery fuses. Jewish women were used as slave labor there, leased from the SS for 4 Marks a head per day. Later in 1943 it was taken over by Union Werke.
In 1942, although the retreating Russians relocated many factories to the Urals, steel works were simply too large to move. Krupp took over production, including at the Molotov steel works near Kharkov and at Kramatorsk in eastern Ukraine, and at mines supplying the iron, manganese, and chrome vital for steel production.
The battle of Stalingrad in 1942 convinced Krupp that Germany would lose the war, and he secretly began liquidating 200 million Marks in government bonds. This allowed him to retain much of his fortune and hide it overseas.
Beginning in 1943, Allied bombers targeted the main German industrial district in the Ruhr. Most damage at Krupp's works was actually to the slave labor camps, and German tank production continued to increase from 1,000 to 1,800 per month. However, by the end of the war, with a manpower shortage preventing repairs, the main factories were out of commission.
On 25 July 1943 the Royal Air Force attacked the Krupp Works with 627 heavy bombers, dropping 2,032 long tons of bombs in an Oboe-marked attack. Upon his arrival at the works the next morning, Gustav Krupp suffered a fit from which he never recovered.
After the war, the Ruhr became part of the British Zone of occupation. The British dismantled Krupp's factories, sending machinery all over Europe as war reparations. The Russians seized Krupp's Grusonwerk in Magdeburg, including the formula for tungsten steel. Germaniawerft in Kiel was dismantled, and Krupp's role as an arms manufacturer came to an end. Allied High Commission Law 27, in 1950, mandated the decartelization of German industry.
Meanwhile, Alfried was held in Landsberg prison, where Hitler had been imprisoned in 1924. At the Krupp Trial, held in 1947–1948 in Nuremberg following the main Nuremberg trials, Alfried and most of his co-defendants were convicted of crimes against humanity (plunder and slave labor), while being acquitted of crimes against peace, and conspiracy. Alfried was condemned to 12 years in prison and the "forfeiture of all [his] property both real and personal," making him a pauper. Two years later, on 31 January 1951, John J. McCloy, High Commissioner of the American zone of occupation, issued an amnesty to the Krupp defendants. Much of Alfried's industrial empire was restored, but he was forced to transfer some of his fortune to his siblings, and he renounced arms manufacturing.
By this time, West Germany's Wirtschaftswunder had begun, and the Korean War had shifted the United States' priority from denazification to anti-Communism. German industry was seen as integral to western Europe's economic recovery, the limit on steel production was lifted, and the reputation of Hitler-era firms and industrialists was rehabilitated.
In 1953 Krupp negotiated the Mehlem agreement with the governments of the US, Great Britain and France. Hitler's Lex Krupp was upheld, reestablishing Alfried as sole proprietor, but Krupp mining and steel businesses were sequestered and pledged to be divested by 1959. There is scant evidence that Alfried intended to fulfill his side of the bargain, and he continued to receive royalties from the sequestered industries.
Despite having only 16,000 employees and 16,000 pensioners, Alfried refused to cut pensions. He ended unprofitable businesses including shipbuilding, railway tyres, and farm equipment. He hired Berthold Beitz, an insurance executive, as the face of the company, and began a public relations campaign to promote Krupp worldwide, omitting references to Nazism or arms manufacturing. Beginning with Adenauer, he established personal diplomacy with heads of state, making both open and secret deals to sell equipment and engineering expertise. Expansion was significant in the former colonies of Great Britain and behind the Iron Curtain, in countries eager to industrialize but suspicious of NATO. Krupp built rolling mills in Mexico, paper mills in Egypt, foundries in Iran, refineries in Greece, a vegetable oil processing plant in Sudan, and its own steel plant in Brazil. In India, Krupp rebuilt Rourkela in Odisha as a company town similar to his own Essen. In West Germany, Krupp made jet fighters in Bremen, as a joint venture with United Aircraft, and built an atomic reactor in Jülich, partly funded by the government. The company expanded to 125,000 employees worldwide, and in 1959 Krupp was the fourth largest in Europe (after Royal Dutch, Unilever, and Mannesmann), and the 12th largest in the world.
1959 was also Krupp's deadline to sell his sequestered industries, but he was supported by other Ruhr industrialists, who refused to place bids. Krupp not only took back control of those companies in 1960, he used a shell company in Sweden to buy the Bochumer Verein für Gussstahlfabrikation AG, in his opinion the best remaining steel manufacturer in West Germany. The Common Market allowed these moves, effectively ending the Allied policy of decartelization. Alfried was the richest man in Europe, and among the world's handful of billionaires.
The treatment of Jews during the war had remained an issue. In 1951, Adenauer acknowledged that "unspeakable crimes were perpetrated in the name of the German people, which impose upon them the obligation to make moral and material amends." Negotiations with the Claims Conference resulted in the Reparations Agreement between Israel and West Germany. IG Farben, Siemens, Krupp, AEG, Telefunken, and Rheinmetall separately provided compensation to Jewish slave laborers, but Alfried refused to consider compensation to non-Jewish slave laborers.
In the mid-1960s, a series of blows ended the special status of Krupp. A recession in 1966 exposed the company's overextended credit and turned Alfried's cherished mining and steel companies into loss-makers. In 1967, the West German Federal Tax Court ended sales tax exemptions for private companies, of which Krupp was the largest, and voided the Hitler-era exemption of the company from inheritance tax. Alfried's only son, Arndt von Bohlen und Halbach (1938–1986), never developed an interest in the family business and was willing to renounce his inheritance. Alfried arranged for the firm to be reorganized as a corporation and a foundation for scientific research, with a generous pension for Arndt. Although Arndt was homosexual, like his great-grandfather Friedrich (Fritz) Krupp, he married but was childless. He was an alcoholic and died of cancer in 1986, aged 48, 399 years after Arndt Krupp arrived in Essen.
Alfried had married twice, both ending in divorce, and by family tradition he had excluded his siblings from company management. He died in Essen in 1967, and the company's transformation was completed the next year, capitalized at 500 million DM, with Beitz in charge of the Alfried Krupp von Bohlen und Halbach Foundation and chairman of the corporation's board until 1989. Between 1968 and 1990 the foundation awarded grants totaling around 360 million DM. In 1969, the coal mines were transferred to Ruhrkohle AG. Stahlwerke Südwestfalen was bought for stainless steel, and Polysius AG and Heinrich Koppers for engineering and the construction of industrial plants.
In the early 1980s, the company spun off all its operating activities and was restructured as a holding company. VDM Nickel-Technologie was bought in 1989, for high-performance materials, mechanical engineering and electronics. That year, Gerhard Cromme became chairman and chief executive of Krupp. After its hostile takeover of rival steelmaker Hoesch AG in 1990–1991, the companies were merged in 1992 as "Fried. Krupp AG Hoesch Krupp," under Cromme. After closing one main steel plant and laying off 20,000 employees, the company had a steelmaking capacity of around eight million metric tons and sales of about 28 billion DM (US$18.9 billion). The new Krupp had six divisions: steel, engineering, plant construction, automotive supplies, trade, and services. After two years of heavy losses, a modest net profit of 40 million DM (US$29.2 million) followed in 1994.
In 1997 Krupp attempted a hostile takeover of the larger Thyssen, but the bid was abandoned after resistance from Thyssen management and protests by its workers. Nevertheless, Thyssen agreed to merge the two firms' flat steel operations, and Thyssen Krupp Stahl AG was created in 1997 as a jointly owned subsidiary (60% by Thyssen and 40% by Krupp). About 6,300 workers were laid off. Later that year, Krupp and Thyssen announced a full merger, which was completed in 1999 with the formation of ThyssenKrupp AG. Cromme and Ekkehard Schulz were named co-chief executives of the new company, operating worldwide in three main business areas: steel, capital goods (elevators and industrial equipment), and services (specialty materials, environmental services, mechanical engineering, and scaffolding services).
The unexpected victory of Prussia over France (19 July 1870 – 10 May 1871) demonstrated the superiority of breech-loaded steel cannon over muzzle-loaded brass. Krupp artillery was a significant factor at the battles of Wissembourg and Gravelotte, and was used during the siege of Paris. Krupp's anti-balloon guns were the first anti-aircraft guns. Prussia fortified the major North German ports with batteries that could hit French ships from a distance of 4,000 yards, inhibiting invasion.
Krupp's construction of the Great Venezuela Railway from 1888 to 1894 raised Venezuelan national debt. Venezuela's suspension of debt payments in 1901 led to gunboat diplomacy of the Venezuela Crisis of 1902–1903.
Russia and the Ottoman Empire both bought large quantities of Krupp guns. By 1887, Russia had bought 3,096 Krupp guns, while the Ottomans bought 2,773 Krupp guns. By the start of the Balkan wars the largest export market for Krupp worldwide was Turkey, which purchased 3,943 Krupp guns of various types between 1854 and 1912. The second-largest customer in the Balkans was Romania, which purchased 1,450 guns in the same period, while Bulgaria purchased 517 pieces, Greece 356, Austria-Hungary 298, Montenegro 25, and Serbia just 6 guns.
Krupp produced most of the artillery of the Imperial German Army, including its heavy siege guns: the 1914 420 mm Big Bertha, the 1916 Langer Max, and the seven Paris Guns in 1917 and 1918. In addition, Friedrich Krupp Germaniawerft built German warships and submarines in Kiel. During the war Krupp also modified the design of an existing Langer Max gun, which they built in Koekelare. The gun, called Batterie Pommern, was the largest gun in the world in 1917 and could fire shells of roughly 750 kg from Koekelare to Dunkirk.
Before World War I Krupp had a contract with the British armaments company Vickers and Son Ltd. (formerly Vickers Maxim) to supply Vickers-constructed Maxim machine guns. Conversely, from 1902 Krupp was contracted by Vickers to supply its patented fuses for Vickers bullets. Wounded and dead German soldiers were found with spent Vickers bullets bearing the German inscription "Krupps patent zünder" (Krupp's patent fuses) lying around their bodies.
Krupp received its first order for 135 Panzer I tanks in 1933, and during World War II made tanks, artillery, naval guns, armor plate, munitions and other armaments for the German military. The Friedrich Krupp Germaniawerft shipyard launched the cruiser "Prinz Eugen", as well as many of Germany's U-boats (130 between 1934 and 1945), using preassembled parts supplied by other Krupp factories in a process similar to the construction of the US Liberty ships.
In the 1930s, Krupp developed two 800 mm railway guns, the Schwerer Gustav and the Dora. These guns were the biggest artillery pieces ever fielded by an army during wartime, each weighing about 1,344 tons. They could fire a 7-ton shell over a distance of 37 kilometers. More crucial to the operations of the German military was Krupp's development of the famed 88 mm anti-aircraft cannon, which found use as a notoriously effective anti-tank gun.
In an address to the Hitler Youth, Adolf Hitler stated, "In our eyes, the German boy of the future must be slim and slender, as fast as a greyhound, tough as leather and hard as Krupp steel" ("... der deutsche Junge der Zukunft muß schlank und rank sein, flink wie Windhunde, zäh wie Leder und hart wie Kruppstahl.").
During the war Germany's industry was heavily bombed. The Germans built large-scale night-time decoys such as the Krupp decoy site (German: Kruppsche Nachtscheinanlage), a dummy version of the Krupp steel works in Essen designed to divert Allied airstrikes from the actual production site of the arms factory.
Krupp Industries employed workers conscripted by the Nazi regime from across Europe. These workers were initially paid, but as Nazi fortunes declined they were kept as slave workers. They were abused, beaten, and starved by the thousands, as detailed in the book "The Arms of Krupp". Nazi Germany kept two million French POWs captured in 1940 as forced laborers throughout the war. They added compulsory (and volunteer) workers from occupied nations, especially in metal factories. The shortage of volunteers led the Vichy government of France to deport workers to Germany, where they constituted 15% of the labor force by August 1944. The largest number worked in the giant Krupp steel works in Essen. Low pay, long hours, frequent bombings, and crowded air raid shelters added to the unpleasantness of poor housing, inadequate heating, limited food, and poor medical care, all compounded by harsh Nazi discipline. In an affidavit provided at the Nuremberg Trials following the war, Dr. Wilhelm Jaeger, the senior doctor for the Krupp "slaves," wrote, "Sanitary conditions were atrocious. At Kramerplatz only ten children's toilets were available for 1200 inhabitants... Excretion contaminated the entire floors of these lavatories. The Tatars and Kirghiz suffered most; they collapsed like flies [from] bad housing, the poor quality and insufficient quantity of food, overwork and insufficient rest... Countless fleas, bugs and other vermin tortured the inhabitants of these camps..." The survivors finally returned home in the summer of 1945 after their liberation by the Allied armies.
Krupp Industries was prosecuted after the end of the war for its support of the Nazi regime and its use of forced labor.
Krupp's trucks were once again produced after the war, but so as to minimize the negative wartime connotations of the Krupp name they were sold as "Südwerke" trucks from 1946 until 1954, when the Krupp name was considered rehabilitated.
Krupp Steel Works of Essen, Germany, manufactured the spherical pressure chamber of the dive vessel "Trieste", the first vessel to take humans to the deepest known point in the oceans, accomplished in 1960. This was a heavy-duty replacement for the original pressure sphere (made in Italy by Acciaierie Terni) and was manufactured in three finely machined sections: an equatorial ring and two hemispherical caps. The sphere weighed 13 tonnes in air (eight tonnes in water) with walls that were 12.7 centimetres (5.0 in) thick.
Krupp Steel Works was also contracted in the mid-1960s to construct the Effelsberg 100-m Radio Telescope, which, from 1972 to 2000, was the largest fully steerable radio telescope in the world.
Krupp was the first company to patent a seamless railway tyre reliable and strong enough for rail freight. Krupp received original contracts in the United States and enjoyed a period of technological superiority while also supplying the majority of the rails for the new continental railway system. "Nearly all railroads were using Krupp rails, the New York Central, Illinois Central, Delaware and Hudson, Maine Central, Lake Shore and Michigan Southern, Bangor and Aroostook, Great Northern, Boston and Albany, Florida and East Coast, Texas and Pacific, Southern Pacific, and Mexican National."
In 1893, a mechanical engineer by the name of Rudolf Diesel approached Krupp with a patent for a "new kind of internal combustion engine employing autoignition of the fuel". He also included his text "Theorie und Konstruktion eines rationellen Wärmemotors" ("Theory and Construction of a Rational Heat Motor"). Four years later, the first 3-horsepower diesel engine was produced.
In both English and German the "u" of Krupp is usually treated as short, corresponding logically (in either language's regular orthography) with the doubled consonant that follows. A British documentary on the Krupp family and firm included footage of German speakers of the 1930s who would have had speaking contact with the family; they used a long "u" rather than what would be the regular German spelling pronunciation with a short "u". The documentary's narration used the English equivalent. This would seem to indicate that the short "u" is a spelling pronunciation, but it is nonetheless the most common treatment.
|
https://en.wikipedia.org/wiki?curid=17060
|
Kwame Nkrumah
Kwame Nkrumah (21 September 1909 – 27 April 1972) was a Ghanaian politician and revolutionary. He was the first Prime Minister and President of Ghana, having led the Gold Coast to independence from Britain in 1957. An influential advocate of pan-Africanism, Nkrumah was a founding member of the Organization of African Unity and winner of the Lenin Peace Prize from the Soviet Union in 1962.
After twelve years abroad pursuing higher education, developing his political philosophy and organizing with other diasporic pan-Africanists, Nkrumah returned to the Gold Coast to begin his political career as an advocate of national independence. He formed the Convention People's Party, which achieved rapid success through its unprecedented appeal to the common voter. He became Prime Minister in 1952 and retained the position when Ghana declared independence from Britain in 1957. In 1960, Ghanaians approved a new constitution and elected Nkrumah President.
His administration was primarily socialist as well as nationalist. It funded national industrial and energy projects, developed a strong national education system and promoted a pan-Africanist culture. Under Nkrumah, Ghana played a leading role in African international relations during the decolonization period.
In 1964, a constitutional amendment made Ghana a one-party state, with Nkrumah as president for life of both the nation and its party. Nkrumah was deposed in 1966 by the National Liberation Council, which, under the supervision of international financial institutions, privatized many of the country's state corporations. Nkrumah lived the rest of his life in Guinea, of which he was named honorary co-president.
Kwame Nkrumah was born on 21 September 1909 in Nkroful, Gold Coast (now in Ghana) to a poor and illiterate family. Nkroful was a small village in the Nzema area, in the far southwest of the Gold Coast, close to the frontier with the French colony of the Ivory Coast. His father did not live with the family, but worked in Half Assini where he pursued his goldsmith business until his death. Kwame Nkrumah was raised by his mother and his extended family, who lived together in traditional fashion, with more distant relatives often visiting. He lived a carefree childhood, spent in the village, in the bush, and on the nearby sea. By the naming customs of the Akan people, he was given the name Kwame, the name given to males born on a Saturday. During his years as a student in the United States, though, he was known as Francis Nwia Kofi Nkrumah, Kofi being the name given to males born on Friday. He later changed his name to Kwame Nkrumah in 1945 in the UK, preferring the name "Kwame". According to Ebenezer Obiri Addo in his study of the future president, the name "Nkrumah", a name traditionally given to a ninth child, indicates that Kwame likely held that place in the house of his father, who had several wives.
His father, Opanyin Kofi Nwiana Ngolomah, came from Nkroful and belonged to the Asona clan of the Akan people. Sources indicate that Ngolomah lived at Tarkwa-Nsuaem, where he worked as a goldsmith. He was respected for his wise counsel by those who sought his advice on traditional issues and domestic affairs. He died in 1927.
Kwame was the only child of his mother. Nkrumah's mother sent him to the elementary school run by a Catholic mission at Half Assini, where he proved an adept student. A German Roman Catholic priest by the name of George Fischer was said to have profoundly influenced his elementary school education. Although his mother, whose name was Elizabeth Nyanibah (1876/77–1979), later stated his year of birth was 1912, Nkrumah wrote that he was born on 21 September 1909. Nyanibah, who hailed from Nsuaem and belonged to the Agona family, was a fishmonger and petty trader when she married his father. Eight days after his birth, his father named him Francis Nwia-Kofi after a relative, but his parents later named him Francis Kwame Ngolomah. He progressed through the ten-year elementary programme in eight years. By about 1925 he was a student-teacher in the school, and had been baptized into the Catholic faith. While at the school, he was noticed by the Reverend Alec Garden Fraser, principal of the Government Training College (soon to become Achimota School) in the Gold Coast's capital, Accra. Fraser arranged for Nkrumah to train as a teacher at his school. Here, Columbia-educated deputy headmaster Kwegyir Aggrey exposed him to the ideas of Marcus Garvey and W. E. B. Du Bois. Aggrey, Fraser, and others at Achimota taught that there should be close co-operation between the races in governing the Gold Coast, but Nkrumah, echoing Garvey, soon came to believe that only when the black race governed itself could there be harmony between the races.
After obtaining his teacher's certificate from the Prince of Wales' College at Achimota in 1930, Nkrumah was given a teaching post at the Roman Catholic primary school in Elmina in 1931, and after a year there, was made headmaster of the school at Axim. In Axim, he started to get involved in politics and founded the Nzima Literary Society. In 1933, he was appointed a teacher at the Catholic seminary at Amissano. Although the life there was strict, he liked it, and considered becoming a Jesuit. Nkrumah had heard journalist and future Nigerian president Nnamdi Azikiwe speak while a student at Achimota; the two men met and Azikiwe's influence increased Nkrumah's interest in black nationalism. The young teacher decided to further his education. Azikiwe had attended Lincoln University, a historically black college in Chester County, Pennsylvania, west of Philadelphia, and he advised Nkrumah to enroll there. Nkrumah, who had failed the entrance examination for London University, gained funds for the trip and his education from relatives. He traveled by way of Britain, where he learned, to his outrage, of Italy's invasion of Ethiopia, one of the few independent African nations. He arrived in the United States in October 1935.
According to historian John Henrik Clarke in his article on Nkrumah's American sojourn, "the influence of the ten years that he spent in the United States would have a lingering effect on the rest of his life." Nkrumah had sought entry to Lincoln University some time before he began his studies there. On 1 March 1935, he sent the school a letter noting that his application had been pending for more than a year. When he arrived in New York in October 1935, he traveled to Pennsylvania, where he enrolled despite lacking the funds for the full semester. He soon won a scholarship that provided for his tuition at Lincoln. He remained short of funds through his time in the US. To make ends meet, he worked in menial jobs, including as a dishwasher. On Sundays, he visited black Presbyterian churches in Philadelphia and in New York.
Nkrumah completed a Bachelor of Arts degree in economics and sociology in 1939. Lincoln then appointed him an assistant lecturer in philosophy, and he began to receive invitations to be a guest preacher in Presbyterian churches in Philadelphia and New York. In 1939, Nkrumah enrolled at Lincoln's seminary and at the Ivy League University of Pennsylvania in Philadelphia and in 1942, he was initiated into the Mu chapter of Phi Beta Sigma fraternity at Lincoln University. Nkrumah gained a Bachelor of Theology degree from Lincoln in 1942, the top student in the course. The following year he earned from Penn a Master of Arts degree in philosophy and a Master of Science in education. While at Penn, Nkrumah worked with the linguist William Everett Welmers, providing the spoken material that formed the basis of the first descriptive grammar of his native Fante dialect of the Akan language.
Nkrumah spent his summers in Harlem, a center of black life, thought and culture. He found housing and employment in New York City with difficulty and involved himself in the community, spending many evenings listening to and arguing with street orators.
Nkrumah was an activist student, organizing a group of expatriate African students in Pennsylvania and building it into the African Students Association of America and Canada, becoming its president. Some members felt that the group should aspire for each colony to gain independence on its own; Nkrumah urged a Pan-African strategy. Nkrumah played a major role in the Pan-African conference held in New York in 1944, which urged the United States, at the end of the Second World War, to help ensure Africa became developed and free.
His old teacher Aggrey had died in 1929 in the US, and in 1942 Nkrumah led traditional prayers for Aggrey at the graveside. This led to a break between him and Lincoln, though after he rose to prominence in the Gold Coast, he returned in 1951 to accept an honorary degree. Nevertheless, Nkrumah's doctoral thesis remained uncompleted. He had adopted the forename Francis while at the Amissano seminary; in 1945 he took the name Kwame Nkrumah.
Nkrumah read books about politics and divinity, and tutored students in philosophy. In 1943 Nkrumah met Trinidadian Marxist C. L. R. James, Russian expatriate Raya Dunayevskaya, and Chinese-American Grace Lee Boggs, all of whom were members of an American-based Marxist intellectual cohort. Nkrumah later credited James with teaching him "how an underground movement worked". Federal Bureau of Investigation files on Nkrumah, kept from January to May 1945, identify him as a possible communist. Nkrumah was determined to go to London, wanting to continue his education there now that the Second World War had ended. James, in a 1945 letter introducing Nkrumah to Trinidad-born George Padmore in London, wrote: "This young man is coming to you. He is not very bright, but nevertheless do what you can for him because he's determined to throw Europeans out of Africa."
Nkrumah returned to London in May 1945 and enrolled at the London School of Economics as a PhD candidate in anthropology. He withdrew after one term and the next year enrolled at University College, with the intent to write a philosophy dissertation on "Knowledge and Logical Positivism". His supervisor, A. J. Ayer, declined to rate Nkrumah as a "first-class philosopher", saying, "I liked him and enjoyed talking to him but he did not seem to me to have an analytical mind. He wanted answers too quickly. I think part of the trouble may have been that he wasn't concentrating very hard on his thesis. It was a way of marking time until the opportunity came for him to return to Ghana." Finally, Nkrumah enrolled in, but did not complete, a study in law at Gray's Inn.
Nkrumah spent his time on political organizing. He and Padmore were among the principal organizers, and co-treasurers, of the Fifth Pan-African Congress in Manchester (15–19 October 1945). The Congress elaborated a strategy for supplanting colonialism with African socialism. They agreed to pursue a federal United States of Africa, with interlocking regional organizations, governing through separate states of limited sovereignty. They planned to pursue a new African culture without tribalism, democratic within a socialist or communist system, synthesizing traditional aspects with modern thinking, and for this to be achieved by nonviolent means if possible. Among those who attended the congress was the venerable W. E. B. Du Bois, along with several who later led their nations to independence, including Hastings Banda of Nyasaland (which became Malawi), Jomo Kenyatta of Kenya and Obafemi Awolowo of Nigeria.
The congress sought to establish ongoing African activism in Britain in conjunction with the West African National Secretariat (WANS) to work towards the decolonization of Africa. Nkrumah became the secretary of WANS. In addition to seeking to organize Africans to gain their nations' freedom, Nkrumah sought to aid the many West African seamen who had been stranded, destitute, in London at the end of the war, and established a Colored Workers Association to empower and support them. The U.S. State Department and MI5 watched Nkrumah and the WANS, focusing on their links with Communism. Nkrumah and Padmore established a group called The Circle to lead the way to West African independence and unity; the group aimed to create a Union of African Socialist Republics. A document from The Circle, setting forth that goal, was found on Nkrumah upon his arrest in Accra in 1948, and was used against him by the British authorities.
The 1946 Gold Coast constitution gave Africans a majority on the Legislative Council for the first time. Seen as a major step towards self-government, the new arrangement prompted the colony's first true political party, founded in August 1947, the United Gold Coast Convention (UGCC). The UGCC sought self-government as quickly as possible. Since the leading members were all successful professionals, they needed to pay someone to run the party, and their choice fell on Nkrumah at the suggestion of Ako Adjei. Nkrumah hesitated, realizing the UGCC was controlled by conservative interests, but decided that the new post gave him huge political opportunities, and accepted. After being questioned by British officials about his communist affiliations, Nkrumah boarded the MV "Accra" at Liverpool in November 1947 for the voyage home.
After brief stops in Sierra Leone, Liberia, and the Ivory Coast, he arrived in the Gold Coast, and after a brief stay and reunion with his mother in Tarkwa, took up his post as general secretary at the party's headquarters in Saltpond on 29 December 1947. Nkrumah quickly submitted plans for branches of the UGCC to be established colony-wide, and for strikes if necessary to gain political ends. This activist stance divided the party's governing committee, which was led by J. B. Danquah. Nkrumah embarked on a tour to gain donations for the UGCC and establish new branches.
Although the Gold Coast was politically more advanced than Britain's other West Africa colonies, there was considerable discontent. Postwar inflation had caused public anger at high prices, leading to a boycott of the small stores run by Arabs which began in January 1948. The cocoa bean farmers were upset because trees exhibiting swollen-shoot disease, but still capable of yielding a crop, were being destroyed by the colonial authorities. There were about 63,000 ex-servicemen in the Gold Coast, many of whom had trouble obtaining employment and felt the colonial government was doing nothing to address their grievances. Nkrumah and Danquah addressed a meeting of the Ex-Service men's Union in Accra on 20 February 1948, which was in preparation for a march to present a petition to the governor. When that demonstration took place on 28 February, there was gunfire from the British, prompting the 1948 Accra riots, which spread throughout the country. According to Nkrumah's biographer, David Birmingham, "West Africa's erstwhile "model colony" witnessed a riot and business premises were looted. The African Revolution had begun."
The government assumed that the UGCC was responsible for the unrest, and arrested six leaders, including Nkrumah and Danquah. The Big Six were incarcerated together in Kumasi, increasing the rift between Nkrumah and the others, who blamed him for the riots and their detention. After the British learned that there were plots to storm the prison, the six were separated, with Nkrumah sent to Lawra. They were freed in April 1948. Many students and teachers had demonstrated for their release, and been suspended; Nkrumah, using his own funds, began the Ghana National College. This, among other activities, led UGCC committee members to accuse him of acting in the party's name without authority. Fearing he would harm them more outside the party than within, they agreed to make him honorary treasurer. Nkrumah's popularity, already large, increased further with his founding of the "Accra Evening News", which was not a party organ but was owned by Nkrumah and others. He also founded the Committee on Youth Organization (CYO) as a youth wing for the UGCC. It soon broke away and adopted the motto "Self-Government Now". The CYO united students, ex-servicemen, and market women. Nkrumah recounted in his autobiography that he knew that a break with the UGCC was inevitable, and wanted the masses behind him when the conflict occurred. Nkrumah's calls for "Free-Dom" appealed to the great numbers of underemployed youths who had come from the farms and villages to the towns. "Old hymn tunes were adapted to new songs of liberation which welcomed traveling orators, and especially Nkrumah himself, to mass rallies across the Gold Coast."
Beginning in April 1949, there was considerable pressure on Nkrumah from his supporters to leave the UGCC and form his own party. On 12 June 1949, he announced the formation of the Convention People's Party (CPP), with the word "convention" chosen, according to Nkrumah, "to carry the masses with us". There were attempts to heal the breach with the UGCC; at one July meeting, it was agreed to reinstate Nkrumah as secretary and disband the CPP. But Nkrumah's supporters would not have it, and persuaded him to refuse the offer and remain at their head.
The CPP adopted the red cockerel as its symbol – a familiar icon for local ethnic groups, and a symbol of leadership, alertness, and masculinity. Party symbols and colors (red, white, and green) appeared on clothing, flags, vehicles, and houses. CPP operatives drove red-white-and-green vans across the country, playing music and rallying public support for the party and especially for Nkrumah. These efforts were wildly successful, especially because previous political efforts in the Gold Coast had focused exclusively on the urban intelligentsia.
The British convened a selected commission of middle-class Africans, including all of the Big Six except Nkrumah, to draft a new constitution that would give Ghana more self-government. Nkrumah saw, even before the commission reported, that its recommendations would fall short of full dominion status, and began to organize a Positive Action campaign. Nkrumah demanded a constituent assembly to write a constitution. When the governor, Charles Arden-Clarke, would not commit to this, Nkrumah called for Positive Action, with the unions beginning a general strike to begin on 8 January 1950. The strike quickly led to violence, and Nkrumah and other CPP leaders were arrested on 22 January, and the "Evening News" was banned. Nkrumah was sentenced to a total of three years in prison, and he was incarcerated with common criminals in Accra's Fort James.
Nkrumah's assistant, Komla Agbeli Gbedemah, ran the CPP in his absence; the imprisoned leader was able to influence events through smuggled notes written on toilet paper. The British prepared for an election for the Gold Coast under their new constitution, and Nkrumah insisted that the CPP contest all seats. The situation had become calmer once Nkrumah was arrested, and the CPP and the British worked together to prepare electoral rolls. Nkrumah stood, from prison, for a directly elected Accra seat. Gbedemah worked to set up a nationwide campaign organization, using vans with loudspeakers to blare the party's message. The UGCC failed to set up a nationwide structure, and proved unable to take advantage of the fact that many of its opponents were in prison.
In the February 1951 legislative election, the first general election to be held under universal franchise in colonial Africa, the CPP was elected in a landslide. The CPP secured 34 of the 38 seats contested on a party basis, with Nkrumah elected for his Accra constituency. The UGCC won three seats, and one was taken by an independent. Arden-Clarke saw that the only alternative to Nkrumah's freedom was the end of the constitutional experiment. Nkrumah was released from prison on 12 February, receiving a rapturous reception from his followers. The following day, Arden-Clarke sent for him and asked him to form a government.
Nkrumah recruited Arden-Clarke's secretary, Erica Powell, after she was dismissed and sent home for getting too close to him. Powell returned to Ghana in January 1955 to be Nkrumah's private secretary, a position she held for ten years. Powell was very close to him, and during their time together she largely wrote Nkrumah's autobiography, although this was not admitted until much later.
Nkrumah faced several challenges as he assumed office. He had never served in government, and needed to learn that art. The Gold Coast was composed of four regions, several former colonies amalgamated into one. Nkrumah sought to unite them under one nationality, and bring the country to independence. Key to meeting the challenges was convincing the British that the CPP's programmes were not only practical, but inevitable, and Nkrumah and Arden-Clarke worked closely together. The governor instructed the civil service to give the fledgling government full support, and the three British members of the cabinet took care not to vote against the elected majority.
Prior to the CPP taking office, British officials had prepared a ten-year plan for development. With demands for infrastructure improvements coming in from all over the colony, Nkrumah approved it in general, but halved the time to five years. The colony was in good financial shape, with reserves from years of cocoa profit held in London, and Nkrumah was able to spend freely. Modern trunk roads were built along the coast and within the interior. The rail system was modernized and expanded. Modern water and sewer systems were installed in most towns, where housing schemes were begun. Construction began on a new harbor at Tema, near Accra, and the existing port, at Takoradi, was expanded. An urgent programme to build and expand schools, from primary to teacher and trade training, was begun. From 1951 to 1956, the number of pupils being educated at the colony's schools rose from 200,000 to 500,000. Nevertheless, the number of graduates being produced was insufficient to the burgeoning civil service's needs, and in 1953, Nkrumah announced that though Africans would be given preference, the country would be relying on expatriate European civil servants for several years.
Nkrumah's title was Leader of Government Business in a cabinet chaired by Arden-Clarke. Quick progress was made, and in 1952, the governor withdrew from the cabinet, leaving Nkrumah as his prime minister, with the portfolios that had been reserved for expatriates going to Africans. There were accusations of corruption, and of nepotism, as officials, following African custom, attempted to benefit their extended families and their tribes. The recommendations following the 1948 riots had included elected local government rather than the existing system dominated by the chiefs. This was uncontroversial until it became clear that it would be implemented by the CPP. That party's majority in the Legislative Assembly passed legislation in late 1951 that shifted power from the chiefs to the chairs of the councils, though there was some local rioting as rates were imposed.
Nkrumah's re-titling as prime minister had not given him additional power, and he sought constitutional reform that would lead to independence. In 1952, he consulted with the visiting Colonial Secretary, Oliver Lyttelton, who indicated that Britain would look favorably on further advancement, so long as the chiefs and other stakeholders had the opportunity to express their views. Initially skeptical of Nkrumah's socialist policies, Britain's MI5 had compiled large amounts of intelligence on Nkrumah through several sources, including tapping phones and mail interception under the code name of SWIFT. Beginning in October 1952, Nkrumah sought opinions from councils and from political parties on reform, and consulted widely across the country, including with opposition groups. The result the following year was a White Paper on a new constitution, seen as a final step before independence. Published in June 1953, the constitutional proposals were accepted both by the assembly and by the British, and came into force in April of the following year. The new document provided for an assembly of 104 members, all directly elected, with an all-African cabinet responsible for the internal governing of the colony. In the election on 15 June 1954, the CPP won 71, with the regional Northern People's Party forming the official opposition.
A number of opposition groups formed the National Liberation Movement. Their demands were for a federal, rather than a unitary, government for an independent Gold Coast, and for an upper house of parliament where chiefs and other traditional leaders could act as a counter to the CPP majority in the assembly. They drew considerable support in the Northern Territories and among the chiefs in Ashanti, who petitioned the British queen, Elizabeth II, asking for a Royal Commission into what form of government the Gold Coast should have. This was refused by her government, which in 1955 stated that such a commission should only be used if the people of the Gold Coast proved incapable of deciding their own affairs. Amid political violence, the two sides attempted to reconcile their differences, but the NLM refused to participate in any committee with a CPP majority. The traditional leaders were also incensed by a new bill that had just been enacted, which allowed minor chiefs to appeal to the government in Accra, bypassing traditional chiefly authority. The British were unwilling to leave unresolved the fundamental question as to how an independent Gold Coast should be governed, and in June 1956, the Colonial Secretary, Alan Lennox-Boyd, announced that there would be another general election in the Gold Coast, and if a "reasonable majority" took the CPP's position, Britain would set a date for independence. The results of the July 1956 election were almost identical to those from two years before, and on 3 August the assembly voted for independence under the name Nkrumah had proposed in April, Ghana. In September, the Colonial Office announced independence day would be 6 March 1957.
The opposition was not satisfied with the plan for independence, and demanded that power be devolved to the regions. Discussions took place through late 1956 and into 1957. Although Nkrumah did not compromise on his insistence on a unitary state, the nation was divided into five regions, with power devolved from Accra, and the chiefs having a role in their governments. On 21 February 1957, the British prime minister, Harold Macmillan, announced that Ghana would be a full member of the Commonwealth of Nations with effect from 6 March.
Ghana became independent on 6 March 1957. As the first of Britain's African colonies to gain majority-rule independence, the celebrations in Accra were the focus of world attention; over 100 reporters and photographers covered the events. United States President Dwight D. Eisenhower sent congratulations and his vice president, Richard Nixon, to represent the U.S. at the event. The Soviet delegation urged Nkrumah to visit Moscow as soon as possible. Ralph Bunche, an African American, was there for the United Nations, while the Duchess of Kent represented Queen Elizabeth. Offers of assistance poured in from across the world. Even without them, the country seemed prosperous, with cocoa prices high and the potential of new resource development.
As the fifth of March turned to the sixth, Nkrumah stood before tens of thousands of supporters and proclaimed, "Ghana will be free forever." He spoke at the first session of the Ghana Parliament that Independence Day, telling his new country's citizens that "we have a duty to prove to the world that Africans can conduct their own affairs with efficiency and tolerance and through the exercise of democracy. We must set an example to all Africa."
Nkrumah was hailed as the "Osagyefo" – which means "redeemer" in the Akan language. This independence ceremony included the Duchess of Kent and Governor General Charles Arden-Clarke. With more than 600 reporters in attendance, Ghanaian independence became one of the most internationally reported news events in modern African history.
The flag of Ghana, designed by Theodosia Okoh, inverted Ethiopia's green-yellow-red Lion of Judah flag and replaced the lion with a black star. Red symbolizes bloodshed; green stands for beauty, agriculture, and abundance; yellow represents mineral wealth; and the Black Star represents African freedom. The country's new coat of arms, designed by Amon Kotei, includes eagles, a lion, a St. George's Cross, and a Black Star, with copious gold and gold trim. Philip Gbeho was commissioned to compose the new national anthem, "God Bless Our Homeland Ghana".
As a monument to the new nation, Nkrumah opened Black Star Square near Osu Castle in the coastal district of Osu, Accra. This square would be used for national symbolism and mass patriotic rallies.
Under Nkrumah's leadership, Ghana adopted some social democratic policies and practices. Nkrumah created a welfare system, started various community programs, and established schools.
Nkrumah had only a short honeymoon before there was unrest among his people. The government deployed troops to Togoland to quell unrest following a disputed plebiscite on membership in the new country. A serious bus strike in Accra stemmed from resentments among the Ga people, who believed members of other tribes were getting preferential treatment in government promotion, and this led to riots there in August. Nkrumah's response was to repress local movements by the Avoidance of Discrimination Act (6 December 1957), which banned regional or tribal-based political parties. Another blow against tribalism fell in Ashanti, where Nkrumah and the CPP got most local chiefs who were not party supporters destooled. These repressive actions concerned the opposition parties, who came together to form the United Party under Kofi Abrefa Busia.
In 1958, an opposition MP was arrested on charges of trying to obtain arms abroad for a planned infiltration of the Ghana Army (GA). Nkrumah was convinced there had been an assassination plot against him, and his response was to have the parliament pass the Preventive Detention Act, allowing for incarceration for up to five years without charge or trial, with only Nkrumah empowered to release prisoners early. According to Nkrumah's biographer, David Birmingham, "no single measure did more to bring down Nkrumah's reputation than his adoption of internment without trial for the preservation of security." Nkrumah intended to bypass the British-trained judiciary, which he saw as opposing his plans when they subjected them to constitutional scrutiny.
Another source of irritation was the regional assemblies, which had been organized on an interim basis pending further constitutional discussions. The opposition, which was strong in Ashanti and the north, proposed significant powers for the assemblies; the CPP wanted them to be more or less advisory. In 1959, Nkrumah used his majority in the parliament to push through the Constitutional Amendment Act, which abolished the assemblies and allowed the parliament to amend the constitution with a simple majority.
Queen Elizabeth II remained sovereign over Ghana from 1957 to 1960. William Hare, 5th Earl of Listowel was the Governor-General, and Nkrumah remained Prime Minister. On 6 March 1960, Nkrumah announced plans for a new constitution which would make Ghana a republic, headed by a president with broad executive and legislative powers. The draft included a provision to surrender Ghanaian sovereignty to a Union of African States. On 19, 23, and 27 April 1960 a presidential election and plebiscite on the constitution were held. The constitution was ratified and Nkrumah was elected president over J. B. Danquah, the UP candidate, 1,016,076 to 124,623. Ghana remained a part of the British-led Commonwealth of Nations.
Nkrumah also sought to eliminate "tribalism", a source of loyalties held more deeply than those to the nation-state. Thus, as he wrote in "Africa Must Unite": "We were engaged in a kind of war, a war against poverty and disease, against ignorance, against tribalism and disunity. We needed to secure the conditions which could allow us to pursue our policy of reconstruction and development." To this end, in 1958, his government passed "An Act to prohibit organizations using or engaging in racial or religious propaganda to the detriment of any other racial or religious community, or securing the election of persons on account of their racial or religious affiliations, or for other purposes in connection therewith." Nkrumah attempted to saturate the country in national flags, and declared a widely disobeyed ban on tribal flags.
Kofi Abrefa Busia of the United Party (Ghana) gained prominence as an opposition leader in the debate over this Act, taking a more classically liberal position and criticizing the ban on tribal politics as repressive. Soon after, he left the country. Nkrumah was also a very flamboyant leader. "The New York Times" in 1972 wrote: "During his high‐flying days as the leader of Ghana in the 1950s and early 1960s, Kwame Nkrumah was a flamboyant spellbinder. At home, he created a cult of personality and gloried in the title of 'Osagyefo' (Redeemer). Abroad, he rubbed elbows with the world's leaders as the first man to lead an African colony to independence after World War II."
During his tenure as Prime Minister and then President, Nkrumah succeeded in reducing the political importance of the local chieftaincy (e.g., the Akan chiefs and the Asantehene). These chiefs had maintained authority during colonial rule through collaboration with the British authorities; in fact, they were sometimes favored over the local intelligentsia, who made trouble for the British with organizations like the Aborigines' Rights Protection Society. The Convention People's Party had a strained relationship with the chiefs when it came to power, and this relationship became more hostile as the CPP incited political opposition to the chiefs and criticized the institution as undemocratic. Acts passed in 1958 and 1959 gave the government more power to destool chiefs directly, and placed stool lands and their revenues under government control. These policies alienated the chiefs and led them to look favorably on the overthrow of Nkrumah and his party.
In 1962, three younger members of the CPP were brought up on charges of taking part in a plot to blow up Nkrumah's car in a motorcade. The sole evidence against the alleged plotters was that they rode in cars well behind Nkrumah's car. When the defendants were acquitted, Nkrumah sacked the chief judge of the state security court, then got the CPP-dominated parliament to pass a law allowing a new trial. At this second trial, all three men were convicted and sentenced to death, though these sentences were subsequently commuted to life imprisonment. Shortly afterward, the constitution was amended to give the president the power to summarily remove judges at all levels.
In 1964, Nkrumah proposed a constitutional amendment which would make the CPP the only legal party, with Nkrumah as president for life of both nation and party. The amendment passed with 99.91 percent of the vote, an implausibly high total that led observers to condemn the vote as "obviously rigged". Ghana had effectively been a one-party state since independence. The amendment transformed Nkrumah's presidency into a "de facto" legal dictatorship.
After substantial Africanization of the civil service in 1952–60, the number of expatriates rose again from 1960 to 1965. Many of the new outside workers came not from the United Kingdom but from the Soviet Union, Poland, Czechoslovakia, Yugoslavia, Italy, and the United Nations.
In 1951, the CPP created the Accelerated Development Plan for Education. This plan set up a six-year primary course, to be attended as close to universally as possible, with a range of possibilities to follow. All children were to learn arithmetic, as well as gain "a sound foundation for citizenship with permanent literacy in both English and the vernacular." Primary education became compulsory in 1962. The plan also stated that religious schools would no longer receive funding, and that some existing missionary schools would be taken over by government.
In 1961, Nkrumah laid the first stones in the foundation of the Kwame Nkrumah Ideological Institute created to train Ghanaian civil servants as well as promote Pan-Africanism. In 1964, all students entering college in Ghana were required to attend a two-week "ideological orientation" at the institute. Nkrumah remarked that "trainees should be made to realize the party's ideology is religion, and should be practiced faithfully and fervently."
In 1964, Nkrumah brought forth the Seven Year Development Plan for National Reconstruction and Development, which identified education as a key source of development and called for the expansion of secondary technical schools. Secondary education would also include "in-service training programmes". As Nkrumah told Parliament: "Employers, both public and private, will be expected to make a far greater contribution to labor training through individual factory and farm schools, industry-wide training schemes, day release, payment for attendance at short courses and evening classes." This training would be indirectly subsidized with tax credits and import allocations.
In 1952, the Artisan Trading Scheme, arranged with the Colonial Office and the UK Ministry of Labour, provided for a few experts in every field to travel to Britain for technical education. Kumasi Technical Institute was founded in 1956, and in September 1960 it added the Technical Teacher Training Center. In 1961, the CPP passed the Apprentice Act, which created a general Apprenticeship Board along with committees for each industry.
Nkrumah promoted pan-African culture, calling for international libraries and cooperative efforts to study history and culture. He decried the norms of white supremacy and Euro-centrism imposed by British textbooks and cultural institutions. He wore a traditional northern robe, fugu, but donned Kente cloth, from the south, for ceremonies, to symbolize his identity as a representative of the whole country. He oversaw the opening of the Ghana Museum on 5 March 1957; the Arts Council of Ghana, a wing of the Ministry of Education and Culture, in 1958; the Research Library on African Affairs in June 1961; and the Ghana Film Corporation in 1964. In 1962, Nkrumah opened the Institute of African Studies.
A campaign against nudity in the northern part of the country received special attention from Nkrumah, who reportedly deployed Propaganda Secretary Hannah Cudjoe to respond. Cudjoe also formed the Ghana Women's League, which advanced the Party's agenda on nutrition, raising children, and wearing clothing. The League also led a demonstration against the detonation of French nuclear weapons in the Sahara. Cudjoe was eventually demoted with the consolidation of national women's groups, and marginalized within the Party structure.
Laws passed in 1959 and 1960 designated special positions in parliament to be held by women. Some women were promoted to the CPP Central Committee. Women attended more universities, took up more professions including medicine and law, and went on professional trips to Israel, the Soviet Union, and the Eastern Bloc. Women also entered the army and air force. Most women remained in agriculture and trade; some received assistance from the Co-operative Movement.
Nkrumah's image was widely disseminated, for example, on postage stamps and on money, in the style of monarchs – providing fodder for accusations of a Nkrumahist personality cult.
In 1957, Nkrumah created a well-funded Ghana News Agency to generate domestic news and disseminate it abroad. Within ten years, the GNA had 8,045 km of domestic telegraph line and maintained stations in Lagos, Nairobi, London, and New York City.
Nkrumah consolidated state control over newspapers, establishing the "Ghanaian Times" in 1958 and then in 1962 obtaining its competitor, the "Daily Graphic", from the Mirror Group of London. As he wrote in "Africa Must Unite": "It is part of our revolutionary credo that within the competitive system of capitalism, the press cannot function in accordance with a strict regard for the sacredness of facts, and that the press, therefore, should not remain in private hands." Starting in 1960, he invoked the right of pre-publication censorship of all news.
The Gold Coast Broadcasting Service was established in 1954 and revamped as the Ghana Broadcasting Corporation (GBC). Many television broadcasts featured Nkrumah, commenting for example on the problematic "insolence and laziness of boys and girls". Before the May Day celebrations of 1963, Nkrumah went on television to announce the expansion of Ghana's Young Pioneers, the introduction of a National Pledge, the beginning of a National Flag salute in schools, and the creation of a National Training program to inculcate virtue and the spirit of service among Ghanaian youth. As Nkrumah told Parliament on 15 October 1963: "Ghana's television will not cater for cheap entertainment or commercialism; its paramount objective will be education in its broadest and purest sense."
As per the 1965 Instrument of Incorporation of the Ghana Broadcasting Corporation, the Minister of Information and Broadcasting had "powers of direction" over the media, and the President had the power "at any time, if he is satisfied that it is in the national interest to do so, take over the control and management of the affairs or any part of the functions of the Corporation," hiring, firing, reorganizing, and making other commands at will.
Radio programs, designed in part to reach non-reading members of the public, were a major focus of the Ghana Broadcasting Corporation. In 1961, the GBC formed an external service broadcasting in English, French, Arabic, Swahili, Portuguese, and Hausa. Using four 100-kilowatt transmitters and two 250-kilowatt transmitters, the GBC External Service broadcast 110 hours of Pan-Africanist programming to Africa and Europe each week.
He refused advertising in all media, beginning with the "Evening News" of 1948.
The Gold Coast had been among the wealthiest and most socially advanced areas in Africa, with schools, railways, hospitals, social security, and an advanced economy.
Nkrumah attempted to rapidly industrialize Ghana's economy. He reasoned that if Ghana escaped the colonial trade system by reducing dependence on foreign capital, technology, and material goods, it could become truly independent.
After the Ten Year Development Plan, Nkrumah brought forth the Second Development Plan in 1959. This plan called for the development of manufacturing: 600 factories producing 100 varieties of product.
The Statutory Corporations Act, passed in November 1959 and revised in 1961 and 1964, created the legal framework for public corporations, which included state enterprises. This law placed the country's major corporations under the direction of government ministers. The State Enterprises Secretariat office was located in Flagstaff House and under the direct control of the president.
After visiting the Soviet Union, Eastern Europe, and China in 1961, Nkrumah apparently became still more convinced of the need for state control of the economy.
Nkrumah's time in office began successfully: forestry, fishing, and cattle-breeding expanded, production of cocoa (Ghana's main export) doubled, and modest deposits of bauxite and gold were exploited more effectively. The construction of a dam on the Volta River (launched in 1961) provided water for irrigation and hydro-electric power, which produced enough electricity for the towns and for a new aluminum plant. Government funds were also provided for village projects in which local people built schools and roads, while free health care and education were introduced.
A Seven-Year Plan introduced in 1964 focused on further industrialization, emphasizing domestic substitutes for common imports, modernization of the building materials industry, machine making, electrification, and electronics.
Nkrumah's advocacy of industrial development, with help of longtime friend and Minister of Finance, Komla Agbeli Gbedema, led to the Volta River Project: the construction of a hydroelectric power plant, the Akosombo Dam on the Volta River in eastern Ghana. The Volta River Project was the centerpiece of Nkrumah's economic program. On 20 February 1958, he told the National Assembly: "It is my strong belief that the Volta River Project provides the quickest and most certain method of leading us towards economic independence." Ghana invoked assistance from the United States, Israel, and the World Bank in constructing the dam.
Kaiser Aluminum agreed to build the dam for Nkrumah, but restricted what could be produced using the power generated. Nkrumah borrowed money to build the dam, placing Ghana in debt. To finance the debt, he raised taxes on the cocoa farmers in the south. This accentuated regional differences and jealousy. The dam was completed and opened by Nkrumah amidst world publicity on 22 January 1966.
Nkrumah initiated the Ghana Nuclear Reactor Project in 1961, created the Ghana Atomic Energy Commission in 1963, and in 1964 laid the first stone in the building of an atomic energy facility.
In 1954 the world price of cocoa rose from £150 to £450 per ton. Rather than allowing cocoa farmers to keep the windfall, Nkrumah appropriated the increased revenue via central government levies, then invested the capital into various national development projects. This policy alienated one of the major constituencies that helped him come to power.
Prices continued to fluctuate. In 1960, one ton of cocoa sold for £250 in London. By August 1965 the price had dropped to £91, one-fifth of its value ten years before. The rapid decline forced the government to draw on its reserves and obliged farmers to take a portion of their earnings in bonds.
Nkrumah actively promoted a policy of Pan-Africanism from the beginning of his presidency. This entailed the creation of a series of new international organizations, which held their inaugural meetings in Accra. These were:
Meanwhile, Ghana withdrew from colonial organizations including West Africa Airways Corporation, the West African Currency Board, the West African Cocoa Research Institute, and the West African Court of Appeal.
In the Year of Africa, 1960, Nkrumah negotiated the creation of a Union of African States, a political alliance between Ghana, Guinea, and Mali. Immediately there formed a women's group called Women of the Union of African States.
Nkrumah was a leading figure in the short-lived Casablanca Group of African leaders, which sought to achieve pan-African unity and harmony through deep political, economic, and military integration of the continent in the early 1960s prior to the establishment of the Organization of African Unity (OAU).
Nkrumah was instrumental in the creation of the OAU in Addis Ababa in 1963. He aspired to create a united military force, the African High Command, which Ghana would substantially lead, and committed to this vision in Article 2 of the 1960 Republican Constitution: "In the confident expectation of an early surrender of sovereignty to a union of African states and territories, the people now confer on Parliament the power to provide for the surrender of the whole or any part of the sovereignty of Ghana."
He was also a proponent of the United Nations, but critical of the Great Powers' ability to control it.
Nkrumah opposed entry of African states into the Common Market of the European Economic Community, a status given to many former French colonies and considered by Nigeria. Instead, Nkrumah advocated, in a speech given on 7 April 1960,
an African common market, a common currency area and the development of communications of all kinds to allow the free flow of goods and services. International capital can be attracted to such viable economic areas, but it would not be attracted to a divided and balkanized Africa, with each small region engaged in senseless and suicidal economic competition with its neighbours.
In 1956, Ghana took control of the Royal West African Frontier Force (RWAFF), Gold Coast Regiment, from the British War Office. This force had formerly been deployed to quell internal dissent, and occasionally to fight in wars: most recently, in World War II, against the Japanese in India and Burma. The most senior officers in this force were British, and, although training of African officers began in 1947, only 28 of 212 officers in December 1956 were indigenous Africans. The British officers still received British salaries, which vastly exceeded those allotted to their Ghanaian counterparts. Concerned about a possible military coup, Nkrumah delayed the placement of African officers in top leadership roles.
Nkrumah quickly established the Ghanaian Air Force, acquiring 14 "Beaver" airplanes from Canada and setting up a flight school with British instructors. "Otters", "Caribou," and "Chipmunks" were to follow. Ghana also obtained four Ilyushin-18 aircraft from the Soviet Union. Preparation began in April 1959 with assistance from India and Israel.
The Ghanaian Navy received two inshore minesweepers with 40- and 20-millimetre guns, the "Afadzato" and the "Yogaga", from Britain in December 1959. It subsequently received the "Elmina" and the "Komenda", seaward defense boats with 40-millimetre guns. The Navy's flagship and training ship was the "Achimota", a British yacht constructed during World War II. In 1961, the Navy ordered two 600-ton corvettes, the "Keta" and the "Kromantse", from Vosper & Company and received them in 1967. It also procured four Soviet patrol boats. Naval officers were trained at the Britannia Royal Naval College in Dartmouth. The Ghanaian military budget rose each year, from $9.35 million (US dollars) in 1958 to $47 million in 1965.
The first international deployment of the Ghanaian armed forces was to Congo (Léopoldville/Kinshasa), where Ghanaian troops were airlifted in 1960 at the beginning of the Congo crisis. One week after Belgian troops occupied the lucrative mining province of Katanga, Ghana dispatched more than a thousand of its own troops to join a United Nations force. The use of British officers in this context was politically unacceptable, and this event occasioned a hasty transfer of officer positions to Ghanaians. The Congo war was long and difficult. On 19 January 1961 the Third Infantry Battalion mutinied. On 28 April 1961, 43 men were massacred in a surprise attack by the Congolese army.
Ghana also gave military support to rebels fighting against Ian Smith's white-minority government in Rhodesia (now Zimbabwe), which had unilaterally declared independence from Britain in 1965.
In 1961, Nkrumah went on tour through Eastern Europe, proclaiming solidarity with the Soviet Union and the People's Republic of China. Nkrumah's clothing changed to the Chinese-supplied Mao suit.
In 1962 Kwame Nkrumah was awarded the Lenin Peace Prize by the Soviet Union.
In February 1966, while Nkrumah was on a state visit to North Vietnam and China, his government was overthrown in a violent "coup d'état" led by the national military and police forces, with backing from the civil service. The conspirators, led by Joseph Arthur Ankrah, named themselves the National Liberation Council and ruled as a military government for three years. Nkrumah did not learn of the coup until he arrived in China. After the coup, Nkrumah stayed in Beijing for four days and Premier Zhou Enlai treated him with courtesy.
Nkrumah alluded to possible American complicity in the coup in his 1969 memoir "Dark Days in Ghana", though he may have based this conclusion on falsified documents shown to him by the KGB. In 1978 John Stockwell, former Chief of the Angola Task Force of the U.S. Central Intelligence Agency (CIA), wrote that agents at the CIA's Accra station "maintained intimate contact with the plotters as a coup was hatched". Afterward, "inside CIA headquarters the Accra station was given full, if unofficial credit for the eventual coup. ...None of this was adequately reflected in the agency's written records." Later the same year, Seymour Hersh of "The New York Times", citing "first hand intelligence sources," defended Stockwell's account, claiming that "many CIA operatives in Africa considered the agency's role in the overthrow of Mr. Nkrumah to have been pivotal." These claims have never been verified.
Following the coup, Ghana realigned itself internationally, cutting its close ties to Guinea and the Eastern Bloc, accepting a new friendship with the Western Bloc, and inviting the International Monetary Fund and World Bank to take a lead role in managing the economy. With this reversal, accentuated by the expulsion of immigrants and a new willingness to negotiate with apartheid South Africa, Ghana lost a good deal of its stature in the eyes of African nationalists.
Nkrumah never returned to Ghana, but he continued to push for his vision of African unity. He lived in exile in Conakry, Guinea, as the guest of President Ahmed Sékou Touré, who made him honorary co-president of the country. Nkrumah read, wrote, corresponded, gardened, and entertained guests. Despite retirement from public office, he felt that he was still threatened by Western intelligence agencies. When his cook died mysteriously, he feared that someone would poison him, and began hoarding food in his room. He suspected that foreign agents were going through his mail, and lived in constant fear of abduction and assassination. In failing health, he flew to Bucharest, Romania, for medical treatment in August 1971. He died of prostate cancer in April 1972 at the age of 62 while in Romania.
Nkrumah was buried in a tomb in the village of his birth, Nkroful, Ghana. While the tomb remains in Nkroful, his remains were transferred to a large national memorial tomb and park in Accra, Ghana.
Over his lifetime, Nkrumah was awarded honorary doctorates by many universities including Lincoln University (Pennsylvania), Moscow State University (USSR), Cairo University (Egypt), Jagiellonian University (Poland) and Humboldt University (East Germany).
In 2000, he was voted African Man of the Millennium by listeners to the BBC World Service, being described by the BBC as a "Hero of Independence", and an "International symbol of freedom as the leader of the first black African country to shake off the chains of colonial rule."
According to intelligence documents released by the U.S. Department of State's Office of the Historian, "Nkrumah was doing more to undermine [U.S. government] interests than any other black African."
In September 2009, President John Atta Mills declared 21 September (the 100th anniversary of Kwame Nkrumah's birth) to be Founders' Day, a statutory holiday in Ghana to celebrate the legacy of Kwame Nkrumah.
In April 2019, President Akufo-Addo approved the Public Holidays (Amendment) Act 2019 which changed 21 September from Founders' Day to Kwame Nkrumah Memorial Day.
He generally took a non-aligned Marxist perspective on economics, and believed capitalism had malignant effects that were going to stay with Africa for a long time. Although he was clear on distancing himself from the African socialism of many of his contemporaries, Nkrumah argued that socialism was the system that would best accommodate the changes that capitalism had brought, while still respecting African values. He specifically addresses these issues and his politics in a 1967 essay entitled "African Socialism Revisited":
We know that the traditional African society was founded on principles of egalitarianism. In its actual workings, however, it had various shortcomings. Its humanist impulse, nevertheless, is something that continues to urge us towards our all-African socialist reconstruction. We postulate each man to be an end in himself, not merely a means; and we accept the necessity of guaranteeing each man equal opportunities for his development. The implications of this for sociopolitical practice have to be worked out scientifically, and the necessary social and economic policies pursued with resolution. Any meaningful humanism must begin from egalitarianism and must lead to objectively chosen policies for safeguarding and sustaining egalitarianism. Hence, socialism. Hence, also, scientific socialism.
Nkrumah was best known politically for his strong commitment to and promotion of pan-Africanism. He was inspired by the writings of black intellectuals such as Marcus Garvey, W. E. B. Du Bois, and George Padmore, and by his relationships with them, much of which developed during his years in America as a student. Some would argue that his greatest inspiration was Marcus Garvey, although he also had a meaningful relationship with C. L. R. James. Nkrumah looked to these men to craft a general solution to the ills of Africa. To follow in their intellectual footsteps, Nkrumah had intended to continue his education in London, but found himself involved in direct activism. Then, motivated by advice from Du Bois, Nkrumah decided to focus on creating peace in Africa. He became a passionate advocate of the "African Personality", embodied in the slogan "Africa for the Africans" earlier popularised by Edward Wilmot Blyden, and he viewed political independence as a prerequisite for economic independence. Nkrumah's dedication to pan-Africanism in action attracted these intellectuals to his Ghanaian projects. Many Americans, such as Du Bois and Kwame Ture, moved to Ghana to join him in his efforts. These men are buried there today. His press officer for six years was the Grenadian anticolonialist Sam Morris. Nkrumah's biggest success in this area was his significant influence in the founding of the Organisation of African Unity.
Nkrumah also became a symbol for black liberation in the United States. When in 1958 the Harlem Lawyers Association had an event in Nkrumah's honour, diplomat Ralph Bunche told him:
We salute you, Kwame Nkrumah, not only because you are Prime Minister of Ghana, although this is cause enough. We salute you because you are a true and living representation of our hopes and ideals, of the determination we have to be accepted fully as equal beings, of the pride we have held and nurtured in our African origin, of the freedom of which we know we are capable, of the freedom in which we believe, of the dignity imperative to our stature as men.
In 1961, Nkrumah delivered a speech called "I Speak of Freedom", in which he argued that "Africa could become one of the greatest forces for good in the world". He described Africa as a land of "vast riches", with mineral resources that "range from gold and diamonds to uranium and petroleum". Nkrumah contended that Africa was not thriving because the European powers had been taking its wealth for themselves; if Africa could be independent of European rule, it could truly flourish and contribute positively to the world. In the closing words of the speech, Nkrumah called his people to action: "This is our chance. We must act now. Tomorrow may be too late and the opportunity will have passed, and with it the hope of free Africa's survival". The speech helped rally the nation behind the nationalist movement.
Kwame Nkrumah married Fathia Ritzk, an Egyptian Coptic bank worker and former teacher, on the evening of her arrival in Ghana: New Year's Eve, 1957–1958. Fathia's mother refused to bless their marriage, due to reluctance to see another one of her children leave with a foreign husband.
As a married couple, the Nkrumahs had three children: Gamal (born 1959), Samia (born 1960), and Sekou (born 1963). Gamal is a newspaper journalist, while Samia and Sekou are politicians. Nkrumah also has another son, Francis, a paediatrician (born 1962). There appears to be a further son, Onsy Anwar Nathan Kwame Nkrumah, born to an Egyptian mother, as well as an additional daughter, Elizabeth. Onsy's claim to be Nkrumah's son is disputed by Nkrumah's other children.
Nkrumah is played by Danny Sapani in the Netflix television series "The Crown" (season 2, episode 8 "Dear Mrs Kennedy"). The show's portrayal of the historical significance of the Queen's dance with Nkrumah has been refuted as exaggerated.
|
https://en.wikipedia.org/wiki?curid=17062
|
Guanyin
Guanyin or Guan Yin () is the most commonly used Chinese translation of the bodhisattva known as Avalokiteśvara. Guanyin is the Buddhist bodhisattva associated with compassion. In the East Asian world, Guanyin is the equivalent term for Avalokitesvara Bodhisattva. Guanyin also refers to the bodhisattva as adopted by other Eastern religions. She was first given the appellation of "Goddess of Mercy" or the Mercy Goddess by Jesuit missionaries in China. The Chinese name Guanyin is short for Guanshiyin, which means "[The One Who] Perceives the Sounds of the World."
Some Buddhists believe that when one of their adherents departs from this world, they are placed by Guanyin in the heart of a lotus, and then sent to the western Pure Land of Sukhāvatī. Guanyin is often referred to as the "most widely beloved Buddhist Divinity" with miraculous powers to assist all those who pray to her, as is said in the Lotus Sutra and Karandavyuha Sutra.
Several large temples in East Asia are dedicated to Guanyin including Shitennō-ji, Sensō-ji, Kiyomizu-dera, Sanjūsangen-dō, Shaolin, Dharma Drum Mountain and many others. Guanyin's abode and bodhimanda in India is recorded as being on Mount Potalaka. With the localization of the belief in Guanyin, each area adopted their own Potalaka. In China, Putuoshan is considered the bodhimanda of Guanyin. Naksansa is considered to be the Potalaka of Guanyin in Korea. Japan's Potalaka is located at Fudarakusan-ji. Tibet's Potalaka is the Potala Palace. There are several pilgrimage centers for Guanyin in East Asia. Putuoshan is the main pilgrimage site in China. There is a 33 temple Guanyin pilgrimage in Korea which includes Naksansa. In Japan there are several pilgrimages associated with Guanyin. The oldest one of them is the Saigoku Kannon Pilgrimage, a pilgrimage through 33 temples with Guanyin shrines. Guanyin is beloved by all Buddhist traditions in a non-denominational way and found in most Tibetan temples under the name Chenrezig. Guanyin is also beloved and worshiped in the temples in Nepal. The Hiranya Varna Mahavihar located in Patan is one example. Guanyin is also found in some influential Theravada temples such as Gangaramaya, Kelaniya and Natha Devale nearby Sri Dalada Maligawa in Sri Lanka; Guanyin can also be found in Thailand's Temple of the Emerald Buddha, Wat Huay Pla Kang (where the huge statue of her is often mistakenly called the "Big Buddha") and Burma's Shwedagon Pagoda. Statues of Guanyin are a widely depicted subject of Asian art and found in the Asian art sections of most museums in the world.
"Guānyīn" is a translation from the Sanskrit "Avalokitasvara" or "Avalokiteśvara", referring to the Mahāyāna bodhisattva of the same name. Another later name for this bodhisattva is "Guānzìzài" (). It was initially thought that the Chinese mis-transliterated the word "Avalokiteśvara" as "Avalokitasvara" which explained why Xuanzang translated it as "Guānzìzài" instead of "Guānyīn". However, the original form was indeed "Avalokitasvara" with the ending "svara" ("sound, noise"), which means "sound perceiver", literally "he who looks down upon sound" (i.e., the cries of sentient beings who need his help). This is the exact equivalent of the Chinese translation "Guānyīn". This etymology was furthered in the Chinese by the tendency of some Chinese translators, notably Kumarajiva, to use the variant "Guānshìyīn", literally "he who perceives the world's lamentations"—wherein "lok" was read as simultaneously meaning both "to look" and "world" (Skt. "loka"; Ch. 世, "shì").
Direct translations from the Sanskrit name "Avalokitasvara" include:
The name "Avalokitasvara" was later supplanted by the "Avalokiteśvara" form containing the ending "-īśvara", which does not occur in Sanskrit before the seventh century. The original form "Avalokitasvara" appears in Sanskrit fragments of the fifth century. The original meaning of the name "Avalokitasvara" fits the Buddhist understanding of the role of a bodhisattva. The reinterpretation presenting him as an "īśvara" shows a strong influence of Śaivism, as the term "īśvara" was usually connected to the Hindu notion of Śiva as a creator god and ruler of the world.
While some of those who revered "Avalokiteśvara" upheld the Buddhist rejection of the doctrine of any creator god, Encyclopædia Britannica does cite "Avalokiteśvara" as the creator god of the world. This position is taken in the widely used Karandavyuha Sutra with its well-known mantra Oṃ maṇi padme hūṃ. In addition, the Lotus Sutra contains the earliest mention of "Avalokiteśvara". Chapter 25 refers to him as "Lokeśvara" (Lord God of all beings) and "Lokanātha" (Lord and Protector of all beings) and ascribes extreme attributes of divinity to him.
Direct translations from the Sanskrit name Avalokiteśvara include:
Due to the devotional popularity of Guanyin in Asia, she is known by many names, most of which are simply the localised pronunciations of "Guanyin" or "Guanshiyin":
In these same countries, the variant "Guanzizai" "Lord of Contemplation" and its equivalents are also used, such as in the "Heart Sutra", among other sources.
The "Lotus Sūtra" (Sanskrit "Saddharma Puṇḍarīka Sūtra") is generally accepted to be the earliest literature teaching about the doctrines of Avalokiteśvara. These are found in the twenty fifth chapter of the Lotus Sūtra. This chapter is devoted to Avalokitesvara, describing him as a compassionate bodhisattva who hears the cries of sentient beings, and who works tirelessly to help those who call upon his name. This Chapter also places Avalokiteshwara as Higher than any other being in the Buddhist Cosmology stating that "if one were to pray with true devotion to Avalokiteshwara for one second, they would generate more blessings than if one worshiped with all types of offerings as many Gods as there are in the grains of sand of 62 Ganges Rivers for an entire lifetime". As a result, Avalokiteshwara is often considered the most beloved Buddhist Divinity and is venerated in many important temples including Shitennō-ji, the first official temple of Japan, Sensō-ji, the oldest temple of Tokyo, Kiyomizu-dera and Sanjūsangen-dō which are the two most visited temples in Kyoto.
The "Lotus Sutra" describes Avalokiteśvara as a bodhisattva who can take the form of any type of god including Indra or Brahma; any type of Buddha, any type of king or Chakravartin or even any kind of Heavenly Guardian including Vajrapani and Vaisravana as well as any gender male or female, adult or child, human or non-human being, in order to teach the Dharma to sentient beings. Folk traditions in China and other East Asian countries have added many distinctive characteristics and legends to Guanyin c.q. Avalokiteśvara. Avalokiteśvara was originally depicted as a male bodhisattva, and therefore wears chest-revealing clothing and may even sport a light moustache. Although this depiction still exists in the Far East, Guanyin is more often depicted as a woman in modern times. Additionally, some people believe that Guanyin is androgynous or perhaps without gender.
A total of 33 different manifestations of Avalokitasvara are described, including female manifestations, all to suit the minds of various beings. Chapter 25 consists of both a prose and a verse section. This earliest source often circulates separately as its own sūtra, called the "Avalokitasvara Sūtra" (Ch. ), and is commonly recited or chanted at Buddhist temples in East Asia. The "Lotus Sutra" and its thirty-three manifestations of Guanyin, of which seven are female manifestations, is known to have been very popular in Chinese Buddhism as early as in the Sui and Tang dynasties. Additionally, Tan Chung notes that according to the doctrines of the Mahāyāna sūtras themselves, it does not matter whether Guanyin is male, female, or genderless, as the ultimate reality is in emptiness (Skt. "śūnyatā").
Representations of the bodhisattva in China prior to the Song dynasty (960–1279) were masculine in appearance. Images which later displayed attributes of both genders are believed to be in accordance with the Lotus Sutra, where Avalokitesvara has the supernatural power of assuming any form required to relieve suffering, and also has the power to grant children. Because this bodhisattva is considered the personification of compassion and kindness, a mother goddess and patron of mothers and seamen, the representation in China was further interpreted in an all-female form around the 12th century. On occasion, Guanyin is also depicted holding an infant in order to further stress the relationship between the bodhisattva, maternity, and birth. In the modern period, Guanyin is most often represented as a beautiful, white-robed woman, a depiction which derives from the earlier "Pandaravasini" form.
In some Buddhist temples and monasteries, Guanyin's image is occasionally that of a young man dressed in Northern Song Buddhist robes and seated gracefully. He is usually depicted looking or glancing down, symbolising that Guanyin continues to watch over the world.
In China, Guanyin is generally portrayed as a young woman wearing a flowing white robe, and usually also necklaces symbolic of Indian or Chinese royalty. In her left hand is a jar containing pure water, and the right holds a willow branch. The crown usually depicts the image of Amitābha.
There are also regional variations of Guanyin depictions. In Fujian, for example, a popular depiction of Guanyin is as a maiden dressed in Tang hanfu carrying a fish basket. A popular image of Guanyin as both Guanyin of the South Sea and Guanyin with a Fish Basket can be seen in late 16th-century Chinese encyclopedias and in prints that accompany the novel "Golden Lotus".
In Chinese art, Guanyin is often depicted either alone, standing atop a dragon, accompanied by a white cockatoo and flanked by two children or two warriors. The two children are her acolytes who came to her when she was meditating at Mount Putuo. The girl is called Longnü and the boy Shancai. The two warriors are the historical general Guan Yu from the late Han dynasty and the bodhisattva Skanda, who appears in the Chinese classical novel "Fengshen Yanyi". The Buddhist tradition also displays Guanyin, or other buddhas and bodhisattvas, flanked with the above-mentioned warriors, but as bodhisattvas who protect the temple and the faith itself.
The Tamil god (தமிழ்க் கடவுள்) Murugan is also called Guhan or Kugan in Kandha Sasti chants. He is regarded as a protector of dharma who restores peace in the world. His idols and temples are mostly found on mountains and in hilly terrain (Kurinji regions). He has arupadai veedu (six war homes) in the modern Indian state of Tamil Nadu, sites consisting of temples and Murugan (Guhan/Kugan) idols said to have been made with secret herbs by the siddhar Agathiyar and to radiate cosmic energy. The water or milk collected after flowing over these idols is considered valuable and sacred, and some say it has medicinal properties capable of curing many diseases because of the herbs used in the idols.
In the Karandavyuha Sutra, Avalokiteshwara is called "The One With A Thousand Arms and Thousand eyes" and is described as superior to all gods and buddhas of the Indian pantheon. The Sutra also states that "it is easier to count all the leaves of every tree of every forest and all the grains of sand in the universe than to count the blessings and power of Avalokiteshwara". This version of Avalokiteshwara with a thousand arms depicting the power of all gods also shows various buddhas in the crown depicting the wisdom of all buddhas. It is called Senju Kannon in Japan and 1000 statues of this nature can be found at the popular Sanjūsangen-dō temple of Kyoto.
One Buddhist legend from the "Complete Tale of Guanyin and the Southern Seas" () presents Guanyin as vowing to never rest until she had freed all sentient beings from saṃsāra or cycle of rebirth. Despite strenuous effort, she realised that there were still many unhappy beings yet to be saved. After struggling to comprehend the needs of so many, her head split into eleven pieces. The buddha Amitābha, upon seeing her plight, gave her eleven heads to help her hear the cries of those who are suffering. Upon hearing these cries and comprehending them, Avalokiteśvara attempted to reach out to all those who needed aid, but found that her two arms shattered into pieces. Once more, Amitābha came to her aid and appointed her a thousand arms to let her reach out to those in need.
Many Himalayan versions of the tale include eight arms with which Avalokitesvara skillfully upholds the dharma, each possessing its own particular implement, while more Chinese-specific versions give varying accounts of this number.
In China, it is said that fishermen used to pray to her to ensure safe voyages. The titles "Guanyin of the Southern Ocean" () and "Guanyin (of/on) the Island" stem from this tradition.
Another story from the "Precious Scroll of Fragrant Mountain" () describes an incarnation of Guanyin as the daughter of a cruel king who wanted her to marry a wealthy but uncaring man. The story is usually ascribed to the research of the Buddhist monk Jiang Zhiqi during the 11th century. The story is likely to have its origin in Taoism. When Jiang penned the work, he believed that the Guanyin we know today was actually a princess called Miaoshan (), who had a religious following on Fragrant Mountain. Despite this there are many variants of the story in Chinese mythology.
According to the story, after the king asked his daughter Miaoshan to marry the wealthy man, she told him that she would obey his command, so long as the marriage eased three misfortunes.
The king asked his daughter what were the three misfortunes that the marriage should ease. Miaoshan explained that the first misfortune the marriage should ease was the suffering people endure as they age. The second misfortune it should ease was the suffering people endure when they fall ill. The third misfortune it should ease was the suffering caused by death. If the marriage could not ease any of the above, then she would rather retire to a life of religion forever.
When her father asked who could ease all the above, Miaoshan pointed out that a doctor was able to do all of these. Her father grew angry as he wanted her to marry a person of power and wealth, not a healer. He forced her into hard labour and reduced her food and drink but this did not cause her to yield.
Every day she begged to be able to enter a temple and become a nun instead of marrying. Her father eventually allowed her to work in the temple, but asked the monks to give her the toughest chores in order to discourage her. The monks forced Miaoshan to work all day and all night while others slept in order to finish her work. However, she was such a good person that the animals living around the temple began to help her with her chores. Her father, seeing this, became so frustrated that he attempted to burn down the temple. Miaoshan put out the fire with her bare hands and suffered no burns. Now struck with fear, her father ordered her to be put to death.
In one version of this legend, when Guanyin was executed, a supernatural tiger took her to one of the more hell-like realms of the dead. However, instead of being punished like the other spirits of the dead, Guanyin played music, and flowers blossomed around her. This completely surprised the hell guardian. The story says that Guanyin, by merely being in that Naraka (hell), turned it into a paradise.
A variant of the legend says that Miaoshan allowed herself to die at the hand of the executioner. According to this legend, as the executioner tried to carry out her father's orders, his axe shattered into a thousand pieces. He then tried a sword which likewise shattered. He tried to shoot Miaoshan down with arrows but they all veered off.
Finally in desperation he used his hands. Miaoshan, realising the fate that the executioner would meet at her father's hand should she fail to let herself die, forgave the executioner for attempting to kill her. It is said that she voluntarily took on the massive karmic guilt the executioner generated for killing her, thus leaving him guiltless. It is because of this that she descended into the Hell-like realms. While there, she witnessed first-hand the suffering and horrors that the beings there must endure, and was overwhelmed with grief. Filled with compassion, she released all the good karma she had accumulated through her many lifetimes, thus freeing many suffering souls back into Heaven and Earth. In the process, that Hell-like realm became a paradise. It is said that Yama, the ruler of hell, sent her back to Earth to prevent the utter destruction of his realm, and that upon her return she appeared on Fragrant Mountain.
Another tale says that Miaoshan never died, but was in fact transported by a supernatural tiger, believed to be the Deity of the Place, to Fragrant Mountain.
The legend of Miaoshan usually ends with Miaozhuangyan, Miaoshan's father, falling ill with jaundice. No physician was able to cure him. Then a monk appeared saying that the jaundice could be cured by making a medicine out of the arm and eye of one without anger. The monk further suggested that such a person could be found on Fragrant Mountain. When asked, Miaoshan willingly offered up her eyes and arms. Miaozhuangyan was cured of his illness and went to the Fragrant Mountain to give thanks to the person. When he discovered that his own daughter had made the sacrifice, he begged for forgiveness. The story concludes with Miaoshan being transformed into the Thousand Armed Guanyin, and the king, queen and her two sisters building a temple on the mountain for her. She began her journey to a pure land and was about to cross over into heaven when she heard a cry of suffering from the world below. She turned around and saw the massive suffering endured by the people of the world. Filled with compassion, she returned to Earth, vowing never to leave till such time as all suffering has ended.
After her return to Earth, Guanyin was said to have stayed for a few years on the island of Mount Putuo where she practised meditation and helped the sailors and fishermen who got stranded. Guanyin is frequently worshipped as patron of sailors and fishermen due to this. She is said to frequently becalm the sea when boats are threatened with rocks. After some decades Guanyin returned to Fragrant Mountain to continue her meditation.
Legend has it that Shancai (also called Sudhana in Sanskrit) was a disabled boy from India who was very interested in studying the dharma. When he heard that there was a Buddhist teacher on the rocky island of Putuo he quickly journeyed there to learn. Upon arriving at the island, he managed to find Guanyin despite his severe disability.
Guanyin, after having a discussion with Shancai, decided to test the boy's resolve to fully study the Buddhist teachings. She conjured the illusion of three sword-wielding pirates running up the hill to attack her. Guanyin took off and dashed to the edge of a cliff, the three illusions still chasing her.
Shancai, seeing that his teacher was in danger, hobbled uphill. Guanyin then jumped over the edge of the cliff, and soon after this the three bandits followed. Shancai, still wanting to save his teacher, managed to crawl his way over the cliff edge.
Shancai fell down the cliff but was halted in midair by Guanyin, who now asked him to walk. Shancai found that he could walk normally and that he was no longer crippled. When he looked into a pool of water he also discovered that he now had a very handsome face. From that day forth, Guanyin taught Shancai the entire dharma.
Many years after Shancai became a disciple of Guanyin, a distressing event happened in the South China Sea. The third son of one of the Dragon Kings was caught by a fisherman while swimming in the form of a fish. Being stuck on land, he was unable to transform back into his dragon form. His father, despite being a mighty Dragon King, was unable to do anything while his son was on land. Distressed, the son called out to all of Heaven and Earth.
Hearing this cry, Guanyin quickly sent Shancai to recover the fish and gave him all the money she had. The fish at this point was about to be sold in the market. It was causing quite a stir as it was alive hours after being caught. This drew a much larger crowd than usual at the market. Many people decided that this prodigious situation meant that eating the fish would grant them immortality, and so all present wanted to buy the fish. Soon a bidding war started, and Shancai was easily outbid.
Shancai begged the fish seller to spare the life of the fish. The crowd, now angry at someone so daring, was about to pry him away from the fish when Guanyin projected her voice from far away, saying "A life should definitely belong to one who tries to save it, not one who tries to take it."
The crowd, realising their shameful actions and desire, dispersed. Shancai brought the fish back to Guanyin, who promptly returned it to the sea. There the fish transformed back to a dragon and returned home. Paintings of Guanyin today sometimes portray her holding a fish basket, which represents the aforementioned tale.
But the story does not end there. As a reward for Guanyin saving his son, the Dragon King sent his granddaughter, a girl called Longnü ("dragon girl"), to present Guanyin with the Pearl of Light. The Pearl of Light was a precious jewel owned by the Dragon King that constantly shone. Longnü, overwhelmed by the presence of Guanyin, asked to be her disciple so that she might study the dharma. Guanyin accepted her offer with just one request: that Longnü be the new owner of the Pearl of Light.
In popular iconography, Longnü and Shancai are often seen alongside Guanyin as two children. Longnü is seen either holding a bowl or an ingot, which represents the Pearl of Light, whereas Shancai is seen with palms joined and knees slightly bent to show that he was once crippled.
The "Precious Scroll of the Parrot" () tells the story of a parrot who becomes a disciple of Guanyin. During the Tang Dynasty a small parrot ventures out to search for its mother's favourite food upon which it is captured by a poacher (parrots were quite popular during the Tang Dynasty). When it managed to escape it found out that its mother had already died. The parrot grieved for its mother and provides her with a proper funeral. It then sets out to become a disciple of Guanyin.
In popular iconography, the parrot is coloured white and usually seen hovering to the right side of Guanyin with either a pearl or a prayer bead clasped in its beak. The parrot becomes a symbol of filial piety.
When the people of Quanzhou in Fujian could not raise enough money to build a bridge, Guanyin changed into a beautiful maiden. Getting on a boat, she offered to marry any man who could hit her with a piece of silver thrown from the water's edge. Because so many men tried and missed, she collected a large sum of money in her boat. However, Lü Dongbin, one of the Eight Immortals, helped a merchant hit Guanyin in the hair with silver powder, which floated away in the water. Guanyin bit her finger and a drop of blood fell into the water before she vanished. The blood was swallowed by a washerwoman, who gave birth to Chen Jinggu () or Lady Linshui (); the hair turned into a white snake demon in female form, which preyed on men and killed rival women. The snake and Chen were destined to be mortal enemies, and the merchant was sent to be reborn as Liu Qi ().
Chen was a beautiful and talented girl, but did not wish to marry Liu Qi. Instead, she fled to Mount Lu in Jiangxi, where she learned many Taoist skills. Destiny eventually caused her to marry Liu, and she became pregnant. A drought in Fujian caused many people to ask her to call for rain, a ritual that could not be performed while pregnant. She temporarily aborted her child, which was then killed by the white snake. Chen managed to kill the snake with a sword and to complete the ritual, ending the drought, but she died of either a miscarriage or a hemorrhage.
This story is popular in Zhejiang, Taiwan and especially Fujian.
"Quan Am Thi Kinh" () is a Vietnamese verse recounting the life of a woman, Thi Kinh. She was accused falsely of having intended to kill her husband, and when she disguised herself as a man to lead a religious life in a Buddhist temple, she was again falsely blamed for having committed sexual intercourse with a girl named Thi Mau. She was accused of impregnating her, which was strictly forbidden by Buddhist law. However, thanks to her endurance of all indignities and her spirit of self-sacrifice, she could enter into Nirvana and became Goddess of Mercy (Phat Ba Quan Am) P. Q. Phan's 2014 opera "" is based on this story.
In Japan Guanyin is known primarily as Kannon or, reflecting an older pronunciation, Kwannon. Many forms of Kannon exist, both male and female. Many aspects of Kannon have been developed natively in Japan, supplanting Japanese deities, and some have been developed as late as the 20th century. Some forms include:
Kannon is important in Japanese Pure Land Buddhism and is often depicted and venerated with Amida and Seishi as part of a trio.
In Tibet, Guanyin is revered under the name Chenrezig. Unlike in much of the rest of East Asian Buddhism, where Guanyin is usually portrayed as female or androgynous, Chenrezig is revered in male form. While similarities between the female form of Guanyin and the female buddha or bodhisattva Tara are noted, particularly in the aspect of Tara called Green Tara, Guanyin is rarely identified with Tara.
Through Guanyin's identity as Avalokitesvara she is part of the "padmakula" (Lotus family) of buddhas. The buddha of the Lotus family is Amitābha, whose consort is Pāṇḍaravāsinī. Guanyin's female form is sometimes said to have been inspired by Pāṇḍaravāsinī.
Aside from Sun Wukong, the Monkey King himself, no supernatural figure is more important than Guanyin to "Journey to the West", the famous Chinese tale of a mystical monkey, a pair of exiled gods, a dragon, and a priest bringing sacred scrolls back to China. She delivered the band that let the priest control the Monkey King; she informed those involved of their place in the quest, which allowed most of them to reach enlightenment; when a demon was too powerful or tricky even for the Monkey King, she came to the rescue; and when the Monkey King felt like abandoning the quest, she talked him into returning.
Due to her symbolization of compassion, Guanyin is associated with vegetarianism in East Asia. Buddhist vegetarian restaurants are generally decorated with her image, and she appears in most Buddhist vegetarian pamphlets and magazines.
In East Asian Buddhism, Guanyin is the bodhisattva Avalokiteśvara. Among the Chinese, Avalokiteśvara is almost exclusively called "Guanshiyin Pusa" (). The Chinese translation of many Buddhist sutras has in fact replaced the Chinese transliteration of Avalokitesvara with "Guanshiyin" (). Some Taoist scriptures give her the title of "Guanyin Dashi", sometimes informally "Guanyin Fozu".
In Chinese culture, the popular belief and worship of Guanyin as a goddess by the populace is generally not viewed to be in conflict with the bodhisattva Avalokitesvara's nature. In fact the widespread worship of Guanyin as a "Goddess of Mercy and Compassion" is seen by Buddhists as the boundless salvific nature of bodhisattva Avalokiteśvara at work (in Buddhism, this is referred to as Guanyin's "skillful means", or upaya). The Buddhist canon states that bodhisattvas can assume whatsoever gender and form is needed to liberate beings from ignorance and dukkha. With specific reference to Avalokitesvara, he is stated both in the "Lotus Sutra" (Chapter 25 "Perceiver of the World's Sounds" or "Universal Gateway"), and the "Śūraṅgama Sūtra" to have appeared before as a woman or a goddess to save beings from suffering and ignorance. Some Buddhist schools refer to Guanyin both as male and female interchangeably.
Guanyin is immensely popular among Chinese Buddhists, especially those from devotional schools. She is generally seen as a source of unconditional love and, more importantly, as a saviour. In her bodhisattva vow, Guanyin promises to answer the cries and pleas of all sentient beings and to liberate them from their own karmic woes. Based on the Lotus Sutra and the Shurangama sutra, Avalokitesvara is generally seen as a saviour, both spiritually and physically. The sutras state that through his saving grace even those who have no chance of being enlightened can be enlightened, and those deep in negative karma can still find salvation through his compassion.
In Mahayana Buddhism, gender is no obstacle to attaining enlightenment (or nirvana). The Buddhist concept of non-duality applies here. The "Vimalakirti Sutra"s "Goddess" chapter clearly illustrates an enlightened being who is also a female and deity. In the "Lotus Sutra", a maiden became enlightened in a very short time span. The view that Avalokiteśvara is also the goddess Guanyin does not seem contradictory to Buddhist beliefs. Guanyin has been a buddha called the "Tathāgata of Brightness of Correct Dharma" ().
Given that bodhisattvas are known to incarnate at will as living people according to the sutras, the princess Miaoshan is generally viewed by Buddhists as an incarnation of Guanyin.
In Pure Land Buddhism, Guanyin is described as the "Barque of Salvation". Along with Amitābha and the bodhisattva Mahasthamaprapta, she temporarily liberates beings out of the Wheel of Samsara into the Pure Land, where they will have the chance to accrue the necessary merit so as to be a Buddha in one lifetime. In Chinese Buddhist iconography, Guanyin is often depicted as meditating or sitting alongside one of the Buddhas and usually accompanied by another bodhisattva. The buddha and bodhisattva that are portrayed together with Guanyin usually follow whichever school of Buddhism they represent. In Pure Land Buddhism, for example, Guanyin is frequently depicted on the left of Amitābha, while on the buddha's right is Mahasthamaprapta. Temples that revere the bodhisattva Ksitigarbha usually depict him meditating beside Amitābha and Guanyin.
Even among Chinese Buddhist schools that are non-devotional, Guanyin is still highly venerated. Instead of being seen as an active external force of unconditional love and salvation, the personage of Guanyin is highly revered as the principle of compassion, mercy and love. The act, thought and feeling of compassion and love is viewed as Guanyin. A merciful, compassionate, loving individual is said to be Guanyin. A meditative or contemplative state of being at peace with oneself and others is seen as Guanyin.
In the Mahayana canon, the "Heart Sutra" is ascribed entirely to Guanyin. This is unique, since most Mahayana Sutras are usually ascribed to Gautama Buddha and the teachings, deeds or vows of the bodhisattvas are described by Shakyamuni Buddha. In the "Heart Sutra", Guanyin describes to the arhat Sariputta the nature of reality and the essence of the Buddhist teachings. The famous Buddhist saying "Form is emptiness, emptiness is form" () comes from this sutra.
Guanyin is an extremely popular goddess in Chinese folk religion and is worshiped in many Chinese communities throughout East and Southeast Asia. In Taoism, records claim Guanyin was a Chinese woman who became an immortal, known as Cihang Zhenren in the Shang dynasty or as Xingyin ().
Guanyin is revered in the general Chinese population due to her unconditional love and compassion. She is generally regarded by many as the protector of women and children, perhaps due to iconographic confusion with images of Hariti. By this association, she is also seen as a fertility goddess capable of granting children to couples. An old Chinese superstition involves a woman who, wishing to have a child, offers a shoe to Guanyin. In Chinese culture, a borrowed shoe sometimes is used when a child is expected. After the child is born, the shoe is returned to its owner along with a new pair as a thank you gift.
Guanyin is also seen as the champion of the unfortunate, the sick, the disabled, the poor, and those in trouble. Some coastal and river areas of China regard her as the protector of fishermen, sailors, and generally people who are out at sea; many have thus also come to believe that Mazu, the goddess of the sea, is a manifestation of Guanyin. Due to her association with the legend of the Great Flood, in which she sent down a dog holding rice grains in its tail after the flood, she is worshiped as a goddess of agriculture. In some quarters, especially among business people and traders, she is looked upon as a goddess of fortune. In recent years there have been claims of her being the protector of air travelers.
Guanyin is also a ubiquitous figure found within new religious movements of Asia:
Some Buddhist and Christian observers have commented on the similarity between Guanyin and Mary, mother of Jesus. This can be attributed to the representation of Guanyin holding a child in Chinese art and sculpture; it is believed that Guanyin is the patron saint of mothers and grants parents filial children, and this apparition is popularly known as the "Child-Sending Guanyin" (). One example of this comparison can be found in Tzu Chi, a Taiwanese Buddhist humanitarian organisation, which noticed the similarity between this form of Guanyin and the Virgin Mary. The organisation commissioned a portrait of Guanyin holding a baby, closely resembling the typical Catholic Madonna and Child painting. Copies of this portrait are now displayed prominently in Tzu Chi-affiliated medical centres, especially since Tzu Chi's founder is a Buddhist master and her supporters come from various religious backgrounds.
During the Edo period in Japan, when Christianity was banned and punishable by death, some underground Christian groups venerated Jesus and the Virgin Mary by disguising them as statues of Kannon holding a child; such statues are known as "Maria Kannon". Many had a cross hidden in an inconspicuous location.
It has been suggested that the similarity comes from the conquest and colonization of the Philippines by Spain during the 16th century, when Asian cultures influenced engravings of the Virgin Mary, as evidenced, for example, by an ivory carving of the Virgin Mary by a Chinese carver.
The statue of Guanyin (Gwanse-eum) in Gilsangsa in Seoul, South Korea was sculpted by Catholic sculptor Choi Jong-tae, who modeled the statue after the Virgin Mary in hopes of fostering religious reconciliation in Korean society.
In the 1946 film Three Strangers the titular characters wish for a shared sweepstakes ticket to win before a statue of Guanyin, referred to in the film as Kwan Yin.
For a 2005 "Fo Guang Shan" TV series, Andy Lau performed the song "Kwun Sai Yam", which emphasizes the idea that everyone can be like Guanyin.
In the manga series Hunter x Hunter and its 2011 anime adaptation, the chairman of the hunter's association, Isaac Netero, has the ability to summon a giant statue of Guanyin and use her thousand arms to attack.
In the 2011 Thai movie The Billionaire, also known as Top Secret: Wai Roon Pan Lan (), Guanyin appears to entrepreneur Top (Itthipat Peeradechapan), founder of Tao Kae Noi Seaweed Snacks, providing him inspiration during his period of uncertainty.
Fantasy author Richard Parks has frequently utilized Guanyin as a character in his fiction, most notably in the short stories "A Garden in Hell" (2006) and "The White Bone Fan" (2009), the novella "The Heavenly Fox" (2011), and the novel "All the Gates of Hell" (2013).
The 2013 Buddhist film "Avalokitesvara" tells the origins of Mount Putuo, the famous pilgrimage site for Avalokitesvara Bodhisattva in China. The film was shot on location on Mount Putuo and features several segments in which monks chant the Heart Sutra in Chinese and Sanskrit. Egaku, the protagonist of the film, also chants the Heart Sutra in Japanese.
Kōdai-ji Temple in Kyoto commissioned an android version of Kannon to preach Buddhist scriptures. The android, named Mindar, was unveiled February 23, 2019.
|
https://en.wikipedia.org/wiki?curid=17063
|
Mississippi River
The Mississippi River is the second-longest river and chief river of the second-largest drainage system on the North American continent, second only to the Hudson Bay drainage system. From its traditional source of Lake Itasca in northern Minnesota, it flows generally south for to the Mississippi River Delta in the Gulf of Mexico. With its many tributaries, the Mississippi's watershed drains all or parts of 32 U.S. states and two Canadian provinces between the Rocky and Appalachian mountains. The main stem is entirely within the United States; the total drainage basin is , of which only about one percent is in Canada. The Mississippi ranks as the fourth-longest river and fifteenth-largest river by discharge in the world. The river either borders or passes through the states of Minnesota, Wisconsin, Iowa, Illinois, Missouri, Kentucky, Tennessee, Arkansas, Mississippi, and Louisiana.
Native Americans have lived along the Mississippi River and its tributaries for thousands of years. Most were hunter-gatherers, but some, such as the Mound Builders, formed prolific agricultural societies. The arrival of Europeans in the 16th century changed the native way of life as first explorers, then settlers, ventured into the basin in increasing numbers. The river served first as a barrier, forming borders for New Spain, New France, and the early United States, and then as a vital transportation artery and communications link. In the 19th century, during the height of the ideology of manifest destiny, the Mississippi and several western tributaries, most notably the Missouri, formed pathways for the western expansion of the United States.
Formed from thick layers of the river's silt deposits, the Mississippi embayment is one of the most fertile regions of the United States; steamboats were widely used in the 19th and early 20th centuries to ship agricultural and industrial goods. During the American Civil War, the Mississippi's capture by Union forces marked a turning point towards victory, due to the river's strategic importance to the Confederate war effort. Because of substantial growth of cities and the larger ships and barges that replaced steamboats, the first decades of the 20th century saw the construction of massive engineering works such as levees, locks and dams, often built in combination. A major focus of this work has been to prevent the lower Mississippi from shifting into the channel of the Atchafalaya River and bypassing New Orleans.
Since the 20th century, the Mississippi River has also experienced major pollution and environmental problems – most notably elevated nutrient and chemical levels from agricultural runoff, the primary contributor to the Gulf of Mexico dead zone.
The word Mississippi itself comes from "Messipi", the French rendering of the Anishinaabe (Ojibwe or Algonquin) name for the river, "Misi-ziibi" (Great River).
In the 18th century, the river was the primary western boundary of the young United States, and since the country's expansion westward, the Mississippi River has been widely considered a convenient if approximate dividing line between the Eastern, Southern, and Midwestern United States, and the Western United States. This is exemplified by the Gateway Arch in St. Louis and the phrase "Trans-Mississippi" as used in the name of the Trans-Mississippi Exposition.
It is common to qualify a regionally superlative landmark in relation to it, such as "the highest peak east of the Mississippi" or "the oldest city west of the Mississippi". The FCC also uses it as the dividing line for broadcast call-signs, which begin with W to the east and K to the west, mixing together in media markets along the river.
The Mississippi River can be divided into three sections: the Upper Mississippi, the river from its headwaters to the confluence with the Missouri River; the Middle Mississippi, which is downriver from the Missouri to the Ohio River; and the Lower Mississippi, which flows from the Ohio to the Gulf of Mexico.
The Upper Mississippi runs from its headwaters to its confluence with the Missouri River at St. Louis, Missouri. It is divided into two sections:
The source of the Upper Mississippi branch is traditionally accepted as Lake Itasca, above sea level in Itasca State Park in Clearwater County, Minnesota. The name "Itasca" was chosen to designate the "true head" of the Mississippi River as a combination of the last four letters of the Latin word for truth () and the first two letters of the Latin word for head (). However, the lake is in turn fed by a number of smaller streams.
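As an aside, the coinage can be reproduced mechanically. The minimal sketch below assumes the two Latin words in question are "veritas" ("truth") and "caput" ("head"), which are not spelled out in the text above:

```python
# A minimal sketch of the "Itasca" coinage described above. It assumes the
# Latin words in question are "veritas" (truth) and "caput" (head); neither
# word is spelled out in the article text itself.
truth = "veritas"
head = "caput"

# Last four letters of "veritas" ("itas") + first two letters of "caput" ("ca").
name = truth[-4:] + head[:2]

print(name.capitalize())  # -> "Itasca"
```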
From its origin at Lake Itasca to St. Louis, Missouri, the waterway's flow is moderated by 43 dams. Fourteen of these dams are located above Minneapolis in the headwaters region and serve multiple purposes, including power generation and recreation. The remaining 29 dams, beginning in downtown Minneapolis, all contain locks and were constructed to improve commercial navigation of the upper river. Taken as a whole, these 43 dams significantly shape the geography and influence the ecology of the upper river. Beginning just below Saint Paul, Minnesota, and continuing throughout the upper and lower river, the Mississippi is further controlled by thousands of wing dikes that moderate the river's flow in order to maintain an open navigation channel and prevent the river from eroding its banks.
The head of navigation on the Mississippi is the Coon Rapids Dam in Coon Rapids, Minnesota. Before it was built in 1913, steamboats could occasionally go upstream as far as Saint Cloud, Minnesota, depending on river conditions.
The uppermost lock and dam on the Upper Mississippi River is the Upper St. Anthony Falls Lock and Dam in Minneapolis. Above the dam, the river's elevation is . Below the dam, the river's elevation is . This drop is the largest of all the Mississippi River locks and dams. The origin of the dramatic drop is a waterfall preserved adjacent to the lock under an apron of concrete. Saint Anthony Falls is the only true waterfall on the entire Mississippi River. The water elevation continues to drop steeply as it passes through the gorge carved by the waterfall.
After the completion of the St. Anthony Falls Lock and Dam in 1963, the river's head of navigation moved upstream to the Coon Rapids Dam. However, the locks were closed in 2015 to control the spread of invasive Asian carp, making Minneapolis once again the site of the head of navigation of the river.
The Upper Mississippi has a number of natural and artificial lakes, with its widest point being Lake Winnibigoshish, near Grand Rapids, Minnesota, over across. Lake Onalaska, created by Lock and Dam No. 7, near La Crosse, Wisconsin, is more than wide. Lake Pepin, a natural lake formed behind the delta of the Chippewa River of Wisconsin as it enters the Upper Mississippi, is more than wide.
By the time the Upper Mississippi reaches Saint Paul, Minnesota, below Lock and Dam No. 1, it has dropped more than half its original elevation and is above sea level. From St. Paul to St. Louis, Missouri, the river elevation falls much more slowly, and is controlled and managed as a series of pools created by 26 locks and dams.
The Upper Mississippi River is joined by the Minnesota River at Fort Snelling in the Twin Cities; the St. Croix River near Prescott, Wisconsin; the Cannon River near Red Wing, Minnesota; the Zumbro River at Wabasha, Minnesota; the Black, La Crosse, and Root rivers in La Crosse, Wisconsin; the Wisconsin River at Prairie du Chien, Wisconsin; the Rock River at the Quad Cities; the Iowa River near Wapello, Iowa; the Skunk River south of Burlington, Iowa; and the Des Moines River at Keokuk, Iowa. Other major tributaries of the Upper Mississippi include the Crow River in Minnesota, the Chippewa River in Wisconsin, the Maquoketa River and the Wapsipinicon River in Iowa, and the Illinois River in Illinois.
The Upper Mississippi is largely a multi-thread stream with many bars and islands. From its confluence with the St. Croix River downstream to Dubuque, Iowa, the river is entrenched, with high bedrock bluffs lying on either side. The height of these bluffs decreases to the south of Dubuque, though they are still significant through Savanna, Illinois. This topography contrasts strongly with the Lower Mississippi, which is a meandering river in a broad, flat area, only rarely flowing alongside a bluff (as at Vicksburg, Mississippi).
The Mississippi River is known as the Middle Mississippi from the Upper Mississippi River's confluence with the Missouri River at St. Louis, Missouri, for to its confluence with the Ohio River at Cairo, Illinois.
The Middle Mississippi is relatively free-flowing. From St. Louis to the Ohio River confluence, the Middle Mississippi falls over for an average rate of . At its confluence with the Ohio River, the Middle Mississippi is above sea level. Apart from the Missouri and Meramec rivers of Missouri and the Kaskaskia River of Illinois, no major tributaries enter the Middle Mississippi River.
The Mississippi River is called the Lower Mississippi River from its confluence with the Ohio River to its mouth at the Gulf of Mexico, a distance of about . At the confluence of the Ohio and the Middle Mississippi, the long-term mean discharge of the Ohio at Cairo, Illinois is , while the long-term mean discharge of the Mississippi at Thebes, Illinois (just upriver from Cairo) is . Thus, by volume, the main branch of the Mississippi River system at Cairo can be considered to be the Ohio River (and the Allegheny River further upstream), rather than the Middle Mississippi.
In addition to the Ohio River, the major tributaries of the Lower Mississippi River are the White River, flowing in at the White River National Wildlife Refuge in east central Arkansas; the Arkansas River, joining the Mississippi at Arkansas Post; the Big Black River in Mississippi; and the Yazoo River, meeting the Mississippi at Vicksburg, Mississippi. The widest point of the Mississippi River is in the Lower Mississippi portion where it exceeds in width in several places.
Deliberate water diversion at the Old River Control Structure in Louisiana allows the Atchafalaya River in Louisiana to be a major distributary of the Mississippi River, with 30% of the combined flow of the Mississippi and Red Rivers flowing to the Gulf of Mexico by this route, rather than continuing down the Mississippi's current channel past Baton Rouge and New Orleans on a longer route to the Gulf. Although the Red River is commonly mistaken for an additional tributary, its water flows separately into the Gulf of Mexico through the Atchafalaya River.
The Mississippi River has the world's fourth-largest drainage basin ("watershed" or "catchment"). The basin covers more than , including all or parts of 32 U.S. states and two Canadian provinces. The drainage basin empties into the Gulf of Mexico, part of the Atlantic Ocean. The total catchment of the Mississippi River covers nearly 40% of the landmass of the continental United States. The highest point within the watershed is also the highest point of the Rocky Mountains, Mount Elbert at .
In the United States, the Mississippi River drains the majority of the area between the crest of the Rocky Mountains and the crest of the Appalachian Mountains, except for various regions drained to Hudson Bay by the Red River of the North; to the Atlantic Ocean by the Great Lakes and the Saint Lawrence River; and to the Gulf of Mexico by the Rio Grande, the Alabama and Tombigbee rivers, the Chattahoochee and Apalachicola rivers, and various smaller coastal waterways along the Gulf.
The Mississippi River empties into the Gulf of Mexico about downstream from New Orleans. Measurements of the length of the Mississippi from Lake Itasca to the Gulf of Mexico vary somewhat, but the United States Geological Survey's number is . The retention time from Lake Itasca to the Gulf is typically about 90 days.
The Mississippi River discharges at an annual average rate of between . Although it is one of the largest rivers in the world by volume, this flow is a small fraction of the output of the Amazon, which moves nearly during wet seasons. On average, the Mississippi has only 8% of the flow of the Amazon River.
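As a rough sanity check of that 8% figure, the short sketch below uses assumed round discharge values (roughly 17,000 m³/s for the Mississippi and 209,000 m³/s for the Amazon); these numbers are not stated in the article above, so treat them as illustrative only.

```python
# Rough check of the "about 8%" comparison above. Both discharge values are
# assumed round figures, not numbers taken from this article.
mississippi_m3_per_s = 17_000   # assumed mean discharge of the Mississippi
amazon_m3_per_s = 209_000       # assumed mean discharge of the Amazon

ratio = mississippi_m3_per_s / amazon_m3_per_s
print(f"Mississippi flow is roughly {ratio:.0%} of the Amazon's")  # ~8%
```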
Fresh river water flowing from the Mississippi into the Gulf of Mexico does not mix into the salt water immediately. Images from NASA's MODIS instrument show a large plume of fresh water, which appears as a dark ribbon against the lighter-blue surrounding waters. These images demonstrate that the plume did not mix with the surrounding sea water immediately. Instead, it stayed intact as it flowed through the Gulf of Mexico, into the Straits of Florida, and entered the Gulf Stream. The Mississippi River water rounded the tip of Florida and traveled up the southeast coast to the latitude of Georgia before finally mixing in so thoroughly with the ocean that it could no longer be detected by MODIS.
Before 1900, the Mississippi River transported an estimated of sediment per year from the interior of the United States to coastal Louisiana and the Gulf of Mexico. During the last two decades, this number was only per year. The reduction in sediment transported down the Mississippi River is the result of engineering modification of the Mississippi, Missouri, and Ohio rivers and their tributaries by dams, meander cutoffs, river-training structures, and bank revetments and soil erosion control programs in the areas drained by them.
Over geologic time, the Mississippi River has experienced numerous large and small changes to its main course, as well as additions, deletions, and other changes among its numerous tributaries, and the lower Mississippi River has used different pathways as its main channel to the Gulf of Mexico across the delta region.
Through a natural process known as avulsion or delta switching, the lower Mississippi River has shifted its final course to the Gulf of Mexico every thousand years or so. This occurs because the deposits of silt and sediment begin to clog its channel, raising the river's level and causing it to eventually find a steeper, more direct route to the Gulf of Mexico. The abandoned distributaries diminish in volume and form what are known as bayous. This process has, over the past 5,000 years, caused the coastline of south Louisiana to advance toward the Gulf from . The currently active delta lobe is called the Birdfoot Delta, after its shape, or the Balize Delta, after La Balize, Louisiana, the first French settlement at the mouth of the Mississippi.
The current form of the Mississippi River basin was largely shaped by the Laurentide Ice Sheet of the most recent Ice Age. The southernmost extent of this enormous glaciation extended well into the present-day United States and Mississippi basin. When the ice sheet began to recede, hundreds of feet of rich sediment were deposited, creating the flat and fertile landscape of the Mississippi Valley. During the melt, giant glacial rivers found drainage paths into the Mississippi watershed, creating such features as the Minnesota River, James River, and Milk River valleys. When the ice sheet completely retreated, many of these "temporary" rivers found paths to Hudson Bay or the Arctic Ocean, leaving the Mississippi Basin with many features "over-sized" for the existing rivers to have carved in the same time period.
Ice sheets during the Illinoian Stage, about 300,000 to 132,000 years before present, blocked the Mississippi near Rock Island, Illinois, diverting it to its present channel farther to the west, the current western border of Illinois. The Hennepin Canal roughly follows the ancient channel of the Mississippi downstream from Rock Island to Hennepin, Illinois. South of Hennepin, to Alton, Illinois, the current Illinois River follows the ancient channel used by the Mississippi River before the Illinoian Stage.
Timeline of outflow course changes
In March 1876, the Mississippi suddenly changed course near the settlement of Reverie, Tennessee, leaving a small part of Tipton County, Tennessee, attached to Arkansas and separated from the rest of Tennessee by the new river channel. Since this event was an avulsion, rather than the effect of incremental erosion and deposition, the state line still follows the old channel.
The town of Kaskaskia, Illinois once stood on a peninsula at the confluence of the Mississippi and Kaskaskia (Okaw) Rivers. Founded as a French colonial community, it later became the capital of the Illinois Territory and was the first state capital of Illinois until 1819. Beginning in 1844, successive flooding caused the Mississippi River to slowly encroach east. A major flood in 1881 caused it to overtake the lower of the Kaskaskia River, forming a new Mississippi channel and cutting off the town from the rest of the state. Later flooding destroyed most of the remaining town, including the original State House. Today, the remaining island and community of 14 residents is known as an enclave of Illinois and is accessible only from the Missouri side.
The New Madrid Seismic Zone, along the Mississippi River near New Madrid, Missouri, between Memphis and St. Louis, is related to an aulacogen (failed rift) that formed at the same time as the Gulf of Mexico. This area is still quite active seismically. Four great earthquakes in 1811 and 1812, estimated at approximately 8 on the Richter magnitude scale, had tremendous local effects in the then sparsely settled area, and were felt in many other places in the Midwestern and eastern U.S. These earthquakes created Reelfoot Lake in Tennessee from the altered landscape near the river.
When measured from its traditional source at Lake Itasca, the Mississippi has a length of . When measured from its longest stream source (most distant source from the sea), Brower's Spring in Montana, the source of the Missouri River, it has a length of , making it the fourth longest river in the world after the Nile, Amazon, and Yangtze. When measured by the largest stream source (by water volume), the Ohio River (and by extension the Allegheny River) would be the source, and the Mississippi would begin in Pennsylvania.
At its source at Lake Itasca, the Mississippi River is about deep. The average depth of the Mississippi River between Saint Paul and Saint Louis is between deep, the deepest part being Lake Pepin, which averages deep and has a maximum depth of . Between Saint Louis, Missouri, where the Missouri River joins and Cairo, Illinois, the depth averages . Below Cairo, where the Ohio River joins, the depth averages deep. The deepest part of the river is in New Orleans, where it reaches deep.
The Mississippi River runs through or along 10 states, from Minnesota to Louisiana, and is used to define portions of these states' borders, with Wisconsin, Illinois, Kentucky, Tennessee, and Mississippi along the east side of the river, and Iowa, Missouri, and Arkansas along its west side. Substantial parts of both Minnesota and Louisiana are on either side of the river, although the Mississippi defines part of the boundary of each of these states.
In all of these cases, the middle of the riverbed at the time the borders were established was used as the line to define the borders between adjacent states. In various areas, the river has since shifted, but the state borders have not changed, still following the former bed of the Mississippi River as of their establishment, leaving several small isolated areas of one state across the new river channel, contiguous with the adjacent state. Also, due to a meander in the river, a small part of western Kentucky is contiguous with Tennessee, but isolated from the rest of its state.
Many of the communities along the Mississippi River are listed below; most have either historic significance or cultural lore connecting them to the river. They are sequenced from the source of the river to its end.
The road crossing highest on the Upper Mississippi is a simple steel culvert, through which the river (locally named "Nicolet Creek") flows north from Lake Nicolet under "Wilderness Road" to the West Arm of Lake Itasca, within Itasca State Park.
The earliest bridge across the Mississippi River was built in 1855. It spanned the river in Minneapolis where the current Hennepin Avenue Bridge is located. No highway or railroad tunnels cross under the Mississippi River.
The first railroad bridge across the Mississippi was built in 1856. It spanned the river between the Rock Island Arsenal in Illinois and Davenport, Iowa. Steamboat captains of the day, fearful of competition from the railroads, considered the new bridge a hazard to navigation. Two weeks after the bridge opened, the steamboat "Effie Afton" rammed part of the bridge, setting it on fire. Legal proceedings ensued, with Abraham Lincoln defending the railroad. The lawsuit went to the Supreme Court of the United States, which ruled in favor of the railroad.
Below is a general overview of selected Mississippi bridges that have notable engineering or landmark significance, with their cities or locations. They are sequenced from the Upper Mississippi's source to the Lower Mississippi's mouth.
A clear channel is needed for the barges and other vessels that make the main stem Mississippi one of the great commercial waterways of the world. The task of maintaining a navigation channel is the responsibility of the United States Army Corps of Engineers, which was established in 1802. Early projects, beginning in 1829, removed snags, closed off secondary channels, and excavated rocks and sandbars.
Steamboats entered trade in the 1820s, so the period 1830–1850 became the golden age of steamboats. As there were few roads or rails in the lands of the Louisiana Purchase, river traffic was an ideal solution. Cotton, timber and food came down the river, as did Appalachian coal. The port of New Orleans boomed as it was the trans-shipment point to deep-sea ocean vessels. As a result, the image of the twin-stacked, wedding-cake Mississippi steamer entered into American mythology. Steamers worked the entire route, from the trickles of Montana to the Ohio River, and down the Missouri and Tennessee to the main channel of the Mississippi. Only with the arrival of the railroads in the 1880s did steamboat traffic diminish. Steamboats remained a feature until the 1920s. Most have been superseded by pusher tugs. A few survive as icons, such as the Delta Queen and the River Queen.
A series of 29 locks and dams on the upper Mississippi, most of which were built in the 1930s, is designed primarily to maintain a channel for commercial barge traffic. The lakes formed are also used for recreational boating and fishing. The dams make the river deeper and wider but do not stop it. No flood control is intended. During periods of high flow, the gates, some of which are submersible, are completely opened and the dams simply cease to function. Below St. Louis, the Mississippi is relatively free-flowing, although it is constrained by numerous levees and directed by numerous wing dams.
On the lower Mississippi, from Baton Rouge to the mouth of the Mississippi, the navigation depth is , allowing container ships and cruise ships to dock at the Port of New Orleans and bulk cargo ships shorter than air draft that fit under the Huey P. Long Bridge to traverse the Mississippi to Baton Rouge. There is a feasibility study to dredge this portion of the river to to allow New Panamax ship depths.
In 1829, there were surveys of the two major obstacles on the upper Mississippi, the Des Moines Rapids and the Rock Island Rapids, where the river was shallow and the riverbed was rock. The Des Moines Rapids were about long and just above the mouth of the Des Moines River at Keokuk, Iowa. The Rock Island Rapids were between Rock Island and Moline, Illinois. Both rapids were considered virtually impassable.
In 1848, the Illinois and Michigan Canal was built to connect the Mississippi River to Lake Michigan via the Illinois River near Peru, Illinois. The canal allowed shipping between these important waterways. In 1900, the canal was replaced by the Chicago Sanitary and Ship Canal. The second canal, in addition to shipping, also allowed Chicago to address specific health issues (typhoid fever, cholera and other waterborne diseases) by sending its waste down the Illinois and Mississippi river systems rather than polluting its water source of Lake Michigan.
The Corps of Engineers recommended the excavation of a channel at the Des Moines Rapids, but work did not begin until after Lieutenant Robert E. Lee endorsed the project in 1837. The Corps later also began excavating the Rock Island Rapids. By 1866, it had become evident that excavation was impractical, and it was decided to build a canal around the Des Moines Rapids. The canal opened in 1877, but the Rock Island Rapids remained an obstacle. In 1878, Congress authorized the Corps to establish a channel, to be obtained by building wing dams that direct the river into a narrower channel so that it cuts a deeper one, by closing secondary channels, and by dredging. The channel project was complete when the Moline Lock, which bypassed the Rock Island Rapids, opened in 1907.
To improve navigation between St. Paul, Minnesota, and Prairie du Chien, Wisconsin, the Corps constructed several dams on lakes in the headwaters area, including Lake Winnibigoshish and Lake Pokegama. The dams, which were built beginning in the 1880s, stored spring run-off which was released during low water to help maintain channel depth.
In 1907, Congress authorized a channel project on the Mississippi River, which was not complete when it was abandoned in the late 1920s in favor of the channel project.
In 1913, construction was complete on Lock and Dam No. 19 at Keokuk, Iowa, the first dam below St. Anthony Falls. Built by a private power company (Union Electric Company of St. Louis) to generate electricity (originally for streetcars in St. Louis), the Keokuk dam was one of the largest hydro-electric plants in the world at the time. The dam also eliminated the Des Moines Rapids. Lock and Dam No. 1 was completed in Minneapolis, Minnesota in 1917. Lock and Dam No. 2, near Hastings, Minnesota, was completed in 1930.
Before the Great Mississippi Flood of 1927, the Corps's primary strategy was to close off as many side channels as possible to increase the flow in the main river. It was thought that the river's velocity would scour off bottom sediments, deepening the river and decreasing the possibility of flooding. The 1927 flood proved this to be so wrong that communities threatened by the flood began to create their own levee breaks to relieve the force of the rising river.
The Rivers and Harbors Act of 1930 authorized the channel project, which called for a navigation channel feet deep and wide to accommodate multiple-barge tows. This was achieved by a series of locks and dams, and by dredging. Twenty-three new locks and dams were built on the upper Mississippi in the 1930s in addition to the three already in existence.
Until the 1950s, there was no dam below Lock and Dam 26 at Alton, Illinois. Chain of Rocks Lock (Lock and Dam No. 27), which consists of a low-water dam and a canal, was added in 1953, just below the confluence with the Missouri River, primarily to bypass a series of rock ledges at St. Louis. It also serves to protect the St. Louis city water intakes during times of low water.
U.S. government scientists determined in the 1950s that the Mississippi River was starting to switch to the Atchafalaya River channel because of its much steeper path to the Gulf of Mexico. Eventually the Atchafalaya River would capture the Mississippi River and become its main channel to the Gulf of Mexico, leaving New Orleans on a side channel. As a result, the U.S. Congress authorized a project called the Old River Control Structure, which has prevented the Mississippi River from leaving its current channel that drains into the Gulf via New Orleans.
Because the large scale of high-energy water flow threatened to damage the structure, an auxiliary flow control station was built adjacent to the standing control station. This $300 million project was completed in 1986 by the Corps of Engineers. Beginning in the 1970s, the Corps applied hydrological transport models to analyze flood flow and water quality of the Mississippi. Dam 26 at Alton, Illinois, which had structural problems, was replaced by the Mel Price Lock and Dam in 1990. The original Lock and Dam 26 was demolished.
The Corps now actively creates and maintains spillways and floodways to divert periodic water surges into backwater channels and lakes, as well as route part of the Mississippi's flow into the Atchafalaya Basin and from there to the Gulf of Mexico, bypassing Baton Rouge and New Orleans. The main structures are the Birds Point-New Madrid Floodway in Missouri; the Old River Control Structure and the Morganza Spillway in Louisiana, which direct excess water down the west and east sides (respectively) of the Atchafalaya River; and the Bonnet Carré Spillway, also in Louisiana, which directs floodwaters to Lake Pontchartrain (see diagram). Some experts blame urban sprawl for increases in both the risk and frequency of flooding on the Mississippi River.
Some of the pre-1927 strategy is still in use today, with the Corps actively cutting the necks of horseshoe bends, allowing the water to move faster and reducing flood heights.
Approximately 50,000 years ago, the Central United States was covered by an inland sea, which was drained by the Mississippi and its tributaries into the Gulf of Mexico—creating large floodplains and extending the continent further to the south in the process. The soil in areas such as Louisiana was thereafter found to be very rich.
The area of the Mississippi River basin was first settled by hunting and gathering Native American peoples and is considered one of the few independent centers of plant domestication in human history. Evidence of early cultivation of sunflower, a goosefoot, a marsh elder and an indigenous squash dates to the 4th millennium BC. The lifestyle gradually became more settled after around 1000 BC during what is now called the Woodland period, with increasing evidence of shelter construction, pottery, weaving and other practices.
A network of trade routes referred to as the Hopewell interaction sphere was active along the waterways between about 200 and 500 AD, spreading common cultural practices over the entire area between the Gulf of Mexico and the Great Lakes. A period of more isolated communities followed, and agriculture introduced from Mesoamerica based on the Three Sisters (maize, beans and squash) gradually came to dominate. After around 800 AD there arose an advanced agricultural society today referred to as the Mississippian culture, with evidence of highly stratified complex chiefdoms and large population centers.
The most prominent of these, now called Cahokia, was occupied between about 600 and 1400 AD and at its peak numbered between 8,000 and 40,000 inhabitants, larger than London, England of that time. At the time of first contact with Europeans, Cahokia and many other Mississippian cities had dispersed, and archaeological finds attest to increased social stress.
Modern American Indian nations inhabiting the Mississippi basin include Cheyenne, Sioux, Ojibwe, Potawatomi, Ho-Chunk, Fox, Kickapoo, Tamaroa, Moingwena, Quapaw and Chickasaw.
The word "Mississippi" itself comes from "Messipi", the French rendering of the Anishinaabe (Ojibwe or Algonquin) name for the river, "Misi-ziibi" (Great River). The Ojibwe called Lake Itasca "Omashkoozo-zaaga'igan" (Elk Lake) and the river flowing out of it "Omashkoozo-ziibi" (Elk River). After flowing into Lake Bemidji, the Ojibwe called the river "Bemijigamaag-ziibi" (River from the Traversing Lake). After flowing into Cass Lake, the name of the river changes to "Gaa-miskwaawaakokaag-ziibi" (Red Cedar River) and then out of Lake Winnibigoshish as "Wiinibiigoonzhish-ziibi" (Miserable Wretched Dirty Water River), "Gichi-ziibi" (Big River) after the confluence with the Leech Lake River, then finally as "Misi-ziibi" (Great River) after the confluence with the Crow Wing River. After the expeditions by Giacomo Beltrami and Henry Schoolcraft, the longest stream above the juncture of the Crow Wing River and "Gichi-ziibi" was named "Mississippi River". The Mississippi River Band of Chippewa Indians, known as the "Gichi-ziibiwininiwag", are named after the stretch of the Mississippi River known as the "Gichi-ziibi". The Cheyenne, one of the earliest inhabitants of the upper Mississippi River, called it the "Máʼxe-éʼometaaʼe" (Big Greasy River) in the Cheyenne language. The Arapaho name for the river is "Beesniicíe". The Pawnee name is "Kickaátit".
The Mississippi was spelled during French Louisiana and was also known as the Rivière Saint-Louis.
On May 8, 1541, Spanish explorer Hernando de Soto became the first recorded European to reach the Mississippi River, which he called "Río del Espíritu Santo" ("River of the Holy Spirit"), in the area of what is now Mississippi. In Spanish, the river is called "Río Mississippi".
French explorers Louis Jolliet and Jacques Marquette began exploring the Mississippi in the 17th century. Marquette traveled with a Sioux Indian who named it "Ne Tongo" ("Big river" in Sioux language) in 1673. Marquette proposed calling it the "River of the Immaculate Conception".
When Louis Jolliet explored the Mississippi Valley in the 17th century, natives guided him to a quicker way to return to French Canada via the Illinois River. When he found the Chicago Portage, he remarked that a canal of "only half a league" (less than ) would join the Mississippi and the Great Lakes. In 1848, the continental divide separating the waters of the Great Lakes and the Mississippi Valley was breached by the Illinois and Michigan canal via the Chicago River. This both accelerated the development, and forever changed the ecology of the Mississippi Valley and the Great Lakes.
In 1682, René-Robert Cavelier, Sieur de La Salle and Henri de Tonti claimed the entire Mississippi River Valley for France, calling the river "Colbert River" after Jean-Baptiste Colbert and the region "La Louisiane", for King Louis XIV. On March 2, 1699, Pierre Le Moyne d'Iberville rediscovered the mouth of the Mississippi, following the death of La Salle. The French built the small fort of La Balise there to control passage.
In 1718, about upriver, New Orleans was established along the river crescent by Jean-Baptiste Le Moyne, Sieur de Bienville, with construction patterned after the 1711 resettlement on Mobile Bay of Mobile, the capital of French Louisiana at the time.
Following Britain's victory in the Seven Years' War, the Mississippi became the border between the British and Spanish Empires. The Treaty of Paris (1763) gave Great Britain rights to all land east of the Mississippi and Spain rights to land west of the Mississippi. Spain also ceded Florida to Britain to regain Cuba, which the British had occupied during the war. Britain then divided the territory into East and West Florida.
Article 8 of the Treaty of Paris (1783) states, "The navigation of the river Mississippi, from its source to the ocean, shall forever remain free and open to the subjects of Great Britain and the citizens of the United States". With this treaty, which ended the American Revolutionary War, Britain also ceded West Florida back to Spain to regain the Bahamas, which Spain had occupied during the war. Initial disputes around the ensuing claims of the U.S. and Spain were resolved when Spain was pressured into signing Pinckney's Treaty in 1795. However, in 1800, under duress from Napoleon of France, Spain ceded an undefined portion of West Florida to France in the secret Treaty of San Ildefonso. The United States then secured effective control of the river when it bought the Louisiana Territory from France in the Louisiana Purchase of 1803. This triggered a dispute between Spain and the U.S. on which parts of West Florida Spain had ceded to France in the first place, which would, in turn, decide which parts of West Florida the U.S. had bought from France in the Louisiana Purchase, versus which were unceded Spanish property. Following ongoing U.S. colonization creating facts on the ground, and U.S. military actions, Spain ceded both West Florida and East Florida in their entirety to the United States in the Adams–Onís Treaty of 1819.
The last serious European challenge to U.S. control of the river came at the conclusion of the War of 1812, when British forces mounted an attack on New Orleans – the attack was repulsed by an American army under the command of General Andrew Jackson.
In the Treaty of 1818, the U.S. and Great Britain agreed to fix the border running from the Lake of the Woods to the Rocky Mountains along the 49th parallel north. In effect, the U.S. ceded the northwestern extremity of the Mississippi basin to the British in exchange for the southern portion of the Red River basin.
So many settlers traveled westward through the Mississippi river basin, as well as settled in it, that Zadok Cramer wrote a guide book called "The Navigator", detailing the features and dangers and navigable waterways of the area. It was so popular that he updated and expanded it through 12 editions over a period of 25 years.
The colonization of the area was barely slowed by the three earthquakes in 1811 and 1812, estimated at approximately 8 on the Richter magnitude scale, that were centered near New Madrid, Missouri.
Mark Twain's book, "Life on the Mississippi", covered the steamboat commerce which took place from 1830 to 1870 on the river before more modern ships replaced the steamer. The book was published first in serial form in "Harper's Weekly" in seven parts in 1875. The full version, including a passage from the then unfinished "Adventures of Huckleberry Finn" and works from other authors, was published by James R. Osgood & Company in 1885.
The first steamboat to travel the full length of the Lower Mississippi from the Ohio River to New Orleans was the "New Orleans" in December 1811. Its maiden voyage occurred during the series of New Madrid earthquakes in 1811–12. The Upper Mississippi was treacherous and unpredictable, and to make traveling worse, the area was not properly mapped out or surveyed. Until the 1840s, steamboats made only two trips a year to the Twin Cities landings, which suggests the route was not very profitable.
Steamboat transport remained a viable industry, both in terms of passengers and freight until the end of the first decade of the 20th century. Among the several Mississippi River system steamboat companies was the noted Anchor Line, which, from 1859 to 1898, operated a luxurious fleet of steamers between St. Louis and New Orleans.
Italian explorer Giacomo Beltrami wrote about his journey on the "Virginia", which was the first steamboat to make it to Fort St. Anthony in Minnesota. He referred to his voyage as a promenade that was once a journey on the Mississippi. The steamboat era changed the economic and political life of the Mississippi, as well as the nature of travel itself, and the river developed a flourishing tourist trade.
Control of the river was a strategic objective of both sides in the American Civil War. In 1862, Union forces coming down the river successfully cleared Confederate defenses at Island Number 10 and Memphis, Tennessee, while naval forces coming upriver from the Gulf of Mexico captured New Orleans, Louisiana. The remaining major Confederate stronghold was on the heights overlooking the river at Vicksburg, Mississippi; the Union's Vicksburg Campaign (December 1862 to July 1863) and the fall of Port Hudson completed control of the lower Mississippi River. The Union victory ending the Siege of Vicksburg on July 4, 1863, was pivotal to the Union's final victory in the Civil War.
The "Big Freeze" of 1918–19 blocked river traffic north of Memphis, Tennessee, preventing transportation of coal from southern Illinois. This resulted in widespread shortages, high prices, and rationing of coal in January and February.
In the spring of 1927, the river broke out of its banks in 145 places during the Great Mississippi Flood of 1927 and inundated the surrounding land to a depth of up to .
In 1962 and 1963, industrial accidents spilled of soybean oil into the Mississippi and Minnesota rivers. The oil covered the Mississippi River from St. Paul to Lake Pepin, creating an ecological disaster and a demand to control water pollution.
On October 20, 1976, the automobile ferry, "MV George Prince", was struck by a ship traveling upstream as the ferry attempted to cross from Destrehan, Louisiana, to Luling, Louisiana. Seventy-eight passengers and crew died; only eighteen survived the accident.
In 1988, the water level of the Mississippi fell to below zero on the Memphis gauge. The remains of wooden-hulled water craft were exposed in an area of on the bottom of the Mississippi River at West Memphis, Arkansas. They dated to the late 19th to early 20th centuries. The State of Arkansas, the Arkansas Archeological Survey, and the Arkansas Archeological Society responded with a two-month data recovery effort. The fieldwork received national media attention as good news in the middle of a drought.
The Great Flood of 1993 was another significant flood, primarily affecting the Mississippi above its confluence with the Ohio River at Cairo, Illinois.
Two portions of the Mississippi were designated as American Heritage Rivers in 1997: the lower portion around Louisiana and Tennessee, and the upper portion around Iowa, Illinois, Minnesota, Missouri and Wisconsin. The Nature Conservancy's project called "America's Rivershed Initiative" announced a 'report card' assessment of the entire basin in October 2015 and gave the grade of D+. The assessment noted the aging navigation and flood control infrastructure along with multiple environmental problems.
In 2002, Slovenian long-distance swimmer Martin Strel swam the entire length of the river, from Minnesota to Louisiana, over the course of 68 days. In 2005, the Source to Sea Expedition paddled the Mississippi and Atchafalaya Rivers to benefit the Audubon Society's Upper Mississippi River Campaign.
Geologists believe that the lower Mississippi could take a new course to the Gulf. Either of two new routes—through the Atchafalaya Basin or through Lake Pontchartrain—might become the Mississippi's main channel if flood-control structures are overtopped or heavily damaged during a severe flood.
Failure of the Old River Control Structure, the Morganza Spillway, or nearby levees would likely re-route the main channel of the Mississippi through Louisiana's Atchafalaya Basin and down the Atchafalaya River to reach the Gulf of Mexico south of Morgan City in southern Louisiana. This route provides a more direct path to the Gulf of Mexico than the present Mississippi River channel through Baton Rouge and New Orleans. While the risk of such a diversion is present during any major flood event, such a change has so far been prevented by active human intervention involving the construction, maintenance, and operation of various levees, spillways, and other control structures by the U.S. Army Corps of Engineers.
The Old River Control Structure, between the present Mississippi River channel and the Atchafalaya Basin, sits at the normal water elevation and is ordinarily used to divert 30% of the Mississippi's flow to the Atchafalaya River. There is a steep drop here away from the Mississippi's main channel into the Atchafalaya Basin. If this facility were to fail during a major flood, there is a strong concern the water would scour and erode the river bottom enough to capture the Mississippi's main channel. The structure was nearly lost during the 1973 flood, but repairs and improvements were made after engineers studied the forces at play. In particular, the Corps of Engineers made many improvements and constructed additional facilities for routing water through the vicinity. These additional facilities give the Corps much more flexibility and potential flow capacity than they had in 1973, which further reduces the risk of a catastrophic failure in this area during other major floods, such as that of 2011.
Because the Morganza Spillway is slightly higher and well back from the river, it is normally dry on both sides. Even if it failed at the crest during a severe flood, the floodwaters would have to erode to normal water levels before the Mississippi could permanently jump channel at this location. During the 2011 floods, the Corps of Engineers opened the Morganza Spillway to 1/4 of its capacity to allow of water to flood the Morganza and Atchafalaya floodways and continue directly to the Gulf of Mexico, bypassing Baton Rouge and New Orleans. In addition to reducing the Mississippi River crest downstream, this diversion reduced the chances of a channel change by reducing stress on the other elements of the control system.
Some geologists have noted that the possibility for course change into the Atchafalaya also exists in the area immediately north of the Old River Control Structure. Army Corps of Engineers geologist Fred Smith once stated, "The Mississippi wants to go west. 1973 was a forty-year flood. The big one lies out there somewhere—when the structures can't release all the floodwaters and the levee is going to have to give way. That is when the river's going to jump its banks and try to break through."
Another possible course change for the Mississippi River is a diversion into Lake Pontchartrain near New Orleans. This route is controlled by the Bonnet Carré Spillway, built to reduce flooding in New Orleans. This spillway and an imperfect natural levee about 4–6 meters (12 to 20 feet) high are all that prevents the Mississippi from taking a new, shorter course through Lake Pontchartrain to the Gulf of Mexico. Diversion of the Mississippi's main channel through Lake Pontchartrain would have consequences similar to an Atchafalaya diversion, but to a lesser extent, since the present river channel would remain in use past Baton Rouge and into the New Orleans area.
The sport of water skiing was invented on the river in a wide region between Minnesota and Wisconsin known as Lake Pepin. Ralph Samuelson of Lake City, Minnesota, created and refined his skiing technique in late June and early July 1922. He later performed the first water ski jump in 1925 and was pulled along at by a Curtiss flying boat later that year.
There are seven National Park Service sites along the Mississippi River. The Mississippi National River and Recreation Area is the National Park Service site dedicated to protecting and interpreting the Mississippi River itself. The other six National Park Service sites along the river are (listed from north to south):
The Mississippi basin is home to a highly diverse aquatic fauna and has been called the "mother fauna" of North American fresh water.
About 375 fish species are known from the Mississippi basin, far exceeding any other Northern Hemisphere river basin lying exclusively within temperate/subtropical regions, except the Yangtze. Within the Mississippi basin, streams that have their source in the Appalachian and Ozark highlands are especially species-rich. Among the fish species in the basin are numerous endemics, as well as relicts such as paddlefish, sturgeon, gar and bowfin.
Because of its size and high species diversity, the Mississippi basin is often divided into subregions. The Upper Mississippi River alone is home to about 120 fish species, including walleye, sauger, largemouth bass, smallmouth bass, white bass, northern pike, bluegill, crappie, channel catfish, flathead catfish, common shiner, freshwater drum and shovelnose sturgeon.
In addition to fish, several species of turtles (such as snapping, musk, mud, map, cooter, painted and softshell turtles), American alligator, aquatic amphibians (such as hellbender, mudpuppy, three-toed amphiuma and lesser siren), and cambarid crayfish (such as the red swamp crayfish) are native to the Mississippi basin.
Numerous introduced species are found in the Mississippi and some of these are invasive. Among the introductions are fish such as Asian carp, including the silver carp that have become infamous for outcompeting native fish and their potentially dangerous jumping behavior. They have spread throughout much of the basin, even approaching (but not yet invading) the Great Lakes. The Minnesota Department of Natural Resources has designated much of the Mississippi River in the state as infested waters by the exotic species zebra mussels and Eurasian watermilfoil.
|
https://en.wikipedia.org/wiki?curid=19579
|
Men in black
In popular culture and UFO conspiracy theories, men in black (MIB) are men dressed in black suits, purportedly quasi-government agents, who harass, threaten or assassinate UFO witnesses to keep them quiet about what they have seen. It is sometimes implied that they may be aliens themselves. The term is also frequently used to describe mysterious men working for unknown organizations, as well as various branches of government allegedly designed to protect secrets or perform other strange activities. The term is generic, used for any unusual, threatening or strangely behaved individual whose appearance on the scene can be linked in some fashion with a UFO sighting. Several alleged encounters with the men in black have been reported by UFO researchers and enthusiasts.
Stories about allegedly real-life men in black inspired the semi-comic science fiction "Men in Black" franchise of comic books, films and other media.
Folklorist James R. Lewis compares accounts of men in black with tales of people encountering Lucifer and speculates that they can be considered a kind of "psychological drama."
Men in black figure prominently in ufology and UFO folklore. In the 1950s and 1960s, UFOlogists adopted a conspiratorial mindset and began to fear they would be subject to organized intimidation in retaliation for discovering "the truth of the UFOs."
In 1947, Harold Dahl claimed to have been warned not to talk about his alleged UFO sighting on Maury Island by a man in a dark suit. In the mid-1950s, the ufologist Albert K. Bender claimed he was visited by men in dark suits who threatened and warned him not to continue investigating UFOs. Bender maintained that the men in black were secret government agents who had been given the task of suppressing evidence of UFOs. The ufologist John Keel claimed to have had encounters with men in black and referred to them as "demonic supernaturals" with "dark skin and/or 'exotic' facial features." According to the ufologist Jerome Clark, reports of men in black represent "experiences" that "don't seem to have occurred in the world of consensus reality."
Historian Aaron Gulyas wrote, "during the 1970s, 1980s, and 1990s, UFO conspiracy theorists would incorporate the Men in Black into their increasingly complex and paranoid visions."
In his article, "Gray Barker: My Friend, the Myth-Maker," John C. Sherwood claims that, in the late 1960s, at the age of 18, he cooperated when Gray Barker urged him to develop a hoax—which Barker subsequently published—about what Barker called "blackmen," three mysterious UFO inhabitants who silenced Sherwood's pseudonymous identity, "Dr. Richard H. Pratt."
The G-Man from the Half-Life video game franchise is said to have characteristics of the men in black, including an awkward grasp of human speech and mannerisms.
The 1997 science fiction movie "Men in Black", starring Will Smith and Tommy Lee Jones, was inspired by the men in black conspiracy theories.
|
https://en.wikipedia.org/wiki?curid=19581
|
Monomer
A monomer ("mono-", "one" + "-mer", "part") is a molecule that can react together with other monomer molecules to form a larger polymer chain or three-dimensional network in a process called polymerization.
Monomers can be classified in many ways. They can be subdivided into two broad classes, depending on the kind of the polymer that they form. Monomers that participate in condensation polymerization have a different stoichiometry than monomers that participate in addition polymerization:
Other classifications include:
The polymerization of one kind of monomer gives a homopolymer. Many polymers are copolymers, meaning that they are derived from two different monomers. In the case of condensation polymerizations, the ratio of comonomers is usually 1:1. For example, the formation of many nylons requires equal amounts of a dicarboxylic acid and diamine. In the case of addition polymerizations, the comonomer content is often only a few percent. For example, small amounts of 1-octene monomer are copolymerized with ethylene to give specialized polyethylene.
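To make the 1:1 comonomer stoichiometry concrete, the short sketch below computes equimolar amounts for one commonly cited nylon pairing; the specific comonomers (adipic acid and hexamethylenediamine, i.e. nylon-6,6) are an assumed example for illustration, not details given in the text above.

```python
# Hedged sketch of the 1:1 comonomer stoichiometry mentioned above, assuming
# nylon-6,6 made from adipic acid (a dicarboxylic acid) and hexamethylenediamine
# (a diamine). Molar masses are standard textbook values.
ADIPIC_ACID_MOLAR_MASS = 146.14           # g/mol, C6H10O4
HEXAMETHYLENEDIAMINE_MOLAR_MASS = 116.21  # g/mol, C6H16N2

def equimolar_diamine_mass(diacid_mass_g: float) -> float:
    """Mass of diamine that matches a given mass of diacid mole-for-mole."""
    moles_of_diacid = diacid_mass_g / ADIPIC_ACID_MOLAR_MASS
    return moles_of_diacid * HEXAMETHYLENEDIAMINE_MOLAR_MASS

# About 79.5 g of hexamethylenediamine pairs 1:1 with 100 g of adipic acid.
print(round(equimolar_diamine_mass(100.0), 1))
```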
The term "monomeric protein" may also be used to describe one of the proteins making up a multiprotein complex.
Some of the main biopolymers are listed below:
For "proteins", the monomers are amino acids. Polymerization occurs at ribosomes. Usually about 20 types of amino acid monomers are used to produce proteins. Hence proteins are not homopolymers.
For polynucleic acids (DNA/RNA), the monomers are nucleotides, each of which is made of a pentose sugar, a nitrogenous base and a phosphate group. Nucleotide monomers are found in the cell nucleus. Four types of nucleotide monomers are precursors to DNA and four different nucleotide monomers are precursors to RNA.
For carbohydrates, the monomers are monosaccharides. The most abundant natural monomer is glucose, which is linked by glycosidic bonds into the polymers cellulose, starch, and glycogen.
Isoprene is a natural monomer that polymerizes to form natural rubber, most often "cis-"1,4-polyisoprene, but also "trans-"1,4-polymer. Synthetic rubbers are often based on butadiene, which is structurally related to isoprene.
|
https://en.wikipedia.org/wiki?curid=19583
|
Mitochondrion
The mitochondrion (plural mitochondria) is a semi-autonomous, double-membrane-bound organelle found in most eukaryotic organisms. Some cells in some multicellular organisms may, however, lack mitochondria (for example, mature mammalian red blood cells). A number of unicellular organisms, such as microsporidia, parabasalids, and diplomonads, have also reduced or transformed their mitochondria into other structures. To date, only one eukaryote, "Monocercomonoides", is known to have completely lost its mitochondria, and one multicellular organism, "Henneguya salminicola", is known to have retained mitochondrion-related organelles in association with a complete loss of their mitochondrial genome.
The word mitochondrion comes from the Greek "mitos", "thread", and "chondrion", "granule" or "grain-like". Mitochondria generate most of the cell's supply of adenosine triphosphate (ATP), used as a source of chemical energy. A mitochondrion is thus termed the "powerhouse" of the cell.
Mitochondria are commonly between 0.75 and 3 μm² in area but vary considerably in size and structure. Unless specifically stained, they are not visible. In addition to supplying cellular energy, mitochondria are involved in other tasks, such as signaling, cellular differentiation, and cell death, as well as maintaining control of the cell cycle and cell growth. Mitochondrial biogenesis is in turn temporally coordinated with these cellular processes. Mitochondria have been implicated in several human diseases, including mitochondrial disorders, cardiac dysfunction, heart failure and autism.
The number of mitochondria in a cell can vary widely by organism, tissue, and cell type. For instance, red blood cells have no mitochondria, whereas liver cells can have more than 2000. The organelle is composed of compartments that carry out specialized functions. These compartments or regions include the outer membrane, the intermembrane space, the inner membrane, and the cristae and matrix.
Although most of a cell's DNA is contained in the cell nucleus, the mitochondrion has its own independent genome ("mitogenome") that shows substantial similarity to bacterial genomes. Mitochondrial proteins (proteins transcribed from mitochondrial DNA) vary depending on the tissue and the species. In humans, 615 distinct types of proteins have been identified from cardiac mitochondria, whereas in rats, 940 proteins have been reported. The mitochondrial proteome is thought to be dynamically regulated.
The first observations of intracellular structures that probably represented mitochondria were published in the 1840s. Richard Altmann, in 1890, established them as cell organelles and called them "bioblasts". The term "mitochondria" was coined by Carl Benda in 1898. Leonor Michaelis discovered that Janus green can be used as a supravital stain for mitochondria in 1900. In 1904, Friedrich Meves made the first recorded observation of mitochondria in plants, in cells of the white waterlily "Nymphaea alba", and in 1908, along with Claudius Regaud, suggested that they contain proteins and lipids. Benjamin F. Kingsbury, in 1912, first related them to cell respiration, but almost exclusively based on morphological observations. In 1913, particles from extracts of guinea-pig liver were linked to respiration by Otto Heinrich Warburg, which he called "grana". Warburg and Heinrich Otto Wieland, who had also postulated a similar particle mechanism, disagreed on the chemical nature of the respiration. It was not until 1925, when David Keilin discovered cytochromes, that the respiratory chain was described.
In 1939, experiments using minced muscle cells demonstrated that cellular respiration using one oxygen atom can form two adenosine triphosphate (ATP) molecules, and, in 1941, the concept of the phosphate bonds of ATP being a form of energy in cellular metabolism was developed by Fritz Albert Lipmann. In the following years, the mechanism behind cellular respiration was further elaborated, although its link to the mitochondria was not known. The introduction of tissue fractionation by Albert Claude allowed mitochondria to be isolated from other cell fractions and biochemical analysis to be conducted on them alone. In 1946, he concluded that cytochrome oxidase and other enzymes responsible for the respiratory chain were isolated to the mitochondria. Eugene Kennedy and Albert Lehninger discovered in 1948 that mitochondria are the site of oxidative phosphorylation in eukaryotes. Over time, the fractionation method was further developed, improving the quality of the mitochondria isolated, and other elements of cell respiration were determined to occur in the mitochondria.
The first high-resolution electron micrographs appeared in 1952, replacing the Janus Green stains as the preferred way of visualizing the mitochondria. This led to a more detailed analysis of the structure of the mitochondria, including confirmation that they were surrounded by a membrane. It also showed a second membrane inside the mitochondria that folded up in ridges dividing up the inner chamber and that the size and shape of the mitochondria varied from cell to cell.
The popular term "powerhouse of the cell" was coined by Philip Siekevitz in 1957.
In 1967, it was discovered that mitochondria contained ribosomes. In 1968, methods were developed for mapping the mitochondrial genes, with the genetic and physical map of yeast mitochondrial DNA being completed in 1976.
There are two hypotheses about the origin of mitochondria: endosymbiotic and autogenous. The endosymbiotic hypothesis suggests that mitochondria were originally prokaryotic cells, capable of implementing oxidative mechanisms that were not possible for eukaryotic cells; they became endosymbionts living inside the eukaryote. In the autogenous hypothesis, mitochondria were born by splitting off a portion of DNA from the nucleus of the eukaryotic cell at the time of divergence with the prokaryotes; this DNA portion would have been enclosed by membranes, which could not be crossed by proteins. Since mitochondria have many features in common with bacteria, the endosymbiotic hypothesis is more widely accepted.
A mitochondrion contains DNA, which is organized as several copies of a single, usually circular chromosome. This mitochondrial chromosome contains genes for redox proteins, such as those of the respiratory chain. The CoRR hypothesis proposes that this co-location is required for redox regulation. The mitochondrial genome codes for some RNAs of ribosomes, and the 22 tRNAs necessary for the translation of mRNAs into protein. The circular structure is also found in prokaryotes. The proto-mitochondrion was probably closely related to "Rickettsia". However, the exact relationship of the ancestor of mitochondria to the alphaproteobacteria, and whether the mitochondrion was formed at the same time as or after the nucleus, remains controversial. For example, it has been suggested that the SAR11 clade of bacteria shares a relatively recent common ancestor with the mitochondria, while phylogenomic analyses indicate that mitochondria evolved from a proteobacterial lineage that is closely related to, or a member of, the alphaproteobacteria.
The ribosomes coded for by the mitochondrial DNA are similar to those from bacteria in size and structure. They closely resemble the bacterial 70S ribosome and not the 80S cytoplasmic ribosomes, which are coded for by nuclear DNA.
The endosymbiotic relationship of mitochondria with their host cells was popularized by Lynn Margulis. The endosymbiotic hypothesis suggests that mitochondria descended from bacteria that somehow survived endocytosis by another cell, and became incorporated into the cytoplasm. The ability of these bacteria to conduct respiration in host cells that had relied on glycolysis and fermentation would have provided a considerable evolutionary advantage. This symbiotic relationship probably developed 1.7 to 2 billion years ago.
A few groups of unicellular eukaryotes have only vestigial mitochondria or derived structures: the microsporidians, metamonads, and archamoebae. These groups appear as the most primitive eukaryotes on phylogenetic trees constructed using rRNA information, which once suggested that they appeared before the origin of mitochondria. However, this is now known to be an artifact of long-branch attraction—they are derived groups and retain genes or organelles derived from mitochondria (e.g., mitosomes and hydrogenosomes). Accordingly, mitochondria, hydrogenosomes, mitosomes, and related organelles, as found in some loricifera (e.g. "Spinoloricus") and myxozoa (e.g. "Henneguya zschokkei"), are together classified as MROs, mitochondrion-related organelles.
"Monocercomonoides" appears to have lost its mitochondria completely, and at least some of the mitochondrial functions seem to be carried out by cytoplasmic proteins.
A mitochondrion contains outer and inner membranes composed of phospholipid bilayers and proteins. The two membranes have different properties. Because of this double-membraned organization, there are five distinct parts to a mitochondrion: the outer mitochondrial membrane, the intermembrane space (between the outer and inner membranes), the inner mitochondrial membrane, the cristae space (formed by infoldings of the inner membrane), and the matrix (the space within the inner membrane).
Mitochondria stripped of their outer membrane are called mitoplasts.
The outer mitochondrial membrane, which encloses the entire organelle, is 60 to 75 angstroms (Å) thick. It has a protein-to-phospholipid ratio similar to that of the cell membrane (about 1:1 by weight). It contains large numbers of integral membrane proteins called porins. A major trafficking protein is the pore-forming voltage-dependent anion channel (VDAC). The VDAC is the primary transporter of nucleotides, ions and metabolites between the cytosol and the intermembrane space. It is formed as a beta barrel that spans the outer membrane, similar to that in the gram-negative bacterial membrane. Larger proteins can enter the mitochondrion if a signaling sequence at their N-terminus binds to a large multisubunit protein called translocase in the outer membrane, which then actively moves them across the membrane. Mitochondrial pro-proteins are imported through specialised translocation complexes.
The outer membrane also contains enzymes involved in such diverse activities as the elongation of fatty acids, oxidation of epinephrine, and the degradation of tryptophan. These enzymes include monoamine oxidase, rotenone-insensitive NADH-cytochrome c-reductase, kynurenine hydroxylase and fatty acid Co-A ligase. Disruption of the outer membrane permits proteins in the intermembrane space to leak into the cytosol, leading to certain cell death. The mitochondrial outer membrane can associate with the endoplasmic reticulum (ER) membrane, in a structure called MAM (mitochondria-associated ER-membrane). This is important in the ER-mitochondria calcium signaling and is involved in the transfer of lipids between the ER and mitochondria. Outside the outer membrane there are small (diameter: 60Å) particles named sub-units of Parson.
The mitochondrial intermembrane space is the space between the outer membrane and the inner membrane. It is also known as perimitochondrial space. Because the outer membrane is freely permeable to small molecules, the concentrations of small molecules, such as ions and sugars, in the intermembrane space is the same as in the cytosol. However, large proteins must have a specific signaling sequence to be transported across the outer membrane, so the protein composition of this space is different from the protein composition of the cytosol. One protein that is localized to the intermembrane space in this way is cytochrome c.
The inner mitochondrial membrane contains proteins with three types of functions: those that carry out the redox reactions of the electron transport chain, ATP synthase, which generates ATP in the matrix, and specific transport proteins that regulate the passage of metabolites into and out of the matrix.
It contains more than 151 different polypeptides, and has a very high protein-to-phospholipid ratio (more than 3:1 by weight, which is about 1 protein for 15 phospholipids). The inner membrane is home to around 1/5 of the total protein in a mitochondrion. Additionally, the inner membrane is rich in an unusual phospholipid, cardiolipin. This phospholipid was originally discovered in cow hearts in 1942, and is usually characteristic of mitochondrial and bacterial plasma membranes. Cardiolipin contains four fatty acids rather than two, and may help to make the inner membrane impermeable. Unlike the outer membrane, the inner membrane does not contain porins, and is highly impermeable to all molecules. Almost all ions and molecules require special membrane transporters to enter or exit the matrix. Proteins are ferried into the matrix via the translocase of the inner membrane (TIM) complex or via Oxa1. In addition, there is a membrane potential across the inner membrane, formed by the action of the enzymes of the electron transport chain. Inner membrane fusion is mediated by the inner membrane protein OPA1.
The inner mitochondrial membrane is compartmentalized into numerous cristae, which expand the surface area of the inner mitochondrial membrane, enhancing its ability to produce ATP. For typical liver mitochondria, the area of the inner membrane is about five times as large as that of the outer membrane. This ratio is variable, and mitochondria from cells that have a greater demand for ATP, such as muscle cells, contain even more cristae. Mitochondria within the same cell can have substantially different crista density; the ones required to produce more energy have much more crista membrane surface. These folds are studded with small round bodies known as F1 particles or oxysomes. They are not simple random folds but rather invaginations of the inner membrane, which can affect overall chemiosmotic function.
One recent mathematical modeling study has suggested that the optical properties of the cristae in filamentous mitochondria may affect the generation and propagation of light within the tissue.
The matrix is the space enclosed by the inner membrane. It contains about 2/3 of the total proteins in a mitochondrion. The matrix is important in the production of ATP with the aid of the ATP synthase contained in the inner membrane. The matrix contains a highly concentrated mixture of hundreds of enzymes, special mitochondrial ribosomes, tRNA, and several copies of the mitochondrial DNA genome. Of the enzymes, the major functions include oxidation of pyruvate and fatty acids, and the citric acid cycle. The DNA molecules are packaged into nucleoids by proteins, one of which is TFAM.
Mitochondria have their own genetic material, and the machinery to manufacture their own RNAs and proteins ("see: protein biosynthesis"). A published human mitochondrial DNA sequence revealed 16,569 base pairs encoding 37 genes: 22 tRNA, 2 rRNA, and 13 peptide genes. The 13 mitochondrial peptides in humans are integrated into the inner mitochondrial membrane, along with proteins encoded by genes that reside in the host cell's nucleus.
The mitochondria-associated ER membrane (MAM) is another structural element that is increasingly recognized for its critical role in cellular physiology and homeostasis. Once considered a technical snag in cell fractionation techniques, the alleged ER vesicle contaminants that invariably appeared in the mitochondrial fraction have been re-identified as membranous structures derived from the MAM—the interface between mitochondria and the ER. Physical coupling between these two organelles had previously been observed in electron micrographs and has more recently been probed with fluorescence microscopy. Such studies estimate that at the MAM, which may comprise up to 20% of the mitochondrial outer membrane, the ER and mitochondria are separated by a mere 10–25 nm and held together by protein tethering complexes.
Purified MAM from subcellular fractionation has been shown to be enriched in enzymes involved in phospholipid exchange, in addition to channels associated with Ca2+ signaling. These hints of a prominent role for the MAM in the regulation of cellular lipid stores and signal transduction have been borne out, with significant implications for mitochondrial-associated cellular phenomena, as discussed below. Not only has the MAM provided insight into the mechanistic basis underlying such physiological processes as intrinsic apoptosis and the propagation of calcium signaling, but it also favors a more refined view of the mitochondria. Though often seen as static, isolated 'powerhouses' hijacked for cellular metabolism through an ancient endosymbiotic event, the evolution of the MAM underscores the extent to which mitochondria have been integrated into overall cellular physiology, with intimate physical and functional coupling to the endomembrane system.
The MAM is enriched in enzymes involved in lipid biosynthesis, such as phosphatidylserine synthase on the ER face and phosphatidylserine decarboxylase on the mitochondrial face. Because mitochondria are dynamic organelles constantly undergoing fission and fusion events, they require a constant and well-regulated supply of phospholipids for membrane integrity. But mitochondria are not only a destination for the phospholipids whose synthesis they complete; the organelle also plays a role in inter-organelle trafficking of the intermediates and products of phospholipid biosynthetic pathways, ceramide and cholesterol metabolism, and glycosphingolipid anabolism.
Such trafficking capacity depends on the MAM, which has been shown to facilitate transfer of lipid intermediates between organelles. In contrast to the standard vesicular mechanism of lipid transfer, evidence indicates that the physical proximity of the ER and mitochondrial membranes at the MAM allows for lipid flipping between opposed bilayers. Despite this unusual and seemingly energetically unfavorable mechanism, such transport does not require ATP. Instead, in yeast, it has been shown to be dependent on a multiprotein tethering structure termed the ER-mitochondria encounter structure, or ERMES, although it remains unclear whether this structure directly mediates lipid transfer or is required to keep the membranes in sufficiently close proximity to lower the energy barrier for lipid flipping.
The MAM may also be part of the secretory pathway, in addition to its role in intracellular lipid trafficking. In particular, the MAM appears to be an intermediate destination between the rough ER and the Golgi in the pathway that leads to very-low-density lipoprotein, or VLDL, assembly and secretion. The MAM thus serves as a critical metabolic and trafficking hub in lipid metabolism.
A critical role for the ER in calcium signaling was acknowledged before such a role for the mitochondria was widely accepted, in part because the low affinity of Ca2+ channels localized to the outer mitochondrial membrane seemed to contradict this organelle's purported responsiveness to changes in intracellular Ca2+ flux. But the presence of the MAM resolves this apparent contradiction: the close physical association between the two organelles results in Ca2+ microdomains at contact points that facilitate efficient Ca2+ transmission from the ER to the mitochondria. Transmission occurs in response to so-called "Ca2+ puffs" generated by spontaneous clustering and activation of IP3R, a canonical ER membrane Ca2+ channel.
The fate of these puffs—in particular, whether they remain restricted to isolated locales or integrated into Ca2+ waves for propagation throughout the cell—is determined in large part by MAM dynamics. Although reuptake of Ca2+ by the ER (concomitant with its release) modulates the intensity of the puffs, thus insulating mitochondria to a certain degree from high Ca2+ exposure, the MAM often serves as a firewall that essentially buffers Ca2+ puffs by acting as a sink into which free ions released into the cytosol can be funneled. This Ca2+ tunneling occurs through the low-affinity Ca2+ receptor VDAC1, which recently has been shown to be physically tethered to the IP3R clusters on the ER membrane and enriched at the MAM. The ability of mitochondria to serve as a Ca2+ sink is a result of the electrochemical gradient generated during oxidative phosphorylation, which makes tunneling of the cation an exergonic process. Normal, mild calcium influx from cytosol into the mitochondrial matrix causes transient depolarization that is corrected by pumping out protons.
But transmission of Ca2+ is not unidirectional; rather, it is a two-way street. The properties of the Ca2+ pump SERCA and the channel IP3R present on the ER membrane facilitate feedback regulation coordinated by MAM function. In particular, the clearance of Ca2+ by the MAM allows for spatio-temporal patterning of Ca2+ signaling because Ca2+ alters IP3R activity in a biphasic manner. SERCA is likewise affected by mitochondrial feedback: uptake of Ca2+ by the MAM stimulates ATP production, thus providing energy that enables SERCA to reload the ER with Ca2+ for continued Ca2+ efflux at the MAM. Thus, the MAM is not a passive buffer for Ca2+ puffs; rather it helps modulate further Ca2+ signaling through feedback loops that affect ER dynamics.
Regulating ER release of Ca2+ at the MAM is especially critical because only a certain window of Ca2+ uptake sustains the mitochondria, and consequently the cell, at homeostasis. Sufficient intraorganelle Ca2+ signaling is required to stimulate metabolism by activating dehydrogenase enzymes critical to flux through the citric acid cycle. However, once Ca2+ signaling in the mitochondria passes a certain threshold, it stimulates the intrinsic pathway of apoptosis in part by collapsing the mitochondrial membrane potential required for metabolism. Studies examining the role of pro- and anti-apoptotic factors support this model; for example, the anti-apoptotic factor Bcl-2 has been shown to interact with IP3Rs to reduce Ca2+ filling of the ER, leading to reduced efflux at the MAM and preventing collapse of the mitochondrial membrane potential post-apoptotic stimuli. Given the need for such fine regulation of Ca2+ signaling, it is perhaps unsurprising that dysregulated mitochondrial Ca2+ has been implicated in several neurodegenerative diseases, while the catalogue of tumor suppressors includes a few that are enriched at the MAM.
Recent advances in the identification of the tethers between the mitochondrial and ER membranes suggest that the scaffolding function of the molecular elements involved is secondary to other, non-structural functions. In yeast, ERMES, a multiprotein complex of interacting ER- and mitochondrial-resident membrane proteins, is required for lipid transfer at the MAM and exemplifies this principle. One of its components, for example, is also a constituent of the protein complex required for insertion of transmembrane beta-barrel proteins into the lipid bilayer. However, a homologue of the ERMES complex has not yet been identified in mammalian cells. Other proteins implicated in scaffolding likewise have functions independent of structural tethering at the MAM; for example, ER-resident and mitochondrial-resident mitofusins form heterocomplexes that regulate the number of inter-organelle contact sites, although mitofusins were first identified for their role in fission and fusion events between individual mitochondria. Glucose-related protein 75 (grp75) is another dual-function protein. In addition to the matrix pool of grp75, a portion serves as a chaperone that physically links the mitochondrial and ER Ca2+ channels VDAC and IP3R for efficient Ca2+ transmission at the MAM. Another potential tether is Sigma-1R, a non-opioid receptor whose stabilization of ER-resident IP3R may preserve communication at the MAM during the metabolic stress response.
The MAM is a critical signaling, metabolic, and trafficking hub in the cell that allows for the integration of ER and mitochondrial physiology. Coupling between these organelles is not simply structural but functional as well and critical for overall cellular physiology and homeostasis. The MAM thus offers a perspective on mitochondria that diverges from the traditional view of this organelle as a static, isolated unit appropriated for its metabolic capacity by the cell. Instead, this mitochondrial-ER interface emphasizes the integration of the mitochondria, the product of an endosymbiotic event, into diverse cellular processes. Recently it has also been shown that mitochondria and MAMs in neurons are anchored to specialised intercellular communication sites (so-called somatic junctions). Microglial processes monitor and protect neuronal functions at these sites, and MAMs are thought to have an important role in this type of cellular quality control.
Mitochondria (and related structures) are found in all eukaryotes (except two—the Oxymonad "Monocercomonoides" and "Henneguya salminicola"). Although commonly depicted as bean-like structures they form a highly dynamic network in the majority of cells where they constantly undergo fission and fusion. The population of all the mitochondria of a given cell constitutes the chondriome. Mitochondria vary in number and location according to cell type. A single mitochondrion is often found in unicellular organisms. Conversely, the chondriome size of human liver cells is large, with about 1000–2000 mitochondria per cell, making up 1/5 of the cell volume. The mitochondrial content of otherwise similar cells can vary substantially in size and membrane potential, with differences arising from sources including uneven partitioning at cell divisions, leading to extrinsic differences in ATP levels and downstream cellular processes. The mitochondria can be found nestled between myofibrils of muscle or wrapped around the sperm flagellum. Often, they form a complex 3D branching network inside the cell with the cytoskeleton. The association with the cytoskeleton determines mitochondrial shape, which can affect the function as well: different structures of the mitochondrial network may afford the population a variety of physical, chemical, and signalling advantages or disadvantages. Mitochondria in cells are always distributed along microtubules and the distribution of these organelles is also correlated with the endoplasmic reticulum. Recent evidence suggests that vimentin, one of the components of the cytoskeleton, is also critical to the association with the cytoskeleton.
The most prominent roles of mitochondria are to produce the energy currency of the cell, ATP (i.e., phosphorylation of ADP), through respiration, and to regulate cellular metabolism. The central set of reactions involved in ATP production are collectively known as the citric acid cycle, or the Krebs cycle. However, the mitochondrion has many other functions in addition to the production of ATP.
A dominant role for the mitochondria is the production of ATP, as reflected by the large number of proteins in the inner membrane for this task. This is done by oxidizing the major products of glucose: pyruvate and NADH, which are produced in the cytosol. This type of cellular respiration, known as aerobic respiration, is dependent on the presence of oxygen, which provides most of the energy released. When oxygen is limited, the glycolytic products will be metabolized by anaerobic fermentation, a process that is independent of the mitochondria. The production of ATP from glucose and oxygen has an approximately 13-times higher yield during aerobic respiration compared to fermentation. Plant mitochondria can also produce a limited amount of ATP without oxygen by using the alternative substrate nitrite. ATP crosses out through the inner membrane with the help of a specific protein, and across the outer membrane via porins. ADP returns via the same route.
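A rough back-of-the-envelope check on that yield figure (the ATP counts below are commonly cited textbook values and are approximate, not figures taken from the text above):

\frac{\text{ATP per glucose, aerobic}}{\text{ATP per glucose, fermentation}} \approx \frac{26\text{--}30}{2} \approx 13\text{--}15

Fermentation nets only the 2 ATP of glycolysis, whereas oxidative phosphorylation in the mitochondrion adds on the order of 24–28 more, which is where the roughly 13-fold difference comes from.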
Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix where they can either be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, or they can be carboxylated (by pyruvate carboxylase) to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle, and is therefore an anaplerotic reaction, increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g. in muscle) are suddenly increased by activity.
In the citric acid cycle, all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that the additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence, the addition of any one of them to the cycle has an anaplerotic effect, and its removal has a cataplerotic effect. These anaplerotic and cataplerotic reactions will, during the course of the cycle, increase or decrease the amount of oxaloacetate available to combine with acetyl-CoA to form citric acid. This in turn increases or decreases the rate of ATP production by the mitochondrion, and thus the availability of ATP to the cell.
Acetyl-CoA, on the other hand, derived from pyruvate oxidation, or from the beta-oxidation of fatty acids, is the only fuel to enter the citric acid cycle. With each turn of the cycle one molecule of acetyl-CoA is consumed for every molecule of oxaloacetate present in the mitochondrial matrix, and is never regenerated. It is the oxidation of the acetate portion of acetyl-CoA that produces CO2 and water, with the energy thus released captured in the form of ATP.
In the liver, the carboxylation of cytosolic pyruvate into intra-mitochondrial oxaloacetate is an early step in the gluconeogenic pathway, which converts lactate and de-aminated alanine into glucose, under the influence of high levels of glucagon and/or epinephrine in the blood. Here, the addition of oxaloacetate to the mitochondrion does not have a net anaplerotic effect, as another citric acid cycle intermediate (malate) is immediately removed from the mitochondrion to be converted into cytosolic oxaloacetate, which is ultimately converted into glucose, in a process that is almost the reverse of glycolysis.
The enzymes of the citric acid cycle are located in the mitochondrial matrix, with the exception of succinate dehydrogenase, which is bound to the inner mitochondrial membrane as part of Complex II. The citric acid cycle oxidizes the acetyl-CoA to carbon dioxide, and, in the process, produces reduced cofactors (three molecules of NADH and one molecule of FADH2) that are a source of electrons for the "electron transport chain", and a molecule of GTP (that is readily converted to an ATP).
The electrons from NADH and FADH2 are transferred to oxygen (O2), an energy-rich molecule, and hydrogen (protons) in several steps via the electron transport chain. NADH and FADH2 molecules are produced within the matrix via the citric acid cycle but are also produced in the cytoplasm by glycolysis. Reducing equivalents from the cytoplasm can be imported via the malate-aspartate shuttle system of antiporter proteins or feed into the electron transport chain using a glycerol phosphate shuttle. Protein complexes in the inner membrane (NADH dehydrogenase (ubiquinone), cytochrome c reductase, and cytochrome c oxidase) perform the transfer and the incremental release of energy is used to pump protons (H+) into the intermembrane space. This process is efficient, but a small percentage of electrons may prematurely reduce oxygen, forming reactive oxygen species such as superoxide. This can cause oxidative stress in the mitochondria and may contribute to the decline in mitochondrial function associated with the aging process.
As the proton concentration increases in the intermembrane space, a strong electrochemical gradient is established across the inner membrane. The protons can return to the matrix through the ATP synthase complex, and their potential energy is used to synthesize ATP from ADP and inorganic phosphate (Pi). This process is called chemiosmosis, and was first described by Peter Mitchell, who was awarded the 1978 Nobel Prize in Chemistry for his work. Later, part of the 1997 Nobel Prize in Chemistry was awarded to Paul D. Boyer and John E. Walker for their clarification of the working mechanism of ATP synthase.
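The electrochemical gradient described above is conventionally summarized as the proton-motive force. The expression below is the standard textbook relation, given here only as a worked illustration (sign conventions vary between sources):

\Delta p = \Delta\Psi - \frac{2.303\,RT}{F}\,\Delta\mathrm{pH} \approx \Delta\Psi - 61.5\ \mathrm{mV} \times \Delta\mathrm{pH} \quad (\text{at } 37\ ^{\circ}\mathrm{C})

where ΔΨ is the electrical potential across the inner membrane and ΔpH is the pH difference between the matrix and the intermembrane space. With a membrane potential of roughly 150–180 mV and a ΔpH of about 0.5–1 unit, the total proton-motive force is on the order of 200 mV, which is the driving force that ATP synthase harnesses.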
Under certain conditions, protons can re-enter the mitochondrial matrix without contributing to ATP synthesis. This process is known as "proton leak" or mitochondrial uncoupling and is due to the facilitated diffusion of protons into the matrix. The process results in the unharnessed potential energy of the proton electrochemical gradient being released as heat. The process is mediated by a proton channel called thermogenin, or UCP1. Thermogenin is a 33 kDa protein first discovered in 1973. Thermogenin is primarily found in brown adipose tissue, or brown fat, and is responsible for non-shivering thermogenesis. Brown adipose tissue is found in mammals, and is at its highest levels in early life and in hibernating animals. In humans, brown adipose tissue is present at birth and decreases with age.
The concentration of free calcium in the cell can regulate an array of reactions and is important for signal transduction in the cell. Mitochondria can transiently store calcium, a contributing process for the cell's homeostasis of calcium.
In fact, their ability to rapidly take in calcium for later release makes them very good "cytosolic buffers" for calcium. The endoplasmic reticulum (ER) is the most significant storage site of calcium, and there is a significant interplay between the mitochondrion and ER with regard to calcium. The calcium is taken up into the matrix by the mitochondrial calcium uniporter on the inner mitochondrial membrane. It is primarily driven by the mitochondrial membrane potential. Release of this calcium back into the cell's interior can occur via a sodium-calcium exchange protein or via "calcium-induced-calcium-release" pathways. This can initiate calcium spikes or calcium waves with large changes in the membrane potential. These can activate a series of second messenger system proteins that can coordinate processes such as neurotransmitter release in nerve cells and release of hormones in endocrine cells.
Ca2+ influx to the mitochondrial matrix has recently been implicated as a mechanism to regulate respiratory bioenergetics by allowing the electrochemical potential across the membrane to transiently "pulse" from ΔΨ-dominated to pH-dominated, facilitating a reduction of oxidative stress. In neurons, concomitant increases in cytosolic and mitochondrial calcium act to synchronize neuronal activity with mitochondrial energy metabolism. Mitochondrial matrix calcium levels can reach the tens of micromolar levels, which is necessary for the activation of isocitrate dehydrogenase, one of the key regulatory enzymes of the Krebs cycle.
Mitochondria play a central role in many other metabolic tasks, such as:
Some mitochondrial functions are performed only in specific types of cells. For example, mitochondria in liver cells contain enzymes that allow them to detoxify ammonia, a waste product of protein metabolism. A mutation in the genes regulating any of these functions can result in mitochondrial diseases.
The relationship between cellular proliferation and mitochondria has been investigated using cervical cancer HeLa cells. Tumor cells require an ample amount of ATP (adenosine triphosphate) in order to synthesize bioactive compounds such as lipids, proteins, and nucleotides for rapid cell proliferation. The majority of ATP in tumor cells is generated via the oxidative phosphorylation pathway (OxPhos). Interference with OxPhos has been shown to cause cell cycle arrest, suggesting that mitochondria play a role in cell proliferation. Mitochondrial ATP production is also vital for cell division and differentiation in infection, in addition to basic functions in the cell, including the regulation of cell volume, solute concentration, and cellular architecture. ATP levels differ at various stages of the cell cycle, suggesting that there is a relationship between the abundance of ATP and the cell's ability to enter a new cell cycle. ATP's role in the basic functions of the cell makes the cell cycle sensitive to changes in the availability of mitochondria-derived ATP. The variation in ATP levels at different stages of the cell cycle supports the hypothesis that mitochondria play an important role in cell cycle regulation. Although the specific mechanisms linking mitochondria and cell cycle regulation are not well understood, studies have shown that low-energy cell cycle checkpoints monitor the energy capability before committing to another round of cell division.
Mitochondria contain their own genome, an indication that they are derived from bacteria through endosymbiosis. However, the ancestral endosymbiont genome has lost most of its genes so that the mitochondrial genome (mitogenome) is one of the most reduced genomes across organisms.
The human mitochondrial genome is a circular DNA molecule of about 16 kilobases. It encodes 37 genes: 13 for subunits of respiratory complexes I, III, IV and V, 22 for mitochondrial tRNA (for the 20 standard amino acids, plus an extra gene for leucine and serine), and 2 for rRNA. One mitochondrion can contain two to ten copies of its DNA.
As in prokaryotes, there is a very high proportion of coding DNA and an absence of repeats. Mitochondrial genes are transcribed as multigenic transcripts, which are cleaved and polyadenylated to yield mature mRNAs. Not all proteins necessary for mitochondrial function are encoded by the mitochondrial genome; most are coded by genes in the cell nucleus and the corresponding proteins are imported into the mitochondrion. The exact number of genes encoded by the nucleus and the mitochondrial genome differs between species. Most mitochondrial genomes are circular, although exceptions have been reported. In general, mitochondrial DNA lacks introns, as is the case in the human mitochondrial genome; however, introns have been observed in some eukaryotic mitochondrial DNA, such as that of yeast and protists, including "Dictyostelium discoideum". Between protein-coding regions, tRNAs are present. During transcription, the tRNAs acquire their characteristic L-shape that gets recognized and cleaved by specific enzymes. Mitochondrial tRNA genes have different sequences from the nuclear tRNAs but lookalikes of mitochondrial tRNAs have been found in the nuclear chromosomes with high sequence similarity.
In animals, the mitochondrial genome is typically a single circular chromosome that is approximately 16 kb long and has 37 genes. The genes, while highly conserved, may vary in location. Curiously, this pattern is not found in the human body louse ("Pediculus humanus"). Instead, this mitochondrial genome is arranged in 18 minicircular chromosomes, each of which is 3–4 kb long and has one to three genes. This pattern is also found in other sucking lice, but not in chewing lice. Recombination has been shown to occur between the minichromosomes. The reason for this difference is not known.
While slight variations on the standard genetic code had been predicted earlier, none was discovered until 1979, when researchers studying human mitochondrial genes determined that they used an alternative code. However, the mitochondria of many other eukaryotes, including most plants, use the standard code. Many slight variants have been discovered since, including various alternative mitochondrial codes. Further, the AUA, AUC, and AUU codons are all allowable start codons.
Some of these differences should be regarded as pseudo-changes in the genetic code due to the phenomenon of RNA editing, which is common in mitochondria. In higher plants, it was thought that CGG encoded for tryptophan and not arginine; however, the codon in the processed RNA was discovered to be the UGG codon, consistent with the standard genetic code for tryptophan. Of note, the arthropod mitochondrial genetic code has undergone parallel evolution within a phylum, with some organisms uniquely translating AGG to lysine.
Mitochondrial genomes have far fewer genes than the bacteria from which they are thought to be descended. Although some have been lost altogether, many have been transferred to the nucleus, such as the respiratory complex II protein subunits. This is thought to be relatively common over evolutionary time. A few organisms, such as the "Cryptosporidium", actually have mitochondria that lack any DNA, presumably because all their genes have been lost or transferred. In "Cryptosporidium", the mitochondria have an altered ATP generation system that renders the parasite resistant to many classical mitochondrial inhibitors such as cyanide, azide, and atovaquone.
Mitochondria divide by binary fission, similar to bacterial cell division. The regulation of this division differs between eukaryotes. In many single-celled eukaryotes, their growth and division are linked to the cell cycle. For example, a single mitochondrion may divide synchronously with the nucleus. This division and segregation process must be tightly controlled so that each daughter cell receives at least one mitochondrion. In other eukaryotes (in mammals for example), mitochondria may replicate their DNA and divide mainly in response to the energy needs of the cell, rather than in phase with the cell cycle. When the energy needs of a cell are high, mitochondria grow and divide. When energy use is low, mitochondria are destroyed or become inactive. In such examples, and in contrast to the situation in many single-celled eukaryotes, mitochondria are apparently randomly distributed to the daughter cells during the division of the cytoplasm. Understanding of mitochondrial dynamics, which is described as the balance between mitochondrial fusion and fission, has revealed that functional and structural alterations in mitochondrial morphology are important factors in pathologies associated with several disease conditions.
The hypothesis of mitochondrial binary fission has relied on visualization by fluorescence microscopy and conventional transmission electron microscopy (TEM). The resolution of fluorescence microscopy (~200 nm) is insufficient to distinguish structural details, such as the double mitochondrial membrane in mitochondrial division, or even to distinguish individual mitochondria when several are close together. Conventional TEM also has some technical limitations in verifying mitochondrial division. Cryo-electron tomography was recently used to visualize mitochondrial division in frozen hydrated intact cells. It revealed that mitochondria divide by budding.
An individual's mitochondrial genes are not inherited by the same mechanism as nuclear genes. Typically, the mitochondria are inherited from one parent only. In humans, when an egg cell is fertilized by a sperm, the egg nucleus and sperm nucleus each contribute equally to the genetic makeup of the zygote nucleus. In contrast, the mitochondria, and therefore the mitochondrial DNA, usually come from the egg only. The sperm's mitochondria enter the egg, but do not contribute genetic information to the embryo. Instead, paternal mitochondria are marked with ubiquitin to select them for later destruction inside the embryo. The egg cell contains relatively few mitochondria, but it is these mitochondria that survive and divide to populate the cells of the adult organism. Mitochondria are, therefore, in most cases inherited only from mothers, a pattern known as maternal inheritance. This mode is seen in most organisms, including the majority of animals. However, mitochondria in some species can sometimes be inherited paternally. This is the norm among certain coniferous plants, although not in pine trees and yews. For Mytilids, paternal inheritance only occurs within males of the species. It has been suggested that it occurs at a very low level in humans. It was suggested in 2012, in an article in Current Biology, that mitochondria that shorten male lifespan stay in the system because they are inherited only through the mother. By contrast, natural selection weeds out mitochondria that reduce female survival as such mitochondria are less likely to be passed on to the next generation. Therefore, it is suggested that human females and female animals tend to live longer than males. The authors claim that this is a partial explanation. Dr Tom Kirkwood, professor of ageing at Newcastle University, commented on the article "I certainly don't think this is a discovery that explains why women live five-to-six years longer than men."
Uniparental inheritance leads to little opportunity for genetic recombination between different lineages of mitochondria, although a single mitochondrion can contain 2–10 copies of its DNA. For this reason, mitochondrial DNA is usually thought to reproduce by binary fission. What recombination does take place maintains genetic integrity rather than maintaining diversity. However, there are studies showing evidence of recombination in mitochondrial DNA. It is clear that the enzymes necessary for recombination are present in mammalian cells. Further, evidence suggests that animal mitochondria can undergo recombination. The data are a bit more controversial in humans, although indirect evidence of recombination exists. If recombination does not occur, the whole mitochondrial DNA sequence represents a single haplotype, which makes it useful for studying the evolutionary history of populations.
Entities undergoing uniparental inheritance and with little to no recombination may be expected to be subject to Muller's ratchet, the inexorable accumulation of deleterious mutations until functionality is lost. Animal populations of mitochondria avoid this buildup through a developmental process known as the mtDNA bottleneck. The bottleneck exploits stochastic processes in the cell to increase in the cell-to-cell variability in mutant load as an organism develops: a single egg cell with some proportion of mutant mtDNA thus produces an embryo where different cells have different mutant loads. Cell-level selection may then act to remove those cells with more mutant mtDNA, leading to a stabilisation or reduction in mutant load between generations. The mechanism underlying the bottleneck is debated, with a recent mathematical and experimental metastudy providing evidence for a combination of random partitioning of mtDNAs at cell divisions and random turnover of mtDNA molecules within the cell.
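A toy simulation illustrates how random partitioning at a bottleneck inflates cell-to-cell variability in mutant load. The copy numbers, the 30% starting mutant fraction, and the use of simple binomial sampling are assumptions chosen only for illustration; this is a minimal sketch of the general idea, not the model used in the metastudy cited above.

import random

def partition_mtdna(mutant_fraction, copies_per_cell, n_cells, seed=0):
    # Binomially sample the mutant load of n_cells daughter cells, each
    # receiving copies_per_cell mtDNA molecules drawn from a pool with the
    # given mutant fraction.
    rng = random.Random(seed)
    loads = []
    for _ in range(n_cells):
        mutants = sum(rng.random() < mutant_fraction for _ in range(copies_per_cell))
        loads.append(mutants / copies_per_cell)
    return loads

# A tighter bottleneck (fewer copies per cell) spreads the mutant load out
# across cells, giving cell-level selection more variation to act on.
for copies in (1000, 100, 10):
    loads = partition_mtdna(0.3, copies, n_cells=5000)
    mean = sum(loads) / len(loads)
    var = sum((x - mean) ** 2 for x in loads) / len(loads)
    print(copies, round(mean, 3), round(var, 5))

The expected variance of the per-cell mutant fraction is p(1 − p)/N, so cutting the per-cell copy number N tenfold raises the spread tenfold while leaving the mean load unchanged.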
Mitochondria can repair oxidative DNA damage by mechanisms that are analogous to those occurring in the cell nucleus. The proteins that are employed in mtDNA repair are encoded by nuclear genes, and are translocated to the mitochondria. The DNA repair pathways in mammalian mitochondria include base excision repair, double-strand break repair, direct reversal and mismatch repair. Also DNA damages may be bypassed, rather than repaired, by translesion synthesis.
Of the several DNA repair process in mitochondria, the base excision repair pathway is the one that has been most comprehensively studied. Base excision repair is carried out by a sequence of enzymatic catalyzed steps that include recognition and excision of a damaged DNA base, removal of the resulting abasic site, end processing, gap filling and ligation. A common damage in mtDNA that is repaired by base excision repair is 8-oxoguanine produced by the oxidation of guanine.
Double-strand breaks can be repaired by homologous recombinational repair in both mammalian mtDNA and plant mtDNA. Double-strand breaks in mtDNA can also be repaired by microhomology-mediated end joining. Although there is evidence for the repair processes of direct reversal and mismatch repair in mtDNA, these processes are still not well characterized.
Eukaryotic cells typically have mitochondrial DNA; however, mitochondria that lack their own DNA have been found in a marine parasitic dinoflagellate from the genus "Amoebophyra". This microorganism, "A. cerati", has functional mitochondria that lack a genome. In related species, the mitochondrial genome still has three genes, but in "A. cerati" only a single mitochondrial gene — the cytochrome c oxidase I gene ("cox1") — is found, and it has migrated to the genome of the nucleus.
The near-absence of genetic recombination in mitochondrial DNA makes it a useful source of information for scientists involved in population genetics and evolutionary biology. Because all the mitochondrial DNA is inherited as a single unit, or haplotype, the relationships between mitochondrial DNA from different individuals can be represented as a gene tree. Patterns in these gene trees can be used to infer the evolutionary history of populations. The classic example of this is in human evolutionary genetics, where the molecular clock can be used to provide a recent date for mitochondrial Eve. This is often interpreted as strong support for a recent modern human expansion out of Africa. Another human example is the sequencing of mitochondrial DNA from Neanderthal bones. The relatively large evolutionary distance between the mitochondrial DNA sequences of Neanderthals and living humans has been interpreted as evidence for the lack of interbreeding between Neanderthals and anatomically modern humans.
However, mitochondrial DNA reflects only the history of the females in a population and so may not represent the history of the population as a whole. This can be partially overcome by the use of paternal genetic sequences, such as the non-recombining region of the Y-chromosome. In a broader sense, only studies that also include nuclear DNA can provide a comprehensive evolutionary history of a population.
Recent measurements of the molecular clock for mitochondrial DNA reported a value of 1 mutation every 7884 years dating back to the most recent common ancestor of humans and apes, which is consistent with estimates of mutation rates of autosomal DNA (about 10⁻⁸ per base per generation).
Damage and subsequent dysfunction in mitochondria is an important factor in a range of human diseases due to their influence in cell metabolism. Mitochondrial disorders often present themselves as neurological disorders, including autism. They can also manifest as myopathy, diabetes, multiple endocrinopathy, and a variety of other systemic disorders. Diseases caused by mutation in the mtDNA include Kearns–Sayre syndrome, MELAS syndrome and Leber's hereditary optic neuropathy. In the vast majority of cases, these diseases are transmitted by a female to her children, as the zygote derives its mitochondria and hence its mtDNA from the ovum. Diseases such as Kearns-Sayre syndrome, Pearson syndrome, and progressive external ophthalmoplegia are thought to be due to large-scale mtDNA rearrangements, whereas other diseases such as MELAS syndrome, Leber's hereditary optic neuropathy, myoclonic epilepsy with ragged red fibers (MERRF), and others are due to point mutations in mtDNA.
In other diseases, defects in nuclear genes lead to dysfunction of mitochondrial proteins. This is the case in Friedreich's ataxia, hereditary spastic paraplegia, and Wilson's disease. These diseases are inherited in a dominance relationship, as applies to most other genetic diseases. A variety of disorders can be caused by nuclear mutations of oxidative phosphorylation enzymes, such as coenzyme Q10 deficiency and Barth syndrome. Environmental influences may interact with hereditary predispositions and cause mitochondrial disease. For example, there may be a link between pesticide exposure and the later onset of Parkinson's disease. Other pathologies with etiology involving mitochondrial dysfunction include schizophrenia, bipolar disorder, dementia, Alzheimer's disease, Parkinson's disease, epilepsy, stroke, cardiovascular disease, chronic fatigue syndrome, retinitis pigmentosa, and diabetes mellitus.
Mitochondria-mediated oxidative stress plays a role in cardiomyopathy in type 2 diabetics. Increased fatty acid delivery to the heart increases fatty acid uptake by cardiomyocytes, resulting in increased fatty acid oxidation in these cells. This process increases the reducing equivalents available to the electron transport chain of the mitochondria, ultimately increasing reactive oxygen species (ROS) production. ROS increase uncoupling proteins (UCPs) and potentiate proton leakage through the adenine nucleotide translocator (ANT), the combination of which uncouples the mitochondria. Uncoupling then increases oxygen consumption by the mitochondria, compounding the increase in fatty acid oxidation. This creates a vicious cycle of uncoupling; furthermore, even though oxygen consumption increases, ATP synthesis does not increase proportionally because the mitochondria are uncoupled. Less ATP availability ultimately results in an energy deficit presenting as reduced cardiac efficiency and contractile dysfunction. To compound the problem, impaired sarcoplasmic reticulum calcium release and reduced mitochondrial reuptake limit peak cytosolic levels of this important signaling ion during muscle contraction. The decreased intra-mitochondrial calcium concentration decreases dehydrogenase activation and ATP synthesis. So in addition to lower ATP synthesis due to fatty acid oxidation, ATP synthesis is impaired by poor calcium signaling as well, causing cardiac problems for diabetics.
Given the role of mitochondria as the cell's powerhouse, there may be some leakage of the high-energy electrons in the respiratory chain to form reactive oxygen species. This was thought to result in significant oxidative stress in the mitochondria with high mutation rates of mitochondrial DNA (mtDNA). Hypothesized links between aging and oxidative stress are not new and were proposed in 1956, which was later refined into the mitochondrial free radical theory of aging. A vicious cycle was thought to occur, as oxidative stress leads to mitochondrial DNA mutations, which can lead to enzymatic abnormalities and further oxidative stress.
A number of changes can occur to mitochondria during the aging process. Tissues from elderly patients show a decrease in enzymatic activity of the proteins of the respiratory chain. However, mutated mtDNA can only be found in about 0.2% of very old cells. Large deletions in the mitochondrial genome have been hypothesized to lead to high levels of oxidative stress and neuronal death in Parkinson's disease.
Madeleine L'Engle's 1973 science fantasy novel "A Wind in the Door" prominently features the mitochondria of main character Charles Wallace Murry, as being inhabited by creatures known as the farandolae. The novel also features other characters traveling inside one of Murry's mitochondria.
The 1995 horror fiction novel "Parasite Eve" by Hideaki Sena depicts mitochondria as having some consciousness and mind control abilities, attempting to use these to overtake eukaryotes as the dominant life form. This text was adapted into an eponymous film, video game, and video game sequel all involving a similar premise.
In the "Star Wars" franchise, microorganisms referred to as "midi-chlorians" give some characters the ability to sense and use the Force. George Lucas, director of the 1999 film "", in which midi-chlorians were introduced, described them as "a loose depiction of mitochondria". The non-fictional bacteria genus "Midichloria" was later named after the midi-chlorians of "Star Wars".
As a result of the mitochondrion's prominence in modern American science education, the phrase "the mitochondria is the powerhouse of the cell" became an internet meme.
|
https://en.wikipedia.org/wiki?curid=19588
|
Minimax
Minimax (sometimes MinMax, MM or saddle point) is a decision rule used in artificial intelligence, decision theory, game theory, statistics, and philosophy for minimizing the possible loss for a worst-case (maximum loss) scenario. When dealing with gains, it is referred to as "maximin"—to maximize the minimum gain. Originally formulated for two-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.
The maximin value is the highest value that the player can be sure to get without knowing the actions of the other players; equivalently, it is the lowest value the other players can force the player to receive when they know the player's action. Its formal definition is:
$\underline{v_i} = \max_{a_i} \min_{a_{-i}} v_i(a_i, a_{-i})$
Where $i$ is the index of the player of interest, $-i$ denotes all the other players, $a_i$ is the action taken by player $i$, $a_{-i}$ denotes the actions taken by all the other players, and $v_i$ is the value function of player $i$.
Calculating the maximin value of a player is done in a worst-case approach: for each possible action of the player, we check all possible actions of the other players and determine the worst possible combination of actions, that is, the one that gives player $i$ the smallest value. Then, we determine which action player $i$ can take in order to make sure that this smallest value is the highest possible.
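As an illustration of this worst-case computation, the following minimal Python sketch finds the maximin action of the row player over a small payoff matrix; the matrix entries are assumptions made for the example, not values taken from this article.

```python
# Minimal sketch: maximin action and value for the row player of a two-player game.
# The payoff matrix is an illustrative assumption; rows are our actions, columns are
# the opponent's actions, and entries are payoffs to the row player.
payoffs = [
    [3, -2],
    [-1, 0],
    [-4, 1],
]

def maximin(matrix):
    # For each of our actions, assume the opponent responds with the move that is
    # worst for us, then pick the action whose worst case is best.
    worst_cases = [min(row) for row in matrix]
    best_action = max(range(len(matrix)), key=lambda a: worst_cases[a])
    return best_action, worst_cases[best_action]

action, value = maximin(payoffs)
print(f"maximin action = {action}, guaranteed value = {value}")  # action 1, value -1
```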
For example, consider the following game for two players, where the first player ("row player") may choose any of three moves and the second player ("column player") may choose either of two moves. The result of the combination of both moves is expressed in a payoff table:
(where the first number in each cell is the pay-out of the row player and the second number is the pay-out of the column player).
For the sake of example, we consider only pure strategies. Check each player in turn:
If both players play their respective maximin strategies formula_9, the payoff vector is formula_10.
The minimax value of a player is the smallest value that the other players can force the player to receive, without knowing the player's actions; equivalently, it is the largest value the player can be sure to get when they "know" the actions of the other players. Its formal definition is:
$\overline{v_i} = \min_{a_{-i}} \max_{a_i} v_i(a_i, a_{-i})$
The definition is very similar to that of the maximin value—only the order of the maximum and minimum operators is inverse. In the above example:
For every player $i$, the maximin is at most the minimax: $\underline{v_i} \le \overline{v_i}$.
Intuitively, in maximin the maximization comes before the minimization, so player $i$ tries to maximize their value before knowing what the others will do; in minimax the maximization comes after the minimization, so player $i$ is in a much better position: they maximize their value knowing what the others did.
Another way to understand the "notation" is by reading from right to left: when we write
$\overline{v_i} = \min_{a_{-i}} \max_{a_i} v_i(a_i, a_{-i})$
the initial set of outcomes $v_i(a_i, a_{-i})$ depends on both $a_i$ and $a_{-i}$. We first "marginalize away" $a_i$ from $v_i(a_i, a_{-i})$, by maximizing over $a_i$ (for every possible value of $a_{-i}$) to yield the marginal outcomes $\max_{a_i} v_i(a_i, a_{-i})$, which depend only on $a_{-i}$. We then minimize over $a_{-i}$ over these outcomes. (Conversely for maximin.)
Although it is always the case that formula_26 and formula_27, the payoff vector resulting from both players playing their minimax strategies, formula_28 in the case of formula_29 or formula_30 in the case of formula_31, cannot similarly be ranked against the payoff vector formula_10 resulting from both players playing their maximin strategy.
In two-player zero-sum games, the minimax solution is the same as the Nash equilibrium.
In the context of zero-sum games, the minimax theorem is equivalent to:
For every two-person, zero-sum game with finitely many strategies, there exists a value V and a mixed strategy for each player, such that (a) given Player 2's strategy, the best payoff possible for Player 1 is V, and (b) given Player 1's strategy, the best payoff possible for Player 2 is −V.
Equivalently, Player 1's strategy guarantees them a payoff of V regardless of Player 2's strategy, and similarly Player 2 can guarantee themselves a payoff of −V. The name minimax arises because each player minimizes the maximum payoff possible for the other—since the game is zero-sum, they also minimize their own maximum loss (i.e. maximize their minimum payoff).
See also example of a game without a value.
The following example of a zero-sum game, where A and B make simultaneous moves, illustrates "minimax" solutions. Suppose each player has three choices and consider the payoff matrix for A displayed on the right. Assume the payoff matrix for B is the same matrix with the signs reversed (i.e. if the choices are A1 and B1 then B pays 3 to A). Then, the minimax choice for A is A2 since the worst possible result is then having to pay 1, while the simple minimax choice for B is B2 since the worst possible result is then no payment. However, this solution is not stable, since if B believes A will choose A2 then B will choose B1 to gain 1; then if A believes B will choose B1 then A will choose A1 to gain 3; and then B will choose B2; and eventually both players will realize the difficulty of making a choice. So a more stable strategy is needed.
Some choices are "dominated" by others and can be eliminated: A will not choose A3 since either A1 or A2 will produce a better result, no matter what B chooses; B will not choose B3 since some mixtures of B1 and B2 will produce a better result, no matter what A chooses.
A can avoid having to make an expected payment of more than 1∕3 by choosing A1 with probability 1∕6 and A2 with probability 5∕6: The expected payoff for A would be 3 × (1∕6) − 1 × (5∕6) = −1∕3 in case B chose B1 and −2 × (1∕6) + 0 × (5∕6) = −1/3 in case B chose B2. Similarly, B can ensure an expected gain of at least 1/3, no matter what A chooses, by using a randomized strategy of choosing B1 with probability 1∕3 and B2 with probability 2∕3. These mixed minimax strategies are now stable and cannot be improved.
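The expected-payoff arithmetic above can be checked mechanically; the short Python sketch below reproduces it with exact fractions (the variable names are illustrative, and the payoff entries are the ones quoted in the example).

```python
from fractions import Fraction as F

# Payoffs to A for the undominated choices quoted in the example.
payoff = {("A1", "B1"): 3, ("A1", "B2"): -2,
          ("A2", "B1"): -1, ("A2", "B2"): 0}

# A mixes A1 with probability 1/6 and A2 with probability 5/6.
p_a = {"A1": F(1, 6), "A2": F(5, 6)}
for b in ("B1", "B2"):
    expected = sum(p_a[a] * payoff[(a, b)] for a in p_a)
    print(f"A's expected payoff against {b}: {expected}")  # -1/3 in both cases

# B mixes B1 with probability 1/3 and B2 with probability 2/3; B's gain is minus A's payoff.
p_b = {"B1": F(1, 3), "B2": F(2, 3)}
for a in ("A1", "A2"):
    expected = sum(p_b[b] * -payoff[(a, b)] for b in p_b)
    print(f"B's expected gain against {a}: {expected}")    # 1/3 in both cases
```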
Frequently, in game theory, maximin is distinct from minimax. Minimax is used in zero-sum games to denote minimizing the opponent's maximum payoff. In a zero-sum game, this is identical to minimizing one's own maximum loss, and to maximizing one's own minimum gain.
"Maximin" is a term commonly used for non-zero-sum games to describe the strategy which maximizes one's own minimum payoff. In non-zero-sum games, this is not generally the same as minimizing the opponent's maximum gain, nor the same as the Nash equilibrium strategy.
The minimax values are very important in the theory of repeated games. One of the central theorems in this theory, the folk theorem, relies on the minimax values.
In combinatorial game theory, there is a minimax algorithm for game solutions.
A simple version of the minimax "algorithm", stated below, deals with games such as tic-tac-toe, where each player can win, lose, or draw.
If player A "can" win in one move, their best move is that winning move.
If player B knows that one move will lead to the situation where player A "can" win in one move, while another move will lead to the situation where player A can, at best, draw, then player B's best move is the one leading to a draw.
Late in the game, it's easy to see what the "best" move is.
The Minimax algorithm helps find the best move, by working backwards from the end of the game. At each step it assumes that player A is trying to maximize the chances of A winning, while on the next turn player B is trying to minimize the chances of A winning (i.e., to maximize B's own chances of winning).
A minimax algorithm is a recursive algorithm for choosing the next move in an n-player game, usually a two-player game. A value is associated with each position or state of the game. This value is computed by means of a position evaluation function and it indicates how good it would be for a player to reach that position. The player then makes the move that maximizes the minimum value of the position resulting from the opponent's possible following moves. If it is A's turn to move, A gives a value to each of their legal moves.
A possible allocation method consists of assigning a certain win for A as +1 and for B as −1. This leads to combinatorial game theory as developed by John Horton Conway. An alternative is using a rule that if the result of a move is an immediate win for A it is assigned positive infinity and if it is an immediate win for B, negative infinity. The value to A of any other move is the maximum of the values resulting from each of B's possible replies. For this reason, A is called the "maximizing player" and B is called the "minimizing player", hence the name "minimax algorithm". The above algorithm will assign a value of positive or negative infinity to any position, since the value of every position will be the value of some final winning or losing position. In practice, this is only feasible near the very end of complicated games such as chess or Go, since it is not computationally possible to look ahead as far as the completion of the game except towards its end; instead, positions are given finite values as estimates of the degree of belief that they will lead to a win for one player or another.
This can be extended if we can supply a heuristic evaluation function which gives values to non-final game states without considering all possible following complete sequences. We can then limit the minimax algorithm to look only at a certain number of moves ahead. This number is called the "look-ahead", measured in "plies". For example, the chess computer Deep Blue (the first one to beat a reigning world champion, Garry Kasparov at that time) looked ahead at least 12 plies, then applied a heuristic evaluation function.
The algorithm can be thought of as exploring the nodes of a "game tree". The "effective branching factor" of the tree is the average number of children of each node (i.e., the average number of legal moves in a position). The number of nodes to be explored usually increases exponentially with the number of plies (it is less than exponential if evaluating forced moves or repeated positions). The number of nodes to be explored for the analysis of a game is therefore approximately the branching factor raised to the power of the number of plies. It is therefore impractical to completely analyze games such as chess using the minimax algorithm.
The performance of the naïve minimax algorithm may be improved dramatically, without affecting the result, by the use of alpha-beta pruning.
Other heuristic pruning methods can also be used, but not all of them are guaranteed to give the same result as the un-pruned search.
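To illustrate how pruning can skip branches without changing the minimax result, here is a minimal, hedged sketch of alpha-beta pruning over a game tree encoded as nested lists; the tree representation and the leaf values are assumptions made for this example.

```python
import math

# Game tree encoded as nested lists; integers are heuristic leaf values.
# This particular tree is an illustrative assumption.
tree = [[3, 5], [2, [9, 1]], [0, -4]]

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, int):        # leaf: return its heuristic value
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:        # remaining children cannot change the result
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

print(alphabeta(tree, maximizing=True))  # 3, the same value un-pruned minimax returns
```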
A naïve minimax algorithm may be trivially modified to additionally return an entire Principal Variation along with a minimax score.
The pseudocode for the depth-limited minimax algorithm is given below.
The minimax function returns a heuristic value for leaf nodes (terminal nodes and nodes at the maximum search depth).
Non-leaf nodes inherit their value from a descendant leaf node.
The heuristic value is a score measuring the favorability of the node for the maximizing player.
Hence nodes resulting in a favorable outcome, such as a win, for the maximizing player have higher scores than nodes more favorable for the minimizing player.
The heuristic values for terminal (game-ending) leaf nodes are scores corresponding to a win, loss, or draw for the maximizing player.
For non-terminal leaf nodes at the maximum search depth, an evaluation function estimates a heuristic value for the node.
The quality of this estimate and the search depth determine the quality and accuracy of the final minimax result.
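A minimal Python rendering of that depth-limited recursion is sketched below; the nested-list game-tree representation, the placeholder evaluation function, and the leaf values are assumptions made for illustration rather than a canonical implementation.

```python
import math

def evaluate(node):
    # Placeholder heuristic for non-terminal nodes cut off by the depth limit.
    # A real engine would score the position here; returning 0 is an assumption.
    return 0

def minimax(node, depth, maximizing_player):
    # Leaf node (terminal, or the search depth is exhausted): return a heuristic value.
    if isinstance(node, int):
        return node
    if depth == 0:
        return evaluate(node)
    if maximizing_player:
        # The maximizing player picks the child with the largest minimax value.
        return max(minimax(child, depth - 1, False) for child in node)
    # The minimizing player picks the child with the smallest minimax value.
    return min(minimax(child, depth - 1, True) for child in node)

# Illustrative tree: inner lists are internal nodes, integers are heuristic leaf values.
tree = [[10, math.inf], [5, -10]]
print(minimax(tree, depth=4, maximizing_player=True))  # prints 10
```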
Minimax treats the two players (the maximizing player and the minimizing player) separately in its code. Based on the observation that $\max(a, b) = -\min(-a, -b)$, minimax may often be simplified into the negamax algorithm.
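A brief sketch of that simplification, reusing the nested-list tree convention from the example above (again an assumption made purely for illustration):

```python
def negamax(node, depth, color):
    # color is +1 when it is the maximizing player's turn and -1 otherwise.
    if depth == 0 or isinstance(node, int):
        value = node if isinstance(node, int) else 0  # 0 stands in for a heuristic score
        return color * value
    # Because max(a, b) == -min(-a, -b), both players simply maximize negated child values.
    return max(-negamax(child, depth - 1, -color) for child in node)

print(negamax([[10, 3], [5, -10]], depth=4, color=1))  # 3, the same value minimax gives
```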
Suppose the game being played only has a maximum of two possible moves per player each turn. The algorithm generates the tree on the right, where the circles represent the moves of the player running the algorithm ("maximizing player"), and squares represent the moves of the opponent ("minimizing player"). Because of the limitation of computation resources, as explained above, the tree is limited to a "look-ahead" of 4 moves.
The algorithm evaluates each "leaf node" using a heuristic evaluation function, obtaining the values shown. The moves where the "maximizing player" wins are assigned with positive infinity, while the moves that lead to a win of the "minimizing player" are assigned with negative infinity. At level 3, the algorithm will choose, for each node, the smallest of the "child node" values, and assign it to that same node (e.g. the node on the left will choose the minimum between "10" and "+∞", therefore assigning the value "10" to itself). The next step, in level 2, consists of choosing for each node the largest of the "child node" values. Once again, the values are assigned to each "parent node". The algorithm continues evaluating the maximum and minimum values of the child nodes alternately until it reaches the "root node", where it chooses the move with the largest value (represented in the figure with a blue arrow). This is the move that the player should make in order to "minimize" the "maximum" possible loss.
Minimax theory has been extended to decisions where there is no other player, but where the consequences of decisions depend on unknown facts. For example, deciding to prospect for minerals entails a cost which will be wasted if the minerals are not present, but will bring major rewards if they are. One approach is to treat this as a game against "nature" (see move by nature), and using a similar mindset as Murphy's law or resistentialism, take an approach which minimizes the maximum expected loss, using the same techniques as in the two-person zero-sum games.
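As a small illustration of this "game against nature" framing, the sketch below picks the action whose worst-case loss is smallest; the loss figures are invented for the example and are not from any real prospecting analysis.

```python
# Illustrative loss table for a decision under uncertainty ("game against nature").
# Rows are actions, columns are states of nature; a negative loss represents a gain.
losses = {
    "prospect":        {"minerals present": -100, "minerals absent": 20},
    "do not prospect": {"minerals present": 0,    "minerals absent": 0},
}

def minimax_choice(table):
    # Choose the action that minimizes the maximum possible loss.
    return min(table, key=lambda action: max(table[action].values()))

print(minimax_choice(losses))  # "do not prospect": worst case 0 beats prospecting's 20
```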
In addition, expectiminimax trees have been developed, for two-player games in which chance (for example, dice) is a factor.
In classical statistical decision theory, we have an estimator $\delta$ that is used to estimate a parameter $\theta \in \Theta$. We also assume a risk function $R(\theta, \delta)$, usually specified as the integral of a loss function. In this framework, $\tilde{\delta}$ is called minimax if it satisfies $\sup_\theta R(\theta, \tilde{\delta}) = \inf_\delta \sup_\theta R(\theta, \delta)$.
An alternative criterion in the decision theoretic framework is the Bayes estimator in the presence of a prior distribution $\Pi$. An estimator is Bayes if it minimizes the "average" risk $\int_\Theta R(\theta, \delta)\, d\Pi(\theta)$.
A key feature of minimax decision making is being non-probabilistic: in contrast to decisions using expected value or expected utility, it makes no assumptions about the probabilities of various outcomes, just scenario analysis of what the possible outcomes are. It is thus robust to changes in the assumptions, in a way that these other decision techniques are not. Various extensions of this non-probabilistic approach exist, notably minimax regret and Info-gap decision theory.
Further, minimax only requires ordinal measurement (that outcomes be compared and ranked), not "interval" measurements (that outcomes include "how much better or worse"), and returns ordinal data, using only the modeled outcomes: the conclusion of a minimax analysis is: "this strategy is minimax, as the worst case is (outcome), which is less bad than any other strategy". Compare to expected value analysis, whose conclusion is of the form: "this strategy yields E(X) = n". Minimax thus can be used on ordinal data, and can be more transparent.
In philosophy, the term "maximin" is often used in the context of John Rawls's "A Theory of Justice," where he refers to it (Rawls 1971, p. 152) in the context of The Difference Principle.
Rawls defined this principle as the rule which states that social and economic inequalities should be arranged so that "they are to be of the greatest benefit to the least-advantaged members of society".
|
https://en.wikipedia.org/wiki?curid=19589
|
Minnesota
Minnesota is a state in the Upper Midwest, Great Lakes, and northern regions of the United States. Minnesota was admitted as the 32nd U.S. state on May 11, 1858, created from the eastern half of the Minnesota Territory. The state has many lakes, and is known as the "Land of 10,000 Lakes". Its official motto is "L'Étoile du Nord" (French for "The Star of the North").
Minnesota is the 12th largest in area and the 22nd most populous of the U.S. states; nearly 55% of its residents live in the Minneapolis–Saint Paul metropolitan area (known as the "Twin Cities"). This area has the largest concentration of transportation, business, industry, education, and government in the state. Urban centers in "Greater Minnesota" include Duluth, Mankato, Moorhead, Rochester and St. Cloud.
The geography of the state consists of western prairies now given over to intensive agriculture; deciduous forests in the southeast, now partially cleared, farmed, and settled; and the less populated North Woods, used for mining, forestry, and recreation.
For thousands of years before Europeans arrived, Minnesota was inhabited by various indigenous peoples. French explorers, missionaries, and fur traders began exploring the region in the 17th century, encountering the Dakota and Ojibwe/Anishinaabe tribes. Much of what is now Minnesota was part of the vast French holding of Louisiana, which was purchased by the United States in 1803. Following several territorial reorganizations, Minnesota in its current form was admitted as the country's 32nd state on May 11, 1858. Like many Midwestern states, it remained sparsely populated and centered on lumber and agriculture.
During the mid-19th and early 20th centuries, European immigrants began settling the state. Many of them came from Scandinavia, Germany, and Central Europe (e.g., Czechs and Slovaks). To this day, Minnesota remains a center of Scandinavian American, German American, and Czech American cultures (e.g., Kolach Days or Kolacky Days in Montgomery, and Bohemian Flats in Minneapolis). Historical evidence suggests that many people immigrated to Minnesota as a result of the failed European Revolutions of 1848.
Minnesota's standard of living index is among the highest in the United States, behind only Massachusetts and Connecticut, and the state is also among the best-educated and wealthiest in the nation. In recent years, its economy has greatly diversified, shifting from traditional activities such as agriculture and resource extraction to services and finance. While Minnesota's population is still largely dominated by Scandinavian- and German-Americans, domestic migration and immigration from Asia, the Horn of Africa, the Middle East, and Latin America have broadened the demographics of the state.
The word "Minnesota" comes from the Dakota name for the Minnesota River, which got its name from one of two words in Dakota: "mní sóta", which means "clear blue water", or "Mníssota", which means "cloudy water". Dakota people demonstrated the name to early settlers by dropping milk into water and calling it "mní sóta". Many places in the state have similar Dakota names, such as Minnehaha Falls ("curling water" or waterfall), Minneiska ("white water"), Minneota ("much water"), Minnetonka ("big water"), Minnetrista ("crooked water"), and Minneapolis, a hybrid word combining Dakota "mní" ("water") and "-polis" (Greek for "city").
Minnesota is the second northernmost U.S. state (after Alaska) and northernmost contiguous state. The isolated Northwest Angle in Lake of the Woods county is the only part of the 48 contiguous states north of the 49th parallel. The state is part of the U.S. region known as the Upper Midwest and part of North America's Great Lakes Region. It shares a Lake Superior water border with Michigan and a land and water border with Wisconsin to the east. Iowa is to the south, North Dakota and South Dakota are to the west, and the Canadian provinces of Ontario and Manitoba are to the north. With , or approximately 2.25% of the United States, Minnesota is the 12th-largest state.
Minnesota has some of the earth's oldest rocks, gneisses that are about 3.6 billion years old (80% as old as the planet). About 2.7 billion years ago basaltic lava poured out of cracks in the floor of the primordial ocean; the remains of this volcanic rock formed the Canadian Shield in northeast Minnesota. The roots of these volcanic mountains and the action of Precambrian seas formed the Iron Range of northern Minnesota. Since a period of volcanism 1.1 billion years ago, Minnesota's geological activity has been more subdued, with no volcanism or mountain formation, but with repeated incursions of the sea, which left behind multiple strata of sedimentary rock.
In more recent times, massive ice sheets at least one kilometer thick ravaged the state's landscape and sculpted its terrain. The Wisconsin glaciation left 12,000 years ago. These glaciers covered all of Minnesota except the far southeast, an area characterized by steep hills and streams that cut into the bedrock. This area is known as the Driftless Zone for its absence of glacial drift. Much of the remainder of the state has 50 feet (15 m) or more of glacial till left behind as the last glaciers retreated. Gigantic Lake Agassiz formed in the northwest 13,000 years ago. Its bed created the fertile Red River valley, and its outflow, glacial River Warren, carved the valley of the Minnesota River and the Upper Mississippi downstream from Fort Snelling. Minnesota is geologically quiet today; it experiences earthquakes infrequently, most of them minor.
The state's high point is Eagle Mountain at 2,301 feet (701 m), which is only away from the low point of 601 feet (183 m) at the shore of Lake Superior. Notwithstanding dramatic local differences in elevation, much of the state is a gently rolling peneplain.
Two major drainage divides meet in Minnesota's northeast in rural Hibbing, forming a triple watershed. Precipitation can follow the Mississippi River south to the Gulf of Mexico, the Saint Lawrence Seaway east to the Atlantic Ocean, or the Hudson Bay watershed to the Arctic Ocean.
The state's nickname "Land of 10,000 Lakes" is apt, as there are 11,842 Minnesota lakes over in size. Minnesota's portion of Lake Superior is the largest at and deepest (at ) body of water in the state. Minnesota has 6,564 natural rivers and streams that cumulatively flow for . The Mississippi River begins its journey from its headwaters at Lake Itasca and crosses the Iowa border downstream. It is joined by the Minnesota River at Fort Snelling, by the St. Croix River near Hastings, by the Chippewa River at Wabasha, and by many smaller streams. The Red River, in the bed of glacial Lake Agassiz, drains the northwest part of the state northward toward Canada's Hudson Bay. Approximately of wetlands are within Minnesota's borders, the most of any state except Alaska.
Minnesota has four ecological provinces: prairie parkland, in the southwestern and western parts of the state; the eastern broadleaf forest (Big Woods) in the southeast, extending in a narrowing strip to the state's northwestern part, where it transitions into tallgrass aspen parkland; and the northern Laurentian mixed forest, a transitional forest between the northern boreal forest and the broadleaf forests to the south. These northern forests are a vast wilderness of pine and spruce trees mixed with patchy stands of birch and poplar.
Much of Minnesota's northern forest has undergone logging, leaving only a few patches of old growth forest today in areas such as in the Chippewa National Forest and the Superior National Forest, where the Boundary Waters Canoe Area Wilderness has some of unlogged land. Although logging continues, regrowth and replanting keep about a third of the state forested. Nearly all Minnesota's prairies and oak savannas have been fragmented by farming, grazing, logging, and suburban development.
While loss of habitat has affected native animals such as the pine marten, elk, woodland caribou, and bison, others like whitetail deer and bobcat thrive. Minnesota has the nation's largest population of timber wolves outside Alaska, and supports healthy populations of black bears, moose, and gophers. Located on the Mississippi Flyway, Minnesota hosts migratory waterfowl such as geese and ducks, and game birds such as grouse, pheasants, and turkeys. It is home to birds of prey, including the largest number of breeding pairs of bald eagles in the lower 48 states as of 2007, red-tailed hawks, and snowy owls. Hawk Ridge is one of the premier bird watching sites in North America. The lakes teem with sport fish such as walleye, bass, muskellunge, and northern pike, and brook, brown, and rainbow trout populate streams in the southeast and northeast.
Minnesota experiences temperature extremes characteristic of its continental climate, with cold winters and hot summers. The lowest temperature recorded was at Tower on February 2, 1996, and the highest was at Moorhead on July 6, 1936. Meteorological events include rain, snow, blizzards, thunderstorms, hail, derechos, tornadoes, and high-velocity straight-line winds. The growing season varies from 90 days in the Iron Range to 160 days in southeast Minnesota near the Mississippi River, and average temperatures range from . Average summer dewpoints range from about in the south to about in the north. Average annual precipitation ranges from , and droughts occur every 10 to 50 years.
Minnesota's first state park, Itasca State Park, was established in 1891, and is the source of the Mississippi River. Today Minnesota has 72 state parks and recreation areas, 58 state forests covering about four million acres (16,000 km2), and numerous state wildlife preserves, all managed by the Minnesota Department of Natural Resources. The Chippewa and Superior national forests comprise . The Superior National Forest in the northeast contains the Boundary Waters Canoe Area Wilderness, which encompasses over a million acres (4,000 km2) and a thousand lakes. To its west is Voyageurs National Park. The Mississippi National River and Recreation Area (MNRRA) is a corridor along the Mississippi River through the Minneapolis–St. Paul Metropolitan Area connecting a variety of sites of historic, cultural, and geologic interest.
Before European settlement of North America, a subculture of Sioux called the Dakota people lived in Minnesota. As Europeans settled the east coast, Native Americans moved away from them, causing migration of the Anishinaabe (also known as Ojibwe) and other Native Americans into the Minnesota area. The first Europeans in the area were French voyageur fur traders who arrived in the 17th century and began using the Grand Portage to access trapping and trading areas further inland. Late that century, Anishinaabe migrated westward to Minnesota, causing tensions with the Dakota people. Explorers such as Daniel Greysolon, Sieur du Lhut, Father Louis Hennepin, Jonathan Carver, Henry Schoolcraft, and Joseph Nicollet mapped the state.
The region was part of Spanish Louisiana from 1762 to 1802. The portion of the state east of the Mississippi River became part of the United States at the end of the American Revolutionary War, when the Second Treaty of Paris was signed. Land west of the Mississippi was acquired with the Louisiana Purchase, though part of the Red River Valley was disputed until the Treaty of 1818. By the late 1700s, the North West Company had established the post of Fort Charlotte at the Lake Superior end of the Grand Portage. It moved 50 miles northeast to Fort William in 1803. In 1805 Zebulon Pike bargained with Native Americans to acquire land at the confluence of the Minnesota and Mississippi rivers. The construction of Fort Snelling followed between 1819 and 1825. Its soldiers built a grist mill and a sawmill at Saint Anthony Falls, the first of the water-powered industries around which the city of Minneapolis later grew. Meanwhile, squatters, government officials, and sight-seers had settled near the fort. In 1839 the army forced them to move downriver and they settled in the area that became St. Paul. Minnesota Territory was formed on March 3, 1849. The first territorial legislature (held September 2, 1849) was dominated by men from New England or of New England ancestry. Thousands of people had come to build farms and cut timber, and Minnesota became the 32nd U.S. state on May 11, 1858. The founding population was so overwhelmingly of New England origins that the state was dubbed "the New England of the West".
Treaties between European settlers and the Dakota and Ojibwe gradually forced the natives off their lands and onto smaller reservations. In 1861 residents of Mankato formed the Knights of the Forest, with a goal of eliminating all Native Americans from Minnesota. As conditions deteriorated for the Dakota, tensions rose, leading to the Dakota War of 1862. The six-week war ended with the execution of 38 Dakota and the exile of most of the rest to the Crow Creek Reservation in Dakota Territory. As many as 800 white settlers died during the war.
Logging and farming were mainstays of Minnesota's early economy. The sawmills at Saint Anthony Falls and logging centers like Pine City, Marine on St. Croix, Stillwater, and Winona processed high volumes of lumber. These cities were on rivers that were ideal for transportation. Saint Anthony Falls was later tapped to provide power for flour mills. Innovations by Minneapolis millers led to the production of Minnesota "patent" flour, which commanded almost double the price of "bakers'" or "clear" flour, which it replaced. By 1900 Minnesota mills, led by Pillsbury, Northwestern and the Washburn-Crosby Company (a forerunner of General Mills), were grinding 14.1 percent of the nation's grain.
The state's iron-mining industry was established with the discovery of iron in the Vermilion Range and the Mesabi Range in the 1880s, and in the Cuyuna Range in the early 20th century. The ore was shipped by rail to Duluth and Two Harbors, then loaded onto ships and transported eastward over the Great Lakes.
Industrial development and the rise of manufacturing caused the population to shift gradually from rural areas to cities during the early 20th century. Nevertheless, farming remained prevalent. Minnesota's economy was hit hard by the Great Depression, resulting in lower prices for farmers, layoffs among iron miners, and labor unrest. Compounding the adversity, western Minnesota and the Dakotas were hit by drought from 1931 to 1935. New Deal programs provided some economic turnaround. The Civilian Conservation Corps and other programs around the state established some jobs for Indians on their reservations, and the Indian Reorganization Act of 1934 provided the tribes with a mechanism of self-government. This gave Natives a greater voice within the state, and promoted more respect for tribal customs because religious ceremonies and native languages were no longer suppressed.
After World War II, industrial development quickened. New technology increased farm productivity through automation of feedlots for hogs and cattle, machine milking at dairy farms, and raising chickens in large buildings. Planting became more specialized with hybridization of corn and wheat, and farm machinery such as tractors and combines became the norm. University of Minnesota professor Norman Borlaug contributed to these developments as part of the Green Revolution. Suburban development accelerated due to increased postwar housing demand and convenient transportation. Increased mobility in turn enabled more specialized jobs.
Minnesota became a center of technology after World War II. Engineering Research Associates was formed in 1946 to develop computers for the United States Navy. It later merged with Remington Rand, and then became Sperry Rand. William Norris left Sperry in 1957 to form Control Data Corporation (CDC). Cray Research was formed when Seymour Cray left CDC to form his own company. Medical device maker Medtronic also started business in the Twin Cities in 1949.
Saint Paul, in east-central Minnesota along the banks of the Mississippi River, has been Minnesota's capital city since 1849, first as capital of the Territory of Minnesota, and then as the state capital since 1858.
Saint Paul is adjacent to Minnesota's most populous city, Minneapolis; they and their suburbs are collectively known as the Twin Cities metropolitan area, the country's 16th-largest metropolitan area and home to about 55 percent of the state's population. The remainder of the state is known as "Greater Minnesota" or "Outstate Minnesota".
The state has 17 cities with populations above 50,000 as of the 2010 census. In descending order of population, they are Minneapolis, Saint Paul, Rochester, Duluth, Bloomington, Brooklyn Park, Plymouth, Saint Cloud, Woodbury, Eagan, Maple Grove, Coon Rapids, Eden Prairie, Minnetonka, Burnsville, Apple Valley, Blaine, and Lakeville. Of these only Rochester, Duluth, and Saint Cloud are outside the Twin Cities metropolitan area.
Minnesota's population continues to grow, primarily in the urban centers. The populations of metropolitan Sherburne and Scott counties doubled between 1980 and 2000, while 40 of the state's 87 counties lost residents over the same period.
From fewer than 6,120 white settlers in 1850, Minnesota's official population grew to over 1.7 million by 1900. Each of the next six decades saw a 15 percent increase in population, reaching 3.4 million in 1960. Growth then slowed, rising 11 percent to 3.8 million in 1970, and an average of 9 percent over the next three decades to 4.9 million in the 2000 Census.
The United States Census Bureau estimates the population of Minnesota was 5,639,632 on July 1, 2019, a 6.33 percent increase since the 2010 United States Census. The rate of population change, and age and gender distributions, approximate the national average. Minnesota's center of population is in Hennepin County.
As of the 2010 Census Minnesota's population was 5,303,925. The gender makeup of the state was 49.6% male and 50.4% female. 24.2% of the population was under the age of 18; 9.5% between the ages of 18 and 24; 26.3% from 25 to 44; 27.1% from 45 to 64; and 12.9% 65 or older.
The table below shows the racial composition of Minnesota's population as of 2017.
According to the 2017 American Community Survey, 5.1% of Minnesota's population were of Hispanic or Latino origin (of any race): Mexican (3.5%), Puerto Rican (0.2%), Cuban (0.1%), and other Hispanic or Latino origin (1.2%). The ancestry groups claimed by more than five percent of the population were: German (33.8%), Norwegian (15.3%), Irish (10.5%), Swedish (8.1%), and English (5.4%).
In 2011 non-Hispanic whites were involved in 72.3 percent of all the births. Minnesota's growing minority groups, however, still form a smaller percentage of the population than in the nation as a whole.
Minnesota has the country's largest Somali population, with an estimated 57,000 people, the largest concentration outside of the Horn of Africa.
The majority of Minnesotans are Protestants, including a large Lutheran contingent, owing to the state's largely Northern European ethnic makeup. Roman Catholics (of largely German, Irish, French and Slavic descent) make up the largest single Christian denomination. A 2010 survey by the Pew Forum on Religion and Public Life showed that 32 percent of Minnesotans were affiliated with Mainline Protestant traditions, 21 percent were Evangelical Protestants, 28 percent Roman Catholic, 1 percent each Jewish, Muslim, Buddhist, and Black Protestant, and smaller amounts of other faiths, with 13 percent unaffiliated. According to the Association of Religion Data Archives, the denominations with the most adherents in 2010 were the Roman Catholic Church with 1,150,367; the Evangelical Lutheran Church in America with 737,537; and the Lutheran Church–Missouri Synod with 182,439. This is broadly consistent with the results of the 2001 American Religious Identification Survey, which also gives detailed percentages for many individual denominations. The international Confessional Evangelical Lutheran Conference is headquartered in Mankato, Minnesota. Although Christianity is dominant, Minnesota has a long history with non-Christian faiths. Ashkenazi Jewish pioneers set up Saint Paul's first synagogue in 1856. Minnesota is home to more than 30 mosques, mostly in the Twin Cities metro area. The Temple of ECK, the spiritual home of Eckankar, is based in Minnesota.
Once primarily a producer of raw materials, Minnesota's economy has transformed to emphasize finished products and services. Perhaps the most significant characteristic of the economy is its diversity; the relative outputs of its business sectors closely match the United States as a whole. Minnesota's economy had a gross domestic product of $262 billion in 2008, with 33 of the United States' top 1,000 publicly traded companies by revenue headquartered in Minnesota, including Target, UnitedHealth Group, 3M, General Mills, U.S. Bancorp, Ameriprise, Hormel, Land O' Lakes, SuperValu, Best Buy, and Valspar. Private companies based in Minnesota include Cargill, the largest privately owned company in the United States, and Carlson Companies, the parent company of Radisson Hotels.
Minnesota's per capita personal income in 2008 was $42,772, the tenth-highest in the nation. Its three-year median household income from 2002 to 2004 was $55,914, ranking fifth in the U.S. and first among the 36 states not on the Atlantic coast.
As of December 2018 the state's unemployment rate was 2.8 percent.
Minnesota's earliest industries were fur trading and agriculture. Minneapolis grew around the flour mills powered by St. Anthony Falls. Although less than one percent of the population is now employed in the agricultural sector, it remains a major part of the state's economy, ranking sixth in the nation in the value of products sold. The state is the nation's largest producer of sugar beets, sweet corn, and peas for processing, and farm-raised turkeys. Minnesota is also a large producer of corn and soybeans, and has the most food cooperatives per capita in the United States. Forestry remains strong, including logging, pulpwood processing and paper production, and forest products manufacturing. Minnesota was famous for its soft-ore mines, which produced a significant portion of the world's iron ore for more than a century. Although the high-grade ore is now depleted, taconite mining continues, using processes developed locally to save the industry. In 2004 the state produced 75 percent of the country's usable iron ore. The mining boom created the port of Duluth, which continues to be important for shipping ore, coal, and agricultural products. The manufacturing sector now includes technology and biomedical firms, in addition to the older food processors and heavy industry. The nation's first indoor shopping mall was Edina's Southdale Center, and its largest is Bloomington's Mall of America.
Minnesota is one of 42 U.S. states with its own lottery; its games include Powerball, Mega Millions, Lotto America (all three multi-state), Northstar Cash, and Gopher 5.
Minnesota produces ethanol fuel and was the first state to mandate its use, a ten percent mix (E10). In 2019 there were more than 411 service stations supplying E85 fuel, comprising 85 percent ethanol and 15 percent gasoline. A two percent biodiesel blend has been required in diesel fuel since 2005. Minnesota is ranked in the top ten for wind energy production. The state gets nearly one fifth of all its electrical energy from wind.
Xcel Energy is the state's largest utility and is headquartered in the state; it is one of five investor-owned utilities. There are also a number of municipal utilities.
Minnesota has a progressive income tax structure; the four brackets of state income tax rates are 5.35, 7.05, 7.85 and 9.85 percent. As of 2008 Minnesota was ranked 12th in the nation in per capita total state and local taxes. In 2008 Minnesotans paid 10.2 percent of their income in state and local taxes; the U.S. average was 9.7 percent. The state sales tax in Minnesota is 6.875 percent, but clothing, prescription drug medications and food items for home consumption are exempt. The state legislature may allow municipalities to institute local sales taxes and special local taxes, such as the 0.5 percent supplemental sales tax in Minneapolis. Excise taxes are levied on alcohol, tobacco, and motor fuel. The state imposes a use tax on items purchased elsewhere but used within Minnesota. Owners of real property in Minnesota pay property tax to their county, municipality, school district, and special taxing districts.
Minnesota's leading fine art museums include the Minneapolis Institute of Art, the Walker Art Center, the Frederick R. Weisman Art Museum, and The Museum of Russian Art (TMORA). All are in Minneapolis. The Minnesota Orchestra and the Saint Paul Chamber Orchestra are prominent full-time professional musical ensembles that perform concerts and offer educational programs to the Twin Cities community. The world-renowned Guthrie Theater moved into a new Minneapolis facility in 2006, boasting three stages and overlooking the Mississippi River. Attendance at theatrical, musical, and comedy events in the area is strong. Among U.S. cities, the Twin Cities' number of theater seats per capita ranks behind only New York City, with some 2.3 million theater tickets sold annually. The Minnesota Fringe Festival is an annual celebration of theatre, dance, improvisation, puppetry, kids' shows, visual art, and musicals. The summer festival consists of more than 800 performances over 11 days in Minneapolis, and is the largest non-juried performing arts festival in the United States.
The rigors and rewards of pioneer life on the prairie are the subject of "Giants in the Earth" by Ole Rolvaag and the "Little House" series of children's books by Laura Ingalls Wilder. Small-town life is portrayed grimly by Sinclair Lewis in the novel "Main Street", and more gently and affectionately by Garrison Keillor in his tales of Lake Wobegon. St. Paul native F. Scott Fitzgerald writes of the social insecurities and aspirations of the young city in stories such as "Winter Dreams" and "The Ice Palace" (published in "Flappers and Philosophers"). Henry Wadsworth Longfellow's epic poem "The Song of Hiawatha" was inspired by Minnesota and names many of the state's places and bodies of water. Minnesota native Robert Zimmerman (Bob Dylan) won the 2016 Nobel Prize in Literature. Science fiction writer Marissa Lingen lives here.
Minnesota musicians include Holly Henry, Bob Dylan, Eddie Cochran, The Andrews Sisters, The Castaways, The Trashmen, Prince, Soul Asylum, David Ellefson, Chad Smith, John Wozniak, Hüsker Dü, Owl City, The Replacements, and Dessa. Minnesotans helped shape the history of music through popular American culture: the Andrews Sisters' "Boogie Woogie Bugle Boy" was an iconic tune of World War II, while the Trashmen's "Surfin' Bird" and Bob Dylan epitomize two sides of the 1960s. In the 1980s, influential hit radio groups and musicians included Prince, The Original 7ven, Jimmy Jam & Terry Lewis, The Jets, Lipps Inc., and Information Society.
Minnesotans have also made significant contributions to comedy, theater, media, and film. The comic strip "Peanuts" was created by St. Paul native Charles M. Schulz. "A Prairie Home Companion", which first aired in 1974, became a long-running comedy radio show on National Public Radio. The cult sci-fi cable TV show "Mystery Science Theater 3000" was created by Joel Hodgson in Hopkins and Minneapolis, Minnesota. Another popular comedy staple developed in the 1990s, "The Daily Show", was co-created by Lizz Winstead and Madeleine Smithberg.
Joel and Ethan Coen, Terry Gilliam, Bill Pohlad, and Mike Todd contributed to the art of filmmaking as writers, directors, and producers. Notable actors from Minnesota include Loni Anderson, Richard Dean Anderson, James Arness, Jessica Biel, Rachael Leigh Cook, Julia Duffy, Mike Farrell, Judy Garland, Peter Graves, Josh Hartnett, Garrett Hedlund, Tippi Hedren, Jessica Lange, Kelly Lynch, E.G. Marshall, Laura Osnes, Melissa Peterman, Chris Pratt, Marion Ross, Jane Russell, Winona Ryder, Seann William Scott, Kevin Sorbo, Lea Thompson, Vince Vaughn, Jesse Ventura, and Steve Zahn.
Stereotypical traits of Minnesotans include "Minnesota nice", Lutheranism, a strong sense of community and shared culture, and a distinctive brand of North Central American English sprinkled with Scandinavian expressions. Potlucks, usually with a variety of hotdishes, are popular small-town church activities. A small segment of the Scandinavian population attend a traditional lutefisk dinner to celebrate Christmas. Life in Minnesota has also been depicted or used as a backdrop, in movies such as "Fargo", "Grumpy Old Men", "Grumpier Old Men", "Juno", "Drop Dead Gorgeous", "Young Adult", "A Serious Man", "New in Town", "Rio", and in famous television series like "Little House on the Prairie", "The Mary Tyler Moore Show", "The Golden Girls", "Coach", "The Rocky and Bullwinkle Show", "How I Met Your Mother" and "Fargo". Major movies shot on location in Minnesota include "That Was Then... This Is Now", "Purple Rain", "Airport", "Beautiful Girls", "North Country", "Untamed Heart", "Feeling Minnesota", "Jingle All The Way", "A Simple Plan", and "The Mighty Ducks films".
The Minnesota State Fair, advertised as "The Great Minnesota Get-Together", is an icon of state culture. In a state of 5.5 million people, there were more than 1.8 million visitors to the fair in 2014, setting a new attendance record. The fair covers the variety of Minnesota life, including fine art, science, agriculture, food preparation, 4-H displays, music, the midway, and corporate merchandising. It is known for its displays of seed art, butter sculptures of dairy princesses, the birthing barn, and the "fattest pig" competition. One can also find dozens of varieties of food on a stick, such as Pronto Pups, cheese curds, and deep-fried candy bars. On a smaller scale, many of these attractions are offered at numerous county fairs.
Other large annual festivals include the Saint Paul Winter Carnival, the Minnesota Renaissance Festival, Minneapolis' Aquatennial and Mill City Music Festival, Moondance Jam in Walker, Sonshine Christian music festival in Willmar, the Judy Garland Festival in Grand Rapids, the Eelpout Festival on Leech Lake, and the WE Fest in Detroit Lakes.
Minnesotans have low rates of premature death, infant mortality, cardiovascular disease, and occupational fatalities. They have long life expectancies, and high rates of health insurance and regular exercise. These and other measures have led two groups to rank Minnesota as the healthiest state in the nation; however, in one of these rankings, Minnesota descended from first to sixth in the nation between 2005 and 2009 because of low levels of public health funding and the prevalence of binge drinking. While overall health indicators are strong, Minnesota does have significant health disparities in minority populations.
On October 1, 2007, Minnesota became the 17th state to enact the Freedom to Breathe Act, a statewide smoking ban in restaurants and bars.
The Minnesota Department of Health is the primary state health agency responsible for public policy and regulation. Medical care in the state is provided by a comprehensive network of hospitals and clinics operated by a number of large providers including Allina Hospitals & Clinics, CentraCare Health System, Essentia Health, HealthPartners, M Health Fairview and the Mayo Clinic Health System. There are two teaching hospitals and medical schools in Minnesota. The University of Minnesota Medical School is a high-rated teaching institution that has made a number of breakthroughs in treatment, and its research activities contribute significantly to the state's growing biotechnology industry. The Mayo Clinic, a world-renowned hospital based in Rochester, was founded by William Worrall Mayo, an immigrant from England.
"U.S. News & World Report" 2014–2015 survey ranked 4,743 hospitals in the United States in 16 specialized fields of care, and placed the Mayo Clinic in the top four in all fields except psychiatry, where it ranked seventh. The hospital ranked #1 in eight fields and #2 in three others. The Mayo Clinic and the University of Minnesota are partners in the Minnesota Partnership for Biotechnology and Medical Genomics, a state-funded program that conducts research into cancer, Alzheimer's disease, heart health, obesity, and other areas.
One of the Minnesota Legislature's first acts when it opened in 1858 was the creation of a normal school in Winona. Minnesota's commitment to education has contributed to a literate and well-educated populace. In 2009, according to the U.S. Census Bureau, Minnesota had the second-highest proportion of high school graduates, with 91.5% of people 25 and older holding a diploma, and the tenth-highest proportion of people with bachelor's degrees. In 2015, Minneapolis was named the nation's "Most Literate City", while St. Paul placed fourth, according to a major annual survey. In a 2013 study conducted by the National Center for Educational Statistics comparing the performance of eighth-grade students internationally in math and science, Minnesota ranked eighth in the world and third in the United States, behind Massachusetts and Vermont. In 2014, Minnesota students earned the tenth-highest average composite score in the nation on the ACT exam. In 2013, nationwide in per-student public education spending, Minnesota ranked 21st. While Minnesota has chosen not to implement school vouchers, it is home to the first charter school.
The state supports a network of public universities and colleges, including 37 institutions in the Minnesota State Colleges and Universities System, and five major campuses of the University of Minnesota system. It is also home to more than 20 private colleges and universities, six of which rank among the nation's top 100 liberal arts colleges, according to U.S. News & World Report.
Transportation in Minnesota is overseen by the Minnesota Department of Transportation (MnDOT) at the state level and by regional and local governments at the local level. Principal transportation corridors radiate from the Twin Cities metropolitan area and along interstate corridors in Greater Minnesota. The major Interstate highways are Interstate 35 (I-35), I-90, and I-94, with I-35 and I-94 connecting the Minneapolis–St. Paul area, and I-90 traveling east-west along the southern edge of the state. In 2006, a constitutional amendment was passed that required sales and use taxes on motor vehicles to fund transportation, with at least 40 percent dedicated to public transit. There are nearly two dozen rail corridors in Minnesota, most of which go through Minneapolis–St. Paul or Duluth. There is water transportation along the Mississippi River system and from the ports of Lake Superior.
Minnesota's principal airport is Minneapolis–St. Paul International Airport (MSP), a major passenger and freight hub for Delta Air Lines and Sun Country Airlines. Most other domestic carriers serve the airport. Large commercial jet service is provided at Duluth and Rochester, with scheduled commuter service to four smaller cities via Delta Connection carriers SkyWest Airlines, Compass Airlines, and Endeavor Air.
Public transit services are available in the regional urban centers in Minnesota including Metro Transit in the Twin Cities, opt out suburban operators Minnesota Valley Transit Authority, SouthWest Transit, Plymouth Metrolink, Maple Grove Transit and others. In Greater Minnesota transit services are provided by city systems such as Duluth Transit Authority, Mankato Transit System, MATBUS (Fargo-Moorhead), Rochester Public Transit, Saint Cloud Metro Bus, Winona Public Transit and others. Dial-a-Ride service is available for persons with disabilities in a majority of Minnesota Counties.
In addition to bus services, Amtrak's daily "Empire Builder" (Chicago–Seattle/Portland) train runs through Minnesota, calling at the Saint Paul Union Depot and five other stations. Intercity bus providers include Jefferson Lines, Greyhound, and Megabus. Local public transit is provided by bus networks in the larger cities and by two rail services. The Northstar Line commuter rail service runs from Big Lake to the Target Field station in downtown Minneapolis. From there, light rail runs to Saint Paul Union Depot on the Green Line, and to the MSP airport and the Mall of America via the Blue Line.
As with the federal government of the United States, power in Minnesota is divided into three branches: executive, legislative, and judicial.
The executive branch is headed by the governor. Governor Tim Walz, DFL (Democratic–Farmer–Labor), took office on January 7, 2019. The governor has a cabinet consisting of the leaders of various state government agencies, called commissioners. The other elected constitutional offices are secretary of state, attorney general, and state auditor.
Constitutional officeholders:
The Minnesota Legislature is a bicameral body consisting of the Senate and the House of Representatives. The state has 67 districts, each with about 60,000 people. Each district has one senator and two representatives, each senatorial district being divided into "A" and "B" sections for members of the House. Senators serve for four years and representatives for two years.
In the November 2010 Minnesota House election, the Republicans gained 25 house seats, giving them control of the body by a 72–62 margin. The 2010 Senate election also saw Minnesota voters elect a Republican majority in the state Senate for the first time since 1972. In 2012, the Democrats regained the House of Representatives by a margin of 73–61, picking up 11 seats; the Democrats also regained the Minnesota Senate. Control of the House shifted back to Republicans in the 2014 election, and returned to the DFL in the 2018 midterm election. Since 2016, the Senate has had a slim Republican majority.
House Leadership
Senate Leadership
Minnesota's court system has three levels. Most cases start in the district courts, which are courts of general jurisdiction. There are 279 district court judgeships in ten judicial districts. Appeals from the trial courts and challenges to certain governmental decisions are heard by the Minnesota Court of Appeals, consisting of 19 judges who typically sit in three-judge panels. The seven-justice Minnesota Supreme Court hears all appeals from the tax court, the workers' compensation court of appeals, first-degree murder convictions, and discretionary appeals from the court of appeals; it also has original jurisdiction over election disputes.
Two specialized courts within administrative agencies have been established: the workers' compensation court of appeals, and the tax court, which deals with non-criminal tax cases.
Supreme Court Justices
Associate Justices
In addition to the city and county levels of government found in the United States, Minnesota has other entities that provide governmental oversight and planning. Regional development commissions (RDCs) provide technical assistance to local governments in broad multi-county areas of the state. Along with these, metropolitan planning organizations (MPOs), such as the Metropolitan Council, provide planning and oversight of land use actions in metropolitan areas. Many lakes and rivers are overseen by watershed districts and soil and water conservation districts.
Minnesota's United States senators are Democrat Amy Klobuchar and Democrat Tina Smith. The outcome of the 2008 U.S. Senate election in Minnesota was contested until June 30, 2009; when the Minnesota Supreme Court ruled in favor of Democrat Al Franken, Republican Norm Coleman conceded defeat. Franken resigned on January 2, 2018, and Minnesota Governor Mark Dayton appointed his lieutenant governor, Tina Smith, to Franken's seat until a special election in November 2018. The state has eight congressional districts; they are represented by Jim Hagedorn (1st district; R), Angie Craig (2nd; DFL), Dean Phillips (3rd; DFL), Betty McCollum (4th; DFL), Ilhan Omar (5th; DFL), Tom Emmer (6th; R), Collin Peterson (7th; DFL), and Pete Stauber (8th; R).
Federal court cases are heard in the United States District Court for the District of Minnesota, which holds court in Minneapolis, St. Paul, Duluth, and Fergus Falls. Appeals are heard by the Eighth Circuit Court of Appeals, which is based in St. Louis, Missouri and routinely also hears cases in St. Paul.
The State of Minnesota was created by the United States federal government in the traditional and cultural range of lands occupied by the Dakota and Anishinaabe peoples as well as other Native American groups. After many years of unequal treaties and forced resettlement by the state and federal government, the tribes re-organized into sovereign tribal governments. Today, the tribal governments are divided into 11 semi-autonomous reservations that negotiate with the U.S. and the state on a bilateral basis:
Four Dakota Mdewakanton communities:
Seven Anishinaabe reservations:
The first six of the Anishinaabe bands compose the Minnesota Chippewa Tribe, the collective federally recognized tribal government of the Bois Forte, Fond du Lac, Grand Portage, Leech Lake, Mille Lacs, and White Earth reservations.
Minnesota is known for a politically active citizenry, and populism has been a long-standing force among the state's political parties. Minnesota has a consistently high voter turnout. In the 2008 U.S. presidential election, 78.2% of eligible Minnesotans voted—the highest percentage of any U.S. state—versus the national average of 61.2%. Voters can register on election day at their polling places with evidence of residency.
Hubert Humphrey brought national attention to the state with his address at the 1948 Democratic National Convention. Minnesotans have consistently cast their Electoral College votes for Democratic presidential candidates since 1976, longer than any other state. Minnesota is the only state in the nation that did not vote for Ronald Reagan in either of his presidential runs. Minnesota has gone for the Democratic Party in every presidential election since 1960, with the exception of 1972, when it was carried by Republican Richard Nixon.
Both the Democratic and Republican parties have major-party status in Minnesota, but the state-level Democratic party goes by a different name: it is officially the Minnesota Democratic-Farmer-Labor Party (DFL), formed out of a 1944 alliance of the Minnesota Democratic and Farmer-Labor parties.
The state has had active third-party movements. The Reform Party, now the Independence Party, was able to elect former mayor of Brooklyn Park and professional wrestler Jesse Ventura to the governorship in 1998. The Independence Party has received enough support to keep major-party status. The Green Party, while no longer having major-party status, has a large presence in municipal government, notably in Minneapolis and Duluth, where it competes directly with the DFL party for local offices. Major-party status in Minnesota (which grants state funding for elections) is reserved to parties whose candidates receive five percent or more of the vote in any statewide election (e.g., governor, secretary of state, U.S. president).
The state's U.S. Senate seats have generally been split since the early 1990s, and in the 108th and 109th Congresses, Minnesota's congressional delegation was split, with four representatives and one senator from each party. In the 2006 mid-term election, Democrats were elected to all state offices, except governor and lieutenant governor, where Republicans Tim Pawlenty and Carol Molnau narrowly won re-election. The DFL posted double-digit gains in both houses of the legislature, elected Amy Klobuchar to the U.S. Senate, and increased the party's U.S. House caucus by one. Keith Ellison (DFL) was elected as the first African American U.S. Representative from Minnesota, as well as the first Muslim elected to Congress nationwide. In 2008, DFLer and former comedian and radio talk show host Al Franken defeated incumbent Republican Norm Coleman in the U.S. Senate race by 312 votes out of three million cast.
In the 2010 election, Republicans took control of both chambers of the Minnesota legislature for the first time in 38 years and, with Mark Dayton's election, the DFL took the governor's office for the first time in 20 years. Two years later, the DFL regained control of both houses, and with Dayton in office the party held both the legislative and executive branches for the first time since 1990. Two years after that, the Republicans regained control of the Minnesota House, and in 2016 the GOP also regained control of the State Senate.
In 2018, the DFL retook control of the Minnesota House, while electing DFLer Tim Walz as Governor.
The Twin Cities area is the fifteenth-largest media market in the United States, as ranked by Nielsen Media Research. The state's other top markets are Fargo–Moorhead (118th nationally), Duluth–Superior (137th), Rochester–Mason City–Austin (152nd), and Mankato (200th).
Broadcast television in Minnesota and the Upper Midwest started on April 27, 1948, when KSTP-TV began broadcasting. Hubbard Broadcasting, which owns KSTP, is now the only locally owned television company in Minnesota. Twin Cities CBS station WCCO-TV and FOX station KMSP-TV are owned-and-operated by their respective networks. There are 39 analog broadcast stations and 23 digital channels broadcast over Minnesota.
The four largest daily newspapers are the "Star Tribune" in Minneapolis, the "Pioneer Press" in Saint Paul, the "Duluth News Tribune" in Duluth, and the "Post-Bulletin" in Rochester. "The Minnesota Daily" is the largest student-run newspaper in the U.S. Sites offering daily news on the Web include "The UpTake", "MinnPost", the Twin Cities "Daily Planet", business news site "Finance and Commerce" and Washington D.C.-based "Minnesota Independent". Weeklies including "City Pages" and monthly publications such as "Minnesota Monthly" are available.
Two of the largest public radio networks, Minnesota Public Radio (MPR) and Public Radio International (PRI), are based in the state. MPR has the largest audience of any regional public radio network in the nation, broadcasting on 46 radio stations as of 2019. PRI provides more than 400 hours of programming weekly to almost 800 affiliates. The state's oldest radio station, KUOM-AM, launched in 1922 and is among the ten oldest radio stations in the United States. The University of Minnesota-owned station is still on the air and has broadcast a college rock format since 1993.
Minnesota has an active program of organized amateur and professional sports. Tourism has become an important industry, especially in the Lake region. In the North Country, what had been an industrial area focused on mining and timber has largely been transformed into a vacation destination. Popular interest in the environment and environmentalism, added to traditional interests in hunting and fishing, has attracted a large urban audience within driving range.
Minnesota has professional men's teams in all major sports.
The Minnesota Vikings have played in the National Football League since their admission as an expansion franchise in 1961. They played in Metropolitan Stadium from 1961 through 1981 and in the Hubert H. Humphrey Metrodome from 1982 until its demolition after the 2013 season to make way for the team's new home, U.S. Bank Stadium. The Vikings' current stadium hosted Super Bowl LII in February 2018, and Super Bowl XXVI was played in the Metrodome in 1992. The Vikings have advanced to the Super Bowl four times (Super Bowl IV, Super Bowl VIII, Super Bowl IX, and Super Bowl XI), losing all four games to their AFL/AFC opponent.
The Minnesota Twins have played Major League Baseball in the Twin Cities since 1961. The franchise began as the original Washington Senators, a founding member of the American League in 1901, before relocating to Minnesota in 1961. The Twins won the 1987 and 1991 World Series, both seven-game series in which the home team won every game. The Twins also advanced to the 1965 World Series, where they lost to the Los Angeles Dodgers in seven games. The team has played at Target Field since 2010.
The Minneapolis Lakers of the National Basketball Association played in the Minneapolis Auditorium from 1947 to 1960, after which they relocated to Los Angeles. The Minnesota Timberwolves joined the NBA in 1989, and have played in Target Center since 1990.
The National Hockey League's Minnesota Wild play in St. Paul's Xcel Energy Center and reached 300 consecutive sold-out games on January 16, 2008. Previously, the Minnesota North Stars competed in the NHL from 1967 to 1993, playing in and losing the 1981 and 1991 Stanley Cup Finals.
Minnesota United FC joined Major League Soccer as an expansion team in 2017, having played in the lower-division North American Soccer League from 2010 to 2016. The team plays at Allianz Field in St. Paul. Previous professional soccer teams have included the Minnesota Kicks, which played at Metropolitan Stadium from 1976 to 1981, and the Minnesota Strikers from 1984 to 1988.
Minnesota also has minor-league professional sports teams. The Minnesota Swarm of the National Lacrosse League played at the Xcel Energy Center until the team moved to Georgia in 2015. Minor league baseball is represented by major league-sponsored teams and independent teams such as the St. Paul Saints, who play at CHS Field in St. Paul.
Professional women's sports include the Minnesota Lynx of the Women's National Basketball Association, winners of the 2011, 2013, 2015, and 2017 WNBA Championships, the Minnesota Lightning of the United Soccer Leagues W-League, the Minnesota Vixen of the Independent Women's Football League, the Minnesota Valkyrie of the Legends Football League, and the Minnesota Whitecaps of the National Women's Hockey League.
The Twin Cities campus of the University of Minnesota is a National Collegiate Athletic Association (NCAA) Division I school competing in the Big Ten Conference. Four additional schools in the state compete in NCAA Division I ice hockey: the University of Minnesota Duluth; Minnesota State University, Mankato; St. Cloud State University and Bemidji State University. There are nine NCAA Division II colleges in the Northern Sun Intercollegiate Conference, and twenty NCAA Division III colleges in the Minnesota Intercollegiate Athletic Conference and Upper Midwest Athletic Conference.
Minneapolis has hosted the NCAA Men's Division I Basketball Championship in 1951, 1992, 2001, and 2019.
The Hazeltine National Golf Club has hosted the U.S. Open, U.S. Women's Open, U.S. Senior Open and PGA Championship. The course also hosted the Ryder Cup in the fall of 2016, when it became one of two courses in the U.S. to host all major golf competitions. The Ryder Cup is scheduled to return in 2028.
Interlachen Country Club has hosted the U.S. Open, U.S. Women's Open, and Solheim Cup.
Winter Olympic Games medalists from the state include twelve of the twenty members of the gold medal 1980 ice hockey team (coached by Minnesota native Herb Brooks) and the bronze medalist U.S. men's curling team in the 2006 Winter Olympics. Swimmer Tom Malchow won an Olympic gold medal in the 2000 Summer games and a silver medal in 1996.
Grandma's Marathon is run every summer along the scenic North Shore of Lake Superior, and the Twin Cities Marathon winds around lakes and the Mississippi River during the peak of the fall color season. Farther north, Eveleth is the location of the United States Hockey Hall of Fame.
As the state's tourism promotion office, Explore Minnesota takes an entrepreneurial approach, leveraging the state's tourism investment with increased involvement from the private sector. A council of representatives from the state's tourism industry connects Explore Minnesota with tourism businesses and organizations. Its stated mission is to inspire and facilitate travel to and within Minnesota.
Tourism is a $15.3 billion industry in Minnesota, and a key sector of the state's economy. The leisure and hospitality industry—a major provider of tourism services—employs more than 270,000 workers, representing 11 percent of Minnesota's private sector employment. Leisure and hospitality also generates 18 percent of the state's sales tax revenues. Minnesota welcomes more than 73 million domestic and international travelers annually.
In 2014, Explore Minnesota launched a travel marketing campaign themed "Only in Minnesota" to increase awareness of Minnesota as a distinctive Midwest travel destination. The effort, which included a redesigned website and marketing reach to audiences in 14 states and provinces, was the largest travel marketing campaign in the state's history. A series of advertisements and a user-driven #OnlyinMN social media campaign engaged travelers, residents, businesses and visitors bureaus across the state. Explore Minnesota credits the campaign with generating 3.5 million trips to Minnesota and more than $388 million in traveler spending, and the campaign hashtag had been used more than half a million times as of May 2017. The slogan is intended to reflect the diversity of Minnesota, from its downtowns to its wilderness, from pine forests to bluff country, and from historic landmarks to modern attractions.
In 2019, Explore Minnesota launched the "Find Your True North" marketing campaign, which builds on the #OnlyinMN positioning. The campaign was designed to tell stories that set Minnesota apart as a travel destination and to invite people to have meaningful experiences around the state, with elements integrated across the agency's website, social media channels, and statewide publications.
Minnesotans participate in high levels of physical activity, and many of these activities are outdoors. The strong interest of Minnesotans in environmentalism has been attributed to the popularity of these pursuits.
In the warmer months, these activities often involve water. Weekend and longer trips to family cabins on Minnesota's numerous lakes are a way of life for many residents. Activities include boating, canoeing, fishing, and water sports such as water skiing, which originated in the state. More than 36 percent of Minnesotans fish, a participation rate second only to Alaska's.
Fishing does not cease when the lakes freeze; ice fishing has been around since the arrival of early Scandinavian immigrants. Minnesotans have learned to embrace their long, harsh winters in ice sports such as skating, hockey, curling, and broomball, and snow sports such as cross-country skiing, alpine skiing, luge, snowshoeing, and snowmobiling. Minnesota is the only U.S. state where bandy is played.
State and national forests and the seventy-two state parks are used year-round for hunting, camping, and hiking. There are almost of snowmobile trails statewide. Minnesota has more miles of bike trails than any other state, and a growing network of hiking trails, including the Superior Hiking Trail in the northeast. Many hiking and bike trails are used for cross-country skiing during the winter.
https://en.wikipedia.org/wiki?curid=19590
Missouri River
The Missouri River is the longest river in North America. Rising in the Rocky Mountains of western Montana, the Missouri flows east and south for before entering the Mississippi River north of St. Louis, Missouri. The river drains a sparsely populated, semi-arid watershed of more than 500,000 square miles (1,300,000 km2), which includes parts of ten U.S. states and two Canadian provinces. Although nominally considered a tributary of the Mississippi, the Missouri River above the confluence is much longer and carries a comparable volume of water. When combined with the lower Mississippi River, it forms the world's fourth longest river system.
For over 12,000 years, people have depended on the Missouri River and its tributaries as a source of sustenance and transportation. More than ten major groups of Native Americans populated the watershed, most leading a nomadic lifestyle and dependent on enormous bison herds that roamed through the Great Plains. The first Europeans encountered the river in the late seventeenth century, and the region passed through Spanish and French hands before becoming part of the United States through the Louisiana Purchase.
The Missouri River was one of the main routes for the westward expansion of the United States during the 19th century. The growth of the fur trade in the early 19th century laid much of the groundwork as trappers explored the region and blazed trails. Pioneers headed west "en masse" beginning in the 1830s, first by covered wagon, then by the growing numbers of steamboats that entered service on the river. Settlers took over former Native American lands in the watershed, leading to some of the most longstanding and violent wars against indigenous peoples in American history.
During the 20th century, the Missouri River basin was extensively developed for irrigation, flood control and the generation of hydroelectric power. Fifteen dams impound the main stem of the river, with hundreds more on tributaries. Meanders have been cut and the river channelized to improve navigation, reducing its length by almost from pre-development times. Although the lower Missouri valley is now a populous and highly productive agricultural and industrial region, heavy development has taken its toll on wildlife and fish populations as well as water quality.
The flooding along the Missouri, which began during the spring 2019 Midwestern U.S. floods, is expected to persist through the winter.
From the Rocky Mountains, three streams rise to form the headwaters of the Missouri River: the Jefferson, the Madison, and the Gallatin.
The Missouri River officially starts at the confluence of the Jefferson and Madison in Missouri Headwaters State Park near Three Forks, Montana, and is joined by the Gallatin a mile (1.6 km) downstream. It then passes through Canyon Ferry Lake, a reservoir west of the Big Belt Mountains. Issuing from the mountains near Cascade, the river flows northeast to the city of Great Falls, where it drops over the Great Falls of the Missouri, a series of five substantial waterfalls. It then winds east through a scenic region of canyons and badlands known as the Missouri Breaks, receiving the Marias River from the west then widening into the Fort Peck Lake reservoir a few miles above the confluence with the Musselshell River. Farther on, the river passes through the Fort Peck Dam, and immediately downstream, the Milk River joins from the north.
Flowing eastward through the plains of eastern Montana, the Missouri receives the Poplar River from the north before crossing into North Dakota where the Yellowstone River, its greatest tributary by volume, joins from the southwest. At the confluence, the Yellowstone is actually the larger river. The Missouri then meanders east past Williston and into Lake Sakakawea, the reservoir formed by Garrison Dam. Below the dam the Missouri receives the Knife River from the west and flows south to Bismarck, the capital of North Dakota, where the Heart River joins from the west. It slows into the Lake Oahe reservoir just before the Cannonball River confluence. While it continues south, eventually reaching Oahe Dam in South Dakota, the Grand, Moreau and Cheyenne Rivers all join the Missouri from the west.
The Missouri makes a bend to the southeast as it winds through the Great Plains, receiving the Niobrara River and many smaller tributaries from the southwest. It then proceeds to form the boundary of South Dakota and Nebraska, then after being joined by the James River from the north, forms the Iowa–Nebraska boundary. At Sioux City the Big Sioux River comes in from the north. The Missouri flows south to the city of Omaha where it receives its longest tributary, the Platte River, from the west. Downstream, it begins to define the Nebraska–Missouri border, then flows between Missouri and Kansas. The Missouri swings east at Kansas City, where the Kansas River enters from the west, and so on into north-central Missouri. To the east of Kansas City, the Missouri receives, on the left side, the Grand River. It passes south of Columbia and receives the Osage and Gasconade Rivers from the south downstream of Jefferson City. The river then rounds the northern side of St. Louis to join the Mississippi River on the border between Missouri and Illinois.
With a drainage basin spanning more than 500,000 square miles (1,300,000 km2), the Missouri River's catchment encompasses nearly one-sixth of the area of the United States, or just over five percent of the continent of North America. Comparable in size to the Canadian province of Quebec, the watershed encompasses most of the central Great Plains, stretching from the Rocky Mountains in the west to the Mississippi River Valley in the east, and from the southern extreme of western Canada to the border of the Arkansas River watershed. Compared with the Mississippi River above their confluence, the Missouri is twice as long and drains an area three times as large. The Missouri accounts for 45 percent of the annual flow of the Mississippi past St. Louis, and as much as 70 percent in certain droughts.
In 1990, the Missouri River watershed was home to about 12 million people. This included the entire population of the U.S. state of Nebraska, parts of the U.S. states of Colorado, Iowa, Kansas, Minnesota, Missouri, Montana, North Dakota, South Dakota, and Wyoming, and small southern portions of the Canadian provinces of Alberta and Saskatchewan. The watershed's largest city is Denver, Colorado, with a population of more than six hundred thousand. Denver is the main city of the Front Range Urban Corridor whose cities had a combined population of over four million in 2005, making it the largest metropolitan area in the Missouri River basin. Other major population centers – mostly in the watershed's southeastern portion – include Omaha, Nebraska, north of the confluence of the Missouri and Platte Rivers; Kansas City, Missouri – Kansas City, Kansas, at the confluence of the Missouri with the Kansas River; and the St. Louis metropolitan area, south of the Missouri River just below the latter's mouth, on the Mississippi. In contrast, the northwestern part of the watershed is sparsely populated. However, many northwestern cities, such as Billings, Montana, are among the fastest growing in the Missouri basin.
With more than under the plow, the Missouri River watershed includes roughly one-fourth of all the agricultural land in the United States, providing more than a third of the country's wheat, flax, barley and oats. However, only of farmland in the basin is irrigated. A further of the basin is devoted to the raising of livestock, mainly cattle. Forested areas of the watershed, mostly second-growth, total about . Urban areas, on the other hand, comprise less than of land. Most built-up areas are along the main stem and a few major tributaries, including the Platte and Yellowstone Rivers.
Elevations in the watershed vary widely, ranging from just over at the Missouri's mouth to the summit of Mount Lincoln in central Colorado. The river drops from Brower's Spring, the farthest source. Although the plains of the watershed have extremely little local vertical relief, the land rises about 10 feet per mile (1.9 m/km) from east to west. The elevation is less than at the eastern border of the watershed, but is over above sea level in many places at the base of the Rockies.
The Missouri's drainage basin has highly variable weather and rainfall patterns. Overall, the watershed has a continental climate with warm, wet summers and harsh, cold winters. Most of the watershed receives an average of of precipitation each year. However, the westernmost portions of the basin in the Rockies as well as southeastern regions in Missouri may receive as much as . The vast majority of precipitation occurs in summer in most of the lower and middle basin, although the upper basin is known for short-lived but intense summer thunderstorms such as the one that produced the 1972 Black Hills flood through Rapid City, South Dakota. Winter temperatures in the northern and western portions of the basin typically drop to -20 °F (-29 °C) or lower every winter, with extremes as low as , while summer highs occasionally exceed 100 °F (38 °C) in all areas except the higher elevations of Montana, Wyoming and Colorado. Extreme maximums have exceeded 115 °F (46 °C) in all the states and provinces in the basin, almost all prior to 1960.
As one of the continent's most significant river systems, the Missouri's drainage basin borders on many other major watersheds of the United States and Canada. The Continental Divide, running along the spine of the Rocky Mountains, forms most of the western border of the Missouri watershed. The Clark Fork and Snake River, both part of the Columbia River basin, drain the area west of the Rockies in Montana, Idaho and western Wyoming. The Columbia, Missouri and Colorado River watersheds meet at Three Waters Mountain in Wyoming's Wind River Range. South of there, the Missouri basin is bordered on the west by the drainage of the Green River, a tributary of the Colorado, then on the south by the mainstem of the Colorado. Both the Colorado and Columbia Rivers flow to the Pacific Ocean. However, a large endorheic drainage called the Great Divide Basin exists between the Missouri and Green watersheds in western Wyoming. This area is sometimes counted as part of the Missouri River watershed, even though its waters do not flow to either side of the Continental Divide.
To the north, the much lower Laurentian Divide separates the Missouri River watershed from those of the Oldman River, a tributary of the South Saskatchewan River, as well as the Souris, Sheyenne, and smaller tributaries of the Red River of the North. All of these streams are part of Canada's Nelson River drainage basin, which empties into Hudson Bay. There are also several large endorheic basins between the Missouri and Nelson watersheds in southern Alberta and Saskatchewan. The Minnesota and Des Moines Rivers, tributaries of the upper Mississippi, drain most of the area bordering the eastern side of the Missouri River basin. Finally, on the south, the Ozark Mountains and other low divides through central Missouri, Kansas and Colorado separate the Missouri watershed from those of the White River and Arkansas River, also tributaries of the Mississippi River.
Over 95 significant tributaries and hundreds of smaller ones feed the Missouri River, with most of the larger ones coming in as the river draws close to the mouth. Most rivers and streams in the Missouri River basin flow from west to east, following the incline of the Great Plains; however, some eastern tributaries such as the James, Big Sioux and Grand River systems flow from north to south.
The Missouri's largest tributaries by runoff are the Yellowstone in Montana and Wyoming, the Platte in Wyoming, Colorado, and Nebraska, and the Kansas–Republican/Smoky Hill and Osage in Kansas and Missouri. Each of these tributaries drains an area greater than , and has an average discharge greater than . The Yellowstone River has the highest discharge, even though the Platte is longer and drains a larger area. In fact, the Yellowstone's flow is about – accounting for sixteen percent of total runoff in the Missouri basin and nearly double that of the Platte. On the other end of the scale is the tiny Roe River in Montana, which at long is one of the world's shortest rivers.
When ranking the ten longest tributaries of the Missouri, length is measured to the hydrologic source, regardless of naming convention. The main stem of the Kansas River, for example, is long. However, including the longest headwaters tributaries, the Republican River and the Arikaree River, brings the total length to . Similar naming issues are encountered with the Platte River, whose longest tributary, the North Platte River, is more than twice as long as its main stem.
The Missouri's headwaters above Three Forks extend much farther upstream than the main stem. Measured to the farthest source at Brower's Spring, the Jefferson River is long. Thus measured to its highest headwaters, the Missouri River stretches for . When combined with the lower Mississippi, the Missouri and its headwaters form part of the fourth-longest river system in the world, at .
By discharge, the Missouri is the ninth largest river of the United States, after the Mississippi, St. Lawrence, Ohio, Columbia, Niagara, Yukon, Detroit, and St. Clair. The latter two, however, are sometimes considered part of a strait between Lake Huron and Lake Erie. Among rivers of North America as a whole, the Missouri is thirteenth largest, after the Mississippi, Mackenzie, St. Lawrence, Ohio, Columbia, Niagara, Yukon, Detroit, St. Clair, Fraser, Slave, and Koksoak.
As the Missouri drains a predominantly semi-arid region, its discharge is much lower and more variable than other North American rivers of comparable length. Before the construction of dams, the river flooded twice each year – once in the "April Rise" or "Spring Fresh", with the melting of snow on the plains of the watershed, and in the "June Rise", caused by snowmelt and summer rainstorms in the Rocky Mountains. The latter was far more destructive, with the river increasing to over ten times its normal discharge in some years. The Missouri's discharge is affected by over 17,000 reservoirs with an aggregate capacity of some . By providing flood control, the reservoirs dramatically reduce peak flows and increase low flows. Evaporation from reservoirs significantly reduces the river's runoff, causing an annual loss of over from mainstem reservoirs alone.
The United States Geological Survey operates fifty-one stream gauges along the Missouri River. The river's average discharge at Bismarck, from the mouth, is . This is from a drainage area of , or 35% of the total river basin. At Kansas City, from the mouth, the river's average flow is . The river here drains about , representing about 91% of the entire basin.
The lowermost gauge with a period of record greater than fifty years is at Hermann, Missouri – upstream of the mouth of the Missouri – where the average annual flow was from 1897 to 2010. About , or 98.7% of the watershed, lies above Hermann. The highest annual mean was in 1993, and the lowest was in 2006. Extremes of the flow vary even further. The largest discharge ever recorded was over on July 31, 1993, during a historic flood. The lowest, a mere – caused by the formation of an ice dam – was measured on December 23, 1963.
The Rocky Mountains of southwestern Montana at the headwaters of the Missouri River first rose in the Laramide Orogeny, a mountain-building episode that occurred from around 70 to 45 million years ago (the end of the Mesozoic through the early Cenozoic). This orogeny uplifted Cretaceous rocks along the western side of the Western Interior Seaway, a vast shallow sea that stretched from the Arctic Ocean to the Gulf of Mexico, and deposited the sediments that now underlie much of the drainage basin of the Missouri River.
This Laramide uplift caused the sea to retreat and laid the framework for a vast drainage system of rivers flowing from the Rocky and Appalachian Mountains, the predecessor of the modern-day Mississippi watershed. The Laramide Orogeny is essential to modern Missouri River hydrology, as snow and ice melt from the Rockies provide the majority of the flow in the Missouri and its tributaries.
The Missouri and many of its tributaries cross the Great Plains, flowing over or cutting into the Ogallala Group and older mid-Cenozoic sedimentary rocks. The lowest major Cenozoic unit, the White River Formation, was deposited between roughly 35 and 29 million years ago and consists of claystone, sandstone, limestone, and conglomerate. Channel sandstones and finer-grained overbank deposits of the fluvial Arikaree Group were deposited between 29 and 19 million years ago. The Miocene-age Ogallala and the slightly younger Pliocene-age Broadwater Formation were deposited atop the Arikaree Group and are formed from material eroded off the Rocky Mountains during a time of increased generation of topographic relief. These formations stretch from the Rocky Mountains nearly to the Iowa border, give the Great Plains much of their gentle but persistent eastward tilt, and constitute a major aquifer.
Immediately before the Quaternary Ice Age, the Missouri River was likely split into three segments: an upper portion that drained northwards into Hudson Bay, and middle and lower sections that flowed eastward down the regional slope.
As the Earth plunged into the Ice Age, a pre-Illinoian (or possibly the Illinoian) glaciation diverted the Missouri River southeastward toward its present confluence with the Mississippi and caused it to integrate into a single river system that cuts across the regional slope. In western Montana, the Missouri River is thought to have once flowed north then east around the Bear Paw Mountains. Sapphires are found in some spots along the river in western Montana. Advances of the continental ice sheets diverted the river and its tributaries, causing them to pool up into large temporary lakes such as Glacial Lakes Great Falls, Musselshell and others. As the lakes rose, the water in them often spilled across adjacent local drainage divides, creating now-abandoned channels and coulees including the Shonkin Sag, long. When the glaciers retreated, the Missouri flowed in a new course along the south side of the Bearpaws, and the lower part of the Milk River tributary took over the original main channel.
The Missouri's nickname, the "Big Muddy", was inspired by its enormous loads of sediment or silt – some of the largest of any North American river. In its pre-development state, the river transported some per year. The construction of dams and levees has drastically reduced this to in the present day. Much of this sediment is derived from the river's floodplain, also called the meander belt; every time the river changed course, it would erode tons of soil and rocks from its banks. However, damming and channeling the river has kept it from reaching its natural sediment sources along most of its course. Reservoirs along the Missouri trap roughly of sediment each year. Despite this, the river still transports more than half the total silt that empties into the Gulf of Mexico; the Mississippi River Delta, formed by sediment deposits at the mouth of the Mississippi, is composed largely of sediment carried by the Missouri.
Archaeological evidence, especially in Missouri, suggests that human beings first inhabited the watershed of the Missouri River between 10,000 and 12,000 years ago at the end of the Pleistocene. During the end of the last glacial period, large migrations of humans were taking place, such as those via the Bering land bridge between the Americas and Eurasia. Over centuries, the Missouri River formed one of these main migration paths. Most migratory groups that passed through the area eventually settled in the Ohio Valley and the lower Mississippi River Valley, but many, including the Mound builders, stayed along the Missouri, becoming the ancestors of the later Indigenous peoples of the Great Plains.
Indigenous peoples of North America who have lived along the Missouri have historically had access to ample food, water, and shelter. Many migratory animals naturally inhabit the plains area. Before they were hunted by colonists and Native Americans, these animals, such as the buffalo, provided meat, clothing, and other everyday items; there were also great riparian areas in the river's floodplain that provided habitat for herbs and other staple foods. No written records from the tribes and peoples of the pre-European contact period exist because they did not yet use writing. According to the writings of early colonists, some of the major tribes along the Missouri River included the Otoe, Missouria, Omaha, Ponca, Brulé, Lakota, Arikara, Hidatsa, Mandan, Assiniboine, Gros Ventres and Blackfeet.
In this pre-colonial and early-colonial era, the Missouri river was used as a path of trade and transport, and the river and its tributaries often formed territorial boundaries. Most of the Indigenous peoples in the region at that time had semi-nomadic cultures, with many tribes maintaining different summer and winter camps. However, the center of Native American wealth and trade lay along the Missouri River in the Dakotas region on its great bend south. A large cluster of walled Mandan, Hidatsa and Arikara villages situated on bluffs and islands of the river was home to thousands, and later served as a market and trading post used by early French and British explorers and fur traders. Following the introduction of horses to Missouri River tribes, possibly from feral European-introduced populations, Natives' way of life changed dramatically. The use of the horse allowed them to travel greater distances, and thus facilitated hunting, communications and trade.
Once, tens of millions of American bison (commonly called buffalo), one of the keystone species of the Great Plains and the Ohio Valley, roamed the plains of the Missouri River basin. Most Native American nations in the basin relied heavily on the bison as a food source, and their hides and bones served to create other household items. In time, the species came to benefit from the indigenous peoples' periodic controlled burnings of the grasslands surrounding the Missouri to clear out old and dead growth. The large bison population of the region gave rise to the term "great bison belt", an area of rich annual grasslands that extended from Alaska to Mexico along the eastern flank of the Continental Divide. However, after the arrival of Europeans in North America, both the bison and the Native Americans saw a rapid decline in population. Massive over-hunting for sport by colonists eliminated bison populations east of the Mississippi River by 1833 and reduced the numbers in the Missouri basin to a mere few hundred. Foreign diseases brought by settlers, such as smallpox, raged across the land, decimating Native American populations. Left without their primary source of sustenance, many of the remaining indigenous people were forced onto resettlement areas and reservations, often at gunpoint.
In May 1673, the French-Canadian explorer Louis Jolliet and the French explorer Jacques Marquette left the settlement of St. Ignace on Lake Huron and traveled down the Wisconsin and Mississippi Rivers, aiming to reach the Pacific Ocean. In late June, Jolliet and Marquette became the first documented European discoverers of the Missouri River, which according to their journals was in full flood. "I never saw anything more terrific," Jolliet wrote, "a tangle of entire trees from the mouth of the Pekistanoui [Missouri] with such impetuosity that one could not attempt to cross it without great danger. The commotion was such that the water was made muddy by it and could not clear itself." They recorded "Pekitanoui" or "Pekistanoui" as the local name for the Missouri. However, the party never explored the Missouri beyond its mouth, nor did they linger in the area. In addition, they later learned that the Mississippi drained into the Gulf of Mexico and not the Pacific as they had originally presumed; the expedition turned back about short of the Gulf at the confluence of the Arkansas River with the Mississippi.
In 1682, France expanded its territorial claims in North America to include land on the western side of the Mississippi River, which included the lower portion of the Missouri. However, the Missouri itself remained formally unexplored until Étienne de Veniard, Sieur de Bourgmont commanded an expedition in 1714 that reached at least as far as the mouth of the Platte River. It is unclear exactly how far Bourgmont traveled beyond there; he described the blond-haired Mandans in his journals, so it is likely he reached as far as their villages in present-day North Dakota. Later that year, Bourgmont published "The Route To Be Taken To Ascend The Missouri River", the first known document to use the name "Missouri River"; many of the names he gave to tributaries, mostly for the native tribes that lived along them, are still in use today. The expedition's discoveries eventually found their way to cartographer Guillaume Delisle, who used the information to create a map of the lower Missouri. In 1718, Jean-Baptiste Le Moyne, Sieur de Bienville requested that the French government bestow upon Bourgmont the Cross of St. Louis because of his "outstanding service to France".
Bourgmont had in fact been in trouble with the French colonial authorities since 1706, when he deserted his post as commandant of Fort Detroit after poorly handling an attack by the Ottawa that resulted in thirty-one deaths. However, his reputation was enhanced in 1720 when the Pawnee – who had earlier been befriended by Bourgmont – massacred the Spanish Villasur expedition near present-day Columbus, Nebraska, on the Missouri River, temporarily ending Spanish encroachment on French Louisiana.
Bourgmont established Fort Orleans, the first European settlement of any kind on the Missouri River, near present-day Brunswick, Missouri, in 1723. The following year Bourgmont led an expedition to enlist Comanche support against the Spanish, who continued to show interest in taking over the Missouri. In 1725 Bourgmont brought the chiefs of several Missouri River tribes to visit France. There he was raised to the rank of nobility and did not accompany the chiefs back to North America. Fort Orleans was either abandoned or its small contingent massacred by Native Americans in 1726.
The French and Indian War erupted when territorial disputes between France and Great Britain in North America reached a head in 1754. By 1763, France was defeated by the much greater strength of the British army and was forced to cede its Canadian possessions to the English and Louisiana to the Spanish in the Treaty of Paris, amounting to most of its colonial holdings in North America. Initially, the Spanish did not extensively explore the Missouri and let French traders continue their activities under license. However, this ended after news of the British Hudson's Bay Company incursions in the upper Missouri River watershed was brought back following an expedition by Jacques D'Eglise in the early 1790s. In 1795 the Spanish chartered the Company of Discoverers and Explorers of the Missouri, popularly referred to as the "Missouri Company", and offered a reward for the first person to reach the Pacific Ocean via the Missouri. In 1794 and 1795 expeditions led by Jean Baptiste Truteau and Antoine Simon Lecuyer de la Jonchère did not even make it as far north as the Mandan villages in central North Dakota.
Arguably the most successful of the Missouri Company expeditions was that of James MacKay and John Evans. The two set out along the Missouri, and established Fort Charles about south of present-day Sioux City as a winter camp in 1795. At the Mandan villages in North Dakota, they expelled several British traders, and while talking to the populace they pinpointed the location of the Yellowstone River, which was called "Roche Jaune" ("Yellow Rock") by the French. Although MacKay and Evans failed to accomplish their original goal of reaching the Pacific, they did create the first accurate map of the upper Missouri River.
In 1795, the young United States and Spain signed Pinckney's Treaty, which recognized American rights to navigate the Mississippi River and store goods for export in New Orleans. Three years later, Spain revoked the treaty and in 1800 secretly returned Louisiana to Napoleonic France in the Third Treaty of San Ildefonso. This transfer was so secret that the Spanish continued to administer the territory. In 1801, Spain restored rights to use the Mississippi and New Orleans to the United States.
Fearing that such cutoffs of river access could occur again, President Thomas Jefferson proposed buying the port of New Orleans from France for $10 million. Instead, faced with a debt crisis, Napoleon offered to sell the entirety of Louisiana, including the Missouri River, for $15 million – amounting to less than 3¢ per acre. The deal was signed in 1803, doubling the size of the United States with the acquisition of the Louisiana Territory. In 1803, Jefferson instructed Meriwether Lewis to explore the Missouri and search for a water route to the Pacific Ocean. By then, it had been discovered that the Columbia River system, which drains into the Pacific, lay at a similar latitude to the headwaters of the Missouri River, and it was widely believed that a connection or short portage existed between the two. However, Spain balked at the takeover, noting that it had never formally returned Louisiana to the French. Spanish authorities warned Lewis not to take the journey and forbade him from seeing the MacKay and Evans map of the Missouri, although Lewis eventually managed to gain access to it.
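As a rough sanity check on the per-acre figure (assuming the approximately 828,000 square miles, about 530 million acres, commonly cited for the Louisiana Purchase; that total is not given in this article):

$$\frac{\$15{,}000{,}000}{828{,}000\ \text{mi}^2 \times 640\ \text{acres/mi}^2} \approx \frac{\$15{,}000{,}000}{5.3 \times 10^{8}\ \text{acres}} \approx \$0.028\ \text{per acre},$$

that is, a little under 3¢ per acre, consistent with the figure quoted above.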
Meriwether Lewis and William Clark began their famed expedition in 1804 with a party of thirty-three people in three boats. Although they became the first Europeans to travel the entire length of the Missouri and reach the Pacific Ocean via the Columbia, they found no trace of the Northwest Passage. The maps made by Lewis and Clark, especially those of the Pacific Northwest region, provided a foundation for future explorers and emigrants. They also negotiated relations with numerous Native American tribes and wrote extensive reports on the climate, ecology and geology of the region. Many present-day names of geographic features in the upper Missouri basin originated from their expedition.
As early as the 18th century, fur trappers entered the extreme northern basin of the Missouri River in the hopes of finding populations of beaver and river otter, the sale of whose pelts drove the thriving North American fur trade. They came from many different places – some from the Canadian fur corporations at Hudson Bay, some from the Pacific Northwest ("see also": Maritime fur trade), and some from the midwestern United States. Most did not stay in the area for long, as they failed to find significant resources.
The first glowing reports of country rich with thousands of game animals came in 1806 when Meriwether Lewis and William Clark returned from their two-year expedition. Their journals described lands amply stocked with thousands of buffalo, beaver, and river otter; and also an abundant population of sea otters on the Pacific Northwest coast. In 1807, explorer Manuel Lisa organized an expedition which would lead to the explosive growth of the fur trade in the upper Missouri River country. Lisa and his crew traveled up the Missouri and Yellowstone Rivers, trading manufactured items in return for furs from local Native American tribes, and established a fort at the confluence of the Yellowstone and a tributary, the Bighorn, in southern Montana. Although the business started small, it quickly grew into a thriving trade.
Lisa's men started construction of Fort Raymond, which sat on a bluff overlooking the confluence of the Yellowstone and Bighorn, in the fall of 1807. The fort would serve primarily as a trading post for bartering with the Native Americans for furs. This method was unlike that of the Pacific Northwest fur trade, which involved trappers hired by the various fur enterprises, namely Hudson's Bay. Fort Raymond was later replaced by Fort Lisa at the confluence of the Missouri and Yellowstone in North Dakota; a second fort also called Fort Lisa was built downstream on the Missouri River in Nebraska. In 1809 the St. Louis Missouri Fur Company was founded by Lisa in conjunction with William Clark and Pierre Choteau, among others. In 1828, the American Fur Company founded Fort Union at the confluence of the Missouri and Yellowstone Rivers. Fort Union gradually became the main headquarters for the fur trade in the upper Missouri basin.
Fur trapping activities in the early 19th century encompassed nearly all of the Rocky Mountains on both the eastern and western slopes. Trappers of the Hudson's Bay Company, St. Louis Missouri Fur Company, American Fur Company, Rocky Mountain Fur Company, North West Company and other outfits worked thousands of streams in the Missouri watershed as well as the neighboring Columbia, Colorado, Arkansas, and Saskatchewan river systems. During this period, the trappers, also called mountain men, blazed trails through the wilderness that would later form the paths pioneers and settlers would travel by into the West. Transport of the thousands of beaver pelts required ships, providing one of the first large motives for river transport on the Missouri to start.
As the 1830s drew to a close, the fur industry slowly began to die as silk replaced beaver fur as a desirable clothing item. By this time, the beaver population of streams in the Rocky Mountains had been decimated by intense hunting. Furthermore, frequent Native American attacks on trading posts made it dangerous for employees of the fur companies. In some regions the industry continued well into the 1840s, but in others, such as the Platte River valley, declines of the beaver population contributed to an earlier demise. The fur trade finally disappeared from the Great Plains around 1850, with the primary center of the industry shifting to the Mississippi Valley and central Canada. Despite the demise of the once-prosperous trade, however, its legacy opened the American West, and a flood of settlers, farmers, ranchers, adventurers, hopefuls, the financially bereft, and entrepreneurs took the trappers' place.
The river roughly defined the American frontier in the 19th century, particularly downstream from Kansas City, where it takes a sharp eastern turn into the heart of the state of Missouri, an area known as the Boonslick. As the first area along the river settled by Europeans, it was largely populated by slave-owning southerners who followed the Boone's Lick Road. The major trails for the opening of the American West all have their starting points on the river, including the California, Mormon, Oregon, and Santa Fe trails. The first westward leg of the Pony Express was a ferry across the Missouri at St. Joseph, Missouri. Similarly, most emigrants arrived at the eastern terminus of the First Transcontinental Railroad via a ferry ride across the Missouri between Council Bluffs, Iowa, and Omaha. The Hannibal Bridge became the first bridge to cross the Missouri River in 1869, and its location was a major reason why Kansas City became the largest city on the river upstream from its mouth at St. Louis.
True to the then-ideal of Manifest Destiny, over 500,000 people set out from the river town of Independence, Missouri to their various destinations in the American West from the 1830s to the 1860s. These people had many reasons to embark on the strenuous year-long journey, among them economic crises and, later, gold strikes such as the California Gold Rush. For most, the route took them up the Missouri to Omaha, Nebraska, where they would set out along the Platte River, which flows from the Rocky Mountains in Wyoming and Colorado eastward through the Great Plains. An early expedition led by Robert Stuart from 1812 to 1813 proved the Platte impossible to navigate by the dugout canoes they used, let alone the large sidewheelers and sternwheelers that would later ply the Missouri in increasing numbers. One explorer remarked that the Platte was "too thick to drink, too thin to plow". Nevertheless, the Platte provided an abundant and reliable source of water for the pioneers as they headed west. Covered wagons, popularly referred to as "prairie schooners", provided the primary means of transport until the beginning of regular boat service on the river in the 1850s.
During the 1860s, gold strikes in Montana, Colorado, Wyoming, and northern Utah attracted another wave of hopefuls to the region. Although some freight was hauled overland, most transport to and from the gold fields was done through the Missouri and Kansas Rivers, as well as the Snake River in western Wyoming and the Bear River in Utah, Idaho, and Wyoming. It is estimated that of the passengers and freight hauled from the Midwest to Montana, over 80 percent were transported by boat, a journey that took 150 days in the upstream direction. A route more directly west into Colorado lay along the Kansas River and its tributary the Republican River, as well as a pair of smaller Colorado streams, Big Sandy Creek and the South Platte River, to near Denver. The gold rushes precipitated the decline of the Bozeman Trail as a popular emigration route, as it passed through land held by often-hostile Native Americans. Safer paths were blazed to the Great Salt Lake near Corinne, Utah during the gold rush period, which led to the large-scale settlement of the Rocky Mountains region and eastern Great Basin.
As settlers expanded their holdings into the Great Plains, they ran into land conflicts with Native American tribes. This resulted in frequent raids, massacres and armed conflicts, leading to the federal government creating multiple treaties with the Plains tribes, which generally involved establishing borders and reserving lands for the indigenous. As with many other treaties between the U.S. and Native Americans, they were soon broken, leading to huge wars. Over 1,000 battles, big and small, were fought between the U.S. military and Native Americans before the tribes were forced out of their land onto reservations.
Conflicts between natives and settlers over the opening of the Bozeman Trail in the Dakotas, Wyoming and Montana led to Red Cloud's War, in which the Lakota and Cheyenne fought against the U.S. Army. The fighting resulted in a complete Native American victory. In 1868, the Treaty of Fort Laramie was signed, which "guaranteed" the use of the Black Hills, Powder River Country and other regions surrounding the northern Missouri River to Native Americans without white intervention. The Missouri River was also a significant landmark as it divides northeastern Kansas from western Missouri; pro-slavery forces from Missouri would cross the river into Kansas and spark mayhem during Bleeding Kansas, leading to continued tension and hostility even today between Kansas and Missouri. Another significant military engagement on the Missouri River during this period was the 1861 Battle of Boonville, which did not affect Native Americans but rather was a turning point in the American Civil War that allowed the Union to seize control of transport on the river, discouraging the state of Missouri from joining the Confederacy.
However, the peace and freedom of the Native Americans did not last for long. The Great Sioux War of 1876–77 was sparked when American miners discovered gold in the Black Hills of western South Dakota and eastern Wyoming. These lands were originally set aside for Native American use by the Treaty of Fort Laramie. When the settlers intruded onto the lands, they were attacked by Native Americans. U.S. troops were sent to the area to protect the miners, and drive out the natives from the new settlements. During this bloody period, both the Native Americans and the U.S. military won victories in major battles, resulting in the loss of nearly a thousand lives. The war eventually ended in an American victory, and the Black Hills were opened to settlement. Native Americans of that region were relocated to reservations in Wyoming and southeastern Montana.
In the late 19th and early 20th centuries, a great number of dams were built along the course of the Missouri, transforming 35 percent of the river into a chain of reservoirs. River development was stimulated by a variety of factors, first by growing demand for electricity in the rural northwestern parts of the basin, and by floods and droughts that plagued rapidly growing agricultural and urban areas along the lower Missouri River. Small, privately owned hydroelectric projects have existed since the 1890s, but the large flood-control and storage dams that characterize the middle reaches of the river today were not constructed until the 1950s.
Between 1890 and 1940, five dams were built in the vicinity of Great Falls to generate power from the Great Falls of the Missouri, a chain of giant waterfalls formed by the river in its path through western Montana. Black Eagle Dam, built in 1891 on Black Eagle Falls, was the first dam of the Missouri. Replaced in 1926 with a more modern structure, the dam was little more than a small weir atop Black Eagle Falls, diverting part of the Missouri's flow into the Black Eagle power plant. The largest of the five dams, Ryan Dam, was built in 1913. The dam lies directly above the Big Falls, the largest waterfall of the Missouri.
In the same period, several private establishments – most notably the Montana Power Company – began to develop the Missouri River above Great Falls and below Helena for power generation. A small run-of-the-river structure completed in 1898 near the present site of Canyon Ferry Dam became the second dam built on the Missouri. This rock-filled timber crib dam generated seven and a half megawatts of electricity for Helena and the surrounding countryside. The nearby steel Hauser Dam was finished in 1907, but failed in 1908 because of structural deficiencies, causing catastrophic flooding all the way downstream past Craig. At Great Falls, a section of the Black Eagle Dam was dynamited to save nearby factories from inundation. Hauser was rebuilt in 1910 as a concrete gravity structure, and stands to this day.
Holter Dam, about downstream of Helena, was the third hydroelectric dam built on this stretch of the Missouri River. When completed in 1918 by the Montana Power Company and the United Missouri River Power Company, its reservoir flooded the Gates of the Mountains, a limestone canyon which Meriwether Lewis described as "the most remarkable clifts that we have yet seen ... the tow[er]ing and projecting rocks in many places seem ready to tumble on us." In 1949, the U.S. Bureau of Reclamation (USBR) began construction on the modern Canyon Ferry Dam to provide flood control to the Great Falls area. By 1954, the rising waters of Canyon Ferry Lake submerged the old 1898 dam, whose powerhouse still stands underwater about upstream of the present-day dam.
"[The Missouri's temperament was] uncertain as the actions of a jury or the state of a woman's mind." –"Sioux City Register", March 28, 1868
The Missouri basin suffered a series of catastrophic floods around the turn of the 20th century, most notably in 1844, 1881, and 1926–1927. In 1940, as part of the Great Depression-era New Deal, the U.S. Army Corps of Engineers (USACE) completed Fort Peck Dam in Montana. Construction of this massive public works project provided jobs for more than 50,000 laborers during the Depression and was a major step in providing flood control to the lower half of the Missouri River. However, Fort Peck only controls runoff from 11 percent of the Missouri River watershed, and had little effect on a severe snowmelt flood that struck the lower basin three years later. This event was particularly destructive as it submerged manufacturing plants in Omaha and Kansas City, greatly delaying shipments of military supplies in World War II.
Flooding damages on the Mississippi–Missouri river system were one of the primary reasons for which Congress passed the Flood Control Act of 1944, opening the way for the USACE to develop the Missouri on a massive scale. The 1944 act authorized the Pick–Sloan Missouri Basin Program (Pick–Sloan Plan), which was a composite of two widely varying proposals. The Pick plan, with an emphasis on flood control and hydroelectric power, called for the construction of large storage dams along the main stem of the Missouri. The Sloan plan, which stressed the development of local irrigation, included provisions for roughly 85 smaller dams on tributaries.
In the early stages of Pick–Sloan development, tentative plans were made to build a low dam on the Missouri at Riverdale, North Dakota and 27 smaller dams on the Yellowstone River and its tributaries. This was met with controversy from inhabitants of the Yellowstone basin, and eventually the USBR proposed a solution: to greatly increase the size of the proposed dam at Riverdale – today's Garrison Dam, thus replacing the storage that would have been provided by the Yellowstone dams. Because of this decision, the Yellowstone is now the longest free-flowing river in the contiguous United States. In the 1950s, construction commenced on the five mainstem dams – Garrison, Oahe, Big Bend, Fort Randall and Gavins Point – proposed under the Pick-Sloan Plan. Along with Fort Peck, which was integrated as a unit of the Pick-Sloan Plan in the 1940s, these dams now form the Missouri River Mainstem System.
The flooding of lands along the Missouri River heavily impacted Native American groups whose reservations included fertile bottomlands and floodplains, especially in the arid Dakotas where it was some of the only good farmland they had. These consequences were pronounced in North Dakota's Fort Berthold Indian Reservation, where of land was taken by the construction of Garrison Dam. The Mandan, Hidatsa and Arikara/Sanish tribes sued the federal government on the basis of the 1851 Treaty of Fort Laramie which provided that reservation land could not be taken without the consent of both the tribes and Congress. After a lengthy legal battle the tribes were coerced in 1947 to accept a $5.1 million ($55 million today) settlement for the land, just $33 per acre. In 1949 this was increased to $12.6 million. The tribes were even denied the right to use the reservoir shore "for grazing, hunting, fishing, and other purposes."
The six dams of the Mainstem System, chiefly Fort Peck, Garrison and Oahe, are among the largest dams in the world by volume; their sprawling reservoirs also rank among the biggest of the nation. Holding up to in total, the six reservoirs can store more than three years' worth of the river's flow as measured below Gavins Point, the lowermost dam. This capacity makes it the largest reservoir system in the United States and one of the largest in North America. In addition to storing irrigation water, the system also includes an annual flood-control reservation of . Mainstem power plants generate about 9.3 billion kWh annually – equal to a constant output of almost 1,100 megawatts. Along with nearly 100 smaller dams on tributaries, namely the Bighorn, Platte, Kansas, and Osage Rivers, the system provides irrigation water to nearly of land.
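As a rough consistency check (our own arithmetic, assuming 8,760 hours per year), the stated annual generation does correspond to a continuous output on the order of 1,100 megawatts:

\[
\frac{9.3 \times 10^{9}\ \text{kWh/yr}}{8{,}760\ \text{h/yr}} \approx 1.06 \times 10^{6}\ \text{kW} \approx 1{,}060\ \text{MW}.
\]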
The table at left lists statistics of all fifteen dams on the Missouri River, ordered downstream. Many of the run-of-the-river dams on the Missouri (marked in yellow) form very small impoundments which may or may not have been given names; those unnamed are left blank. All dams are on the upper half of the river above Sioux City; the lower river is uninterrupted due to its longstanding use as a shipping channel.
"[Missouri River shipping] never achieved its expectations. Even under the very best of circumstances, it was never a huge industry."~Richard Opper, former Missouri River Basin Association executive director
Boat travel on the Missouri began with the wood-framed canoes and bull boats that Native Americans used for thousands of years before the colonization of the Great Plains introduced larger craft to the river. The first steamboat on the Missouri was the "Independence", which started running between St. Louis and Keytesville, Missouri around 1819. By the 1830s, large mail and freight-carrying vessels were running regularly between Kansas City and St. Louis, and many traveled even farther upstream. A handful, such as the "Western Engineer" and the "Yellowstone", could make it up the river as far as eastern Montana.
During the early 19th century, at the height of the fur trade, steamboats and keelboats travelled nearly the whole length of the Missouri from Montana's rugged Missouri Breaks to the mouth, carrying beaver and buffalo furs to and from the areas the trappers frequented. This resulted in the development of the Missouri River mackinaw, which specialized in carrying furs. Since these boats could only travel downriver, they were dismantled and sold for lumber upon their arrival at St. Louis.
Water transport increased through the 1850s with multiple craft ferrying pioneers, emigrants and miners; many of these runs were from St. Louis or Independence to near Omaha. There, most of these people would set out overland along the large but shallow and unnavigable Platte River, which pioneers described as "a mile wide and an inch deep" and "the most magnificent and useless of rivers". Steamboat navigation peaked in 1858 with over 130 boats operating full-time on the Missouri, with many more smaller vessels. Many of the earlier vessels were built on the Ohio River before being transferred to the Missouri. Side-wheeler steamboats were preferred over the larger sternwheelers used on the Mississippi and Ohio because of their greater maneuverability.
The industry's success, however, did not guarantee safety. In the early decades, before the river's flow was controlled, its erratic rises and falls and its massive sediment loads, which prevented a clear view of the bottom, wrecked some 300 vessels. Because of the dangers of navigating the Missouri River, the average ship's lifespan was only about four years. The development of the Transcontinental and Northern Pacific Railroads marked the beginning of the end of steamboat commerce on the Missouri. Outcompeted by trains, the number of boats slowly dwindled, until there was almost nothing left by the 1890s. Transport of agricultural and mining products by barge, however, saw a revival in the early twentieth century.
Since the beginning of the 20th century, the Missouri River has been extensively engineered for water transport purposes, and about 32 percent of the river now flows through artificially straightened channels. In 1912, the USACE was authorized to maintain the Missouri to a depth of from the Port of Kansas City to the mouth, a distance of . This was accomplished by constructing levees and wing dams to direct the river's flow into a straight, narrow channel and prevent sedimentation. In 1925, the USACE began a project to widen the river's navigation channel to ; two years later, they began dredging a deep-water channel from Kansas City to Sioux City. These modifications have reduced the river's length from some in the late 19th century to in the present day.
Construction of dams on the Missouri under the Pick-Sloan Plan in the mid-twentieth century was the final step in aiding navigation. The large reservoirs of the Mainstem System help provide a dependable flow to maintain the navigation channel year-round, and are capable of halting most of the river's annual freshets. However, high and low water cycles of the Missouri – notably the protracted early-21st-century drought in the Missouri River basin and historic floods in 1993 and 2011 – are difficult for even the massive Mainstem System reservoirs to control.
In 1945, the USACE began the Missouri River Bank Stabilization and Navigation Project, which would permanently increase the river's navigation channel to a width of and a depth of . During work that continues to this day, the navigation channel from Sioux City to St. Louis has been controlled by building rock dikes to direct the river's flow and scour out sediments, sealing and cutting off meanders and side channels, and dredging the riverbed. However, the Missouri has often resisted the efforts of the USACE to control its depth. In 2006, several U.S. Coast Guard boats ran aground in the Missouri River because the navigation channel had been severely silted. The USACE was blamed for failing to maintain the channel to the minimum depth.
In 1929, the Missouri River Navigation Commission estimated the amount of goods shipped on the river annually at 15 million tons (13.6 million metric tons), providing widespread consensus for the creation of a navigation channel. However, shipping traffic has since been far lower than expected – shipments of commodities including produce, manufactured items, lumber, and oil averaged only 683,000 tons (616,000 t) per year from 1994 to 2006.
By tonnage of transported material, Missouri is by far the largest user of the river, accounting for 83 percent of river traffic, while Kansas has 12 percent, Nebraska three percent and Iowa two percent. Almost all of the barge traffic on the Missouri River ships sand and gravel dredged from the lower of the river; the remaining portion of the shipping channel now sees little to no use by commercial vessels.
For navigation purposes, the Missouri River is divided into two main sections. The Upper Missouri River is north of Gavins Point Dam, the last hydroelectric dam of fifteen on the river, just upstream from Sioux City, Iowa. The Lower Missouri River is the of river below Gavins Point until it meets the Mississippi just above St. Louis. The Lower Missouri River has no hydroelectric dams or locks but it has a plethora of wing dams that enable barge traffic by directing the flow of the river into a , channel. These wing dams have been put in place by and are maintained by the U.S. Army Corps of Engineers, and there are no plans to construct any locks to replace these wing dams on the Missouri River.
Tonnage of goods shipped by barges on the Missouri River has seen a serious decline from the 1960s to the present. In the 1960s, the USACE predicted an increase to per year by 2000, but instead the opposite has happened. The amount of goods plunged from in 1977 to just in 2000. One of the largest drops has been in agricultural products, especially wheat. Part of the reason is that irrigated land along the Missouri has only been developed to a fraction of its potential. In 2006, barges on the Missouri hauled only of products, which is equal to the "daily" freight traffic on the Mississippi.
Drought conditions in the early 21st century and competition from other modes of transport – mainly railroads – are the primary reasons for decreasing river traffic on the Missouri. The USACE's failure to consistently maintain the navigation channel has also hampered the industry. Efforts are being made to revive the shipping industry on the Missouri River, because river transport is an efficient and inexpensive way to haul agricultural products and because alternative transportation routes are overcrowded. Solutions such as expanding the navigation channel and releasing more water from reservoirs during the peak of the navigation season are under consideration. Drought conditions lifted in 2010, when about were barged on the Missouri, representing the first significant increase in shipments since 2000. However, flooding in 2011 closed record stretches of the river to boat traffic – "wash[ing] away hopes for a bounce-back year."
There are no locks and dams on the lower Missouri River, but there are plenty of wing dams that jut out into the river and make it harder for barges to navigate. In contrast, the upper Mississippi has 29 locks and dams and averaged 61.3 million tons of cargo annually from 2008 to 2011, and its locks are closed in the winter.
Historically, the thousands of square miles that comprised the floodplain of the Missouri River supported a wide range of plant and animal species. Biodiversity generally increased proceeding downstream from the cold, subalpine headwaters in Montana to the temperate, moist climate of Missouri. Today, the river's riparian zone consists primarily of cottonwoods, willows and sycamores, with several other types of trees such as maple and ash. Average tree height generally increases farther from the riverbanks for a limited distance, as land next to the river is vulnerable to soil erosion during floods. Because of its large sediment concentrations, the Missouri does not support many aquatic invertebrates. However, the basin supports about 300 species of birds and 150 species of fish, some of which are endangered such as the pallid sturgeon. The Missouri's aquatic and riparian habitats also support several species of mammals, such as minks, river otters, beavers, muskrats, and raccoons.
The World Wide Fund For Nature divides the Missouri River watershed into three freshwater ecoregions: the Upper Missouri, Middle Missouri and Central Prairie. The Upper Missouri, roughly encompassing the area within Montana, Wyoming, southern Alberta and Saskatchewan, and North Dakota, comprises mainly semiarid shrub-steppe grasslands with sparse biodiversity because of Ice Age glaciations. There are no known endemic species within the region. Except for the headwaters in the Rockies, there is little precipitation in this part of the watershed. The Middle Missouri ecoregion, extending through Colorado, southwestern Minnesota, northern Kansas, Nebraska, and parts of Wyoming and Iowa, has greater rainfall and is characterized by temperate forests and grasslands. Plant life is more diverse in the Middle Missouri, which is also home to about twice as many animal species. Finally, the Central Prairie ecoregion is situated on the lower part of the Missouri, encompassing all or parts of Missouri, Kansas, Oklahoma and Arkansas. Despite large seasonal temperature fluctuations, this region has the greatest diversity of plants and animals of the three. Thirteen species of crayfish are endemic to the lower Missouri.
Since river commerce and industrial development began in the 1800s, human activity has severely polluted the Missouri and degraded its water quality. Most of the river's floodplain habitat is long gone, replaced by irrigated agricultural land. Development of the floodplain has led to increasing numbers of people and infrastructure within areas at high risk of inundation. Levees have been constructed along more than a third of the river to keep floodwater within the channel, but with the consequences of faster stream velocity and a resulting increase of peak flows in downstream areas. Fertilizer runoff, which causes elevated levels of nitrogen and other nutrients, is a major problem along the Missouri River, especially in Iowa and Missouri. This form of pollution also affects the upper Mississippi, Illinois and Ohio Rivers. Low oxygen levels in rivers and the vast Gulf of Mexico dead zone at the end of the Mississippi Delta are both results of high nutrient concentrations in the Missouri and other tributaries of the Mississippi.
Channelization of the lower Missouri waters has made the river narrower, deeper and less accessible to riparian flora and fauna. Many dams and bank stabilization projects have been built to help convert of Missouri River floodplain to agricultural land. Channel control has reduced the volume of sediment transported downstream by the river and eliminated critical habitat for fish, birds and amphibians. By the early 21st century, declines in populations of native species prompted the U.S. Fish and Wildlife Service to issue a biological opinion recommending restoration of river habitats for federally endangered bird and fish species.
The USACE began work on ecosystem restoration projects along the lower Missouri River in the early 21st century. Because of the low use of the shipping channel in the lower Missouri maintained by the USACE, it is now considered feasible to remove some of the levees, dikes, and wing dams that constrict the river's flow, thus allowing it to naturally restore its banks. By 2001, there were of riverside floodplain undergoing active restoration.
Restoration projects have re-mobilized some of the sediments that had been trapped behind bank stabilization structures, prompting concerns of exacerbated nutrient and sediment pollution locally and downstream in the northern Gulf of Mexico. A 2010 National Research Council report assessed the roles of sediment in the Missouri River, evaluating current habitat restoration strategies and alternative ways to manage sediment. The report found that a better understanding of sediment processes in the Missouri River, including the creation of a "sediment budget" – an accounting of sediment transport, erosion, and deposition volumes for the length of the Missouri River – would provide a foundation for projects to improve water quality standards and protect endangered species.
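To make the idea concrete, a sediment budget of the kind the report recommends can be sketched as a mass balance for each river reach over a given period (this formulation is illustrative, not taken from the report):

\[
\Delta S_{\text{reach}} = Q_{s,\text{in}} + E_{\text{reach}} - Q_{s,\text{out}} - D_{\text{reach}},
\]

where \(Q_{s,\text{in}}\) and \(Q_{s,\text{out}}\) are the sediment fluxes entering and leaving the reach, \(E_{\text{reach}}\) is erosion within it, \(D_{\text{reach}}\) is deposition within it, and \(\Delta S_{\text{reach}}\) is the change in stored sediment. Summing such balances along the river gives the accounting of transport, erosion, and deposition volumes described above.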
Several sections of the Missouri River were added to the National Wild and Scenic Rivers System from Fort Benton to Robinson Bridge, Gavins Point Dam to Ponca State Park and Fort Randall Dam to Lewis and Clark Lake. A total of of the river were designated including of wild river and of scenic river in Montana. of the river is listed as recreational under the National Wild and Scenic Rivers System.
With over of open water, the six reservoirs of the Missouri River Mainstem System provide some of the main recreational areas within the basin. Visitation has increased from 10 million visitor-hours in the mid-1960s to over 60 million visitor-hours in 1990. Development of visitor facilities was spurred by the Federal Water Project Recreation Act of 1965, which required the USACE to build and maintain boat ramps, campgrounds and other public facilities along major reservoirs. Recreational use of Missouri River reservoirs is estimated to contribute $85–100 million to the regional economy each year.
The Lewis and Clark National Historic Trail, some long, follows nearly the entire Missouri River from its mouth to its source, retracing the route of the Lewis and Clark Expedition. Extending from Wood River, Illinois, in the east, to Astoria, Oregon, in the west, it also follows portions of the Mississippi and Columbia Rivers. The trail, which spans through eleven U.S. states, is maintained by various federal and state government agencies; it passes through some 100 historic sites, notably archaeological locations including the Knife River Indian Villages National Historic Site.
Parts of the river itself are designated for recreational or preservational use. The Missouri National Recreational River consists of portions of the Missouri downstream from Fort Randall and Gavins Point Dams that total . These reaches exhibit islands, meanders, sandbars, underwater rocks, riffles, snags, and other once-common features of the lower river that have now disappeared under reservoirs or have been destroyed by channeling. About forty-five steamboat wrecks are scattered along these reaches of the river.
Downstream from Great Falls, Montana, about of the river course through a rugged series of canyons and badlands known as the Missouri Breaks. This part of the river, designated a U.S. National Wild and Scenic River in 1976, flows within the Upper Missouri Breaks National Monument, a preserve comprising steep cliffs, deep gorges, arid plains, badlands, archaeological sites, and whitewater rapids on the Missouri itself. The preserve includes a wide variety of plant and animal life; recreational activities include boating, rafting, hiking and wildlife observation.
In north-central Montana, some along over of the Missouri River, centering on Fort Peck Lake, comprise the Charles M. Russell National Wildlife Refuge. The wildlife refuge consists of a native northern Great Plains ecosystem that has not been heavily affected by human development, except for the construction of Fort Peck Dam. Although there are few designated trails, the whole preserve is open to hiking and camping.
Many U.S. national parks, such as Glacier National Park, Rocky Mountain National Park, Yellowstone National Park and Badlands National Park are, at least partially, in the watershed. Parts of other rivers in the basin are set aside for preservation and recreational use – notably the Niobrara National Scenic River, which is a protected stretch of the Niobrara River, one of the Missouri's longest tributaries. The Missouri flows through or past many National Historic Landmarks, which include Three Forks of the Missouri, Fort Benton, Montana, Big Hidatsa Village Site, Fort Atkinson, Nebraska and Arrow Rock Historic District.
|
https://en.wikipedia.org/wiki?curid=19591
|
Missile
In modern language, a missile, also known as a guided missile, is a guided airborne ranged weapon capable of self-propelled flight usually by a jet engine or rocket motor. Missiles have four system components: targeting/guidance system, flight system, engine and warhead. Missiles come in types adapted for different purposes: surface-to-surface and air-to-surface missiles (ballistic, cruise, anti-ship, anti-tank, etc.), surface-to-air missiles (and anti-ballistic), air-to-air missiles, and anti-satellite weapons.
Airborne explosive devices without propulsion are referred to as shells if fired by an artillery piece and bombs if dropped by an aircraft. Unguided jet-propelled missiles are usually described as rocket artillery.
In ordinary language, the word "missile" can refer to any projectile that is thrown, shot or propelled toward a target.
The first missiles to be used operationally were a series of missiles developed by Nazi Germany in World War II. Most famous of these are the V-1 flying bomb and V-2 rocket, both of which used a mechanical autopilot to keep the missile flying along a pre-chosen route. Less well known were a series of anti-ship and anti-aircraft missiles, typically based on a simple radio control (command guidance) system directed by the operator. However, these early systems in World War II were only built in small numbers.
Guided missiles have a number of different system components: a targeting or guidance system, a flight system, an engine, and a warhead.
The most common method of guidance is to use some form of radiation, such as infrared, lasers or radio waves, to guide the missile onto its target. This radiation may emanate from the target (such as the heat of an engine or the radio waves from an enemy radar), it may be provided by the missile itself (such as a radar), or it may be provided by a friendly third party (such as the radar of the launch vehicle/platform, or a laser designator operated by friendly infantry). The first two are often known as fire-and-forget as they need no further support or control from the launch vehicle/platform in order to function. Another method is to use TV guidance, with a visible light or infrared picture produced in order to see the target. The picture may be used either by a human operator who steers the missile onto its target or by a computer doing much the same job. One of the more bizarre guidance methods instead used a pigeon to steer a missile to its target. Some missiles also have a home-on-jam capability to guide themselves to a radar-emitting source. Many missiles use a combination of two or more of the methods to improve accuracy and the chances of a successful engagement.
Another method is to target the missile by knowing the location of the target and using a guidance system such as INS, TERCOM or satellite guidance. This guidance system guides the missile by knowing the missile's current position and the position of the target, and then calculating a course between them. This job can also be performed somewhat crudely by a human operator who can see the target and the missile and guide it using either cable- or radio-based remote control, or by an automatic system that can simultaneously track the target and the missile.
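As a purely illustrative sketch of the "know both positions, calculate a course between them" idea, the loop below re-computes a straight-line course each time step. The 2-D geometry, the function names, and the stationary target are assumptions made for the example; they are not features of INS, TERCOM, or satellite guidance.

```python
import math

def pursuit_course(missile_xy, target_xy):
    """Course (radians) from the missile's known position straight toward the
    target's known position -- the 'calculate a course between them' step."""
    dx = target_xy[0] - missile_xy[0]
    dy = target_xy[1] - missile_xy[1]
    return math.atan2(dy, dx)

def advance(position_xy, course, speed, dt):
    """Move along the computed course for one time step."""
    return (position_xy[0] + speed * math.cos(course) * dt,
            position_xy[1] + speed * math.sin(course) * dt)

# Toy run: re-compute the course every 0.1 s toward a stationary target ~10 km away.
missile, target = (0.0, 0.0), (10_000.0, 3_000.0)
for _ in range(200):
    missile = advance(missile, pursuit_course(missile, target), speed=300.0, dt=0.1)
print(missile)  # the position steadily closes on the target's coordinates
```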
Furthermore, some missiles use initial targeting, sending them to a target area, where they will switch to primary targeting, using either radar or IR targeting to acquire the target.
Whether a guided missile uses a targeting system, a guidance system or both, it needs a flight system. The flight system uses the data from the targeting or guidance system to maneuver the missile in flight, allowing it to counter inaccuracies in the missile or to follow a moving target. There are two main systems: vectored thrust (for missiles that are powered throughout the guidance phase of their flight) and aerodynamic maneuvering (wings, fins, canards, etc.).
Missiles are powered by an engine, generally either a type of rocket engine or jet engine. Rockets are generally of the solid propellant type for ease of maintenance and fast deployment, although some larger ballistic missiles use liquid-propellant rockets. Jet engines are generally used in cruise missiles, most commonly of the turbojet type, due to its relative simplicity and low frontal area. Turbofans and ramjets are the only other common forms of jet engine propulsion, although any type of engine could theoretically be used. Long-range missiles may have multiple engine stages, particularly in those launched from the surface. These stages may all be of similar types or may include a mix of engine types − for example, surface-launched cruise missiles often have a rocket booster for launching and a jet engine for sustained flight.
Some missiles may have additional propulsion from another source at launch; for example, the V1 was launched by a catapult, and the MGM-51 Shillelagh was fired out of a tank gun (using a smaller charge than would be used for a shell).
Missiles generally have one or more explosive warheads, although other weapon types may also be used. The warheads of a missile provide its primary destructive power (many missiles have extensive secondary destructive power due to the high kinetic energy of the weapon and unburnt fuel that may be on board). Warheads are most commonly of the high explosive type, often employing shaped charges to exploit the accuracy of a guided weapon to destroy hardened targets. Other warhead types include submunitions, incendiaries, nuclear weapons, chemical, biological or radiological weapons or kinetic energy penetrators.
Warheadless missiles are often used for testing and training purposes.
Missiles are generally categorized by their launch platform and intended target. In broadest terms, these will either be surface (ground or water) or air, and then sub-categorized by range and the exact target type (such as anti-tank or anti-ship). Many weapons are designed to be launched from both surface or the air, and a few are designed to attack either surface or air targets (such as the ADATS missile). Most weapons require some modification in order to be launched from the air or surface, such as adding boosters to the surface-launched version.
After the boost stage, ballistic missiles follow a trajectory mainly determined by ballistics. The guidance is for relatively small deviations from that.
Ballistic missiles are largely used for land attack missions. Although normally associated with nuclear weapons, some conventionally armed ballistic missiles are in service, such as MGM-140 ATACMS. The V2 had demonstrated that a ballistic missile could deliver a warhead to a target city with no possibility of interception, and the introduction of nuclear weapons meant it could efficiently do damage when it arrived. The accuracy of these systems was fairly poor, but post-war development by most military forces improved the basic inertial navigation system concept to the point where it could be used as the guidance system on intercontinental ballistic missiles (ICBMs) flying thousands of kilometers. Today, the ballistic missile represents the only strategic deterrent in most military forces; however, some ballistic missiles are being adapted for conventional roles, such as the Russian Iskander or the Chinese DF-21D anti-ship ballistic missile. Ballistic missiles are primarily surface-launched from mobile launchers, silos, ships or submarines, with air launch being theoretically possible with a weapon such as the cancelled Skybolt missile.
The Russian Topol M (SS-27 Sickle B) is the fastest (7,320 m/s) missile currently in service.
The V1 had been successfully intercepted during World War II, but this did not make the cruise missile concept entirely useless. After the war, the US deployed a small number of nuclear-armed cruise missiles in Germany, but these were considered to be of limited usefulness. Continued research into much longer-ranged and faster versions led to the US's SM-64 Navaho and its Soviet counterparts, the Burya and Buran cruise missile. However, these were rendered largely obsolete by the ICBM, and none were used operationally. Shorter-range developments have become widely used as highly accurate attack systems, such as the US Tomahawk missile and Russian Kh-55. Cruise missiles are generally further divided into subsonic or supersonic weapons – supersonic weapons such as BrahMos (India, Russia) are difficult to shoot down, whereas subsonic weapons tend to be much lighter and cheaper, allowing more to be fired.
Cruise missiles are generally associated with land-attack operations, but also have an important role as anti-shipping weapons. They are primarily launched from air, sea or submarine platforms in both roles, although land-based launchers also exist.
Another major German missile development project was the anti-shipping class (such as the Fritz X and Henschel Hs 293), intended to stop any attempt at a cross-channel invasion. However, the British were able to render their systems useless by jamming their radios, and missiles with wire guidance were not ready by D-Day. After the war, the anti-shipping class slowly developed and became a major class in the 1960s with the introduction of the low-flying jet- or rocket-powered cruise missiles known as "sea-skimmers". These became famous during the Falklands War, when an Argentine Exocet missile sank a Royal Navy destroyer.
A number of anti-submarine missiles also exist; these generally use the missile in order to deliver another weapon system such as a torpedo or depth charge to the location of the submarine, at which point the other weapon will conduct the underwater phase of the mission.
By the end of WWII, all forces had widely introduced unguided rockets using high-explosive anti-tank warheads as their major anti-tank weapon (see Panzerfaust, Bazooka). However, these had a limited useful range of 100 m or so, and the Germans were looking to extend this with the use of a missile using wire guidance, the X-7. After the war, this became a major design class in the later 1950s and, by the 1960s, had developed into practically the only non-tank anti-tank system in general use. During the 1973 Yom Kippur War between Israel and Egypt, the 9M14 Malyutka (aka "Sagger") man-portable anti-tank missile proved potent against Israeli tanks. While other guidance systems have been tried, the basic reliability of wire guidance means this will remain the primary means of controlling anti-tank missiles in the near future. Anti-tank missiles may be launched from aircraft, vehicles or by ground troops in the case of smaller weapons.
By 1944, US and British air forces were sending huge air fleets over occupied Europe, increasing the pressure on the Luftwaffe day and night fighter forces. The Germans were keen to get some sort of useful ground-based anti-aircraft system into operation. Several systems were under development, but none had reached operational status before the war's end. The US Navy also started missile research to deal with the Kamikaze threat. By 1950, systems based on this early research started to reach operational service, including the US Army's MIM-3 Nike Ajax and the Navy's "3T's" (Talos, Terrier, Tartar), soon followed by the Soviet S-25 Berkut and S-75 Dvina and French and British systems. Anti-aircraft weapons exist for virtually every possible launch platform, with surface-launched systems ranging from huge, self-propelled or ship-mounted launchers to man-portable systems. Subsurface-to-air missiles are usually launched from below water (usually from submarines).
Like most missiles, the S-300, S-400, Advanced Air Defence and MIM-104 Patriot are for defense against short-range missiles and carry explosive warheads.
In the case of a large closing speed, a projectile without explosives is used; just a collision is sufficient to destroy the target. See Missile Defense Agency for the following systems being developed:
Air-to-air rockets were first used by Soviet pilots in the summer of 1939 during the Battle of Khalkhin Gol. On August 20, 1939, a Japanese Nakajima Ki-27 fighter was attacked by a Soviet Polikarpov I-16 fighter flown by Captain N. Zvonarev, who fired a rocket salvo from a distance of about a kilometer, after which the Ki-27 crashed to the ground. A group of Polikarpov I-16 fighters under the command of Captain Zvonarev used RS-82 rockets against Japanese aircraft, shooting down 16 fighters and 3 bombers in total.
German experience in World War II demonstrated that destroying a large aircraft was quite difficult, and they had invested considerable effort into air-to-air missile systems to do this. Their Messerschmitt Me 262 jets often carried R4M rockets, and other types of "bomber destroyer" aircraft had unguided rockets as well. In the post-war period, the R4M served as the pattern for a number of similar systems, used by almost all interceptor aircraft during the 1940s and 1950s. Most rockets (except for the AIR-2 Genie, due to its nuclear warhead with a large blast radius) had to be carefully aimed at relatively close range to hit the target successfully. The United States Navy and U.S. Air Force began deploying guided missiles in the early 1950s, most famous being the US Navy's AIM-9 Sidewinder and the USAF's AIM-4 Falcon. These systems have continued to advance, and modern air warfare consists almost entirely of missile firing. In the Falklands War, less powerful British Harriers were able to defeat faster Argentinian opponents using AIM-9L missiles provided by the United States as the conflict began. The latest heat-seeking designs can lock onto a target from various angles, not just from behind, where the heat signature from the engines is strongest. Other types rely on radar guidance (either on board or "painted" by the launching aircraft). Air-to-air missiles also have a wide range of sizes, ranging from helicopter-launched self-defense weapons with a range of a few kilometers, to long-range weapons designed for interceptor aircraft such as the R-37.
In the 1950s and 1960s, Soviet designers started work on an anti-satellite weapon, called the Istrebitel Sputnik, which literally means "interceptor of satellites" or "destroyer of satellites". After a lengthy development process of roughly twenty years, it was finally decided that testing of the Istrebitel Sputnik be canceled. This was when the United States started testing their own systems. The Brilliant Pebbles defense system proposed during the 1980s would have used kinetic energy collisions without explosives. Anti-satellite weapons may be launched either by an aircraft or a surface platform, depending on the design. To date, only a few known tests have occurred. As of 2019, only four countries (the United States, India, Russia and China) have operational anti-satellite weapons.
|
https://en.wikipedia.org/wiki?curid=19594
|
Mendelian inheritance
Mendelian inheritance is a type of biological inheritance that follows the principles originally proposed by Gregor Mendel in 1865 and 1866, re-discovered in 1900 and popularised by William Bateson. These principles were initially controversial. When Mendel's theories were integrated with the Boveri–Sutton chromosome theory of inheritance by Thomas Hunt Morgan in 1915, they became the core of classical genetics. Ronald Fisher combined these ideas with the theory of natural selection in his 1930 book "The Genetical Theory of Natural Selection", putting evolution onto a mathematical footing and forming the basis for population genetics within the modern evolutionary synthesis.
The principles of Mendelian inheritance were named for and first derived by Gregor Johann Mendel, a nineteenth-century Moravian monk who formulated his ideas after conducting simple hybridisation experiments with pea plants ("Pisum sativum") he had planted in the garden of his monastery. Between 1856 and 1863, Mendel cultivated and tested some 5,000 pea plants. From these experiments, he induced two generalizations which later became known as "Mendel's Principles of Heredity" or "Mendelian inheritance". He described his experiments in a two-part paper, "Versuche über Pflanzen-Hybriden" ("Experiments on Plant Hybridization"), that he presented to the Natural History Society of Brno on 8 February and 8 March 1865, and which was published in 1866.
Mendel's results were largely ignored by the vast majority of scientists. Although they were not completely unknown to biologists of the time, they were not seen as generally applicable, even by Mendel himself, who thought they only applied to certain categories of species or traits. A major block to understanding their significance was the importance attached by 19th-century biologists to the apparent blending of many inherited traits in the overall appearance of the progeny, now known to be due to multi-gene interactions, in contrast to the organ-specific binary characters studied by Mendel. In 1900, however, his work was "re-discovered" by three European scientists, Hugo de Vries, Carl Correns, and Erich von Tschermak. The exact nature of the "re-discovery" has been debated: De Vries published first on the subject, mentioning Mendel in a footnote, while Correns pointed out Mendel's priority after having read De Vries' paper and realizing that he himself did not have priority. De Vries may not have acknowledged truthfully how much of his knowledge of the laws came from his own work and how much came only after reading Mendel's paper. Later scholars have accused Von Tschermak of not truly understanding the results at all.
Regardless, the "re-discovery" made Mendelism an important but controversial theory. Its most vigorous promoter in Europe was William Bateson, who coined the terms "genetics" and "allele" to describe many of its tenets. The model of heredity was contested by other biologists because it implied that heredity was discontinuous, in opposition to the apparently continuous variation observable for many traits. Many biologists also dismissed the theory because they were not sure it would apply to all species. However, later work by biologists and statisticians such as Ronald Fisher showed that if multiple Mendelian factors were involved in the expression of an individual trait, they could produce the diverse results observed, and thus showed that Mendelian genetics is compatible with natural selection. Thomas Hunt Morgan and his assistants later integrated Mendel's theoretical model with the chromosome theory of inheritance, in which the chromosomes of cells were thought to hold the actual hereditary material, and created what is now known as classical genetics, a highly successful foundation which eventually cemented Mendel's place in history.
Mendel's findings allowed scientists such as Fisher and J.B.S. Haldane to predict the expression of traits on the basis of mathematical probabilities. An important aspect of Mendel's success can be traced to his decision to start his crosses only with plants he demonstrated were true-breeding. He only measured discrete (binary) characteristics, such as color, shape, and position of the seeds, rather than quantitatively variable characteristics. He expressed his results numerically and subjected them to statistical analysis. His method of data analysis and his large sample size gave credibility to his data. He had the foresight to follow several successive generations (P, F1, F2, F3) of pea plants and record their variations. Finally, he performed "test crosses" (backcrossing descendants of the initial hybridization to the initial true-breeding lines) to reveal the presence and proportions of recessive characters.
Five parts of Mendel's discoveries were an important divergence from the common theories at the time and were the prerequisite for the establishment of his rules.
According to customary terminology we refer here to the principles of inheritance discovered by Gregor Mendel as Mendelian laws, although today's geneticists also speak of "Mendelian rules" or "Mendelian principles", as there are many exceptions summarized under the collective term Non-Mendelian inheritance.
Mendel selected for the experiment the following characters of pea plants:
When he crossed purebred white-flowered and purple-flowered pea plants (the parental or P generation) by artificial pollination, the resulting flower colour was not a blend. Rather than being a mix of the two, the offspring in the first generation (F1-generation) were all purple-flowered. Therefore, he called this biological trait dominant. When he allowed self-fertilization in the uniform-looking F1-generation, he obtained both colours in the F2 generation, with a purple-flower to white-flower ratio of 3 : 1. For some of the other characters, one of the traits was likewise dominant.
He then conceived the idea of heredity units, which he called hereditary "factors". Mendel found that there are alternative forms of factors — now called genes — that account for variations in inherited characteristics. For example, the gene for flower color in pea plants exists in two forms, one for purple and the other for white. The alternative "forms" are now called alleles. For each trait, an organism inherits two alleles, one from each parent. These alleles may be the same or different. An organism that has two identical alleles for a gene is said to be homozygous for that gene (and is called a homozygote). An organism that has two different alleles for a gene is said to be heterozygous for that gene (and is called a heterozygote).
Mendel hypothesized that allele pairs separate randomly, or segregate, from each other during the production of the gametes in the seed plant (egg cell) and the pollen plant (sperm). Because allele pairs separate during gamete production, a sperm or egg carries only one allele for each inherited trait. When sperm and egg unite at fertilization, each contributes its allele, restoring the paired condition in the offspring. Mendel also found that each pair of alleles segregates independently of the other pairs of alleles during gamete formation.
The genotype of an individual is made up of the many alleles it possesses. The phenotype is the result of the expression of all characteristics that are genetically determined by its alleles as well as by its environment. The presence of an allele does not mean that the trait will be expressed in the individual that possesses it. If the two alleles of an inherited pair differ (the heterozygous condition), then one determines the organism’s appearance and is called the dominant allele; the other has no noticeable effect on the organism’s appearance and is called the recessive allele.
If two parents that differ in one genetic characteristic, for which each is homozygous (pure-bred), are mated with each other, all offspring of the first generation (F1) are identical for the examined characteristic in genotype and phenotype, showing the dominant trait. This "uniformity rule" or "reciprocity rule" applies to all individuals of the F1-generation.
The principle of dominant inheritance discovered by Mendel states that in a heterozygote the dominant allele will cause the recessive allele to be "masked": that is, not expressed in the phenotype. Only if an individual is homozygous with respect to the recessive allele will the recessive trait be expressed. Therefore a cross between a homozygous dominant and a homozygous recessive organism yields a heterozygous organism whose phenotype displays only the dominant trait.
The F1 offspring of Mendel's pea crosses always looked like one of the two parental varieties. In this situation of "complete dominance," the dominant allele had the same phenotypic effect whether present in one or two copies.
But for some characteristics, the F1 hybrids have an appearance "in between" the phenotypes of the two parental varieties. A cross between two four o'clock ("Mirabilis jalapa") plants shows an exception to Mendel's principle, called "incomplete dominance". Flowers of heterozygous plants have a phenotype somewhere between the two homozygous genotypes. In cases of intermediate inheritance (incomplete dominance) in the F1-generation Mendel's principle of uniformity in genotype and phenotype applies as well. Research about intermediate inheritance was done by other scientists. The first was Carl Correns with his studies about Mirabilis jalapa.
The Law of Segregation of genes applies when two individuals, both heterozygous for a certain trait, are crossed, for example hybrids of the F1-generation. The offspring in the F2-generation differ in genotype and phenotype, so that the characteristics of the grandparents (P-generation) regularly occur again. In a dominant-recessive inheritance, an average of 25 % are homozygous with the dominant trait, 50 % are heterozygous showing the dominant trait in the phenotype (genetic carriers), and 25 % are homozygous with the recessive trait and therefore express the recessive trait in the phenotype. The genotypic ratio is 1 : 2 : 1, the phenotypic ratio is 3 : 1.
In the pea plant example, the capital "B" represents the dominant allele for purple blossom and lowercase "b" represents the recessive allele for white blossom. The pistil plant and the pollen plant are both F1-hybrids with genotype "B b". Each has one allele for purple and one allele for white. In the offspring, in the F2-plants in the Punnett-square, three combinations are possible. The genotypic ratio is 1 "BB" : 2 "Bb" : 1 "bb". But the phenotypic ratio of plants with purple blossoms to those with white blossoms is 3 : 1 due to the dominance of the allele for purple. Plants with homozygous "b b" are white flowered like one of the grandparents in the P-generation.
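The 1 : 2 : 1 and 3 : 1 ratios can be reproduced by simply enumerating the four equally likely allele combinations of the Punnett square. The short sketch below is illustrative only; it repeats the "B"/"b" cross described above in code:

```python
from collections import Counter
from itertools import product

# Punnett-square enumeration for the monohybrid cross described above:
# both F1 parents are heterozygous "Bb" (B = purple, dominant; b = white, recessive).
parent1, parent2 = "Bb", "Bb"

# Each parent contributes one allele; the four combinations are equally likely.
genotypes = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
phenotypes = Counter("purple" if "B" in genotype else "white"
                     for genotype, count in genotypes.items()
                     for _ in range(count))

print(genotypes)   # Counter({'Bb': 2, 'BB': 1, 'bb': 1})  -> genotypic ratio 1 : 2 : 1
print(phenotypes)  # Counter({'purple': 3, 'white': 1})    -> phenotypic ratio 3 : 1
```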
In cases of incomplete dominance the same segregation of alleles takes place in the F2-generation, but here also the phenotypes show a ratio of 1 : 2 : 1, as the heterozygous are different in phenotype from the homozygous because the genetic expression of one allele compensates the missing expression of the other allele only partially. This results in an intermediate inheritance which was later described by other scientists.
In some literature sources the principle of segregation is cited as "first law". Nevertheless, Mendel did his crossing experiments with heterozygous plants after obtaining these hybrids by crossing two purebred plants, discovering the principle of dominance and uniformity at first.
Molecular proof of segregation of genes was subsequently found through observation of meiosis by two scientists independently, the German botanist Oscar Hertwig in 1876, and the Belgian zoologist Edouard Van Beneden in 1883. Most alleles are located in chromosomes in the cell nucleus. Paternal and maternal chromosomes get separated in meiosis, because during spermatogenesis the chromosomes are distributed among the four sperm cells that arise from one spermatocyte, and during oogenesis the chromosomes are distributed between the polar bodies and the egg cell. Every individual organism contains two alleles for each trait. They segregate (separate) during meiosis such that each gamete contains only one of the alleles. When the gametes unite in the zygote, the alleles – one from the mother, one from the father – get passed on to the offspring. An offspring thus receives a pair of alleles for a trait by inheriting homologous chromosomes from the parent organisms: one allele for each trait from each parent. Heterozygous individuals with the dominant trait in the phenotype are genetic carriers of the recessive trait.
The Law of Independent Assortment states that alleles for separate traits are passed independently of one another. That is, the biological selection of an allele for one trait has nothing to do with the selection of an allele for any other trait. Mendel found support for this law in his dihybrid cross experiments. In his monohybrid crosses, an idealized 3:1 ratio between dominant and recessive phenotypes resulted. In dihybrid crosses, however, he found a 9:3:3:1 ratio. This shows that each of the two alleles is inherited independently from the other, with a 3:1 phenotypic ratio for each.
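The 9:3:3:1 ratio likewise falls out of enumerating all gamete combinations when the two allele pairs assort independently. The sketch below uses hypothetical gene symbols "A"/"a" and "B"/"b" purely for illustration:

```python
from collections import Counter
from itertools import product

# Dihybrid cross sketch: A/a and B/b are two independently assorting allele
# pairs (upper case = dominant); both parents are heterozygous "AaBb".
def gametes(genotype):
    """All equally likely gametes, e.g. "AaBb" -> ["AB", "Ab", "aB", "ab"]."""
    return ["".join(pair) for pair in product(genotype[0:2], genotype[2:4])]

def phenotype(offspring_alleles):
    """A dominant allele masks the recessive one for each trait."""
    return ("A_" if "A" in offspring_alleles else "aa",
            "B_" if "B" in offspring_alleles else "bb")

counts = Counter(phenotype(g1 + g2)
                 for g1 in gametes("AaBb")
                 for g2 in gametes("AaBb"))
print(counts)  # 9 ('A_','B_') : 3 ('A_','bb') : 3 ('aa','B_') : 1 ('aa','bb')
```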
Independent assortment occurs in eukaryotic organisms during meiotic metaphase I, and produces a gamete with a mixture of the organism's chromosomes. The physical basis of the independent assortment of chromosomes is the random orientation of each bivalent chromosome along the metaphase plate with respect to the other bivalent chromosomes. Along with crossing over, independent assortment increases genetic diversity by producing novel genetic combinations.
There are many deviations from the principle of independent assortment due to genetic linkage.
Of the 46 chromosomes in a normal diploid human cell, half are maternally derived (from the mother's egg) and half are paternally derived (from the father's sperm). This occurs as sexual reproduction involves the fusion of two haploid gametes (the egg and sperm) to produce a zygote and a new organism, in which every cell has two sets of chromosomes (diploid). During gametogenesis the normal complement of 46 chromosomes needs to be halved to 23 to ensure that the resulting haploid gamete can join with another haploid gamete to produce a diploid organism.
In independent assortment, the chromosomes that result are randomly sorted from all possible maternal and paternal chromosomes. Because zygotes end up with a mix instead of a pre-defined "set" from either parent, chromosomes are therefore considered assorted independently. As such, the zygote can end up with any combination of paternal or maternal chromosomes. For human gametes, with 23 chromosomes, the number of possibilities is 2^23 or 8,388,608 possible combinations. This contributes to the genetic variability of progeny. Generally, the recombination of genes has important implications for many evolutionary processes.
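The count follows from each of the 23 chromosome pairs independently contributing either its maternal or its paternal member:

\[
\underbrace{2 \times 2 \times \cdots \times 2}_{23\ \text{pairs}} = 2^{23} = 8{,}388{,}608.
\]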
A Mendelian trait is one that is controlled by a single locus in an inheritance pattern. In such cases, a mutation in a single gene can cause a disease that is inherited according to Mendel's principles. Dominant diseases manifest in heterozygous individuals. Recessive ones are sometimes inherited unnoticeably by genetic carriers. Examples include sickle-cell anemia, Tay–Sachs disease, cystic fibrosis and xeroderma pigmentosum. A disease controlled by a single gene contrasts with a multi-factorial disease, like heart disease, which is affected by several loci (and the environment) as well as those diseases inherited in a non-Mendelian fashion.
After Mendel's studies and discoveries, more and more new discoveries about genetics were made. Mendel himself said that the regularities he discovered apply only to the organisms and characteristics he consciously chose for his experiments.
Mendel explained inheritance in terms of discrete factors – genes – that are passed along from generation to generation according to the rules of probability. Mendel's laws are valid for all sexually reproducing organisms, including garden peas and human beings. However, Mendel's laws stop short of explaining some patterns of genetic inheritance. For most sexually reproducing organisms, cases where Mendel's laws can strictly account for all patterns of inheritance are relatively rare. Often the inheritance patterns are more complex.
In cases of codominance the phenotypes produced by both alleles are clearly expressed. Mendel chose genetic traits in plants that are determined by only two alleles, such as "A" and "a". In nature, genes often exist in several different forms with multiple alleles. Furthermore, many traits are produced by the interaction of several genes. Traits controlled by two or more genes are said to be polygenic traits.
|
https://en.wikipedia.org/wiki?curid=19595
|
Machinima
Machinima is the use of real-time computer graphics engines to create a cinematic production. Most often, video games are used to generate the computer animation.
Machinima-based artists, sometimes called machinimists or machinimators, are often fan laborers, by virtue of their re-use of copyrighted materials (see below). Machinima can provide an archive of gaming performance and access to the look and feel of software and hardware that may already have become unavailable or even obsolete. For game studies, "Machinima’s gestures grant access to gaming's historical conditions of possibility and how machinima offers links to a comparative horizon that informs, changes, and fully participates in videogame culture."
The practice of using graphics engines from video games arose from the animated software introductions of the 1980s demoscene, Disney Interactive Studios' 1992 video game "Stunt Island", and 1990s recordings of gameplay in first-person shooter (FPS) video games, such as id Software's "Doom" and "Quake". Originally, these recordings documented speedruns—attempts to complete a level as quickly as possible—and multiplayer matches. The addition of storylines to these films created ""Quake" movies". The more general term "machinima", a blend of "machine" and "cinema", arose when the concept spread beyond the "Quake" series to other games and software. After this generalization, machinima appeared in mainstream media, including television series and advertisements.
Machinima has advantages and disadvantages when compared to other styles of filmmaking. Its relative simplicity over traditional frame-based animation limits control and range of expression. Its real-time nature favors speed, cost saving, and flexibility over the higher quality of pre-rendered computer animation. Virtual acting is less expensive, dangerous, and physically restricted than live action. Machinima can be filmed by relying on in-game artificial intelligence (AI) or by controlling characters and cameras through digital puppetry. Scenes can be precisely scripted, and can be manipulated during post-production using video editing techniques. Editing, custom software, and creative cinematography may address technical limitations. Game companies have provided software for and have encouraged machinima, but the widespread use of digital assets from copyrighted games has resulted in complex, unresolved legal issues.
Machinima productions can remain close to their gaming roots and feature stunts or other portrayals of gameplay. Popular genres include dance videos, comedy, and drama. Alternatively, some filmmakers attempt to stretch the boundaries of the rendering engines or to mask the original 3-D context. The Academy of Machinima Arts & Sciences (AMAS), a non-profit organization dedicated to promoting machinima, recognizes exemplary productions through Mackie awards given at its annual Machinima Film Festival. Some general film festivals accept machinima, and game companies, such as Epic Games, Blizzard Entertainment and Jagex, have sponsored contests involving it.
1980s software crackers added custom introductory credits sequences (intros) to programs whose copy protection they had removed. Increasing computing power allowed for more complex intros, and the demoscene formed when focus shifted to the intros instead of the cracks. The goal became to create the best 3-D demos in real-time with the least amount of software code. Disk storage was too slow for this, so graphics had to be calculated on the fly and without a pre-existing game engine.
In Disney Interactive Studios' 1992 computer game "Stunt Island", users could stage, record, and play back stunts. As Nitsche stated, the game's goal was "not ... a high score but a spectacle." Released the following year, id Software's "Doom" included the ability to record gameplay as sequences of events that the game engine could later replay in real-time. Because events and not video frames were saved, the resulting game demo files were small and easily shared among players. A culture of recording gameplay developed; as Henry Lowood of Stanford University described it, demo recording created "a context for spectatorship... The result was nothing less than a metamorphosis of the player into a performer." Another important feature of "Doom" was that it allowed players to create their own modifications, maps, and software for the game, thus expanding the concept of game authorship. In machinima, there is a dual register of gestures: the trained motions of the player determine the in-game images of expressive motion.
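The event-based nature of these demo files can be illustrated with a short sketch. The following Python fragment is a toy model only: the names (record_demo, replay_demo, engine_step) and the data layout are assumptions made for the example, not id Software's actual demo format, but it shows why replaying a small log of per-tick inputs through a deterministic engine reproduces the gameplay while remaining far smaller than stored video frames.

```python
import json

def record_demo(ticks):
    """Store only the player inputs for each game tick, not rendered frames."""
    demo = []
    for tick, inputs in enumerate(ticks):
        demo.append({"tick": tick, **inputs})   # e.g. forward, turn, fire
    return demo

def engine_step(state, event):
    """A stand-in for a deterministic game-engine update (illustrative only)."""
    state["angle"] += event.get("turn", 0.0)
    state["x"] += event.get("forward", 0.0)
    return state

def replay_demo(demo, step):
    """Feed the recorded inputs back into the engine; because the engine is
    deterministic, the identical gameplay is re-created in real time."""
    state = {"x": 0.0, "y": 0.0, "angle": 0.0}
    for event in demo:
        state = step(state, event)              # the engine recomputes the visuals
    return state

if __name__ == "__main__":
    inputs = [{"forward": 1.0}, {"turn": 15.0}, {"forward": 1.0, "fire": True}]
    demo = record_demo(inputs)
    print(len(json.dumps(demo)), "bytes of events instead of megabytes of frames")
    print(replay_demo(demo, engine_step))
```

Because only inputs are stored, the recording also depends on having the game itself to play it back, which is the trade-off discussed later when video-frame distribution appears.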
In parallel with the video game approach, in the media art field, Maurice Benayoun’s virtual reality artwork "The Tunnel under the Atlantic" (1995), often compared to video games, introduced a virtual film director: a fully autonomous intelligent agent that shot and edited, in real time, a complete video of the digging performance between the Pompidou Center in Paris and the Museum of Contemporary Art in Montreal. The resulting movie, "Inside the Tunnel under the Atlantic", 21 hours long, was followed in 1997 by "Inside the Paris New-Delhi Tunnel" (13 hours long). Only short excerpts were presented to the public. The complex behavior of the Tunnel’s virtual director makes it a significant precursor of later video-game-based machinima.
"Doom"s 1996 successor, "Quake", offered new opportunities for both gameplay and customization, while retaining the ability to record demos. Multiplayer video games became popular, and demos of matches between teams of players (clans) were recorded and studied. Paul Marino, executive director of the AMAS, stated that deathmatches, a type of multiplayer game, became more "cinematic". At this point, however, they still documented gameplay without a narrative.
On October 26, 1996, a well-known gaming clan, the Rangers, surprised the "Quake" community with "Diary of a Camper", the first widely known machinima film. This short, 100-second demo file contained the action and gore of many others, but in the context of a brief story, rather than the usual deathmatch. An example of transformative or emergent gameplay, this shift from competition to theater required both expertise in and subversion of the game's mechanics. The Ranger demo emphasized this transformation by retaining specific gameplay references in its story.
"Diary of a Camper" inspired many other ""Quake" movies," as these films were then called. A community of game modifiers (modders), artists, expert players, and film fans began to form around them. The works were distributed and reviewed on websites such as The Cineplex, Psyk's Popcorn Jungle, and the Quake Movie Library (QML). Production was supported by dedicated demo-processing software, such as Uwe Girlich's Little Movie Processing Center (LMPC) and David "crt" Wright's non-linear editor Keygrip, which later became known as "Adobe Premiere for Quake demo files". Among the notable films were Clan Phantasm's "Devil's Covenant", the first feature-length "Quake" movie; Avatar and Wendigo's "Blahbalicious", which the QML awarded seven Quake Movie Oscars; and Clan Undead's "Operation Bayshield", which introduced simulated lip synchronization and featured customized digital assets.
Released in December 1997, id Software's "Quake II" improved support for user-created 3-D models. However, without compatible editing software, filmmakers continued to create works based on the original "Quake". These included the ILL Clan's "Apartment Huntin'" and the Quake done Quick group's "Scourge Done Slick". "Quake II" demo editors became available in 1998. In particular, Keygrip 2.0 introduced "recamming", the ability to adjust camera locations after recording. Paul Marino called the addition of this feature "a defining moment for [m]achinima". With "Quake II" filming now feasible, Strange Company's 1999 production "Eschaton: Nightfall" was the first work to feature entirely custom-made character models.
The December 1999 release of id's "Quake III Arena" posed a problem to the "Quake" movie community. The game's demo file included information needed for computer networking; however, to prevent cheating, id warned of legal action for dissemination of the file format. Thus, it was impractical to enhance software to work with "Quake III". Concurrently, the novelty of "Quake" movies was waning. New productions appeared less frequently, and, according to Marino, the community needed to "reinvent itself" to offset this development.
"Borg War", a 90-minute animated Star Trek fan film, was produced using Elite Force 2 (a "Quake III" variant) and Starfleet Command 3, repurposing the games' voiceover clips to create a new plot. "Borg War" was nominated for two "Mackie" awards by the Academy of Machinima Arts & Sciences. An August 2007 screening at a "Star Trek" convention in Las Vegas was the first time that CBS/Paramount had approved the screening of a non-parody fan film at a licensed convention.
In January 2000, Hugh Hancock, the founder of Strange Company, launched a new website, machinima.com. A misspelled contraction of "machine cinema" ("machinema"), the term "machinima" was intended to dissociate in-game filming from a specific engine. The misspelling stuck because it also referenced anime. The new site featured tutorials, interviews, articles, and the exclusive release of Tritin Films' "Quad God". The first film made with "Quake III Arena", "Quad God" was also the first to be distributed as recorded video frames, not game-specific instructions. This change was initially controversial among machinima producers who preferred the smaller size of demo files. However, demo files required a copy of the game to view. The more accessible traditional video format broadened "Quad God"s viewership, and the work was distributed on CDs bundled with magazines. Thus, id's decision to protect "Quake III"s code inadvertently caused machinima creators to adopt more general solutions and, in doing so, to widen their audience. Within a few years, machinima films were almost exclusively distributed in common video file formats.
Machinima began to receive mainstream notice. Roger Ebert discussed it in a June 2000 article and praised Strange Company's machinima setting of Percy Bysshe Shelley's sonnet "Ozymandias". At Showtime Network's 2001 Alternative Media Festival, the ILL Clan's 2000 machinima film "Hardly Workin'" won the Best Experimental and Best in SHO awards. Steven Spielberg used "Unreal Tournament" to test special effects while working on his 2001 film "A.I. Artificial Intelligence". Eventually, interest spread to game developers. In July 2001, Epic Games announced that its upcoming game "Unreal Tournament 2003" would include Matinee, a machinima production software utility. As involvement increased, filmmakers released fewer new productions to focus on quality.
At the March 2002 Game Developers Conference, five machinima makers—Anthony Bailey, Hugh Hancock, Katherine Anna Kang, Paul Marino, and Matthew Ross—founded the AMAS, a non-profit organization dedicated to promoting machinima. At QuakeCon in August, the new organization held the first Machinima Film Festival, which received mainstream media coverage. "Anachronox: The Movie", by Jake Hughes and Tom Hall, won three awards, including Best Picture. The next year, "In the Waiting Line", directed by Tommy Pallotta and animated by Randy Cole using Fountainhead Entertainment's Machinimation tools, became the first machinima music video to air on MTV. As graphics technology improved, machinima filmmakers turned to other video games and consumer-grade video editing software. Using Bungie's 2001 game "Halo: Combat Evolved", Rooster Teeth Productions created the popular comedy series "Red vs. Blue: The Blood Gulch Chronicles". The series' second season premiered at the Lincoln Center for the Performing Arts in 2004.
Machinima has appeared on television, starting with G4's series "Portal". In the BBC series "Time Commanders", players re-enacted historic battles using Creative Assembly's real-time strategy game "Rome: Total War". MTV2's "Video Mods" re-creates music videos using characters from video games such as "The Sims 2", "BloodRayne", and "Tribes". Blizzard Entertainment helped to set part of "Make Love, Not Warcraft", an Emmy Award–winning 2006 episode of the comedy series "South Park", in its massively multiplayer online role-playing game (MMORPG) "World of Warcraft". By purchasing broadcast rights to Douglas Gayeton's machinima documentary "Molotov Alva and His Search for the Creator" in September 2007, HBO became the first television network to buy a work created completely in a virtual world. In December 2008, machinima.com signed fifteen experienced television comedy writers—including Patric Verrone, Bill Oakley, and Mike Rowe—to produce episodes for the site.
Commercial use of machinima has increased. Rooster Teeth sells DVDs of their "Red vs. Blue" series and, under sponsorship from Electronic Arts, helped to promote "The Sims 2" by using the game to make a machinima series, "The Strangerhood". Volvo Cars sponsored the creation of a 2004 advertisement, "Game: On", the first film to combine machinima and live action. Later, Electronic Arts commissioned Rooster Teeth to promote their "Madden NFL 07" video game. Blockhouse TV uses Moviestorm's machinima software to produce its pre-school educational DVD series "Jack and Holly".
Game developers have continued to increase support for machinima. Products such as Lionhead Studios' 2005 business simulation game "The Movies", Linden Research's virtual world "Second Life", and Bungie's 2007 first-person shooter "Halo 3" encourage the creation of user content by including machinima software tools. Using "The Movies", Alex Chan, a French resident with no previous filmmaking experience, took four days to create "The French Democracy", a short political film about the 2005 civil unrest in France. Third-party mods like "Garry's Mod" usually offer the ability to manipulate characters and take advantage of custom or migrated content, allowing for the creation of works like "Counter-Strike For Kids" that can be filmed using multiple games.
In a 2010 interview with PC Magazine, Valve CEO and co-founder Gabe Newell said that they wanted to make a "Half-Life" feature film themselves, rather than hand it off to a big-name director like Sam Raimi, and that their recent "Team Fortress 2" "Meet The Team" machinima shorts were experiments in doing just that. Two years later, Valve released their proprietary non-linear machinima software, Source Filmmaker.
Machinima has also been used for music video clips. "Second Life" virtual artist Bryn Oh created a work for Australian performer Megan Bernard's song "Clean Up Your Life", released in 2016. The first music video for 2018's "Old Town Road", by Lil Nas X, was composed entirely of footage from the 2018 Western action-adventure game "Red Dead Redemption 2".
The AMAS defines machinima as "animated filmmaking within a real-time virtual 3-D environment". In other 3-D animation methods, creators can control every frame and nuance of their characters but, in turn, must consider issues such as key frames and inbetweening. Machinima creators leave many rendering details to their host environments, but may thus inherit those environments' limitations. Second Life Machinima film maker Ozymandius King provided a detailed account of the process by which the artists at MAGE Magazine produce their videos. "Organizing for a photo shoot is similar to organizing for a film production. Once you find the actors / models, you have to scout locations, find clothes and props for the models and type up a shooting script. The more organized you are the less time it takes to shoot the scene." Because game animations focus on dramatic rather than casual actions, the range of character emotions is often limited. However, Kelland, Morris, and Lloyd state that a small range of emotions is often sufficient, as in successful Japanese anime television series.
Another difference is that machinima is created in real time, but other animation is pre-rendered. Real-time engines need to trade quality for speed and use simpler algorithms and models. In the 2001 animated film "Final Fantasy: The Spirits Within", every strand of hair on a character's head was independent; real-time rendering would likely force them to be treated as a single unit. Kelland, Morris, and Lloyd argue that improvement in consumer-grade graphics technology will allow more realism. Similarly, Paul Marino connects machinima to the increasing computing power predicted by Moore's law. For cut scenes in video games, issues other than visual fidelity arise. Pre-rendered scenes can require more digital storage space, weaken suspension of disbelief through contrast with real-time animation of normal gameplay, and limit interaction.
Like live action, machinima is recorded in real-time, and real people can act and control the camera. Filmmakers are often encouraged to follow traditional cinematic conventions, such as avoiding wide fields of view, the overuse of slow motion, and errors in visual continuity. Unlike live action, machinima involves less expensive, digital special effects and sets, possibly with a science-fiction or historical theme. Explosions and stunts can be tried and repeated without monetary cost and risk of injury, and the host environment may allow unrealistic physics. University of Cambridge experiments in 2002 and 2003 attempted to use machinima to re-create a scene from the 1942 live-action film "Casablanca". Machinima filming differed from traditional cinematography in that character expression was limited, but camera movements were more flexible and improvised. Nitsche compared this experiment to an unpredictable Dogme 95 production.
Berkeley sees machinima as "a strangely hybrid form, looking forwards and backwards, cutting edge and conservative at the same time". Machinima is a digital medium based on 3-D computer games, but most works have a linear narrative structure. Some, such as "Red vs. Blue" and "The Strangerhood", follow narrative conventions of television situational comedy. Nitsche agrees that pre-recorded ("reel") machinima tends to be linear and offers limited interactive storytelling, whereas machinima performed live and with audience interaction offers more opportunities for it. In creating their improvisational comedy series "On the Campaign Trail with Larry & Lenny Lumberjack" and talk show "Tra5hTa1k with ILL Will", the ILL Clan blended real and virtual performance by creating the works on-stage and interacting with a live audience. In another combination of real and virtual worlds, Chris Burke's talk show "This Spartan Life" takes place in "Halo 2"s open multiplayer environment. There, others playing in earnest may attack the host or his interviewee. Although other virtual theatrical performances have taken place in chat rooms and multi-user dungeons, machinima adds "cinematic camera work". Previously, such virtual cinematic performances with live audience interaction were confined to research labs equipped with powerful computers.
Machinima can be less expensive than other forms of filmmaking. Strange Company produced its feature-length machinima film "BloodSpell" for less than £10,000. Before using machinima, Burnie Burns and Matt Hullum of Rooster Teeth Productions spent US$9,000 to produce a live-action independent film. In contrast, the four Xbox game consoles used to make "Red vs. Blue" in 2005 cost $600. The low cost caused a product manager for Electronic Arts to compare machinima to the low-budget independent film "The Blair Witch Project", without the need for cameras and actors. Because these are seen as low barriers to entry, machinima has been called a "democratization of filmmaking". Berkeley weighs increased participation and a blurred line between producer and consumer against concerns that game copyrights limit commercialization and growth of machinima.
Comparatively, machinimists using pre-made virtual platforms like "Second Life" have indicated that their productions can be made quite successfully at no cost at all. Creators like Dutch director Chantal Harvey, producer of the 48 Hour Film Project's Machinima sector, have created upwards of 200 films using the platform. Harvey's advocacy of the genre has resulted in the involvement of film director Peter Greenaway, who served as a juror for the Machinima category and gave a keynote speech at the event.
Kelland, Morris, and Lloyd list four main methods of creating machinima. From simple to advanced, these are: relying on the game's AI to control most actions, digital puppetry, recamming, and precise scripting of actions. Although simple to produce, AI-dependent results are unpredictable, thus complicating the realization of a preconceived film script. For example, when Rooster Teeth produced "The Strangerhood" using "The Sims 2", a game that encourages the use of its AI, the group had to create multiple instances of each character to accommodate different moods. Individual instances were selected at different times to produce appropriate actions.
In digital puppetry, machinima creators become virtual actors. Each crew member controls a character in real-time, as in a multiplayer game. The director can use built-in camera controls, if available. Otherwise, video is captured from the perspectives of one or more puppeteers who serve as camera operators. Puppetry allows for improvisation and offers controls familiar to gamers, but requires more personnel than the other methods and is less precise than scripted recordings. However, some games, such as the "Halo" series, (except for Halo PC and Custom Edition, which allow AI and custom objects and characters), allow filming only through puppetry. According to Marino, other disadvantages are the possibility of disruption when filming in an open multi-user environment and the temptation for puppeteers to play the game in earnest, littering the set with blood and dead bodies. However, Chris Burke intentionally hosts "This Spartan Life" in these unpredictable conditions, which are fundamental to the show. Other works filmed using puppetry are the ILL Clan's improvisational comedy series "On the Campaign Trail with Larry & Lenny Lumberjack" and Rooster Teeth Productions' "Red vs. Blue". In recamming, which builds on puppetry, actions are first recorded to a game engine's demo file format, not directly as video frames. Without re-enacting scenes, artists can then manipulate the demo files to add cameras, tweak timing and lighting, and change the surroundings. This technique is limited to the few engines and software tools that support it.
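As a rough illustration of the recamming idea described above, the sketch below overrides the camera attached to recorded demo events while leaving the world events untouched, so a scene can be re-framed without being re-enacted. The event layout and the recam function are assumptions made for this example, not any engine's real demo format or tool API.

```python
def recam(demo_events, new_camera_path):
    """Return a copy of the demo with the camera overridden wherever a new
    camera keyframe exists for that tick; world events are left as recorded."""
    recammed = []
    for event in demo_events:
        event = dict(event)                      # do not mutate the original recording
        tick = event["tick"]
        if tick in new_camera_path:              # override only where a keyframe is defined
            event["camera"] = new_camera_path[tick]
        recammed.append(event)
    return recammed

# A two-tick recording: world events plus the camera used during play.
original = [
    {"tick": 0, "actor": "player1", "action": "run",  "camera": {"pos": (0, 0, 1)}},
    {"tick": 1, "actor": "player1", "action": "jump", "camera": {"pos": (0, 0, 1)}},
]
# A sweeping external camera added after the fact, without re-enacting the scene.
new_path = {1: {"pos": (5, 2, 3), "look_at": "player1"}}

print(recam(original, new_path))
```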
A technique common in cut scenes of video games, scripting consists of giving precise directions to the game engine. A filmmaker can work alone this way, as J. Thaddeus "Mindcrime" Skubis did in creating the nearly four-hour "The Seal of Nehahra" (2000), the longest work of machinima at the time. However, perfecting scripts can be time-consuming. Unless what-you-see-is-what-you-get (WYSIWYG) editing is available, as in "", changes may need to be verified in additional runs, and non-linear editing may be difficult. In this respect, Kelland, Morris, and Lloyd compare scripting to stop-motion animation. Another disadvantage is that, depending on the game, scripting capabilities may be limited or unavailable. Matinee, a machinima software tool included with "Unreal Tournament 2004", popularized scripting in machinima.
When "Diary of a Camper" was created, no software tools existed to edit demo files into films. Rangers clan member Eric "ArchV" Fowler wrote his own programs to reposition the camera and to splice footage from the "Quake" demo file. "Quake" movie editing software later appeared, but the use of conventional non-linear video editing software is now common. For example, Phil South inserted single, completely white frames into his work "No Licence" to enhance the visual impact of explosions. In the post-production of "", Rooster Teeth Productions added letterboxing with Adobe Premiere Pro to hide the camera player's head-up display.
Machinima creators have used different methods to handle limited character expression. The most typical ways that amateur-style machinima gets around limitations of expression include taking advantage of speech bubbles seen above players' heads when speaking, relying on the visual matching between a character's voice and appearance, and finding methods available within the game itself. "Garry's Mod" and Source Filmmaker include the ability to manipulate characters and objects in real-time, though the former relies on community addons to take advantage of certain engine features, and the latter renders scenes using non-real-time effects. In the "Halo" video game series, helmets completely cover the characters' faces. To prevent confusion, Rooster Teeth's characters move slightly when speaking, a convention shared with anime. Some machinima creators use custom software. For example, Strange Company uses Take Over GL Face Skins to add more facial expressions to their characters filmed in BioWare's 2002 role-playing video game "Neverwinter Nights". Similarly, Atussa Simon used a "library of faces" for characters in "The Battle of Xerxes". Some software, such as Epic Games' Impersonator for "Unreal Tournament 2004" and Valve's Faceposer for Source games, has been provided by developers. Another solution is to blend in non-machinima elements, as nGame did by inserting painted characters with more expressive faces into its 1999 film "Berlin Assassins". It may be possible to point the camera elsewhere or employ other creative cinematography or acting. For example, Tristan Pope combined creative character and camera positioning with video editing to suggest sexual actions in his controversial film "Not Just Another Love Story".
New machinima filmmakers often want to use game-provided digital assets, but doing so raises legal issues. As derivative works, their films could violate copyright or be controlled by the assets' copyright holder, an arrangement that can be complicated by separate publishing and licensing rights. The software license agreement for "The Movies" stipulates that Activision, the game's publisher, owns "any and all content within... Game Movies that was either supplied with the Program or otherwise made available... by Activision or its licensors..." Some game companies provide software to modify their own games, and machinima makers often cite fair use as a defense, but the issue has never been tested in court. A potential problem with this defense is that many works, such as "Red vs. Blue", focus more on satire, which is not as explicitly protected by fair use as parody. Berkeley adds that, even if machinima artists use their own assets, their works could be ruled derivative if filmed in a proprietary engine. The risk inherent in a fair-use defense would cause most machinima artists simply to yield to a cease-and-desist order. The AMAS has attempted to negotiate solutions with video game companies, arguing that an open-source or reasonably priced alternative would emerge from an unfavorable situation. Unlike "The Movies", some dedicated machinima software programs, such as Reallusion's iClone, have licenses that avoid claiming ownership of users' films featuring bundled assets.
Generally, companies want to retain creative control over their intellectual properties and are wary of fan-created works, like fan fiction. However, because machinima provides free marketing, they have generally refrained from responding with strict copyright enforcement. In 2003, Linden Lab was praised for changing license terms to allow users to retain ownership of works created in its virtual world "Second Life". Rooster Teeth initially tried to release "Red vs. Blue" unnoticed by "Halo"s owners because they feared that any communication would force them to end the project. However, Microsoft, Bungie's parent company at the time, contacted the group shortly after episode 2, and allowed them to continue without paying licensing fees.
A case in which developer control was asserted involved Blizzard Entertainment's action against Tristan Pope's "Not Just Another Love Story". Blizzard's community managers encouraged users to post game movies and screenshots, but viewers complained that Pope's suggestion of sexual actions through creative camera and character positioning was pornographic. Citing the user license agreement, Blizzard closed discussion threads about the film and prohibited links to it. Although Pope accepted Blizzard's right to some control, he remained concerned about censorship of material that already existed in-game in some form. Discussion ensued about boundaries between MMORPG player and developer control. Lowood asserted that this controversy demonstrated that machinima could be a medium of negotiation for players.
In August 2007, Microsoft issued its Game Content Usage Rules, a license intended to address the legal status of machinima based on its games, including the "Halo" series. Microsoft intended the rules to be "flexible", and, because it was unilateral, the license was legally unable to reduce rights. However, machinima artists, such as Edgeworks Entertainment, protested the prohibitions on extending Microsoft's fictional universes (a common component of fan fiction) and on selling anything from sites hosting derivative works. Compounding the reaction was the license's statement, "If you do any of these things, you can expect to hear from Microsoft's lawyers who will tell you that you have to stop distributing your items right away."
Surprised by the negative feedback, Microsoft revised and reissued the license after discussion with Hugh Hancock and an attorney for the Electronic Frontier Foundation. The rules allow noncommercial use and distribution of works derived from Microsoft-owned game content, except audio effects and soundtracks. The license prohibits reverse engineering and material that is pornographic or otherwise "objectionable". On distribution, derivative works that elaborate on a game's fictional universe or story are automatically licensed to Microsoft and its business partners. This prevents legal problems if a fan and Microsoft independently conceive similar plots.
A few weeks later, Blizzard Entertainment posted on WorldofWarcraft.com their "Letter to the Machinimators of the World", a license for noncommercial use of game content. It differs from Microsoft's declaration in that it addresses machinima specifically instead of general game-derived content, allows use of game audio if Blizzard can legally license it, requires derivative material to meet the Entertainment Software Rating Board's Teen content rating guideline, defines noncommercial use differently, and does not address extensions of fictional universes.
Hayes states that, although licensees' benefits are limited, the licenses reduce reliance on fair use regarding machinima. In turn, this recognition may reduce film festivals' concerns about copyright clearance. In an earlier analogous situation, festivals were concerned about documentary films until best practices for them were developed. According to Hayes, Microsoft and Blizzard helped themselves through their licenses because fan creations provide free publicity and are unlikely to harm sales. If the companies had instead sued for copyright infringement, defendants could have claimed estoppel or implied license because machinima had been unaddressed for a long time. Thus, these licenses secured their issuers' legal rights. Even though other companies, such as Electronic Arts, have encouraged machinima, they have avoided licensing it. Because of the involved legal complexity, they may prefer to under-enforce copyrights. Hayes believes that this legal uncertainty is a suboptimal solution and that, though limited and "idiosyncratic", the Microsoft and Blizzard licenses move towards an ideal video gaming industry standard for handling derivative works.
Just as machinima can be the cause of legal dispute in copyright ownership and illegal use, it makes heavy use of intertextuality and raises the question of authorship. Machinima takes copyrighted property (such as characters in a game engine) and repurposes it to tell a story, but another common practice in machinima-making is to retell an existing story from a different medium in that engine.
This re-appropriation of established texts, resources, and artistic properties to tell a story or make a statement is an example of a semiotic phenomenon known as intertextuality or resemiosis. A more common term for this phenomenon is "parody", but not all of these intertextual productions are intended for humor or satire, as demonstrated by the "Few Good G-Men" video. Furthermore, the argument of how well-protected machinima is under the guise of parody or satire is still highly debated. A piece of machinima may be reliant upon a protected property, but may not necessarily be making a statement about that property. Therefore, it is more accurate to refer to it simply as resemiosis, because it takes an artistic work and presents it in a new way, form, or medium. This resemiosis can be manifested in a number of ways. The machinima-maker can be considered an author who restructures the story and/or the world that the chosen game engine is built around. In the popular web series "Red vs. Blue", most of the storyline takes place within the game engine of "Halo: Combat Evolved" and its sequels. The "Halo" series has an extensive storyline already, but "Red vs. Blue" only ever makes mention of this storyline once, in the first episode. Even after over 200 episodes of the show being broadcast on the Internet since 2003, the only real similarities that can be drawn between "Red vs. Blue" and the game-world it takes place in are the character models, props, vehicles, and settings. Yet Burnie Burns and the machinima team at Rooster Teeth created an extensive storyline of their own using these game resources.
The ability to re-appropriate a game engine to film a video demonstrates intertextuality because it is an obvious example of art being a product of creation-through-manipulation rather than creation per se. The art historian Ernst Gombrich likened art to the "manipulation of a vocabulary" and this can be demonstrated in the creation of machinima. When using a game world to create a story, the author is influenced by the engine. For example, since so many video games are built around the concept of war, a significant portion of machinima films also take place in war-like environments.
Intertextuality is further demonstrated in machinima not only in the re-appropriation of content but in artistic and communicatory techniques. Machinima by definition is a form of puppetry, and thus this new form of digital puppetry employs age-old techniques from the traditional artform. It is also, however, a form of filmmaking, and must employ filmmaking techniques such as camera angles and proper lighting. Some machinima takes place in online environments with participants, actors, and "puppeteers" working together from thousands of miles apart. This means other techniques born from long-distance communication must also be employed. Thus, techniques and practices that would normally never be used in conjunction with one another in the creation of an artistic work end up being used intertextually in the creation of machinima.
Another way that machinima demonstrates intertextuality is in its tendency to make frequent references to texts, works, and other media, just as TV ads or humorous cartoons such as "The Simpsons" might do. For example, the machinima series "Freeman's Mind", created by Ross Scott, is filmed by taking a recording of Scott playing through the game "Half-Life" as a player normally would and combining it with a voiceover (also recorded by Scott) to emulate an inner monologue of the normally voiceless protagonist Gordon Freeman. Scott portrays Freeman as a snarky, sociopathic character who makes frequent references to works and texts including science fiction, horror films, action movies, American history, and renowned novels such as "Moby Dick". These references to works outside the game, often triggered by events within the game, are prime examples of the densely intertextual nature of machinima.
Nitsche and Lowood describe two methods of approaching machinima: starting from a video game and seeking a medium for expression or for documenting gameplay ("inside-out"), and starting outside a game and using it merely as an animation tool ("outside-in"). Kelland, Morris, and Lloyd similarly distinguish between works that retain noticeable connections to games, and those closer to traditional animation. Belonging to the former category, gameplay and stunt machinima began in 1997 with "Quake done Quick". Although not the first speedrunners, its creators used external software to manipulate camera positions after recording, which, according to Lowood, elevated speedrunning "from cyberathleticism to making movies". Stunt machinima remains popular. Kelland, Morris, and Lloyd state that "" stunt videos offer a new way to look at the game, and compare "Battlefield 1942" machinima creators to the Harlem Globetrotters. Built-in features for video editing and post-recording camera positioning in "Halo 3" were expected to facilitate gameplay-based machinima. MMORPGs and other virtual worlds have been captured in documentary films, such as "Miss Galaxies 2004", a beauty pageant that took place in the virtual world of "Star Wars Galaxies". Footage was distributed on the cover disc of the August 2004 issue of "PC Gamer". Douglas Gayeton's "Molotov Alva and His Search for the Creator" documents the title character's interactions in "Second Life".
Gaming-related comedy offers another possible entry point for new machinima producers. Presented as five-minute sketches, many machinima comedies are analogous to Internet Flash animations. After Clan Undead's 1997 work "Operation Bayshield" built on the earliest "Quake" movies by introducing narrative conventions of linear media and sketch comedy reminiscent of the television show "Saturday Night Live", the New-York-based ILL Clan further developed the genre in machinima through works including "Apartment Huntin'" and "Hardly Workin'". "Red vs. Blue: The Blood Gulch Chronicles" chronicles a futile civil war over five seasons and 100 episodes. Marino wrote that although the series' humor was rooted in video games, strong writing and characters caused the series to "transcend the typical gamer". An example of a comedy film that targets a more general audience is Strange Company's "Tum Raider", produced for the BBC in 2004.
Machinima has been used in music videos, of which the first documented example is Ken Thain's 2002 "Rebel vs. Thug", made in collaboration with Chuck D. For this, Thain used Quake2Max, a modification of "Quake II" that provided cel-shaded animation. The following year, Tommy Pallotta directed "In the Waiting Line" for the British group Zero 7. He told "Computer Graphics World", "It probably would have been quicker to do the film in a 3D animated program. But now, we can reuse the assets in an improvisational way." Scenes of the game "Postal 2" can be seen in the music video of the Black Eyed Peas single "Where Is the Love?". In television, MTV features video game characters on its show "Video Mods". Among "World of Warcraft" players, dance and music videos became popular after dancing animations were discovered in the game.
Others use machinima in drama. These works may or may not retain signs of their video game provenance. "Unreal Tournament" is often used for science fiction and "Battlefield 1942" for war, but some artists subvert their chosen game's setting or completely detach their work from it. In 1999, Strange Company used "Quake II" in "Eschaton: Nightfall", a horror film based on the work of H. P. Lovecraft (although the original "Quake" also drew on Lovecraft lore). A later example is Damien Valentine's series "Consanguinity", made using BioWare's 2002 computer game "Neverwinter Nights" and based on the television series "Buffy the Vampire Slayer". Another genre consists of experimental works that attempt to push the boundaries of game engines. One example, Fountainhead's "Anna", is a short film that focuses on the cycle of life and is reminiscent of "Fantasia". Other productions go farther and completely eschew a 3-D appearance. Friedrich Kirschner's "The Tournament" and "The Journey" deliberately appear hand-drawn, and Dead on Que's "Fake Science" resembles two-dimensional Eastern European modernist animation from the 1970s.
Another derivative genre, termed "machinima verite" after cinéma vérité, seeks to add a documentary feel and additional realism to the machinima piece. L.M. Sabo's "CATACLYSM" achieves a machinima verite style by displaying and recapturing the machinima video with a low-resolution, black-and-white hand-held video camera to produce a shaky camera effect. Other elements of cinéma vérité, such as longer takes, sweeping camera transitions, and jump cuts, may be included to complete the effect.
Some have used machinima to make political statements, often from left-wing perspectives. Alex Chan's take on the 2005 civil unrest in France, "The French Democracy", attained mainstream attention and inspired other machinima commentaries on American and British society. Horwatt deemed Thuyen Nguyen's 2006 "An Unfair War", a criticism of the Iraq War, similar in its attempt "to speak for those who cannot". Joshua Garrison mimicked Chan's "political pseudo-documentary style" in his "Virginia Tech Massacre", a controversial "Halo 3"–based re-enactment and explanation of the eponymous real-life events. More recently, "War of Internet Addiction" addressed internet censorship in China using "World of Warcraft".
After the QML's Quake Movie Oscars, dedicated machinima awards did not reappear until the AMAS created the Mackies for its first Machinima Film Festival in 2002. The annual festival has become an important one for machinima creators. Ho Chee Yue, a founder of the marketing company AKQA, helped to organize the first festival for the Asia chapter of the AMAS in 2006. In 2007, the AMAS supported the first machinima festival held in Europe. In addition to these smaller ceremonies, Hugh Hancock of Strange Company worked to add an award for machinima to the more general Bitfilm Festival in 2003. Other general festivals that allow machinima include the Sundance Film Festival, the Florida Film Festival, and the New Media Film Festival. The Ottawa International Animation Festival opened a machinima category in 2004, but, citing the need for "a certain level of excellence", declined to award anything to the category's four entries that year.
Machinima has been showcased in contests sponsored by game companies. Epic Games' popular Make Something Unreal contest included machinima that impressed event organizer Jeff Morris because of "the quality of entries that really push the technology, that accomplish things that Epic never envisioned". In December 2005, Blizzard Entertainment and Xfire, a gaming-focused instant messaging service, jointly sponsored a "World of Warcraft" machinima contest.
https://en.wikipedia.org/wiki?curid=19597
Mutagenesis
Mutagenesis is a process by which the genetic information of an organism is changed, resulting in a mutation. It may occur spontaneously in nature, or as a result of exposure to mutagens. It can also be achieved experimentally using laboratory procedures. In nature mutagenesis can lead to cancer and various heritable diseases, but it is also a driving force of evolution. Mutagenesis as a science was developed based on work done by Hermann Muller, Charlotte Auerbach and J. M. Robson in the first half of the 20th century.
DNA may be modified, either naturally or artificially, by a number of physical, chemical and biological agents, resulting in mutations. In the early 1920s, Hermann Muller found that high temperatures can mutate genes, and in 1927 he demonstrated a causal link to mutation by irradiating fruit flies with relatively high doses of X-rays and observing heritable changes. Muller observed a number of chromosome rearrangements in his experiments, and suggested mutation as a cause of cancer. The association of exposure to radiation and cancer had been observed as early as 1902, six years after the discovery of X-rays by Wilhelm Röntgen and of radioactivity by Henri Becquerel. Muller's contemporary Lewis Stadler also showed the mutational effect of X-rays on barley in 1928, and of ultraviolet (UV) radiation on maize in 1936. In the 1940s, Charlotte Auerbach and J. M. Robson found that mustard gas can also cause mutations in fruit flies.
While changes to the chromosome caused by X-rays and mustard gas were readily observable to the early researchers, other changes to the DNA induced by other mutagens were not so easily observable, and their mechanisms proved complex and took longer to unravel. For example, soot was suggested to be a cause of cancer as early as 1775, and coal tar was demonstrated to cause cancer in 1915. The chemicals involved in both were later shown to be polycyclic aromatic hydrocarbons (PAHs). PAHs by themselves are not carcinogenic, and it was proposed in 1950 that the carcinogenic forms of PAHs are the oxides produced as metabolites by cellular processes. The metabolic process was identified in the 1960s as catalysis by cytochrome P450, which produces reactive species that can interact with the DNA to form adducts; the mechanism by which the PAH adducts give rise to mutation, however, is still under investigation.
Mammalian nuclear DNA may sustain more than 60,000 damage episodes per cell per day, as listed with references in DNA damage (naturally occurring). If left uncorrected, these adducts, after misreplication past the damaged sites, can give rise to mutations. In nature, the mutations that arise may be beneficial or deleterious—this is the driving force of evolution. An organism may acquire new traits through genetic mutation, but mutation may also result in impaired function of the genes, and in severe cases, cause the death of the organism. Mutation is also a major source for acquisition of resistance to antibiotics in bacteria and possibly to antifungal agents in yeasts. In the laboratory, however, mutagenesis is a useful technique for generating mutations that allows the functions of genes and gene products to be examined in detail, producing proteins with improved characteristics or novel functions, as well as mutant strains with useful properties. Initially, the ability of radiation and chemical mutagens to cause mutation was exploited to generate random mutations, but later techniques were developed to introduce specific mutations.
Humans naturally pass on an average of 60 new mutations to their children, and fathers transmit more depending on their age, adding an average of two new mutations for every additional year of paternal age.
DNA damage is an abnormal alteration in the structure of DNA that cannot, itself, be replicated when DNA replicates. In contrast, a mutation is a change in the nucleic acid sequence that can be replicated; hence, a mutation can be inherited from one generation to the next. Damage can occur from chemical addition (adduct) to a base of DNA, structural disruption of a base (creating an abnormal nucleotide or nucleotide fragment), or a break in one or both DNA strands. Such DNA damage may result in mutation. When DNA containing damage is replicated, an incorrect base may be inserted in the new complementary strand as it is being synthesized. The incorrect insertion in the new strand will occur opposite the damaged site in the template strand, and this incorrect insertion can become a mutation (i.e. a changed base pair) in the next round of replication. Furthermore, double-strand breaks in DNA may be repaired by an inaccurate repair process, non-homologous end joining, which produces mutations. Mutations can ordinarily be avoided if accurate DNA repair systems recognize DNA damage and repair it prior to completion of the next round of replication. At least 169 enzymes are either directly employed in DNA repair or influence DNA repair processes. Of these, 83 are directly employed in the 5 types of DNA repair processes indicated in the chart shown in the article DNA repair.
Mutagenesis may occur endogenously, for example, through spontaneous hydrolysis, or through normal cellular processes that can generate reactive oxygen species and DNA adducts, or through error in replication and repair. Mutagenesis may also arise as a result of the presence of environmental mutagens that induce changes to the DNA. The mechanism by which mutation arises varies according to the causative agent, the mutagen, involved. Most mutagens act either directly, or indirectly via mutagenic metabolites, on the DNA producing lesions. Some, however, may affect the replication or chromosomal partition mechanism, and other cellular processes.
Mutagenesis may also be self-induced by unicellular organisms when environmental conditions are very restrictive, for instance in the presence of toxic substances like antibiotics or, in yeasts, in the presence of an antifungal agent or in the absence of a nutrient.
Many chemical mutagens require biological activation to become mutagenic. An important group of enzymes involved in the generation of mutagenic metabolites is cytochrome P450. Other enzymes that may also produce mutagenic metabolites include glutathione S-transferase and microsomal epoxide hydrolase. Mutagens that are not mutagenic by themselves but require biological activation are called promutagens.
Many mutations arise as a result of problems caused by DNA lesions during replication, resulting in errors in replication. In bacteria, extensive damage to DNA due to mutagens results in single-stranded DNA gaps during replication. This induces the SOS response, an emergency repair process that is also error-prone, thereby generating mutations. In mammalian cells, stalling of replication at damaged sites induces a number of rescue mechanisms that help bypass DNA lesions, but which also may result in errors. The Y family of DNA polymerases specializes in DNA lesion bypass in a process termed translesion synthesis (TLS) whereby these lesion-bypass polymerases replace the stalled high-fidelity replicative DNA polymerase, transit the lesion and extend the DNA until the lesion has been passed so that normal replication can resume. These processes may be error-prone or error-free.
DNA is not entirely stable in aqueous solution. Under physiological conditions the glycosidic bond may be hydrolyzed spontaneously, and 10,000 purine sites in DNA are estimated to be depurinated each day in a cell. Numerous repair pathways exist for such damage; however, if an apurinic site is not repaired, misincorporation of nucleotides may occur during replication. Adenine is preferentially incorporated by DNA polymerases opposite an apurinic site.
Cytosine may also become deaminated to uracil, at one five-hundredth of the rate of depurination, which can result in a G to A transition. Eukaryotic cells also contain 5-methylcytosine, thought to be involved in the control of gene transcription, which can become deaminated into thymine.
Bases may be modified endogenously by normal cellular molecules. For example, DNA may be methylated by S-adenosylmethionine, and glycosylated by reducing sugars.
Many compounds, such as PAHs, aromatic amines, aflatoxin and pyrrolizidine alkaloids, may be converted by cytochrome P450 into reactive species. These metabolites form adducts with the DNA, which can cause errors in replication, and the bulky aromatic adducts may form stable intercalation between bases and block replication. The adducts may also induce conformational changes in the DNA. Some adducts may also result in the depurination of the DNA; it is, however, uncertain how significant such adduct-induced depurination is in generating mutation.
Alkylation and arylation of bases can cause errors in replication. Some alkylating agents, such as N-nitrosamines, may require the catalytic reaction of cytochrome P450 for the formation of a reactive alkyl cation. The N7 and O6 of guanine and the N3 and N7 of adenine are most susceptible to attack. N7-guanine adducts form the bulk of DNA adducts, but they appear to be non-mutagenic. Alkylation at O6 of guanine, however, is harmful because excision repair of the O6-adduct of guanine may be poor in some tissues such as the brain. The O6 methylation of guanine can result in a G to A transition, while O4-methylthymine can be mispaired with guanine. The type of mutation generated, however, may depend on the size and type of the adduct as well as the DNA sequence.
Ionizing radiation and reactive oxygen species often oxidize guanine to produce 8-oxoguanine, which can mispair with adenine during replication and thereby give rise to G to T transversions.
As noted above, the number of DNA damage episodes occurring in a mammalian cell per day is high (more than 60,000 per day). Frequent occurrence of DNA damage is likely a problem for all DNA-containing organisms, and the need to cope with DNA damage and minimize its deleterious effects is likely a fundamental problem for life.
Most spontaneous mutations likely arise from error-prone trans-lesion synthesis past a DNA damage site in the template strand during DNA replication. This process can overcome potentially lethal blockages, but at the cost of introducing inaccuracies in daughter DNA. The causal relationship of DNA damage to spontaneous mutation is illustrated by aerobically growing "E. coli" bacteria, in which 89% of spontaneously occurring base substitution mutations are caused by reactive oxygen species (ROS)-induced DNA damage. In yeast, more than 60% of spontaneous single-base pair substitutions and deletions are likely caused by trans-lesion synthesis.
An additional significant source of mutations in eukaryotes is the inaccurate DNA repair process non-homologous end joining, which is often employed in the repair of double-strand breaks.
In general, it appears that the main underlying cause of spontaneous mutation is error-prone trans-lesion synthesis during DNA replication, and that the error-prone non-homologous end joining repair pathway may also be an important contributor in eukaryotes.
Some alkylating agents may produce crosslinking of DNA. Some naturally occurring chemicals may also promote crosslinking, such as psoralens after activation by UV radiation, and nitrous acid. Interstrand cross-linking is more damaging because it blocks replication and transcription and can cause chromosomal breakages and rearrangements. Some crosslinkers, such as cyclophosphamide, mitomycin C and cisplatin, are used as anticancer chemotherapeutics because of their high degree of toxicity to proliferating cells.
UV radiation promotes the formation of a cyclobutyl ring between adjacent thymines, resulting in the formation of pyrimidine dimers. In human skin cells, thousands of dimers may be formed in a day due to normal exposure to sunlight. DNA polymerase η may help bypass these lesions in an error-free manner; however, individuals with defective DNA repair function, such as sufferers of xeroderma pigmentosum, are sensitive to sunlight and may be prone to skin cancer.
The planar structure of chemicals such as ethidium bromide and proflavine allows them to insert between bases in DNA. This insertion causes the DNA backbone to stretch and makes slippage during replication more likely, since the bonding between the strands is destabilized by the stretching. Forward slippage results in a deletion mutation, while reverse slippage results in an insertion mutation. Also, the intercalation into DNA of anthracyclines such as daunorubicin and doxorubicin interferes with the functioning of the enzyme topoisomerase II, blocking replication as well as causing mitotic homologous recombination.
Ionizing radiation may produce highly reactive free radicals that can break the bonds in the DNA. Double-stranded breakages are especially damaging and hard to repair, producing translocation and deletion of part of a chromosome. Alkylating agents like mustard gas may also cause breakages in the DNA backbone. Oxidative stress may also generate highly reactive oxygen species that can damage DNA. Incorrect repair of other damage induced by the highly reactive species can also lead to mutations.
Transposons and viruses may insert DNA sequences into coding regions or functional elements of a gene and result in inactivation of the gene.
While most mutagens produce effects that ultimately result in errors in replication, for example creating adducts that interfere with replication, some mutagens may directly affect the replication process or reduce its fidelity. Base analogs such as 5-bromouracil may substitute for thymine in replication. Metals such as cadmium, chromium, and nickel can increase mutagenesis in a number of ways in addition to direct DNA damage, for example by reducing the ability to repair errors, as well as by producing epigenetic changes.
Adaptive mutagenesis has been defined as mutagenesis mechanisms that enable an organism to adapt to an environmental stress. Since the variety of environmental stresses is very broad, the mechanisms that enable it are also quite broad, as far as research in the field has shown. For instance, in bacteria, modulation of the SOS response and endogenous prophage DNA synthesis have been shown to increase "Acinetobacter baumannii" resistance to ciprofloxacin. Resistance mechanisms are presumed to be linked to chromosomal mutation untransferable via horizontal gene transfer in some members of the family Enterobacteriaceae, such as "E. coli", "Salmonella" spp., "Klebsiella" spp., and "Enterobacter" spp. Chromosomal events, especially gene amplification, also seem to be relevant to this adaptive mutagenesis in bacteria.
Research in eukaryotic cells is much scarcer, but chromosomal events also seem to be relevant: an ectopic intrachromosomal recombination has been reported to be involved in the acquisition of resistance to 5-fluorocytosine in "Saccharomyces cerevisiae", and genome duplications have been found to confer resistance in "S. cerevisiae" to nutrient-poor environments.
Mutagenesis in the laboratory is an important technique whereby DNA mutations are deliberately engineered to produce mutant genes, proteins, or strains of organism. Various constituents of a gene, such as its control elements and its gene product, may be mutated so that the functioning of a gene or protein can be examined in detail. The mutation may also produce mutant proteins with interesting properties, or enhanced or novel functions that may be of commercial use. Mutant strains may also be produced that have practical application or allow the molecular basis of particular cell function to be investigated.
Early methods of mutagenesis produced entirely random mutations; however, later methods of mutagenesis may produce site-specific mutation.
https://en.wikipedia.org/wiki?curid=19599
Mackenzie Bowell
Sir Mackenzie Bowell (December 27, 1823 – December 10, 1917) was a Canadian newspaper publisher and politician, who served as the fifth prime minister of Canada, in office from 1894 to 1896.
Bowell was born in Rickinghall, Suffolk, England. He and his family moved to Belleville, Ontario, in 1832. His mother died two years after their arrival. When in his early teens, Bowell was apprenticed to the printing shop of the local newspaper, the "Belleville Intelligencer", and some 15 years later, became its owner and proprietor.
In 1867, following Confederation, he was elected to the House of Commons for the Conservative Party. Bowell entered cabinet in 1878, and would serve under three prime ministers: John A. Macdonald, John Abbott, and John Thompson. He served variously as Minister of Customs (1878–1892), Minister of Militia and Defence (1892), and Minister of Trade and Commerce (1892–1894). Bowell kept his Commons seat continuously for 25 years, through a period of Liberal Party rule in the 1870s. In 1892, Bowell was appointed to the Senate. He became Leader of the Government in the Senate the following year.
In December 1894, Prime Minister Thompson unexpectedly died in office, aged only 49. The Earl of Aberdeen, Canada's governor general, appointed Bowell to replace Thompson as prime minister, due to his status as the most senior cabinet member. The main problem of Bowell's tenure as prime minister was the Manitoba Schools Question. His attempts at compromise alienated members of his own party, and following a Cabinet revolt in early 1896 he was forced to resign in favour of Charles Tupper. Bowell stayed on as a senator until his death at the age of 93, but never again held ministerial office; he served continuously as a Canadian parliamentarian for 50 years.
Bowell was born in Rickinghall, England, to John Bowell and Elizabeth Marshall. In 1832 his family emigrated to Belleville, Upper Canada, where he apprenticed with the printer at the town newspaper, "The Belleville Intelligencer". He became a successful printer and editor with that newspaper, and later its owner. He was both a Freemason and an Orangeman, serving as Grand Master of the Orange Order of British North America from 1870 to 1878.
In 1847 he married Harriet Moore, with whom he had five sons and four daughters.
Bowell was first elected to the House of Commons in 1867 as a Conservative for the riding of North Hastings, Ontario. He held his seat for the Conservatives when they lost the election of January 1874, in the wake of the Pacific Scandal. Later that year he was instrumental in having Louis Riel expelled from the House.
In 1878, with the Conservatives again governing, he joined the Cabinet as Minister of Customs. In 1892 he became Minister of Militia and Defence, having held his Commons seat continuously for 25 years. A competent, hardworking administrator, Bowell remained in Cabinet as Minister of Trade and Commerce, a newly created portfolio, after he became a Senator that same year. His visit to Australia in 1893 led to the first leaders' conference of British colonies and territories, held in Ottawa in 1894. He became Leader of the Government in the Senate on October 31, 1893.
In December 1894, Prime Minister Sir John Sparrow David Thompson died suddenly, and Bowell, as the most senior Cabinet minister, was appointed in Thompson's stead by the Governor General. Bowell thus became the second of just two Canadian Prime Ministers (after John Abbott) to hold that office while serving in the Senate rather than the House of Commons.
As Prime Minister, Bowell faced the Manitoba Schools Question. In 1890 Manitoba had abolished public funding for denominational schools, both Catholic and Protestant, which many thought was contrary to the provisions made for denominational schools in the Manitoba Act of 1870. However, in a court challenge, the Judicial Committee of the Privy Council held that Manitoba's abolition of public funding for denominational schools was consistent with the Manitoba Act provision. In a second court case, the Judicial Committee held that the federal Parliament had the authority to enact remedial legislation to force Manitoba to re-establish the funding.
Bowell and his predecessors struggled to solve this problem, which divided the country, the government, and even Bowell's own Cabinet. He was further hampered in his handling of the issue by his own indecisiveness on it and by his inability, as a Senator, to take part in debates in the House of Commons. Bowell backed legislation, already drafted, that would have forced Manitoba to restore its Catholic schools, but then postponed it due to opposition within his Cabinet. With the ordinary business of government at a standstill, Bowell's Cabinet decided that he was incompetent to lead and so, to force him to step down, seven ministers resigned and then foiled the appointment of successors.
Though Bowell denounced the rebellious ministers as "a nest of traitors," he had to agree to resign. After ten days, following an intervention on Bowell's behalf by the Governor General, the crisis was resolved and matters seemingly returned to normal: six of the ministers were reinstated, while Charles Tupper joined the Cabinet to fill the seventh place and effectively assumed the leadership. Tupper, who had been Canadian High Commissioner to the United Kingdom, had been recalled by the plotters to replace Bowell. Bowell formally resigned in favour of Tupper at the end of the parliamentary session.
Bowell stayed in the Senate, serving as his party's leader there until 1906, and afterward as a regular Senator until his death in 1917, having served continuously for more than 50 years as a federal parliamentarian.
He died of pneumonia in Belleville, seventeen days short of his 94th birthday. He was buried in the Belleville cemetery. His funeral was attended by a full complement of the Orange Order, but by no current or former elected member of the government.
Bowell was designated a National Historic Person in 1945, on the advice of the Historic Sites and Monuments Board of Canada.
The Post Office Department honored Bowell with a commemorative stamp in 1954, part of a series on prime ministers.
In their 1998 study of the Canadian prime ministers up through Jean Chrétien, J. L. Granatstein and Norman Hillmer found that a survey of Canadian historians ranked Bowell #19 out of the 20 Prime Ministers up until then.
Until 2017, Bowell remained the only Canadian prime minister without a full-length biography of his life and career. That gap was filled when the Belleville historian Betsy Dewar Boyce's book "The Accidental Prime Minister" was published by the Bancroft, Ontario publisher Kirby Books. The book was published on the centennial of Bowell's death. Boyce had died in 2007, having unsuccessfully sought a publisher for her work for a decade.
The following jurist was appointed to the Supreme Court of Canada by the Governor General during Bowell's tenure:
"The Accidental Prime Minister", by Betsy Dewar Boyce, 2017, Kirby Publishing, Bancroft, Ontario, .
https://en.wikipedia.org/wiki?curid=19602
Manhattan Project
The Manhattan Project was a research and development undertaking during World War II that produced the first nuclear weapons. It was led by the United States with the support of the United Kingdom and Canada. From 1942 to 1946, the project was under the direction of Major General Leslie Groves of the U.S. Army Corps of Engineers. Nuclear physicist Robert Oppenheimer was the director of the Los Alamos Laboratory that designed the actual bombs. As engineer districts by convention carried the name of the city where they were located, the Army component of the project was designated the Manhattan District; "Manhattan" gradually superseded the official codename, Development of Substitute Materials, for the entire project. Along the way, the project absorbed its earlier British counterpart, Tube Alloys. The Manhattan Project began modestly in 1939, but grew to employ more than 130,000 people and cost nearly US$2 billion. Over 90 percent of the cost was for building factories and producing fissile material, with less than 10 percent for the development and production of the weapons. Research and production took place at more than thirty sites across the United States, the United Kingdom, and Canada.
Two types of atomic bombs were developed concurrently during the war: a relatively simple gun-type fission weapon and a more complex implosion-type nuclear weapon. The Thin Man gun-type design proved impractical to use with plutonium, and therefore a simpler gun-type weapon called Little Boy was developed that used uranium-235, an isotope that makes up only 0.7 percent of natural uranium. Since it is chemically identical to the most common isotope, uranium-238, and has almost the same mass, separating the two proved difficult. Three methods were employed for uranium enrichment: electromagnetic separation, gaseous diffusion, and thermal diffusion. Most of this work was performed at the Clinton Engineer Works at Oak Ridge, Tennessee.
In parallel with the work on uranium was an effort to produce plutonium, which was discovered at the University of California in 1940. After the feasibility of the world's first artificial nuclear reactor, the Chicago Pile-1, was demonstrated in 1942 at the Metallurgical Laboratory at the University of Chicago, the Project designed the X-10 Graphite Reactor at Oak Ridge and the production reactors at the Hanford Site in Washington state, in which uranium was irradiated and transmuted into plutonium. The plutonium was then chemically separated from the uranium, using the bismuth phosphate process. The Fat Man plutonium implosion-type weapon was developed in a concerted design and development effort by the Los Alamos Laboratory.
The project was also charged with gathering intelligence on the German nuclear weapon project. Through Operation Alsos, Manhattan Project personnel served in Europe, sometimes behind enemy lines, where they gathered nuclear materials and documents, and rounded up German scientists. Despite the Manhattan Project's tight security, Soviet atomic spies successfully penetrated the program. The first nuclear device ever detonated was an implosion-type bomb at the Trinity test, conducted at New Mexico's Alamogordo Bombing and Gunnery Range on 16 July 1945. Little Boy and Fat Man bombs were used a month later in the atomic bombings of Hiroshima and Nagasaki, respectively, with Manhattan Project personnel serving as bomb assembly technicians, and as weaponeers on the attack aircraft. In the immediate postwar years, the Manhattan Project conducted weapons testing at Bikini Atoll as part of Operation Crossroads, developed new weapons, promoted the development of the network of national laboratories, supported medical research into radiology and laid the foundations for the nuclear navy. It maintained control over American atomic weapons research and production until the formation of the United States Atomic Energy Commission in January 1947.
The discovery of nuclear fission by German chemists Otto Hahn and Fritz Strassmann in 1938, and its theoretical explanation by Lise Meitner and Otto Frisch, made the development of an atomic bomb a theoretical possibility. There were fears that a German atomic bomb project would develop one first, especially among scientists who were refugees from Nazi Germany and other fascist countries. In August 1939, Hungarian-born physicists Leo Szilard and Eugene Wigner drafted the Einstein–Szilard letter, which warned of the potential development of "extremely powerful bombs of a new type". It urged the United States to take steps to acquire stockpiles of uranium ore and accelerate the research of Enrico Fermi and others into nuclear chain reactions. They had it signed by Albert Einstein and delivered to President Franklin D. Roosevelt. Roosevelt called on Lyman Briggs of the National Bureau of Standards to head the Advisory Committee on Uranium to investigate the issues raised by the letter. Briggs held a meeting on 21 October 1939, which was attended by Szilárd, Wigner and Edward Teller. The committee reported back to Roosevelt in November that uranium "would provide a possible source of bombs with a destructiveness vastly greater than anything now known."
The U.S. Navy awarded Columbia University $6,000 in funding, most of which Enrico Fermi and Szilard spent on purchasing graphite. A team of Columbia professors including Fermi, Szilard, Eugene T. Booth and John Dunning created the first nuclear fission reaction in the Americas, verifying the work of Hahn and Strassmann. The same team subsequently built a series of prototype nuclear reactors (or "piles" as Fermi called them) in Pupin Hall at Columbia, but were not yet able to achieve a chain reaction. The Advisory Committee on Uranium became the National Defense Research Committee (NDRC) on Uranium when that organization was formed on 27 June 1940. Briggs proposed spending $167,000 on research into uranium, particularly the uranium-235 isotope, and plutonium, which was discovered in 1940 at the University of California. On 28 June 1941, Roosevelt signed Executive Order 8807, which created the Office of Scientific Research and Development (OSRD), with Vannevar Bush as its director. The office was empowered to engage in large engineering projects in addition to research. The NDRC Committee on Uranium became the S-1 Section of the OSRD; the word "uranium" was dropped for security reasons.
In Britain, Frisch and Rudolf Peierls at the University of Birmingham had made a breakthrough investigating the critical mass of uranium-235 in June 1939. Their calculations indicated that it was within an order of magnitude of , which was small enough to be carried by a bomber of the day. Their March 1940 Frisch–Peierls memorandum initiated the British atomic bomb project and its MAUD Committee, which unanimously recommended pursuing the development of an atomic bomb. In July 1940, Britain had offered to give the United States access to its scientific research, and the Tizard Mission's John Cockcroft briefed American scientists on British developments. He discovered that the American project was smaller than the British, and not as far advanced.
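For orientation only (a simplified textbook relation, not the method Frisch and Peierls actually used), one-group diffusion theory gives the critical radius of a bare fissile sphere as

$$ R_c \approx \pi \sqrt{\frac{D}{\nu\Sigma_f - \Sigma_a}}, $$

where $D$ is the neutron diffusion coefficient, $\nu$ the average number of neutrons released per fission, and $\Sigma_f$ and $\Sigma_a$ the macroscopic fission and absorption cross sections; the corresponding critical mass scales as the material density times the cube of this radius.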
As part of the scientific exchange, the MAUD Committee's findings were conveyed to the United States. One of its members, the Australian physicist Mark Oliphant, flew to the United States in late August 1941 and discovered that data provided by the MAUD Committee had not reached key American physicists. Oliphant then set out to find out why the committee's findings were apparently being ignored. He met with the Uranium Committee and visited Berkeley, California, where he spoke persuasively to Ernest O. Lawrence. Lawrence was sufficiently impressed to commence his own research into uranium. He in turn spoke to James B. Conant, Arthur H. Compton and George B. Pegram. Oliphant's mission was therefore a success; key American physicists were now aware of the potential power of an atomic bomb.
On 9 October 1941, President Roosevelt approved the atomic program after he convened a meeting with Vannevar Bush and Vice President Henry A. Wallace. To control the program, he created a Top Policy Group consisting of himself—although he never attended a meeting—Wallace, Bush, Conant, Secretary of War Henry L. Stimson, and the Chief of Staff of the Army, General George C. Marshall. Roosevelt chose the Army to run the project rather than the Navy, because the Army had more experience with management of large-scale construction projects. He also agreed to coordinate the effort with that of the British, and on 11 October he sent a message to Prime Minister Winston Churchill, suggesting that they correspond on atomic matters.
The S-1 Committee held its meeting on 18 December 1941 "pervaded by an atmosphere of enthusiasm and urgency" in the wake of the attack on Pearl Harbor and the subsequent United States declaration of war upon Japan and then on Germany. Work was proceeding on three different techniques for isotope separation to separate uranium-235 from the more abundant uranium-238. Lawrence and his team at the University of California investigated electromagnetic separation, while Eger Murphree and Jesse Wakefield Beams's team looked into gaseous diffusion at Columbia University, and Philip Abelson directed research into thermal diffusion at the Carnegie Institution of Washington and later the Naval Research Laboratory. Murphree was also the head of an unsuccessful separation project using gas centrifuges.
Meanwhile, there were two lines of research into nuclear reactor technology, with Harold Urey continuing research into heavy water at Columbia, while Arthur Compton brought the scientists working under his supervision from Columbia, California and Princeton University to join his team at the University of Chicago, where he organized the Metallurgical Laboratory in early 1942 to study plutonium and reactors using graphite as a neutron moderator. Briggs, Compton, Lawrence, Murphree, and Urey met on 23 May 1942 to finalize the S-1 Committee recommendations, which called for all five technologies to be pursued. This was approved by Bush, Conant, and Brigadier General Wilhelm D. Styer, the chief of staff of Major General Brehon B. Somervell's Services of Supply, who had been designated the Army's representative on nuclear matters. Bush and Conant then took the recommendation to the Top Policy Group with a budget proposal for $54 million for construction by the United States Army Corps of Engineers, $31 million for research and development by OSRD and $5 million for contingencies in fiscal year 1943. The Top Policy Group in turn sent it on 17 June 1942 to the President, who approved it by writing "OK FDR" on the document.
Compton asked theoretical physicist J. Robert Oppenheimer of the University of California to take over research into fast neutron calculations—the key to calculations of critical mass and weapon detonation—from Gregory Breit, who had quit on 18 May 1942 because of concerns over lax operational security. John H. Manley, a physicist at the Metallurgical Laboratory, was assigned to assist Oppenheimer by contacting and coordinating experimental physics groups scattered across the country. Oppenheimer and Robert Serber of the University of Illinois examined the problems of neutron diffusion—how neutrons moved in a nuclear chain reaction—and hydrodynamics—how the explosion produced by a chain reaction might behave. To review this work and the general theory of fission reactions, Oppenheimer and Fermi convened meetings at the University of Chicago in June and at the University of California in July 1942 with theoretical physicists Hans Bethe, John Van Vleck, Edward Teller, Emil Konopinski, Robert Serber, Stan Frankel, and Eldred C. Nelson, the latter three former students of Oppenheimer, and experimental physicists Emilio Segrè, Felix Bloch, Franco Rasetti, John Henry Manley, and Edwin McMillan. They tentatively confirmed that a fission bomb was theoretically possible.
There were still many unknown factors. The properties of pure uranium-235 were relatively unknown, as were those of plutonium, an element that had only been discovered in February 1941 by Glenn Seaborg and his team. The scientists at the (July 1942) Berkeley conference envisioned creating plutonium in nuclear reactors where uranium-238 atoms absorbed neutrons that had been emitted from fissioning uranium-235 atoms. At this point no reactor had been built, and only tiny quantities of plutonium were available from cyclotrons at institutions such as Washington University in St. Louis. Even by December 1943, only two milligrams had been produced. There were many ways of arranging the fissile material into a critical mass. The simplest was shooting a "cylindrical plug" into a sphere of "active material" with a "tamper"—dense material that would focus neutrons inward and keep the reacting mass together to increase its efficiency. They also explored designs involving spheroids, a primitive form of "implosion" suggested by Richard C. Tolman, and the possibility of autocatalytic methods, which would increase the efficiency of the bomb as it exploded.
Considering the idea of the fission bomb theoretically settled—at least until more experimental data was available—the 1942 Berkeley conference then turned in a different direction. Edward Teller pushed for discussion of a more powerful bomb: the "super", now usually referred to as a "hydrogen bomb", which would use the explosive force of a detonating fission bomb to ignite a nuclear fusion reaction in deuterium and tritium. Teller proposed scheme after scheme, but Bethe refused each one. The fusion idea was put aside to concentrate on producing fission bombs. Teller also raised the speculative possibility that an atomic bomb might "ignite" the atmosphere because of a hypothetical fusion reaction of nitrogen nuclei. Bethe calculated that it could not happen, and a report co-authored by Teller showed that "no self-propagating chain of nuclear reactions is likely to be started." In Serber's account, Oppenheimer mentioned the possibility of this scenario to Arthur Compton, who "didn't have enough sense to shut up about it. It somehow got into a document that went to Washington" and was "never laid to rest".
The Chief of Engineers, Major General Eugene Reybold, selected Colonel James C. Marshall to head the Army's part of the project in June 1942. Marshall created a liaison office in Washington, D.C., but established his temporary headquarters on the 18th floor of 270 Broadway in New York, where he could draw on administrative support from the Corps of Engineers' North Atlantic Division. It was close to the Manhattan office of Stone & Webster, the principal project contractor, and to Columbia University. He had permission to draw on his former command, the Syracuse District, for staff, and he started with Lieutenant Colonel Kenneth Nichols, who became his deputy.
Because most of his task involved construction, Marshall worked in cooperation with the head of the Corps of Engineers Construction Division, Major General Thomas M. Robbins, and his deputy, Colonel Leslie Groves. Reybold, Somervell, and Styer decided to call the project "Development of Substitute Materials", but Groves felt that this would draw attention. Since engineer districts normally carried the name of the city where they were located, Marshall and Groves agreed to name the Army's component of the project the Manhattan District. This became official on 13 August, when Reybold issued the order creating the new district. Informally, it was known as the Manhattan Engineer District, or MED. Unlike other districts, it had no geographic boundaries, and Marshall had the authority of a division engineer. Development of Substitute Materials remained as the official codename of the project as a whole, but was supplanted over time by "Manhattan".
Marshall later conceded that, "I had never heard of atomic fission but I did know that you could not build much of a plant, much less four of them for $90 million." A single TNT plant that Nichols had recently built in Pennsylvania had cost $128 million. Nor were they impressed with estimates to the nearest order of magnitude, which Groves compared with telling a caterer to prepare for between ten and a thousand guests. A survey team from Stone & Webster had already scouted a site for the production plants. The War Production Board recommended sites around Knoxville, Tennessee, an isolated area where the Tennessee Valley Authority could supply ample electric power and the rivers could provide cooling water for the reactors. After examining several sites, the survey team selected one near Elza, Tennessee. Conant advised that it be acquired at once and Styer agreed but Marshall temporized, awaiting the results of Conant's reactor experiments before taking action. Of the prospective processes, only Lawrence's electromagnetic separation appeared sufficiently advanced for construction to commence.
Marshall and Nichols began assembling the resources they would need. The first step was to obtain a high priority rating for the project. The top ratings were AA-1 through AA-4 in descending order, although there was also a special AAA rating reserved for emergencies. Ratings AA-1 and AA-2 were for essential weapons and equipment, so Colonel Lucius D. Clay, the deputy chief of staff at Services and Supply for requirements and resources, felt that the highest rating he could assign was AA-3, although he was willing to provide a AAA rating on request for critical materials if the need arose. Nichols and Marshall were disappointed; AA-3 was the same priority as Nichols' TNT plant in Pennsylvania.
Vannevar Bush became dissatisfied with Colonel Marshall's failure to get the project moving forward expeditiously, specifically the failure to acquire the Tennessee site, the low priority allocated to the project by the Army and the location of his headquarters in New York City. Bush felt that more aggressive leadership was required, and spoke to Harvey Bundy and Generals Marshall, Somervell, and Styer about his concerns. He wanted the project placed under a senior policy committee, with a prestigious officer, preferably Styer, as overall director.
Somervell and Styer selected Groves for the post, informing him on 17 September of this decision, and that General Marshall ordered that he be promoted to brigadier general, as it was felt that the title "general" would hold more sway with the academic scientists working on the Manhattan Project. Groves' orders placed him directly under Somervell rather than Reybold, with Colonel Marshall now answerable to Groves. Groves established his headquarters in Washington, D.C., on the fifth floor of the New War Department Building, where Colonel Marshall had his liaison office. He assumed command of the Manhattan Project on 23 September 1942. Later that day, he attended a meeting called by Stimson, which established a Military Policy Committee, responsible to the Top Policy Group, consisting of Bush (with Conant as an alternate), Styer and Rear Admiral William R. Purnell. Tolman and Conant were later appointed as Groves' scientific advisers.
On 19 September, Groves went to Donald Nelson, the chairman of the War Production Board, and asked for broad authority to issue a AAA rating whenever it was required. Nelson initially balked but quickly caved in when Groves threatened to go to the President. Groves promised not to use the AAA rating unless it was necessary. It soon transpired that for the routine requirements of the project the AAA rating was too high but the AA-3 rating was too low. After a long campaign, Groves finally received AA-1 authority on 1 July 1944. According to Groves, "In Washington you became aware of the importance of top priority. Most everything proposed in the Roosevelt administration would have top priority. That would last for about a week or two and then something else would get top priority".
One of Groves' early problems was to find a director for Project Y, the group that would design and build the bomb. The obvious choice was one of the three laboratory heads, Urey, Lawrence, or Compton, but they could not be spared. Compton recommended Oppenheimer, who was already intimately familiar with the bomb design concepts. However, Oppenheimer had little administrative experience, and, unlike Urey, Lawrence, and Compton, had not won a Nobel Prize, which many scientists felt that the head of such an important laboratory should have. There were also concerns about Oppenheimer's security status, as many of his associates were Communists, including his brother, Frank Oppenheimer; his wife, Kitty; and his girlfriend, Jean Tatlock. A long conversation on a train in October 1942 convinced Groves and Nichols that Oppenheimer thoroughly understood the issues involved in setting up a laboratory in a remote area and should be appointed as its director. Groves personally waived the security requirements and issued Oppenheimer a clearance on 20 July 1943.
The British and Americans exchanged nuclear information but did not initially combine their efforts. Britain rebuffed attempts by Bush and Conant in 1941 to strengthen cooperation with its own project, codenamed Tube Alloys, because it was reluctant to share its technological lead and help the United States develop its own atomic bomb. An American scientist who brought a personal letter from Roosevelt to Churchill offering to pay for all research and development in an Anglo-American project was poorly treated, and Churchill did not reply to the letter. The United States as a result decided as early as April 1942 that if its offer was rejected, it should proceed alone. The British, who had made significant contributions early in the war, did not have the resources to carry through such a research program while fighting for their survival. As a result, Tube Alloys soon fell behind its American counterpart, and on 30 July 1942, Sir John Anderson, the minister responsible for Tube Alloys, advised Churchill that: "We must face the fact that ... [our] pioneering work ... is a dwindling asset and that, unless we capitalise it quickly, we shall be outstripped. We now have a real contribution to make to a 'merger.' Soon we shall have little or none." That month Churchill and Roosevelt made an informal, unwritten agreement for atomic collaboration.
The opportunity for an equal partnership no longer existed, however, as shown in August 1942 when the British unsuccessfully demanded substantial control over the project while paying none of the costs. By 1943 the roles of the two countries had reversed from late 1941; in January Conant notified the British that they would no longer receive atomic information except in certain areas. While the British were shocked by the abrogation of the Churchill-Roosevelt agreement, head of the Canadian National Research Council C. J. Mackenzie was less surprised, writing "I can't help feeling that the United Kingdom group [over] emphasizes the importance of their contribution as compared with the Americans." As Conant and Bush told the British, the order came "from the top".
The British bargaining position had worsened; the American scientists had decided that the United States no longer needed outside help, and they wanted to prevent Britain exploiting post-war commercial applications of atomic energy. The committee supported, and Roosevelt agreed to, restricting the flow of information to what Britain could use during the war—especially not bomb design—even if doing so slowed down the American project. By early 1943 the British stopped sending research and scientists to America, and as a result the Americans stopped all information sharing. The British considered ending the supply of Canadian uranium and heavy water to force the Americans to again share, but Canada needed American supplies to produce them. They investigated the possibility of an independent nuclear program, but determined that it could not be ready in time to affect the outcome of the war in Europe.
By March 1943 Conant decided that British help would benefit some areas of the project. James Chadwick and one or two other British scientists were important enough that the bomb design team at Los Alamos needed them, despite the risk of revealing weapon design secrets. In August 1943 Churchill and Roosevelt negotiated the Quebec Agreement, which resulted in a resumption of cooperation between scientists working on the same problem. Britain, however, agreed to restrictions on data on the building of large-scale production plants necessary for the bomb. The subsequent Hyde Park Agreement in September 1944 extended this cooperation to the postwar period. The Quebec Agreement established the Combined Policy Committee to coordinate the efforts of the United States, United Kingdom and Canada. Stimson, Bush and Conant served as the American members of the Combined Policy Committee, Field Marshal Sir John Dill and Colonel J. J. Llewellin were the British members, and C. D. Howe was the Canadian member. Llewellin returned to the United Kingdom at the end of 1943 and was replaced on the committee by Sir Ronald Ian Campbell, who in turn was replaced by the British Ambassador to the United States, Lord Halifax, in early 1945. Sir John Dill died in Washington, D.C., in November 1944 and was replaced both as Chief of the British Joint Staff Mission and as a member of the Combined Policy Committee by Field Marshal Sir Henry Maitland Wilson.
When cooperation resumed after the Quebec Agreement, the Americans' progress and expenditures amazed the British. The United States had already spent more than $1 billion, while in 1943 the United Kingdom had spent about £0.5 million. Chadwick thus pressed for British involvement in the Manhattan Project to the fullest extent and abandoned any hope of an independent British project during the war. With Churchill's backing, he attempted to ensure that every request from Groves for assistance was honored. The British Mission that arrived in the United States in December 1943 included Niels Bohr, Otto Frisch, Klaus Fuchs, Rudolf Peierls, and Ernest Titterton. More scientists arrived in early 1944. While those assigned to gaseous diffusion left by the fall of 1944, the 35 working with Lawrence at Berkeley were assigned to existing laboratory groups and stayed until the end of the war. The 19 sent to Los Alamos also joined existing groups, primarily related to implosion and bomb assembly, but not the plutonium-related ones. Part of the Quebec Agreement specified that nuclear weapons would not be used against another country without mutual consent. In June 1945, Wilson agreed that the use of nuclear weapons against Japan would be recorded as a decision of the Combined Policy Committee.
The Combined Policy Committee created the Combined Development Trust in June 1944, with Groves as its chairman, to procure uranium and thorium ores on international markets. The Belgian Congo and Canada held much of the world's uranium outside Eastern Europe, and the Belgian government in exile was in London. Britain agreed to give the United States most of the Belgian ore, as it could not use most of the supply without restricted American research. In 1944, the Trust purchased of uranium oxide ore from companies operating mines in the Belgian Congo. In order to avoid briefing US Secretary of the Treasury Henry Morgenthau Jr. on the project, a special account not subject to the usual auditing and controls was used to hold Trust monies. Between 1944 and the time he resigned from the Trust in 1947, Groves deposited a total of $37.5 million into the Trust's account.
Groves appreciated the early British atomic research and the British scientists' contributions to the Manhattan Project, but stated that the United States would have succeeded without them. He also said that Churchill was "the best friend the atomic bomb project had [as] he kept Roosevelt's interest up ... He just stirred him up all the time by telling him how important he thought the project was."
The British wartime participation was crucial to the success of the United Kingdom's independent nuclear weapons program after the war when the McMahon Act of 1946 temporarily ended American nuclear cooperation.
[Map: A selection of US and Canadian sites important to the Manhattan Project, marking Berkeley and Inyokern, California; Richland, Washington; Trail, British Columbia; Wendover and Monticello, Utah; Uravan, Colorado; Los Alamos and Alamogordo, New Mexico; Ames, Iowa; St. Louis, Missouri; Chicago, Illinois; Dana, Indiana; Dayton, Ohio; Sylacauga, Alabama; Morgantown, West Virginia; Oak Ridge, Tennessee; the Chalk River Laboratories, Ontario; Rochester, New York; and Washington, D.C.]
The day after he took over the project, Groves took a train to Tennessee with Colonel Marshall to inspect the proposed site there, and Groves was impressed. On 29 September 1942, United States Under Secretary of War Robert P. Patterson authorized the Corps of Engineers to acquire of land by eminent domain at a cost of $3.5 million. An additional was subsequently acquired. About 1,000 families were affected by the condemnation order, which came into effect on 7 October. Protests, legal appeals, and a 1943 Congressional inquiry were to no avail. By mid-November U.S. Marshals were tacking notices to vacate on farmhouse doors, and construction contractors were moving in. Some families were given two weeks' notice to vacate farms that had been their homes for generations; others had settled there after being evicted to make way for the Great Smoky Mountains National Park in the 1920s or the Norris Dam in the 1930s. The ultimate cost of land acquisition in the area, which was not completed until March 1945, was only about $2.6 million, which worked out to around $47 an acre. When presented with Public Proclamation Number Two, which declared Oak Ridge a total exclusion area that no one could enter without military permission, the Governor of Tennessee, Prentice Cooper, angrily tore it up.
Initially known as the Kingston Demolition Range, the site was officially renamed the Clinton Engineer Works (CEW) in early 1943. While Stone & Webster concentrated on the production facilities, the architectural and engineering firm Skidmore, Owings & Merrill designed and built a residential community for 13,000. The community was located on the slopes of Black Oak Ridge, from which the new town of Oak Ridge got its name. The Army presence at Oak Ridge increased in August 1943 when Nichols replaced Marshall as head of the Manhattan Engineer District. One of his first tasks was to move the district headquarters to Oak Ridge, although the name of the district did not change. In September 1943 the administration of community facilities was outsourced to Turner Construction Company through a subsidiary, the Roane-Anderson Company (for Roane and Anderson Counties, in which Oak Ridge was located). Chemical engineers, including William J. Wilcox Jr. and Warren Fuchs, were part of "frantic efforts" to make 10% to 12% enriched uranium-235, known by the code name "tuballoy tetroxide", with tight security and fast approvals for supplies and materials. The population of Oak Ridge soon expanded well beyond the initial plans, and peaked at 75,000 in May 1945, by which time 82,000 people were employed at the Clinton Engineer Works, and 10,000 by Roane-Anderson.
Fine-arts photographer Josephine Herrick and her colleague Mary Steers helped document the work at Oak Ridge.
The idea of locating Project Y at Oak Ridge was considered, but in the end it was decided that it should be in a remote location. On Oppenheimer's recommendation, the search for a suitable site was narrowed to the vicinity of Albuquerque, New Mexico, where Oppenheimer owned a ranch. In October 1942, Major John H. Dudley of the Manhattan Project was sent to survey the area, and he recommended a site near Jemez Springs, New Mexico. On 16 November, Oppenheimer, Groves, Dudley and others toured the site. Oppenheimer feared that the high cliffs surrounding the site would make his people feel claustrophobic, while the engineers were concerned with the possibility of flooding. The party then moved on to the vicinity of the Los Alamos Ranch School. Oppenheimer was impressed and expressed a strong preference for the site, citing its natural beauty and views of the Sangre de Cristo Mountains, which, it was hoped, would inspire those who would work on the project. The engineers were concerned about the poor access road, and whether the water supply would be adequate, but otherwise felt that it was ideal.
Patterson approved the acquisition of the site on 25 November 1942, authorizing $440,000 for the purchase of the site of , all but of which were already owned by the Federal Government. Secretary of Agriculture Claude R. Wickard granted use of some of United States Forest Service land to the War Department "for so long as the military necessity continues". The need for land, for a new road, and later for a right of way for a power line, eventually brought wartime land purchases to , but only $414,971 was spent. Construction was contracted to the M. M. Sundt Company of Tucson, Arizona, with Willard C. Kruger and Associates of Santa Fe, New Mexico, as architect and engineer. Work commenced in December 1942. Groves initially allocated $300,000 for construction, three times Oppenheimer's estimate, with a planned completion date of 15 March 1943. It soon became clear that the scope of Project Y was greater than expected, and by the time Sundt finished on 30 November 1943, over $7 million had been spent.
Because it was secret, Los Alamos was referred to as "Site Y" or "the Hill". Birth certificates of babies born in Los Alamos during the war listed their place of birth as PO Box 1663 in Santa Fe. Initially Los Alamos was to have been a military laboratory with Oppenheimer and other researchers commissioned into the Army. Oppenheimer went so far as to order himself a lieutenant colonel's uniform, but two key physicists, Robert Bacher and Isidor Rabi, balked at the idea. Conant, Groves and Oppenheimer then devised a compromise whereby the laboratory was operated by the University of California under contract to the War Department.
An Army-OSRD council on 25 June 1942 decided to build a pilot plant for plutonium production in Red Gate Woods southwest of Chicago. In July, Nichols arranged for a lease of from the Cook County Forest Preserve District, and Captain James F. Grafton was appointed Chicago area engineer. It soon became apparent that the scale of operations was too great for the area, and it was decided to build the plant at Oak Ridge, and keep a research and testing facility in Chicago.
Delays in establishing the plant in Red Gate Woods led Compton to authorize the Metallurgical Laboratory to construct the first nuclear reactor beneath the bleachers of Stagg Field at the University of Chicago. The reactor required an enormous number of graphite blocks and uranium pellets. At the time, there was only a limited source of pure uranium. Frank Spedding of Iowa State University was able to produce only two short tons of pure uranium. An additional three short tons of uranium metal were supplied by the Westinghouse Lamp Plant, produced in a rush with a makeshift process. A large square balloon was constructed by Goodyear Tire to encase the reactor. On 2 December 1942, a team led by Enrico Fermi initiated the first artificial self-sustaining nuclear chain reaction in an experimental reactor known as Chicago Pile-1. The point at which a reaction becomes self-sustaining became known as "going critical". Compton reported the success to Conant in Washington, D.C., by a coded phone call, saying, "The Italian navigator [Fermi] has just landed in the new world."
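By way of a general definition (standard reactor-physics terminology, not specific to this article), a chain-reacting assembly is characterized by its effective multiplication factor

$$ k_{\mathrm{eff}} = \frac{\text{number of neutrons produced in one generation}}{\text{number of neutrons produced in the preceding generation}}, $$

with $k_{\mathrm{eff}} < 1$ subcritical, $k_{\mathrm{eff}} = 1$ critical (the self-sustaining condition Chicago Pile-1 reached), and $k_{\mathrm{eff}} > 1$ supercritical.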
In January 1943, Grafton's successor, Major Arthur V. Peterson, ordered Chicago Pile-1 dismantled and reassembled at Red Gate Woods, as he regarded the operation of a reactor as too hazardous for a densely populated area. At the Argonne site, Chicago Pile-3, the first heavy water reactor, went critical on 15 May 1944. After the war, the operations that remained at Red Gate moved to the new site of the Argonne National Laboratory about away.
By December 1942 there were concerns that even Oak Ridge was too close to a major population center (Knoxville) in the unlikely event of a major nuclear accident. Groves recruited DuPont in November 1942 to be the prime contractor for the construction of the plutonium production complex. DuPont was offered a standard cost plus fixed-fee contract, but the President of the company, Walter S. Carpenter, Jr., wanted no profit of any kind, and asked for the proposed contract to be amended to explicitly exclude the company from acquiring any patent rights. This was accepted, but for legal reasons a nominal fee of one dollar was agreed upon. After the war, DuPont asked to be released from the contract early, and had to return 33 cents.
DuPont recommended that the site be located far from the existing uranium production facility at Oak Ridge. In December 1942, Groves dispatched Colonel Franklin Matthias and DuPont engineers to scout potential sites. Matthias reported that Hanford Site near Richland, Washington, was "ideal in virtually all respects". It was isolated and near the Columbia River, which could supply sufficient water to cool the reactors that would produce the plutonium. Groves visited the site in January and established the Hanford Engineer Works (HEW), codenamed "Site W".
Under Secretary Patterson gave his approval on 9 February, allocating $5 million for the acquisition of of land in the area. The federal government relocated some 1,500 residents of White Bluffs and Hanford, and nearby settlements, as well as the Wanapum and other tribes using the area. A dispute arose with farmers over compensation for crops, which had already been planted before the land was acquired. Where schedules allowed, the Army allowed the crops to be harvested, but this was not always possible. The land acquisition process dragged on and was not completed before the end of the Manhattan Project in December 1946.
The dispute did not delay work. Although progress on the reactor design at Metallurgical Laboratory and DuPont was not sufficiently advanced to accurately predict the scope of the project, a start was made in April 1943 on facilities for an estimated 25,000 workers, half of whom were expected to live on-site. By July 1944, some 1,200 buildings had been erected and nearly 51,000 people were living in the construction camp. As area engineer, Matthias exercised overall control of the site. At its peak, the construction camp was the third most populous town in Washington state. Hanford operated a fleet of over 900 buses, more than the city of Chicago. Like Los Alamos and Oak Ridge, Richland was a gated community with restricted access, but it looked more like a typical wartime American boomtown: the military profile was lower, and physical security elements like high fences, towers, and guard dogs were less evident.
Cominco had produced electrolytic hydrogen at Trail, British Columbia, since 1930. Urey suggested in 1941 that it could produce heavy water. To the existing $10 million plant consisting of 3,215 cells consuming 75 MW of hydroelectric power, secondary electrolysis cells were added to increase the deuterium concentration in the water from 2.3% to 99.8%. For this process, Hugh Taylor of Princeton developed a platinum-on-carbon catalyst for the first three stages while Urey developed a nickel-chromia one for the fourth stage tower. The final cost was $2.8 million. The Canadian Government did not officially learn of the project until August 1942. Trail's heavy water production started in January 1944 and continued until 1956. Heavy water from Trail was used for Chicago Pile 3, the first reactor using heavy water and natural uranium, which went critical on 15 May 1944.
The Chalk River, Ontario, site was established to rehouse the Allied effort at the Montreal Laboratory away from an urban area. A new community was built at Deep River, Ontario, to provide residences and facilities for the team members. The site was chosen for its proximity to the industrial manufacturing area of Ontario and Quebec, and to a rail head adjacent to a large military base, Camp Petawawa. Located on the Ottawa River, it had access to abundant water. The first director of the new laboratory was Hans von Halban. He was replaced by John Cockcroft in May 1944, who in turn was succeeded by Bennett Lewis in September 1946. A pilot reactor known as ZEEP (zero-energy experimental pile) became the first Canadian reactor, and the first to be completed outside the United States, when it went critical in September 1945. ZEEP remained in use by researchers until 1970. A larger 10 MW NRX reactor, which was designed during the war, was completed and went critical in July 1947.
The Eldorado Mine at Port Radium was a source of uranium ore.
Although DuPont's preferred designs for the nuclear reactors were helium cooled and used graphite as a moderator, DuPont still expressed an interest in using heavy water as a backup, in case the graphite reactor design proved infeasible for some reason. For this purpose, it was estimated that of heavy water would be required per month. The "P-9 Project" was the government's code name for the heavy water production program. As the plant at Trail, which was then under construction, could produce per month, additional capacity was required. Groves therefore authorized DuPont to establish heavy water facilities at the Morgantown Ordnance Works, near Morgantown, West Virginia; at the Wabash River Ordnance Works, near Dana and Newport, Indiana; and at the Alabama Ordnance Works, near Childersburg and Sylacauga, Alabama. Although known as Ordnance Works and paid for under Ordnance Department contracts, they were built and operated by the Army Corps of Engineers. The American plants used a process different from Trail's; heavy water was extracted by distillation, taking advantage of the slightly higher boiling point of heavy water.
The key raw material for the project was uranium, which was used as fuel for the reactors, as feed that was transformed into plutonium, and, in its enriched form, in the atomic bomb itself. There were four known major deposits of uranium in 1940: in Colorado, in northern Canada, in Joachimsthal in Czechoslovakia, and in the Belgian Congo. All but Joachimsthal were in Allied hands. A November 1942 survey determined that sufficient quantities of uranium were available to satisfy the project's requirements. Nichols arranged with the State Department for export controls to be placed on uranium oxide and negotiated for the purchase of of uranium ore from the Belgian Congo that was being stored in a warehouse on Staten Island and the remaining stocks of mined ore stored in the Congo. He negotiated with Eldorado Gold Mines for the purchase of ore from its refinery in Port Hope, Ontario, and its shipment in 100-ton lots. The Canadian government subsequently bought up the company's stock until it acquired a controlling interest.
While these purchases assured a sufficient supply to meet wartime needs, the American and British leaders concluded that it was in their countries' interest to gain control of as much of the world's uranium deposits as possible. The richest source of ore was the Shinkolobwe mine in the Belgian Congo, but it was flooded and closed. Nichols unsuccessfully attempted to negotiate its reopening and the sale of the entire future output to the United States with Edgar Sengier, the director of the company that owned the mine, Union Minière du Haut Katanga. The matter was then taken up by the Combined Policy Committee. As 30 percent of Union Minière's stock was controlled by British interests, the British took the lead in negotiations. Sir John Anderson and Ambassador John Winant hammered out a deal with Sengier and the Belgian government in May 1944 for the mine to be reopened and of ore to be purchased at $1.45 a pound. To avoid dependence on the British and Canadians for ore, Groves also arranged for the purchase of US Vanadium Corporation's stockpile in Uravan, Colorado. Uranium mining in Colorado yielded about of ore.
Mallinckrodt Incorporated in St. Louis, Missouri, took the raw ore and dissolved it in nitric acid to produce uranyl nitrate. Ether was then added in a liquid–liquid extraction process to separate the impurities from the uranyl nitrate. This was then heated to form uranium trioxide, which was reduced to highly pure uranium dioxide. By July 1942, Mallinckrodt was producing a ton of highly pure oxide a day, but turning this into uranium metal initially proved more difficult for contractors Westinghouse and Metal Hydrides. Production was too slow and quality was unacceptably low. A special branch of the Metallurgical Laboratory was established at Iowa State College in Ames, Iowa, under Frank Spedding to investigate alternatives. This became known as the Ames Project, and its Ames process became available in 1943.
Natural uranium consists of 99.3% uranium-238 and 0.7% uranium-235, but only the latter is fissile. The chemically identical uranium-235 has to be physically separated from the more plentiful isotope. Various methods were considered for uranium enrichment, most of this work being carried out at Oak Ridge.
The most obvious technology, the centrifuge, failed, but electromagnetic separation, gaseous diffusion, and thermal diffusion technologies were all successful and contributed to the project. In February 1943, Groves came up with the idea of using the output of some plants as the input for others.
The centrifuge process was regarded as the only promising separation method in April 1942. Jesse Beams had developed such a process at the University of Virginia during the 1930s, but had encountered technical difficulties. The process required high rotational speeds, but at certain speeds harmonic vibrations developed that threatened to tear the machinery apart. It was therefore necessary to accelerate quickly through these speeds. In 1941 he began working with uranium hexafluoride, the only known gaseous compound of uranium, and was able to separate uranium-235. At Columbia, Urey had Karl Cohen investigate the process, and he produced a body of mathematical theory making it possible to design a centrifugal separation unit, which Westinghouse undertook to construct.
Scaling this up to a production plant presented a formidable technical challenge. Urey and Cohen estimated that producing a kilogram (2.2 lb) of uranium-235 per day would require up to 50,000 centrifuges with short rotors, or 10,000 centrifuges with 4-meter rotors, assuming that rotors of that length could be built. The prospect of keeping so many rotors operating continuously at high speed appeared daunting, and when Beams ran his experimental apparatus, he obtained only 60% of the predicted yield, indicating that more centrifuges would be required. Beams, Urey and Cohen then began work on a series of improvements which promised to increase the efficiency of the process. However, frequent failures of motors, shafts and bearings at high speeds delayed work on the pilot plant. In November 1942 the centrifuge process was abandoned by the Military Policy Committee following a recommendation by Conant, Nichols and August C. Klein of Stone & Webster.
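As a point of reference (a textbook relation, not a figure from the project), the equilibrium separation factor of a single gas centrifuge between its axis and rotor wall is

$$ \alpha_0 = \exp\!\left(\frac{\Delta M\, v^2}{2RT}\right), $$

where $\Delta M \approx 0.003\ \mathrm{kg/mol}$ is the molar-mass difference between the two uranium hexafluoride species, $v$ the peripheral speed of the rotor, $R$ the gas constant, and $T$ the gas temperature. At rotor speeds attainable in the early 1940s this factor is only slightly greater than one, which is why the analyses by Urey and Cohen called for cascades of tens of thousands of machines.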
Although the centrifuge method was abandoned by the Manhattan Project, research into it advanced significantly after the war with the introduction of the Zippe-type centrifuge, which was developed in the Soviet Union by Soviet and captured German engineers. It eventually became the preferred method of uranium isotope separation, being far more economical than the other separation methods used during World War II.
Electromagnetic isotope separation was developed by Lawrence at the University of California Radiation Laboratory. This method employed devices known as calutrons, a hybrid of the standard laboratory mass spectrometer and the cyclotron magnet. The name was derived from the words "California", "university" and "cyclotron". In the electromagnetic process, a magnetic field deflected charged particles according to mass. The process was neither scientifically elegant nor industrially efficient. Compared with a gaseous diffusion plant or a nuclear reactor, an electromagnetic separation plant would consume more scarce materials, require more manpower to operate, and cost more to build. Nonetheless, the process was approved because it was based on proven technology and therefore represented less risk. Moreover, it could be built in stages, and rapidly reach industrial capacity.
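To make the deflection argument concrete (a standard mass-spectrometer relation, not a calutron design figure), a singly charged ion accelerated through a potential $V$ and bent by a uniform magnetic field $B$ follows a circular path of radius

$$ r = \frac{1}{B}\sqrt{\frac{2mV}{q}}, $$

so the radii traced by $^{235}\mathrm{U}^{+}$ and $^{238}\mathrm{U}^{+}$ ions differ only by a factor of about $\sqrt{238/235} \approx 1.006$; the collectors therefore had to resolve beams separated by well under one percent of their radius.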
Marshall and Nichols discovered that the electromagnetic isotope separation process would require of copper, which was in desperately short supply. However, silver could be substituted, in an 11:10 ratio. On 3 August 1942, Nichols met with Under Secretary of the Treasury Daniel W. Bell and asked for the transfer of 6,000 tons of silver bullion from the West Point Bullion Depository. "Young man," Bell told him, "you may think of silver in tons but the Treasury will always think of silver in troy ounces!" Eventually, were used.
The silver bars were cast into cylindrical billets and taken to Phelps Dodge in Bayway, New Jersey, where they were extruded into strips thick, wide and long. These were wound onto magnetic coils by Allis-Chalmers in Milwaukee, Wisconsin. After the war, all the machinery was dismantled and cleaned and the floorboards beneath the machinery were ripped up and burned to recover minute amounts of silver. In the end, only 1/3,600,000th was lost. The last silver was returned in May 1970.
Responsibility for the design and construction of the electromagnetic separation plant, which came to be called Y-12, was assigned to Stone & Webster by the S-1 Committee in June 1942. The design called for five first-stage processing units, known as Alpha racetracks, and two units for final processing, known as Beta racetracks. In September 1943 Groves authorized construction of four more racetracks, known as Alpha II. Construction began in February 1943.
When the plant was started up for testing on schedule in October, the 14-ton vacuum tanks crept out of alignment because of the power of the magnets, and had to be fastened more securely. A more serious problem arose when the magnetic coils started shorting out. In December Groves ordered a magnet to be broken open, and handfuls of rust were found inside. Groves then ordered the racetracks to be torn down and the magnets sent back to the factory to be cleaned. A pickling plant was established on-site to clean the pipes and fittings. The second Alpha I was not operational until the end of January 1944; the first Beta and the first and third Alpha Is came online in March, and the fourth Alpha I was operational in April. The four Alpha II racetracks were completed between July and October 1944.
Tennessee Eastman was contracted to manage Y-12 on the usual cost plus fixed-fee basis, with a fee of $22,500 per month plus $7,500 per racetrack for the first seven racetracks and $4,000 per additional racetrack. The calutrons were initially operated by scientists from Berkeley to remove bugs and achieve a reasonable operating rate. They were then turned over to trained Tennessee Eastman operators who had only a high school education. Nichols compared unit production data, and pointed out to Lawrence that the young "hillbilly" girl operators were outperforming his PhDs. They agreed to a production race and Lawrence lost, a morale boost for the Tennessee Eastman workers and supervisors. The girls were "trained like soldiers not to reason why", while "the scientists could not refrain from time-consuming investigation of the cause of even minor fluctuations of the dials."
Y-12 initially enriched the uranium-235 content to between 13% and 15%, and shipped the first few hundred grams of this to Los Alamos in March 1944. Only 1 part in 5,825 of the uranium feed emerged as final product. Much of the rest was splattered over equipment in the process. Strenuous recovery efforts helped raise production to 10% of the uranium-235 feed by January 1945. In February the Alpha racetracks began receiving slightly enriched (1.4%) feed from the new S-50 thermal diffusion plant. The next month it received enhanced (5%) feed from the K-25 gaseous diffusion plant. By August K-25 was producing uranium sufficiently enriched to feed directly into the Beta tracks.
The most promising but also the most challenging method of isotope separation was gaseous diffusion. Graham's law states that the rate of effusion of a gas is inversely proportional to the square root of its molecular mass, so in a box containing a semi-permeable membrane and a mixture of two gases, the lighter molecules will pass out of the container more rapidly than the heavier molecules. The gas leaving the container is somewhat enriched in the lighter molecules, while the residual gas is somewhat depleted. The idea was that such boxes could be formed into a cascade of pumps and membranes, with each successive stage containing a slightly more enriched mixture. Research into the process was carried out at Columbia University by a group that included Harold Urey, Karl P. Cohen, and John R. Dunning.
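The scale of the engineering challenge follows directly from Graham's law. A back-of-the-envelope sketch (using standard atomic masses and illustrative enrichment levels that are not taken from the text) shows why thousands of stages were required:

```python
# Back-of-the-envelope sketch of an ideal gaseous-diffusion cascade implied by
# Graham's law. Atomic masses are standard values; the 0.711% natural and 90%
# product assays are illustrative round numbers, not figures from the text.
import math

m_235 = 235.04 + 6 * 18.998   # molecular mass of 235UF6, approx. 349.0
m_238 = 238.05 + 6 * 18.998   # molecular mass of 238UF6, approx. 352.0

alpha = math.sqrt(m_238 / m_235)          # ideal single-stage separation factor
print(f"ideal separation factor per stage: {alpha:.5f}")   # ~1.0043

# Ideal number of stages needed to raise the abundance ratio R = x / (1 - x)
# from natural uranium to highly enriched material:
R_feed = 0.00711 / (1 - 0.00711)
R_product = 0.90 / (1 - 0.90)
stages = math.log(R_product / R_feed) / math.log(alpha)
print(f"ideal stages needed: {stages:.0f}")   # on the order of a couple of thousand
```

A real plant needs even more stages than this ideal estimate, since each physical stage falls short of the theoretical separation factor, which is consistent with the thousands of stages described below.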
In November 1942 the Military Policy Committee approved the construction of a 600-stage gaseous diffusion plant. On 14 December, M. W. Kellogg accepted an offer to construct the plant, which was codenamed K-25. A cost plus fixed-fee contract was negotiated, eventually totaling $2.5 million. A separate corporate entity called Kellex was created for the project, headed by Percival C. Keith, one of Kellogg's vice presidents. The process faced formidable technical difficulties. The highly corrosive gas uranium hexafluoride would have to be used, as no substitute could be found, and the motors and pumps would have to be vacuum tight and enclosed in inert gas. The biggest problem was the design of the barrier, which would have to be strong, porous and resistant to corrosion by uranium hexafluoride. The best choice for this seemed to be nickel. Edward Adler and Edward Norris created a mesh barrier from electroplated nickel. A six-stage pilot plant was built at Columbia to test the process, but the Norris-Adler prototype proved to be too brittle. A rival barrier was developed from powdered nickel by Kellex, the Bell Telephone Laboratories and the Bakelite Corporation. In January 1944, Groves ordered the Kellex barrier into production.
Kellex's design for K-25 called for a four-story U-shaped structure containing 54 contiguous buildings. These were divided into nine sections, within which were cells of six stages each. The cells could be operated independently, or consecutively within a section. Similarly, the sections could be operated separately or as part of a single cascade. A survey party began work by marking out the site in May 1943. Work on the main building began in October 1943, and the six-stage pilot plant was ready for operation on 17 April 1944. In 1945 Groves canceled the upper stages of the plant, directing Kellex to instead design and build a 540-stage side feed unit, which became known as K-27. Kellex transferred the last unit to the operating contractor, Union Carbide and Carbon, on 11 September 1945. The total cost, including the K-27 plant completed after the war, came to $480 million.
The production plant commenced operation in February 1945, and as cascade after cascade came online, the quality of the product increased. By April 1945, K-25 had attained a 1.1% enrichment and the output of the S-50 thermal diffusion plant began being used as feed. Some product produced the next month reached nearly 7% enrichment. In August, the last of the 2,892 stages commenced operation. K-25 and K-27 achieved their full potential in the early postwar period, when they eclipsed the other production plants and became the prototypes for a new generation of plants.
The thermal diffusion process was based on Sydney Chapman and David Enskog's theory, which explained that when a mixture of two gases passes through a temperature gradient, the heavier gas tends to concentrate at the cold end and the lighter one at the warm end. Since hot gases tend to rise and cool ones tend to fall, this can be used as a means of isotope separation. The process was first demonstrated by Klaus Clusius and Gerhard Dickel in Germany in 1938. It was developed by US Navy scientists, but was not one of the enrichment technologies initially selected for use in the Manhattan Project, primarily because of doubts about its technical feasibility, although inter-service rivalry between the Army and Navy also played a part.
The Naval Research Laboratory continued the research under Philip Abelson's direction, but there was little contact with the Manhattan Project until April 1944, when Captain William S. Parsons, the naval officer in charge of ordnance development at Los Alamos, brought Oppenheimer news of encouraging progress in the Navy's experiments on thermal diffusion. Oppenheimer wrote to Groves suggesting that the output of a thermal diffusion plant could be fed into Y-12. Groves set up a committee consisting of Warren K. Lewis, Eger Murphree and Richard Tolman to investigate the idea, and they estimated that a thermal diffusion plant costing $3.5 million could enrich a substantial quantity of uranium per week to nearly 0.9% uranium-235. Groves approved its construction on 24 June 1944.
Groves contracted with the H. K. Ferguson Company of Cleveland, Ohio, to build the thermal diffusion plant, which was designated S-50. Groves's advisers, Karl Cohen and W. I. Thompson from Standard Oil, estimated that it would take six months to build. Groves gave Ferguson just four. Plans called for the installation of 2,142 diffusion columns arranged in 21 racks. Inside each column were three concentric tubes. High-pressure steam, obtained from the nearby K-25 powerhouse, flowed downward through the innermost nickel pipe, while cooling water flowed upward through the outermost iron pipe. The uranium hexafluoride flowed in the middle copper pipe, and isotope separation of the uranium occurred between the nickel and copper pipes.
Work commenced on 9 July 1944, and S-50 began partial operation in September. Ferguson operated the plant through a subsidiary known as Fercleve. The plant produced only a small quantity of 0.852% uranium-235 in October. Leaks limited production and forced shutdowns over the next few months, but output rose steadily, and by March 1945 all 21 production racks were operating. Initially the output of S-50 was fed into Y-12, but starting in March 1945 all three enrichment processes were run in series. S-50 became the first stage, enriching from 0.71% to 0.89%. This material was fed into the gaseous diffusion process in the K-25 plant, which produced a product enriched to about 23%. This was, in turn, fed into Y-12, which boosted it to about 89%, sufficient for nuclear weapons.
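A simple material balance illustrates why so much natural uranium feed was needed for each kilogram of weapons-grade product; the feed and product assays are from the passage above, while the tails assay is an assumed illustrative value:

```python
# Minimal material-balance sketch for an idealized enrichment plant. The feed
# and product assays come from the passage above; the tails assay is an
# assumed illustrative value, not a figure from the text.
def feed_per_unit_product(x_product: float, x_feed: float, x_tails: float) -> float:
    """Feed mass required per unit mass of product, from the balances
    F = P + W and F * x_feed = P * x_product + W * x_tails."""
    return (x_product - x_tails) / (x_feed - x_tails)

# Going from natural uranium (0.711%) to ~89% product with assumed 0.25% tails:
print(f"{feed_per_unit_product(0.89, 0.00711, 0.0025):.0f} kg of feed per kg of product")
# roughly 190 kg of natural uranium per kilogram of weapons-grade material
```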
About 50 kg of uranium enriched to 89% uranium-235 was delivered to Los Alamos by July 1945. This entire 50 kg, along with additional material enriched to about 50%, averaging out to about 85% enrichment, was used in Little Boy.
The second line of development pursued by the Manhattan Project used the fissile element plutonium. Although small amounts of plutonium exist in nature, the best way to obtain large quantities of the element is in a nuclear reactor, in which natural uranium is bombarded by neutrons. The uranium-238 is transmuted into uranium-239, which rapidly decays, first into neptunium-239 and then into plutonium-239. Only a small amount of the uranium-238 will be transformed, so the plutonium must be chemically separated from the remaining uranium, from any initial impurities, and from fission products.
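The breeding chain described above can be sketched with simple exponential decay; the half-lives used here (roughly 23.5 minutes for uranium-239 and 2.36 days for neptunium-239) are standard reference values rather than figures from the text:

```python
# Sketch of the uranium-239 -> neptunium-239 -> plutonium-239 chain described
# above, using simple exponential decay. The half-lives (about 23.5 minutes
# and 2.36 days) are standard reference values, not figures from the text.
import math

HALF_LIFE_U239_MIN = 23.45
HALF_LIFE_NP239_MIN = 2.356 * 24 * 60

lam_u = math.log(2) / HALF_LIFE_U239_MIN
lam_np = math.log(2) / HALF_LIFE_NP239_MIN

def fraction_remaining(decay_constant: float, minutes: float) -> float:
    """Fraction of an isotope remaining after the given time."""
    return math.exp(-decay_constant * minutes)

one_day = 24 * 60
print(f"U-239 remaining after one day:  {fraction_remaining(lam_u, one_day):.1e}")
print(f"Np-239 remaining after one day: {fraction_remaining(lam_np, one_day):.2f}")
# The uranium-239 is gone within hours, but plutonium builds up over several
# days as the longer-lived neptunium-239 decays.
```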
In March 1943, DuPont began construction of a plutonium plant on a site at Oak Ridge. Intended as a pilot plant for the larger production facilities at Hanford, it included the air-cooled X-10 Graphite Reactor, a chemical separation plant, and support facilities. The X-10 Graphite Reactor consisted of a huge block of graphite surrounded by thick walls of high-density concrete that served as a radiation shield. Because of the subsequent decision to construct water-cooled reactors at Hanford, only the chemical separation plant operated as a true pilot.
The greatest difficulty was encountered with the uranium slugs produced by Mallinckrodt and Metal Hydrides. These somehow had to be coated in aluminum to avoid corrosion and the escape of fission products into the cooling system. The Grasselli Chemical Company attempted to develop a hot dipping process without success. Meanwhile, Alcoa tried canning. A new process for flux-less welding was developed, and 97% of the cans passed a standard vacuum test, but high temperature tests indicated a failure rate of more than 50%. Nonetheless, production began in June 1943. The Metallurgical Laboratory eventually developed an improved welding technique with the help of General Electric, which was incorporated into the production process in October 1943.
Watched by Fermi and Compton, the X-10 Graphite Reactor went critical on 4 November 1943 with its initial load of uranium. A week later the load was increased, raising its power generation to 500 kW, and by the end of the month the first 500 mg of plutonium was created. Modifications over time raised the power to 4,000 kW in July 1944. X-10 operated as a production plant until January 1945, when it was turned over to research activities.
Although an air-cooled design was chosen for the reactor at Oak Ridge to facilitate rapid construction, it was recognized that this would be impractical for the much larger production reactors. Initial designs by the Metallurgical Laboratory and DuPont used helium for cooling, before they determined that a water-cooled reactor would be simpler, cheaper and quicker to build. The design did not become available until 4 October 1943; in the meantime, Matthias concentrated on improving the Hanford Site by erecting accommodations, improving the roads, building a railway switch line, and upgrading the electricity, water and telephone lines.
As at Oak Ridge, the most difficulty was encountered while canning the uranium slugs, which commenced at Hanford in March 1944. They were pickled to remove dirt and impurities, dipped in molten bronze, tin, and aluminum-silicon alloy, canned using hydraulic presses, and then capped using arc welding under an argon atmosphere. Finally, they were subjected to a series of tests to detect holes or faulty welds. Disappointingly, most canned slugs initially failed the tests, resulting in an output of only a handful of canned slugs per day. But steady progress was made and by June 1944 production increased to the point where it appeared that enough canned slugs would be available to start Reactor B on schedule in August 1944.
Work began on Reactor B, the first of six planned 250 MW reactors, on 10 October 1943. The reactor complexes were given letter designations A through F, with the B, D and F sites chosen to be developed first, as this maximized the distance between the reactors; they would be the only ones constructed during the Manhattan Project. Large quantities of steel and concrete, along with 50,000 concrete blocks and 71,000 concrete bricks, were used to construct the towering reactor building.
Construction of the reactor itself commenced in February 1944. Watched by Compton, Matthias, DuPont's Crawford Greenewalt, Leona Woods and Fermi, who inserted the first slug, the reactor was powered up beginning on 13 September 1944. Over the next few days, 838 tubes were loaded and the reactor went critical. Shortly after midnight on 27 September, the operators began to withdraw the control rods to initiate production. At first all appeared well but around 03:00 the power level started to drop and by 06:30 the reactor had shut down completely. The cooling water was investigated to see if there was a leak or contamination. The next day the reactor started up again, only to shut down once more.
Fermi contacted Chien-Shiung Wu, who identified the cause of the problem as neutron poisoning from xenon-135, which has a half-life of 9.2 hours. Fermi, Woods, Donald J. Hughes and John Archibald Wheeler then calculated the nuclear cross section of xenon-135, which turned out to be 30,000 times that of uranium. DuPont engineer George Graves had deviated from the Metallurgical Laboratory's original design in which the reactor had 1,500 tubes arranged in a circle, and had added an additional 504 tubes to fill in the corners. The scientists had originally considered this overengineering a waste of time and money, but Fermi realized that by loading all 2,004 tubes, the reactor could reach the required power level and efficiently produce plutonium. Reactor D was started on 17 December 1944 and Reactor F on 25 February 1945.
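The 9.2-hour half-life quoted above also explains why the reactor could be restarted the next day: once shut down, the accumulated xenon-135 decays away within roughly a day. The sketch below ignores the continued ingrowth of xenon-135 from iodine-135 decay, so it is only a first approximation:

```python
# First-approximation sketch of why the shut-down reactor recovered: the
# xenon-135 poison decays with the 9.2-hour half-life quoted above. This
# ignores continued ingrowth of xenon-135 from iodine-135 decay.
import math

XENON_HALF_LIFE_HOURS = 9.2
decay_constant = math.log(2) / XENON_HALF_LIFE_HOURS

for hours in (0, 6, 12, 24, 48):
    remaining = math.exp(-decay_constant * hours)
    print(f"{hours:2d} h after shutdown: {remaining:6.1%} of the xenon-135 remains")
```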
Meanwhile, the chemists considered the problem of how plutonium could be separated from uranium when its chemical properties were not known. Working with the minute quantities of plutonium available at the Metallurgical Laboratory in 1942, a team under Charles M. Cooper developed a lanthanum fluoride process for separating uranium and plutonium, which was chosen for the pilot separation plant. A second separation process, the bismuth phosphate process, was subsequently developed by Seaborg and Stanley G. Thompson. This process worked by toggling plutonium between its +4 and +6 oxidation states in solutions of bismuth phosphate. In the former state, the plutonium was precipitated; in the latter, it stayed in solution and the other products were precipitated.
Greenewalt favored the bismuth phosphate process due to the corrosive nature of lanthanum fluoride, and it was selected for the Hanford separation plants. Once X-10 began producing plutonium, the pilot separation plant was put to the test. The first batch was processed at 40% efficiency but over the next few months this was raised to 90%.
At Hanford, top priority was initially given to the installations in the 300 area. This contained buildings for testing materials, preparing uranium, and assembling and calibrating instrumentation. One of the buildings housed the canning equipment for the uranium slugs, while another contained a small test reactor. Notwithstanding the high priority allocated to it, work on the 300 area fell behind schedule due to the unique and complex nature of its facilities and wartime shortages of labor and materials.
Early plans called for the construction of two separation plants in each of the areas known as 200-West and 200-East. This was subsequently reduced to three: the T and U plants in 200-West and the B plant in 200-East. Each separation plant consisted of four buildings: a process cell building or "canyon" (known as 221), a concentration building (224), a purification building (231) and a magazine store (213). The canyons were enormous, elongated concrete structures, each consisting of forty cells.
Work began on 221-T and 221-U in January 1944, with the former completed in September and the latter in December. The 221-B building followed in March 1945. Because of the high levels of radioactivity involved, all work in the separation plants had to be conducted by remote control using closed-circuit television, something unheard of in 1943. Maintenance was carried out with the aid of an overhead crane and specially designed tools. The 224 buildings were smaller because they had less material to process, and it was less radioactive. The 224-T and 224-U buildings were completed on 8 October 1944, and 224-B followed on 10 February 1945. The purification methods that were eventually used in 231-W were still unknown when construction commenced on 8 April 1944, but the plant was complete and the methods were selected by the end of the year. On 5 February 1945, Matthias hand-delivered the first shipment of 80 g of 95%-pure plutonium nitrate to a Los Alamos courier in Los Angeles.
In 1943, development efforts were directed to a gun-type fission weapon with plutonium called Thin Man. Initial research on the properties of plutonium was done using cyclotron-generated plutonium-239, which was extremely pure, but could only be created in very small amounts. Los Alamos received the first sample of plutonium from the Clinton X-10 reactor in April 1944 and within days Emilio Segrè discovered a problem: the reactor-bred plutonium had a higher concentration of plutonium-240, resulting in up to five times the spontaneous fission rate of cyclotron plutonium. Seaborg had correctly predicted in March 1943 that some of the plutonium-239 would absorb a neutron and become plutonium-240.
This made reactor plutonium unsuitable for use in a gun-type weapon. The plutonium-240 would start the chain reaction too quickly, causing a predetonation that would release enough energy to disperse the critical mass with a minimal amount of plutonium reacted (a fizzle). A faster gun was suggested but found to be impractical. The possibility of separating the isotopes was considered and rejected, as plutonium-240 is even harder to separate from plutonium-239 than uranium-235 from uranium-238.
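The fizzle problem can be made concrete with a simple probability estimate. If spontaneous fissions supply stray neutrons at a rate R during the assembly time t, the chance of completing assembly without a premature start is roughly exp(−Rt). The assembly time and neutron rates below are assumed, order-of-magnitude values for illustration only, not figures from the project records:

```python
# Illustrative estimate of predetonation risk in a gun-type weapon. If stray
# neutrons from spontaneous fission appear at rate R during the assembly time
# t, the chance of finishing assembly cleanly is about exp(-R*t). Both the
# assembly time and the neutron rates below are assumed, order-of-magnitude
# values for illustration only.
import math

def prob_clean_assembly(neutron_rate_per_s: float, assembly_time_s: float) -> float:
    """Probability that no stray neutron appears during assembly."""
    return math.exp(-neutron_rate_per_s * assembly_time_s)

assembly_time = 1e-3                 # ~1 millisecond (assumed)
for rate in (1e2, 1e4, 1e6):         # assumed spontaneous-neutron rates, per second
    p = prob_clean_assembly(rate, assembly_time)
    print(f"neutron rate {rate:8.0e}/s -> P(no predetonation) ~ {p:.3f}")
```

Even a modest increase in the spontaneous-neutron rate drives the probability of a clean assembly toward zero, which is why the higher plutonium-240 content ruled out the relatively slow gun method.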
Work on an alternative method of bomb design, known as implosion, had begun earlier under the direction of the physicist Seth Neddermeyer. Implosion used explosives to crush a subcritical sphere of fissile material into a smaller and denser form. When the fissile atoms are packed closer together, the rate of neutron capture increases, and the mass becomes a critical mass. The metal needs to travel only a very short distance, so the critical mass is assembled in much less time than it would take with the gun method. Neddermeyer's 1943 and early 1944 investigations into implosion showed promise, but also made it clear that the problem would be much more difficult from a theoretical and engineering perspective than the gun design. In September 1943, John von Neumann, who had experience with shaped charges used in armor-piercing shells, argued that not only would implosion reduce the danger of predetonation and fizzle, but would make more efficient use of the fissionable material. He proposed using a spherical configuration instead of the cylindrical one that Neddermeyer was working on.
By July 1944, Oppenheimer had concluded plutonium could not be used in a gun design, and opted for implosion. The accelerated effort on an implosion design, codenamed Fat Man, began in August 1944 when Oppenheimer implemented a sweeping reorganization of the Los Alamos laboratory to focus on implosion. Two new groups were created at Los Alamos to develop the implosion weapon, X (for explosives) Division headed by explosives expert George Kistiakowsky and G (for gadget) Division under Robert Bacher. The new design that von Neumann and T (for theoretical) Division, most notably Rudolf Peierls, had devised used explosive lenses to focus the explosion onto a spherical shape using a combination of both slow and fast high explosives.
The design of lenses that detonated with the proper shape and velocity turned out to be slow, difficult and frustrating. Various explosives were tested before settling on composition B as the fast explosive and baratol as the slow explosive. The final design resembled a soccer ball, with 20 hexagonal and 12 pentagonal lenses. Getting the detonation just right required fast, reliable and safe electrical detonators, of which there were two per lens for reliability. It was therefore decided to use exploding-bridgewire detonators, a new invention developed at Los Alamos by a group led by Luis Alvarez. A contract for their manufacture was given to Raytheon.
To study the behavior of converging shock waves, Robert Serber devised the RaLa Experiment, which used the short-lived radioisotope lanthanum-140, a potent source of gamma radiation. The gamma ray source was placed in the center of a metal sphere surrounded by the explosive lenses, which in turn were inside an ionization chamber. This allowed the taking of an X-ray movie of the implosion. The lenses were designed primarily using this series of tests. In his history of the Los Alamos project, David Hawkins wrote: "RaLa became the most important single experiment affecting the final bomb design".
Within the explosives was a thick aluminum pusher, which provided a smooth transition from the relatively low-density explosive to the next layer, a tamper of natural uranium. The tamper's main job was to hold the critical mass together as long as possible, but it would also reflect neutrons back into the core, and some part of it might fission as well. To prevent predetonation by an external neutron, the tamper was coated in a thin layer of boron. A polonium-beryllium modulated neutron initiator, known as an "urchin" because its shape resembled a sea urchin, was developed to start the chain reaction at precisely the right moment. This work with the chemistry and metallurgy of radioactive polonium was directed by Charles Allen Thomas of the Monsanto Company and became known as the Dayton Project. Testing required up to 500 curies per month of polonium, which Monsanto was able to deliver. The whole assembly was encased in a duralumin bomb casing to protect it from bullets and flak.
The ultimate task of the metallurgists was to determine how to cast plutonium into a sphere. The difficulties became apparent when attempts to measure the density of plutonium gave inconsistent results. At first contamination was believed to be the cause, but it was soon determined that there were multiple allotropes of plutonium. The brittle α phase that exists at room temperature changes to the plastic β phase at higher temperatures. Attention then shifted to the even more malleable δ phase that normally exists in the 300 °C to 450 °C range. It was found that this was stable at room temperature when alloyed with aluminum, but aluminum emits neutrons when bombarded with alpha particles, which would exacerbate the pre-ignition problem. The metallurgists then hit upon a plutonium-gallium alloy, which stabilized the δ phase and could be hot pressed into the desired spherical shape. As plutonium was found to corrode readily, the sphere was coated with nickel.
The work proved dangerous. By the end of the war, half the experienced chemists and metallurgists had to be removed from work with plutonium when unacceptably high levels of the element appeared in their urine. A minor fire at Los Alamos in January 1945 led to a fear that a fire in the plutonium laboratory might contaminate the whole town, and Groves authorized the construction of a new facility for plutonium chemistry and metallurgy, which became known as the DP-site. The hemispheres for the first plutonium pit (or core) were produced and delivered on 2 July 1945. Three more hemispheres followed on 23 July and were delivered three days later.
Because of the complexity of an implosion-style weapon, it was decided that, despite the waste of fissile material, an initial test would be required. Groves approved the test, subject to the active material being recovered. Consideration was therefore given to a controlled fizzle, but Oppenheimer opted instead for a full-scale nuclear test, codenamed "Trinity".
In March 1944, planning for the test was assigned to Kenneth Bainbridge, a professor of physics at Harvard, working under Kistiakowsky. Bainbridge selected the bombing range near Alamogordo Army Airfield as the site for the test. Bainbridge worked with Captain Samuel P. Davalos on the construction of the Trinity Base Camp and its facilities, which included barracks, warehouses, workshops, an explosive magazine and a commissary.
Groves did not relish the prospect of explaining the loss of a billion dollars' worth of plutonium to a Senate committee, so a cylindrical containment vessel codenamed "Jumbo" was constructed to recover the active material in the event of a failure. A massive cylinder, it was fabricated at great expense from iron and steel by Babcock & Wilcox in Barberton, Ohio. Brought in a special railroad car to a siding in Pope, New Mexico, it was transported the last leg of the journey to the test site on a trailer pulled by two tractors. By the time it arrived, however, confidence in the implosion method was high enough, and the availability of plutonium was sufficient, that Oppenheimer decided not to use it. Instead, it was placed atop a steel tower some distance from the weapon as a rough measure of how powerful the explosion would be. In the end, Jumbo survived, although its tower did not, adding credence to the belief that Jumbo would have successfully contained a fizzled explosion.
A pre-test explosion was conducted on 7 May 1945 to calibrate the instruments. A wooden test platform was erected a short distance from Ground Zero and piled with TNT spiked with nuclear fission products in the form of an irradiated uranium slug from Hanford, which was dissolved and poured into tubing inside the explosive. This explosion was observed by Oppenheimer and Groves's new deputy commander, Brigadier General Thomas Farrell. The pre-test produced data that proved vital for the Trinity test.
For the actual test, the weapon, nicknamed "the gadget", was hoisted to the top of a 100-foot (30 m) steel tower, as detonation at that height would give a better indication of how the weapon would behave when dropped from a bomber. Detonation in the air maximized the energy applied directly to the target and generated less nuclear fallout. The gadget was assembled under the supervision of Norris Bradbury at the nearby McDonald Ranch House on 13 July, and precariously winched up the tower the following day. Observers included Bush, Chadwick, Conant, Farrell, Fermi, Groves, Lawrence, Oppenheimer and Tolman. At 05:30 on 16 July 1945 the gadget exploded with an energy equivalent of around 20 kilotons of TNT, leaving a wide crater of Trinitite (radioactive glass) in the desert. The shock wave was felt more than 100 miles away, and the mushroom cloud rose several miles into the sky. The explosion was heard as far away as El Paso, Texas, so Groves issued a cover story about an ammunition magazine explosion at Alamogordo Field.
Oppenheimer later recalled that, while witnessing the explosion, he thought of a verse from the Hindu holy book, the "Bhagavad Gita" (XI,12): "If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the mighty one ..."
Years later he would explain that another verse had also entered his head at that time: "Now I am become Death, the destroyer of worlds."
In June 1944, the Manhattan Project employed some 129,000 workers, of whom 84,500 were construction workers, 40,500 were plant operators and 1,800 were military personnel. As construction activity fell off, the workforce declined to 100,000 a year later, but the number of military personnel increased to 5,600. Procuring the required numbers of workers, especially highly skilled workers, in competition with other vital wartime programs proved very difficult. In 1943, Groves obtained a special temporary priority for labor from the War Manpower Commission. In March 1944, both the War Production Board and the War Manpower Commission gave the project their highest priority.
Tolman and Conant, in their role as the project's scientific advisers, drew up a list of candidate scientists and had them rated by scientists already working on the project. Groves then sent a personal letter to the head of their university or company asking for them to be released for essential war work. At the University of Wisconsin–Madison, Stanislaw Ulam gave one of his students, Joan Hinton, an exam early, so she could leave to do war work. A few weeks later, Ulam received a letter from Hans Bethe, inviting him to join the project. Conant personally persuaded Kistiakowsky to join the project.
One source of skilled personnel was the Army itself, particularly the Army Specialized Training Program. In 1943, the MED created the Special Engineer Detachment (SED), with an authorized strength of 675. Technicians and skilled workers drafted into the Army were assigned to the SED. Another source was the Women's Army Corps (WAC). Initially intended for clerical tasks handling classified material, the WACs were soon tapped for technical and scientific tasks as well. On 1 February 1945, all military personnel assigned to the MED, including all SED detachments, were assigned to the 9812th Technical Service Unit, except at Los Alamos, where military personnel other than SED, including the WACs and Military Police, were assigned to the 4817th Service Command Unit.
An Associate Professor of Radiology at the University of Rochester School of Medicine, Stafford L. Warren, was commissioned as a colonel in the United States Army Medical Corps, and appointed as chief of the MED's Medical Section and Groves' medical advisor. Warren's initial task was to staff hospitals at Oak Ridge, Richland and Los Alamos. The Medical Section was responsible for medical research, but also for the MED's health and safety programs. This presented an enormous challenge, because workers were handling a variety of toxic chemicals, using hazardous liquids and gases under high pressures, working with high voltages, and performing experiments involving explosives, not to mention the largely unknown dangers presented by radioactivity and handling fissile materials. Yet in December 1945, the National Safety Council presented the Manhattan Project with the Award of Honor for Distinguished Service to Safety in recognition of its safety record. Between January 1943 and June 1945, there were 62 fatalities and 3,879 disabling injuries, which was about 62 percent below the rate of private industry.
A 1945 "Life" article estimated that before the Hiroshima and Nagasaki bombings "probably no more than a few dozen men in the entire country knew the full meaning of the Manhattan Project, and perhaps only a thousand others even were aware that work on atoms was involved." The magazine wrote that the more than 100,000 others employed with the project "worked like moles in the dark". Warned that disclosing the project's secrets was punishable by 10 years in prison or a $10,000 ($ today) fine, they saw enormous quantities of raw materials enter factories with nothing coming out, and monitored "dials and switches while behind thick concrete walls mysterious reactions took place" without knowing the purpose of their jobs.
In December 1945 the United States Army published a secret report analysing and assessing the security apparatus surrounding the Manhattan Project. The report states that the Manhattan Project was "more drastically guarded than any other highly secret war development." The security infrastructure surrounding the Manhattan Project was so vast and thorough that in the early days of the project in 1943, security investigators vetted 400,000 potential employees and 600 companies that would be involved in all aspects of the project for potential security risks.
Oak Ridge security personnel considered any private party with more than seven people as suspicious, and residents—who believed that US government agents were secretly among them—avoided repeatedly inviting the same guests. Although original residents of the area could be buried in existing cemeteries, every coffin was reportedly opened for inspection. Everyone, including top military officials, and their automobiles were searched when entering and exiting project facilities. One Oak Ridge worker stated that "if you got inquisitive, you were called on the carpet within two hours by government secret agents. Usually those summoned to explain were then escorted bag and baggage to the gate and ordered to keep going".
Workers had been told that their work would help end the war, and perhaps all future wars, but they could not see or understand the results of their often tedious duties, or even typical side effects of factory work such as smoke from smokestacks. Together with the end of the war in Europe without any use of their work, this caused serious morale problems and fed many rumors.
Another worker told of how, working in a laundry, she held "a special instrument" to uniforms every day and listened for "a clicking noise". She learned only after the war that she had been performing the important task of checking for radiation with a Geiger counter. To improve morale among such workers, Oak Ridge created an extensive system of intramural sports leagues, including 10 baseball teams, 81 softball teams, and 26 football teams.
Voluntary censorship of atomic information began before the Manhattan Project. After the start of the European war in 1939 American scientists began avoiding publishing military-related research, and in 1940 scientific journals began asking the National Academy of Sciences to clear articles. William L. Laurence of "The New York Times", who wrote an article on atomic fission in "The Saturday Evening Post" of 7 September 1940, later learned that government officials asked librarians nationwide in 1943 to withdraw the issue. The Soviets noticed the silence, however. In April 1942 nuclear physicist Georgy Flyorov wrote to Josef Stalin on the absence of articles on nuclear fission in American journals; this resulted in the Soviet Union establishing its own atomic bomb project.
The Manhattan Project operated under tight security lest its discovery induce Axis powers, especially Germany, to accelerate their own nuclear projects or undertake covert operations against the project. The government's Office of Censorship, by contrast, relied on the press to comply with a voluntary code of conduct it published, and the project at first avoided notifying the office. By early 1943 newspapers began publishing reports of large construction in Tennessee and Washington based on public records, and the office began discussing with the project how to maintain secrecy. In June the Office of Censorship asked newspapers and broadcasters to avoid discussing "atom smashing, atomic energy, atomic fission, atomic splitting, or any of their equivalents. The use for military purposes of radium or radioactive materials, heavy water, high voltage discharge equipment, cyclotrons." The office also asked to avoid discussion of "polonium, uranium, ytterbium, hafnium, protactinium, radium, rhenium, thorium, deuterium"; only uranium was sensitive, but was listed with other elements to hide its importance.
The prospect of sabotage was always present, and sometimes suspected when there were equipment failures. While there were some problems believed to be the result of careless or disgruntled employees, there were no confirmed instances of Axis-instigated sabotage. However, on 10 March 1945, a Japanese fire balloon struck a power line, and the resulting power surge caused the three reactors at Hanford to be temporarily shut down. With so many people involved, security was a difficult task. A special Counter Intelligence Corps detachment was formed to handle the project's security issues. By 1943, it was clear that the Soviet Union was attempting to penetrate the project. Lieutenant Colonel Boris T. Pash, the head of the Counter Intelligence Branch of the Western Defense Command, investigated suspected Soviet espionage at the Radiation Laboratory in Berkeley. Oppenheimer informed Pash that he had been approached by a fellow professor at Berkeley, Haakon Chevalier, about passing information to the Soviet Union.
The most successful Soviet spy was Klaus Fuchs, a member of the British Mission who played an important part at Los Alamos. The 1950 revelation of his espionage activities damaged the United States' nuclear cooperation with Britain and Canada. Subsequently, other instances of espionage were uncovered, leading to the arrest of Harry Gold, David Greenglass, and Ethel and Julius Rosenberg. Other spies like George Koval and Theodore Hall remained unknown for decades. The value of the espionage is difficult to quantify, as the principal constraint on the Soviet atomic bomb project was a shortage of uranium ore. The consensus is that espionage saved the Soviets one or two years of effort.
In addition to developing the atomic bomb, the Manhattan Project was charged with gathering intelligence on the German nuclear energy project. It was believed that the Japanese nuclear weapons program was not far advanced because Japan had little access to uranium ore, but it was initially feared that Germany was very close to developing its own weapons. At the instigation of the Manhattan Project, a bombing and sabotage campaign was carried out against heavy water plants in German-occupied Norway. A small mission was created, jointly staffed by the Office of Naval Intelligence, OSRD, the Manhattan Project, and Army Intelligence (G-2), to investigate enemy scientific developments. It was not restricted to those involving nuclear weapons. The Chief of Army Intelligence, Major General George V. Strong, appointed Boris Pash to command the unit, which was codenamed "Alsos", a Greek word meaning "grove".
The Alsos Mission to Italy questioned staff of the physics laboratory at the University of Rome following the capture of the city in June 1944. Meanwhile, Pash formed a combined British and American Alsos mission in London under the command of Captain Horace K. Calvert to participate in Operation Overlord. Groves considered the risk that the Germans might attempt to disrupt the Normandy landings with radioactive poisons sufficient to warrant warning General Dwight D. Eisenhower and sending an officer to brief his chief of staff, Lieutenant General Walter Bedell Smith. Under the codename Operation Peppermint, special equipment was prepared and Chemical Warfare Service teams were trained in its use.
Following in the wake of the advancing Allied armies, Pash and Calvert interviewed Frédéric Joliot-Curie about the activities of German scientists. They spoke to officials at Union Minière du Haut Katanga about uranium shipments to Germany. They tracked down 68 tons of ore in Belgium and 30 tons in France. The interrogation of German prisoners indicated that uranium and thorium were being processed in Oranienburg, 20 miles north of Berlin, so Groves arranged for it to be bombed on 15 March 1945.
An Alsos team went to Stassfurt in the Soviet Occupation Zone and retrieved 11 tons of ore from WIFO. In April 1945, Pash, in command of a composite force known as T-Force, conducted Operation Harborage, a sweep behind enemy lines of the cities of Hechingen, Bisingen, and Haigerloch that were the heart of the German nuclear effort. T-Force captured the nuclear laboratories, documents, equipment and supplies, including heavy water and 1.5 tons of metallic uranium.
Alsos teams rounded up German scientists including Kurt Diebner, Otto Hahn, Walther Gerlach, Werner Heisenberg, and Carl Friedrich von Weizsäcker, who were taken to England where they were interned at Farm Hall, a bugged house in Godmanchester. After the bombs were detonated in Japan, the Germans were forced to confront the fact that the Allies had done what they could not.
Starting in November 1943, the Army Air Forces Materiel Command at Wright Field, Ohio, began Silverplate, the codename for the modification of B-29s to carry the bombs. Test drops were carried out at Muroc Army Air Field, California, and the Naval Ordnance Test Station at Inyokern, California. Groves met with the Chief of United States Army Air Forces (USAAF), General Henry H. Arnold, in March 1944 to discuss the delivery of the finished bombs to their targets. The only Allied aircraft capable of carrying the elongated Thin Man or the bulbous Fat Man was the British Avro Lancaster, but using a British aircraft would have caused difficulties with maintenance. Groves hoped that the American Boeing B-29 Superfortress could be modified to carry Thin Man by joining its two bomb bays together. Arnold promised that no effort would be spared to modify B-29s to do the job, and designated Major General Oliver P. Echols as the USAAF liaison to the Manhattan Project. In turn, Echols named Colonel Roscoe C. Wilson as his alternate, and Wilson became the Manhattan Project's main USAAF contact. President Roosevelt instructed Groves that if the atomic bombs were ready before the war with Germany ended, he should be ready to drop them on Germany.
The 509th Composite Group was activated on 17 December 1944 at Wendover Army Air Field, Utah, under the command of Colonel Paul W. Tibbets. This base, close to the border with Nevada, was codenamed "Kingman" or "W-47". Training was conducted at Wendover and at Batista Army Airfield, Cuba, where the 393d Bombardment Squadron practiced long-distance flights over water, and dropping dummy pumpkin bombs. A special unit known as Project Alberta was formed at Los Alamos under Navy Captain William S. Parsons from Project Y as part of the Manhattan Project to assist in preparing and delivering the bombs. Commander Frederick L. Ashworth from Alberta met with Fleet Admiral Chester W. Nimitz on Guam in February 1945 to inform him of the project. While he was there, Ashworth selected North Field on the Pacific Island Tinian as a base for the 509th Composite Group, and reserved space for the group and its buildings. The group deployed there in July 1945. Farrell arrived at Tinian on 30 July as the Manhattan Project representative.
Most of the components for Little Boy left San Francisco on the cruiser USS "Indianapolis" on 16 July and arrived on Tinian on 26 July. Four days later the ship was sunk by a Japanese submarine. The remaining components, which included six uranium-235 rings, were delivered by three C-54 Skymasters of the 509th Group's 320th Troop Carrier Squadron. Two Fat Man assemblies travelled to Tinian in specially modified 509th Composite Group B-29s. The first plutonium core went in a special C-54. In late April, a joint targeting committee of the Manhattan District and USAAF was established to determine which cities in Japan should be targets, and recommended Kokura, Hiroshima, Niigata, and Kyoto. At this point, Secretary of War Henry L. Stimson intervened, announcing that he would be making the targeting decision, and that he would not authorize the bombing of Kyoto on the grounds of its historical and religious significance. Groves therefore asked Arnold to remove Kyoto not just from the list of nuclear targets, but from targets for conventional bombing as well. One of Kyoto's substitutes was Nagasaki.
In May 1945, the Interim Committee was created to advise on wartime and postwar use of nuclear energy. The committee was chaired by Stimson, with James F. Byrnes, a former US Senator soon to be Secretary of State, as President Harry S. Truman's personal representative; Ralph A. Bard, the Under Secretary of the Navy; William L. Clayton, the Assistant Secretary of State; Vannevar Bush; Karl T. Compton; James B. Conant; and George L. Harrison, an assistant to Stimson and president of New York Life Insurance Company. The Interim Committee in turn established a scientific panel consisting of Arthur Compton, Fermi, Lawrence and Oppenheimer to advise it on scientific issues. In its presentation to the Interim Committee, the scientific panel offered its opinion not just on the likely physical effects of an atomic bomb, but on its probable military and political impact.
At the Potsdam Conference in Germany, Truman was informed that the Trinity test had been successful. He told Stalin, the leader of the Soviet Union, that the US had a new superweapon, without giving any details. This was the first official communication to the Soviet Union about the bomb, but Stalin already knew about it from spies. With the authorization to use the bomb against Japan already given, no alternatives were considered after the Japanese rejection of the Potsdam Declaration.
On 6 August 1945, a Boeing B-29 Superfortress ("Enola Gay") of the 393d Bombardment Squadron, piloted by Tibbets, lifted off from North Field with Little Boy in its bomb bay. Hiroshima, the headquarters of the 2nd General Army and Fifth Division and a port of embarkation, was the primary target of the mission, with Kokura and Nagasaki as alternatives. With Farrell's permission, Parsons, the weaponeer in charge of the mission, completed the bomb assembly in the air to minimize the risks during takeoff. The bomb detonated above the city with a blast that was later estimated to be the equivalent of 13 kilotons of TNT, destroying a large area. Japanese officials determined that 69% of Hiroshima's buildings were destroyed and another 6–7% damaged. About 70,000 to 80,000 people, of whom 20,000 were Japanese combatants and 20,000 were Korean slave laborers, or some 30% of the population of Hiroshima, were killed immediately, and another 70,000 injured.
On the morning of 9 August 1945, a second B-29 ("Bockscar"), piloted by the 393d Bombardment Squadron's commander, Major Charles W. Sweeney, lifted off with Fat Man on board. This time, Ashworth served as weaponeer and Kokura was the primary target. Sweeney took off with the weapon already armed but with the electrical safety plugs still engaged. When they reached Kokura, they found cloud cover had obscured the city, prohibiting the visual attack required by orders. After three runs over the city, and with fuel running low, they headed for the secondary target, Nagasaki. Ashworth decided that a radar approach would be used if the target was obscured, but a last-minute break in the clouds over Nagasaki allowed a visual approach as ordered. The Fat Man was dropped over the city's industrial valley midway between the Mitsubishi Steel and Arms Works in the south and the Mitsubishi-Urakami Ordnance Works in the north. The resulting explosion had a blast yield equivalent to 21 kilotons of TNT, roughly the same as the Trinity blast, but was confined to the Urakami Valley, and a major portion of the city was protected by the intervening hills, resulting in the destruction of about 44% of the city. The bombing also crippled the city's industrial production extensively and killed 23,200–28,200 Japanese industrial workers and 150 Japanese soldiers. Overall, an estimated 35,000–40,000 people were killed and 60,000 injured.
Groves expected to have another atomic bomb ready for use on 19 August, with three more in September and a further three in October. Two more Fat Man assemblies were readied, and scheduled to leave Kirtland Field for Tinian on 11 and 14 August. At Los Alamos, technicians worked 24 hours straight to cast another plutonium core. Although cast, it still needed to be pressed and coated, which would take until 16 August. It could therefore have been ready for use on 19 August. On 10 August, Truman secretly requested that additional atomic bombs not be dropped on Japan without his express authority. Groves suspended the third core's shipment on his own authority on 13 August.
On 11 August, Groves phoned Warren with orders to organize a survey team to report on the damage and radioactivity at Hiroshima and Nagasaki. A party equipped with portable Geiger counters arrived in Hiroshima on 8 September headed by Farrell and Warren, with Japanese Rear Admiral Masao Tsuzuki, who acted as a translator. They remained in Hiroshima until 14 September and then surveyed Nagasaki from 19 September to 8 October. This and other scientific missions to Japan would provide valuable scientific and historical data.
The necessity of the bombings of Hiroshima and Nagasaki became a subject of controversy among historians. Some questioned whether an "atomic diplomacy" would not have attained the same goals and disputed whether the bombings or the Soviet declaration of war on Japan was decisive. The Franck Report was the most notable effort pushing for a demonstration but was turned down by the Interim Committee's scientific panel. The Szilárd petition, drafted in July 1945 and signed by dozens of scientists working on the Manhattan Project, was a late attempt at warning President Harry S. Truman about his responsibility in using such weapons.
Seeing the work they had not understood produce the Hiroshima and Nagasaki bombs amazed the workers of the Manhattan Project as much as the rest of the world; newspapers in Oak Ridge announcing the Hiroshima bomb sold for $1. Although the bombs' existence was public, secrecy continued, and many workers remained ignorant of their jobs; one stated in 1946, "I don't know what the hell I'm doing besides looking into a ——— and turning a ——— alongside a ———. I don't know anything about it, and there's nothing to say". Many residents continued to avoid discussion of "the stuff" in ordinary conversation despite it being the reason for their town's existence.
In anticipation of the bombings, Groves had Henry DeWolf Smyth prepare a history for public consumption. "Atomic Energy for Military Purposes", better known as the "Smyth Report", was released to the public on 12 August 1945. Groves and Nichols presented Army–Navy "E" Awards to key contractors, whose involvement had hitherto been secret. Over 20 awards of the Presidential Medal for Merit were made to key contractors and scientists, including Bush and Oppenheimer. Military personnel received the Legion of Merit, including the commander of the Women's Army Corps detachment, Captain Arlene G. Scheidenhelm.
At Hanford, plutonium production fell off as Reactors B, D and F wore out, poisoned by fission products and swelling of the graphite moderator known as the Wigner effect. The swelling damaged the charging tubes where the uranium was irradiated to produce plutonium, rendering them unusable. In order to maintain the supply of polonium for the urchin initiators, production was curtailed and the oldest unit, B pile, was closed down so at least one reactor would be available in the future. Research continued, with DuPont and the Metallurgical Laboratory developing a redox solvent extraction process as an alternative plutonium extraction technique to the bismuth phosphate process, which left unspent uranium in a state from which it could not easily be recovered.
Bomb engineering was carried out by the Z Division, named for its director, Dr. Jerrold R. Zacharias from Los Alamos. Z Division was initially located at Wendover Field but moved to Oxnard Field, New Mexico, in September 1945 to be closer to Los Alamos. This marked the beginning of Sandia Base. Nearby Kirtland Field was used as a B-29 base for aircraft compatibility and drop tests. By October, all the staff and facilities at Wendover had been transferred to Sandia. As reservist officers were demobilized, they were replaced by about fifty hand-picked regular officers.
Nichols recommended that S-50 and the Alpha tracks at Y-12 be closed down. This was done in September. Although performing better than ever, the Alpha tracks could not compete with K-25 and the new K-27, which had commenced operation in January 1946. In December, the Y-12 plant was closed, thereby cutting the Tennessee Eastman payroll from 8,600 to 1,500 and saving $2 million a month.
Nowhere was demobilization more of a problem than at Los Alamos, where there was an exodus of talent. Much remained to be done. The bombs used on Hiroshima and Nagasaki were like laboratory pieces; work would be required to make them simpler, safer and more reliable. Implosion methods needed to be developed for uranium in place of the wasteful gun method, and composite uranium-plutonium cores were needed now that plutonium was in short supply because of the problems with the reactors. However, uncertainty about the future of the laboratory made it hard to induce people to stay. Oppenheimer returned to his job at the University of California and Groves appointed Norris Bradbury as an interim replacement. In fact, Bradbury would remain in the post for the next 25 years. Groves attempted to combat the dissatisfaction caused by the lack of amenities with a construction program that included an improved water supply, three hundred houses, and recreation facilities.
Two Fat Man–type detonations were conducted at Bikini Atoll in July 1946 as part of Operation Crossroads to investigate the effect of nuclear weapons on warships. Able was detonated on 1 July 1946. The more spectacular Baker was detonated underwater on 25 July 1946.
After the bombings at Hiroshima and Nagasaki, a number of Manhattan Project physicists founded the "Bulletin of the Atomic Scientists", which began as an emergency action undertaken by scientists who saw urgent need for an immediate educational program about atomic weapons. In the face of the destructiveness of the new weapons and in anticipation of the nuclear arms race several project members including Bohr, Bush and Conant expressed the view that it was necessary to reach agreement on international control of nuclear research and atomic weapons. The Baruch Plan, unveiled in a speech to the newly formed United Nations Atomic Energy Commission (UNAEC) in June 1946, proposed the establishment of an international atomic development authority, but was not adopted.
Following a domestic debate over the permanent management of the nuclear program, the United States Atomic Energy Commission (AEC) was created by the Atomic Energy Act of 1946 to take over the functions and assets of the Manhattan Project. It established civilian control over atomic development, and separated the development, production and control of atomic weapons from the military. Military aspects were taken over by the Armed Forces Special Weapons Project (AFSWP). Although the Manhattan Project ceased to exist on 31 December 1946, the Manhattan District was not abolished until 15 August 1947.
The project expenditure through 1 October 1945 was $1.845 billion, equivalent to less than nine days of wartime spending, and was $2.191 billion when the AEC assumed control on 1 January 1947. Total allocation was $2.4 billion. Over 90% of the cost was for building plants and producing the fissionable materials, and less than 10% for development and production of the weapons.
A total of four weapons (the Trinity gadget, Little Boy, Fat Man, and an unused Fat Man bomb) were produced by the end of 1945, making the average cost per bomb around $500 million in 1945 dollars. By comparison, the project's total cost by the end of 1945 was about 90% of the total spent on the production of US small arms (not including ammunition) and 34% of the total spent on US tanks during the same period. Overall, it was the second most expensive weapons project undertaken by the United States in World War II, behind only the design and production of the Boeing B-29 Superfortress.
The political and cultural impacts of the development of nuclear weapons were profound and far-reaching. William Laurence of "The New York Times", the first to use the phrase "Atomic Age", became the official correspondent for the Manhattan Project in spring 1945. In 1943 and 1944 he unsuccessfully attempted to persuade the Office of Censorship to permit writing about the explosive potential of uranium, and government officials felt that he had earned the right to report on the biggest secret of the war. Laurence witnessed both the Trinity test and the bombing of Nagasaki and wrote the official press releases prepared for them. He went on to write a series of articles extolling the virtues of the new weapon. His reporting before and after the bombings helped to spur public awareness of the potential of nuclear technology and motivated its development in the United States and the Soviet Union.
The wartime Manhattan Project left a legacy in the form of the network of national laboratories: the Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, Argonne National Laboratory, and Ames Laboratory. Two more were established by Groves soon after the war, the Brookhaven National Laboratory at Upton, New York, and the Sandia National Laboratories at Albuquerque, New Mexico. Groves allocated $72 million to them for research activities in fiscal year 1946–1947. They would be in the vanguard of the kind of large-scale research that Alvin Weinberg, the director of the Oak Ridge National Laboratory, would call Big Science.
The Naval Research Laboratory had long been interested in the prospect of using nuclear power for warship propulsion, and sought to create its own nuclear project. In May 1946, Nimitz, now Chief of Naval Operations, decided that the Navy should instead work with the Manhattan Project. A group of naval officers were assigned to Oak Ridge, the most senior of whom was Captain Hyman G. Rickover, who became assistant director there. They immersed themselves in the study of nuclear energy, laying the foundations for a nuclear-powered navy. A similar group of Air Force personnel arrived at Oak Ridge in September 1946 with the aim of developing nuclear aircraft. Their Nuclear Energy for the Propulsion of Aircraft (NEPA) project ran into formidable technical difficulties, and was ultimately cancelled.
The ability of the new reactors to create radioactive isotopes in previously unheard-of quantities sparked a revolution in nuclear medicine in the immediate postwar years. Starting in mid-1946, Oak Ridge began distributing radioisotopes to hospitals and universities. Most of the orders were for iodine-131 and phosphorus-32, which were used in the diagnosis and treatment of cancer. In addition to medicine, isotopes were also used in biological, industrial and agricultural research.
On handing over control to the Atomic Energy Commission, Groves bid farewell to the people who had worked on the Manhattan Project.
In 2014, the United States Congress passed a law providing for a national park dedicated to the history of the Manhattan Project. The Manhattan Project National Historical Park was established on 10 November 2015.
|
https://en.wikipedia.org/wiki?curid=19603
|
Main sequence
In astronomy, the main sequence is a continuous and distinctive band of stars that appears on plots of stellar color versus brightness. These color-magnitude plots are known as Hertzsprung–Russell diagrams after their co-developers, Ejnar Hertzsprung and Henry Norris Russell. Stars on this band are known as main-sequence stars or dwarf stars. These are the most numerous true stars in the universe, and include the Earth's Sun.
After condensation and ignition of a star, it generates thermal energy in its dense core region through nuclear fusion of hydrogen into helium. During this stage of the star's lifetime, it is located on the main sequence at a position determined primarily by its mass, but also based upon its chemical composition and age. The cores of main-sequence stars are in hydrostatic equilibrium, where outward thermal pressure from the hot core is balanced by the inward pressure of gravitational collapse from the overlying layers. The strong dependence of the rate of energy generation on temperature and pressure helps to sustain this balance. Energy generated at the core makes its way to the surface and is radiated away at the photosphere. The energy is carried by either radiation or convection, with the latter occurring in regions with steeper temperature gradients, higher opacity or both.
The main sequence is sometimes divided into upper and lower parts, based on the dominant process that a star uses to generate energy. Stars below about 1.5 times the mass of the Sun primarily fuse hydrogen atoms together in a series of stages to form helium, a sequence called the proton–proton chain. Above this mass, in the upper main sequence, the nuclear fusion process mainly uses atoms of carbon, nitrogen and oxygen as intermediaries in the CNO cycle that produces helium from hydrogen atoms. Main-sequence stars with more than two solar masses undergo convection in their core regions, which acts to stir up the newly created helium and maintain the proportion of fuel needed for fusion to occur. Below this mass, stars have cores that are entirely radiative with convective zones near the surface. With decreasing stellar mass, the proportion of the star forming a convective envelope steadily increases. Main-sequence stars below about 0.4 solar masses undergo convection throughout their mass. When core convection does not occur, a helium-rich core develops surrounded by an outer layer of hydrogen.
In general, the more massive a star is, the shorter its lifespan on the main sequence. After the hydrogen fuel at the core has been consumed, the star evolves away from the main sequence on the HR diagram, into a supergiant, red giant, or directly to a white dwarf.
In the early part of the 20th century, information about the types and distances of stars became more readily available. The spectra of stars were shown to have distinctive features, which allowed them to be categorized. Annie Jump Cannon and Edward C. Pickering at Harvard College Observatory developed a method of categorization that became known as the Harvard Classification Scheme, published in the "Harvard Annals" in 1901.
In Potsdam in 1906, the Danish astronomer Ejnar Hertzsprung noticed that the reddest stars—classified as K and M in the Harvard scheme—could be divided into two distinct groups. These stars are either much brighter than the Sun, or much fainter. To distinguish these groups, he called them "giant" and "dwarf" stars. The following year he began studying star clusters: large groupings of stars that are co-located at approximately the same distance. He published the first plots of color versus luminosity for these stars. These plots showed a prominent and continuous sequence of stars, which he named the Main Sequence.
At Princeton University, Henry Norris Russell was following a similar course of research. He was studying the relationship between the spectral classification of stars and their actual brightness as corrected for distance—their absolute magnitude. For this purpose he used a set of stars that had reliable parallaxes and many of which had been categorized at Harvard. When he plotted the spectral types of these stars against their absolute magnitude, he found that dwarf stars followed a distinct relationship. This allowed the real brightness of a dwarf star to be predicted with reasonable accuracy.
Of the red stars observed by Hertzsprung, the dwarf stars also followed the spectra-luminosity relationship discovered by Russell. However, the giant stars are much brighter than dwarfs and so do not follow the same relationship. Russell proposed that the "giant stars must have low density or great surface-brightness, and the reverse is true of dwarf stars". The same curve also showed that there were very few faint white stars.
In 1933, Bengt Strömgren introduced the term Hertzsprung–Russell diagram to denote a luminosity-spectral class diagram. This name reflected the parallel development of this technique by both Hertzsprung and Russell earlier in the century.
As evolutionary models of stars were developed during the 1930s, it was shown that, for stars of a uniform chemical composition, a relationship exists between a star's mass and its luminosity and radius. That is, for a given mass and composition, there is a unique solution for determining the star's radius and luminosity. This became known as the Vogt–Russell theorem; named after Heinrich Vogt and Henry Norris Russell. By this theorem, when a star's chemical composition and its position on the main sequence is known, so too is the star's mass and radius. (However, it was subsequently discovered that the theorem breaks down somewhat for stars of non-uniform composition.)
A refined scheme for stellar classification was published in 1943 by William Wilson Morgan and Philip Childs Keenan. The MK classification assigned each star a spectral type—based on the Harvard classification—and a luminosity class. The Harvard classification had been developed by assigning a different letter to each star based on the strength of the hydrogen spectral line, before the relationship between spectra and temperature was known. When ordered by temperature and when duplicate classes were removed, the spectral types of stars followed, in order of decreasing temperature with colors ranging from blue to red, the sequence O, B, A, F, G, K and M. (A popular mnemonic for memorizing this sequence of stellar classes is "Oh Be A Fine Girl/Guy, Kiss Me".) The luminosity class ranged from I to V, in order of decreasing luminosity. Stars of luminosity class V belonged to the main sequence.
In April 2018, astronomers reported the detection of the most distant "ordinary" (i.e., main sequence) star, named Icarus (formally, MACS J1149 Lensed Star 1), at 9 billion light-years away from Earth.
When a protostar is formed from the collapse of a giant molecular cloud of gas and dust in the local interstellar medium, the initial composition is homogeneous throughout, consisting of about 70% hydrogen, 28% helium and trace amounts of other elements, by mass. The initial mass of the star depends on the local conditions within the cloud. (The mass distribution of newly formed stars is described empirically by the initial mass function.) During the initial collapse, this pre-main-sequence star generates energy through gravitational contraction. Once sufficiently dense, stars begin converting hydrogen into helium and giving off energy through an exothermic nuclear fusion process.
When nuclear fusion of hydrogen becomes the dominant energy production process and the excess energy gained from gravitational contraction has been lost, the star lies along a curve on the Hertzsprung–Russell diagram (or HR diagram) called the standard main sequence. Astronomers will sometimes refer to this stage as "zero age main sequence", or ZAMS. The ZAMS curve can be calculated using computer models of stellar properties at the point when stars begin hydrogen fusion. From this point, the brightness and surface temperature of stars typically increase with age.
A star remains near its initial position on the main sequence until a significant amount of hydrogen in the core has been consumed, then begins to evolve into a more luminous star. (On the HR diagram, the evolving star moves up and to the right of the main sequence.) Thus the main sequence represents the primary hydrogen-burning stage of a star's lifetime.
The majority of stars on a typical HR diagram lie along the main-sequence curve. This line is pronounced because both the spectral type and the luminosity depend only on a star's mass, at least to zeroth-order approximation, as long as it is fusing hydrogen at its core—and that is what almost all stars spend most of their "active" lives doing.
The temperature of a star determines its spectral type via its effect on the physical properties of plasma in its photosphere. A star's energy emission as a function of wavelength is influenced by both its temperature and composition. A key indicator of this energy distribution is given by the color index, "B" − "V", which measures the star's magnitude in blue ("B") and green-yellow ("V") light by means of filters. This difference in magnitude provides a measure of a star's temperature.
Main-sequence stars are called dwarf stars, but this terminology is partly historical and can be somewhat confusing. For the cooler stars, dwarfs such as red dwarfs, orange dwarfs, and yellow dwarfs are indeed much smaller and dimmer than other stars of those colors. However, for hotter blue and white stars, the difference in size and brightness between so-called "dwarf" stars that are on the main sequence and so-called "giant" stars that are not, becomes smaller. For the hottest stars the difference is not directly observable and for these stars the terms "dwarf" and "giant" refer to differences in spectral lines which indicate whether a star is on or off the main sequence. Nevertheless, very hot main-sequence stars are still sometimes called dwarfs, even though they have roughly the same size and brightness as the "giant" stars of that temperature.
The common use of "dwarf" to mean main sequence is confusing in another way, because there are dwarf stars which are not main-sequence stars. For example, a white dwarf is the dead core left over after a star has shed its outer layers, and is much smaller than a main-sequence star, roughly the size of Earth. These represent the final evolutionary stage of many main-sequence stars.
By treating the star as an idealized energy radiator known as a black body, the luminosity "L" and radius "R" can be related to the effective temperature "T"eff by the Stefan–Boltzmann law:
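L = 4πR^2σTeff^4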
where "σ" is the Stefan–Boltzmann constant. As the position of a star on the HR diagram shows its approximate luminosity, this relation can be used to estimate its radius.
The mass, radius and luminosity of a star are closely interlinked, and their respective values can be approximated by three relations. First is the Stefan–Boltzmann law, which relates the luminosity "L", the radius "R" and the surface temperature "Teff". Second is the mass–luminosity relation, which relates the luminosity "L" and the mass "M". Finally, the relationship between "M" and "R" is close to linear. The ratio of "M" to "R" increases by a factor of only three over 2.5 orders of magnitude of "M". This relation is roughly proportional to the star's inner temperature "TI", and its extremely slow increase reflects the fact that the rate of energy generation in the core strongly depends on this temperature, whereas it has to fit the mass–luminosity relation. Thus, a too high or too low temperature will result in stellar instability.
A better approximation is to take "ε" = "L/M", the energy generation rate per unit mass, as ε is proportional to "TI"^15, where "TI" is the core temperature. This is suitable for stars at least as massive as the Sun, exhibiting the CNO cycle, and gives the better fit "R" ∝ "M"^0.78.
The table below shows typical values for stars along the main sequence. The values of luminosity ("L"), radius ("R") and mass ("M") are relative to the Sun—a dwarf star with a spectral classification of G2 V. The actual values for a star may vary by as much as 20–30% from the values listed below.
All main-sequence stars have a core region where energy is generated by nuclear fusion. The temperature and density of this core are at the levels necessary to sustain the energy production that will support the remainder of the star. A reduction of energy production would cause the overlaying mass to compress the core, resulting in an increase in the fusion rate because of higher temperature and pressure. Likewise an increase in energy production would cause the star to expand, lowering the pressure at the core. Thus the star forms a self-regulating system in hydrostatic equilibrium that is stable over the course of its main sequence lifetime.
Main-sequence stars employ two types of hydrogen fusion processes, and the rate of energy generation from each type depends on the temperature in the core region. Astronomers divide the main sequence into upper and lower parts, based on which of the two is the dominant fusion process. In the lower main sequence, energy is primarily generated as the result of the proton-proton chain, which directly fuses hydrogen together in a series of stages to produce helium. Stars in the upper main sequence have sufficiently high core temperatures to efficiently use the CNO cycle (see chart). This process uses atoms of carbon, nitrogen and oxygen as intermediaries in the process of fusing hydrogen into helium.
At a stellar core temperature of 18 million kelvins, the PP process and CNO cycle are equally efficient, and each type generates half of the star's net luminosity. As this is the core temperature of a star of about 1.5 solar masses, the upper main sequence consists of stars above this mass. Thus, roughly speaking, stars of spectral class F or cooler belong to the lower main sequence, while A-type stars or hotter are upper main-sequence stars. The transition in primary energy production from one form to the other spans a mass range of less than a single solar mass. In the Sun, a one solar-mass star, only 1.5% of the energy is generated by the CNO cycle. By contrast, stars with 1.8 solar masses or above generate almost their entire energy output through the CNO cycle.
The observed upper limit for a main-sequence star is 120–200 solar masses. The theoretical explanation for this limit is that stars above this mass cannot radiate energy fast enough to remain stable, so any additional mass will be ejected in a series of pulsations until the star reaches a stable limit. The lower limit for sustained proton–proton nuclear fusion is about 0.08 solar masses, or about 80 times the mass of Jupiter. Below this threshold are sub-stellar objects that cannot sustain hydrogen fusion, known as brown dwarfs.
Because there is a temperature difference between the core and the surface, or photosphere, energy is transported outward. The two modes for transporting this energy are radiation and convection. A radiation zone, where energy is transported by radiation, is stable against convection and there is very little mixing of the plasma. By contrast, in a convection zone the energy is transported by bulk movement of plasma, with hotter material rising and cooler material descending. Convection is a more efficient mode for carrying energy than radiation, but it will only occur under conditions that create a steep temperature gradient.
In massive stars (above 10 solar masses) the rate of energy generation by the CNO cycle is very sensitive to temperature, so the fusion is highly concentrated at the core. Consequently, there is a high temperature gradient in the core region, which results in a convection zone for more efficient energy transport. This mixing of material around the core removes the helium ash from the hydrogen-burning region, allowing more of the hydrogen in the star to be consumed during the main-sequence lifetime. The outer regions of a massive star transport energy by radiation, with little or no convection.
Intermediate-mass stars such as Sirius may transport energy primarily by radiation, with a small core convection region. Medium-sized, low-mass stars like the Sun have a core region that is stable against convection, with a convection zone near the surface that mixes the outer layers. This results in a steady buildup of a helium-rich core, surrounded by a hydrogen-rich outer region. By contrast, cool, very low-mass stars (below 0.4 solar masses) are convective throughout. Thus the helium produced at the core is distributed across the star, producing a relatively uniform atmosphere and a proportionately longer main sequence lifespan.
As non-fusing helium ash accumulates in the core of a main-sequence star, the reduction in the abundance of hydrogen per unit mass results in a gradual lowering of the fusion rate within that mass. Since it is the outflow of fusion-supplied energy that supports the higher layers of the star, the core is compressed, producing higher temperatures and pressures. Both factors increase the rate of fusion, moving the equilibrium towards a smaller, denser, hotter core that produces more energy; the increased outflow pushes the higher layers further out. Thus there is a steady increase in the luminosity and radius of the star over time. For example, the luminosity of the early Sun was only about 70% of its current value. As a star ages this luminosity increase changes its position on the HR diagram. This effect results in a broadening of the main sequence band because stars are observed at random stages in their lifetime. That is, the main sequence band develops a thickness on the HR diagram; it is not simply a narrow line.
Other factors that broaden the main sequence band on the HR diagram include uncertainty in the distance to stars and the presence of unresolved binary stars that can alter the observed stellar parameters. However, even perfect observation would show a fuzzy main sequence because mass is not the only parameter that affects a star's color and luminosity. Variations in chemical composition caused by the initial abundances, the star's evolutionary status, interaction with a close companion, rapid rotation, or a magnetic field can all slightly change a main-sequence star's HR diagram position, to name just a few factors. As an example, there are metal-poor stars (with a very low abundance of elements with higher atomic numbers than helium) that lie just below the main sequence and are known as subdwarfs. These stars are fusing hydrogen in their cores and so they mark the lower edge of main sequence fuzziness caused by variance in chemical composition.
A nearly vertical region of the HR diagram, known as the instability strip, is occupied by pulsating variable stars known as Cepheid variables. These stars vary in magnitude at regular intervals, giving them a pulsating appearance. The strip intersects the upper part of the main sequence in the region of class "A" and "F" stars, which are between one and two solar masses. Pulsating stars in this part of the instability strip that intersects the upper part of the main sequence are called Delta Scuti variables. Main-sequence stars in this region experience only small changes in magnitude and so this variation is difficult to detect. Other classes of unstable main-sequence stars, like Beta Cephei variables, are unrelated to this instability strip.
The total amount of energy that a star can generate through nuclear fusion of hydrogen is limited by the amount of hydrogen fuel that can be consumed at the core. For a star in equilibrium, the energy generated at the core must be at least equal to the energy radiated at the surface. Since the luminosity gives the amount of energy radiated per unit time, the total life span can be estimated, to first approximation, as the total energy produced divided by the star's luminosity.
For a star with at least 0.5 solar masses, when the hydrogen supply in its core is exhausted and it expands to become a red giant, it can start to fuse helium atoms to form carbon. The energy output of the helium fusion process per unit mass is only about a tenth the energy output of the hydrogen process, and the luminosity of the star increases. This results in a much shorter length of time in this stage compared to the main sequence lifetime. (For example, the Sun is predicted to spend far less time burning helium than the roughly 12 billion years it spends burning hydrogen.) Thus, about 90% of the observed stars above 0.5 solar masses will be on the main sequence. On average, main-sequence stars are known to follow an empirical mass-luminosity relationship. The luminosity ("L") of the star is roughly proportional to the total mass ("M") as the following power law:
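L / L☉ ≈ (M / M☉)^3.5
(The exponent of roughly 3.5 is a representative average; the exact value varies with mass, as described below.)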
This relationship applies to main-sequence stars in the range 0.1–50 solar masses.
The amount of fuel available for nuclear fusion is proportional to the mass of the star. Thus, the lifetime of a star on the main sequence can be estimated by comparing it to solar evolutionary models. The Sun has been a main-sequence star for about 4.5 billion years and it will become a red giant in 6.5 billion years, for a total main sequence lifetime of roughly 10^10 years. Hence:
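τMS ≈ 10^10 years × [M / M☉] × [L☉ / L]
(so a star of one solar mass and one solar luminosity has an estimated main-sequence lifetime of about 10^10 years, the figure quoted above)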
where "M" and "L" are the mass and luminosity of the star, respectively, formula_4 is a solar mass, formula_5 is the solar luminosity and formula_6 is the star's estimated main sequence lifetime.
Although more massive stars have more fuel to burn and might intuitively be expected to last longer, they also radiate a proportionately greater amount with increased mass. This is required by the stellar equation of state; for a massive star to maintain equilibrium, the outward pressure of radiated energy generated in the core not only must but "will" rise to match the titanic inward gravitational pressure of its envelope. Thus, the most massive stars may remain on the main sequence for only a few million years, while stars with less than a tenth of a solar mass may last for over a trillion years.
The exact mass-luminosity relationship depends on how efficiently energy can be transported from the core to the surface. A higher opacity has an insulating effect that retains more energy at the core, so the star does not need to produce as much energy to remain in hydrostatic equilibrium. By contrast, a lower opacity means energy escapes more rapidly and the star must burn more fuel to remain in equilibrium. A sufficiently high opacity can result in energy transport via convection, which changes the conditions needed to remain in equilibrium.
In high-mass main-sequence stars, the opacity is dominated by electron scattering, which is nearly constant with increasing temperature. Thus the luminosity only increases as the cube of the star's mass. For stars below 10 solar masses, the opacity becomes dependent on temperature, resulting in the luminosity varying approximately as the fourth power of the star's mass. For very low-mass stars, molecules in the atmosphere also contribute to the opacity. Below about 0.5 solar masses, the luminosity of the star varies as the mass to the power of 2.3, producing a flattening of the slope on a graph of mass versus luminosity. Even these refinements are only an approximation, however, and the mass-luminosity relation can vary depending on a star's composition.
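In summary, the approximate scalings in the three regimes just described are:
L ∝ M^3 (high-mass stars, electron-scattering opacity)
L ∝ M^4 (stars below about 10 solar masses)
L ∝ M^2.3 (stars below about 0.5 solar masses)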
When a main-sequence star has consumed the hydrogen at its core, the loss of energy generation causes its gravitational collapse to resume and the star evolves off the main sequence. The path which the star follows across the HR diagram is called an evolutionary track.
Stars below a certain threshold mass are predicted to become white dwarfs directly when energy generation by nuclear fusion of hydrogen at their core comes to a halt, although no stars are old enough for this to have occurred.
In stars more massive than this threshold, the hydrogen surrounding the helium core reaches sufficient temperature and pressure to undergo fusion, forming a hydrogen-burning shell and causing the outer layers of the star to expand and cool. The stage as these stars move away from the main sequence is known as the subgiant branch; it is relatively brief and appears as a gap in the evolutionary track since few stars are observed at that point.
When the helium core of low-mass stars becomes degenerate, or the outer layers of intermediate-mass stars cool sufficiently to become opaque, their hydrogen shells increase in temperature and the stars start to become more luminous. This is known as the red giant branch; it is a relatively long-lived stage and it appears prominently in H–R diagrams. These stars will eventually end their lives as white dwarfs.
The most massive stars do not become red giants; instead, their cores quickly become hot enough to fuse helium and eventually heavier elements and they are known as supergiants. They follow approximately horizontal evolutionary tracks from the main sequence across the top of the H–R diagram. Supergiants are relatively rare and do not show prominently on most H–R diagrams. Their cores will eventually collapse, usually leading to a supernova and leaving behind either a neutron star or black hole.
When a cluster of stars is formed at about the same time, the main sequence lifespan of these stars will depend on their individual masses. The most massive stars will leave the main sequence first, followed in sequence by stars of ever lower masses. The position where stars in the cluster are leaving the main sequence is known as the turnoff point. By knowing the main sequence lifespan of stars at this point, it becomes possible to estimate the age of the cluster.
|
https://en.wikipedia.org/wiki?curid=19605
|
Memory leak
In computer science, a memory leak is a type of resource leak that occurs when a computer program incorrectly manages memory allocations in such a way that memory which is no longer needed is not released. A memory leak may also happen when an object is stored in memory but cannot be accessed by the running code. A memory leak has symptoms similar to a number of other problems and generally can only be diagnosed by a programmer with access to the program's source code.
A space leak occurs when a computer program uses more memory than necessary. In contrast to memory leaks, where the leaked memory is never released, the memory consumed by a space leak is released, but later than expected.
Because they can exhaust available system memory as an application runs, memory leaks are often the cause of or a contributing factor to software aging.
A memory leak reduces the performance of the computer by reducing the amount of available memory. Eventually, in the worst case, too much of the available memory may become allocated and all or part of the system or device stops working correctly, the application fails, or the system slows down vastly due to thrashing.
Memory leaks may not be serious or even detectable by normal means. In modern operating systems, normal memory used by an application is released when the application terminates. This means that a memory leak in a program that only runs for a short time may not be noticed and is rarely serious.
Much more serious leaks include those in programs that run for long periods and consume additional memory over time (such as background tasks on servers and software on embedded devices that may run for years), those that allocate new memory frequently for one-time tasks (such as rendering the frames of a game or video), those running where memory is severely limited to begin with (as in embedded systems and portable devices), and those that occur inside the operating system, a device driver, or the memory manager itself.
The following example, written in pseudocode, is intended to show how a memory leak can come about, and its effects, without needing any programming knowledge. The program in this case is part of some very simple software designed to control an elevator. This part of the program is run whenever anyone inside the elevator presses the button for a floor.
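A rough C++-style sketch of the routine being described might look like the following (the helper functions wait_until_idle and go_to_floor are hypothetical stand-ins for the elevator hardware, not part of any real API):

// Hypothetical elevator helpers; their implementations do not matter for the example.
void wait_until_idle() { /* ... */ }
void go_to_floor(int /*floor*/) { /* ... */ }

// Run whenever someone inside the elevator presses the button for a floor.
void on_button_press(int requested_floor, int current_floor)
{
    int* floor = new int(requested_floor); // get some memory to remember the floor number
    if (requested_floor != current_floor) {
        wait_until_idle();                 // wait until the lift is idle
        go_to_floor(*floor);               // go to the required floor
        delete floor;                      // release the memory used to remember the floor number
    }
    // If the requested floor is the floor the elevator is already on, the branch above
    // is skipped, the delete never runs, and the allocation is leaked.
}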
The memory leak would occur if the floor number requested is the same floor that the elevator is on; the condition for releasing the memory would be skipped. Each time this case occurs, more memory is leaked.
Cases like this wouldn't usually have any immediate effects. People do not often press the button for the floor they are already on, and in any case, the elevator might have enough spare memory that this could happen hundreds or thousands of times. However, the elevator will eventually run out of memory. This could take months or years, so it might not be discovered despite thorough testing.
The consequences would be unpleasant; at the very least, the elevator would stop responding to requests to move to another floor (such as when an attempt is made to call the elevator or when someone is inside and presses the floor buttons). If other parts of the program need memory (a part assigned to open and close the door, for example), then someone may be trapped inside, or if no one is in, then no one would be able to use the elevator since the software cannot open the door.
The memory leak lasts until the system is reset. For example, if the elevator's power were turned off, or a power outage occurred, the program would stop running. When power was turned on again, the program would restart and all the memory would be available again, but the slow process of the memory leak would restart together with the program, eventually impairing the correct running of the system.
The leak in the above example can be corrected by bringing the 'release' operation outside of the conditional:
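Continuing the hypothetical sketch above, the corrected routine releases the memory on every path:

void on_button_press(int requested_floor, int current_floor)
{
    int* floor = new int(requested_floor);
    if (requested_floor != current_floor) {
        wait_until_idle();
        go_to_floor(*floor);
    }
    delete floor; // released whether or not the elevator had to move
}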
Memory leaks are a common error in programming, especially when using languages that have no built-in automatic garbage collection, such as C and C++. Typically, a memory leak occurs because dynamically allocated memory has become unreachable. The prevalence of memory leak bugs has led to the development of a number of debugging tools to detect unreachable memory. "BoundsChecker", "Deleaker", "IBM Rational Purify", "Valgrind", "Parasoft Insure++", "Dr. Memory" and "memwatch" are some of the more popular memory debuggers for C and C++ programs. "Conservative" garbage collection capabilities can be added to any programming language that lacks it as a built-in feature, and libraries for doing this are available for C and C++ programs. A conservative collector finds and reclaims most, but not all, unreachable memory.
Although the memory manager can recover unreachable memory, it cannot free memory that is still reachable and therefore potentially still useful. Modern memory managers therefore provide techniques for programmers to semantically mark memory with varying levels of usefulness, which correspond to varying levels of "reachability". The memory manager does not free an object that is strongly reachable. An object is strongly reachable if it is reachable either directly by a strong reference or indirectly by a chain of strong references. (A "strong reference" is a reference that, unlike a weak reference, prevents an object from being garbage collected.) To prevent this, the developer is responsible for cleaning up references after use, typically by setting the reference to null once it is no longer needed and, if necessary, by deregistering any event listeners that maintain strong references to the object.
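As an illustration of the strong-versus-weak distinction, the following is a minimal C++ sketch using std::shared_ptr and std::weak_ptr as a stand-in for the reference semantics of the garbage-collected languages discussed here; a weak_ptr observes an object without keeping it alive:

#include <iostream>
#include <memory>

struct Listener { }; // placeholder for some event-listener object

int main()
{
    std::shared_ptr<Listener> strong = std::make_shared<Listener>();
    std::weak_ptr<Listener> weak = strong; // weak reference: does not keep the object alive

    strong.reset(); // drop the last strong reference; the object is destroyed here

    // The weak reference notices that the object is gone instead of prolonging its lifetime.
    std::cout << (weak.expired() ? "object reclaimed" : "object still reachable") << "\n";
    return 0;
}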
In general, automatic memory management is more robust and convenient for developers, as they don't need to implement freeing routines or worry about the sequence in which cleanup is performed or be concerned about whether or not an object is still referenced. It is easier for a programmer to know when a reference is no longer needed than to know when an object is no longer referenced. However, automatic memory management can impose a performance overhead, and it does not eliminate all of the programming errors that cause memory leaks.
RAII, short for Resource Acquisition Is Initialization, is an approach to the problem commonly taken in C++, D, and Ada. It involves associating scoped objects with the acquired resources, and automatically releasing the resources once the objects are out of scope. Unlike garbage collection, RAII has the advantage of knowing when objects exist and when they do not. Compare the following C and C++ examples:
/* C version (requires <stdlib.h> for calloc and free) */
void f(int n)
{
    int* array = calloc(n, sizeof(int)); /* dynamically allocated on the heap */
    for (int i = 0; i < n; i++)          /* do some work with the array */
        array[i] = i;
    free(array);                         /* the memory must be released explicitly */
}
// C++ version (requires <vector>)
void f(int n)
{
    std::vector<int> array(n); // storage is acquired when the object is constructed
    for (int i = 0; i < n; i++) // do some work with the array
        array[i] = i;
}   // array goes out of scope here; its destructor releases the storage automatically
The C version, as implemented in the example, requires explicit deallocation; the array is dynamically allocated (from the heap in most C implementations), and continues to exist until explicitly freed.
The C++ version requires no explicit deallocation; it will always occur automatically as soon as the object "array" goes out of scope, including if an exception is thrown. This avoids some of the overhead of garbage collection schemes. And because object destructors can free resources other than memory, RAII helps to prevent the leaking of input and output resources accessed through a handle, which mark-and-sweep garbage collection does not handle gracefully. These include open files, open windows, user notifications, objects in a graphics drawing library, thread synchronisation primitives such as critical sections, network connections, and connections to the Windows Registry or another database.
However, using RAII correctly is not always easy and has its own pitfalls. For instance, if one is not careful, it is possible to create dangling pointers (or references) by returning data by reference, only to have that data be deleted when its containing object goes out of scope.
D uses a combination of RAII and garbage collection, employing automatic destruction when it is clear that an object cannot be accessed outside its original scope, and garbage collection otherwise.
More modern garbage collection schemes are often based on a notion of reachability – if you don't have a usable reference to the memory in question, it can be collected. Other garbage collection schemes can be based on reference counting, where an object is responsible for keeping track of how many references are pointing to it. If the number goes down to zero, the object is expected to release itself and allow its memory to be reclaimed. The flaw with this model is that it doesn't cope with cyclic references, and this is why nowadays most programmers are prepared to accept the burden of the more costly mark and sweep type of systems.
The following Visual Basic code illustrates the canonical reference-counting memory leak:
Dim A, B
Set A = CreateObject("Some.Thing")
Set B = CreateObject("Some.Thing")
' At this point, the two objects each have one reference,
Set A.member = B
Set B.member = A
' Now they each have two references.
Set A = Nothing ' You could still get out of it...
Set B = Nothing ' And now you've got a memory leak!
End
In practice, this trivial example would be spotted straight away and fixed. In most real examples, the cycle of references spans more than two objects, and is more difficult to detect.
A well-known example of this kind of leak came to prominence with the rise of AJAX programming techniques in web browsers in the lapsed listener problem. JavaScript code which associated a DOM element with an event handler, and failed to remove the reference before exiting, would leak memory (AJAX web pages keep a given DOM alive for a lot longer than traditional web pages, so this leak was much more apparent).
If a program has a memory leak and its memory usage is steadily increasing, there will not usually be an immediate symptom. Every physical system has a finite amount of memory, and if the memory leak is not contained (for example, by restarting the leaking program) it will eventually cause problems.
Most modern consumer desktop operating systems have both main memory which is physically housed in RAM microchips, and secondary storage such as a hard drive. Memory allocation is dynamic – each process gets as much memory as it requests. Active pages are transferred into main memory for fast access; inactive pages are pushed out to secondary storage to make room, as needed. When a single process starts consuming a large amount of memory, it usually occupies more and more of main memory, pushing other programs out to secondary storage – usually significantly slowing performance of the system. Even if the leaking program is terminated, it may take some time for other programs to swap back into main memory, and for performance to return to normal.
When all the memory on a system is exhausted (whether there is virtual memory or only main memory, such as on an embedded system) any attempt to allocate more memory will fail. This usually causes the program attempting to allocate the memory to terminate itself, or to generate a segmentation fault. Some programs are designed to recover from this situation (possibly by falling back on pre-reserved memory). The first program to experience the out-of-memory may or may not be the program that has the memory leak.
Some multi-tasking operating systems have special mechanisms to deal with an out-of-memory condition, such as killing processes at random (which may affect "innocent" processes), or killing the largest process in memory (which presumably is the one causing the problem). Some operating systems have a per-process memory limit, to prevent any one program from hogging all of the memory on the system. The disadvantage to this arrangement is that the operating system sometimes must be re-configured to allow proper operation of programs that legitimately require large amounts of memory, such as those dealing with graphics, video, or scientific calculations.
If the memory leak is in the kernel, the operating system itself will likely fail. Computers without sophisticated memory management, such as embedded systems, may also completely fail from a persistent memory leak.
Publicly accessible systems such as web servers or routers are prone to denial-of-service attacks if an attacker discovers a sequence of operations which can trigger a leak. Such a sequence is known as an exploit.
A "sawtooth" pattern of memory utilization may be an indicator of a memory leak within an application, particularly if the vertical drops coincide with reboots or restarts of that application. Care should be taken though because garbage collection points could also cause such a pattern and would show a healthy usage of the heap.
Note that constantly increasing memory usage is not necessarily evidence of a memory leak. Some applications will store ever increasing amounts of information in memory (e.g. as a cache). If the cache can grow so large as to cause problems, this may be a programming or design error, but is not a memory leak as the information remains nominally in use. In other cases, programs may require an unreasonably large amount of memory because the programmer has assumed memory is always sufficient for a particular task; for example, a graphics file processor might start by reading the entire contents of an image file and storing it all into memory, something that is not viable where a very large image exceeds available memory.
To put it another way, a memory leak arises from a particular kind of programming error, and without access to the program code, someone seeing symptoms can only guess that there "might" be a memory leak. It would be better to use terms such as "constantly increasing memory use" where no such inside knowledge exists.
The following C function deliberately leaks memory by losing the pointer to the allocated memory. The leak can be said to occur as soon as the pointer 'a' goes out of scope, i.e. when function_which_allocates() returns without freeing 'a'.
void function_which_allocates(void) {
    /* allocate some memory and keep the only pointer to it in 'a'; requires <stdlib.h> */
    float *a = malloc(sizeof(float) * 45);

    /* ... code that uses 'a' ... */

    /* return without calling free(a); once 'a' goes out of scope,
       the allocation can no longer be reached or released */
}

int main(void) {
    function_which_allocates();
    /* the memory allocated above is still reserved, but no longer reachable:
       a leak has occurred */
    return 0;
}
|
https://en.wikipedia.org/wiki?curid=19609
|
Molecular orbital
In chemistry, a molecular orbital (MO) is a mathematical function describing the location and wave-like behavior of an electron in a molecule. This function can be used to calculate chemical and physical properties such as the probability of finding an electron in any specific region. The term "orbital" was introduced by Robert S. Mulliken in 1932 as an abbreviation for "one-electron orbital wave function". At an elementary level, it is used to describe the "region" of space in which the function has a significant amplitude. In an isolated atom, the orbital electrons' location is determined by functions called atomic orbitals. When multiple atoms combine chemically into a molecule, the electrons' locations are determined by the molecule as a whole, so the atomic orbitals combine to form molecular orbitals. Molecular orbitals are usually constructed by combining atomic orbitals or hybrid orbitals from each atom of the molecule, or other molecular orbitals from groups of atoms. They can be quantitatively calculated using the Hartree–Fock or self-consistent field (SCF) methods.
Molecular orbitals are of three types: "bonding orbitals" which have an energy lower than the energy of the atomic orbitals which formed them, and thus promote the chemical bonds which hold the molecule together; "antibonding orbitals" which have an energy higher than the energy of their constituent atomic orbitals, and so oppose the bonding of the molecule, and "nonbonding orbitals" which have the same energy as their constituent atomic orbitals and thus have no effect on the bonding of the molecule.
A molecular orbital (MO) can be used to represent the regions in a molecule where an electron occupying that orbital is likely to be found. Molecular orbitals are obtained from the combination of atomic orbitals, which predict the location of an electron in an atom. A molecular orbital can specify the electron configuration of a molecule: the spatial distribution and energy of one (or one pair of) electron(s). Most commonly a MO is represented as a linear combination of atomic orbitals (the LCAO-MO method), especially in qualitative or very approximate usage. They are invaluable in providing a simple model of bonding in molecules, understood through molecular orbital theory.
Most present-day methods in computational chemistry begin by calculating the MOs of the system. A molecular orbital describes the behavior of one electron in the electric field generated by the nuclei and some average distribution of the other electrons. In the case of two electrons occupying the same orbital, the Pauli principle demands that they have opposite spin. Necessarily this is an approximation, and highly accurate descriptions of the molecular electronic wave function do not have orbitals (see configuration interaction).
Molecular orbitals are, in general, delocalized throughout the entire molecule. Moreover, if the molecule has symmetry elements, its nondegenerate molecular orbitals are either symmetric or antisymmetric with respect to any of these symmetries. In other words, application of a symmetry operation S (e.g., a reflection, rotation, or inversion) to molecular orbital ψ results in the molecular orbital being unchanged or reversing its mathematical sign: Sψ = ±ψ. In planar molecules, for example, molecular orbitals are either symmetric (sigma) or antisymmetric (pi) with respect to reflection in the molecular plane. If molecules with degenerate orbital energies are also considered, a more general statement that molecular orbitals form bases for the irreducible representations of the molecule's symmetry group holds. The symmetry properties of molecular orbitals mean that delocalization is an inherent feature of molecular orbital theory and makes it fundamentally different from (and complementary to) valence bond theory, in which bonds are viewed as localized electron pairs, with allowance for resonance to account for delocalization.
In contrast to these symmetry-adapted "canonical" molecular orbitals, localized molecular orbitals can be formed by applying certain mathematical transformations to the canonical orbitals. The advantage of this approach is that the orbitals will correspond more closely to the "bonds" of a molecule as depicted by a Lewis structure. As a disadvantage, the energy levels of these localized orbitals no longer have physical meaning. (The discussion in the rest of this article will focus on canonical molecular orbitals. For further discussions on localized molecular orbitals, see: natural bond orbital and sigma-pi and equivalent-orbital models.)
Molecular orbitals arise from allowed interactions between atomic orbitals, which are allowed if the symmetries (determined from group theory) of the atomic orbitals are compatible with each other. Efficiency of atomic orbital interactions is determined from the overlap (a measure of how well two orbitals constructively interact with one another) between two atomic orbitals, which is significant if the atomic orbitals are close in energy. Finally, the number of molecular orbitals formed must be equal to the number of atomic orbitals in the atoms being combined to form the molecule.
For an imprecise, but qualitatively useful, discussion of the molecular structure, the molecular orbitals can be obtained from the "Linear combination of atomic orbitals molecular orbital method" ansatz. Here, the molecular orbitals are expressed as linear combinations of atomic orbitals.
Molecular orbitals were first introduced by Friedrich Hund and Robert S. Mulliken in 1927 and 1928. The linear combination of atomic orbitals or "LCAO" approximation for molecular orbitals was introduced in 1929 by Sir John Lennard-Jones. His ground-breaking paper showed how to derive the electronic structure of the fluorine and oxygen molecules from quantum principles. This qualitative approach to molecular orbital theory is part of the start of modern quantum chemistry.
Linear combinations of atomic orbitals (LCAO) can be used to estimate the molecular orbitals that are formed upon bonding between the molecule's constituent atoms. Similar to an atomic orbital, a Schrödinger equation, which describes the behavior of an electron, can be constructed for a molecular orbital as well. Linear combinations of atomic orbitals, or the sums and differences of the atomic wavefunctions, provide approximate solutions to the Hartree–Fock equations which correspond to the independent-particle approximation of the molecular Schrödinger equation. For simple diatomic molecules, the wavefunctions obtained are represented mathematically by the equations
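Ψ = c_a ψ_a + c_b ψ_b
Ψ* = c_a ψ_a − c_b ψ_b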
where Ψ and Ψ* are the molecular wavefunctions for the bonding and antibonding molecular orbitals, respectively, ψ_a and ψ_b are the atomic wavefunctions from atoms a and b, respectively, and c_a and c_b are adjustable coefficients. These coefficients can be positive or negative, depending on the energies and symmetries of the individual atomic orbitals. As the two atoms become closer together, their atomic orbitals overlap to produce areas of high electron density, and, as a consequence, molecular orbitals are formed between the two atoms. The atoms are held together by the electrostatic attraction between the positively charged nuclei and the negatively charged electrons occupying bonding molecular orbitals.
When atomic orbitals interact, the resulting molecular orbital can be of three types: bonding, antibonding, or nonbonding.
Bonding MOs: these result from in-phase (constructive) combinations of atomic orbitals; they are lower in energy than the atomic orbitals that combine to produce them and so promote the chemical bonds that hold the molecule together.
Antibonding MOs: these result from out-of-phase (destructive) combinations of atomic orbitals; they are higher in energy than the atomic orbitals that combine to produce them and so oppose the bonding of the molecule.
Nonbonding MOs: these result from atomic orbitals that lack compatible symmetry or appreciable overlap; they have essentially the same energy as the atomic orbitals from which they derive and so have no effect on the bonding of the molecule.
The type of interaction between atomic orbitals can be further categorized by the molecular-orbital symmetry labels σ (sigma), π (pi), δ (delta), φ (phi), γ (gamma) etc. These are the Greek letters corresponding to the atomic orbitals s, p, d, f and g respectively. The number of nodal planes containing the internuclear axis between the atoms concerned is zero for σ MOs, one for π, two for δ, three for φ and four for γ.
A MO with σ symmetry results from the interaction of either two atomic s-orbitals or two atomic pz-orbitals. An MO will have σ-symmetry if the orbital is symmetric with respect to the axis joining the two nuclear centers, the internuclear axis. This means that rotation of the MO about the internuclear axis does not result in a phase change. A σ* orbital, sigma antibonding orbital, also maintains the same phase when rotated about the internuclear axis. The σ* orbital has a nodal plane that is between the nuclei and perpendicular to the internuclear axis.
A MO with π symmetry results from the interaction of either two atomic px orbitals or py orbitals. An MO will have π symmetry if the orbital is asymmetric with respect to rotation about the internuclear axis. This means that rotation of the MO about the internuclear axis will result in a phase change. There is one nodal plane containing the internuclear axis, if real orbitals are considered.
A π* orbital, pi antibonding orbital, will also produce a phase change when rotated about the internuclear axis. The π* orbital also has a second nodal plane between the nuclei.
A MO with δ symmetry results from the interaction of two atomic dxy or dx2-y2 orbitals. Because these molecular orbitals involve low-energy d atomic orbitals, they are seen in transition-metal complexes. A δ bonding orbital has two nodal planes containing the internuclear axis, and a δ* antibonding orbital also has a third nodal plane between the nuclei.
Theoretical chemists have conjectured that higher-order bonds, such as phi bonds corresponding to overlap of f atomic orbitals, are possible. There is as of 2005 only one known example of a molecule purported to contain a phi bond (a U−U bond, in the molecule U2).
For molecules that possess a center of inversion (centrosymmetric molecules) there are additional labels of symmetry that can be applied to molecular orbitals.
Centrosymmetric molecules include homonuclear diatomic molecules such as H2 and N2.
Non-centrosymmetric molecules include heteronuclear diatomic molecules such as HF and CO.
If inversion through the center of symmetry in a molecule results in the same phases for the molecular orbital, then the MO is said to have gerade (g) symmetry, from the German word for even.
If inversion through the center of symmetry in a molecule results in a phase change for the molecular orbital, then the MO is said to have ungerade (u) symmetry, from the German word for odd.
For a bonding MO with σ-symmetry, the orbital is σg (s' + s" is symmetric), while for an antibonding MO with σ-symmetry the orbital is σu, because inversion of s' – s" is antisymmetric.
For a bonding MO with π-symmetry the orbital is πu because inversion through the center of symmetry would produce a sign change (the two p atomic orbitals are in phase with each other but the two lobes have opposite signs), while an antibonding MO with π-symmetry is πg because inversion through the center of symmetry would not produce a sign change (the two p orbitals are antisymmetric by phase).
The qualitative approach of MO analysis uses a molecular orbital diagram to visualize bonding interactions in a molecule. In this type of diagram, the molecular orbitals are represented by horizontal lines; the higher a line the higher the energy of the orbital, and degenerate orbitals are placed on the same level with a space between them. Then, the electrons to be placed in the molecular orbitals are slotted in one by one, keeping in mind the Pauli exclusion principle and Hund's rule of maximum multiplicity (only 2 electrons, having opposite spins, per orbital; place as many unpaired electrons on one energy level as possible before starting to pair them). For more complicated molecules, the wave mechanics approach loses utility in a qualitative understanding of bonding (although is still necessary for a quantitative approach).
Some properties:
The general procedure for constructing a molecular orbital diagram for a reasonably simple molecule can be summarized as follows:
1. Assign a point group to the molecule.
2. Look up the shapes of the SALCs (symmetry-adapted linear combinations of atomic orbitals).
3. Arrange the SALCs of each molecular fragment in increasing order of energy, first noting whether they stem from "s", "p", or "d" orbitals (and put them in the order "s" < "p" < "d"), and then their number of internuclear nodes.
4. Combine SALCs of the same symmetry type from the two fragments, and from N SALCs form N molecular orbitals.
5. Estimate the relative energies of the molecular orbitals from considerations of overlap and relative energies of the parent orbitals, and draw the levels on a molecular orbital energy level diagram (showing the origin of the orbitals).
6. Confirm, correct, and revise this qualitative order by carrying out a molecular orbital calculation by using commercial software.
Molecular orbitals are said to be degenerate if they have the same energy. For example, in the homonuclear diatomic molecules of the first ten elements, the molecular orbitals derived from the px and the py atomic orbitals result in two degenerate bonding orbitals (of low energy) and two degenerate antibonding orbitals (of high energy).
When the energy difference between the atomic orbitals of two atoms is quite large, one atom's orbitals contribute almost entirely to the bonding orbitals, and the other atom's orbitals contribute almost entirely to the antibonding orbitals. Thus, the situation is effectively that one or more electrons have been transferred from one atom to the other. This is called a (mostly) ionic bond.
The bond order, or number of bonds, of a molecule can be determined by combining the number of electrons in bonding and antibonding molecular orbitals. A pair of electrons in a bonding orbital creates a bond, whereas a pair of electrons in an antibonding orbital negates a bond. For example, N2, with eight electrons in bonding orbitals and two electrons in antibonding orbitals, has a bond order of three, which constitutes a triple bond.
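Written out as the electron-counting rule just described, the N2 example is:
bond order = (8 bonding electrons − 2 antibonding electrons) / 2 = 3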
Bond strength is proportional to bond order—a greater amount of bonding produces a more stable bond—and bond length is inversely proportional to it—a stronger bond is shorter.
There are rare exceptions to the requirement of a molecule having a positive bond order. Although Be2 has a bond order of 0 according to MO analysis, there is experimental evidence of a highly unstable Be2 molecule having a bond length of 245 pm and bond energy of 10 kJ/mol.
The highest occupied molecular orbital and lowest unoccupied molecular orbital are often referred to as the HOMO and LUMO, respectively. The difference of the energies of the HOMO and LUMO is called the HOMO-LUMO gap. This notion is often a matter of confusion in the literature and should be considered with caution. Its value is usually located between the fundamental gap (the difference between the ionization potential and the electron affinity) and the optical gap. In addition, the HOMO-LUMO gap can be related to a bulk material band gap or transport gap, which is usually much smaller than the fundamental gap.
Homonuclear diatomic MOs contain equal contributions from each atomic orbital in the basis set. This is shown in the homonuclear diatomic MO diagrams for H2, He2, and Li2, all of which contain symmetric orbitals.
As a simple MO example, consider the electrons in a hydrogen molecule, H2 (see molecular orbital diagram), with the two atoms labelled H' and H". The lowest-energy atomic orbitals, 1s' and 1s", do not transform according to the symmetries of the molecule. However, the following symmetry adapted atomic orbitals do:
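1s' + 1s"   (the symmetric combination)
1s' − 1s"   (the antisymmetric combination)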
The symmetric combination (called a bonding orbital) is lower in energy than the basis orbitals, and the antisymmetric combination (called an antibonding orbital) is higher. Because the H2 molecule has two electrons, they can both go in the bonding orbital, making the system lower in energy (hence more stable) than two free hydrogen atoms. This is called a covalent bond. The bond order is equal to the number of bonding electrons minus the number of antibonding electrons, divided by 2. In this example, there are 2 electrons in the bonding orbital and none in the antibonding orbital; the bond order is 1, and there is a single bond between the two hydrogen atoms.
On the other hand, consider the hypothetical molecule of He2 with the atoms labeled He' and He". As with H2, the lowest energy atomic orbitals are the 1s' and 1s", and do not transform according to the symmetries of the molecule, while the symmetry adapted atomic orbitals do. The symmetric combination—the bonding orbital—is lower in energy than the basis orbitals, and the antisymmetric combination—the antibonding orbital—is higher. Unlike H2, with two valence electrons, He2 has four in its neutral ground state. Two electrons fill the lower-energy bonding orbital, σg(1s), while the remaining two fill the higher-energy antibonding orbital, σu*(1s). Thus, the resulting electron density around the molecule does not support the formation of a bond between the two atoms; without a stable bond holding the atoms together, the molecule would not be expected to exist. Another way of looking at it is that there are two bonding electrons and two antibonding electrons; therefore, the bond order is 0 and no bond exists (the molecule has one bound state supported by the Van der Waals potential).
Dilithium Li2 is formed from the overlap of the 1s and 2s atomic orbitals (the basis set) of two Li atoms. Each Li atom contributes three electrons for bonding interactions, and the six electrons fill the three MOs of lowest energy, σg(1s), σu*(1s), and σg(2s). Using the equation for bond order, it is found that dilithium has a bond order of one, a single bond.
Considering a hypothetical molecule of He2, since the basis set of atomic orbitals is the same as in the case of H2, we find that both the bonding and antibonding orbitals are filled, so there is no energy advantage to the pair. HeH would have a slight energy advantage, but not as much as H2 + 2 He, so the molecule is very unstable and exists only briefly before decomposing into hydrogen and helium. In general, we find that atoms such as He that have full energy shells rarely bond with other atoms. Except for short-lived Van der Waals complexes, there are very few noble gas compounds known.
While MOs for homonuclear diatomic molecules contain equal contributions from each interacting atomic orbital, MOs for heteronuclear diatomics contain different atomic orbital contributions. Orbital interactions to produce bonding or antibonding orbitals in heteronuclear diatomics occur if there is sufficient overlap between atomic orbitals as determined by their symmetries and similarity in orbital energies.
In hydrogen fluoride HF overlap between the H 1s and F 2s orbitals is allowed by symmetry but the difference in energy between the two atomic orbitals prevents them from interacting to create a molecular orbital. Overlap between the H 1s and F 2pz orbitals is also symmetry allowed, and these two atomic orbitals have a small energy separation. Thus, they interact, leading to creation of σ and σ* MOs and a molecule with a bond order of 1. Since HF is a non-centrosymmetric molecule, the symmetry labels g and u do not apply to its molecular orbitals.
To obtain quantitative values for the molecular energy levels, one needs to have molecular orbitals that are such that the configuration interaction (CI) expansion converges fast towards the full CI limit. The most common method to obtain such functions is the Hartree–Fock method, which expresses the molecular orbitals as eigenfunctions of the Fock operator. One usually solves this problem by expanding the molecular orbitals as linear combinations of Gaussian functions centered on the atomic nuclei (see linear combination of atomic orbitals and basis set (chemistry)). The equation for the coefficients of these linear combinations is a generalized eigenvalue equation known as the Roothaan equations, which are in fact a particular representation of the Hartree–Fock equation. There are a number of programs in which quantum chemical calculations of MOs can be performed, including Spartan and HyperChem.
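The Roothaan step itself is a generalized eigenvalue problem, F C = S C ε, where F is the Fock matrix, S the overlap matrix, C the LCAO coefficients, and ε the orbital energies. As a minimal numerical sketch (the 2×2 matrices below are illustrative placeholders for a two-orbital minimal basis, not converged Hartree-Fock data from any program), such an equation can be solved with SciPy:

    import numpy as np
    from scipy.linalg import eigh

    # Illustrative 2x2 matrices for a two-orbital minimal basis (placeholder values)
    F = np.array([[-1.0, -0.8],
                  [-0.8, -1.0]])   # Fock matrix
    S = np.array([[ 1.0,  0.6],
                  [ 0.6,  1.0]])   # overlap matrix

    # Solve the generalized eigenvalue problem F C = S C eps
    orbital_energies, coefficients = eigh(F, S)
    print(orbital_energies)        # MO energies; the lowest is the bonding combination
    print(coefficients[:, 0])      # LCAO coefficients of the lowest MO

In a real Hartree-Fock calculation this step is repeated self-consistently, since the Fock matrix itself depends on the coefficients.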
Simple accounts often suggest that experimental molecular orbital energies can be obtained by the methods of ultra-violet photoelectron spectroscopy for valence orbitals and X-ray photoelectron spectroscopy for core orbitals. This, however, is incorrect as these experiments measure the ionization energy, the difference in energy between the molecule and one of the ions resulting from the removal of one electron. Ionization energies are linked approximately to orbital energies by Koopmans' theorem. While the agreement between these two values can be close for some molecules, it can be very poor in other cases.
|
https://en.wikipedia.org/wiki?curid=19614
|
Systems Concepts
Systems Concepts (now the SC Group) is a company co-founded by Stewart Nelson and Mike Levitt focused on making hardware products related to the DEC PDP-10 series of computers. One of its major products was the SA-10, an interface which allowed PDP-10s to be connected to disk and tape drives designed for use with the channel interfaces of IBM mainframes.
Later, Systems Concepts attempted to produce a compatible replacement for the DEC PDP-10 computers. "Mars" was the code name for a family of PDP-10-compatible computers built by Systems Concepts, including the initial SC-30M, the smaller SC-25, and the slower SC-20. These machines were marvels of engineering design; although not much slower than the unique Foonly F-1, they were physically smaller and consumed less power than the much slower DEC KS10 or Foonly F-2, F-3, or F-4 machines. They were also completely compatible with the DEC KL10, and ran all KL10 binaries (including the operating system) with no modifications, at roughly 2-3 times the speed of a KL10.
When DEC cancelled the Jupiter project in 1983, Systems Concepts hoped to sell their machine to customers with a software investment in PDP-10s. Their spring 1984 announcement generated excitement in the PDP-10 world. TOPS-10 was running on the Mars by the summer of 1984, and TOPS-20 by early fall. However, people at Systems Concepts were better at designing machines than at mass-producing or selling them; the company continually improved the design, but lost credibility as delivery dates continued to slip. They also overpriced the machines, believing they were competing with the KL10 and VAX 8600 rather than with startups such as Sun Microsystems, which were building workstations of comparable power at a fraction of the price. By the time SC shipped the first SC-30M to Stanford University in late 1985, most customers had already abandoned the PDP-10, usually for VMS or Unix systems. Nevertheless, a number were purchased by CompuServe, which depended on PDP-10s to run its online service and was eager to move to newer but fully compatible systems. CompuServe's demand for the computers outpaced Systems Concepts' ability to produce them, so CompuServe licensed the design and built SC-designed computers itself.
Other companies that purchased SC-30 machines included Telmar, Reynolds and Reynolds, and the Danish National Railway.
Peter Samson was director of marketing and program development.
SC later designed the SC-40, released in 1993, a faster follow-on to the SC-30M and SC-25. It can perform up to 8 times as fast as a DEC KL-10, and it also supports more physical memory, a larger virtual address space, and more modern input/output devices. These systems were also used at CompuServe.
In 1985, the company contracted to engineer and produce a PC-based cellular automata system for Tommaso Toffoli of MIT, called the CAM-6. The CAM-6 was a 2-card "sandwich" that plugged into an IBM PC slot and ran cellular automata rules at a 60 Hz update rate. Toffoli provided Forth-based software to operate the card. The production problems that plagued the company's computer products were demonstrated here as well, and only a few boards were produced.
Systems Concepts remains in business, having changed its name to the SC Group when it moved from California to Nevada.
|
https://en.wikipedia.org/wiki?curid=19615
|
Margaret Mead
Margaret Mead (December 16, 1901 – November 15, 1978) was an American cultural anthropologist who featured frequently as an author and speaker in the mass media during the 1960s and 1970s. She earned her bachelor's degree at Barnard College in New York City and her MA and PhD degrees from Columbia University. Mead served as President of the American Association for the Advancement of Science in 1975.
Mead was a communicator of anthropology in modern American and Western culture and was often controversial as an academic. Her reports detailing the attitudes towards sex in South Pacific and Southeast Asian traditional cultures influenced the 1960s sexual revolution. She was a proponent of broadening sexual conventions within the context of Western cultural traditions.
Margaret Mead, the first of five children, was born in Philadelphia, but raised in nearby Doylestown, Pennsylvania. Her father, Edward Sherwood Mead, was a professor of finance at the Wharton School of the University of Pennsylvania, and her mother, Emily (née Fogg) Mead, was a sociologist who studied Italian immigrants. Her sister Katharine (1906–1907) died at the age of nine months. This was a traumatic event for Mead, who had named the girl, and thoughts of her lost sister permeated her daydreams for many years. Her family moved frequently, so her early education was directed by her grandmother until, at age 11, she was enrolled by her family at Buckingham Friends School in Lahaska, Pennsylvania. Her family owned the Longland farm from 1912 to 1926. Born into a family of various religious outlooks, she searched for a form of religion that gave an expression of the faith that she had been formally acquainted with, Christianity. In doing so, she found the rituals of the Episcopal Church to fit the expression of religion she was seeking. Mead studied one year, 1919, at DePauw University, then transferred to Barnard College where she found anthropology mired in "the stupid underbrush of nineteenth century arguments."
Mead earned her bachelor's degree from Barnard in 1923, then began studying with professor Franz Boas and Ruth Benedict at Columbia University, earning her master's degree in 1924. Mead set out in 1925 to do fieldwork in Samoa. In 1926, she joined the American Museum of Natural History, New York City, as assistant curator. She received her PhD from Columbia University in 1929.
Before departing for Samoa, Mead had a short affair with the linguist Edward Sapir, a close friend of her instructor Ruth Benedict. But Sapir's conservative ideas about marriage and the woman's role were unacceptable to Mead, and as Mead left to do field work in Samoa the two separated permanently. Mead received news of Sapir's remarriage while living in Samoa, where, on a beach, she later burned their correspondence.
Mead was married three times. After a six-year engagement, she married her first husband (1923–1928), American Luther Cressman, a theology student at the time who eventually became an anthropologist. Between 1925 and 1926 she was in Samoa; on the boat returning home she met Reo Fortune, a New Zealander headed to Cambridge, England, to study psychology. They were married in 1928, after Mead's divorce from Cressman. Mead dismissively characterized her union with her first husband as "my student marriage" in her 1972 autobiography "Blackberry Winter", a sobriquet with which Cressman took vigorous issue. Mead's third and longest-lasting marriage (1936–1950) was to the British anthropologist Gregory Bateson, with whom she had a daughter, Mary Catherine Bateson, who would also become an anthropologist.
Mead's pediatrician was Benjamin Spock, whose subsequent writings on child rearing incorporated some of Mead's own practices and beliefs acquired from her ethnological field observations which she shared with him; in particular, breastfeeding on the baby's demand rather than a schedule. She readily acknowledged that Gregory Bateson was the husband she loved the most. She was devastated when he left her, and she remained his loving friend ever after, keeping his photograph by her bedside wherever she traveled, including beside her hospital deathbed.
Mead also had an exceptionally close relationship with Ruth Benedict, one of her instructors. In her memoir about her parents, "With a Daughter's Eye", Mary Catherine Bateson implies that the relationship between Benedict and Mead was partly sexual. Mead never openly identified herself as lesbian or bisexual. In her writings, she proposed that it is to be expected that an individual's sexual orientation may evolve throughout life.
She spent her last years in a close personal and professional collaboration with anthropologist Rhoda Metraux, with whom she lived from 1955 until her death in 1978. Letters between the two published in 2006 with the permission of Mead's daughter clearly express a romantic relationship.
Mead had two sisters and a brother, Elizabeth, Priscilla, and Richard. Elizabeth Mead (1909–1983), an artist and teacher, married cartoonist William Steig, and Priscilla Mead (1911–1959) married author Leo Rosten. Mead's brother, Richard, was a professor. Mead was also the aunt of Jeremy Steig.
During World War II, Mead served as executive secretary of the National Research Council's Committee on Food Habits. She served as curator of ethnology at the American Museum of Natural History from 1946 to 1969. She was elected a Fellow of the American Academy of Arts and Sciences in 1948. She taught at The New School and Columbia University, where she was an adjunct professor from 1954 to 1978 and was a professor of anthropology and chair of the Division of Social Sciences at Fordham University's Lincoln Center campus from 1968 to 1970, founding their anthropology department. In 1970, she joined the faculty of the University of Rhode Island as a Distinguished Professor of Sociology and Anthropology.
Following Ruth Benedict's example, Mead focused her research on problems of child rearing, personality, and culture. She served as president of the Society for Applied Anthropology in 1950 and of the American Anthropological Association in 1960. In the mid-1960s, Mead joined forces with communications theorist Rudolf Modley, jointly establishing an organization called Glyphs Inc., whose goal was to create a universal graphic symbol language understood by members of any culture, no matter how primitive. In the 1960s, Mead served as the Vice President of the New York Academy of Sciences. She held various positions in the American Association for the Advancement of Science, notably president in 1975 and chair of the executive committee of the board of directors in 1976. She was a recognizable figure in academia, usually wearing a distinctive cape and carrying a walking-stick.
Mead was featured on two record albums published by Folkways Records. The first, released in 1959, "An Interview With Margaret Mead," explored the topics of morals and anthropology. In 1971, she was included in a compilation of talks by prominent women, "But the Women Rose, Vol.2: Voices of Women in American History".
She is credited with the term "semiotics", making it a noun.
In later life, Mead was a mentor to many young anthropologists and sociologists, including Jean Houston.
In 1976, Mead was a key participant at UN Habitat I, the first UN forum on human settlements.
Mead died of pancreatic cancer on November 15, 1978, and is buried at Trinity Episcopal Church Cemetery, Buckingham, Pennsylvania.
In the foreword to "Coming of Age in Samoa", Mead's advisor, Franz Boas, wrote of its significance:
Courtesy, modesty, good manners, conformity to definite ethical standards are universal, but what constitutes courtesy, modesty, good manners, and definite ethical standards is not universal. It is instructive to know that standards differ in the most unexpected ways.
Mead's findings suggested that the community ignores both boys and girls until they are about 15 or 16. Before then, children have no social standing within the community. Mead also found that marriage is regarded as a social and economic arrangement where wealth, rank, and job skills of the husband and wife are taken into consideration.
In 1983, five years after Mead had died, New Zealand anthropologist Derek Freeman published "Margaret Mead and Samoa: The Making and Unmaking of an Anthropological Myth", in which he challenged Mead's major findings about sexuality in Samoan society. Freeman's book was controversial in its turn: later in 1983 a special session of Mead's supporters in the American Anthropological Association (to which Freeman was not invited) declared it to be "poorly written, unscientific, irresponsible and misleading."
In 1999, Freeman published another book, "The Fateful Hoaxing of Margaret Mead: A Historical Analysis of Her Samoan Research", including previously unavailable material. In his obituary in "The New York Times", John Shaw stated that his thesis, though upsetting many, had by the time of his death generally gained widespread acceptance. Recent work has nonetheless challenged his critique. A frequent criticism of Freeman is that he regularly misrepresented Mead's research and views. In a 2009 evaluation of the debate, anthropologist Paul Shankman concluded that:
There is now a large body of criticism of Freeman's work from a number of perspectives in which Mead, Samoa, and anthropology appear in a very different light than they do in Freeman's work. Indeed, the immense significance that Freeman gave his critique looks like 'much ado about nothing' to many of his critics.
While nurture-oriented anthropologists are more inclined to agree with Mead's conclusions, there are other non-anthropologists who take a nature-oriented approach following Freeman's lead, among them Harvard psychologist Steven Pinker, biologist Richard Dawkins, evolutionary psychologist David Buss, science writer Matt Ridley and classicist Mary Lefkowitz. The philosopher Peter Singer has also criticized Mead in his book "A Darwinian Left", where he states that "Freeman compiles a convincing case that Mead had misunderstood Samoan customs".
In 1996, author Martin Orans examined Mead's notes preserved at the Library of Congress, and credits her for leaving all of her recorded data available to the general public. Orans points out that Freeman's basic criticism, that Mead was duped by ceremonial virgin Fa'apua'a Fa'amu (who later swore to Freeman that she had played a joke on Mead), was equivocal for several reasons: first, Mead was well aware of the forms and frequency of Samoan joking; second, she provided a careful account of the sexual restrictions on ceremonial virgins that corresponds to Fa'apua'a Fa'amu's account to Freeman; and third, Mead's notes make clear that she had reached her conclusions about Samoan sexuality before meeting Fa'apua'a Fa'amu. Orans points out that Mead's data support several different conclusions, and that Mead's conclusions hinge on an interpretive, rather than positivist, approach to culture. Orans goes on to point out, concerning Mead's work elsewhere, that her own notes do not support her published conclusive claims. However, there are still those who claim Mead was hoaxed, including Peter Singer and zoologist David Attenborough. Evaluating Mead's work in Samoa from a positivist stance, Martin Orans' assessment of the controversy was that Mead did not formulate her research agenda in scientific terms, and that "her work may properly be damned with the harshest scientific criticism of all, that it is 'not even wrong'."
The Intercollegiate Review, published by the Intercollegiate Studies Institute, which promotes conservative thought on college campuses, listed the book as No. 1 on its "The Fifty Worst Books of the Century" list.
Another influential book by Mead was "Sex and Temperament in Three Primitive Societies". This became a major cornerstone of the feminist movement, since it claimed that females are dominant in the Tchambuli (now spelled Chambri) Lake region of the Sepik basin of Papua New Guinea (in the western Pacific) without causing any special problems. The lack of male dominance may have been the result of the Australian administration's outlawing of warfare. According to contemporary research, males are dominant throughout Melanesia (although some believe that female witches have special powers). Others have argued that there is still much cultural variation throughout Melanesia, and especially in the large island of New Guinea. Moreover, anthropologists often overlook the significance of networks of political influence among females. The formal male-dominated institutions typical of some areas of high population density were not, for example, present in the same way in Oksapmin, West Sepik Province, a more sparsely populated area. Cultural patterns there were different from, say, Mt. Hagen. They were closer to those described by Mead.
Mead stated that the Arapesh people, also in the Sepik, were pacifists, although she noted that they do on occasion engage in warfare. Her observations about the sharing of garden plots among the Arapesh, the egalitarian emphasis in child rearing, and her documentation of predominantly peaceful relations among relatives are very different from the "big man" displays of dominance that were documented in more stratified New Guinea cultures—e.g. by Andrew Strathern. They are a different cultural pattern.
In brief, her comparative study revealed a full range of contrasting gender roles:
Deborah Gewertz (1981) studied the Chambri (called Tchambuli by Mead) in 1974–1975 and found no evidence of such gender roles. Gewertz states that as far back in history as there is evidence (1850s) Chambri men dominated over the women, controlled their produce and made all important political decisions. In later years there has been a diligent search for societies in which women dominate men, or for signs of such past societies, but none have been found (Bamberger, 1974). Jessie Bernard criticised Mead's interpretations of her findings, arguing that Mead was biased in her descriptions due to use of subjective descriptions. Bernard argues that while Mead claimed the Mundugumor women were temperamentally identical to men, her reports indicate that there were in fact sex differences; Mundugumor women hazed each other less than men hazed each other, they made efforts to make themselves physically desirable to others, married women had fewer affairs than married men, women were not taught to use weapons, women were used less as hostages and Mundugumor men engaged in physical fights more often than women. Conversely, the Arapesh were also described as equal in temperament, yet Bernard states that Mead's own writings indicate that men fought physically over women, yet women did not fight physically over men, despite the two being supposedly equal in temperament. The Arapesh also seemed to have some conception of sex differences in temperament, as they would sometimes describe a woman as acting like a particularly quarrelsome man. Bernard also questioned if the behaviour of men and women in these societies differed as much from Western behaviour as Mead claimed it did, arguing that some of her descriptions could be equally descriptive of a Western context.
Despite its feminist roots, Mead's work on women and men was also criticized by Betty Friedan on the basis that it contributes to infantilizing women.
In 1926, there was much debate about race and intelligence. Mead felt the methodologies involved in the experimental psychology research supporting arguments of racial superiority in intelligence were substantially flawed. In "The Methodology of Racial Testing: Its Significance for Sociology" Mead proposes that there are three problems with testing for racial differences in intelligence. First, there are concerns with the ability to validly equate one's test score with what Mead refers to as "racial admixture" or how much "Negro or Indian blood" an individual possesses. She also considers whether this information is relevant when interpreting IQ scores. Mead remarks that a genealogical method could be considered valid if it could be "subjected to extensive verification". In addition, the experiment would need a steady control group to establish whether racial admixture was actually affecting intelligence scores. Next, Mead argues that it is difficult to measure the effect that social status has on the results of a person's intelligence test. By this she meant that environment (i.e., family structure, socioeconomic status, exposure to language) has too much influence on an individual to attribute inferior scores solely to a physical characteristic such as race. Lastly, Mead adds that language barriers sometimes create the biggest problem of all. Similarly, Stephen J. Gould finds three main problems with intelligence testing, in his 1981 book "The Mismeasure of Man", that relate to Mead's view of the problem of determining whether there are indeed racial differences in intelligence.
In 1929 Mead and Fortune visited Manus, now the northern-most province of Papua New Guinea, travelling there by boat from Rabaul. She amply describes her stay there in her autobiography, and it is mentioned in her 1984 biography by Jane Howard. On Manus she studied the Manus people of the south coast village of Peri. "Over the next five decades Mead would come back oftener to Peri than to any other field site of her career."
Mead has been credited with persuading the American Jewish Committee to sponsor a project to study European Jewish villages, "shtetls", in which a team of researchers would conduct mass interviews with Jewish immigrants living in New York City. The resulting book, widely cited for decades, allegedly created the Jewish mother stereotype, a mother intensely loving but controlling to the point of smothering, and engendering guilt in her children through the suffering she professed to undertake for their sakes.
Mead worked for the RAND Corporation, a U.S. Air Force military funded private research organization, from 1948 to 1950 to study Russian culture and attitudes toward authority.
As an Anglican Christian, Mead played a considerable part in the drafting of the 1979 American Episcopal Book of Common Prayer.
After her death, Mead's Samoan research was criticized by anthropologist Derek Freeman, who published a book that argued against many of Mead's conclusions. Freeman argued that Mead had misunderstood Samoan culture when she argued that Samoan culture did not place many restrictions on youths' sexual explorations. Freeman argued instead that Samoan culture prized female chastity and virginity and that Mead had been misled by her female Samoan informants. Freeman's critique was met with a considerable backlash and harsh criticism from the anthropology community, whereas it was received enthusiastically by communities of scientists who believed that sexual mores were more or less universal across cultures. Some anthropologists who studied Samoan culture argued in favor of Freeman's findings and contradicted those of Mead, whereas others argued that Freeman's work did not invalidate Mead's work because Samoan culture had been changed by the integration of Christianity in the decades between Mead's and Freeman's fieldwork periods. While Mead was careful to shield the identity of all her subjects for confidentiality, Freeman was able to find and interview one of her original participants, and Freeman reported that she admitted to having wilfully misled Mead. She said that she and her friends were having fun with Mead and telling her stories.
On the whole, anthropologists have rejected the notion that Mead's conclusions rested on the validity of a single interview with a single person, finding instead that Mead based her conclusions on the sum of her observations and interviews during her time in Samoa, and that the status of the single interview did not falsify her work. Some anthropologists have however maintained that even though Freeman's critique was invalid, Mead's study was not sufficiently scientifically rigorous to support the conclusions she drew.
In her 2015 book "Galileo's Middle Finger", Alice Dreger argues that Freeman's accusations were unfounded and misleading. A detailed review of the controversy by Paul Shankman, published by the University of Wisconsin Press in 2009, supports the contention that Mead's research was essentially correct, and concludes that Freeman cherry-picked his data and misrepresented both Mead and Samoan culture.
In 1976, Mead was inducted into the National Women's Hall of Fame.
On January 19, 1979, President Jimmy Carter announced that he was awarding the Presidential Medal of Freedom posthumously to Mead. UN Ambassador Andrew Young presented the award to Mead's daughter at a special program honoring Mead's contributions, sponsored by the American Museum of Natural History, where she spent many years of her career. The citation read:
In 1979, the Supersisters trading card set was produced and distributed; one of the cards featured Mead's name and picture.
The 2014 novel "Euphoria" by Lily King is a fictionalized account of Mead's love/marital relationships with fellow anthropologists Reo Fortune and Gregory Bateson in pre-WWII New Guinea.
In addition, there are several schools named after Mead in the United States: a junior high school in Elk Grove Village, Illinois, an elementary school in Sammamish, Washington and another in Sheepshead Bay, Brooklyn, New York.
The USPS issued a stamp of face value 32¢ on May 28, 1998, as part of the Celebrate the Century stamp sheet series.
In the 1967 musical "Hair", her name is given to a transvestite 'tourist' who disturbs the show with the song 'My Conviction'.
Note: See also "Margaret Mead: The Complete Bibliography 1925–1975", Joan Gordan, ed., The Hague: Mouton.
|
https://en.wikipedia.org/wiki?curid=19617
|
Michael Palin
Sir Michael Edward Palin (; born 5 May 1943) is an English comedian, actor, writer and television presenter. He was a member of the comedy group Monty Python. Since 1980 he has made a number of travel documentaries.
Palin wrote most of his comedic material with fellow Python member Terry Jones. Before Monty Python, they had worked on other shows such as the "Ken Dodd Show", "The Frost Report", and "Do Not Adjust Your Set". Palin appeared in some of the most famous Python sketches, including "Argument Clinic", "Dead Parrot sketch", "The Lumberjack Song", "The Spanish Inquisition", "Bicycle Repair Man" and "The Fish-Slapping Dance". He also regularly played a Gumby.
Palin continued to work with Jones after Python, co-writing "Ripping Yarns". He has also appeared in several films directed by fellow Python Terry Gilliam and made notable appearances in other films such as "A Fish Called Wanda" (1988), for which he won the BAFTA Award for Best Actor in a Supporting Role. In a 2005 poll to find "The Comedians' Comedian", he was voted the 30th favourite by fellow comedians and comedy insiders. After Python, he began a new career as a travel writer and travel documentarian. His journeys have taken him across the world, including the North and South Poles, the Sahara Desert, the Himalayas, Eastern Europe, and Brazil. In 2018 he visited North Korea, documenting his visit to the isolated country in a series broadcast on Channel 5.
Having been awarded a CBE for services to television in the 2000 New Year Honours, Palin received a knighthood in the 2019 New Year Honours for services to travel, culture and geography. From 2009 to 2012 he was the president of the Royal Geographical Society. On 12 May 2013, Palin was made a BAFTA fellow, the highest honour conferred by the organisation.
Palin was born in Ranmoor, Sheffield, the second child and only son of Edward Moreton Palin (1900–1977) and Mary Rachel Lockhart (née Ovey; 1903–1990). His father was a Shrewsbury and Cambridge-educated engineer working for a steel firm. His maternal grandfather, Lieutenant-Colonel Richard Lockhart Ovey, DSO, was High Sheriff of Oxfordshire in 1927. He was educated at Birkdale and Shrewsbury School. His sister Angela was nine years older than he was. Despite the age gap the two had a close relationship until her suicide in 1987. He has ancestral roots in Letterkenny, County Donegal.
When he was five years old, Palin had his first acting experience at Birkdale playing Martha Cratchit in a school performance of "A Christmas Carol". At the age of 10, Palin, still interested in acting, made a comedy monologue and read a Shakespeare play to his mother while playing all the parts. After leaving Shrewsbury in 1962, he went on to read modern history at Brasenose College, Oxford. With fellow student Robert Hewison he performed and wrote, for the first time, comedy material at a university Christmas party. Terry Jones, also a student in Oxford, saw that performance and began writing together with Hewison and Palin. In the same year Palin joined the Brightside and Carbrook Co-operative Society Players and first gained fame when he won an acting award at a Co-op drama festival. He also performed and wrote in the Oxford Revue (called the Et ceteras) with Jones.
In 1966, Palin married Helen Gibbins, whom he first met in 1959 on holiday in Southwold in Suffolk. This meeting was later fictionalised in Palin's teleplay for the 1987 BBC television drama "East of Ipswich". The couple have three children and four grandchildren. Daughter Rachel is a BBC TV director. Son William is Director of Conservation at the Old Royal Naval College, Greenwich, London and oversaw the 2018–19 restoration of the Painted Hall. A photograph of William as a baby briefly appeared in "Monty Python and the Holy Grail" as "Sir Not-appearing-in-this-film". His nephew is the theatre designer Jeremy Herbert. Palin is an agnostic.
After finishing university in 1965 Palin became a presenter on a comedy pop show called "Now!" for the television contractor Television Wales and the West. At the same time Palin was contacted by Jones, who had left university a year earlier, for assistance in writing a theatrical documentary about sex through the ages. Although this project was eventually abandoned, it brought Palin and Jones together as a writing duo and led them to write comedy for various BBC programmes, such as "The Ken Dodd Show", "The Billy Cotton Bandshow", and "The Illustrated Weekly Hudd". They collaborated in writing lyrics for an album by Barry Booth called "Diversions". They were also in the team of writers working for "The Frost Report", whose other members included Frank Muir, Barry Cryer, Marty Feldman, Ronnie Barker, Ronnie Corbett, Dick Vosburgh and future Monty Python members Graham Chapman, John Cleese and Eric Idle.
Although the members of Monty Python had already encountered each other over the years, "The Frost Report" was the first time all the British members of Monty Python (its sixth member, Terry Gilliam, was at that time an American citizen) worked together. During the run of "The Frost Report" the Palin/Jones team contributed material to two shows starring John Bird: "The Late Show" and "A Series of Birds". For "A Series of Birds" the Palin/Jones team had their first experience of writing narrative instead of the short sketches they were accustomed to conceiving.
Following "The Frost Report" the Palin/Jones team worked both as actors and writers on the show "Twice a Fortnight" with Graeme Garden, Bill Oddie and Jonathan Lynn, and the successful children's comedy show "Do Not Adjust Your Set" with Idle and David Jason. The show also featured musical numbers by the Bonzo Dog Doo-Dah Band, including future Monty Python musical collaborator Neil Innes. The animations for "Do Not Adjust Your Set" were made by Terry Gilliam. Eager to work with Palin sans Jones, Cleese later asked him to perform in "How to Irritate People" together with Chapman and Tim Brooke-Taylor. The Palin/Jones team were reunited for "The Complete and Utter History of Britain".
On the strength of their work on "The Frost Report" and other programmes, Cleese and Chapman had been offered a show by the BBC, but Cleese was reluctant to do a two-man show for various reasons, among them Chapman's reputedly difficult personality. During this period Cleese contacted Palin about doing the show that would ultimately become "Monty Python's Flying Circus". At the same time the success of "Do Not Adjust Your Set" had led Palin, Jones, Idle and Gilliam to be offered their own series and, while it was still in production, Palin agreed to Cleese's proposal and brought along Idle, Jones and Gilliam. Thus the formation of the Monty Python troupe has been referred to as a result of Cleese's desire to work with Palin and the chance circumstances that brought the other four members into the fold.
Palin played various roles in "Monty Python", which ranged from manic enthusiasm (such as the lumberjack of "The Lumberjack Song", or Herbert Anchovy, host of the game show "Blackmail") to unflappable calmness (such as the Dead parrot vendor or Cheese Shop proprietor). As a straight man he was often a foil to the rising ire of characters portrayed by Cleese. He also played timid, socially inept characters such as Arthur Putey, the man who sits quietly as a marriage counsellor (Eric Idle) makes love to his wife (Carol Cleveland), and Mr Anchovy, a chartered accountant who wants to become a lion tamer. He appeared as the "It's" man (a Robinson Crusoe-type castaway with torn clothes and a long, unkempt beard) at the beginning of most episodes. He also frequently played a Gumby, a character Palin said "had these moronic views that were expressed with extraordinary force."
Palin frequently co-wrote sketches with Terry Jones and also initiated the "Spanish Inquisition sketch", which included the catchphrase "Nobody expects the Spanish Inquisition!" He also composed songs with Jones including "The Lumberjack Song", "Every Sperm Is Sacred" and "Spam". His solo musical compositions included "Decomposing Composers" and "Finland".
After the "Monty Python" television series ended in 1974, the Palin/Jones team worked on "Ripping Yarns", an intermittent television comedy series broadcast over three years from 1976. They had earlier collaborated on the play "Secrets" from the BBC series "Black and Blue" in 1973. He starred as Dennis the Peasant in Terry Gilliam's 1977 film "Jabberwocky". Palin also appeared in "All You Need Is Cash" (1978) as Eric Manchester (based on Derek Taylor), the press agent for the Rutles. In 1980, Palin co-wrote "Time Bandits" with Terry Gilliam. He also acted in the film.
In 1982, Palin wrote and starred in "The Missionary", co-starring Maggie Smith. In it, he plays the Reverend Charles Fortescue, who is recalled from Africa to aid prostitutes. He co-starred with Maggie Smith again in the 1984 comedy film "A Private Function". In 1984, he reunited with Terry Gilliam to appear in "Brazil". He appeared in the comedy film "A Fish Called Wanda", for which he won the BAFTA Award for Best Actor in a Supporting Role. Cleese reunited the main cast almost a decade later to make "Fierce Creatures". After filming for "Fierce Creatures" finished, Palin went on a travel journey for a BBC documentary and, returning a year later, found that the end of "Fierce Creatures" had failed at test screenings and had to be reshot.
After "Fierce Creatures" and a small part in "The Wind in the Willows", a film directed by and starring Terry Jones, it would be twenty more years until Palin's next film role, as Soviet politician Vyacheslav Molotov in the 2017 satirical black comedy "The Death of Stalin". Palin also appeared with John Cleese in his documentary, "The Human Face". Palin was cast in a supporting role in the Tom Hanks and Meg Ryan romantic comedy "You've Got Mail", but his role was eventually cut entirely.
Palin has also appeared in serious drama. In 1991 he appeared in "American Friends", a film he wrote based upon a real event in the life of his great-grandfather, a fellow at St John's College, Oxford. In that same year he also played the part of a headmaster in Alan Bleasdale's Channel 4 drama series "GBH". In 1994, Palin narrated the English language audiobook version of "Esio Trot" by children's author Roald Dahl.
In 1997, Palin had a small cameo role in Australian soap opera "Home and Away". He played an English surfer with a fear of sharks, who interrupts a conversation between two main characters to ask whether there were any sharks in the sea. This was filmed while he was in Australia for the "Full Circle" series, with a segment about the filming of the role featuring in the series. In November 2005, he appeared in the "John Peel's Record Box" documentary.
In 2013, Palin appeared in a First World War drama titled "The Wipers Times" written by Ian Hislop and Nick Newman. At the Cannes Film Festival in 2016, it was announced that Palin was set to star alongside Adam Driver in Terry Gilliam's "The Man Who Killed Don Quixote". Palin, however, dropped out of the film after it ran into a financial problem.
While speaking at the Edinburgh International Film Festival, Palin announced that he was presenting the two-part documentary "Michael Palin in North Korea" to be broadcast on the British television network Channel 5. The documentary was broadcast in September 2018, in two one-hour segments on Channel 5 in the UK and in a single two-hour programme on National Geographic in the United States. It was broadcast again by Channel 5, in a single two-hour programme in December 2018.
In July 2019, Palin performed a one-man stage show at the Torch Theatre, Milford Haven, Wales, about the loss of HMS "Erebus" during the third Franklin expedition, which is recounted in his book "Erebus: The Story of a Ship".
Palin assisted Campaign for Better Transport and others with campaigns on sustainable transport, particularly those relating to urban areas, and has been president of the campaign since 1986. On 2 January 2011, he became the first person to sign the UK-based Campaign for Better Transport's Fair Fares Now campaign. In July 2015, he signed an open letter and gave an interview to support "a strong BBC at the centre of British life" at a time the government was reviewing the corporation's size and activities.
In July 2010, Palin sent a message of support for the Dongria Kondh tribe of India, who are resisting mining on their land by the company Vedanta Resources. Palin said, "I've been to the Nyamgiri Hills in Orissa and seen the forces of money and power that Vedanta Resources have arrayed against a people who have occupied their land for thousands of years, who husband the forest sustainably and make no great demands on the state or the government. The tribe I visited simply want to carry on living in the villages that they and their ancestors have always lived in".
Palin's first travel documentary was episode 4 of the 1980 BBC Television series "Great Railway Journeys of the World", entitled "Confessions of a Trainspotter". Throughout the hour-long show, Palin humorously reminisces about his childhood hobby of train spotting while he travels throughout the UK by train from London to the Kyle of Lochalsh, via Manchester, York, Newcastle upon Tyne, Edinburgh and Inverness. He rides vintage railway lines and trains including the "Flying Scotsman". At the Kyle of Lochalsh, Palin bought the station's long metal platform sign and is seen lugging it back to London with him.
In 1994, Palin travelled through Ireland for the same series, in an episode entitled "Derry to Kerry". In a quest for family roots, he attempted to trace his great-grandmother, Brita Gallagher, who had set sail from Ireland during the Great Famine (1845–1849), bound for a new life in Burlington, New Jersey. The series is a trip along the Palin family line.
Starting in 1989, Palin appeared as presenter in a series of travel programmes made for the BBC. He was given the opportunity to present the first of these, "Around the World in 80 Days with Michael Palin", after the veteran TV globetrotter Alan Whicker and journalist Miles Kington turned it down; this opened the way to his subsequent travel shows. These programmes have been broadcast worldwide in syndication, and were also sold on VHS tape and later on DVD:
Following each trip, Palin wrote a book about his travels, providing information and insights not included in the TV programme. Each book is illustrated with photographs by Basil Pao, the stills photographer who was on the team. (Exception: the first book, "Around the World in 80 Days", contains some pictures by Pao but most are by other photographers.)
All seven of these books were also made available as audio books, all of them read by Palin himself. "Around the World in 80 Days" and "Hemingway Adventure" are unabridged, while the others were made in both abridged and unabridged versions.
For four of the trips a photography book was made by Pao, each with an introduction written by Palin. These are large coffee-table style books with pictures printed on glossy paper. The majority of the pictures are of various people encountered on the trip, as informal portraits or showing them engaged in some interesting activity. Some of the landscape photos are displayed as two-page spreads.
Palin's travel programmes are responsible for a phenomenon termed the "Palin effect": areas of the world that he has visited suddenly become popular tourist attractions – for example, the significant increase in the number of tourists interested in Peru after Palin visited Machu Picchu. In a 2006 survey of "15 of the world's top travel writers" by "The Observer", Palin named Peru's Pongo de Mainique (canyon below the Machu Picchu) his "favourite place in the world".
Palin notes in his book of "Around the World in 80 Days" that the final leg of his journey could originally have taken him and his crew on one of the trains involved in the Clapham Junction rail crash, but they arrived ahead of schedule and caught an earlier train.
In recent years, Palin has written and presented occasional documentary programmes on artists that interest him. The first, on Scottish painter Anne Redpath, was "Palin on Redpath" in 1997. In "The Bright Side of Life" (2000), Palin continued on a Scottish theme, looking at the work of the Scottish Colourists. Two further programmes followed on European painters; "Michael Palin and the Ladies Who Loved Matisse" (2004) and "Michael Palin and the Mystery of Hammershøi" (2005), about the French artist Henri Matisse and Danish artist Vilhelm Hammershøi respectively. The DVD "Michael Palin on Art" contains all these documentaries except for the Matisse programme.
In November 2008, Palin presented a First World War documentary about Armistice Day, 11 November 1918, when thousands of soldiers lost their lives in battle after the war had officially ended. Palin filmed on the battlefields of Northern France and Belgium for the programme, called the "Last Day of World War One", produced for the BBC's "Timewatch" series.
Palin was instrumental in setting up the Michael Palin Centre for Stammering Children in 1993. Also in 1993, each member of Monty Python had an asteroid named after them; Palin's is Asteroid 9621 Michaelpalin. In 2003, a commemorative stone marking donors to the Globe Theatre was placed inside the theatre; Palin has his own stone, but his name is misspelt on it as "Michael Pallin". The story goes that John Cleese paid for the stone and mischievously insisted on misspelling the name.
In honour of his achievements as a traveller, especially rail travel, Palin has two British trains named after him. In 2002, Virgin Trains' new £5 million high speed Super Voyager train number 221130 was named "Michael Palin" – it carries his name externally and a plaque is located adjacent to the onboard shop with information on Palin and his many journeys. Also, National Express East Anglia named a British Rail Class 153 (unit number 153335) after him. (He is also a model railway enthusiast.)
In 2008, he received the James Joyce Award of the Literary and Historical Society in Dublin. In recognition of his services to the promotion of geography, Palin was awarded the Livingstone Medal of the Royal Scottish Geographical Society in March 2009, along with a Fellowship of this Society (FRGS). In June 2013, he was similarly honoured in Canada with a gold medal for achievements in geography by the Royal Canadian Geographical Society. In June 2009, Palin was elected for a three-year term as President of the Royal Geographical Society. Because of his self-described "amenable, conciliatory character" Michael Palin has been referred to as unofficially "Britain's Nicest Man." In a 2018 poll for Yorkshire Day he was named the greatest Yorkshireman ever, ahead of Sean Bean and Patrick Stewart.
In September 2013, Moorlands School, Leeds named one of their school houses "Palin" after him. The University of St Andrews awarded Palin an honorary Doctor of Science degree during their June 2017 graduation ceremonies, with the degree recognising his contribution to the public's understanding of contemporary geography. He joins his fellow Pythons John Cleese and Terry Jones in receiving an honorary degree from the Fife institution. In October 2018, the Royal Canadian Geographical Society awarded Palin the first Louie Kamookak Medal for advances in geography, for his book on the history of the polar exploration vessel HMS "Erebus".
He was appointed a Commander of the Order of the British Empire (CBE) in the 2000 New Year Honours. Palin was appointed a Knight Commander of the Order of St Michael and St George (KCMG) in the 2019 New Year Honours for "services to travel, culture and geography". Palin is the only member of the Monty Python team to receive a knighthood. (John Cleese had turned down a CBE in 1996, calling it "too silly", and declined a life peerage in 1999).
In 2017 the British Library acquired Palin's archive consisting of project files relating to his work, notebooks, and his personal diaries. The papers in the archive relate to his work with "Monty Python", his later TV work, and his children's and humorous books.
All his travel books can also be read at no charge, complete and unabridged, on his website.
|
https://en.wikipedia.org/wiki?curid=19620
|
Materials science
The interdisciplinary field of materials science, also commonly termed materials science and engineering, is the design and discovery of new materials, particularly solids. The intellectual origins of materials science stem from the Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study.
Many of the most pressing scientific problems humans currently face are due to the limits of available materials and how they are used. Thus, breakthroughs in materials science are likely to affect the future of technology significantly.
Materials scientists emphasize understanding how the history of a material (its "processing") influences its structure, and thus the material's properties and performance. The understanding of processing-structure-properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy. Materials science is also an important part of forensic engineering and failure analysis: investigating materials, products, structures or components which fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents.
The material of choice of a given era is often a defining point. Phrases such as Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary examples. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from mining and (likely) ceramics and earlier from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science were products of the Space Race: the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building space vehicles enabling the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials.
Before the 1960s (and in some cases decades after), many eventual "materials science" departments were "metallurgy" or "ceramics engineering" departments, reflecting the 19th and early 20th century emphasis on metals and ceramics. The growth of materials science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s "to expand the national program of basic research and training in the materials sciences." The field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, biomaterials, and nanomaterials, generally classified into three distinct groups: ceramics, metals, and polymers. A prominent change in materials science during recent decades has been the active use of computer simulations to find new materials, predict properties, and understand phenomena.
A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications. There are a myriad of materials around us—they can be found in anything from buildings to spacecraft. Materials can generally be further divided into two classes: crystalline and non-crystalline. The traditional examples of materials are metals, semiconductors, ceramics and polymers. New and advanced materials that are being developed include nanomaterials, biomaterials, and energy materials to name a few.
The basis of materials science involves studying the structure of materials, and relating them to their properties. Once a materials scientist knows about this structure-property correlation, they can then go on to study the relative performance of a material in a given application. The major determinants of the structure of a material and thus of its properties are its constituent chemical elements and the way in which it has been processed into its final form. These characteristics, taken together and related through the laws of thermodynamics and kinetics, govern a material's microstructure, and thus its properties.
As mentioned above, structure is one of the most important components of the field of materials science. Materials science examines the structure of materials from the atomic scale, all the way up to the macro scale. Characterization is the way materials scientists examine the structure of a material. This involves methods such as diffraction with X-rays, electrons, or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy (EDS), chromatography, thermal analysis, electron microscope analysis, etc. Structure is studied at various levels, as detailed below.
This deals with the atoms of the materials, and how they are arranged to give molecules, crystals, etc. Many of the electrical, magnetic and chemical properties of materials arise from this level of structure. The length scales involved are in angstroms (Å).
The chemical bonding and atomic arrangement (crystallography) are fundamental to studying the properties and behavior of any material.
To obtain a full understanding of the material structure and how it relates to its properties, the materials scientist must study how the different atoms, ions and molecules are arranged and bonded to each other. This involves the study and use of quantum chemistry or quantum physics. Solid-state physics, solid-state chemistry and physical chemistry are also involved in the study of bonding and structure.
Crystallography is the science that examines the arrangement of atoms in crystalline solids. Crystallography is a useful tool for materials scientists. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure. Further, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Mostly, materials do not occur as a single crystal, but in polycrystalline form, i.e., as an aggregate of small crystals with different orientations. Because of this, the powder diffraction method, which uses diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination.
Most materials have a crystalline structure, but some important materials do not exhibit regular crystal structure. Polymers display varying degrees of crystallinity, and many are completely noncrystalline. Glass, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements. The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic and mechanical descriptions of physical properties.
Nanostructure deals with objects and structures that are in the 1–100 nm range. In many materials, atoms or molecules agglomerate together to form objects at the nanoscale. This causes many interesting electrical, magnetic, optical, and mechanical properties.
In describing nanostructures it is necessary to differentiate between the number of dimensions on the nanoscale. Nanotextured surfaces have "one dimension" on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm. Nanotubes have "two dimensions" on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater. Finally, spherical nanoparticles have "three dimensions" on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) often are used synonymously although UFP can reach into the micrometre range. The term 'nanostructure' is often used when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure.
Materials whose atoms and molecules form constituents at the nanoscale (i.e., they form a nanostructure) are called nanomaterials. Nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit.
Microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25× magnification. It deals with objects from 100 nm to a few cm. The microstructure of a material (which can be broadly classified into metallic, polymeric, ceramic and composite) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behavior, wear resistance, and so on. Most of the traditional materials (such as metals and ceramics) are microstructured.
The manufacture of a perfect crystal of a material is physically impossible. For example, any crystalline material will contain defects such as precipitates, grain boundaries (Hall–Petch relationship), vacancies, interstitial atoms or substitutional atoms. The microstructure of materials reveals these larger defects, so that they can be studied, with advances in simulation greatly improving the understanding of how defects can be used to enhance material properties.
Macrostructure is the appearance of a material on the scale of millimeters to meters; it is the structure of the material as seen with the naked eye.
Materials exhibit myriad properties, including the following.
The properties of a material determine its usability and hence its engineering application.
Synthesis and processing involves the creation of a material with the desired micro- and nanostructure. From an engineering standpoint, a material cannot be used in industry if no economical production method for it has been developed. Thus, the processing of materials is vital to the field of materials science.
Different materials require different processing or synthesis methods. For example, the processing of metals has historically been very important and is studied under the branch of materials science named "physical metallurgy". Chemical and physical methods are also used to synthesize other materials such as polymers, ceramics, thin films, etc. As of the early 21st century, new methods are being developed to synthesize nanomaterials such as graphene.
Thermodynamics is concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints common to all materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The behavior of these microscopic particles is described by, and the laws of thermodynamics are derived from, statistical mechanics.
The study of thermodynamics is fundamental to materials science. It forms the foundation to treat general phenomena in materials science and engineering, including chemical reactions, magnetism, polarizability, and elasticity. It also helps in the understanding of phase diagrams and phase equilibrium.
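As a standard textbook illustration (not quoted in this article), whether a phase transformation can proceed spontaneously at constant temperature and pressure is governed by the change in Gibbs free energy:

$$\Delta G = \Delta H - T\,\Delta S$$

where ΔH is the enthalpy change, T the absolute temperature, and ΔS the entropy change; a transformation is thermodynamically favorable when ΔG < 0, which is the basic criterion behind phase diagrams and phase equilibrium.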
Chemical kinetics is the study of the rates at which systems that are out of equilibrium change under the influence of various forces. When applied to materials science, it deals with how a material changes with time (moves from non-equilibrium to equilibrium state) due to application of a certain field. It details the rate of various processes evolving in materials including shape, size, composition and structure. Diffusion is important in the study of kinetics as this is the most common mechanism by which materials undergo change.
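As an illustration of how such rates are commonly quantified, one standard form of Fick's second law (a textbook relation, not taken from this article) describes one-dimensional diffusion:

$$\frac{\partial c}{\partial t} = D\,\frac{\partial^2 c}{\partial x^2}$$

where c is the concentration of the diffusing species, t is time, x is position, and D is the diffusion coefficient, which itself typically depends strongly on temperature.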
Kinetics is essential in processing of materials because, among other things, it details how the microstructure changes with application of heat.
Materials science is a highly active area of research. Together with materials science departments, physics, chemistry, and many engineering departments are involved in materials research. Materials research covers a broad range of topics – the following non-exhaustive list highlights a few important research areas.
Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 1000 nanometers (1 nm = 10⁻⁹ m), but is usually 1–100 nm.
Nanomaterials research takes a materials science-based approach to nanotechnology, using advances in materials metrology and synthesis which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, or mechanical properties.
The field of nanomaterials is loosely organized, like the traditional field of chemistry, into organic (carbon-based) nanomaterials such as fullerenes, and inorganic nanomaterials based on other elements, such as silicon. Examples of nanomaterials include fullerenes, carbon nanotubes, nanocrystals, etc.
A biomaterial is any matter, surface, or construct that interacts with biological systems. The study of biomaterials is called "bio materials science". It has experienced steady and strong growth over its history, with many companies investing large amounts of money into developing new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science.
Biomaterials can be derived either from nature or synthesized in a laboratory using a variety of chemical approaches using metallic components, polymers, bioceramics, or composite materials. They are often intended or adapted for medical applications, such as biomedical devices which perform, augment, or replace a natural function. Such functions may be benign, like being used for a heart valve, or may be bioactive with a more interactive functionality such as hydroxylapatite-coated hip implants. Biomaterials are also used every day in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as an organ transplant material.
Semiconductors, metals, and ceramics are used today to form highly complex systems, such as integrated electronic circuits, optoelectronic devices, and magnetic and optical mass storage media. These materials form the basis of our modern computing world, and hence research into these materials is of vital importance.
Semiconductors are a traditional example of these types of materials. They are materials that have properties that are intermediate between conductors and insulators. Their electrical conductivities are very sensitive to the concentration of impurities, which allows the use of doping to achieve desirable electronic properties. Hence, semiconductors form the basis of the traditional computer.
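As a standard textbook relation (not given in this article), the sensitivity of conductivity to impurity concentration can be expressed in terms of the carrier concentrations of a doped semiconductor:

$$\sigma = q\,(n\,\mu_n + p\,\mu_p)$$

where q is the elementary charge, n and p are the electron and hole concentrations, and μ_n and μ_p are their mobilities; doping shifts n and p, and hence the conductivity, by many orders of magnitude.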
This field also includes new areas of research such as superconducting materials, spintronics, metamaterials, etc. The study of these materials involves knowledge of materials science and solid-state physics or condensed matter physics.
With continuing increases in computing power, simulating the behavior of materials has become possible. This enables materials scientists to understand behavior and mechanisms, design new materials, and explain properties formerly poorly understood. Efforts surrounding Integrated computational materials engineering are now focusing on combining computational methods with experiments to drastically reduce the time and effort to optimize materials properties for a given application. This involves simulating materials at all length scales, using methods such as density functional theory, molecular dynamics, Monte Carlo, dislocation dynamics, phase field, finite element, and many more.
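As a toy illustration of this kind of simulation, the following minimal Python sketch (not taken from the article; the lattice size, temperatures, and step count are arbitrary illustrative choices) uses the Metropolis Monte Carlo method on a 2D Ising model to show how a bulk property such as magnetization can be estimated from simple microscopic interactions:

```python
# Minimal sketch: Metropolis Monte Carlo on a 2D Ising model in reduced units
# (J = 1, k_B = 1). This is a toy model, not a realistic materials simulation.
import numpy as np

rng = np.random.default_rng(0)

def ising_metropolis(n=32, temperature=2.0, steps=200_000):
    """Return the mean magnetization per spin of an n x n Ising lattice."""
    spins = rng.choice([-1, 1], size=(n, n))
    beta = 1.0 / temperature
    for _ in range(steps):
        i, j = rng.integers(0, n, size=2)
        # Sum of the four nearest neighbours with periodic boundary conditions.
        neighbours = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                      + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        # Energy change if the spin at (i, j) is flipped.
        delta_e = 2.0 * spins[i, j] * neighbours
        # Metropolis acceptance rule.
        if delta_e <= 0 or rng.random() < np.exp(-beta * delta_e):
            spins[i, j] *= -1
    return spins.mean()

if __name__ == "__main__":
    for t in (1.5, 2.27, 3.0):  # below, near, and above the critical temperature
        print(f"T = {t:.2f}  <m> = {ising_metropolis(temperature=t):+.3f}")
```

Realistic materials simulations replace this toy model with the methods named above, such as density functional theory or molecular dynamics, but the workflow of sampling microscopic configurations to predict bulk behavior is the same in spirit.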
Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing methods (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytic methods (characterization methods such as electron microscopy, X-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, small-angle X-ray scattering (SAXS), etc.).
Besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. Thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. Often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. For example, steels are classified based on 1/10 and 1/100 weight percentages of the carbon and other alloying elements they contain. Thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced.
Another application of materials science is the study of the structures of ceramics and glass, which are typically associated with the most brittle materials. Bonding in ceramics and glasses uses covalent and ionic-covalent types with SiO2 (silica or sand) as a fundamental building block. Ceramics can be as soft as clay or as hard as stone and concrete. Usually, they are crystalline in form. Most glasses contain a metal oxide fused with silica. At the high temperatures used to prepare glass, the material is a viscous liquid. The structure of glass forms into an amorphous state upon cooling. Windowpanes and eyeglasses are important examples. Fibers of glass are also available. Scratch-resistant Corning Gorilla Glass is a well-known example of the application of materials science to drastically improve the properties of common components. Diamond and carbon in its graphite form are considered to be ceramics.
Engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. Alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. Hot pressing provides higher density material. Chemical vapor deposition can place a film of a ceramic on another material. Cermets are ceramic particles containing some metals. The wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties.
Filaments are commonly used for reinforcement in composite materials.
Another application of materials science in industry is making composite materials. These are structured materials composed of two or more macroscopic phases. Applications range from structural elements such as steel-reinforced concrete, to the thermal insulating tiles which play a key and integral role in NASA's Space Shuttle thermal protection system which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is reinforced Carbon-Carbon (RCC), the light gray material which withstands re-entry temperatures up to and protects the Space Shuttle's wing leading edges and nose cap. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolyzed to convert the resin to carbon, impregnated with furfural alcohol in a vacuum chamber, and cured-pyrolyzed to convert the furfural alcohol to carbon. To provide oxidation resistance for reusability, the outer layers of the RCC are converted to silicon carbide.
Other examples can be seen in the "plastic" casings of television sets, cell-phones and so on. These plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for strength, bulk, or electrostatic dispersion. These additions may be termed reinforcing fibers, or dispersants, depending on their purpose.
Polymers are chemical compounds made up of a large number of identical components linked together like chains. They are an important part of materials science. Polymers are the raw materials (the resins) used to make what are commonly called plastics and rubber. Plastics and rubber are really the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Long-established plastics in current widespread use include polyethylene, polypropylene, polyvinyl chloride (PVC), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates; long-established rubbers include natural rubber, styrene-butadiene rubber, chloroprene, and butadiene rubber. Plastics are generally classified as "commodity", "specialty" and "engineering" plastics.
Polyvinyl chloride (PVC) is widely used, inexpensive, and annual production quantities are large. It lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. Its fabrication and processing are simple and well-established. The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts. The term "additives" in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties.
Polycarbonate would normally be considered an engineering plastic (other examples include PEEK and ABS). Such plastics are valued for their superior strength and other special material properties. They are usually not used for disposable applications, unlike commodity plastics.
Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electro-fluorescence, high thermal stability, etc.
The dividing lines between the various types of plastics are not based on material but rather on their properties and applications. For example, polyethylene (PE) is a cheap, low friction polymer commonly used to make disposable bags for shopping and trash, and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for underground gas and water pipes, and another variety called ultra-high-molecular-weight polyethylene (UHMWPE) is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low-friction socket in implanted hip joints.
The study of metal alloys is a significant part of materials science. Of all the metallic alloys in use today, the alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion both by quantity and commercial value. Iron alloyed with various proportions of carbon gives low, mid and high carbon steels. An iron-carbon alloy is only considered steel if the carbon level is between 0.01% and 2.00%. For the steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. Heat treatment processes such as quenching and tempering can significantly change these properties, however. Cast iron is defined as an iron–carbon alloy with more than 2.00% but less than 6.67% carbon. Stainless steel is defined as a regular steel alloy with greater than 10% by weight alloying content of chromium. Nickel and molybdenum are typically also found in stainless steels.
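As a minimal sketch (not part of the article; the function name and fallback message are illustrative assumptions), the carbon-content boundaries quoted above can be expressed directly in code:

```python
# Minimal sketch: classify an iron-carbon alloy using the carbon-content
# boundaries quoted in the text above (0.01-2.00% for steel, 2.00-6.67% for
# cast iron). Behaviour outside those ranges is an illustrative assumption.
def classify_iron_carbon_alloy(carbon_wt_percent: float) -> str:
    if 0.01 <= carbon_wt_percent <= 2.00:
        return "steel"
    if 2.00 < carbon_wt_percent < 6.67:
        return "cast iron"
    return "outside the steel/cast iron ranges given in the text"

print(classify_iron_carbon_alloy(0.4))   # mid-carbon range -> "steel"
print(classify_iron_carbon_alloy(3.5))   # typical grey cast iron -> "cast iron"
```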
Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known for a long time (since the Bronze Age), while the alloys of the other three metals have been relatively recently developed. Due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. The alloys of aluminium, titanium and magnesium are also known and valued for their high strength-to-weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These materials are ideal for situations where high strength-to-weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications.
The study of semiconductors is a significant part of materials science. A semiconductor is a material that has a resistivity between a metal and insulator. Its electronic properties can be greatly altered through intentionally introducing impurities or doping. From these semiconductor materials, things such as diodes, transistors, light-emitting diodes (LEDs), and analog and digital electric circuits can be built, making them materials of interest in industry. Semiconductor devices have replaced thermionic devices (vacuum tubes) in most applications. Semiconductor devices are manufactured both as single discrete devices and as integrated circuits (ICs), which consist of a number—from a few to millions—of devices manufactured and interconnected on a single semiconductor substrate.
Of all the semiconductors in use today, silicon makes up the largest portion both by quantity and commercial value. Monocrystalline silicon is used to produce wafers used in the semiconductor and electronics industry. Gallium arsenide (GaAs) is the second most widely used semiconductor, after silicon. Due to its higher electron mobility and saturation velocity compared to silicon, it is a material of choice for high-speed electronics applications. These superior properties are compelling reasons to use GaAs circuitry in mobile phones, satellite communications, microwave point-to-point links and higher frequency radar systems. Other semiconductor materials include germanium, silicon carbide, and gallium nitride and have various applications.
Materials science evolved—starting from the 1950s—because it was recognized that to create, discover and design new materials, one had to approach it in a unified manner. Thus, materials science and engineering emerged in many ways: renaming and/or combining existing metallurgy and ceramics engineering departments; splitting from existing solid state physics research (itself growing into condensed matter physics); pulling in relatively new polymer engineering and polymer science; recombining from the previous, as well as chemistry, chemical engineering, mechanical engineering, and electrical engineering; and more.
The field is inherently interdisciplinary, and materials scientists and engineers must be aware of and make use of the methods of the physicist, chemist and engineer. The field thus maintains close relationships with these fields. Also, many physicists, chemists and engineers find themselves working in materials science.
The field of materials science and engineering is important both from a scientific perspective, as well as from an engineering one. When discovering new materials, one encounters new phenomena that may not have been observed before. Hence, there is a lot of science to be discovered when working with materials. Materials science also provides a test for theories in condensed matter physics.
Materials are of the utmost importance for engineers, as the usage of the appropriate materials is crucial when designing systems. As a result, materials science is an increasingly important part of an engineer's education.
The main branches of materials science stem from the three main classes of materials: ceramics, metals, and polymers.
There are additionally broadly applicable, materials-independent endeavors.
Further, there are relatively broad focuses across materials on specific phenomena.
|
https://en.wikipedia.org/wiki?curid=19622
|
Mitsubishi A6M Zero
The Mitsubishi A6M "Zero" is a long-range fighter aircraft formerly manufactured by Mitsubishi Aircraft Company, a part of Mitsubishi Heavy Industries, and operated by the Imperial Japanese Navy from 1940 to 1945. The A6M was designated as the , or the Mitsubishi A6M Rei-sen. The A6M was usually referred to by its pilots as the "Reisen" (, zero fighter), "0" being the last digit of the imperial year 2600 (1940) when it entered service with the Imperial Navy. The official Allied reporting name was "Zeke", although the use of the name "Zero" (from Type 0) was used colloquially by the Allies as well.
The Zero is considered to have been the most capable carrier-based fighter in the world when it was introduced early in World War II, combining excellent maneuverability and very long range. The Imperial Japanese Navy Air Service (IJNAS) also frequently used it as a land-based fighter.
In early combat operations, the Zero gained a reputation as a dogfighter, achieving an outstanding kill ratio of 12 to 1, but by mid-1942 a combination of new tactics and the introduction of better equipment enabled Allied pilots to engage the Zero on generally equal terms. By 1943, due to inherent design weaknesses, such as the lack of hydraulic boosting for its ailerons and rudder, which made it extremely difficult to maneuver at high speeds, and an inability to equip it with a more powerful aircraft engine, the Zero gradually became less effective against newer Allied fighters. By 1944, with opposing Allied fighters approaching its levels of maneuverability and consistently exceeding its firepower, armor, and speed, the A6M had largely become outdated as a fighter aircraft. However, as design delays and production difficulties hampered the introduction of newer Japanese aircraft models, the Zero continued to serve in a front-line role until the end of the war in the Pacific. During the final phases, it was also adapted for use in "kamikaze" operations. Japan produced more Zeros than any other model of combat aircraft during the war.
The Mitsubishi A5M fighter was just entering service in early 1937, when the Imperial Japanese Navy (IJN) started looking for its eventual replacement. On October 5, 1937, it issued "Planning Requirements for the Prototype 12-shi Carrier-based Fighter", sending it to Nakajima and Mitsubishi. Both firms started preliminary design work while they awaited more definitive requirements to be handed over in a few months.
Based on the experiences of the A5M in China, the IJN sent out updated requirements in October calling for a speed of at and a climb to in 9.5 minutes. With drop tanks, it wanted an endurance of two hours at normal power, or six to eight hours at economical cruising speed. Armament was to consist of two 20 mm cannons, two 7.7 mm (.303 in) machine guns and two bombs. A complete radio set was to be mounted in all aircraft, along with a radio direction finder for long-range navigation. The maneuverability was to be at least equal to that of the A5M, while the wingspan had to be less than to allow for use on aircraft carriers.
Nakajima's team considered the new requirements unachievable and pulled out of the competition in January. Mitsubishi's chief designer, Jiro Horikoshi, thought that the requirements could be met, but only if the aircraft were made as light as possible. Every possible weight-saving measure was incorporated into the design. Most of the aircraft was built of a new top-secret aluminium alloy developed by Sumitomo Metal Industries in 1936. Called "extra super duralumin" (ESD), it was lighter, stronger and more ductile than other alloys (e.g. 24S alloy) used at the time, but was prone to corrosive attack, which made it brittle. This detrimental effect was countered with an anti-corrosion coating applied after fabrication. No armour protection was provided for the pilot, engine or other critical points of the aircraft, and self-sealing fuel tanks, which were becoming common at the time, were not used. This made the Zero lighter, more maneuverable, and the longest-ranged single-engine fighter of World War II, which made it capable of searching out an enemy hundreds of kilometres away, bringing it to battle, then returning to its base or aircraft carrier. However, that tradeoff in weight and construction also made it prone to catching fire and exploding when struck by enemy fire.
With its low-wing cantilever monoplane layout, retractable, wide-set conventional landing gear and enclosed cockpit, the Zero was one of the most modern carrier-based aircraft in the world at the time of its introduction. It had a fairly high-lift, low-speed wing with very low wing loading. This, combined with its light weight, resulted in a very low stalling speed of well below . This was the main reason for its phenomenal maneuverability, allowing it to out-turn any Allied fighter of the time. Early models were fitted with servo tabs on the ailerons after pilots complained that control forces became too heavy at speeds above . They were discontinued on later models after it was found that the lightened control forces were causing pilots to overstress the wings during vigorous maneuvers.
It has been claimed that the Zero's design showed a clear influence from British and American fighter aircraft and components exported to Japan in the 1930s, and in particular on the American side, the Vought V-143 fighter. Chance Vought had sold the prototype for this aircraft and its plans to Japan in 1937. Eugene Wilson, president of Vought, claimed that when shown a captured Zero in 1943, he found that "There on the floor was the Vought V 142 or just the spitting image of it, Japanese-made", while the "power-plant installation was distinctly Chance Vought, the wheel stowage into the wing roots came from Northrop, and the Japanese designers had even copied the Navy inspection stamp from Pratt & Whitney type parts." While the sale of the V-143 was fully legal, Wilson later acknowledged the conflicts of interest that can arise whenever military technology is exported. Counterclaims maintain that there was no significant relationship between the V-143 (which was an unsuccessful design that had been rejected by the U.S. Army Air Corps and several export customers) and the Zero, with only a superficial similarity in layout.
The Zero resembled the 1937 British Gloster F.5/34. Performance of the Gloster F.5/34 was comparable to that of early model Zeros, with its dimensions and appearance remarkably close to the Zero. Gloster had a relationship with the Japanese between the wars, with Nakajima building the carrier-based plane, the Gloster Gambet, under license. However, allegations that the Zero was a copy have been discredited by some authors.
The A6M is usually known as the "Zero" from its Japanese Navy type designation, Type 0 carrier fighter ("Rei shiki Kanjō sentōki", ), taken from the last digit of the Imperial year 2600 (1940) when it entered service. In Japan, it was unofficially referred to as both "Rei-sen" and "Zero-sen"; Japanese pilots most commonly called it "Zero-sen," where "sen" is the first syllable of "sentōki," Japanese for "fighter plane".
In the official designation "A6M", the "A" signified a carrier-based fighter, "6" meant that it was the sixth such model built for the Imperial Navy, and "M" indicated Mitsubishi as the manufacturer.
The official Allied code name was "Zeke", in keeping with the practice of giving male names to Japanese fighters, female names to bombers, bird names to gliders, and tree names to trainers. "Zeke" was part of the first batch of "hillbilly" code names assigned by Captain Frank T. McCoy of Nashville, Tennessee (assigned to the Allied Technical Air Intelligence Unit (ATAIU) at Eagle Farm Airport in Australia), who wanted quick, distinctive, easy-to-remember names. The Allied code for Japanese aircraft was introduced in 1942, and McCoy chose "Zeke" for the "Zero". Later, two variants of the fighter received their own code names. The Nakajima A6M2-N floatplane version of the Zero was called "Rufe", and the A6M3-32 variant was initially called "Hap". General "Hap" Arnold, commander of the USAAF, objected to that name, however, so it was changed to "Hamp".
The first Zeros (pre-series of 15 A6M2) went into operation with the 12th Rengo Kōkūtai in July 1940. On 13 September 1940, the Zeros scored their first air-to-air victories when 13 A6M2s led by Lieutenant Saburo Shindo attacked 27 Soviet-built Polikarpov I-15s and I-16s of the Chinese Nationalist Air Force, shooting down all the fighters without loss to themselves. By the time they were redeployed a year later, the Zeros had shot down 99 Chinese aircraft (266 according to other sources).
At the time of the attack on Pearl Harbor, 521 Zeros were active in the Pacific, 328 in first-line units. The carrier-borne Model 21 was the type encountered by the Americans. Its tremendous range of over allowed it to range farther from its carrier than expected, appearing over distant battlefronts and giving Allied commanders the impression that there were several times as many Zeros as actually existed.
The Zero quickly gained a fearsome reputation. Thanks to a combination of unsurpassed maneuverability – even compared to other contemporary Axis fighters – and excellent firepower, it easily disposed of Allied aircraft sent against it in the Pacific in 1941. It proved a difficult opponent even for the Supermarine Spitfire. "The RAF pilots were trained in methods that were excellent against German and Italian equipment but suicide against the acrobatic Japs", as Lt. Gen. Claire Lee Chennault noted. Although not as fast as the British fighter, the Mitsubishi fighter could out-turn the Spitfire with ease, sustain a climb at a very steep angle, and stay in the air for three times as long.
Allied pilots soon developed tactics to cope with the Zero. Due to its extreme agility, engaging a Zero in a traditional, turning dogfight was likely to be fatal. It was better to swoop down from above in a high-speed pass, fire a quick burst, then climb quickly back up to altitude. A short burst of fire from heavy machine guns or cannon was often enough to bring down the fragile Zero. These tactics were regularly employed by Grumman F4F Wildcat fighters during the defense of Guadalcanal, where high-altitude ambushes were made possible by an early warning system consisting of Coastwatchers and radar. Such "boom-and-zoom" tactics were also successfully used in the China Burma India Theater (CBI) by the "Flying Tigers" of the American Volunteer Group (AVG) against similarly maneuverable Japanese Army aircraft such as the Nakajima Ki-27 "Nate" and Nakajima Ki-43 "Oscar". AVG pilots were trained by their commander Claire Chennault to exploit the advantages of their P-40s, which were very sturdy, heavily armed, generally faster in a dive and level flight at low altitude, with a good rate of roll.
Another important maneuver was Lieutenant Commander John S. "Jimmy" Thach's "Thach Weave", in which two fighters would fly about apart. If a Zero latched onto the tail of one of the fighters, the two aircraft would turn toward each other. If the Zero followed his original target through the turn, he would come into a position to be fired on by the target's wingman. This tactic was first used to good effect during the Battle of Midway and later over the Solomon Islands.
Many highly experienced Japanese aviators were lost in combat, resulting in a progressive decline in quality, which became a significant factor in Allied successes. Unexpected heavy losses of pilots at the Battles of the Coral Sea and Midway dealt the Japanese carrier air force a blow from which it never fully recovered.
Throughout the Battle of Midway, Allied pilots expressed a high level of dissatisfaction with the Grumman F4F Wildcat and were astounded by the Zero's superiority.
In contrast, Allied fighters were designed with ruggedness and pilot protection in mind. The Japanese ace Saburō Sakai described how the toughness of early Grumman aircraft was a factor in preventing the Zero from attaining total domination.
When the Lockheed P-38 Lightning, armed with four "light barrel" AN/M2 .50 cal. Browning machine guns and one 20 mm autocannon, and the Grumman F6F Hellcat and Vought F4U Corsair, each with six AN/M2 heavy calibre Browning guns, appeared in the Pacific theater, the A6M, with its low-powered engine and lighter armament, was hard-pressed to remain competitive. In combat with an F6F or F4U, the only positive thing that could be said of the Zero at this stage of the war was that, in the hands of a skillful pilot, it could maneuver as well as most of its opponents. Nonetheless, in competent hands, the Zero could still be deadly.
Due to shortages of high-powered aviation engines and problems with planned successor models, the Zero remained in production until 1945, with over 10,000 of all variants produced.
The American military discovered many of the A6M's unique attributes when they recovered a largely intact specimen of an A6M2, the Akutan Zero, on Akutan Island in the Aleutians. During an air raid over Dutch Harbor on June 4, 1942, one A6M fighter was hit by ground-based anti-aircraft fire. Losing oil, Flight Petty Officer Tadayoshi Koga attempted an emergency landing on Akutan Island about northeast of Dutch Harbor, but his Zero flipped over on soft ground in a sudden crash-landing. Koga died instantly of head injuries (his neck was broken by the tremendous impact), but his wingmen hoped he had survived and so went against Japanese doctrine to destroy disabled Zeros. The relatively-undamaged fighter was found over a month later by an American salvage team and was shipped to Naval Air Station North Island, where testing flights of the repaired A6M revealed both strengths and deficiencies in design and performance.
The experts who evaluated the captured Zero found that the plane weighed about fully loaded, some lighter than the F4F Wildcat, the standard United States Navy fighter of the time. The A6M's airframe was "built like a fine watch"; the Zero was constructed with flush rivets, and even the guns were flush with the wings. The instrument panel was a "marvel of simplicity ... with no superfluities to distract [the pilot]". What most impressed the experts was that the Zero's fuselage and wings were constructed in one piece, unlike the American method that built them separately and joined the two parts together. The Japanese method was much slower, but resulted in a very strong structure and improved close maneuverability.
American test pilots found that the Zero's controls were "very light" at , but stiffened at faster speeds (above ) to safeguard against wing failure. The Zero could not keep up with Allied aircraft in high-speed maneuvers, and its low "never exceed speed" (VNE) made it vulnerable in a dive. Testing also revealed that the Zero could not roll as quickly to the right as it could to the left, which could be exploited. While stable on the ground despite its light weight, the aircraft was designed purely for the attack role, emphasizing long range, maneuverability, and firepower at the expense of protection of its pilot. Most lacked self-sealing tanks and armor plating.
Captain Eric Brown, the Chief Naval Test Pilot of the Royal Navy, recalled being impressed by the Zero during tests of captured aircraft. "I don't think I have ever flown a fighter that could match the rate of turn of the Zero. The Zero had ruled the roost totally and was the finest fighter in the world until mid-1943."
The first two A6M1 prototypes were completed in March 1939, powered by the Mitsubishi Zuisei 13 engine with a two-blade propeller. It first flew on 1 April, and passed testing within a remarkably short period. By September, it had already been accepted for Navy testing as the A6M1 Type 0 Carrier Fighter, with the only notable change being a switch to a three-bladed propeller to cure a vibration problem.
While the navy was testing the first two prototypes, they suggested that the third be fitted with the Nakajima Sakae 12 engine instead. Mitsubishi had its own engine of this class in the form of the Kinsei, so they were somewhat reluctant to use the Sakae. Nevertheless, when the first A6M2 was completed in January 1940, the Sakae's extra power pushed the performance of the Zero well past the original specifications.
The new version was so promising that the Navy had 15 built and shipped to China before they had completed testing. They arrived in Manchuria in July 1940, and first saw combat over Chungking in August. There they proved to be completely untouchable by the Polikarpov I-16s and I-153s that had been such a problem for the A5Ms when in service. In one encounter, 13 Zeros shot down 27 I-15s and I-16s in under three minutes without loss. After hearing of these reports, the navy immediately ordered the A6M2 into production as the Type 0 Carrier Fighter, Model 11.
Reports of the Zero's performance filtered back to the US slowly. There they were dismissed by most military officials, who thought it was impossible for the Japanese to build such an aircraft.
After the delivery of the 65th aircraft, a further change was worked into the production lines, which introduced folding wingtips to allow them to fit on aircraft carriers. The resulting Model 21 would become one of the most produced versions early in the war. A notable feature was the improved range achieved with the wing tank and a drop tank. When the lines switched to updated models, 740 Model 21s had been completed by Mitsubishi, and another 800 by Nakajima. Two other versions of the Model 21 were built in small numbers, the Nakajima-built A6M2-N "Rufe" floatplane (based on the Model 11 with a slightly modified tail), and the A6M2-K two-seat trainer of which a total of 508 were built by Hitachi and the Sasebo Naval Air Arsenal.
In 1941, Nakajima introduced the Sakae 21 engine, which used a two-speed supercharger for better altitude performance, and increased power to . A prototype Zero with the new engine was first flown on July 15, 1941.
The new Sakae was slightly heavier and somewhat longer due to the larger supercharger, which moved the center of gravity too far forward on the existing airframe. To correct for this, the engine mountings were cut back by to move the engine toward the cockpit. This had the side effect of reducing the size of the main fuselage fuel tank (located between the engine and the cockpit) from to . The cowling was redesigned to enlarge the cowl flaps, revise the oil cooler air intake, and move the carburetor air intake to the upper half of the cowling.
The wings were redesigned to reduce span, eliminate the folding tips, and square off the wingtips. The inboard edge of the aileron was moved outboard by one rib, and the wing fuel tanks were enlarged accordingly to . The two 20 mm wing cannon were upgraded from the Type 99 Mark l to the Type 99 Mark II, which required a bulge in the sheet metal of the wing below each cannon. The wings also included larger ammunition boxes, allowing 100 rounds per cannon.
The Sakae 21 engine and other changes increased maximum speed by only compared to the Model 21, but sacrificed nearly of range. Nevertheless, the navy accepted the type and it entered production in April 1942.
The shorter wing span led to better roll, and the reduced drag allowed the diving speed to be increased to . On the downside, turning and range, which were the strengths of the Model 21, suffered due to smaller ailerons, decreased lift and greater fuel consumption. The shorter range proved a significant limitation during the Solomons Campaign, during which Zeros based at Rabaul had to travel nearly to their maximum range to reach Guadalcanal and return. Consequently, the Model 32 was unsuited to that campaign and was used mainly for shorter range offensive missions and interception.
The appearance of the redesigned A6M3-32 prompted the US to assign the Model 32 a new code name, "Hap". This name was short-lived, as a protest from USAAF Commanding General Henry "Hap" Arnold forced a change to "Hamp". Soon after, it was realized that it was simply a new model of the "Zeke" and was termed "Zeke 32".
This variant was flown by only a small number of units, and only 343 were built.
In order to correct the deficiencies of the Model 32, a new version with folding wingtips and a redesigned wing was introduced. The fuel tanks were moved to the outer wings, fuel lines for a drop tank were installed under each wing and the internal fuel capacity was increased to . More importantly, it regained the long operating range of the earlier A6M2 Model 21, which had been greatly curtailed in the Model 32.
However, before the new design type was accepted formally by the Navy, the A6M3 Model 22 already stood ready for service in December 1942. Approximately 560 aircraft of the new type had been produced in the meantime by Mitsubishi Jukogyo K.K.
According to a theory, the very late production Model 22 might have had wings similar to the shortened, rounded-tip wing of the Model 52. One plane of such arrangement was photographed at Lakunai Airfield ("Rabaul East") in the second half of 1943, and has been published widely in a number of Japanese books. While the engine cowling is the same as that of the previous Model 32 and 22, the theory proposes that the plane is an early production Model 52.
The Model 32, 22, 22 kou, 52, 52 kou and 52 otsu were all powered by the Nakajima ("Sakae") engine. That engine kept its designation in spite of changes in the exhaust system for the Model 52.
Mitsubishi is unable to state with certainty that it ever used the designation "A6M4" or model numbers for it. However, "A6M4" does appear in a translation of a captured Japanese memo from a Naval Air Technical Arsenal, titled Quarterly Report on Research Experiments, dated 1 October 1942. It mentions a "cross-section of the A6M4 intercooler" then being designed. Some researchers believe "A6M4" was applied to one or two prototype planes fitted with an experimental turbo-supercharged Sakae engine designed for high altitude. Mitsubishi's involvement in the project was probably quite limited or nil; the unmodified Sakae engine was made by Nakajima. The design and testing of the turbo-supercharger was the responsibility of the First Naval Air [Technical] Arsenal (, "") at Yokosuka. At least one photo of a prototype plane exists. It shows a turbo unit mounted in the forward left fuselage.
Lack of suitable alloys for use in the manufacture of a turbo-supercharger and its related ducting caused numerous ruptures, resulting in fires and poor performance. Consequently, further development of a turbo-supercharged A6M was cancelled. The lack of acceptance by the navy suggests that the navy did not bestow model number 41 or 42 formally, although it appears that the arsenal did use the designation "A6M4". The prototype engines nevertheless provided useful experience for future engine designs.
Sometimes considered as the most effective variant, the Model 52 was developed to again shorten the wings to increase speed and dispense with the folding wing mechanism. In addition, ailerons, aileron trim tab and flaps were revised. Produced first by Mitsubishi, most Model 52s were made by Nakajima. The prototype was made in June 1943 by modifying an A6M3 and was first flown in August 1943. The first Model 52 is said in the handling manual to have production number 3904, which apparently refers to the prototype.
Research by Mr. Bunzo Komine published by Mr. Kenji Miyazaki states that aircraft 3904 through 4103 had the same exhaust system and cowl flaps as on the Model 22. This is partially corroborated by two wrecks researched by Mr. Stan Gajda and Mr. L. G. Halls, production number 4007 and 4043, respectively. (The upper cowling was slightly redesigned from that of the Model 22.)
An early production A6M5 Zero with non-separated exhaust, with an A6M3 Model 22 in the background.
A new exhaust system provided an increment of thrust by aiming the stacks aft and distributing them around the forward fuselage. The new exhaust system required "notched" cowl flaps and heat shields just aft of the stacks. (Note, however, that the handling manual translation states that the new style of exhaust commenced with number 3904. Whether this is correct, indicates retrofitting intentions, refers to the prototype but not to all subsequent planes, or is in error is not clear.) From production number 4274, the wing fuel tanks received carbon dioxide fire extinguishers. From number 4354, the radio became the Model 3, aerial Mark 1, and at that point it is said the antenna mast was shortened slightly. Through production number 4550, the lowest exhaust stacks were approximately the same length as those immediately above them. This caused hot exhaust to burn the forward edge of the landing gear doors and heat the tires. Therefore, from number 4551 Mitsubishi began to install shorter bottom stacks. Nakajima manufactured the Model 52 at its Koizumi plant in Gunma Prefecture. The A6M5 had a maximum speed of at and reached that altitude in 7:01 minutes.
Subsequent variants included:
Some Model 21 and 52 aircraft were converted to "bakusen" (fighter-bombers) by mounting a bomb rack and bomb in place of the centerline drop tank.
Perhaps seven Model 52 planes were ostensibly converted into A6M5-K two-seat trainers. Mass production was contemplated by Hitachi, but not undertaken.
The A6M6 was developed to use the Sakae 31a engine, featuring water-methanol engine boost and self-sealing wing tanks. During preliminary testing, its performance was considered unsatisfactory due to the additional engine power failing to materialize and the unreliability of the fuel injection system. Testing continued on the A6M6 but the end of war stopped further development. Only one prototype was produced.
The A6M7 was the last variant to see service. It was designed to meet a requirement by the Navy for a dedicated attack/dive bomber version that could operate from smaller aircraft carriers or, according to another source, replace the obsolete Aichi D3A. The A6M7 had considerable design changes compared to previous attempts to make the A6M suitable for dive bombing. These included a reinforced vertical stabilizer, a special bomb rack, provision for two 350 litre drop tanks and fixed bomb/rocket swing stoppers on the underside of the wings. It was also given a new powerplant, the Sakae 31 engine, producing 1,130 hp on take-off. The A6M7 had a similar armament layout to the A6M5c with the exception of the centreline bomb rack, capable of carrying 250 kg or 500 kg bombs. Entering production in May 1945, the A6M7 was also used in the special attack role.
Similar to the A6M6 but with the Sakae (now out of production) replaced by the Mitsubishi Kinsei 62 engine, 60% more powerful than the engine of the A6M2. This resulted in an extensively modified cowling and nose for the aircraft. The carburetor intake was much larger, a long duct like that on the Nakajima B6N Tenzan was added, and a large spinner—like that on the Yokosuka D4Y Suisei with the Kinsei 62—was mounted. The armament consisted of two 13.2 mm (.52 in) Type 3 machine guns and two 20 mm (.80 in) Type 99 cannons in the wings. In addition, the Model 64 was modified to carry a drop tank under either wing in order to permit the mounting of a bomb on the underside of the fuselage. Two prototypes were completed in April 1945, but the chaotic state of Japanese industry and the end of the war prevented the start of the ambitious program to produce 6,300 A6M8s; only those two prototypes were ever completed and flown.
Like many surviving World War II Japanese aircraft, most surviving Zeros are made up of parts from multiple airframes. As a result, some are referred to by conflicting manufacturer serial numbers. In other cases, such as those recovered after decades in a wrecked condition, they have been reconstructed to the point that the majority of their structure is made up of modern parts. All of this means the identities of survivors can be difficult to confirm.
Most flying Zeros have had their engines replaced with similar American units. Only one, the Planes of Fame Museum's A6M5, has the original Sakae engine.
The rarity of flyable Zeros accounts for the use of single-seat North American T-6 Texans, with heavily modified fuselages and painted in Japanese markings, as substitutes for Zeros in the films "Tora! Tora! Tora!", "The Final Countdown", and many other television and film depictions of the aircraft, such as "Baa Baa Black Sheep" (renamed "Black Sheep Squadron"). One Model 52 was used during the production of "Pearl Harbor".
|
https://en.wikipedia.org/wiki?curid=19623
|
Monasticism
Monasticism (from Ancient Greek "monos", 'alone') or monkhood is a religious way of life in which one renounces worldly pursuits to devote oneself fully to spiritual work. Monastic life plays an important role in many Christian churches, especially in the Catholic and Orthodox traditions, as well as in other faiths such as Buddhism, Hinduism and Jainism. In other religions monasticism is criticized and not practiced, as in Islam and Zoroastrianism, or plays a marginal role, as in modern Judaism. Women pursuing a monastic life are generally called "nuns", "religious" or "sisters" or, rarely, "canonesses", while monastic men are called "monks", "friars" or "brothers".
Many monastics live in abbeys, convents, monasteries or priories to separate themselves from the secular world, unless they are in mendicant or missionary orders. Titles for monastics differ between the Christian denominations. In Roman Catholicism and Anglicanism, monks and nuns are addressed as Brother (or Father, if ordained to the priesthood) or Mother/Sister, while in Eastern Orthodoxy, they are addressed as Father or Mother.
The Sangha or community of ordained Buddhist bhikkhus ("beggar" or "one who lives by alms") and original bhikkhunis (nuns) was founded by Gautama Buddha during his lifetime over 2500 years ago. This communal monastic lifestyle grew out of the lifestyle of earlier sects of wandering ascetics, some of whom the Buddha had studied under. It was initially fairly eremitic or reclusive in nature. Bhikkhus and bhikkunis were expected to live with a minimum of possessions, which were to be voluntarily provided by the lay community. Lay followers also provided the daily food that bhikkhus required, and provided shelter for bhikkhus when they needed it.
After the Parinibbana (Final Passing) of the Buddha, the Buddhist monastic order developed into a primarily cenobitic or communal movement. The practice of living communally during the rainy vassa season, prescribed by the Buddha, gradually grew to encompass a settled monastic life centered on life in a community of practitioners. Most of the modern disciplinary rules followed by bhikkhus and bhikkhunis — as encoded in the Patimokkha — relate to such an existence, prescribing in great detail proper methods for living and relating in a community of bhikkhus or bhikkhunis. The number of rules observed varies with the order; Theravada bhikkhus follow around 227 rules, as set out in the Vinaya. There are a larger number of rules specified for bhikkhunis (nuns).
The Buddhist monastic order consists of the male bhikkhu assembly and the female bhikkhuni assembly. Initially consisting only of males, it grew to include females after the Buddha's stepmother, Mahaprajapati, asked for and received permission to live as an ordained practitioner.
Bhikkhus and bhikkhunis are expected to fulfill a variety of roles in the Buddhist community. First and foremost, they are expected to preserve the doctrine and discipline now known as Buddhism. They are also expected to provide a living example for the laity, and to serve as a "field of merit" for lay followers—providing laymen and women with the opportunity to earn merit by giving gifts and support to the bhikkhus. In return for the support of the laity, bhikkhus and bhikkhunis are expected to live an austere life focused on the study of Buddhist doctrine, the practice of meditation, and the observance of good moral character.
A bhikkhu (the term in the Pali language) or bhikshu (in Sanskrit), first ordains as a "Samanera" (novice). Novices often ordain at a young age, but generally no younger than eight. Samaneras live according to the Ten Precepts, but are not responsible for living by the full set of monastic rules. Higher ordination, conferring the status of a full Bhikkhu, is given only to men who are aged 20 or older. Bhikkhunis follow a similar progression, but are required to live as Samaneras for longer periods of time, typically five years.
The disciplinary regulations for bhikkhus and bhikkhunis are intended to create a life that is simple and focused, rather than one of deprivation or severe asceticism. However, celibacy is a fundamental part of this form of monastic discipline.
Monasticism in Christianity, which provides the origins of the words "monk" and "monastery", comprises several diverse forms of religious living. It began to develop early in the history of the Church, but is not mentioned in the scriptures. It has come to be regulated by religious rules (e.g. the Rule of St Basil, the Rule of St Benedict) and, in modern times, the Church law of the respective apostolic Christian churches that have forms of monastic living.
The Christian monk embraces the monastic life as a vocation from God. His objective is to imitate the life of Christ as far as possible in preparation for attaining eternal life after death.
In 4th century Egypt, Christians felt called to a more reclusive or eremitic form of living (in the spirit of the "Desert Theology" for the purpose of spiritual renewal and return to God). Saint Anthony the Great is cited by Athanasius as one of the early "Hermit monks". Especially in the Middle East, eremitic monasticism continued to be common until the decline of Syriac Christianity in the late Middle Ages.
Around 318 Saint Pachomius started to organize his many followers in what was to become the first Christian cenobitic or communal monastery. Soon, similar institutions were established throughout the Egyptian desert as well as the rest of the eastern half of the Roman Empire, including a number of notable monasteries in the East.
In the West, the most significant development occurred when the rules for monastic communities were written down, the Rule of St Basil being credited with having been the first. The precise dating of the Rule of the Master is problematic. It has been argued that it antedates the Rule of Saint Benedict created by Benedict of Nursia for his monastery in Monte Cassino, Italy (c. 529), and the other Benedictine monasteries he had founded as part of the Order of St Benedict. The Benedictine Rule would become the most common rule throughout the Middle Ages and is still in use today. The Augustinian Rule, due to its brevity, has been adopted by various communities, chiefly the Canons Regular. Around the 12th century, the Franciscan, Carmelite, Dominican, Servite Order (see Servants of Mary) and Augustinian mendicant orders chose to live in city convents among the people instead of being secluded in monasteries. St. Augustine's Monastery, founded in 1277 in Erfurt, Germany, is regarded by many historians and theologians as the "cradle of the Reformation", as it is where Martin Luther lived as a monk from 1505 to 1511.
Today new expressions of Christian monasticism, many of which are ecumenical, are developing in various places such as the Bose Monastic Community in Italy, the Monastic Fraternities of Jerusalem throughout Europe, the New Skete, the Anglo-Celtic Society of Nativitists, the Taizé Community in France, and the mainly Evangelical Protestant New Monasticism.
In their quest to attain the spiritual goal of life, some Hindus choose the path of monasticism (Sannyasa). Monastics commit themselves to a life of simplicity, celibacy, detachment from worldly pursuits, and the contemplation of God. A Hindu monk is called a "sanyāsī", "sādhu", or "swāmi". A nun is called a "sanyāsini", "sādhvi", or "swāmini". Such renunciates are accorded high respect in Hindu society, because their outward renunciation of selfishness and worldliness serves as an inspiration to householders who strive for "mental" renunciation. Some monastics live in monasteries, while others wander from place to place, trusting in God alone to provide for their physical needs. It is considered a highly meritorious act for a lay devotee to provide sadhus with food or other necessaries. Sādhus are expected to treat all with respect and compassion, whether a person may be poor or rich, good or wicked. They are also expected to be indifferent to praise, blame, pleasure, and pain. A sādhu can typically be recognized by his ochre-colored clothing. Generally, Vaisnava monks shave their heads except for a small patch of hair on the back of the head, while Saivite monks let their hair and beard grow uncut.
A "sadhu's" vow of renunciation typically forbids him from:
Islam forbids the practice of monasticism. One example from Sunni Islam is Uthman bin Maz'oon, one of the companions of Muhammad. He was married to Khawlah bint Hakim, both being among the earliest converts to Islam. There is a Sunni narration that, out of religious devotion, Uthman bin Maz'oon decided to dedicate himself to night prayers and to take a vow of chastity from his wife. His wife was upset and spoke to Muhammad about this. Muhammad reminded Uthman that he himself, as the Prophet, also had a family life, and that Uthman had a responsibility to his family and should not adopt monasticism as a form of religious practice.
Muhammad told his companions to ease their burden and avoid excess. According to some Sunni hadiths, in a message to some companions who wanted to put an end to their sexual life, pray all night long or fast continuously, Muhammad said: “Do not do that! Fast on some days and eat on others. Sleep part of the night, and stand in prayer another part. For your body has rights upon you, your eyes have a right upon you, your wife has a right upon you, your guest has a right upon you.” Muhammad once exclaimed, repeating it three times: “Woe to those who exaggerate [who are too strict]!” And, on another occasion, Muhammad said: “Moderation, moderation! For only with moderation will you succeed.”
Monasticism is also mentioned in several places in the Qur'an, including the following verses:
They have taken as lords beside Allah their rabbis and their monks and the Messiah son of Mary, when they were bidden to worship only One God. There is no god save Him. Be He glorified from all that they ascribe as partner (unto Him)!
O ye who believe! Lo! many of the (Jewish) rabbis and the (Christian) monks devour the wealth of mankind wantonly and debar (men) from the way of Allah. They who hoard up gold and silver and spend it not in the way of Allah, unto them give tidings (O Muhammad) of a painful doom
Thou wilt find the most vehement of mankind in hostility to those who believe (to be) the Jews and the idolaters. And thou wilt find the nearest of them in affection to those who believe (to be) those who say: Lo! We are Christians. That is because there are among them priests and monks, and because they are not proud.
In Jainism, monasticism is encouraged and respected. Rules for monasticism are rather strict. A Jain ascetic has neither a permanent home nor any possessions, wandering barefoot from place to place except during the months of Chaturmas. The quality of life they lead is difficult because of the many constraints placed on them. They do not use vehicles and always travel barefoot from one place to another, irrespective of the distance. They possess no material goods and do not use basic services such as telephones or electricity. They do not prepare their own food, living only on what people offer them.
Judaism does not encourage the monastic ideal of celibacy and poverty. To the contrary, all of the Torah's Commandments are a means of sanctifying the physical world. As further disseminated through the teachings of Yisrael Ba'al Shem Tov, the pursuit of permitted physical pleasures is encouraged as a means to "serve God with joy" (Deut. 28:47).
However, until the Destruction of the Second Temple, about two thousand years ago, taking Nazirite vows was a common feature of the religion. Nazirite Jews (in Hebrew: נזיר) abstained from grape products, haircuts, and contact with the dead. However, they did not withdraw from general society, and they were permitted to marry and own property; moreover, in most cases a Nazirite vow was for a specified time period and not permanent. In Modern Hebrew, the term "Nazir" is most often used to refer to non-Jewish monastics.
Unique among Jewish communities is the monasticism of the Beta Israel of Ethiopia, a practice believed to date to the 15th century.
A form of asceticism was practiced by some individuals in pre–World War II European Jewish communities. Its principal expression was "prishut", the practice of a married Talmud student going into self-imposed exile from his home and family to study in the kollel of a different city or town. This practice was associated with, but not exclusive to, the Perushim.
The Essenes (in Modern but not in Ancient Hebrew: "Isiyim"; Greek: Εσσηνοι, Εσσαιοι, or Οσσαιοι; "Essēnoi", "Essaioi", or "Ossaioi") were a Jewish sect that flourished from the 2nd century BC to AD 100 which some scholars claim seceded from the Zadokite priests. Being much fewer in number than the Pharisees and the Sadducees (the other two major sects at the time), the Essenes lived in various cities but congregated in communal life dedicated to asceticism, voluntary poverty, daily immersion (in mikvah), and abstinence from worldly pleasures, including (for some groups) marriage. Many separate but related religious groups of that era shared similar mystic, eschatological, messianic, and ascetic beliefs. These groups are collectively referred to by various scholars as the "Essenes". Josephus records that Essenes existed in large numbers, and thousands lived throughout Roman Judaea.
The Essenes have gained fame in modern times as a result of the discovery of an extensive group of religious documents known as the Dead Sea Scrolls, which are commonly believed to be the Essenes' library—although there is no proof that the Essenes wrote them. These documents include multiple preserved copies of the Hebrew Bible which were untouched from as early as 300 years before Christ until their discovery in 1946. Some scholars, however, dispute the notion that the Essenes wrote the Dead Sea Scrolls. Rachel Elior, a prominent Israeli scholar, even questions the existence of the Essenes.
Throughout the centuries Taoism developed its own extensive monastic traditions and practices. Particularly well known is the White Cloud Monastery in Beijing, which houses a rare complete copy of the "Daozang", a major Taoist Scripture.
|
https://en.wikipedia.org/wiki?curid=19626
|
Mathematical logic
Mathematical logic is a subfield of mathematics exploring the applications of formal logic to mathematics. It bears close connections to metamathematics, the foundations of mathematics, and theoretical computer science. The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems.
Mathematical logic is often divided into the fields of set theory, model theory, recursion theory, and proof theory. These areas share basic results on logic, particularly first-order logic, and definability. In computer science (particularly in the ACM Classification) mathematical logic encompasses additional topics not detailed in this article; see Logic in computer science for those.
Since its inception, mathematical logic has both contributed to, and has been motivated by, the study of foundations of mathematics. This study began in the late 19th century with the development of axiomatic frameworks for geometry, arithmetic, and analysis. In the early 20th century it was shaped by David Hilbert's program to prove the consistency of foundational theories. Results of Kurt Gödel, Gerhard Gentzen, and others provided partial resolution to the program, and clarified the issues involved in proving consistency. Work in set theory showed that almost all ordinary mathematics can be formalized in terms of sets, although there are some theorems that cannot be proven in common axiom systems for set theory. Contemporary work in the foundations of mathematics often focuses on establishing which parts of mathematics can be formalized in particular formal systems (as in reverse mathematics) rather than trying to find theories in which all of mathematics can be developed.
The "Handbook of Mathematical Logic" makes a rough division of contemporary mathematical logic into four areas:
Each area has a distinct focus, although many techniques and results are shared among multiple areas. The borderlines amongst these fields, and the lines separating mathematical logic and other fields of mathematics, are not always sharp. Gödel's incompleteness theorem marks not only a milestone in recursion theory and proof theory, but has also led to Löb's theorem in modal logic. The method of forcing is employed in set theory, model theory, and recursion theory, as well as in the study of intuitionistic mathematics.
The mathematical field of category theory uses many formal axiomatic methods, and includes the study of categorical logic, but category theory is not ordinarily considered a subfield of mathematical logic. Because of its applicability in diverse fields of mathematics, mathematicians including Saunders Mac Lane have proposed category theory as a foundational system for mathematics, independent of set theory. These foundations use toposes, which resemble generalized models of set theory that may employ classical or nonclassical logic.
Mathematical logic emerged in the mid-19th century as a subfield of mathematics, reflecting the confluence of two traditions: formal philosophical logic and mathematics (Ferreirós 2001, p. 443). "Mathematical logic, also called 'logistic', 'symbolic logic', the 'algebra of logic', and, more recently, simply 'formal logic', is the set of logical theories elaborated in the course of the last [nineteenth] century with the aid of an artificial notation and a rigorously deductive method." Before this emergence, logic was studied with rhetoric, with "calculationes", through the syllogism, and with philosophy. The first half of the 20th century saw an explosion of fundamental results, accompanied by vigorous debate over the foundations of mathematics.
Theories of logic were developed in many cultures in history, including China, India, Greece and the Islamic world. Greek methods, particularly Aristotelian logic (or term logic) as found in the "Organon", found wide application and acceptance in Western science and mathematics for millennia. The Stoics, especially Chrysippus, began the development of predicate logic. In 18th-century Europe, attempts to treat the operations of formal logic in a symbolic or algebraic way had been made by philosophical mathematicians including Leibniz and Lambert, but their labors remained isolated and little known.
In the middle of the nineteenth century, George Boole and then Augustus De Morgan presented systematic mathematical treatments of logic. Their work, building on work by algebraists such as George Peacock, extended the traditional Aristotelian doctrine of logic into a sufficient framework for the study of foundations of mathematics (Katz 1998, p. 686).
Charles Sanders Peirce built upon the work of Boole to develop a logical system for relations and quantifiers, which he published in several papers from 1870 to 1885.
Gottlob Frege presented an independent development of logic with quantifiers in his "Begriffsschrift", published in 1879, a work generally considered as marking a turning point in the history of logic. Frege's work remained obscure, however, until Bertrand Russell began to promote it near the turn of the century. The two-dimensional notation Frege developed was never widely adopted and is unused in contemporary texts.
From 1890 to 1905, Ernst Schröder published "Vorlesungen über die Algebra der Logik" in three volumes. This work summarized and extended the work of Boole, De Morgan, and Peirce, and was a comprehensive reference to symbolic logic as it was understood at the end of the 19th century.
Concerns that mathematics had not been built on a proper foundation led to the development of axiomatic systems for fundamental areas of mathematics such as arithmetic, analysis, and geometry.
In logic, the term "arithmetic" refers to the theory of the natural numbers. Giuseppe Peano (1889) published a set of axioms for arithmetic that came to bear his name (Peano axioms), using a variation of the logical system of Boole and Schröder but adding quantifiers. Peano was unaware of Frege's work at the time. Around the same time Richard Dedekind showed that the natural numbers are uniquely characterized by their induction properties. Dedekind (1888) proposed a different characterization, which lacked the formal logical character of Peano's axioms. Dedekind's work, however, proved theorems inaccessible in Peano's system, including the uniqueness of the set of natural numbers (up to isomorphism) and the recursive definitions of addition and multiplication from the successor function and mathematical induction.
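As a purely illustrative sketch of the kind of recursive definition Dedekind justified (the names succ, add and mul below are modern and hypothetical, not drawn from his work), addition and multiplication can be built from nothing but the successor operation and recursion on the natural numbers:

    def succ(n):
        # the successor function, the only arithmetic primitive assumed here
        return n + 1

    def add(m, n):
        # recursion on n: add(m, 0) = m and add(m, n + 1) = succ(add(m, n))
        if n == 0:
            return m
        return succ(add(m, n - 1))

    def mul(m, n):
        # recursion on n: mul(m, 0) = 0 and mul(m, n + 1) = add(mul(m, n), m)
        if n == 0:
            return 0
        return add(mul(m, n - 1), m)

    # for example, add(2, 3) returns 5 and mul(2, 3) returns 6

Dedekind's contribution was to show that definitions by recursion of this kind are legitimate, that is, that they determine a unique function on the natural numbers.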
In the mid-19th century, flaws in Euclid's axioms for geometry became known (Katz 1998, p. 774). In addition to the independence of the parallel postulate, established by Nikolai Lobachevsky in 1826 (Lobachevsky 1840), mathematicians discovered that certain theorems taken for granted by Euclid were not in fact provable from his axioms. Among these are the theorems that a line contains at least two points, and that circles of the same radius whose centers are separated by that radius must intersect. Hilbert (1899) developed a complete set of axioms for geometry, building on previous work by Pasch (1882). The success in axiomatizing geometry motivated Hilbert to seek complete axiomatizations of other areas of mathematics, such as the natural numbers and the real line. This would prove to be a major area of research in the first half of the 20th century.
The 19th century saw great advances in the theory of real analysis, including theories of convergence of functions and Fourier series. Mathematicians such as Karl Weierstrass began to construct functions that stretched intuition, such as nowhere-differentiable continuous functions. Previous conceptions of a function as a rule for computation, or a smooth graph, were no longer adequate. Weierstrass began to advocate the arithmetization of analysis, which sought to axiomatize analysis using properties of the natural numbers. The modern (ε, δ)-definition of limit and continuous functions was already developed by Bolzano in 1817 (Felscher 2000), but remained relatively unknown.
Cauchy in 1821 defined continuity in terms of infinitesimals (see Cours d'Analyse, page 34). In 1858, Dedekind proposed a definition of the real numbers in terms of Dedekind cuts of rational numbers (Dedekind 1872), a definition still employed in contemporary texts.
Georg Cantor developed the fundamental concepts of infinite set theory. His early results developed the theory of cardinality and proved that the reals and the natural numbers have different cardinalities (Cantor 1874). Over the next twenty years, Cantor developed a theory of transfinite numbers in a series of publications. In 1891, he published a new proof of the uncountability of the real numbers that introduced the diagonal argument, and used this method to prove Cantor's theorem that no set can have the same cardinality as its powerset. Cantor believed that every set could be well-ordered, but was unable to produce a proof for this result, leaving it as an open problem in 1895 (Katz 1998, p. 807).
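In compressed form, the diagonal argument behind Cantor's theorem runs as follows (a standard sketch, not a quotation from Cantor): suppose f were a surjection from a set S onto its powerset P(S), and let D = { x ∈ S : x ∉ f(x) }. If D = f(d) for some d in S, then d ∈ D exactly when d ∉ f(d) = D, a contradiction; hence no such surjection exists, and P(S) has strictly greater cardinality than S.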
In the early decades of the 20th century, the main areas of study were set theory and formal logic. The discovery of paradoxes in informal set theory caused some to wonder whether mathematics itself is inconsistent, and to look for proofs of consistency.
In 1900, Hilbert posed a famous list of 23 problems for the next century. The first two of these were to resolve the continuum hypothesis and prove the consistency of elementary arithmetic, respectively; the tenth was to produce a method that could decide whether a multivariate polynomial equation over the integers has a solution. Subsequent work to resolve these problems shaped the direction of mathematical logic, as did the effort to resolve Hilbert's "Entscheidungsproblem", posed in 1928. This problem asked for a procedure that would decide, given a formalized mathematical statement, whether the statement is true or false.
Ernst Zermelo (1904) gave a proof that every set could be well-ordered, a result Georg Cantor had been unable to obtain. To achieve the proof, Zermelo introduced the axiom of choice, which drew heated debate and research among mathematicians and the pioneers of set theory. The immediate criticism of the method led Zermelo to publish a second exposition of his result, directly addressing criticisms of his proof (Zermelo 1908a). This paper led to the general acceptance of the axiom of choice in the mathematics community.
Skepticism about the axiom of choice was reinforced by recently discovered paradoxes in naive set theory. Cesare Burali-Forti (1897) was the first to state a paradox: the Burali-Forti paradox shows that the collection of all ordinal numbers cannot form a set. Very soon thereafter, Bertrand Russell discovered Russell's paradox in 1901, and Jules Richard (1905) discovered Richard's paradox.
Zermelo (1908b) provided the first set of axioms for set theory. These axioms, together with the additional axiom of replacement proposed by Abraham Fraenkel, are now called Zermelo–Fraenkel set theory (ZF). Zermelo's axioms incorporated the principle of limitation of size to avoid Russell's paradox.
In 1910, the first volume of "Principia Mathematica" by Russell and Alfred North Whitehead was published. This seminal work developed the theory of functions and cardinality in a completely formal framework of type theory, which Russell and Whitehead developed in an effort to avoid the paradoxes. "Principia Mathematica" is considered one of the most influential works of the 20th century, although the framework of type theory did not prove popular as a foundational theory for mathematics (Ferreirós 2001, p. 445).
Fraenkel (1922) proved that the axiom of choice cannot be proved from the axioms of Zermelo's set theory with urelements. Later work by Paul Cohen (1966) showed that the addition of urelements is not needed, and the axiom of choice is unprovable in ZF. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory.
Leopold Löwenheim (1915) and Thoralf Skolem (1920) obtained the Löwenheim–Skolem theorem, which says that first-order logic cannot control the cardinalities of infinite structures. Skolem realized that this theorem would apply to first-order formalizations of set theory, and that it implies any such formalization has a countable model. This counterintuitive fact became known as Skolem's paradox.
In his doctoral thesis, Kurt Gödel (1929) proved the completeness theorem, which establishes a correspondence between syntax and semantics in first-order logic. Gödel used the completeness theorem to prove the compactness theorem, demonstrating the finitary nature of first-order logical consequence. These results helped establish first-order logic as the dominant logic used by mathematicians.
In 1931, Gödel published "On Formally Undecidable Propositions of Principia Mathematica and Related Systems", which proved the incompleteness (in a different meaning of the word) of all sufficiently strong, effective first-order theories. This result, known as Gödel's incompleteness theorem, establishes severe limitations on axiomatic foundations for mathematics, striking a strong blow to Hilbert's program. It showed the impossibility of providing a consistency proof of arithmetic within any formal theory of arithmetic. Hilbert, however, did not acknowledge the importance of the incompleteness theorem for some time.
Gödel's theorem shows that a consistency proof of any sufficiently strong, effective axiom system cannot be obtained in the system itself, if the system is consistent, nor in any weaker system. This leaves open the possibility of consistency proofs that cannot be formalized within the system they consider. Gentzen (1936) proved the consistency of arithmetic using a finitistic system together with a principle of transfinite induction. Gentzen's result introduced the ideas of cut elimination and proof-theoretic ordinals, which became key tools in proof theory. Gödel (1958) gave a different consistency proof, which reduces the consistency of classical arithmetic to that of intuitionistic arithmetic in higher types.
Alfred Tarski developed the basics of model theory.
Beginning in 1935, a group of prominent mathematicians collaborated under the pseudonym Nicolas Bourbaki to publish a series of encyclopedic mathematics texts. These texts, written in an austere and axiomatic style, emphasized rigorous presentation and set-theoretic foundations. Terminology coined by these texts, such as the words "bijection", "injection", and "surjection", and the set-theoretic foundations the texts employed, were widely adopted throughout mathematics.
The study of computability came to be known as recursion theory, because early formalizations by Gödel and Kleene relied on recursive definitions of functions. When these definitions were shown equivalent to Turing's formalization involving Turing machines, it became clear that a new concept – the computable function – had been discovered, and that this definition was robust enough to admit numerous independent characterizations. In his work on the incompleteness theorems in 1931, Gödel lacked a rigorous concept of an effective formal system; he immediately realized that the new definitions of computability could be used for this purpose, allowing him to state the incompleteness theorems in generality that could only be implied in the original paper.
Numerous results in recursion theory were obtained in the 1940s by Stephen Cole Kleene and Emil Leon Post. Kleene (1943) introduced the concepts of relative computability, foreshadowed by Turing (1939), and the arithmetical hierarchy. Kleene later generalized recursion theory to higher-order functionals. Kleene and Kreisel studied formal versions of intuitionistic mathematics, particularly in the context of proof theory.
At its core, mathematical logic deals with mathematical concepts expressed using formal logical systems. These systems, though they differ in many details, share the common property of considering only expressions in a fixed formal language. The systems of propositional logic and first-order logic are the most widely studied today, because of their applicability to foundations of mathematics and because of their desirable proof-theoretic properties. Stronger classical logics such as second-order logic or infinitary logic are also studied, along with nonclassical logics such as intuitionistic logic.
First-order logic is a particular formal system of logic. Its syntax involves only finite expressions as well-formed formulas, while its semantics are characterized by the limitation of all quantifiers to a fixed domain of discourse.
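To give one illustrative example (not tied to any particular source), the sentence ∀x ∃y (x < y) is a well-formed first-order formula; interpreted in the structure whose domain of discourse is the natural numbers with < read as the usual order, it is true, since every number has a larger one, while interpreted in any finite linear order it is false. The truth of a first-order sentence is thus always relative to a chosen structure and its domain.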
Early results from formal logic established limitations of first-order logic. The Löwenheim–Skolem theorem (1919) showed that if a set of sentences in a countable first-order language has an infinite model then it has at least one model of each infinite cardinality. This shows that it is impossible for a set of first-order axioms to characterize the natural numbers, the real numbers, or any other infinite structure up to isomorphism. As the goal of early foundational studies was to produce axiomatic theories for all parts of mathematics, this limitation was particularly stark.
Gödel's completeness theorem (Gödel 1929) established the equivalence between semantic and syntactic definitions of logical consequence in first-order logic. It shows that if a particular sentence is true in every model that satisfies a particular set of axioms, then there must be a finite deduction of the sentence from the axioms. The compactness theorem first appeared as a lemma in Gödel's proof of the completeness theorem, and it took many years before logicians grasped its significance and began to apply it routinely. It says that a set of sentences has a model if and only if every finite subset has a model, or in other words that an inconsistent set of formulas must have a finite inconsistent subset. The completeness and compactness theorems allow for sophisticated analysis of logical consequence in first-order logic and the development of model theory, and they are a key reason for the prominence of first-order logic in mathematics.
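A standard textbook-style application illustrates how the compactness theorem is used: suppose a first-order theory T has arbitrarily large finite models, and for each n let φ_n be a sentence asserting that at least n distinct elements exist. Every finite subset of T ∪ {φ_1, φ_2, ...} mentions only finitely many of the φ_n and is therefore satisfied by a sufficiently large finite model of T, so by compactness the whole set has a model, and that model must be infinite. Hence no first-order theory with arbitrarily large finite models can have only finite models.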
Gödel's incompleteness theorems (Gödel 1931) establish additional limits on first-order axiomatizations. The first incompleteness theorem states that for any consistent, effectively given (defined below) logical system that is capable of interpreting arithmetic, there exists a statement that is true (in the sense that it holds for the natural numbers) but not provable within that logical system (and which indeed may fail in some non-standard models of arithmetic which may be consistent with the logical system). For example, in every logical system capable of expressing the Peano axioms, the Gödel sentence holds for the natural numbers but cannot be proved.
Here a logical system is said to be effectively given if it is possible to decide, given any formula in the language of the system, whether the formula is an axiom, and one which can express the Peano axioms is called "sufficiently strong." When applied to first-order logic, the first incompleteness theorem implies that any sufficiently strong, consistent, effective first-order theory has models that are not elementarily equivalent, a stronger limitation than the one established by the Löwenheim–Skolem theorem. The second incompleteness theorem states that no sufficiently strong, consistent, effective axiom system for arithmetic can prove its own consistency, which has been interpreted to show that Hilbert's program cannot be completed.
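Stated compactly (a standard modern formulation rather than Gödel's original wording): if T is a consistent, effectively given theory capable of interpreting enough arithmetic, then there is a sentence G_T such that T proves neither G_T nor its negation; and, by the second theorem, T also does not prove the sentence Con(T) expressing its own consistency.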
Many logics besides first-order logic are studied. These include infinitary logics, which allow for formulas to provide an infinite amount of information, and higher-order logics, which include a portion of set theory directly in their semantics.
The most well studied infinitary logic is L_{ω₁,ω}. In this logic, quantifiers may only be nested to finite depths, as in first-order logic, but formulas may have finite or countably infinite conjunctions and disjunctions within them. Thus, for example, it is possible to say that an object is a whole number using a formula of L_{ω₁,ω} such as (x = 0) ∨ (x = 1) ∨ (x = 2) ∨ ⋯.
Higher-order logics allow for quantification not only of elements of the domain of discourse, but subsets of the domain of discourse, sets of such subsets, and other objects of higher type. The semantics are defined so that, rather than having a separate domain for each higher-type quantifier to range over, the quantifiers instead range over all objects of the appropriate type. The logics studied before the development of first-order logic, for example Frege's logic, had similar set-theoretic aspects. Although higher-order logics are more expressive, allowing complete axiomatizations of structures such as the natural numbers, they do not satisfy analogues of the completeness and compactness theorems from first-order logic, and are thus less amenable to proof-theoretic analysis.
Another family of logics comprises the fixed-point logics, which allow inductive definitions of the kind one writes for primitive recursive functions.
One can formally define an extension of first-order logic — a notion which encompasses all logics in this section because they behave like first-order logic in certain fundamental ways, but does not encompass all logics in general, e.g. it does not encompass intuitionistic, modal or fuzzy logic. Lindström's theorem implies that the only extension of first-order logic satisfying both the compactness theorem and the Downward Löwenheim–Skolem theorem is first-order logic.
Modal logics include additional modal operators, such as an operator which states that a particular formula is not only true, but necessarily true. Although modal logic is not often used to axiomatize mathematics, it has been used to study the properties of first-order provability (Solovay 1976) and set-theoretic forcing (Hamkins and Löwe 2007).
Intuitionistic logic was developed by Heyting to study Brouwer's program of intuitionism, in which Brouwer himself avoided formalization. Intuitionistic logic specifically does not include the law of the excluded middle, which states that each sentence is either true or its negation is true. Kleene's work with the proof theory of intuitionistic logic showed that constructive information can be recovered from intuitionistic proofs. For example, any provably total function in intuitionistic arithmetic is computable; this is not true in classical theories of arithmetic such as Peano arithmetic.
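A standard illustration of the difference: the classical proof that there exist irrational numbers a and b with a^b rational considers √2^√2; if this number is rational, take a = b = √2, and if it is irrational, take a = √2^√2 and b = √2, since then a^b = √2^(√2·√2) = √2^2 = 2. The argument invokes the law of the excluded middle to split into cases without deciding which one holds, so it exhibits no specific pair and is not accepted as an existence proof in intuitionistic logic.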
Algebraic logic uses the methods of abstract algebra to study the semantics of formal logics. A fundamental example is the use of Boolean algebras to represent truth values in classical propositional logic, and the use of Heyting algebras to represent truth values in intuitionistic propositional logic. Stronger logics, such as first-order logic and higher-order logic, are studied using more complicated algebraic structures such as cylindric algebras.
Set theory is the study of sets, which are abstract collections of objects. Many of the basic notions, such as ordinal and cardinal numbers, were developed informally by Cantor before formal axiomatizations of set theory were developed. The first such axiomatization, due to Zermelo (1908b), was extended slightly to become Zermelo–Fraenkel set theory (ZF), which is now the most widely used foundational theory for mathematics.
Other formalizations of set theory have been proposed, including von Neumann–Bernays–Gödel set theory (NBG), Morse–Kelley set theory (MK), and New Foundations (NF). Of these, ZF, NBG, and MK are similar in describing a cumulative hierarchy of sets. New Foundations takes a different approach; it allows objects such as the set of all sets at the cost of restrictions on its set-existence axioms. The system of Kripke–Platek set theory is closely related to generalized recursion theory.
Two famous statements in set theory are the axiom of choice and the continuum hypothesis. The axiom of choice, first stated by Zermelo (1904), was proved independent of ZF by Fraenkel (1922), but has come to be widely accepted by mathematicians. It states that given a collection of nonempty sets there is a single set "C" that contains exactly one element from each set in the collection. The set "C" is said to "choose" one element from each set in the collection. While the ability to make such a choice is considered obvious by some, since each set in the collection is nonempty, the lack of a general, concrete rule by which the choice can be made renders the axiom nonconstructive. Stefan Banach and Alfred Tarski (1924) showed that the axiom of choice can be used to decompose a solid ball into a finite number of pieces which can then be rearranged, with no scaling, to make two solid balls of the original size. This theorem, known as the Banach–Tarski paradox, is one of many counterintuitive results of the axiom of choice.
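In one common symbolic formulation (there are several equivalent ones), the axiom of choice states that for every family (S_i), i ∈ I, of nonempty sets there exists a choice function f with domain I such that f(i) ∈ S_i for every i ∈ I; the set "C" in the informal statement above plays the role of such a choice function, selecting one member from each set.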
The continuum hypothesis, first proposed as a conjecture by Cantor, was listed by David Hilbert as one of his 23 problems in 1900. Gödel showed that the continuum hypothesis cannot be disproven from the axioms of Zermelo–Fraenkel set theory (with or without the axiom of choice), by developing the constructible universe of set theory in which the continuum hypothesis must hold. In 1963, Paul Cohen showed that the continuum hypothesis cannot be proven from the axioms of Zermelo–Fraenkel set theory (Cohen 1966). This independence result did not completely settle Hilbert's question, however, as it is possible that new axioms for set theory could resolve the hypothesis. Recent work along these lines has been conducted by W. Hugh Woodin, although its importance is not yet clear (Woodin 2001).
Contemporary research in set theory includes the study of large cardinals and determinacy. Large cardinals are cardinal numbers with particular properties so strong that the existence of such cardinals cannot be proved in ZFC. The existence of the smallest large cardinal typically studied, an inaccessible cardinal, already implies the consistency of ZFC. Despite the fact that large cardinals have extremely high cardinality, their existence has many ramifications for the structure of the real line. "Determinacy" refers to the possible existence of winning strategies for certain two-player games (the games are said to be "determined"). The existence of these strategies implies structural properties of the real line and other Polish spaces.
Model theory studies the models of various formal theories. Here a theory is a set of formulas in a particular formal logic and signature, while a model is a structure that gives a concrete interpretation of the theory. Model theory is closely related to universal algebra and algebraic geometry, although the methods of model theory focus more on logical considerations than those fields.
The set of all models of a particular theory is called an elementary class; classical model theory seeks to determine the properties of models in a particular elementary class, or determine whether certain classes of structures form elementary classes.
The method of quantifier elimination can be used to show that definable sets in particular theories cannot be too complicated. Tarski (1948) established quantifier elimination for real-closed fields, a result which also shows the theory of the field of real numbers is decidable. (He also noted that his methods were equally applicable to algebraically closed fields of arbitrary characteristic.) A modern subfield developing from this is concerned with o-minimal structures.
Morley's categoricity theorem, proved by Michael D. Morley (1965), states that if a first-order theory in a countable language is categorical in some uncountable cardinality, i.e. all models of this cardinality are isomorphic, then it is categorical in all uncountable cardinalities.
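Standard examples (given here only for illustration) include the theory of algebraically closed fields of a fixed characteristic, which is categorical in every uncountable cardinality because such a field is determined up to isomorphism by its characteristic and transcendence degree, and, in the other direction, the theory of dense linear orders without endpoints, which is categorical in the countable cardinality but not in uncountable ones.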
A trivial consequence of the continuum hypothesis is that a complete theory with less than continuum many nonisomorphic countable models can have only countably many. Vaught's conjecture, named after Robert Lawson Vaught, says that this is true even independently of the continuum hypothesis. Many special cases of this conjecture have been established.
Recursion theory, also called computability theory, studies the properties of computable functions and the Turing degrees, which divide the uncomputable functions into sets that have the same level of uncomputability. Recursion theory also includes the study of generalized computability and definability. Recursion theory grew from the work of Rózsa Péter, Alonzo Church and Alan Turing in the 1930s, which was greatly extended by Kleene and Post in the 1940s.
Classical recursion theory focuses on the computability of functions from the natural numbers to the natural numbers. The fundamental results establish a robust, canonical class of computable functions with numerous independent, equivalent characterizations using Turing machines, λ calculus, and other systems. More advanced results concern the structure of the Turing degrees and the lattice of recursively enumerable sets.
Generalized recursion theory extends the ideas of recursion theory to computations that are no longer necessarily finite. It includes the study of computability in higher types as well as areas such as hyperarithmetical theory and α-recursion theory.
Contemporary research in recursion theory includes the study of applications such as algorithmic randomness, computable model theory, and reverse mathematics, as well as new results in pure recursion theory.
An important subfield of recursion theory studies algorithmic unsolvability; a decision problem or function problem is algorithmically unsolvable if there is no possible computable algorithm that returns the correct answer for all legal inputs to the problem. The first results about unsolvability, obtained independently by Church and Turing in 1936, showed that the Entscheidungsproblem is algorithmically unsolvable. Turing proved this by establishing the unsolvability of the halting problem, a result with far-ranging implications in both recursion theory and computer science.
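The heart of Turing's argument can be sketched in a few lines of modern code (Python is used here purely for illustration; halts is a hypothetical decision procedure which the theorem shows cannot actually be implemented):

    def halts(program, input_data):
        # hypothetical total decider: would return True exactly when
        # program(input_data) eventually halts; no correct implementation exists
        ...

    def diagonal(program):
        # halts immediately exactly when 'program' would run forever on itself
        if halts(program, program):
            while True:
                pass  # loop forever
        return

    # Feeding diagonal to itself yields a contradiction: if halts(diagonal, diagonal)
    # returned True, diagonal(diagonal) would loop forever; if it returned False,
    # diagonal(diagonal) would halt. Either way halts gives a wrong answer, so no
    # total, correct decider can exist.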
There are many known examples of undecidable problems from ordinary mathematics. The word problem for groups was proved algorithmically unsolvable by Pyotr Novikov in 1955 and independently by W. Boone in 1959. The busy beaver problem, developed by Tibor Radó in 1962, is another well-known example.
Hilbert's tenth problem asked for an algorithm to determine whether a multivariate polynomial equation with integer coefficients has a solution in the integers. Partial progress was made by Julia Robinson, Martin Davis and Hilary Putnam. The algorithmic unsolvability of the problem was proved by Yuri Matiyasevich in 1970 (Davis 1973).
Proof theory is the study of formal proofs in various logical deduction systems. These proofs are represented as formal mathematical objects, facilitating their analysis by mathematical techniques. Several deduction systems are commonly considered, including Hilbert-style deduction systems, systems of natural deduction, and the sequent calculus developed by Gentzen.
The study of constructive mathematics, in the context of mathematical logic, includes the study of systems in non-classical logic such as intuitionistic logic, as well as the study of predicative systems. An early proponent of predicativism was Hermann Weyl, who showed it is possible to develop a large part of real analysis using only predicative methods (Weyl 1918).
Because proofs are entirely finitary, whereas truth in a structure is not, it is common for work in constructive mathematics to emphasize provability. The relationship between provability in classical (or nonconstructive) systems and provability in intuitionistic (or constructive, respectively) systems is of particular interest. Results such as the Gödel–Gentzen negative translation show that it is possible to embed (or "translate") classical logic into intuitionistic logic, allowing some properties about intuitionistic proofs to be transferred back to classical proofs.
Recent developments in proof theory include the study of proof mining by Ulrich Kohlenbach and the study of proof-theoretic ordinals by Michael Rathjen.
"Mathematical logic has been successfully applied not only to mathematics and its foundations (G. Frege, B. Russell, D. Hilbert, P. Bernays, H. Scholz, R. Carnap, S. Lesniewski, T. Skolem), but also to physics (R. Carnap, A. Dittrich, B. Russell, C. E. Shannon, A. N. Whitehead, H. Reichenbach, P. Fevrier), to biology (J. H. Woodger, A. Tarski), to psychology (F. B. Fitch, C. G. Hempel), to law and morals (K. Menger, U. Klug, P. Oppenheim), to economics (J. Neumann, O. Morgenstern), to practical questions (E. C. Berkeley, E. Stamm), and even to metaphysics (J. [Jan] Salamucha, H. Scholz, J. M. Bochenski). Its applications to the history of logic have proven extremely fruitful (J. Lukasiewicz, H. Scholz, B. Mates, A. Becker, E. Moody, J. Salamucha, K. Duerr, Z. Jordan, P. Boehner, J. M. Bochenski, S. [Stanislaw] T. Schayer, D. Ingalls)." "Applications have also been made to theology (F. Drewnowski, J. Salamucha, I. Thomas)."
The study of computability theory in computer science is closely related to the study of computability in mathematical logic. There is a difference of emphasis, however. Computer scientists often focus on concrete programming languages and feasible computability, while researchers in mathematical logic often focus on computability as a theoretical concept and on noncomputability.
The theory of semantics of programming languages is related to model theory, as is program verification (in particular, model checking). The Curry–Howard isomorphism between proofs and programs relates to proof theory, especially intuitionistic logic. Formal calculi such as the lambda calculus and combinatory logic are now studied as idealized programming languages.
Computer science also contributes to mathematics by developing techniques for the automatic checking or even finding of proofs, such as automated theorem proving and logic programming.
Descriptive complexity theory relates logics to computational complexity. The first significant result in this area, Fagin's theorem (1974) established that NP is precisely the set of languages expressible by sentences of existential second-order logic.
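For instance (a standard illustration of the theorem, not specific to Fagin's paper), the NP property of graph 3-colorability is expressed by the existential second-order sentence ∃R ∃G ∃B [ ∀x (R(x) ∨ G(x) ∨ B(x)) ∧ ∀x ∀y (E(x, y) → ¬(R(x) ∧ R(y)) ∧ ¬(G(x) ∧ G(y)) ∧ ¬(B(x) ∧ B(y))) ], which asserts the existence of three sets of vertices covering the graph such that no edge joins two vertices of the same set.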
In the 19th century, mathematicians became aware of logical gaps and inconsistencies in their field. It was shown that Euclid's axioms for geometry, which had been taught for centuries as an example of the axiomatic method, were incomplete. The use of infinitesimals, and the very definition of function, came into question in analysis, as pathological examples such as Weierstrass' nowhere-differentiable continuous function were discovered.
Cantor's study of arbitrary infinite sets also drew criticism. Leopold Kronecker famously stated "God made the integers; all else is the work of man," endorsing a return to the study of finite, concrete objects in mathematics. Although Kronecker's argument was carried forward by constructivists in the 20th century, the mathematical community as a whole rejected them. David Hilbert argued in favor of the study of the infinite, saying "No one shall expel us from the Paradise that Cantor has created."
Mathematicians began to search for axiom systems that could be used to formalize large parts of mathematics. In addition to removing ambiguity from previously naive terms such as function, it was hoped that this axiomatization would allow for consistency proofs. In the 19th century, the main method of proving the consistency of a set of axioms was to provide a model for it. Thus, for example, non-Euclidean geometry can be proved consistent by defining "point" to mean a point on a fixed sphere and "line" to mean a great circle on the sphere. The resulting structure, a model of elliptic geometry, satisfies the axioms of plane geometry except the parallel postulate.
With the development of formal logic, Hilbert asked whether it would be possible to prove that an axiom system is consistent by analyzing the structure of possible proofs in the system, and showing through this analysis that it is impossible to prove a contradiction. This idea led to the study of proof theory. Moreover, Hilbert proposed that the analysis should be entirely concrete, using the term "finitary" to refer to the methods he would allow but not precisely defining them. This project, known as Hilbert's program, was seriously affected by Gödel's incompleteness theorems, which show that the consistency of formal theories of arithmetic cannot be established using methods formalizable in those theories. Gentzen showed that it is possible to produce a proof of the consistency of arithmetic in a finitary system augmented with axioms of transfinite induction, and the techniques he developed to do so were seminal in proof theory.
A second thread in the history of foundations of mathematics involves nonclassical logics and constructive mathematics. The study of constructive mathematics includes many different programs with various definitions of "constructive". At the most accommodating end, proofs in ZF set theory that do not use the axiom of choice are called constructive by many mathematicians. More limited versions of constructivism limit themselves to natural numbers, number-theoretic functions, and sets of natural numbers (which can be used to represent real numbers, facilitating the study of mathematical analysis). A common idea is that a concrete means of computing the values of the function must be known before the function itself can be said to exist.
In the early 20th century, Luitzen Egbertus Jan Brouwer founded intuitionism as a philosophy of mathematics. This philosophy, poorly understood at first, stated that in order for a mathematical statement to be true to a mathematician, that person must be able to "intuit" the statement, to not only believe its truth but understand the reason for its truth. A consequence of this definition of truth was the rejection of the law of the excluded middle, for there are statements that, according to Brouwer, could not be claimed to be true while their negations also could not be claimed true. Brouwer's philosophy was influential, and the cause of bitter disputes among prominent mathematicians. Later, Kleene and Kreisel would study formalized versions of intuitionistic logic (Brouwer rejected formalization, and presented his work in unformalized natural language). With the advent of the BHK interpretation and Kripke models, intuitionism became easier to reconcile with classical mathematics.
|
https://en.wikipedia.org/wiki?curid=19636
|
Molecular nanotechnology
Molecular nanotechnology (MNT) is a technology based on the ability to build structures to complex, atomic specifications by means of mechanosynthesis. This is distinct from nanoscale materials. Based on Richard Feynman's vision of miniature factories using nanomachines to build complex products (including additional nanomachines), this advanced form of nanotechnology (or "molecular manufacturing") would make use of positionally-controlled mechanosynthesis guided by molecular machine systems. MNT would involve combining physical principles demonstrated by biophysics, chemistry, other nanotechnologies, and the molecular machinery of life with the systems engineering principles found in modern macroscale factories.
While conventional chemistry uses inexact processes obtaining inexact results, and biology exploits inexact processes to obtain definitive results, molecular nanotechnology would employ original definitive processes to obtain definitive results. The desire in molecular nanotechnology would be to balance molecular reactions in positionally-controlled locations and orientations to obtain desired chemical reactions, and then to build systems by further assembling the products of these reactions.
A roadmap for the development of MNT is an objective of a broadly based technology project led by Battelle (the manager of several U.S. National Laboratories) and the Foresight Institute. The roadmap was originally scheduled for completion by late 2006, but was released in January 2008. The Nanofactory Collaboration is a more focused ongoing effort involving 23 researchers from 10 organizations and 4 countries that is developing a practical research agenda specifically aimed at positionally-controlled diamond mechanosynthesis and diamondoid nanofactory development. In August 2005, a task force consisting of 50+ international experts from various fields was organized by the Center for Responsible Nanotechnology to study the societal implications of molecular nanotechnology.
One proposed application of MNT is so-called smart materials. This term refers to any sort of material designed and engineered at the nanometer scale for a specific task. It encompasses a wide variety of possible commercial applications. One example would be materials designed to respond differently to various molecules; such a capability could lead, for example, to artificial drugs which would recognize and render inert specific viruses. Another is the idea of self-healing structures, which would repair small tears in a surface naturally in the same way as self-sealing tires or human skin.
A MNT nanosensor would resemble a smart material, involving a small component within a larger machine that would react to its environment and change in some fundamental, intentional way. A very simple example: a photosensor might passively measure the incident light and discharge its absorbed energy as electricity when the light passes above or below a specified threshold, sending a signal to a larger machine. Such a sensor would supposedly cost less and use less power than a conventional sensor, and yet function usefully in all the same applications — for example, turning on parking lot lights when it gets dark.
While smart materials and nanosensors both exemplify useful applications of MNT, they pale in comparison with the complexity of the technology most popularly associated with the term: the replicating nanorobot.
MNT nanofacturing is popularly linked with the idea of swarms of coordinated nanoscale robots working together, a popularization of an early proposal by K. Eric Drexler in his 1986 discussions of MNT, but superseded in 1992. In this early proposal, sufficiently capable nanorobots would construct more nanorobots in an artificial environment containing special molecular building blocks.
Critics have doubted both the feasibility of self-replicating nanorobots and the feasibility of control if self-replicating nanorobots could be achieved: they cite the possibility of mutations removing any control and favoring reproduction of mutant pathogenic variations. Advocates address the first doubt by pointing out that the first macroscale autonomous machine replicator, made of Lego blocks, was built and operated experimentally in 2002. While there are sensory advantages present at the macroscale compared to the limited sensorium available at the nanoscale, proposals for positionally controlled nanoscale mechanosynthetic fabrication systems employ dead reckoning of tooltips combined with reliable reaction sequence design to ensure reliable results, hence a limited sensorium is no handicap; similar considerations apply to the positional assembly of small nanoparts. Advocates address the second doubt by arguing that bacteria are (of necessity) evolved to evolve, while nanorobot mutation could be actively prevented by common error-correcting techniques. Similar ideas are advocated in the Foresight Guidelines on Molecular Nanotechnology, and a map of the 137-dimensional replicator design space recently published by Freitas and Merkle provides numerous proposed methods by which replicators could, in principle, be safely controlled by good design.
However, the concept of suppressing mutation raises the question: How can design evolution occur at the nanoscale without a process of random mutation and deterministic selection? Critics argue that MNT advocates have not provided a substitute for such a process of evolution in this nanoscale arena where conventional sensory-based selection processes are lacking. The limits of the sensorium available at the nanoscale could make it difficult or impossible to winnow successes from failures. Advocates argue that design evolution should occur deterministically and strictly under human control, using the conventional engineering paradigm of modeling, design, prototyping, testing, analysis, and redesign.
In any event, since 1992 technical proposals for MNT do not include self-replicating nanorobots, and recent ethical guidelines put forth by MNT advocates prohibit unconstrained self-replication.
One of the most important applications of MNT would be medical nanorobotics or nanomedicine, an area pioneered by Robert Freitas in numerous books and papers. The ability to design, build, and deploy large numbers of medical nanorobots would, at a minimum, make possible the rapid elimination of disease and the reliable and relatively painless recovery from physical trauma. Medical nanorobots might also make possible the convenient correction of genetic defects, and help to ensure a greatly expanded lifespan. More controversially, medical nanorobots might be used to augment natural human capabilities. One study has reported on how conditions like tumors, arteriosclerosis, blood clots leading to stroke, accumulation of scar tissue and localized pockets of infection can possibly be addressed by employing medical nanorobots.
Another proposed application of molecular nanotechnology is "utility fog" — in which a cloud of networked microscopic robots (simpler than assemblers) would change its shape and properties to form macroscopic objects and tools in accordance with software commands. Rather than modify the current practices of consuming material goods in different forms, utility fog would simply replace many physical objects.
Yet another proposed application of MNT would be phased-array optics (PAO). However, this appears to be a problem addressable by ordinary nanoscale technology. PAO would use the principle of phased-array millimeter technology but at optical wavelengths. This would permit the virtual duplication of any sort of optical effect. Users could request holograms, sunrises and sunsets, or floating lasers as the mood strikes. PAO systems were described in BC Crandall's "Nanotechnology: Molecular Speculations on Global Abundance" in the Brian Wowk article "Phased-Array Optics."
Molecular manufacturing is a potential future subfield of nanotechnology that would make it possible to build complex structures at atomic precision. Molecular manufacturing requires significant advances in nanotechnology, but once achieved could produce highly advanced products at low costs and in large quantities in nanofactories weighing a kilogram or more. When nanofactories gain the ability to produce other nanofactories production may only be limited by relatively abundant factors such as input materials, energy and software.
The products of molecular manufacturing could range from cheaper, mass-produced versions of known high-tech products to novel products with added capabilities in many areas of application. Some applications that have been suggested are advanced smart materials, nanosensors, medical nanorobots and space travel. Additionally, molecular manufacturing could be used to cheaply produce highly advanced, durable weapons, which is an area of special concern regarding the impact of nanotechnology. Being equipped with compact computers and motors these could be increasingly autonomous and have a large range of capabilities.
According to Chris Phoenix and Mike Treder from the Center for Responsible Nanotechnology, as well as Anders Sandberg from the Future of Humanity Institute, molecular manufacturing is the application of nanotechnology that poses the most significant global catastrophic risk. Several nanotechnology researchers state that the bulk of risk from nanotechnology comes from the potential to lead to war, arms races and destructive global government. Several reasons have been suggested as to why the availability of nanotech weaponry is likely to lead to unstable arms races (compared to, e.g., nuclear arms races): (1) a large number of players may be tempted to enter the race since the threshold for doing so is low; (2) the ability to make weapons with molecular manufacturing will be cheap and easy to hide; (3) the resulting lack of insight into the other parties' capabilities can tempt players to arm out of caution or to launch preemptive strikes; (4) molecular manufacturing may reduce dependency on international trade, a potential peace-promoting factor; (5) wars of aggression may pose a smaller economic threat to the aggressor, since manufacturing is cheap and humans may not be needed on the battlefield.
Since self-regulation by all state and non-state actors seems hard to achieve, measures to mitigate war-related risks have mainly been proposed in the area of international cooperation. International infrastructure may be expanded, giving more sovereignty to the international level; this could help coordinate efforts for arms control. International institutions dedicated specifically to nanotechnology (perhaps analogous to the International Atomic Energy Agency, IAEA) or to general arms control may also be designed. Parties may also jointly make differential technological progress on defensive technologies, a policy that players should usually favour. The Center for Responsible Nanotechnology also suggests some technical restrictions. Improved transparency regarding technological capabilities may be another important facilitator for arms control.
Grey goo is another catastrophic scenario; it was proposed by Eric Drexler in his 1986 book "Engines of Creation", has been analyzed by Freitas in "Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations", and has been a theme in mainstream media and fiction. This scenario involves tiny self-replicating robots that consume the entire biosphere, using it as a source of energy and building blocks. Nanotech experts, including Drexler, now discredit the scenario. According to Chris Phoenix, "So-called grey goo could only be the product of a deliberate and difficult engineering process, not an accident". With the advent of nano-biotech, a different scenario called green goo has been put forward. Here, the malignant substance is not nanobots but rather self-replicating biological organisms engineered through nanotechnology.
Nanotechnology (or molecular nanotechnology to refer more specifically to the goals discussed here) will let us continue the historical trends in manufacturing right up to the fundamental limits imposed by physical law. It will let us make remarkably powerful molecular computers. It will let us make materials over fifty times lighter than steel or aluminium alloy but with the same strength. We'll be able to make jets, rockets, cars or even chairs that, by today's standards, would be remarkably light, strong, and inexpensive. Molecular surgical tools, guided by molecular computers and injected into the blood stream could find and destroy cancer cells or invading bacteria, unclog arteries, or provide oxygen when the circulation is impaired.
Nanotechnology will replace our entire manufacturing base with a new, radically more precise, radically less expensive, and radically more flexible way of making products. The aim is not simply to replace today's computer chip making plants, but also to replace the assembly lines for cars, televisions, telephones, books, surgical tools, missiles, bookcases, airplanes, tractors, and all the rest. The objective is a pervasive change in manufacturing, a change that will leave virtually no product untouched. Economic progress and military readiness in the 21st Century will depend fundamentally on maintaining a competitive position in nanotechnology.
Despite the current early developmental status of nanotechnology and molecular nanotechnology, much concern surrounds MNT's anticipated impact on economics and on law. Whatever the exact effects, MNT, if achieved, would tend to reduce the scarcity of manufactured goods and make many more goods (such as food and health aids) manufacturable.
MNT should make possible nanomedical capabilities able to cure any medical condition not already cured by advances in other areas. Good health would be common, and poor health of any form would be as rare as smallpox and scurvy are today. Even cryonics would be feasible, as cryopreserved tissue could be fully repaired.
Molecular nanotechnology is one of the technologies that some analysts believe could lead to a technological singularity, in which technological growth has accelerated to the point of having unpredictable effects. Some effects could be beneficial, while others could be detrimental, such as the utilization of molecular nanotechnology by an unfriendly artificial general intelligence.
Some feel that molecular nanotechnology would have daunting risks. It conceivably could enable cheaper and more destructive conventional weapons. Also, molecular nanotechnology might permit weapons of mass destruction that could self-replicate, as viruses and cancer cells do when attacking the human body. Commentators generally agree that, in the event molecular nanotechnology were developed, its self-replication should be permitted only under very controlled or "inherently safe" conditions.
A fear exists that nanomechanical robots, if achieved, and if designed to self-replicate using naturally occurring materials (a difficult task), could consume the entire planet in their hunger for raw materials, or simply crowd out natural life, out-competing it for energy (as happened historically when blue-green algae appeared and outcompeted earlier life forms). Some commentators have referred to this situation as the "grey goo" or "ecophagy" scenario. K. Eric Drexler considers an accidental "grey goo" scenario extremely unlikely and says so in later editions of "Engines of Creation".
In light of this perception of potential danger, the Foresight Institute, founded by Drexler, has prepared a set of guidelines for the ethical development of nanotechnology. These include the banning of free-foraging self-replicating pseudo-organisms on the Earth's surface, at least, and possibly in other places.
The feasibility of the basic technologies analyzed in "Nanosystems" has been the subject of a formal scientific review by the U.S. National Academy of Sciences, and has also been the focus of extensive debate on the internet and in the popular press.
In 2006, the U.S. National Academy of Sciences released the report of a study of molecular manufacturing as part of a longer report, "A Matter of Size: Triennial Review of the National Nanotechnology Initiative". The study committee reviewed the technical content of "Nanosystems", and in its conclusion states that no current theoretical analysis can be considered definitive regarding several questions of potential system performance, and that optimal paths for implementing high-performance systems cannot be predicted with confidence. It recommends experimental research to advance knowledge in this area.
A section heading in Drexler's "Engines of Creation" reads "Universal Assemblers", and the following text speaks of multiple types of assemblers which, collectively, could hypothetically "build almost anything that the laws of nature allow to exist." Drexler's colleague Ralph Merkle has noted that, contrary to widespread legend, Drexler never claimed that assembler systems could build absolutely any molecular structure. The endnotes in Drexler's book explain the qualification "almost": "For example, a delicate structure might be designed that, like a stone arch, would self-destruct unless all its pieces were already in place. If there were no room in the design for the placement and removal of a scaffolding, then the structure might be impossible to build. Few structures of practical interest seem likely to exhibit such a problem, however."
In 1992, Drexler published "Nanosystems: Molecular Machinery, Manufacturing, and Computation", a detailed proposal for synthesizing stiff covalent structures using a table-top factory. Diamondoid structures and other stiff covalent structures, if achieved, would have a wide range of possible applications, going far beyond current MEMS technology. An outline of a path was put forward in 1992 for building a table-top factory in the absence of an assembler. Other researchers have begun advancing tentative, alternative proposed paths for this in the years since Nanosystems was published.
In 2004 Richard Jones wrote "Soft Machines (Nanotechnology and Life)", a book for lay audiences published by Oxford University Press. In this book he describes radical nanotechnology (as advocated by Drexler) as a deterministic/mechanistic idea of nano-engineered machines that does not take into account nanoscale challenges such as wetness, stickiness, Brownian motion, and high viscosity. He also explains what soft nanotechnology, or more appropriately biomimetic nanotechnology, is, and why it is the way forward, if not the best way, to design functional nanodevices that can cope with all the problems at the nanoscale. One can think of soft nanotechnology as the development of nanomachines that use the lessons learned from biology on how things work, chemistry to precisely engineer such devices, and stochastic physics to model the system and its natural processes in detail.
Several researchers, including Nobel Prize winner Dr. Richard Smalley (1943–2005), attacked the notion of universal assemblers, leading to a rebuttal from Drexler and colleagues, and eventually to an exchange of letters. Smalley argued that chemistry is extremely complicated, reactions are hard to control, and that a universal assembler is science fiction. Drexler and colleagues, however, noted that Drexler never proposed universal assemblers able to make absolutely anything, but instead proposed more limited assemblers able to make a very wide variety of things. They challenged the relevance of Smalley's arguments to the more specific proposals advanced in "Nanosystems". Also, Smalley argued that nearly all of modern chemistry involves reactions that take place in a solvent (usually water), because the small molecules of a solvent contribute many things, such as lowering binding energies for transition states. Since nearly all known chemistry requires a solvent, Smalley felt that Drexler's proposal to use a high vacuum environment was not feasible. However, Drexler addresses this in Nanosystems by showing mathematically that well designed catalysts can provide the effects of a solvent and can fundamentally be made even more efficient than a solvent/enzyme reaction could ever be. It is noteworthy that, contrary to Smalley's opinion that enzymes require water, "Not only do enzymes work vigorously in anhydrous organic media, but in this unnatural milieu they acquire remarkable properties such as greatly enhanced stability, radically altered substrate and enantiomeric specificities, molecular memory, and the ability to catalyse unusual reactions."
For the future, some means have to be found for MNT design evolution at the nanoscale which mimics the process of biological evolution at the molecular scale. Biological evolution proceeds by random variation in ensemble averages of organisms combined with culling of the less-successful variants and reproduction of the more-successful variants, and macroscale engineering design also proceeds by a process of design evolution from simplicity to complexity, as set forth somewhat satirically by John Gall: "A complex system that works is invariably found to have evolved from a simple system that worked. . . . A complex system designed from scratch never works and can not be patched up to make it work. You have to start over, beginning with a system that works." A breakthrough in MNT is needed which proceeds from the simple atomic ensembles which can be built with, e.g., an STM to complex MNT systems via a process of design evolution. A handicap in this process is the difficulty of seeing and manipulating at the nanoscale compared to the macroscale, which makes deterministic selection of successful trials difficult; in contrast, biological evolution proceeds via the action of what Richard Dawkins has called the "blind watchmaker", comprising random molecular variation and deterministic reproduction/extinction.
As of 2007, the practice of nanotechnology embraces both stochastic approaches (in which, for example, supramolecular chemistry creates waterproof pants) and deterministic approaches wherein single molecules (created by stochastic chemistry) are manipulated on substrate surfaces (created by stochastic deposition methods) by deterministic methods comprising nudging them with STM or AFM probes and causing simple binding or cleavage reactions to occur. The dream of a complex, deterministic molecular nanotechnology remains elusive. Since the mid-1990s, thousands of surface scientists and thin film technocrats have latched on to the nanotechnology bandwagon and redefined their disciplines as nanotechnology. This has caused much confusion in the field and has spawned thousands of "nano"-papers in the peer-reviewed literature. Most of these reports are extensions of the more ordinary research done in the parent fields.
The feasibility of Drexler's proposals largely depends, therefore, on whether designs like those in "Nanosystems" could be built in the absence of a universal assembler to build them and would work as described. Supporters of molecular nanotechnology frequently claim that no significant errors have been discovered in "Nanosystems" since 1992. Even some critics concede that "Drexler has carefully considered a number of physical principles underlying the 'high level' aspects of the nanosystems he proposes and, indeed, has thought in some detail" about some issues.
Other critics claim, however, that "Nanosystems" omits important chemical details about the low-level 'machine language' of molecular nanotechnology. They also claim that much of the other low-level chemistry in "Nanosystems" requires extensive further work, and that Drexler's higher-level designs therefore rest on speculative foundations. Recent such further work by Freitas and Merkle is aimed at strengthening these foundations by filling the existing gaps in the low-level chemistry.
Drexler argues that we may need to wait until our conventional nanotechnology improves before solving these issues: "Molecular manufacturing will result from a series of advances in molecular machine systems, much as the first Moon landing resulted from a series of advances in liquid-fuel rocket systems. We are now in a position like that of the British Interplanetary Society of the 1930s which described how multistage liquid-fueled rockets could reach the Moon and pointed to early rockets as illustrations of the basic principle." However, Freitas and Merkle argue that a focused effort to achieve diamond mechanosynthesis (DMS) can begin now, using existing technology, and might achieve success in less than a decade if their "direct-to-DMS approach is pursued rather than a more circuitous development approach that seeks to implement less efficacious nondiamondoid molecular manufacturing technologies before progressing to diamondoid".
To summarize the arguments against feasibility: First, critics argue that a primary barrier to achieving molecular nanotechnology is the lack of an efficient way to create machines on a molecular/atomic scale, especially in the absence of a well-defined path toward a self-replicating assembler or diamondoid nanofactory. Advocates respond that a preliminary research path leading to a diamondoid nanofactory is being developed.
A second difficulty in reaching molecular nanotechnology is design. Hand design of a gear or bearing at the level of atoms might take a few to several weeks. While Drexler, Merkle and others have created designs of simple parts, no comprehensive design effort for anything approaching the complexity of a Model T Ford has been attempted. Advocates respond that it is difficult to undertake a comprehensive design effort in the absence of significant funding for such efforts, and that despite this handicap much useful design-ahead has nevertheless been accomplished with new software tools that have been developed, e.g., at Nanorex.
In the latest report, "A Matter of Size: Triennial Review of the National Nanotechnology Initiative", put out by the National Academies Press in December 2006 (roughly twenty years after "Engines of Creation" was published), no clear way forward toward molecular nanotechnology could yet be seen, as per the conclusion on page 108 of that report: "Although theoretical calculations can be made today, the eventually attainable range of chemical reaction cycles, error rates, speed of operation, and thermodynamic efficiencies of such bottom-up manufacturing systems cannot be reliably predicted at this time. Thus, the eventually attainable perfection and complexity of manufactured products, while they can be calculated in theory, cannot be predicted with confidence. Finally, the optimum research paths that might lead to systems which greatly exceed the thermodynamic efficiencies and other capabilities of biological systems cannot be reliably predicted at this time. Research funding that is based on the ability of investigators to produce experimental demonstrations that link to abstract models and guide long-term vision is most appropriate to achieve this goal." This call for research leading to demonstrations is welcomed by groups such as the Nanofactory Collaboration who are specifically seeking experimental successes in diamond mechanosynthesis. The "Technology Roadmap for Productive Nanosystems" aims to offer additional constructive insights.
It is perhaps interesting to ask whether or not most structures consistent with physical law can in fact be manufactured. Advocates assert that to achieve most of the vision of molecular manufacturing it is not necessary to be able to build "any structure that is compatible with natural law." Rather, it is necessary to be able to build only a sufficient (possibly modest) subset of such structures—as is true, in fact, of any practical manufacturing process used in the world today, and is true even in biology. In any event, as Richard Feynman once said, "It is scientific only to say what's more likely or less likely, and not to be proving all the time what's possible or impossible."
There is a growing body of peer-reviewed theoretical work on synthesizing diamond by mechanically removing/adding hydrogen atoms and depositing carbon atoms (a process known as mechanosynthesis). This work is slowly permeating the broader nanoscience community and is being critiqued. For instance, Peng et al. (2006) (in the continuing research effort by Freitas, Merkle and their collaborators) reports that the most-studied mechanosynthesis tooltip motif (DCB6Ge) successfully places a C2 carbon dimer on a C(110) diamond surface at both 300 K (room temperature) and 80 K (liquid nitrogen temperature), and that the silicon variant (DCB6Si) also works at 80 K but not at 300 K. Over 100,000 CPU hours were invested in this latest study. The DCB6 tooltip motif, initially described by Merkle and Freitas at a Foresight Conference in 2002, was the first complete tooltip ever proposed for diamond mechanosynthesis and remains the only tooltip motif that has been successfully simulated for its intended function on a full 200-atom diamond surface.
The tooltips modeled in this work are intended to be used only in carefully controlled environments (e.g., vacuum). Maximum acceptable limits for tooltip translational and rotational misplacement errors are reported in Peng et al. (2006); tooltips must be positioned with great accuracy to avoid bonding the dimer incorrectly. Peng et al. (2006) reports that increasing the handle thickness from 4 support planes of C atoms above the tooltip to 5 planes decreases the resonance frequency of the entire structure from 2.0 THz to 1.8 THz. More importantly, the vibrational footprints of a DCB6Ge tooltip mounted on a 384-atom handle and of the same tooltip mounted on a similarly constrained but much larger 636-atom "crossbar" handle are virtually identical in the non-crossbar directions. Additional computational studies modeling still bigger handle structures are welcome, but the ability to precisely position SPM tips to the requisite atomic accuracy has been repeatedly demonstrated experimentally at low temperature, or even at room temperature, constituting a basic existence proof for this capability.
Further research to consider additional tooltips will require time-consuming computational chemistry and difficult laboratory work.
A working nanofactory would require a variety of well-designed tips for different reactions, and detailed analyses of placing atoms on more complicated surfaces. Although this appears a challenging problem given current resources, many tools will be available to help future researchers: Moore's law predicts further increases in computer power, semiconductor fabrication techniques continue to approach the nanoscale, and researchers grow ever more skilled at using proteins, ribosomes and DNA to perform novel chemistry.
|
https://en.wikipedia.org/wiki?curid=19637
|
Microelectromechanical systems
Microelectromechanical systems (MEMS), also written as micro-electro-mechanical systems (or microelectronic and microelectromechanical systems) and the related micromechatronics and microsystems constitute the technology of microscopic devices, particularly those with moving parts. They merge at the nanoscale into nanoelectromechanical systems (NEMS) and nanotechnology. MEMS are also referred to as micromachines in Japan and microsystem technology (MST) in Europe.
MEMS are made up of components between 1 and 100 micrometres in size (i.e., 0.001 to 0.1 mm), and MEMS devices generally range in size from 20 micrometres to a millimetre (i.e., 0.02 to 1.0 mm), although components arranged in arrays (e.g., digital micromirror devices) can be more than 1000 mm² in size.
They usually consist of a central unit that processes data (an integrated circuit chip such as a microprocessor) and several components that interact with the surroundings (such as microsensors). Because of the large surface-area-to-volume ratio of MEMS, forces produced by ambient electromagnetism (e.g., electrostatic charges and magnetic moments) and fluid dynamics (e.g., surface tension and viscosity) are more important design considerations than with larger-scale mechanical devices. MEMS technology is distinguished from molecular nanotechnology or molecular electronics in that the latter two must also consider surface chemistry.
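As an illustration of why this is so, note that surface-mediated forces scale roughly with area (L²) while inertial and gravitational forces scale with volume (L³), so their ratio grows as 1/L as devices shrink. The sketch below is illustrative only; the prefactors are arbitrary and not taken from any particular MEMS design.

```python
# Illustrative scaling sketch: a surface force (~L^2) versus a volume force (~L^3).
# The prefactors are arbitrary; only the dependence of the ratio on L matters.
def surface_to_volume_force_ratio(length_m: float) -> float:
    surface_force = length_m ** 2   # e.g. electrostatic or capillary force, proportional to area
    volume_force = length_m ** 3    # e.g. weight or inertia, proportional to volume (mass)
    return surface_force / volume_force  # grows as 1/L as the device shrinks

for L in (1.0, 1e-3, 20e-6):        # 1 m, 1 mm, and a 20 micrometre MEMS feature
    print(f"L = {L:.0e} m -> surface/volume force ratio ~ {surface_to_volume_force_ratio(L):.0e}")
```

At a 20 micrometre scale the ratio is some fifty thousand times larger than at one metre, which is why electrostatic, capillary and viscous effects dominate MEMS design while weight is usually negligible.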
The potential of very small machines was appreciated before the technology existed that could make them (see, for example, Richard Feynman's famous 1959 lecture There's Plenty of Room at the Bottom). MEMS became practical once they could be fabricated using modified semiconductor device fabrication technologies, normally used to make electronics. These include molding and plating, wet etching (KOH, TMAH) and dry etching (RIE and DRIE), electrical discharge machining (EDM), and other technologies capable of manufacturing small devices.
MEMS technology has roots in the silicon revolution, which can be traced back to two important silicon semiconductor inventions from 1959: the monolithic integrated circuit (IC) chip by Robert Noyce at Fairchild Semiconductor, and the MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor) by Mohamed M. Atalla and Dawon Kahng at Bell Labs. MOSFET scaling, the miniaturisation of MOSFETs on IC chips, led to the miniaturisation of electronics (as predicted by Moore's law and Dennard scaling). This laid the foundations for the miniaturisation of mechanical systems, with the development of micromachining technology based on silicon semiconductor technology, as engineers began realizing that silicon chips and MOSFETs could interact and communicate with the surroundings and process things such as chemicals, motions and light. One of the first silicon pressure sensors was isotropically micromachined by Honeywell in 1962.
An early example of a MEMS device is the resonant-gate transistor, an adaptation of the MOSFET, developed by Harvey C. Nathanson in 1965. Another early example is the resonistor, an electromechanical monolithic resonator patented by Raymond J. Wilfinger between 1966 and 1971. During the 1970s to early 1980s, a number of MOSFET microsensors were developed for measuring physical, chemical, biological and environmental parameters.
There are two basic types of MEMS switch technology: capacitive and ohmic. A capacitive MEMS switch is developed using a moving plate or sensing element, which changes the capacitance. Ohmic switches are controlled by electrostatically controlled cantilevers. Ohmic MEMS switches can fail from metal fatigue of the MEMS actuator (cantilever) and contact wear, since cantilevers can deform over time.
The fabrication of MEMS evolved from the process technology in semiconductor device fabrication, i.e. the basic techniques are deposition of material layers, patterning by photolithography and etching to produce the required shapes.
Silicon is the material used to create most integrated circuits used in consumer electronics in the modern industry. The economies of scale, ready availability of inexpensive high-quality materials, and ability to incorporate electronic functionality make silicon attractive for a wide variety of MEMS applications. Silicon also has significant advantages engendered through its material properties. In single crystal form, silicon is an almost perfect Hookean material, meaning that when it is flexed there is virtually no hysteresis and hence almost no energy dissipation. As well as making for highly repeatable motion, this also makes silicon very reliable as it suffers very little fatigue and can have service lifetimes in the range of billions to trillions of cycles without breaking. Semiconductor nanostructures based on silicon are gaining increasing importance in the field of microelectronics and MEMS in particular. Silicon nanowires, fabricated through the thermal oxidation of silicon, are of further interest in electrochemical conversion and storage, including nanowire batteries and photovoltaic systems.
Even though the electronics industry provides an economy of scale for the silicon industry, crystalline silicon is still a complex and relatively expensive material to produce. Polymers on the other hand can be produced in huge volumes, with a great variety of material characteristics. MEMS devices can be made from polymers by processes such as injection molding, embossing or stereolithography and are especially well suited to microfluidic applications such as disposable blood testing cartridges.
Metals can also be used to create MEMS elements. While metals do not have some of the advantages displayed by silicon in terms of mechanical properties, when used within their limitations, metals can exhibit very high degrees of reliability. Metals can be deposited by electroplating, evaporation, and sputtering processes. Commonly used metals include gold, nickel, aluminium, copper, chromium, titanium, tungsten, platinum, and silver.
The nitrides of silicon, aluminium and titanium as well as silicon carbide and other ceramics are increasingly applied in MEMS fabrication due to advantageous combinations of material properties. AlN crystallizes in the wurtzite structure and thus shows pyroelectric and piezoelectric properties, enabling sensors, for instance, with sensitivity to normal and shear forces. TiN, on the other hand, exhibits a high electrical conductivity and large elastic modulus, making it possible to implement electrostatic MEMS actuation schemes with ultrathin beams. Moreover, the high resistance of TiN against biocorrosion qualifies the material for applications in biogenic environments. One example, shown in electron-microscopic images in the literature, is a MEMS biosensor with a 50 nm thin bendable TiN beam above a TiN ground plate. Both can be driven as opposite electrodes of a capacitor, since the beam is fixed in electrically isolating side walls. When a fluid is suspended in the cavity, its viscosity may be derived by bending the beam through electrical attraction to the ground plate and measuring the bending velocity.
One of the basic building blocks in MEMS processing is the ability to deposit thin films of material with a thickness anywhere between one micrometre and about 100 micrometres. The NEMS process is the same, although the deposited film thickness ranges from a few nanometres to one micrometre. There are two types of deposition processes, as follows.
Physical vapor deposition ("PVD") consists of a process in which a material is removed from a target, and deposited on a surface. Techniques to do this include the process of sputtering, in which an ion beam liberates atoms from a target, allowing them to move through the intervening space and deposit on the desired substrate, and evaporation, in which a material is evaporated from a target using either heat (thermal evaporation) or an electron beam (e-beam evaporation) in a vacuum system.
Chemical deposition techniques include chemical vapor deposition (CVD), in which a stream of source gas reacts on the substrate to grow the material desired. This can be further divided into categories depending on the details of the technique, for example LPCVD (low-pressure chemical vapor deposition) and PECVD (plasma-enhanced chemical vapor deposition).
Oxide films can also be grown by the technique of thermal oxidation, in which the (typically silicon) wafer is exposed to oxygen and/or steam, to grow a thin surface layer of silicon dioxide.
Patterning in MEMS is the transfer of a pattern into a material.
Lithography in the MEMS context is typically the transfer of a pattern into a photosensitive material by selective exposure to a radiation source such as light. A photosensitive material is a material that experiences a change in its physical properties when exposed to a radiation source. If a photosensitive material is selectively exposed to radiation (e.g. by masking some of the radiation), the pattern of the radiation on the material is transferred to the material exposed, as the properties of the exposed and unexposed regions differ.
This exposed region can then be removed or treated, providing a mask for the underlying substrate. Photolithography is typically used with metal or other thin-film deposition, and with wet and dry etching. Sometimes, photolithography is used to create structures without any kind of post-etching. One example is an SU-8-based lens, in which SU-8-based square blocks are generated and the photoresist is then melted to form a semi-sphere which acts as a lens.
Electron beam lithography (often abbreviated as e-beam lithography) is the practice of scanning a beam of electrons in a patterned fashion across a surface covered with a film (called the resist), ("exposing" the resist) and of selectively removing either exposed or non-exposed regions of the resist ("developing"). The purpose, as with photolithography, is to create very small structures in the resist that can subsequently be transferred to the substrate material, often by etching. It was developed for manufacturing integrated circuits, and is also used for creating nanotechnology architectures.
The primary advantage of electron beam lithography is that it is one of the ways to beat the diffraction limit of light and make features in the nanometer range. This form of maskless lithography has found wide usage in photomask-making used in photolithography, low-volume production of semiconductor components, and research & development.
The key limitation of electron beam lithography is throughput, i.e., the very long time it takes to expose an entire silicon wafer or glass substrate. A long exposure time leaves the user vulnerable to beam drift or instability which may occur during the exposure. Also, the turn-around time for reworking or re-design is lengthened unnecessarily if the pattern is not being changed the second time.
It is known that focused-ion beam lithography has the capability of writing extremely fine lines (less than 50 nm line and space has been achieved) without proximity effect. However, because the writing field in ion-beam lithography is quite small, large area patterns must be created by stitching together the small fields.
Ion track technology is a deep cutting tool with a resolution limit around 8 nm applicable to radiation-resistant minerals, glasses and polymers. It is capable of generating holes in thin films without any development process. Structural depth can be defined either by ion range or by material thickness. Aspect ratios up to several 10⁴ can be reached. The technique can shape and texture materials at a defined inclination angle. Random patterns, single-ion track structures and aimed patterns consisting of individual single tracks can be generated.
X-ray lithography is a process used in electronic industry to selectively remove parts of a thin film. It uses X-rays to transfer a geometric pattern from a mask to a light-sensitive chemical photoresist, or simply "resist", on the substrate. A series of chemical treatments then engraves the produced pattern into the material underneath the photoresist.
A simple way to carve or create patterns on the surface of nanodiamonds without damaging them could lead to new photonic devices.
Diamond patterning is a method of forming diamond MEMS. It is achieved by the lithographic application of diamond films to a substrate such as silicon. The patterns can be formed by selective deposition through a silicon dioxide mask, or by deposition followed by micromachining or focused ion beam milling.
There are two basic categories of etching processes: wet etching and dry etching. In the former, the material is dissolved when immersed in a chemical solution. In the latter, the material is sputtered or dissolved using reactive ions or a vapor phase etchant.
Wet chemical etching consists of the selective removal of material by dipping a substrate into a solution that dissolves it. The chemical nature of this etching process provides good selectivity, which means the etching rate of the target material is considerably higher than that of the mask material, if the mask is selected carefully.
Etching progresses at the same speed in all directions. Long and narrow holes in a mask will produce v-shaped grooves in the silicon. The surface of these grooves can be atomically smooth if the etch is carried out correctly, with dimensions and angles being extremely accurate.
Some single crystal materials, such as silicon, will have different etching rates depending on the crystallographic orientation of the substrate. This is known as anisotropic etching and one of the most common examples is the etching of silicon in KOH (potassium hydroxide), where Si <111> planes etch approximately 100 times slower than other planes (crystallographic orientations). Therefore, etching a rectangular hole in a (100)-Si wafer results in a pyramid shaped etch pit with 54.7° walls, instead of a hole with curved sidewalls as with isotropic etching.
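The 54.7° sidewall angle quoted above follows directly from crystal geometry: it is the angle between the (100) wafer surface and the slow-etching (111) planes, obtained from the normalised dot product of the two plane normals:

```latex
\cos\theta = \frac{(1,0,0)\cdot(1,1,1)}{\lVert(1,0,0)\rVert \, \lVert(1,1,1)\rVert}
           = \frac{1}{\sqrt{3}}
\qquad\Longrightarrow\qquad
\theta = \arccos\frac{1}{\sqrt{3}} \approx 54.7^{\circ}
```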
Hydrofluoric acid is commonly used as an aqueous etchant for silicon dioxide (SiO₂, also known as BOX for SOI), usually in 49% concentrated form, or as 5:1, 10:1 or 20:1 BOE (buffered oxide etchant) or BHF (buffered HF). It was first used in medieval times for glass etching, and was used in IC fabrication for patterning the gate oxide until the process step was replaced by RIE.
Hydrofluoric acid is considered one of the more dangerous acids in the cleanroom. It penetrates the skin upon contact and it diffuses straight to the bone. Therefore, the damage is not felt until it is too late.
Electrochemical etching (ECE) for dopant-selective removal of silicon is a common method to automate and to selectively control etching. An active p-n diode junction is required, and either type of dopant can be the etch-resistant ("etch-stop") material. Boron is the most common etch-stop dopant. In combination with wet anisotropic etching as described above, ECE has been used successfully for controlling silicon diaphragm thickness in commercial piezoresistive silicon pressure sensors. Selectively doped regions can be created either by implantation, diffusion, or epitaxial deposition of silicon.
Xenon difluoride (XeF₂) is a dry vapor-phase isotropic etch for silicon originally applied for MEMS in 1995 at the University of California, Los Angeles. Primarily used for releasing metal and dielectric structures by undercutting silicon, XeF₂ has the advantage of a stiction-free release, unlike wet etchants. Its etch selectivity to silicon is very high, allowing it to work with photoresist, SiO₂, silicon nitride, and various metals for masking. Its reaction with silicon is "plasmaless", purely chemical and spontaneous, and it is often operated in pulsed mode. Models of the etching action are available, and university laboratories and various commercial tools offer solutions using this approach.
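The spontaneous, plasmaless character of this etch reflects the overall gas–solid reaction, which consumes silicon and yields only volatile products (the standard stoichiometry, shown here for reference):

```latex
2\,\mathrm{XeF_2\,(g)} + \mathrm{Si\,(s)} \longrightarrow 2\,\mathrm{Xe\,(g)} + \mathrm{SiF_4\,(g)}
```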
Modern VLSI processes avoid wet etching, and use plasma etching instead. Plasma etchers can operate in several modes by adjusting the parameters of the plasma. Ordinary plasma etching operates between 0.1 and 5 Torr. (This unit of pressure, commonly used in vacuum engineering, equals approximately 133.3 pascals.) The plasma produces energetic free radicals, neutrally charged, that react at the surface of the wafer. Since neutral particles attack the wafer from all angles, this process is isotropic.
Plasma etching can be isotropic, i.e., exhibiting a lateral undercut rate on a patterned surface approximately the same as its downward etch rate, or can be anisotropic, i.e., exhibiting a smaller lateral undercut rate than its downward etch rate. Such anisotropy is maximized in deep reactive ion etching. The use of the term anisotropy for plasma etching should not be conflated with the use of the same term when referring to orientation-dependent etching.
The source gas for the plasma usually contains small molecules rich in chlorine or fluorine. For instance, carbon tetrachloride (CCl₄) etches silicon and aluminium, and trifluoromethane (CHF₃) etches silicon dioxide and silicon nitride. A plasma containing oxygen is used to oxidize ("ash") photoresist and facilitate its removal.
Ion milling, or sputter etching, uses lower pressures, often as low as 10⁻⁴ Torr (10 mPa). It bombards the wafer with energetic ions of noble gases, often Ar⁺, which knock atoms from the substrate by transferring momentum. Because the etching is performed by ions, which approach the wafer approximately from one direction, this process is highly anisotropic. On the other hand, it tends to display poor selectivity. Reactive-ion etching (RIE) operates under conditions intermediate between sputter and plasma etching (between 10⁻³ and 10⁻¹ Torr). Deep reactive-ion etching (DRIE) modifies the RIE technique to produce deep, narrow features.
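For comparison across these regimes, the pressures quoted here can be converted with the 1 Torr ≈ 133.3 Pa relation noted above; a minimal sketch, with the regime boundaries taken directly from the text:

```python
TORR_TO_PA = 133.322  # 1 Torr expressed in pascals

regimes_torr = {
    "ordinary plasma etching": (0.1, 5.0),
    "reactive-ion etching (RIE)": (1e-3, 1e-1),
    "ion milling / sputter etching": (1e-4, 1e-4),
}

for name, (low, high) in regimes_torr.items():
    print(f"{name}: {low * TORR_TO_PA:.3g} Pa to {high * TORR_TO_PA:.3g} Pa")
```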
In reactive-ion etching (RIE), the substrate is placed inside a reactor, and several gases are introduced. A plasma is struck in the gas mixture using an RF power source, which breaks the gas molecules into ions. The ions accelerate towards, and react with, the surface of the material being etched, forming another gaseous material. This is known as the chemical part of reactive ion etching. There is also a physical part, which is similar to the sputtering deposition process. If the ions have high enough energy, they can knock atoms out of the material to be etched without a chemical reaction. It is a very complex task to develop dry etch processes that balance chemical and physical etching, since there are many parameters to adjust. By changing the balance it is possible to influence the anisotropy of the etching; since the chemical part is isotropic and the physical part highly anisotropic, the combination can form sidewalls that have shapes from rounded to vertical.
Deep RIE (DRIE) is a special subclass of RIE that is growing in popularity. In this process, etch depths of hundreds of micrometres are achieved with almost vertical sidewalls. The primary technology is based on the so-called "Bosch process", named after the German company Robert Bosch, which filed the original patent, where two different gas compositions alternate in the reactor. Currently there are two variations of the DRIE. The first variation consists of three distinct steps (the original Bosch process) while the second variation only consists of two steps.
In the first variation, the etch cycle is as follows:
(i) isotropic etch;
(ii) passivation;
(iii) anisotropic etch for floor cleaning.
In the 2nd variation, steps (i) and (iii) are combined.
Both variations operate similarly. The C₄F₈ creates a polymer on the surface of the substrate, and the second gas composition (SF₆ and O₂) etches the substrate. The polymer is immediately sputtered away by the physical part of the etching, but only on the horizontal surfaces and not the sidewalls. Since the polymer only dissolves very slowly in the chemical part of the etching, it builds up on the sidewalls and protects them from etching. As a result, etching aspect ratios of 50 to 1 can be achieved. The process can easily be used to etch completely through a silicon substrate, and etch rates are 3–6 times higher than wet etching.
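As a worked example of what a 50:1 aspect ratio implies (the wafer thickness below is a typical illustrative value, not a figure from the text):

```python
def min_trench_width_um(etch_depth_um: float, aspect_ratio: float = 50.0) -> float:
    """Narrowest trench that can be etched to a given depth at a given aspect ratio."""
    return etch_depth_um / aspect_ratio

# Etching completely through a roughly 500 micrometre-thick silicon wafer at 50:1
print(min_trench_width_um(500.0))  # -> 10.0, i.e. a 10 micrometre-wide trench
```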
After preparing a large number of MEMS devices on a silicon wafer, individual dies have to be separated, which is called die preparation in semiconductor technology. For some applications, the separation is preceded by wafer backgrinding in order to reduce the wafer thickness. Wafer dicing may then be performed either by sawing using a cooling liquid or a dry laser process called stealth dicing.
Bulk micromachining is the oldest paradigm of silicon-based MEMS. The whole thickness of a silicon wafer is used for building the micro-mechanical structures. Silicon is machined using various etching processes. Anodic bonding of glass plates or additional silicon wafers is used for adding features in the third dimension and for hermetic encapsulation. Bulk micromachining has been essential in enabling high-performance pressure sensors and accelerometers that changed the sensor industry in the 1980s and 1990s.
Surface micromachining uses layers deposited on the surface of a substrate as the structural materials, rather than using the substrate itself. Surface micromachining was created in the late 1980s to render micromachining of silicon more compatible with planar integrated circuit technology, with the goal of combining MEMS and integrated circuits on the same silicon wafer. The original surface micromachining concept was based on thin polycrystalline silicon layers patterned as movable mechanical structures and released by sacrificial etching of the underlying oxide layer. Interdigital comb electrodes were used to produce in-plane forces and to detect in-plane movement capacitively. This MEMS paradigm has enabled the manufacturing of low cost accelerometers for e.g. automotive air-bag systems and other applications where low performance and/or high g-ranges are sufficient. Analog Devices has pioneered the industrialization of surface micromachining and has realized the co-integration of MEMS and integrated circuits.
To control the size of micro- and nano-scale components, so-called etchless processes are often applied. This approach to MEMS fabrication relies mostly on the oxidation of silicon, as described by the Deal–Grove model. Thermal oxidation processes are used to produce diverse silicon structures with highly precise dimensional control. Devices including optical frequency combs and silicon MEMS pressure sensors have been produced through the use of thermal oxidation processes to fine-tune silicon structures in one or two dimensions. Thermal oxidation is of particular value in the fabrication of silicon nanowires, which are widely employed in MEMS systems as both mechanical and electrical components.
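The Deal–Grove model referred to here relates oxide thickness x to oxidation time t through x² + Ax = B(t + τ), where B is the parabolic rate constant, B/A the linear rate constant, and τ accounts for any initial oxide. Below is a minimal sketch solving this relation; the rate constants are placeholder values for illustration only, since the real constants depend on temperature, oxidizing ambient and crystal orientation.

```python
import math

def deal_grove_thickness_um(t_hours: float, A_um: float, B_um2_per_h: float,
                            tau_hours: float = 0.0) -> float:
    """Oxide thickness (um) from the Deal-Grove relation x^2 + A*x = B*(t + tau)."""
    # Take the physically meaningful (positive) root of the quadratic.
    return (-A_um + math.sqrt(A_um ** 2 + 4.0 * B_um2_per_h * (t_hours + tau_hours))) / 2.0

# Placeholder rate constants chosen purely for illustration:
A, B = 0.165, 0.0117   # um and um^2/h
for t in (0.5, 1, 2, 4):
    print(f"t = {t} h -> oxide thickness ~ {deal_grove_thickness_um(t, A, B):.3f} um")
```

The model captures the characteristic behaviour of thermal oxidation: growth is roughly linear in time for thin oxides and slows toward a square-root dependence as the existing oxide increasingly limits diffusion of the oxidant to the silicon interface.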
Both bulk and surface silicon micromachining are used in the industrial production of sensors, ink-jet nozzles, and other devices. But in many cases the distinction between these two has diminished. A new etching technology, deep reactive-ion etching, has made it possible to combine good performance typical of bulk micromachining with comb structures and in-plane operation typical of surface micromachining. While it is common in surface micromachining to have structural layer thickness in the range of 2 µm, in HAR silicon micromachining the thickness can be from 10 to 100 µm. The materials commonly used in HAR silicon micromachining are thick polycrystalline silicon, known as epi-poly, and bonded silicon-on-insulator (SOI) wafers although processes for bulk silicon wafer also have been created (SCREAM). Bonding a second wafer by glass frit bonding, anodic bonding or alloy bonding is used to protect the MEMS structures. Integrated circuits are typically not combined with HAR silicon micromachining.
Some common commercial applications of MEMS include:
The global market for micro-electromechanical systems, which includes products such as automobile airbag systems, display systems and inkjet cartridges, totaled $40 billion in 2006 according to "Global MEMS/Microsystems Markets and Opportunities", a research report from SEMI and Yole Développement, and is forecast to reach $72 billion by 2011.
Companies with strong MEMS programs come in many sizes. Larger firms specialize in manufacturing high volume inexpensive components or packaged solutions for end markets such as automobiles, biomedical, and electronics. Smaller firms provide value in innovative solutions and absorb the expense of custom fabrication with high sales margins. Both large and small companies typically invest in R&D to explore new MEMS technology.
The market for materials and equipment used to manufacture MEMS devices topped $1 billion worldwide in 2006. Materials demand is driven by substrates, which make up over 70 percent of the market, by packaging coatings, and by the increasing use of chemical mechanical planarization (CMP). While MEMS manufacturing continues to be dominated by used semiconductor equipment, there is a migration to 200 mm lines and select new tools, including etch and bonding for certain MEMS applications.
|
https://en.wikipedia.org/wiki?curid=19638
|
Marvin Minsky
Marvin Lee Minsky (August 9, 1927 – January 24, 2016) was an American cognitive scientist concerned largely with research of artificial intelligence (AI), co-founder of the Massachusetts Institute of Technology's AI laboratory, and author of several texts concerning AI and philosophy.
Minsky received many accolades and honors, such as the 1969 Turing Award.
Marvin Lee Minsky was born in New York City, to an eye surgeon father, Henry, and to a mother, Fannie (Reiser), who was a Zionist activist. His family was Jewish. He attended the Ethical Culture Fieldston School and the Bronx High School of Science. He later attended Phillips Academy in Andover, Massachusetts. He then served in the US Navy from 1944 to 1945. He received a B.A. in mathematics from Harvard University in 1950 and a Ph.D. in mathematics from Princeton University in 1954. His doctoral dissertation was titled "Theory of neural-analog reinforcement systems and its application to the brain-model problem." He was a Junior Fellow of the Harvard Society of Fellows from 1954 to 1957.
He was on the MIT faculty from 1958 until his death. He joined the staff at MIT Lincoln Laboratory in 1958, and a year later he and John McCarthy initiated what is now named the MIT Computer Science and Artificial Intelligence Laboratory. He was the Toshiba Professor of Media Arts and Sciences, and professor of electrical engineering and computer science.
Minsky's inventions include the first head-mounted graphical display (1963) and the confocal microscope (1957, a predecessor to today's widely used confocal laser scanning microscope). He developed, with Seymour Papert, the first Logo "turtle". Minsky also built, in 1951, the first randomly wired neural network learning machine, SNARC.
In 1962, Minsky published a (7,4) Turing machine and proved it universal. It was the simplest known universal Turing machine until Stephen Wolfram's (2,3) Turing machine was proven to be universal in 2007.
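To illustrate what specifying such a machine involves, the sketch below runs an ordinary single-tape Turing machine from a (state, symbol) transition table. The example table is a trivial two-state, two-symbol machine included only for illustration; it is not Minsky's 7-state, 4-symbol universal machine, whose transition table is not reproduced here.

```python
def run_turing_machine(table, tape=None, state="A", halt="HALT", max_steps=1000):
    """Run a single-tape Turing machine.

    table maps (state, symbol) -> (write_symbol, move, next_state), where move
    is +1 (right) or -1 (left). The tape is a dict from position to symbol;
    unwritten cells read as 0.
    """
    tape = dict(tape or {})
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, 0)
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += move
    return tape

# Illustrative 2-state, 2-symbol machine (the 2-state "busy beaver"), not Minsky's machine.
example_table = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

final_tape = run_turing_machine(example_table)
print(sum(final_tape.values()))  # prints 4: this machine halts after writing four 1s
```

The size of such a table, states times symbols, is what the "(7,4)" in Minsky's machine refers to: it is universal despite having only 28 table entries.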
Minsky wrote the book "Perceptrons" with Seymour Papert, attacking the work of Frank Rosenblatt; it became the foundational work in the analysis of artificial neural networks. This book is the center of a controversy in the history of AI, as some claim it to have had great importance in discouraging research of neural networks in the 1970s and contributing to the so-called "AI winter". Minsky also developed several other AI models. His paper "A framework for representing knowledge" created a new paradigm in programming. While his "Perceptrons" is now more a historical than practical book, the theory of frames is in wide use. Minsky also wrote of the possibility that extraterrestrial life may think like humans, permitting communication.
In the early 1970s, at the MIT Artificial Intelligence Lab, Minsky and Papert started developing what came to be known as the Society of Mind theory. The theory attempts to explain how what we call intelligence could be a product of the interaction of non-intelligent parts. Minsky says that the biggest source of ideas about the theory came from his work in trying to create a machine that uses a robotic arm, a video camera, and a computer to build with children's blocks. In 1986, Minsky published "The Society of Mind", a comprehensive book on the theory which, unlike most of his previously published work, was written for the general public.
In November 2006, Minsky published "The Emotion Machine", a book that critiques many popular theories of how human minds work and suggests alternative theories, often replacing simple ideas with more complex ones. Recent drafts of the book are freely available from his webpage.
Minsky was an adviser on Stanley Kubrick's movie "2001: A Space Odyssey"; one of the movie's characters, Victor Kaminski, was named in Minsky's honor. Minsky is mentioned explicitly in Arthur C. Clarke's derivative novel of the same name, where he is portrayed as achieving a crucial breakthrough in artificial intelligence in the then-future 1980s, paving the way for HAL 9000 in the early 21st century.
In 1952, Minsky married pediatrician Gloria Rudisch; together they had three children. Minsky was a talented improvisational pianist who published musings on the relations between music and psychology.
Minsky was an atheist, a signatory to the Scientists' Open Letter on Cryonics.
He was a critic of the Loebner Prize for conversational robots, and argued that a fundamental difference between humans and machines was that while humans are machines, they are machines in which intelligence emerges from the interplay of the many unintelligent but semi-autonomous agents that comprise the brain. He argued that "somewhere down the line, some computers will become more intelligent than most people," but that it was very hard to predict how fast progress would be. He cautioned that an artificial superintelligence designed to solve an innocuous mathematical problem might decide to assume control of Earth's resources to build supercomputers to help achieve its goal, but believed that such negative scenarios are "hard to take seriously" because he felt confident that AI would go through a lot of testing before being deployed.
In January 2016 Minsky died of a cerebral hemorrhage, at the age of 88. Minsky was a member of Alcor Life Extension Foundation's Scientific Advisory Board. Alcor will neither confirm nor deny whether Minsky was cryonically preserved.
Minsky received a $100,000 research grant from Jeffrey Epstein in 2002, four years before Epstein's first arrest for sex offenses; it was the first from Epstein to MIT. Minsky received no further research grants from him.
Minsky organized two academic symposia on Epstein's private island Little Saint James, one in 2002 and another in 2011, after Epstein had become a registered sex offender. Virginia Giuffre testified in a 2015 deposition in her defamation lawsuit against Epstein associate Ghislaine Maxwell that Maxwell directed her to have sex with Minsky, among others. There is no independent corroboration of the allegation, and there has been no lawsuit against Minsky's estate. Minsky's widow, Gloria Rudisch, says that he could not have had sex with any of the women at Epstein's residences, as they were always together during all of their visits to Epstein's residences.
Minsky won the Turing Award (the greatest distinction in computer science) in 1969, the Golden Plate Award of the American Academy of Achievement in 1982, the Japan Prize in 1990, the IJCAI Award for Research Excellence for 1991, and the Benjamin Franklin Medal from the Franklin Institute for 2001. In 2006, he was inducted as a Fellow of the Computer History Museum "for co-founding the field of artificial intelligence, creating early neural networks and robots, and developing theories of human and machine cognition." In 2011, Minsky was inducted into IEEE Intelligent Systems' AI Hall of Fame for the "significant contributions to the field of AI and intelligent systems". In 2014, Minsky won the Dan David Prize for "Artificial Intelligence, the Digital Mind". He was also awarded with the 2013 BBVA Foundation Frontiers of Knowledge Award in the Information and Communication Technologies category.
Minsky was affiliated with the following organizations:
|
https://en.wikipedia.org/wiki?curid=19639
|
Milton Friedman
Milton Friedman (; July 31, 1912 – November 16, 2006) was an American economist who received the 1976 Nobel Memorial Prize in Economic Sciences for his research on consumption analysis, monetary history and theory and the complexity of stabilization policy. With George Stigler and others, Friedman was among the intellectual leaders of the Chicago school of economics, a neoclassical school of economic thought associated with the work of the faculty at the University of Chicago that rejected Keynesianism in favor of monetarism until the mid-1970s, when it turned to new classical macroeconomics heavily based on the concept of rational expectations. Several students and young professors who were recruited or mentored by Friedman at Chicago went on to become leading economists, including Gary Becker, Robert Fogel, Thomas Sowell and Robert Lucas Jr.
Friedman's challenges to what he later called "naive Keynesian" theory began with his 1950s reinterpretation of the consumption function. In the 1960s, he became the main advocate opposing Keynesian government policies and described his approach (along with mainstream economics) as using "Keynesian language and apparatus" yet rejecting its "initial" conclusions. He theorized that there existed a "natural" rate of unemployment and argued that unemployment below this rate would cause inflation to accelerate. He argued that the Phillips curve was in the long run vertical at the "natural rate" and predicted what would come to be known as stagflation. Friedman promoted an alternative macroeconomic viewpoint known as "monetarism" and argued that a steady, small expansion of the money supply was the preferred policy. His ideas concerning monetary policy, taxation, privatization and deregulation influenced government policies, especially during the 1980s. His monetary theory influenced the Federal Reserve's response to the global financial crisis of 2007–2008.
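The relationships underlying these claims are usually written, in standard textbook notation rather than Friedman's own formulation, as the equation of exchange behind the quantity theory of money and the expectations-augmented Phillips curve, in which unemployment held below its natural rate produces ever-accelerating inflation once expectations adjust:

```latex
MV = PQ
\qquad\text{and}\qquad
\pi_t = \pi^{e}_t - \alpha\,(u_t - u^{*}), \quad \alpha > 0
```

Here M is the money stock, V its velocity, P the price level, Q real output, π inflation, πᵉ expected inflation, u unemployment and u* the natural rate; with V and Q roughly stable, steady money growth implies steady inflation, which is the basis of Friedman's monetarist policy prescription.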
Friedman was an advisor to Republican President Ronald Reagan and Conservative British Prime Minister Margaret Thatcher. His political philosophy extolled the virtues of a free market economic system with minimal intervention. He once stated that his role in eliminating conscription in the United States was his proudest accomplishment. In his 1962 book "Capitalism and Freedom", Friedman advocated policies such as a volunteer military, freely floating exchange rates, abolition of medical licenses, a negative income tax and school vouchers and opposed the war on drugs. His support for school choice led him to found the Friedman Foundation for Educational Choice, later renamed EdChoice.
Friedman's works include monographs, books, scholarly articles, papers, magazine columns, television programs, and lectures, and cover a broad range of economic topics and public policy issues. His books and essays have had global influence, including in former communist states. A survey of economists ranked Friedman as the second-most popular economist of the 20th century, following only John Maynard Keynes, and "The Economist" described him as "the most influential economist of the second half of the 20th century ... possibly of all of it".
Friedman was born in Brooklyn, New York on July 31, 1912. His parents, Sára Ethel (née Landau) and Jenő Saul Friedman, were Jewish immigrants from Beregszász in Carpathian Ruthenia, Kingdom of Hungary (now Berehove in Ukraine). They both worked as dry goods merchants. Shortly after his birth, the family relocated to Rahway, New Jersey. In his early teens, Friedman was injured in a car accident, which scarred his upper lip. A talented student, Friedman graduated from Rahway High School in 1928, just before his 16th birthday. He was awarded a competitive scholarship to Rutgers University (then a private university receiving limited support from the State of New Jersey, e.g., for such scholarships). He specialized in mathematics and economics, and became influenced by two economics professors, Arthur F. Burns and Homer Jones, who convinced him that modern economics could help end the Great Depression.
Friedman graduated in 1932, and initially intended to become an actuary. But he was offered two scholarships to do graduate work, one in mathematics at Brown University and the other in economics at the University of Chicago. Friedman chose the second, earning a Master of Arts degree in 1933. He was strongly influenced by Jacob Viner, Frank Knight, and Henry Simons. At Chicago Friedman met his future wife, economist Rose Director.
During the 1933–1934 academic year he had a fellowship at Columbia University, where he studied statistics with statistician and economist Harold Hotelling. He was back in Chicago for the 1934–1935 academic year, working as a research assistant for Henry Schultz, who was then working on "Theory and Measurement of Demand". That year, Friedman formed what would prove to be lifelong friendships with George Stigler and W. Allen Wallis.
Friedman was unable to find academic employment, so in 1935 followed his friend W. Allen Wallis to Washington, D.C., where Franklin D. Roosevelt's New Deal was "a lifesaver" for many young economists. At this stage, Friedman said that he and his wife "regarded the job-creation programs such as the WPA, CCC, and PWA appropriate responses to the critical situation," but not "the price- and wage-fixing measures of the National Recovery Administration and the Agricultural Adjustment Administration." Foreshadowing his later ideas, he believed price controls interfered with an essential signaling mechanism to help resources be used where they were most valued. Indeed, Friedman later concluded that all government intervention associated with the New Deal was "the wrong cure for the wrong disease," arguing that the money supply should simply have been expanded, instead of contracted. Later, Friedman and his colleague Anna Schwartz wrote "A Monetary History of the United States, 1867–1960", which argued that the Great Depression was caused by a severe monetary contraction due to banking crises and poor policy on the part of the Federal Reserve. Robert J. Shiller describes the book as the "most influential account" of the Great Depression.
During 1935, he began working for the National Resources Planning Board, which was then working on a large consumer budget survey. Ideas from this project later became a part of his "Theory of the Consumption Function". Friedman began employment with the National Bureau of Economic Research during autumn 1937 to assist Simon Kuznets in his work on professional income. This work resulted in their jointly authored publication "Incomes from Independent Professional Practice", which introduced the concepts of permanent and transitory income, a major component of the Permanent Income Hypothesis that Friedman worked out in greater detail in the 1950s. The book hypothesizes that professional licensing artificially restricts the supply of services and raises prices.
During 1940, Friedman was appointed as an assistant professor teaching Economics at the University of Wisconsin–Madison, but encountered antisemitism in the Economics department and returned to government service. From 1941 to 1943 Friedman worked on wartime tax policy for the federal government, as an advisor to senior officials of the United States Department of the Treasury. As a Treasury spokesman during 1942 he advocated a Keynesian policy of taxation. He helped to invent the payroll withholding tax system, since the federal government needed money to fund the war. He later said, "I have no apologies for it, but I really wish we hadn't found it necessary and I wish there were some way of abolishing withholding now."
His departure from Wisconsin was also attributed to differences with faculty regarding United States involvement in World War II; Friedman believed the United States should enter the war. In 1943, Friedman joined the Division of War Research at Columbia University (headed by W. Allen Wallis and Harold Hotelling), where he spent the rest of World War II working as a mathematical statistician, focusing on problems of weapons design, military tactics, and metallurgical experiments.
In 1945, Friedman submitted "Incomes from Independent Professional Practice" (co-authored with Kuznets and completed during 1940) to Columbia as his doctoral dissertation. The university awarded him a PhD in 1946. Friedman spent the 1945–1946 academic year teaching at the University of Minnesota (where his friend George Stigler was employed). On February 12, 1945, his son, David D. Friedman was born.
In 1946, Friedman accepted an offer to teach economic theory at the University of Chicago (a position opened by departure of his former professor Jacob Viner to Princeton University). Friedman would work for the University of Chicago for the next 30 years. There he contributed to the establishment of an intellectual community that produced a number of Nobel Prize winners, known collectively as the Chicago school of economics.
At that time, Arthur F. Burns, who was then the head of the National Bureau of Economic Research, asked Friedman to rejoin the Bureau's staff. He accepted the invitation, and assumed responsibility for the Bureau's inquiry into the role of money in the business cycle. As a result, he initiated the "Workshop in Money and Banking" (the "Chicago Workshop"), which promoted a revival of monetary studies. During the latter half of the 1940s, Friedman began a collaboration with Anna Schwartz, an economic historian at the Bureau, that would ultimately result in the 1963 publication of a book co-authored by Friedman and Schwartz, "A Monetary History of the United States, 1867–1960".
Friedman spent the 1954–1955 academic year as a Fulbright Visiting Fellow at Gonville and Caius College, Cambridge. At the time, the Cambridge economics faculty was divided into a Keynesian majority (including Joan Robinson and Richard Kahn) and an anti-Keynesian minority (headed by Dennis Robertson). Friedman speculated that he was invited to the fellowship because his views were unacceptable to both of the Cambridge factions. Later his weekly columns for "Newsweek" magazine (1966–84) were well read and increasingly influential among political and business people, and helped earn the magazine a Gerald Loeb Special Award in 1968. From 1968 to 1978, he and Paul Samuelson participated in the Economics Cassette Series, a biweekly subscription series in which the two economists would discuss the day's issues for about a half-hour at a time.
One of Milton Friedman's most popular works, "A Theory of the Consumption Function", challenged traditional Keynesian viewpoints about the household. This work was originally published in 1957 by Princeton University Press, and it reanalysed the relationship displayed "between aggregate consumption or aggregate savings and aggregate income." Keynes believed that people would modify their household consumption expenditures to relate to their existing income levels. Friedman's research introduced the term "permanent income", the average of a household's expected income over several years, and with it the permanent income hypothesis. His work changed how economists interpreted the consumption function, advancing the idea that current income was not the only factor affecting how households adjusted their consumption expenditures; expected future income levels mattered as well. Friedman's contributions strongly influenced research on consumer behavior, and he further defined how to predict consumption smoothing, which contradicts Keynes's concept of the marginal propensity to consume. Although this work presented many controversial points of view that differed from existing viewpoints established by Keynes, "A Theory of the Consumption Function" helped Friedman gain respect in the field of economics.
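In its simplest textbook rendering (the notation here is illustrative rather than Friedman's own), the hypothesis splits measured income into a permanent and a transitory component and makes consumption proportional only to the permanent part:

$$Y_t = Y_t^{P} + Y_t^{T}, \qquad C_t = k\,Y_t^{P},$$

where $k$ depends on factors such as interest rates and household wealth. If the permanent component is estimated as a weighted average of current and past incomes, a one-off windfall (a large transitory $Y_t^{T}$) raises consumption far less than a sustained rise in income, which is the sense in which households smooth consumption.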
His "Capitalism and Freedom" brought him national and international attention outside academia. It was published in 1962 by the University of Chicago Press and consists of essays that used non-mathematical economic models to explore issues of public policy. It sold over 400,000 copies in the first eighteen years and more than half a million since 1962. It has been translated into eighteen languages. Friedman talks about the need to move to a classically liberal society, that free markets would help nations and individuals in the long-run and fix the efficiency problems currently faced by the United States and other major countries of the 1950s and 1960s. He goes through the chapters specifying a specific issue in each respective chapter from the role of government and money supply to social welfare programs to a special chapter on occupational licensure. Friedman concludes "Capitalism and Freedom" with his "classical liberal" (more accurately, libertarian) stance, that government should stay out of matters that do not need and should only involve itself when absolutely necessary for the survival of its people and the country. He recounts how the best of a country's abilities come from its free markets while its failures come from government intervention.
In 1977, at the age of 65, Friedman retired from the University of Chicago after teaching there for 30 years. He and his wife moved to San Francisco, where he became a visiting scholar at the Federal Reserve Bank of San Francisco. From 1977 on, he was affiliated with the Hoover Institution at Stanford University. During the same year, Friedman was approached by the Free To Choose Network and asked to create a television program presenting his economic and social philosophy.
The Friedmans worked on this project for the next three years, and during 1980, the ten-part series, titled "Free to Choose", was broadcast by the Public Broadcasting Service (PBS). The companion book to the series (co-authored by Milton and his wife, Rose Friedman), also titled "Free To Choose", was the bestselling nonfiction book of 1980 and has since been translated into 14 languages.
Friedman served as an unofficial adviser to Ronald Reagan during his 1980 presidential campaign, and then served on the President's Economic Policy Advisory Board for the rest of the Reagan Administration. Ebenstein says Friedman was "the 'guru' of the Reagan administration." In 1988 he received the National Medal of Science and Reagan honored him with the Presidential Medal of Freedom.
Friedman is known now as one of the most influential economists of the 20th century. Throughout the 1980s and 1990s, Friedman continued to write editorials and appear on television. He made several visits to Eastern Europe and to China, where he also advised governments. He was also for many years a Trustee of the Philadelphia Society.
According to a 2007 article in "Commentary" magazine, his "parents were moderately observant Jews, but Friedman, after an intense burst of childhood piety, rejected religion altogether." He described himself as an agnostic. Friedman wrote extensively of his life and experiences, especially in 1998 in his memoirs with his wife, Rose, titled "Two Lucky People".
Friedman died of heart failure at the age of 94 years in San Francisco on November 16, 2006. He was still a working economist performing original economic research; his last column was published in "The Wall Street Journal" the day after his death. He was survived by his wife (who died on August 18, 2009) and their two children, David, known for the anarcho-capitalist book "The Machinery of Freedom", and bridge expert Jan Martel.
Friedman was best known for reviving interest in the money supply as a determinant of the nominal value of output, that is, the quantity theory of money. Monetarism is the set of views associated with modern quantity theory. Its origins can be traced back to the 16th-century School of Salamanca or even further; however, Friedman's contribution is largely responsible for its modern popularization. He co-authored, with Anna Schwartz, "A Monetary History of the United States, 1867–1960" (1963), which was an examination of the role of the money supply and economic activity in U.S. history. A striking conclusion of their research regarded the way in which money supply fluctuations contribute to economic fluctuations. Several regression studies with David Meiselman during the 1960s suggested the primacy of the money supply over investment and government spending in determining consumption and output. These challenged a prevailing, but largely untested, view on their relative importance. Friedman's empirical research and some theory supported the conclusion that the short-run effect of a change of the money supply was primarily on output but that the longer-run effect was primarily on the price level.
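The quantity theory that Friedman revived is conventionally summarized by the equation of exchange (a standard identity given here only as background, not a result specific to Friedman):

$$M V = P Y,$$

where $M$ is the money stock, $V$ its velocity of circulation, $P$ the price level, and $Y$ real output. Monetarism adds the empirical claims that velocity is relatively stable and that real output is determined by real factors in the long run, so sustained growth of the money stock in excess of output growth shows up mainly in the price level.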
Friedman was the main proponent of the monetarist school of economics. He maintained that there is a close and stable association between inflation and the money supply, namely that inflation could be avoided with proper regulation of the monetary base's growth rate. He famously used the analogy of "dropping money out of a helicopter" in order to avoid dealing with money injection mechanisms and other factors that would overcomplicate his models.
Friedman's arguments were designed to counter the popular concept of cost-push inflation, that the increased general price level at the time was the result of increases in the price of oil, or increases in wages; as he famously put it, inflation is "always and everywhere a monetary phenomenon".
Friedman rejected the use of fiscal policy as a tool of demand management; and he held that the government's role in the guidance of the economy should be restricted severely. Friedman wrote extensively on the Great Depression, and he termed the 1929–1933 period the Great Contraction. He argued that the Depression had been caused by an ordinary financial shock whose duration and seriousness were greatly increased by the subsequent contraction of the money supply caused by the misguided policies of the directors of the Federal Reserve.
This theory was put forth in "A Monetary History of the United States", and the chapter on the Great Depression was then published as a stand-alone book entitled "The Great Contraction, 1929–1933". Both books are still in print from Princeton University Press, and some editions include as an appendix a speech at a University of Chicago event honoring Friedman in which Ben Bernanke made this statement:
Let me end my talk by abusing slightly my status as an official representative of the Federal Reserve. I would like to say to Milton and Anna: Regarding the Great Depression, you're right. We did it. We're very sorry. But thanks to you, we won't do it again.
Friedman also argued for the cessation of government intervention in currency markets, thereby spawning an enormous literature on the subject, as well as promoting the practice of freely floating exchange rates. His close friend George Stigler explained, "As is customary in science, he did not win a full victory, in part because research was directed along different lines by the theory of rational expectations, a newer approach developed by Robert Lucas, also at the University of Chicago." The relationship between Friedman and Lucas, or new classical macroeconomics as a whole, was highly complex. The Friedmanian Phillips curve was an interesting starting point for Lucas, but he soon realized that the solution provided by Friedman was not quite satisfactory. Lucas elaborated a new approach in which rational expectations were presumed instead of the Friedmanian adaptive expectations. Due to this reformulation, the story in which the theory of the new classical Phillips curve was embedded radically changed. This modification, however, had a significant effect on Friedman's own approach, so, as a result, the theory of the Friedmanian Phillips curve also changed. Moreover, the new classical economist Neil Wallace, who was a graduate student at the University of Chicago between 1960 and 1963, regarded Friedman's theoretical courses as a mess. This evaluation clearly indicates the broken relationship between Friedmanian monetarism and new classical macroeconomics.
Friedman was also known for his work on the consumption function, the permanent income hypothesis (1957), which Friedman himself referred to as his best scientific work. This work contended that rational consumers would spend a proportional amount of what they perceived to be their permanent income. Windfall gains would mostly be saved; tax reductions would likewise be saved, as rational consumers would predict that taxes would have to rise later to balance public finances. Other important contributions include his critique of the Phillips curve and the concept of the natural rate of unemployment (1968). This critique associated his name, together with that of Edmund Phelps, with the insight that a government that brings about greater inflation cannot permanently reduce unemployment by doing so. Unemployment may be temporarily lower, if the inflation is a surprise, but in the long run unemployment will be determined by the frictions and imperfections of the labor market.
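The Friedman–Phelps critique is often summarized with the expectations-augmented Phillips curve (again a standard textbook formulation rather than Friedman's own notation):

$$\pi_t = \pi_t^{e} - \beta\,(u_t - u^{*}), \qquad \beta > 0,$$

where $\pi$ is inflation, $\pi^{e}$ expected inflation, $u$ unemployment, and $u^{*}$ the natural rate. Unemployment can fall below $u^{*}$ only while actual inflation exceeds what people expect; once expectations catch up ($\pi^{e} = \pi$), unemployment returns to the natural rate and the long-run curve is vertical.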
Friedman's essay "The Methodology of Positive Economics" (1953) provided the epistemological pattern for his own subsequent research and to a degree that of the Chicago School. There he argued that economics as "science" should be free of value judgments for it to be objective. Moreover, a useful economic theory should be judged not by its descriptive realism but by its simplicity and fruitfulness as an engine of prediction. That is, students should measure the accuracy of its predictions, rather than the 'soundness of its assumptions'. His argument was part of an ongoing debate among such statisticians as Jerzy Neyman, Leonard Savage, and Ronald Fisher.
However, despite being an advocate of the free market, Milton Friedman believed that the government had two crucial roles. In an interview with Phil Donahue, Milton Friedman argued that "the two basic functions of a government are to protect the nation against foreign enemy, and to protect citizens against its fellows." He also admitted that although privatisation of national defence could reduce the overall cost, he had not yet thought of a way to make this privatisation possible.
One of his most famous contributions to statistics is sequential sampling. Friedman did statistical work at the Division of War Research at Columbia, where he and his colleagues came up with the technique. It became, in the words of "The New Palgrave Dictionary of Economics", "the standard analysis of quality control inspection". The dictionary adds, "Like many of Friedman's contributions, in retrospect it seems remarkably simple and obvious to apply basic economic ideas to quality control; that, however, is a measure of his genius."
Although Friedman concluded the government does have a role in the monetary system, he was critical of the Federal Reserve due to its poor performance and felt it should be abolished. He was opposed to Federal Reserve policies, even during the so-called 'Volcker shock' that was labeled 'monetarist'. Friedman believed that the Federal Reserve System should ultimately be replaced with a computer program. He favored a system that would automatically buy and sell securities in response to changes in the money supply.
The proposal to grow the money supply constantly at a fixed, predetermined rate every year has become known as Friedman's k-percent rule. There is debate about the effectiveness of a theoretical money supply targeting regime. The Fed's inability to meet its money supply targets from 1978 to 1982 led some to conclude it is not a feasible alternative to more conventional inflation and interest rate targeting. Towards the end of his life, Friedman expressed doubt about the validity of targeting the quantity of money.
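Stated as a formula (illustrative notation), the rule simply fixes the growth path of the money stock in advance:

$$M_{t+1} = (1 + k)\,M_t,$$

with $k$ a small constant percentage chosen once, roughly in line with the economy's long-run growth, and left unchanged regardless of current business conditions, thereby removing discretion from the central bank.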
In principle, Friedman favored the 1930s Chicago plan, which would have ended fractional reserve banking and, thus, private money creation. It would have forced banks to hold 100% reserves backing deposits and instead placed money creation powers solely in the hands of the US Government. This would make targeting money growth more feasible, as endogenous money created by fractional reserve lending would no longer be a major issue.
Friedman was a strong advocate for floating exchange rates throughout the entire Bretton Woods period. He argued that a flexible exchange rate would make external adjustment possible and allow countries to avoid balance of payments crises. He saw fixed exchange rates as an undesirable form of government intervention. The case was articulated in an influential 1953 paper, "The Case for Flexible Exchange Rates", at a time when most commentators regarded the possibility of floating exchange rates as a fantasy.
In his 1955 article "The Role of Government in Education" Friedman proposed supplementing publicly operated schools with privately run but publicly funded schools through a system of school vouchers. Reforms similar to those proposed in the article were implemented in, for example, Chile in 1981 and Sweden in 1992. In 1996, Friedman, together with his wife, founded the Friedman Foundation for Educational Choice to advocate school choice and vouchers. In 2016, the Friedman Foundation changed its name to EdChoice to honor the Friedmans' desire to have the educational choice movement live on without their names attached to it after their deaths.
While Walter Oi is credited with establishing the economic basis for a volunteer military, Friedman was a proponent, stating that the draft was "inconsistent with a free society."
In "Capitalism and Freedom", he argued that conscription is inequitable and arbitrary, preventing young men from shaping their lives as they see fit. During the Nixon administration he headed the committee to research a conversion to paid/volunteer armed force. He would later state that his role in eliminating the conscription in the United States was his proudest accomplishment. Friedman did, however, believe that the introduction of a system of universal military training as a reserve in cases of war-time could be justified.
But opposed its implementation in the United States, describing it as a “monstrosity”.
Biographer Lanny Ebenstein noted a drift over time in Friedman's views from an interventionist to a more cautious foreign policy. He supported US involvement in the Second World War and initially supported a hard-line against Communism, but moderated over time. However, Friedman did state in a 1995 interview that he was an anti-interventionist. He opposed the Gulf War and the Iraq War. In a spring 2006 interview, Friedman said that the US's stature in the world had been eroded by the Iraq War, but that it might be improved if Iraq were to become a peaceful and independent country.
Friedman was an economic advisor and speech writer in Barry Goldwater's presidential campaign in 1964. He was an advisor to California governor Ronald Reagan, and was active in Reagan's presidential campaigns. He served as a member of President Reagan's Economic Policy Advisory Board starting in 1981. In 1988, he received the Presidential Medal of Freedom and the National Medal of Science. He said that he was a libertarian philosophically, but a member of the U.S. Republican Party for the sake of "expediency" ("I am a libertarian with a small 'l' and a Republican with a capital 'R.' And I am a Republican with a capital 'R' on grounds of expediency, not on principle.") But, he said, "I think the term classical liberal is also equally applicable. I don't really care very much what I'm called. I'm much more interested in having people thinking about the ideas, rather than the person."
Friedman was supportive of the state provision of some public goods that private businesses are not considered as being able to provide. However, he argued that many of the services performed by government could be performed better by the private sector. Above all, if some public goods are provided by the state, he believed that they should not be a legal monopoly where private competition is prohibited.
In 1962, Friedman criticized Social Security in his book "Capitalism and Freedom", arguing that it had created welfare dependency. However, in the penultimate chapter of the same book, Friedman argued that while capitalism had greatly reduced the extent of poverty in absolute terms, "poverty is in part a relative matter, [and] even in [wealthy Western] countries, there are clearly many people living under conditions that the rest of us label as poverty." Friedman also noted that while private charity could be one recourse for alleviating poverty, and cited late 19th century Britain and the United States as exemplary periods of extensive private charity and eleemosynary activity, he argued that private charity alone could not be relied on to alleviate poverty, and he proposed a negative income tax as the preferred governmental mechanism for doing so.
Friedman argued further that other advantages of the negative income tax were that it could fit directly into the tax system, would be less costly, and would reduce the administrative burden of implementing a social safety net. Friedman reiterated these arguments 18 years later in "Free to Choose", with the additional proviso that such a reform would only be satisfactory if it replaced the current system of welfare programs rather than augment it. According to economist Robert H. Frank, writing in "The New York Times", Friedman's views in this regard were grounded in a belief that while "market forces ... accomplish wonderful things", they "cannot ensure a distribution of income that enables all citizens to meet basic economic needs".
Friedman also supported libertarian policies such as legalization of drugs and prostitution. During 2005, Friedman and more than 500 other economists advocated discussions regarding the economic benefits of the legalization of marijuana.
Friedman was also a supporter of gay rights. He never specifically supported same-sex marriage, instead saying "I do not believe there should be any discrimination against gays."
Friedman favored immigration, saying "legal and illegal immigration has a very positive impact on the U.S. economy." Friedman, however, suggested that immigrants ought not to have access to the welfare system. Friedman stated that immigration from Mexico had been a "good thing", in particular illegal immigration. Friedman argued that illegal immigration was a boon because illegal immigrants "take jobs that most residents of this country are unwilling to take, they provide employers with workers of a kind they cannot get" and they do not use welfare. In "Free to Choose", Friedman wrote:
No arbitrary obstacles should prevent people from achieving those positions for which their talents fit them and which their values lead them to seek. Not birth, nationality, color, religion, sex, nor any other irrelevant characteristic should determine the opportunities that are open to a person — only his abilities.
Michael Walker of the Fraser Institute and Friedman hosted a series of conferences from 1986 to 1994. The goal was to create a clear definition of economic freedom and a method for measuring it. Eventually this resulted in the first report on worldwide economic freedom, "Economic Freedom in the World". This annual report has since provided data for numerous peer-reviewed studies and has influenced policy in several nations.
Along with sixteen other distinguished economists he opposed the Copyright Term Extension Act, and signed on to an amicus brief filed in "Eldred v. Ashcroft". Friedman jokingly described it as a "no-brainer".
Friedman argued for stronger basic legal (constitutional) protection of economic rights and freedoms to further promote industrial and commercial growth and prosperity, and to buttress democracy, freedom, and the rule of law in society generally.
George H. Nash, a leading historian of American conservatism, says that by "the end of the 1960s he was probably the most highly regarded and influential conservative scholar in the country, and one of the few with an international reputation." Friedman allowed the libertarian Cato Institute to use his name for its biannual Milton Friedman Prize for Advancing Liberty beginning in 2001. A Friedman Prize was given to the late British economist Peter Bauer in 2002, Peruvian economist Hernando de Soto in 2004, Mart Laar, former Estonian Prime Minister in 2006 and a young Venezuelan student Yon Goicoechea in 2008. His wife Rose, sister of Aaron Director, with whom he initiated the Friedman Foundation for Educational Choice, served on the international selection committee.
Friedman was also a recipient of the Nobel Memorial Prize in Economics.
Upon Friedman's death, Harvard President Lawrence Summers called him "The Great Liberator" saying "... any honest Democrat will admit that we are now all Friedmanites." He said Friedman's great popular contribution was "in convincing people of the importance of allowing free markets to operate."
In 2013 Stephen Moore, a member of the editorial board of "The Wall Street Journal", said, "Quoting the most-revered champion of free-market economics since Adam Smith has become a little like quoting the Bible." He adds, "There are sometimes multiple and conflicting interpretations."
Friedman won the Nobel Memorial Prize in Economic Sciences, the sole recipient for 1976, "for his achievements in the fields of consumption analysis, monetary history and theory and for his demonstration of the complexity of stabilization policy."
Friedman once said: "If you want to see capitalism in action, go to Hong Kong." He wrote in 1990 that the Hong Kong economy was perhaps the best example of a free market economy.
One month before his death, he wrote the article "Hong Kong Wrong—What would Cowperthwaite say?" in "The Wall Street Journal", criticizing Donald Tsang, the Chief Executive of Hong Kong, for abandoning "positive noninterventionism." Tsang later said he was merely changing the slogan to "big market, small government," where small government is defined as less than 20% of GDP. In a debate between Tsang and his rival Alan Leong before the 2007 Hong Kong Chief Executive election, Leong introduced the topic and jokingly accused Tsang of angering Friedman to death.
During 1975, two years after the military coup that brought military dictator President Augusto Pinochet to power and ended the government of Salvador Allende, the economy of Chile experienced a severe crisis. Friedman and Arnold Harberger accepted an invitation of a private Chilean foundation to visit Chile and speak on principles of economic freedom. He spent seven days in Chile giving a series of lectures at the Universidad Católica de Chile and the (National) University of Chile. One of the lectures was entitled "The Fragility of Freedom" and according to Friedman, "dealt with precisely the threat to freedom from a centralized military government."
In an April 21, 1975 letter to Pinochet, Friedman considered the "key economic problems of Chile are clearly ... inflation and the promotion of a healthy social market economy". He stated that "There is only one way to end inflation: by drastically reducing the rate of increase of the quantity of money ..." and that "... cutting government spending is by far and away the most desirable way to reduce the fiscal deficit, because it ... strengthens the private sector thereby laying the foundations for "healthy" economic growth". As to how rapidly inflation should be ended, Friedman felt that "for Chile where inflation is raging at 10–20% a month ... gradualism is not feasible. It would involve so painful an operation over so long a period that the "patient" would not survive." Choosing "a brief period of higher unemployment ..." was the lesser evil, and "the experience of Germany, ... of Brazil ..., of the post-war adjustment in the U.S. ... all argue for shock treatment". In the letter, Friedman recommended delivering the shock approach with "... a package to eliminate the surprise and to relieve acute distress" and "... for definiteness let me sketch the contents of a package proposal ... to be taken as illustrative", although his knowledge of Chile was "too limited to enable [him] to be precise or comprehensive". He listed a "sample proposal" of eight monetary and fiscal measures including "the removal of as many obstacles as possible that now hinder the private market. For example, suspend ... the present law against discharging employees". He closed, stating "Such a shock program could end inflation in months". His letter suggested that cutting spending to reduce the fiscal deficit would result in less transitional unemployment than raising taxes.
Sergio de Castro, a Chilean Chicago School graduate, became the nation's Minister of Finance in 1975. During his six-year tenure, foreign investment increased, restrictions were placed on striking and labor unions, and GDP rose yearly. A foreign exchange program was created between the Catholic University of Chile and the University of Chicago. Many other Chicago School alumni were appointed to government posts during and after the Pinochet years; others taught its economic doctrine at Chilean universities. They became known as the Chicago Boys.
Friedman did not criticize Pinochet's dictatorship at the time, nor the assassinations, illegal imprisonments, torture, or other atrocities that were well known by then.
In 1976 Friedman defended his unofficial adviser position with: "I do not consider it as evil for an economist to render technical economic advice to the Chilean Government, any more than I would regard it as evil for a physician to give technical medical advice to the Chilean Government to help end a medical plague."
Friedman defended his activity in Chile on the grounds that, in his opinion, the adoption of free market policies not only improved the economic situation of Chile but also contributed to the amelioration of Pinochet's rule and to the eventual transition to a democratic government during 1990. That idea is included in "Capitalism and Freedom", in which he declared that economic freedom is not only desirable in itself but is also a necessary condition for political freedom. In his 1980 documentary "Free to Choose", he said the following: "Chile is not a politically free system, and I do not condone the system. But the people there are freer than the people in Communist societies because government plays a smaller role. ... The conditions of the people in the past few years has been getting better and not worse. They would be still better to get rid of the junta and to be able to have a free democratic system." In 1984, Friedman stated that he had "never refrained from criticizing the political system in Chile." In 1991 he said: "I have nothing good to say about the political regime that Pinochet imposed. It was a terrible political regime. The real miracle of Chile is not how well it has done economically; the real miracle of Chile is that a military junta was willing to go against its principles and support a free market regime designed by principled believers in a free market. ... In Chile, the drive for political freedom, that was generated by economic freedom and the resulting economic success, ultimately resulted in a referendum that introduced political democracy. Now, at long last, Chile has all three things: political freedom, human freedom and economic freedom. Chile will continue to be an interesting experiment to watch to see whether it can keep all three or whether, now that it has political freedom, that political freedom will tend to be used to destroy or reduce economic freedom." He stressed that the lectures he gave in Chile were the same lectures he later gave in China and other socialist states.
During the 2000 PBS documentary "The Commanding Heights" (based on the book), Friedman continued to argue that "free markets would undermine [Pinochet's] political centralization and political control", and that criticism over his role in Chile missed his main contention that freer markets resulted in freer people, and that Chile's unfree economy had caused the military government.
Friedman visited Iceland during the autumn of 1984, met with important Icelanders and gave a lecture at the University of Iceland on the "tyranny of the "status quo"." He participated in a lively television debate on August 31, 1984, with socialist intellectuals, including Ólafur Ragnar Grímsson, who later became the president of Iceland. When they complained that a fee was charged for attending his lecture at the university and that, hitherto, lectures by visiting scholars had been free-of-charge, Friedman replied that previous lectures had not been free-of-charge in a meaningful sense: lectures always have related costs. What mattered was whether attendees or non-attendees covered those costs. Friedman thought that it was fairer that only those who attended paid. In this discussion Friedman also stated that he did not receive any money for delivering that lecture.
Although Friedman never visited Estonia, his book "Free to Choose" exercised a great influence on that nation's then 32-year-old prime minister, Mart Laar, who has claimed that it was the only book on economics he had read before taking office. Laar's reforms are often credited with responsibility for transforming Estonia from an impoverished Soviet Republic to the "Baltic Tiger." A prime element of Laar's program was introduction of the flat tax. Laar won the 2006 Milton Friedman Prize for Advancing Liberty, awarded by the Cato Institute.
After 1950 Friedman was frequently invited to lecture in Britain, and by the 1970s his ideas had gained widespread attention in conservative circles. For example, he was a regular speaker at the Institute of Economic Affairs (IEA), a libertarian think tank. Conservative politician Margaret Thatcher closely followed IEA programs and ideas, and met Friedman there in 1978. He also strongly influenced Keith Joseph, who became Thatcher's senior advisor on economic affairs, as well as Alan Walters and Patrick Minford, two other key advisers. Major newspapers, including the "Daily Telegraph," "The Times," and "The Financial Times" all promulgated Friedman's monetarist ideas to British decision-makers. Friedman's ideas strongly influenced Thatcher and her allies when she became Prime Minister in 1979.
After his death a number of obituaries and articles were written in Friedman's honor, citing him as one of the most important and influential economists of the post-war era. Milton Friedman's somewhat controversial legacy in America remains strong within the conservative movement. However, some journalists and economists like Noah Smith and Scott Sumner have argued Friedman's academic legacy has been buried under his political philosophy and misinterpreted by modern conservatives.
Econometrician David Hendry criticized part of Friedman's and Anna Schwartz's 1982 "Monetary Trends". When asked about it during an interview with Icelandic TV in 1984, Friedman said that the criticism referred to a different problem from that which he and Schwartz had tackled, and hence was irrelevant, and pointed out the lack of consequential peer review amongst econometricians on Hendry's work. In 2006, Hendry said that Friedman was guilty of "serious errors" of misunderstanding that meant "the t-ratios he reported for UK money demand were overstated by nearly 100 per cent", and said that, in a paper published in 1991 with Neil Ericsson, he had refuted "almost every empirical claim ... made about UK money demand" by Friedman and Schwartz. A 2004 paper updated and confirmed the validity of the Hendry–Ericsson findings through 2000.
Although Keynesian Nobel laureate Paul Krugman praised Friedman as a "great economist and a great man" after Friedman's death in 2006, and acknowledged his many, widely accepted contributions to empirical economics, Krugman had been, and remains, a prominent critic of Friedman. Krugman has written that "he slipped all too easily into claiming both that markets always work and that only markets work. It's extremely hard to find cases in which Friedman acknowledged the possibility that markets could go wrong, or that government intervention could serve a useful purpose." Others agree Friedman was not open enough to the possibility of market inefficiencies. Economist Noah Smith argues that while Friedman made many important contributions to economic theory not all of his ideas relating to macroeconomics have entirely held up over the years and that too few people are willing to challenge them.
Political scientist C.B. Macpherson disagreed with Friedman's historical assessment of economic freedom leading to political freedom, suggesting that political freedom actually gave way to economic freedom for property-owning elites. He also challenged the notion that markets efficiently allocated resources and rejected Friedman's definition of liberty. Friedman's positivist methodological approach to economics has also been critiqued and debated. Finnish economist Uskali Mäki has argued some of his assumptions were unrealistic and vague.
In her book "The Shock Doctrine", author and social activist Naomi Klein criticized Friedman's economic liberalism, identifying it with the principles that guided the economic restructuring that followed the military coups in countries such as Chile and Argentina. Based on their assessments of the extent to which what she describes as neoliberal policies contributed to income disparities and inequality, both Klein and Noam Chomsky have suggested that the primary role of what they describe as neoliberalism was as an ideological cover for capital accumulation by multinational corporations.
Because of his involvement with the Pinochet government, there were international protests when Friedman was awarded the Nobel Prize in 1976. Friedman was accused of supporting the military dictatorship in Chile because of the relation of economists of the University of Chicago to Pinochet, and a controversial seven-day trip he took to Chile during March 1975 (less than two years after the coup that ended with the death of President Salvador Allende). Friedman answered that he was never an adviser to the dictatorship, but only gave some lectures and seminars on inflation, and met with officials, including Augusto Pinochet, while in Chile.
Chilean economist Orlando Letelier asserted that Pinochet's dictatorship resorted to oppression because of popular opposition to Chicago School policies in Chile. After a 1991 speech on drug legalisation, Friedman answered a question on his involvement with the Pinochet regime, saying that he was never an advisor to Pinochet (also mentioned in his 1984 Iceland interview), but that a group of his students at the University of Chicago were involved in Chile's economic reforms. Friedman credited these reforms with high levels of economic growth and with the establishment of democracy that has subsequently occurred in Chile. In October 1988, after returning from a lecture tour of China during which he had met with Zhao Ziyang, General Secretary of the Communist Party of China, Friedman wrote to "The Stanford Daily" asking if he should anticipate a similar "avalanche of protests for having been willing to give advice to so evil a government? And if not, why not?"
|
https://en.wikipedia.org/wiki?curid=19640
|
Mass media
Mass media refers to a diverse array of media technologies that reach a large audience via mass communication. The technologies through which this communication takes place include a variety of outlets.
Broadcast media transmit information electronically via media such as films, radio, recorded music, or television. Digital media comprises both Internet and mobile mass communication. Internet media comprise such services as email, social media sites, websites, and Internet-based radio and television. Many other mass media outlets have an additional presence on the web, by such means as linking to or running TV ads online, or distributing QR codes in outdoor or print media to direct mobile users to a website. In this way, they can use the easy accessibility and outreach capabilities the Internet affords, and thereby easily broadcast information throughout many different regions of the world simultaneously and cost-efficiently. Outdoor media transmit information via such media as AR advertising; billboards; blimps; flying billboards (signs in tow of airplanes); placards or kiosks placed inside and outside buses, commercial buildings, shops, sports stadiums, subway cars, or trains; signs; or skywriting. Print media transmit information via physical objects, such as books, comics, magazines, newspapers, or pamphlets. Event organizing and public speaking can also be considered forms of mass media.
The organizations that control these technologies, such as movie studios, publishing companies, and radio and television stations, are also known as the mass media.
In the late 20th century, mass media could be classified into eight mass media industries: books, the Internet, magazines, movies, newspapers, radio, recordings, and television. The explosion of digital communication technology in the late 20th and early 21st centuries made prominent the question: what forms of media should be classified as "mass media"? For example, it is controversial whether to include cell phones, computer games (such as MMORPGs), and video games in the definition. In the 2000s, a classification called the "seven mass media" became popular. In order of introduction, they are print, recordings, cinema, radio, television, the Internet, and mobile phones.
Each mass medium has its own content types, creative artists, technicians, and business models. For example, the Internet includes blogs, podcasts, web sites, and various other technologies built atop the general distribution network. The sixth and seventh media, Internet and mobile phones, are often referred to collectively as digital media; and the fourth and fifth, radio and TV, as broadcast media. Some argue that video games have developed into a distinct mass form of media.
While a telephone is a two-way communication device, mass media communicates to a large group. In addition, the telephone has transformed into a cell phone which is equipped with Internet access. A question arises whether this makes cell phones a mass medium or simply a device used to access a mass medium (the Internet). There is currently a system by which marketers and advertisers are able to tap into satellites, and broadcast commercials and advertisements directly to cell phones, unsolicited by the phone's user. This transmission of mass advertising to millions of people is another form of mass communication.
Video games may also be evolving into a mass medium. Video games (for example massively multiplayer online role-playing games (MMORPGs), such as "RuneScape") provide a common gaming experience to millions of users across the globe and convey the same messages and ideologies to all their users. Users sometimes share the experience with one another by playing online. Excluding the Internet, however, it is questionable whether players of video games are sharing a common experience when they play the game individually. It is possible to discuss in great detail the events of a video game with a friend one has never played with, because the experience is identical for each player. The question, then, is whether this is a form of mass communication.
Sociologist John Thompson of Cambridge University has identified five characteristics of mass communication.
The term "mass media" is sometimes erroneously used as a synonym for "mainstream media". Mainstream media are distinguished from alternative media by their content and point of view. Alternative media are also "mass media" outlets in the sense that they use technology capable of reaching many people, even if the audience is often smaller than the mainstream.
In common usage, the term "mass" denotes not that a given number of individuals receives the products, but rather that the products are available in principle to a plurality of recipients.
The sequencing of content in a broadcast is called a schedule. With all technological endeavours a number of technical terms and slang have developed. Please see the list of broadcasting terms for a glossary of terms used.
Radio and television programs are distributed over frequency bands which are highly regulated in the United States. Such regulation includes determination of the width of the bands, range, licensing, types of receivers and transmitters used, and acceptable content.
Cable television programs are often broadcast simultaneously with radio and television programs, but have a more limited audience. By coding signals and requiring a cable converter box at individual recipients' locations, cable also enables subscription-based channels and pay-per-view services.
A broadcasting organisation may broadcast several programs simultaneously, through several channels (frequencies), for example BBC One and Two. On the other hand, two or more organisations may share a channel and each use it during a fixed part of the day, such as the Cartoon Network/Adult Swim. Digital radio and digital television may also transmit multiplexed programming, with several channels compressed into one ensemble.
When broadcasting is done via the Internet the term webcasting is often used. In 2004, a new phenomenon occurred when a number of technologies combined to produce podcasting. Podcasting is an asynchronous broadcast/narrowcast medium. Adam Curry and his associates, the "Podshow", are principal proponents of podcasting.
The term 'film' encompasses motion pictures as individual projects, as well as the field in general. The name comes from the photographic film (also called filmstock), historically the primary medium for recording and displaying motion pictures. Many other terms for film exist, such as "motion pictures" (or just "pictures" and "picture"), "the silver screen", "photoplays", "the cinema", "picture shows", "flicks", and most common, "movies".
Films are produced by recording people and objects with cameras, or by creating them using animation techniques or special effects. Films comprise a series of individual frames, but when these images are shown in rapid succession, an illusion of motion is created. Flickering between frames is not seen because of an effect known as persistence of vision, whereby the eye retains a visual image for a fraction of a second after the source has been removed. Also of relevance is what causes the perception of motion: a psychological effect identified as beta movement.
Film is considered by many to be an important art form; films entertain, educate, enlighten, and inspire audiences. Any film can become a worldwide attraction, especially with the addition of dubbing or subtitles that translate the film message. Films are also artifacts created by specific cultures, which reflect those cultures, and, in turn, affect them.
A video game is a computer-controlled game in which a video display, such as a monitor or television, is the primary feedback device. The term "computer game" also includes games which display only text (and which can, therefore, theoretically be played on a teletypewriter) or which use other methods, such as sound or vibration, as their primary feedback device, but there are very few new games in these categories. There always must also be some sort of input device, usually in the form of button/joystick combinations (on arcade games), a keyboard and mouse/trackball combination (computer games), a controller (console games), or a combination of any of the above. Also, more esoteric devices have been used for input, e.g., the player's motion. Usually there are rules and goals, but in more open-ended games the player may be free to do whatever they like within the confines of the virtual universe.
In common usage, an "arcade game" refers to a game designed to be played in an establishment in which patrons pay to play on a per-use basis. A "computer game" or "PC game" refers to a game that is played on a personal computer. A "Console game" refers to one that is played on a device specifically designed for the use of such, while interfacing with a standard television set. A "video game" (or "videogame") has evolved into a catchall phrase that encompasses the aforementioned along with any game made for any other device, including, but not limited to, advanced calculators, mobile phones, PDAs, etc.
Sound recording and reproduction is the electrical or mechanical re-creation or amplification of sound, often as music. This involves the use of audio equipment such as microphones, recording devices, and loudspeakers. From early beginnings with the invention of the phonograph using purely mechanical techniques, the field has advanced with the invention of electrical recording, the mass production of the 78 rpm record, the magnetic wire recorder followed by the tape recorder, and the vinyl LP record. The invention of the compact cassette in the 1960s, followed by Sony's Walkman, gave a major boost to the mass distribution of music recordings, and the invention of digital recording and the compact disc in 1983 brought massive improvements in ruggedness and quality. The most recent developments have been in digital audio players.
An album is a collection of related audio recordings, released together to the public, usually commercially.
The term record album originated from the fact that 78 rpm phonograph disc records were kept together in a book resembling a photo album. The first collection of records to be called an "album" was Tchaikovsky's "Nutcracker Suite", released in April 1909 as a four-disc set by Odeon Records. It retailed for 16 shillings – about £15 in modern currency.
A music video (also promo) is a short film or video that accompanies a complete piece of music, most commonly a song. Modern music videos were primarily made and used as a marketing device intended to promote the sale of music recordings. Although the origins of music videos go back much further, they came into their own in the 1980s, when Music Television's format was based on them. In the 1980s, the term "rock video" was often used to describe this form of entertainment, although the term has fallen into disuse.
Music videos can accommodate all styles of filmmaking, including animation, live action films, documentaries, and non-narrative, abstract film.
The Internet (also known simply as "the Net" or less precisely as "the Web") is a more interactive medium of mass media, and can be briefly described as "a network of networks". Specifically, it is the worldwide, publicly accessible network of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It consists of millions of smaller domestic, academic, business, and governmental networks, which together carry various information and services, such as email, online chat, file transfer, and the interlinked web pages and other documents of the World Wide Web.
Contrary to some common usage, the Internet and the World Wide Web are not synonymous: the Internet is the system of interconnected "computer networks", linked by copper wires, fiber-optic cables, wireless connections etc.; the Web is the contents, or the interconnected "documents", linked by hyperlinks and URLs. The World Wide Web is accessible through the Internet, along with many other services including e-mail, file sharing and others described below.
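The distinction can be illustrated with a short example (a minimal Python sketch using only the standard library; "example.com" is a placeholder host): the first step operates at the Internet level, resolving a hostname to the IP addresses between which packets are routed, while the second fetches a single Web document over HTTP, one of many services carried on top of that network.

```python
# Minimal sketch of the Internet-vs-Web distinction, standard library only.
# "example.com" is a placeholder host used purely for illustration.
import socket
import urllib.request

host = "example.com"

# Internet layer: resolve the hostname to the IP addresses that the
# interconnected networks route packets between.
addresses = {info[4][0] for info in socket.getaddrinfo(host, 80)}
print("IP addresses for", host, "->", sorted(addresses))

# Web layer: retrieve one interlinked document (an HTML page) over HTTP.
with urllib.request.urlopen(f"http://{host}/") as response:
    page = response.read()
print("Fetched", len(page), "bytes of HTML from", host)
```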
Toward the end of the 20th century, the advent of the World Wide Web marked the first era in which most individuals could have a means of exposure on a scale comparable to that of mass media. Anyone with a web site has the potential to address a global audience, although serving high levels of web traffic is still relatively expensive. It is possible that the rise of peer-to-peer technologies may have begun the process of making the cost of bandwidth manageable. Although a vast amount of information, imagery, and commentary (i.e. "content") has been made available, it is often difficult to determine the authenticity and reliability of information contained in web pages (in many cases, self-published). The invention of the Internet has also allowed breaking news stories to reach around the globe within minutes. This rapid growth of instantaneous, decentralized communication is often deemed likely to change mass media and its relationship to society.
"Cross-media" means the idea of distributing the same message through different media channels. A similar idea is expressed in the news industry as "convergence". Many authors understand cross-media publishing to be the ability to publish in both print and on the web without manual conversion effort. An increasing number of wireless devices with mutually incompatible data and screen formats make it even more difficult to achieve the objective "create once, publish many".
The Internet is quickly becoming the center of mass media. Everything is becoming accessible via the internet. Rather than picking up a newspaper, or watching the 10 o'clock news, people can log onto the internet to get the news they want, when they want it. For example, many workers listen to the radio through the Internet while sitting at their desk.
Even the education system relies on the Internet. Teachers can contact the entire class by sending one e-mail. They may have web pages on which students can get another copy of the class outline or assignments. Some classes have class blogs in which students are required to post weekly, with students graded on their contributions.
Blogging, too, has become a pervasive form of media. A blog is a website, usually maintained by an individual, with regular entries of commentary, descriptions of events, or interactive media such as images or video. Entries are commonly displayed in reverse chronological order, with the most recent posts shown on top. Many blogs provide commentary or news on a particular subject; others function as more personal online diaries. A typical blog combines text, images and other graphics, and links to other blogs, web pages, and related media. The ability for readers to leave comments in an interactive format is an important part of many blogs. Most blogs are primarily textual, although some focus on art (artlog), photographs (photoblog), sketches (sketchblog), videos (vlog), music (MP3 blog) or audio (podcast); together these form part of a wider network of social media. Microblogging is another type of blogging which consists of blogs with very short posts.
RSS is a format for syndicating news and the content of news-like sites, including major news sites like Wired, news-oriented community sites like Slashdot, and personal blogs. It is a family of Web feed formats used to publish frequently updated content such as blog entries, news headlines, and podcasts. An RSS document (which is called a "feed" or "web feed" or "channel") contains either a summary of content from an associated web site or the full text. RSS makes it possible for people to keep up with web sites in an automated manner that can be piped into special programs or filtered displays.
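Because an RSS feed is just an XML document, piping it into a "special program or filtered display" requires nothing more than a standard XML parser. The sketch below is not part of the article; it is a minimal illustration, using only Python's standard library and a hypothetical placeholder feed URL, of how a program might list the latest entries of a feed.

import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed.rss"  # hypothetical placeholder address

# Fetch the feed over HTTP and parse the XML stream directly.
with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# An RSS 2.0 feed wraps each entry in a <channel><item> element,
# typically carrying <title>, <link> and <pubDate> children.
for item in tree.getroot().iter("item"):
    title = item.findtext("title", default="(no title)")
    link = item.findtext("link", default="")
    date = item.findtext("pubDate", default="")
    print(f"{date}  {title}\n    {link}")

A reader built along these lines, run on a schedule, is essentially what dedicated feed readers and podcast clients do on the subscriber's behalf.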
A podcast is a series of digital-media files which are distributed over the Internet using syndication feeds for playback on portable media players and computers. The term podcast, like broadcast, can refer either to the series of content itself or to the method by which it is syndicated; the latter is also called podcasting. The host or author of a podcast is often called a podcaster.
Mobile phones were introduced in Japan in 1979 but became a mass medium only in 1998, when the first downloadable ringing tones were introduced in Finland. Soon most forms of media content were introduced on mobile phones, tablets and other portable devices, and today the total value of media consumed on mobile vastly exceeds that of internet content, and was worth over 31 billion dollars in 2007 (source: Informa). The mobile media content includes over 8 billion dollars worth of mobile music (ringing tones, ringback tones, truetones, MP3 files, karaoke, music videos, music streaming services etc.); over 5 billion dollars worth of mobile gaming; and various news, entertainment and advertising services. In Japan mobile phone books are so popular that five of the ten best-selling printed books were originally released as mobile phone books.
Similar to the internet, mobile is also an interactive medium, but it has far wider reach, with 3.3 billion mobile phone users at the end of 2007 compared with 1.3 billion internet users (source: ITU). Like email on the internet, the top application on mobile is also a personal messaging service, but SMS text messaging is used by over 2.4 billion people. Practically all internet services and applications exist or have similar cousins on mobile, from search to multiplayer games to virtual worlds to blogs. Mobile has several unique benefits which many mobile media pundits claim make mobile a more powerful medium than either TV or the internet, starting with mobile being permanently carried and always connected. Mobile has the best audience accuracy and is the only mass medium with a built-in payment channel available to every user without any credit cards or PayPal accounts or even an age limit. Mobile is often called the 7th mass medium and either the fourth screen (if counting cinema, TV and PC screens) or the third screen (counting only TV and PC).
A magazine is a periodical publication containing a variety of articles, generally financed by advertising or purchase by readers.
Magazines are typically published weekly, biweekly, monthly, bimonthly or quarterly, with a date on the cover that is in advance of the date it is actually published. They are often printed in color on coated paper, and are bound with a soft cover.
Magazines fall into two broad categories: consumer magazines and business magazines. In practice, magazines are a subset of periodicals, distinct from those periodicals produced by scientific, artistic, academic or special-interest publishers, which are subscription-only, more expensive, narrowly limited in circulation, and often have little or no advertising.
Magazines can be classified as:
A newspaper is a publication containing news, information and advertising, usually printed on low-cost paper called newsprint. It may be general or special interest, and is most often published daily or weekly. The most important function of newspapers is to inform the public of significant events. Local newspapers inform local communities and include advertisements from local businesses and services, while national newspapers tend to focus on a theme, exemplified by "The Wall Street Journal", which offers news on finance- and business-related topics. The first printed newspaper was published in 1605, and the form has thrived even in the face of competition from technologies such as radio and television. Recent developments on the Internet are posing major threats to its business model, however. Paid circulation is declining in most countries, and advertising revenue, which makes up the bulk of a newspaper's income, is shifting from print to online; some commentators, nevertheless, point out that historically new media such as radio and television did not entirely supplant existing media.
The internet has challenged the press as an alternative source of information and opinion but has also provided a new platform for newspaper organizations to reach new audiences. According to the World Trends Report, between 2012 and 2016, print newspaper circulation continued to fall in almost all regions, with the exception of Asia and the Pacific, where the dramatic increase in sales in a few select countries has offset falls in historically strong Asian markets such as Japan and the Republic of Korea. Most notably, between 2012 and 2016, India’s print circulation grew by 89 per cent.
Outdoor media is a form of mass media which comprises billboards, signs, placards placed inside and outside commercial buildings/objects like shops/buses, flying billboards (signs in tow of airplanes), blimps, skywriting, AR Advertising. Many commercial advertisers use this form of mass media when advertising in sports stadiums. Tobacco and alcohol manufacturers used billboards and other outdoor media extensively. However, in 1998, the Master Settlement Agreement between the US and the tobacco industries prohibited the billboard advertising of cigarettes. In a 1994 Chicago-based study, Diana Hackbarth and her colleagues revealed how tobacco- and alcohol-based billboards were concentrated in poor neighbourhoods. In other urban centers, alcohol and tobacco billboards were much more concentrated in African-American neighborhoods than in white neighborhoods.
Mass media encompasses much more than just news, although it is sometimes misunderstood in this way. It can be used for various purposes:
Journalism is the discipline of collecting, analyzing, verifying and presenting information regarding current events, trends, issues and people. Those who practice journalism are known as journalists.
News-oriented journalism is sometimes described as the "first rough draft of history" (attributed to Phil Graham), because journalists often record important events, producing news articles on short deadlines. While under pressure to be first with their stories, news media organizations usually edit and proofread their reports prior to publication, adhering to each organization's standards of accuracy, quality and style. Many news organizations claim proud traditions of holding government officials and institutions accountable to the public, while media critics have raised questions about holding the press itself accountable to the standards of professional journalism.
Public relations is the art and science of managing communication between an organization and its key publics to build, manage and sustain its positive image. Examples include:
Publishing is the industry concerned with the production of literature or information – the activity of making information available for public view. In some cases, authors may be their own publishers.
Traditionally, the term refers to the distribution of printed works such as books and newspapers. With the advent of digital information systems and the Internet, the scope of publishing has expanded to include websites, blogs, and the like.
As a business, publishing includes the development, marketing, production, and distribution of newspapers, magazines, books, literary works, musical works, software, other works dealing with information.
Publication is also important as a legal concept; (1) as the process of giving formal notice to the world of a significant intention, for example, to marry or enter bankruptcy, and; (2) as the essential precondition of being able to claim defamation; that is, the alleged libel must have been published.
A software publisher is a publishing company in the software industry between the developer and the distributor. In some companies, two or all three of these roles may be combined (and indeed, may reside in a single person, especially in the case of shareware).
Software publishers often license software from developers with specific limitations, such as a time limit or geographical region. The terms of licensing vary enormously, and are typically secret.
Developers may use publishers to reach larger or foreign markets, or to avoid focussing on marketing. Or publishers may use developers to create software to meet a market need that the publisher has identified.
A YouTuber is anyone who has become well known by creating and promoting videos on the public video-sharing site YouTube. Many YouTube celebrities have made a profession from their channels through sponsorships, advertisements, product placement, and network support.
The history of mass media can be traced back to the days when dramas were performed in various ancient cultures. This was the first time when a form of media was "broadcast" to a wider audience. The first dated printed book known is the "Diamond Sutra", printed in China in 868 AD, although it is clear that books were printed earlier. Movable clay type was invented in 1041 in China. However, due to the slow spread of literacy to the masses in China, and the relatively high cost of paper there, the earliest printed mass-medium was probably European popular prints from about 1400. Although these were produced in huge numbers, very few early examples survive, and even most known to be printed before about 1600 have not survived. The term "mass media" was coined with the creation of print media, which is notable for being the first example of mass media, as we use the term today. This form of media started in Europe in the Middle Ages.
Johannes Gutenberg's invention of the printing press allowed the mass production of books. He printed the first such book, a Latin Bible, on a printing press with movable type in 1453. The invention of the printing press gave rise to some of the first forms of mass communication by enabling the publication of books and newspapers on a scale much larger than was previously possible. The invention also transformed the way the world received printed materials, although books remained too expensive really to be called a mass medium for at least a century after that. Newspapers developed from about 1612, with the first example in English in 1620; but they took until the 19th century to reach a mass audience directly. The first high-circulation newspapers arose in London in the early 1800s, such as The Times, and were made possible by the invention of high-speed rotary steam printing presses and railroads, which allowed large-scale distribution over wide geographical areas. The increase in circulation, however, led to a decline in feedback and interactivity from the readership, making newspapers a more one-way medium.
The phrase "the media" began to be used in the 1920s. The notion of "mass media" was generally restricted to print media up until the post-Second World War, when radio, television and video were introduced. The audio-visual facilities became very popular, because they provided both information and entertainment, because the colour and sound engaged the viewers/listeners and because it was easier for the general public to passively watch TV or listen to the radio than to actively read. In recent times, the Internet become the latest and most popular mass medium. Information has become readily available through websites, and easily accessible through search engines. One can do many activities at the same time, such as playing games, listening to music, and social networking, irrespective of location. Whilst other forms of mass media are restricted in the type of information they can offer, the internet comprises a large percentage of the sum of human knowledge through such things as Google Books. Modern day mass media includes the internet, mobile phones, blogs, podcasts and RSS feeds.
During the 20th century, the growth of mass media was driven by technology, including that which allowed much duplication of material. Physical duplication technologies such as printing, record pressing and film duplication allowed the duplication of books, newspapers and movies at low prices to huge audiences. Radio and television allowed the electronic duplication of information for the first time. Mass media had the economics of linear replication: a single work could make money proportional to the number of copies sold, and as volumes went up, unit costs went down, increasing profit margins further. Vast fortunes were to be made in mass media. In a democratic society, the media can inform the electorate about issues regarding government and corporate entities (see Media influence). Some consider the concentration of media ownership to be a threat to democracy.
Between 1985 and 2018 about 76,720 deals were announced in the media industry, amounting to an overall value of around 5,634 billion USD. There have been three major waves of M&A in the mass media sector (2000, 2007 and 2015), while the most active year in terms of numbers was 2007, with around 3,808 deals. The U.S. is the most prominent country in media M&A, with 41 of the top 50 deals having an acquirer from the United States.
The largest deal in history was the acquisition of Time Warner by America Online Inc. for 164,746.86 million USD.
Limited-effects theory, originally tested in the 1940s and 1950s, considers that because people usually choose what media to interact with based on what they already believe, media exerts a negligible influence. Class-dominant theory argues that the media reflects and projects the view of a minority elite, which controls it. Culturalist theory, which was developed in the 1980s and 1990s, combines the other two theories and claims that people interact with media to create their own meanings out of the images and messages they receive. This theory states that audience members play an active, rather than passive role in relation to mass media.
One article argues that 90 percent of all mass media, including radio broadcast networks and programming, video news, sports entertainment, and others, are owned by six major companies (GE, News Corp, Disney, Viacom, Time Warner, and CBS). According to Morris Creative Group, these six companies made over 200 billion dollars in revenue in 2010. More diversity is brewing among many companies, but they have recently merged to form an elite which has the power to control the narrative of stories and alter people's beliefs. In the new media-driven age we live in, marketing has more value than ever before because of the various ways it can be implemented. Advertisements can convince citizens to purchase a specific product or have consumers avoid a particular product. The definition of what is acceptable by society can be heavily dictated by the media through the amount of attention an issue receives.
The documentary "Super Size Me" describes how companies like McDonald's have been sued in the past, the plaintiffs claiming that it was the fault of their liminal and subliminal advertising that "forced" them to purchase the product. The Barbie and Ken dolls of the 1950s are sometimes cited as the main cause for the obsession in modern-day society for women to be skinny and men to be buff. After the attacks of 9/11, the media gave extensive coverage of the event and exposed Osama Bin Laden's guilt for the attack, information they were told by the authorities. This shaped the public opinion to support the war on terrorism, and later, the war on Iraq. A main concern is that due to this extreme power of the mass media, portraying inaccurate information could lead to an immense public concern. In his book The Commercialization of American Culture, Matthew P. McAllister says that "a well-developed media system, informing and teaching its citizens, helps democracy move toward its ideal state."
In 1997, J. R. Finnegan Jr. and K. Viswanath identified three main effects or functions of mass media:
Since the 1950s, when cinema, radio and TV began to be the primary or the only source of information for a larger and larger percentage of the population, these media began to be considered central instruments of mass control, to the point that the idea emerged that when a country has reached a high level of industrialization, the country itself "belongs to the person who controls communications."
Mass media play a significant role in shaping public perceptions on a variety of important issues, both through the information that is dispensed through them, and through the interpretations they place upon this information. They also play a large role in shaping modern culture, by selecting and portraying a particular set of beliefs, values, and traditions (an entire way of life), as reality. That is, by portraying a certain interpretation of reality, they shape reality to be more in line with that interpretation. Mass media also play a crucial role in the spread of civil unrest activities such as anti-government demonstrations, riots, and general strikes. That is, the use of radio and television receivers has made the unrest influence among cities not only by the geographic location of cities, but also by proximity within the mass media distribution networks.
Mass media sources, through theories like framing and agenda-setting, can affect the scope of a story as particular facts and information are highlighted (Media influence). This can directly correlate with how individuals may perceive certain groups of people, as the only media coverage a person receives can be very limited and may not reflect the whole story or situation; stories are often covered to reflect a particular perspective to target a specific demographic.
According to Stephen Balkaran, an Instructor of Political Science and African American Studies at Central Connecticut State University, mass media has played a large role in the way white Americans perceive African-Americans. The media's focus on African-Americans in the contexts of crime, drug use, gang violence, and other forms of anti-social behavior has resulted in a distorted and harmful public perception of African-Americans.
In his 1999 article "Mass Media and Racism", Balkaran states: "The media has played a key role in perpetuating the effects of this historical oppression and in contributing to African-Americans' continuing status as second-class citizens". This has resulted in an uncertainty among white Americans as to what the genuine nature of African-Americans really is. Despite the resulting racial divide, the fact that these people are undeniably American has "raised doubts about the white man's value system". This means that there is a somewhat "troubling suspicion" among some Americans that their white America is tainted by the black influence. Mass media as well as propaganda tend to reinforce or introduce stereotypes to the general public.
Lack of local or specific topical focus is a common criticism of mass media. A mass news media outlet is often forced to cover national and international news due to it having to cater for and be relevant for a wide demographic. As such, it has to skip over many interesting or important local stories because they simply do not interest the large majority of their viewers. An example given by the website WiseGeek is that "the residents of a community might view their fight against development as critical, but the story would only attract the attention of the mass media if the fight became controversial or if precedents of some form were set".
The term "mass" suggests that the recipients of media products constitute a vast sea of passive, undifferentiated individuals. This is an image associated with some earlier critiques of "mass culture" and mass society which generally assumed that the development of mass communication has had a largely negative impact on modern social life, creating a kind of bland and homogeneous culture which entertains individuals without challenging them. However, interactive digital media have also been seen to challenge the read-only paradigm of earlier broadcast media.
Whilst some refer to the mass media as the "opiate of the masses", others argue that it is a vital aspect of human societies. By understanding mass media, one is then able to analyse and find a deeper understanding of one's population and culture. This valuable and powerful ability is one reason why the field of media studies is popular. As WiseGeek says, "watching, reading, and interacting with a nation's mass media can provide clues into how people think, especially if a diverse assortment of mass media sources are perused".
Since the 1950s, in the countries that have reached a high level of industrialization, the mass media of cinema, radio and TV have a key role in political power.
Contemporary research demonstrates an increasing level of concentration of media ownership, with many media industries already highly concentrated and dominated by a small number of firms.
Criticism
When the study of mass media began, the media landscape consisted of mass media alone, a very different system from the social-media-saturated environment of the 21st century. With this in mind, some critics argue that mass media no longer exists, or at least that it does not exist in the same form as it once did. This original form of mass media put filters on what the general public would be exposed to as "news", something that is harder to do in a society of social media.
Theorist Lance Bennett explains that, excluding a few major events in recent history, it is uncommon for a group big enough to be labeled a mass to be watching the same news via the same medium of mass production. Bennett's critique of 21st-century mass media argues that today it is more common for a group of people to be receiving different news stories from completely different sources, and thus mass media has been re-invented. As discussed above, filters would have been applied to the original mass media when journalists decided what would or would not be printed.
Social media is a large contributor to the change from mass media to a new paradigm, because through social media the line between mass communication and interpersonal communication is blurred. Interpersonal or niche communication is an exchange of information within a specific genre, in which smaller groups of people consume news, information and opinions. In contrast, mass media in its original form is not restricted by genre and is consumed by the masses.
Mahabharata
The Mahābhārata (, ; , "", ) is one of the two major Sanskrit epics of ancient India, the other being the "Rāmāyaṇa". It narrates the struggle between two groups of cousins in the Kurukshetra War and the fates of the Kaurava and the Pāṇḍava princes and their successors.
It also contains philosophical and devotional material, such as a discussion of the four "goals of life" or "puruṣārtha" (12.161). Among the principal works and stories in the "Mahābhārata" are the "Bhagavad Gita", the story of Damayanti, story of Savitri and Satyavan, an abbreviated version of the "Rāmāyaṇa", and the story of Ṛṣyasringa, often considered as works in their own right.
Traditionally, the authorship of the "Mahābhārata" is attributed to Vyāsa. There have been many attempts to unravel its historical growth and compositional layers. The bulk of the "Mahābhārata" was probably compiled between the 3rd century BCE and the 3rd century CE, with the oldest preserved parts not much older than around 400 BCE. The original events related by the epic probably fall between the 9th and 8th centuries BCE. The text probably reached its final form by the early Gupta period (c. 4th century CE).
The "Mahābhārata" is the longest epic poem known and has been described as "the longest poem ever written". Its longest version consists of over 100,000 "śloka" or over 200,000 individual verse lines (each shloka is a couplet), and long prose passages. At about 1.8 million words in total, the "Mahābhārata" is roughly ten times the length of the "Iliad" and the "Odyssey" combined, or about four times the length of the "Rāmāyaṇa". W. J. Johnson has compared the importance of the "Mahābhārata" in the context of world civilization to that of the Bible, the works of William Shakespeare, the works of Homer, Greek drama, or the Quran. Within the Indian tradition it is sometimes called the fifth Veda.
The epic is traditionally ascribed to the sage Vyāsa, who is also a major character in the epic. Vyāsa described it as being "itihāsa" (Sanskrit: इतिहास, meaning "history"). He also describes the Guru-shishya parampara, which traces all great teachers and their students of the Vedic times.
The first section of the Mahābhārata states that it was Ganesha who wrote down the text to Vyasa's dictation.
The epic employs the story within a story structure, otherwise known as frametales, popular in many Indian religious and non-religious works. It is first recited at "Takshashila" by the sage Vaiśampāyana, a disciple of Vyāsa, to the King Janamejaya who was the great-grandson of the Pāṇḍava prince Arjuna. The story is then recited again by a professional storyteller named Ugraśrava Sauti, many years later, to an assemblage of sages performing the 12-year sacrifice for the king Saunaka Kulapati in the Naimiśa Forest.
The text was described by some early 20th-century Indologists as unstructured and chaotic. Hermann Oldenberg supposed that the original poem must once have carried an immense "tragic force" but dismissed the full text as a "horrible chaos." Moritz Winternitz ("Geschichte der indischen Literatur" 1909) considered that "only unpoetical theologists and clumsy scribes" could have lumped the parts of disparate origin into an unordered whole.
Research on the Mahābhārata has put an enormous effort into recognizing and dating layers within the text. Some elements of the present Mahābhārata can be traced back to Vedic times. The background to the Mahābhārata suggests the origin of the epic occurs "after the very early Vedic period" and before "the first Indian 'empire' was to rise in the third century B.C." That this is "a date not too far removed from the 8th or 9th century B.C." is likely. Mahābhārata started as an orally-transmitted tale of the charioteer bards. It is generally agreed that "Unlike the Vedas, which have to be preserved letter-perfect, the epic was a popular work whose reciters would inevitably conform to changes in language and style," so the earliest 'surviving' components of this dynamic text are believed to be no older than the earliest 'external' references we have to the epic, which may include an allusion in Panini's 4th century BCE grammar Aṣṭādhyāyī 4:2:56. It is estimated that the Sanskrit text probably reached something of a "final form" by the early Gupta period (about the 4th century CE). Vishnu Sukthankar, editor of the first great critical edition of the "Mahābhārata", commented: "It is useless to think of reconstructing a fluid text in a literally original shape, on the basis of an archetype and a "stemma codicum". What then is possible? Our objective can only be to reconstruct "the oldest form of the text which it is possible to reach" on the basis of the manuscript material available." That manuscript evidence is somewhat late, given its material composition and the climate of India, but it is very extensive.
The Mahābhārata itself (1.1.61) distinguishes a core portion of 24,000 verses: the "Bhārata" proper, as opposed to additional secondary material, while the "Aśvalāyana Gṛhyasūtra" (3.4.4) makes a similar distinction. At least three redactions of the text are commonly recognized: "Jaya" (Victory) with 8,800 verses attributed to Vyāsa, "Bhārata" with 24,000 verses as recited by Vaiśampāyana, and finally the Mahābhārata as recited by Ugraśrava Sauti with over 100,000 verses. However, some scholars, such as John Brockington, argue that "Jaya" and "Bhārata" refer to the same text, and ascribe the theory of "Jaya" with 8,800 verses to a misreading of a verse in the "Ādiparvan" (1.1.81).
The redaction of this large body of text was carried out after formal principles, emphasizing the numbers 18 and 12. The addition of the latest parts may be dated by the absence of the "Anuśāsana-parva" and the "Virāta parva" from the "Spitzer manuscript". The oldest surviving Sanskrit text dates to the Kushan Period (200 CE).
According to what one character says at Mbh. 1.1.50, there were three versions of the epic, beginning with "Manu" (1.1.27), "Astika" (1.3, sub-parva 5) or "Vasu" (1.57), respectively. These versions would correspond to the addition of one and then another 'frame' settings of dialogues. The "Vasu" version would omit the frame settings and begin with the account of the birth of Vyasa. The "astika" version would add the "sarpasattra" and "aśvamedha" material from Brahmanical literature, introduce the name "Mahābhārata", and identify Vyāsa as the work's author. The redactors of these additions were probably Pāñcarātrin scholars who according to Oberlies (1998) likely retained control over the text until its final redaction. Mention of the Huna in the "Bhīṣma-parva" however appears to imply that this parva may have been edited around the 4th century.
The Ādi-parva includes the snake sacrifice ("sarpasattra") of Janamejaya, explaining its motivation, detailing why all snakes in existence were intended to be destroyed, and why in spite of this, there are still snakes in existence. This "sarpasattra" material was often considered an independent tale added to a version of the Mahābhārata by "thematic attraction" (Minkowski 1991), and considered to have a particularly close connection to Vedic (Brahmana) literature. The Pañcavimśa Brahmana (at 25.15.3) enumerates the officiant priests of a "sarpasattra" among whom the names Dhṛtarāṣtra and Janamejaya, two main characters of the "Mahābhārata"'s "sarpasattra", as well as Takṣaka, the name of a snake in the "Mahābhārata", occur.
The "Suparṇākhyāna", a late Vedic period poem considered to be among the "earliest traces of epic poetry in India," is an older, shorter precursor to the expanded legend of Garuda that is included in the "Āstīka Parva", within the "Ādi Parva" of the "Mahābhārata".
The earliest known references to the Mahābhārata and its core "Bhārata" date to the "Aṣṭādhyāyī" (sutra 6.2.38) of Pāṇini ("fl." 4th century BCE) and in the "Aśvalāyana Gṛhyasūtra" (3.4.4). This may mean the core 24,000 verses, known as the "Bhārata", as well as an early version of the extended "Mahābhārata", were composed by the 4th century BCE. A report by the Greek writer Dio Chrysostom (c. 40 – c. 120 CE) about Homer's poetry being sung even in India seems to imply that the "Iliad" had been translated into Sanskrit. However, Indian scholars have, in general, taken this as evidence for the existence of a Mahābhārata at this date, whose episodes Dio or his sources identify with the story of the "Iliad".
Several stories within the Mahābhārata took on separate identities of their own in Classical Sanskrit literature. For instance, Abhijñānaśākuntala by the renowned Sanskrit poet Kālidāsa (c. 400 CE), believed to have lived in the era of the Gupta dynasty, is based on a story that is the precursor to the "Mahābhārata". Urubhaṅga, a Sanskrit play written by Bhāsa who is believed to have lived before Kālidāsa, is based on the slaying of Duryodhana by the splitting of his thighs by Bhīma.
The copper-plate inscription of the Maharaja Sharvanatha (533–534 CE) from Khoh (Satna District, Madhya Pradesh) describes the Mahābhārata as a "collection of 100,000 verses" ("śata-sahasri saṃhitā").
The division into 18 parvas is as follows:
The historicity of the Kurukshetra War is unclear. Many historians place the Kurukshetra War in Iron Age India, around the 10th century BCE. The setting of the epic has a historical precedent in Iron Age (Vedic) India, where the Kuru kingdom was the center of political power during roughly 1200 to 800 BCE. A dynastic conflict of the period could have been the inspiration for the "Jaya", the foundation on which the Mahābhārata corpus was built, with a climactic battle eventually coming to be viewed as an epochal event.
Puranic literature presents genealogical lists associated with the Mahābhārata narrative. The evidence of the Puranas is of two kinds. Of the first kind, there is the direct statement that there were 1015 (or 1050) years between the birth of Parikshit (Arjuna's grandson) and the accession of Mahapadma Nanda (400-329 BCE), which would yield an estimate of about 1400 BCE for the Bharata battle. However, this would imply improbably long reigns on average for the kings listed in the genealogies.
Of the second kind are analyses of parallel genealogies in the Puranas between the times of Adhisimakrishna (Parikshit's great-grandson) and Mahapadma Nanda. Pargiter accordingly estimated 26 generations by averaging 10 different dynastic lists and, assuming 18 years for the average duration of a reign, arrived at an estimate of 850 BCE for Adhisimakrishna, and thus approximately 950 BCE for the Bharata battle.
B. B. Lal used the same approach with a more conservative assumption of the average reign to estimate a date of 836 BCE, and correlated this with archaeological evidence from Painted Grey Ware (PGW) sites, the association being strong between PGW artifacts and places mentioned in the epic. John Keay confirms this and also gives 950 BCE for the Bharata battle.
Attempts to date the events using methods of archaeoastronomy have produced, depending on which passages are chosen and how they are interpreted, estimates ranging from the late 4th to the mid-2nd millennium BCE. The late 4th-millennium date has a precedent in the calculation of the Kaliyuga epoch, based on planetary conjunctions, by Aryabhata (6th century). Aryabhata's date of 18 February 3102 BCE for Mahābhārata war has become widespread in Indian tradition. Some sources mark this as the disappearance of Krishna from earth. The Aihole inscription of Pulikeshi II, dated to Saka 556 = 634 CE, claims that 3735 years have elapsed since the Bharata battle, putting the date of Mahābhārata war at 3137 BCE.
Another traditional school of astronomers and historians, represented by Vriddha-Garga, Varahamihira (author of the "Brhatsamhita") and Kalhana (author of the "Rajatarangini"), place the Bharata war 653 years after the Kaliyuga epoch, corresponding to 2449 BCE.
The core story of the work is that of a dynastic struggle for the throne of Hastinapura, the kingdom ruled by the Kuru clan. The two collateral branches of the family that participate in the struggle are the Kaurava and the Pandava. Although the Kaurava is the senior branch of the family, Duryodhana, the eldest Kaurava, is younger than Yudhishthira, the eldest Pandava. Both Duryodhana and Yudhishthira claim to be first in line to inherit the throne.
The struggle culminates in the great battle of Kurukshetra, in which the Pandavas are ultimately victorious. The battle produces complex conflicts of kinship and friendship, instances of family loyalty and duty taking precedence over what is right, as well as the converse.
The Mahābhārata itself ends with the death of Krishna, and the subsequent end of his dynasty and ascent of the Pandava brothers to heaven. It also marks the beginning of the Hindu age of Kali Yuga, the fourth and final age of humankind, in which great values and noble ideas have crumbled, and people are heading towards the complete dissolution of right action, morality and virtue.
King Janamejaya's ancestor Shantanu, the king of Hastinapura, has a short-lived marriage with the goddess Ganga and has a son, Devavrata (later to be called Bhishma, a great warrior), who becomes the heir apparent. Many years later, when King Shantanu goes hunting, he sees Satyavati, the daughter of the chief of fisherman, and asks her father for her hand. Her father refuses to consent to the marriage unless Shantanu promises to make any future son of Satyavati the king upon his death. To resolve his father's dilemma, Devavrata agrees to relinquish his right to the throne. As the fisherman is not sure about the prince's children honouring the promise, Devavrata also takes a vow of lifelong celibacy to guarantee his father's promise.
Shantanu has two sons by Satyavati, Chitrāngada and Vichitravirya. Upon Shantanu's death, Chitrangada becomes king. He lives a very short uneventful life and dies. Vichitravirya, the younger son, rules Hastinapura. Meanwhile, the King of Kāśī arranges a swayamvara for his three daughters, neglecting to invite the royal family of Hastinapur. In order to arrange the marriage of young Vichitravirya, Bhishma attends the swayamvara of the three princesses Amba, Ambika and Ambalika, uninvited, and proceeds to abduct them. Ambika and Ambalika consent to be married to Vichitravirya.
The oldest princess Amba, however, informs Bhishma that she wishes to marry king of Shalva whom Bhishma defeated at their swayamvara. Bhishma lets her leave to marry king of Shalva, but Shalva refuses to marry her, still smarting at his humiliation at the hands of Bhishma. Amba then returns to marry Bhishma but he refuses due to his vow of celibacy. Amba becomes enraged and becomes Bhishma's bitter enemy, holding him responsible for her plight. Later she is reborn to King Drupada as Shikhandi (or Shikhandini) and causes Bhishma's fall, with the help of Arjuna, in the battle of Kurukshetra.
When Vichitravirya dies young without any heirs, Satyavati asks her first son Vyasa to father children with the widows. The eldest, Ambika, shuts her eyes when she sees him, and so her son Dhritarashtra is born blind. Ambalika turns pale and bloodless upon seeing him, and thus her son Pandu is born pale and unhealthy (the term Pandu may also mean 'jaundiced'). Due to the physical challenges of the first two children, Satyavati asks Vyasa to try once again. However, Ambika and Ambalika send their maid instead, to Vyasa's room. Vyasa fathers a third son, Vidura, by the maid. He is born healthy and grows up to be one of the wisest characters in the "Mahabharata". He serves as Prime Minister (Mahamantri or Mahatma) to King Pandu and King Dhritarashtra.
When the princes grow up, Dhritarashtra is about to be crowned king by Bhishma when Vidura intervenes and uses his knowledge of politics to assert that a blind person cannot be king, because a blind man cannot control and protect his subjects. The throne is then given to Pandu because of Dhritarashtra's blindness. Pandu marries twice, to Kunti and Madri. Dhritarashtra marries Gandhari, a princess from Gandhara, who blindfolds herself for the rest of her life so that she may feel the pain that her husband feels. Her brother Shakuni is enraged by this and vows to take revenge on the Kuru family. One day, when Pandu is relaxing in the forest, he hears the sound of a wild animal. He shoots an arrow in the direction of the sound. However, the arrow hits the sage Kindama, who was engaged in a sexual act in the guise of a deer. He curses Pandu that if he engages in a sexual act, he will die. Pandu then retires to the forest along with his two wives, and his brother Dhritarashtra rules thereafter, despite his blindness.
Pandu's older queen Kunti, however, had been given a boon by Sage Durvasa that she could invoke any god using a special mantra. Kunti uses this boon to ask Dharma the god of justice, Vayu the god of the wind, and Indra the lord of the heavens for sons. She gives birth to three sons, Yudhishthira, Bhima, and Arjuna, through these gods. Kunti shares her mantra with the younger queen Madri, who bears the twins Nakula and Sahadeva through the Ashwini twins. However, Pandu and Madri indulge in sex, and Pandu dies. Madri commits Sati out of remorse. Kunti raises the five brothers, who are from then on usually referred to as the Pandava brothers.
Dhritarashtra has a hundred sons through Gandhari, all born after the birth of Yudhishthira. These are the Kaurava brothers, the eldest being Duryodhana, and the second Dushasana. Other Kaurava brothers were Vikarna and Sukarna. The rivalry and enmity between them and the Pandava brothers, from their youth and into manhood, leads to the Kurukshetra war.
After the deaths of their mother (Madri) and father (Pandu), the Pandavas and their mother Kunti return to the palace of Hastinapur. Yudhishthira is made Crown Prince by Dhritarashtra, under considerable pressure from his courtiers. Dhritarashtra wanted his own son Duryodhana to become king and lets his ambition get in the way of preserving justice.
Shakuni, Duryodhana and Dushasana plot to get rid of the Pandavas. Shakuni calls the architect Purochana to build a palace out of flammable materials like lac and ghee. He then arranges for the Pandavas and the Queen Mother Kunti to stay there, with the intention of setting it alight. However, the Pandavas are warned by their wise uncle, Vidura, who sends them a miner to dig a tunnel. They are able to escape to safety and go into hiding. During this time Bhima marries a demoness Hidimbi and has a son Ghatotkacha. Back in Hastinapur, the Pandavas and Kunti are presumed dead.
While they are in hiding, the Pandavas learn of a swayamvara which is taking place for the hand of the Pāñcāla princess Draupadī. The Pandavas, disguised as Brahmins, come to witness the event. Meanwhile, Krishna, who has already befriended Draupadi, tells her to look out for Arjuna (though now believed to be dead). The task was to string a mighty steel bow and shoot a target on the ceiling, which was the eye of a moving artificial fish, while looking at its reflection in oil below. In popular versions, after all the princes fail, many being unable to lift the bow, Karna proceeds to the attempt but is interrupted by Draupadi, who refuses to marry a suta (this has been excised from the Critical Edition of the Mahabharata as a later interpolation). After this the swayamvara is opened to the Brahmins, leading Arjuna to win the contest and marry Draupadi. The Pandavas return home and inform their meditating mother that Arjuna has won a competition and to look at what they have brought back. Without looking, Kunti asks them to share whatever Arjuna has won amongst themselves, thinking it to be alms. Thus, Draupadi ends up being the wife of all five brothers.
After the wedding, the Pandava brothers are invited back to Hastinapura. The Kuru family elders and relatives negotiate and broker a split of the kingdom, with the Pandavas demanding and obtaining only a wild forest inhabited by Takshaka, the king of snakes, and his family. Through hard work the Pandavas are able to build a new glorious capital for the territory at Indraprastha.
Shortly after this, Arjuna elopes with and then marries Krishna's sister, Subhadra. Yudhishthira wishes to establish his position as king; he seeks Krishna's advice. Krishna advises him, and after due preparation and the elimination of some opposition, Yudhishthira carries out the "rājasūya yagna" ceremony; he is thus recognised as pre-eminent among kings.
The Pandavas have a new palace built for them, by Maya the Danava. They invite their Kaurava cousins to Indraprastha. Duryodhana walks round the palace, and mistakes a glossy floor for water, and will not step in. After being told of his error, he then sees a pond, and assumes it is not water and falls in. Bhima, Arjun, the twins and the servants laugh at him. In popular adaptations, this insult is wrongly attributed to Draupadi, even though in the Sanskrit epic, it was the Pandavas (except Yudhishthira) who had insulted Duryodhana. Enraged by the insult, and jealous at seeing the wealth of the Pandavas, Duryodhana decides to host a dice-game at Shakuni's suggestion.
Shakuni, Duryodhana's uncle, now arranges a dice game, playing against Yudhishthira with loaded dice. In the dice game, Yudhishthira loses all his wealth, then his kingdom. Yudhishthira then gambles his brothers, himself, and finally his wife into servitude. The jubilant Kauravas insult the Pandavas in their helpless state and even try to disrobe Draupadi in front of the entire court, but the disrobing is prevented by Krishna, who miraculously makes her dress endless so that it cannot be removed.
Dhritarashtra, Bhishma, and the other elders are aghast at the situation, but Duryodhana is adamant that there is no place for two crown princes in Hastinapura. Against his wishes, Dhritarashtra orders another dice game. The Pandavas are required to go into exile for 12 years, and in the 13th year, they must remain hidden. If they are discovered by the Kauravas in the 13th year of their exile, they will be forced into exile for another 12 years.
The Pandavas spend thirteen years in exile; many adventures occur during this time. The Pandavas acquire many divine weapons, given by gods, during this period. They also prepare alliances for a possible future conflict. They spend their final year in disguise in the court of king Virata, and they are discovered just after the end of the year.
At the end of their exile, they try to negotiate a return to Indraprastha with Krishna as their emissary. However, this negotiation fails, because Duryodhana objected that they were discovered in the 13th year of their exile and the return of their kingdom was not agreed. Then the Pandavas fought the Kauravas, claiming their rights over Indraprastha.
The two sides summon vast armies to their help and line up at Kurukshetra for a war. The kingdoms of Panchala, Dwaraka, Kasi, Kekaya, Magadha, Matsya, Chedi, Pandyas, Telinga, and the Yadus of Mathura and some other clans like the Parama Kambojas were allied with the Pandavas. The allies of the Kauravas included the kings of Pragjyotisha, Anga, Kekaya, Sindhudesa (including Sindhus, Sauviras and Sivis), Mahishmati, Avanti in Madhyadesa, Madra, Gandhara, Bahlika people, Kambojas and many others. Before war is declared, Balarama expresses his unhappiness at the developing conflict and leaves to go on a pilgrimage; thus he does not take part in the battle itself. Krishna takes part in a non-combatant role, as charioteer for Arjuna.
Before the battle, Arjuna, noticing that the opposing army includes his own cousins and relatives, including his grandfather Bhishma and his teacher Drona, has grave doubts about the fight. He falls into despair and refuses to fight. At this time, Krishna reminds him of his duty as a Kshatriya to fight for a righteous cause in the famous Bhagavad Gita section of the epic.
Though initially sticking to chivalrous notions of warfare, both sides soon adopt dishonourable tactics. At the end of the 18-day battle, only the Pandavas, Satyaki, Kripa, Ashwatthama, Kritavarma, Yuyutsu and Krishna survive. Yudhishthira becomes king of Hastinapura, and Gandhari curses Krishna that the downfall of his clan is imminent.
After "seeing" the carnage, Gandhari, who had lost all her sons, curses Krishna to be a witness to a similar annihilation of his family, for though divine and capable of stopping the war, he had not done so. Krishna accepts the curse, which bears fruit 36 years later.
The Pandavas, who had ruled their kingdom meanwhile, decide to renounce everything. Clad in skins and rags they retire to the Himalaya and climb towards heaven in their bodily form. A stray dog travels with them. One by one the brothers and Draupadi fall on their way. As each one stumbles, Yudhishthira gives the rest the reason for their fall (Draupadi was partial to Arjuna, Nakula and Sahadeva were vain and proud of their looks, and Bhima and Arjuna were proud of their strength and archery skills, respectively). Only the virtuous Yudhishthira, who had tried everything to prevent the carnage, and the dog remain. The dog reveals himself to be the god Yama (also known as Yama Dharmaraja), and then takes him to the underworld where he sees his siblings and wife. After explaining the nature of the test, Yama takes Yudhishthira back to heaven and explains that it was necessary to expose him to the underworld because (Rajyante narakam dhruvam) any ruler has to visit the underworld at least once. Yama then assures him that his siblings and wife would join him in heaven after they had been exposed to the underworld for measures of time according to their vices.
Arjuna's grandson Parikshit rules after them and dies bitten by a snake. His furious son, Janamejaya, decides to perform a snake sacrifice ("sarpasattra") in order to destroy the snakes. It is at this sacrifice that the tale of his ancestors is narrated to him.
The Mahābhārata mentions that Karna, the Pandavas, Draupadi and Dhritarashtra's sons eventually ascended to svarga and "attained the state of the gods", and banded together – "serene and free from anger".
The "Mahābhārata" offers one of the first instances of theorizing about "dharmayuddha", "just war", illustrating many of the standards that would be debated later across the world. In the story, one of five brothers asks if the suffering caused by war can ever be justified. A long discussion ensues between the siblings, establishing criteria like "proportionality" (chariots cannot attack cavalry, only other chariots; no attacking people in distress), "just means" (no poisoned or barbed arrows), "just cause" (no attacking out of rage), and fair treatment of captives and the wounded.
Between 1919 and 1966, scholars at the Bhandarkar Oriental Research Institute, Pune, compared the various manuscripts of the epic from India and abroad and produced the "Critical Edition" of the "Mahabharata", on 13,000 pages in 19 volumes, followed by the "Harivamsha" in another two volumes and six index volumes. This is the text that is usually used in current Mahābhārata studies for reference. This work is sometimes called the "Pune" or "Poona" edition of the "Mahabharata".
Many regional versions of the work developed over time, mostly differing only in minor details, or with verses or subsidiary stories being added. These include the Tamil street theatre, terukkuttu and kattaikkuttu, the plays of which use themes from the Tamil language versions of "Mahabharata", focusing on Draupadi.
Outside the Indian subcontinent, in Indonesia, a version was developed in ancient Java as Kakawin Bhāratayuddha in the 11th century under the patronage of King Dharmawangsa (990–1016), and it later spread to the neighboring island of Bali, which remains a Hindu-majority island today. It has become the fertile source for Javanese literature, dance drama (wayang wong), and wayang shadow puppet performances. This Javanese version of the Mahābhārata differs slightly from the original Indian version. For example, Draupadi is only wed to Yudhishthira, not to all the Pandava brothers; this might demonstrate ancient Javanese opposition to polyandry. The author later added some female characters to be wed to the Pandavas; for example, Arjuna is described as having many wives and consorts next to Subhadra. Another difference is that Shikhandini does not change her sex and remains a woman, to be wed to Arjuna, and takes the role of a warrior princess during the war. Another twist is that Gandhari is described as an antagonistic character who hates the Pandavas: her hate is out of jealousy, because during Gandhari's swayamvara she was in love with Pandu but was later wed to his blind elder brother instead, whom she did not love, so she blindfolded herself in protest. Another notable difference is the inclusion of the Punakawans, the clown servants of the main characters, in the storyline. These characters include Semar, Petruk, Gareng and Bagong, who are much loved by Indonesian audiences. There are also some spin-off episodes developed in ancient Java, such as Arjunawiwaha, composed in the 11th century.
A Kawi version of the "Mahabharata", of which eight of the eighteen "parvas" survive, is found on the Indonesian island of Bali. It has been translated into English by Dr. I. Gusti Putu Phalgunadi.
A Persian translation of the "Mahabharata", titled "Razmnameh", was produced at Akbar's orders by Faizi and ʽAbd al-Qadir Badayuni in the 16th century.
The first complete English translation was the Victorian prose version by Kisari Mohan Ganguli, published between 1883 and 1896 (Munshiram Manoharlal Publishers) and by M. N. Dutt (Motilal Banarsidass Publishers). Most critics consider the translation by Ganguli to be faithful to the original text. The complete text of Ganguli's translation is in the public domain and is available online.
Another English prose translation of the full epic, based on the "Critical Edition", is in progress, published by University of Chicago Press. It was initiated by Indologist J. A. B. van Buitenen (books 1–5) and, following a 20-year hiatus caused by the death of van Buitenen, is being continued by D. Gitomer of DePaul University (book 6), J. L. Fitzgerald of Brown University (books 11–13) and Wendy Doniger of the University of Chicago (books 14–18).
An early poetry translation by Romesh Chunder Dutt and published in 1898 condenses the main themes of the Mahābhārata into English verse. A later poetic "transcreation" (author's own description) of the full epic into English, done by the poet P. Lal, is complete, and in 2005 began being published by Writers Workshop, Calcutta. The P. Lal translation is a non-rhyming verse-by-verse rendering, and is the only edition in any language to include all slokas in all recensions of the work (not just those in the "Critical Edition"). The completion of the publishing project is scheduled for 2010. Sixteen of the eighteen volumes are now available.
A project to translate the full epic into English prose, translated by various hands, began to appear in 2005 from the Clay Sanskrit Library, published by New York University Press. The translation is based not on the "Critical Edition" but on the version known to the commentator Nīlakaṇṭha. Currently available are 15 volumes of the projected 32-volume edition.
Indian economist Bibek Debroy has also begun an unabridged English translation in ten volumes. Volume 1: Adi Parva was published in March 2010.
Many condensed versions, abridgements and novelistic prose retellings of the complete epic have been published in English, including works by Ramesh Menon, William Buck, R. K. Narayan, C. Rajagopalachari, K. M. Munshi, Krishna Dharma, Romesh C. Dutt, Bharadvaja Sarma, John D. Smith and Sharon Maas.
Bhasa, the 2nd- or 3rd-century CE Sanskrit playwright, wrote two plays on episodes in the "Mahabharata": "Urubhanga" ("Broken Thigh"), about the fight between Duryodhana and Bhima, and "Madhyamavyayoga" ("The Middle One"), set around Bhima and his son, Ghatotkacha. The first important play of the 20th century was "Andha Yug" ("The Blind Epoch") by Dharamvir Bharati, which appeared in 1955 and found in the "Mahabharat" both an ideal source for and an expression of modern predicaments and discontent. Starting with Ebrahim Alkazi, it was staged by numerous directors. V. S. Khandekar's Marathi novel "Yayati" (1960) and Girish Karnad's debut play "Yayati" (1961) are based on the story of King Yayati found in the "Mahabharat". The Bengali writer and playwright Buddhadeva Bose wrote three plays set in the Mahabharat: "Anamni Angana", "Pratham Partha" and "Kalsandhya". Pratibha Ray wrote an award-winning novel entitled "Yajnaseni" from Draupadi's perspective in 1984. Later, Chitra Banerjee Divakaruni wrote a similar novel, "The Palace of Illusions", in 2008. The Gujarati poet Chinu Modi has written the long narrative poem "Bahuk", based on the character Bahuka. Krishna Udayasankar, a Singapore-based Indian author, has written several novels which are modern-day retellings of the epic, most notably the Aryavarta Chronicles series. Suman Pokhrel wrote a solo play based on Ray's novel, personalizing Draupadi and placing her alone on the stage.
Amar Chitra Katha published a 1,260-page comic book version of the "Mahabharata".
In Indian cinema, several film versions of the epic have been made, dating back to 1920. The Mahābhārata was also reinterpreted by Shyam Benegal in "Kalyug". Prakash Jha's 2010 film "Raajneeti" was partially inspired by the "Mahabharata". A 2013 animated adaptation holds the record for India's most expensive animated film.
In 1988, B. R. Chopra created a television series named "Mahabharat". It was directed by Ravi Chopra and televised on India's national television (Doordarshan). In the same year that "Mahabharat" was being shown on Doordarshan, that channel's other television show, "Bharat Ek Khoj", directed by Shyam Benegal, devoted two episodes to an abbreviated "Mahabharata", drawing on various interpretations of the work, whether sung, danced, or staged. In the Western world, a well-known presentation of the epic is Peter Brook's nine-hour play, which premiered in Avignon in 1985, and its five-hour film version "The Mahābhārata" (1989). In late 2013 "Mahabharat" was televised on STAR Plus; it was produced by Swastik Productions Pvt. Ltd.
Uncompleted projects on the Mahābhārata include one by Rajkumar Santoshi, and a theatrical adaptation planned by Satyajit Ray.
Jain versions of the Mahābhārata can be found in various Jain texts such as "Harivamsapurana" (the story of Harivamsa), "Trisastisalakapurusa Caritra" (hagiography of 63 illustrious persons), "Pandavacaritra" (lives of the Pandavas) and "Pandavapurana" (stories of the Pandavas). From the earlier canonical literature, "Antakrddaaśāh" (8th canon) and "Vrisnidasa" ("upangagama", or secondary canon) contain the stories of Neminatha (the 22nd Tirthankara), Krishna and Balarama. Prof. Padmanabh Jaini notes that, unlike in the Hindu Puranas, the names Baladeva and Vasudeva are not restricted to Balarama and Krishna in the Jain puranas. Instead they serve as names of two distinct classes of mighty brothers, who appear nine times in each half of the time cycles of Jain cosmology and rule half the earth as half-chakravartins. Jaini traces the origin of this list of brothers to the Jinacharitra by Bhadrabahu swami (4th–3rd century BCE). According to Jain cosmology, Balarama, Krishna and Jarasandha are the ninth and last set of Baladeva, Vasudeva, and Prativasudeva. The main battle is not the Mahabharata war, but the fight between Krishna and Jarasandha (who is killed by Krishna). Ultimately, the Pandavas and Balarama take renunciation as Jain monks and are reborn in the heavens, while Krishna and Jarasandha are reborn in hell. In keeping with the law of karma, Krishna is reborn in hell for his exploits (sexual and violent), while Jarasandha is reborn there for his evil ways. Prof. Jaini admits the possibility that, perhaps because of his popularity, Jain authors were keen to rehabilitate Krishna. The Jain texts predict that after his karmic term in hell is over, sometime during the next half time-cycle, Krishna will be reborn as a Jain Tirthankara and attain liberation. Krishna and Balarama are shown as contemporaries and cousins of the 22nd Tirthankara, Neminatha. According to this story, Krishna arranged young Neminath's marriage with Rajamati, the daughter of Ugrasena, but Neminatha, empathizing with the animals which were to be slaughtered for the marriage feast, left the procession suddenly and renounced the world.
In the "Bhagavad Gita", Krishna explains to Arjuna his duties as a warrior and prince and elaborates on different Yogic and Vedantic philosophies, with examples and analogies. This has led to the Gita often being described as a concise guide to Hindu philosophy and a practical, self-contained guide to life. In more modern times, Swami Vivekananda, Netaji Subhas Chandra Bose, Bal Gangadhar Tilak, Mahatma Gandhi and many others used the text to help inspire the Indian independence movement.
|
https://en.wikipedia.org/wiki?curid=19643
|
Mein Kampf
Mein Kampf (; "My Struggle" or "My Fight") is a 1925 autobiographical manifesto by Nazi Party leader Adolf Hitler. The work describes the process by which Hitler became antisemitic and outlines his political ideology and future plans for Germany. Volume 1 of "Mein Kampf" was published in 1925 and Volume 2 in 1926. The book was edited first by Emil Maurice, then by Hitler's deputy Rudolf Hess.
Hitler began "Mein Kampf" while imprisoned for what he considered to be "political crimes" following his failed Putsch in Munich in November 1923. Although he received many visitors initially, he soon devoted himself entirely to the book. As he continued, he realized that it would have to be a two-volume work, with the first volume scheduled for release in early 1925. The governor of Landsberg noted at the time that "he [Hitler] hopes the book will run into many editions, thus enabling him to fulfill his financial obligations and to defray the expenses incurred at the time of his trial." After slow initial sales, the book became a bestseller in Germany following Hitler's rise to power in 1933.
After Hitler's death, copyright of "Mein Kampf" passed to the state government of Bavaria, which refused to allow any copying or printing of the book in Germany. In 2016, following the expiration of the copyright held by the Bavarian state government, "Mein Kampf" was republished in Germany for the first time since 1945, which prompted public debate and divided reactions from Jewish groups.
Hitler originally wanted to call his forthcoming book "Viereinhalb Jahre (des Kampfes) gegen Lüge, Dummheit und Feigheit", or "Four and a Half Years (of Struggle) Against Lies, Stupidity and Cowardice". Max Amann, head of the Franz Eher Verlag and Hitler's publisher, is said to have suggested the much shorter "Mein Kampf" or "My Struggle".
In "Mein Kampf", Hitler used the main thesis of "the Jewish peril", which posits a Jewish conspiracy to gain world leadership. The narrative describes the process by which he became increasingly antisemitic and militaristic, especially during his years in Vienna. He speaks of not having met a Jew until he arrived in Vienna, and that at first his attitude was liberal and tolerant. When he first encountered the antisemitic press, he says, he dismissed it as unworthy of serious consideration. Later he accepted the same antisemitic views, which became crucial to his program of national reconstruction of Germany.
"Mein Kampf" has also been studied as a work on political theory. For example, Hitler announces his hatred of what he believed to be the world's two evils: Communism and Judaism.
In the book Hitler blamed Germany's chief woes on the parliament of the Weimar Republic, the Jews, and Social Democrats, as well as Marxists, though he believed that Marxists, Social Democrats, and the parliament were all working for Jewish interests. He announced that he wanted to completely destroy the parliamentary system, believing it to be corrupt in principle, as those who reach power are inherent opportunists.
While historians dispute the exact date Hitler decided to exterminate the Jewish people, few place the decision before the mid-1930s. First published in 1925, "Mein Kampf" shows Hitler's personal grievances and his ambitions for creating a New Order. Hitler also wrote that "The Protocols of the Elders of Zion", a fabricated text which purported to expose the Jewish plot to control the world, was an authentic document. This later became a part of the Nazi propaganda effort to justify persecution and annihilation of the Jews.
The historian Ian Kershaw points out that several passages in "Mein Kampf" are undeniably of a genocidal nature. Hitler wrote "the nationalization of our masses will succeed only when, aside from all the positive struggle for the soul of our people, their international poisoners are exterminated", and he suggested that, "If at the beginning of the war and during the war twelve or fifteen thousand of these Hebrew corrupters of the nation had been subjected to poison gas, such as had to be endured in the field by hundreds of thousands of our very best German workers of all classes and professions, then the sacrifice of millions at the front would not have been in vain."
The racial laws to which Hitler referred resonate directly with his ideas in "Mein Kampf". In the first edition, Hitler stated that the destruction of the weak and sick is far more humane than their protection. Apart from this allusion to humane treatment, Hitler saw a purpose in destroying "the weak" in order to provide the proper space and purity for the "strong".
In the chapter "Eastern Orientation or Eastern Policy", Hitler argued that the Germans needed Lebensraum in the East, a "historic destiny" that would properly nurture the German people. Hitler believed that "the organization of a Russian state formation was not the result of the political abilities of the Slavs in Russia, but only a wonderful example of the state-forming efficacy of the German element in an inferior race."
In "Mein Kampf" Hitler openly stated the future German expansion in the East, foreshadowing Generalplan Ost:
Although Hitler originally wrote "Mein Kampf" mostly for the followers of National Socialism, it grew in popularity after he rose to power. (Two other books written by party members, Gottfried Feder's "Breaking the Interest Slavery" and Alfred Rosenberg's "The Myth of the Twentieth Century", have since lapsed into comparative literary obscurity.) Hitler had made about 1.2 million Reichsmarks in income from the book by 1933, when the average annual income of a teacher was about 4,800 Marks. He accumulated a tax debt of 405,500 Reichsmarks (very roughly £1.1 million, €1.4 million, or US$1.5 million at 2015 values) from the sale of about 240,000 copies before he became chancellor in 1933 (at which time his debt was waived).
Hitler began to distance himself from the book after becoming chancellor of Germany in 1933. He dismissed it as "fantasies behind bars" that were little more than a series of articles for the "Völkischer Beobachter", and later told Hans Frank that "If I had had any idea in 1924 that I would have become Reich chancellor, I never would have written the book." Nevertheless, "Mein Kampf" was a bestseller in Germany during the 1930s. During Hitler's years in power, the book was in high demand in libraries and often reviewed and quoted in other publications. It was given free to every newlywed couple and every soldier fighting at the front. By 1939 it had sold 5.2 million copies in eleven languages. By the end of the war, about 10 million copies of the book had been sold or distributed in Germany.
"Mein Kampf", in essence, lays out the ideological program Hitler established for the German revolution, by identifying the Jews and "Bolsheviks" as racially and ideologically inferior and threatening, and "Aryans" and National Socialists as racially superior and politically progressive. Hitler's revolutionary goals included expulsion of the Jews from Greater Germany and the unification of German peoples into one Greater Germany. Hitler desired to restore German lands to their greatest historical extent, real or imagined.
Due to its racist content and the historical effect of Nazism upon Europe during World War II and the Holocaust, it is considered a highly controversial book. Criticism has not come solely from opponents of Nazism. Italian Fascist dictator and Nazi ally Benito Mussolini was also critical of the book, saying that it was "a boring tome that I have never been able to read" and remarking that Hitler's beliefs, as expressed in the book, were "little more than commonplace clichés".
The German journalist Konrad Heiden, an early critic of the Nazi Party, observed that the content of "Mein Kampf" is essentially a political argument with other members of the Nazi Party who had appeared to be Hitler's friends, but whom he was actually denouncing in the book's content – sometimes by not even including references to them.
The American literary theorist and philosopher Kenneth Burke wrote a 1939 rhetorical analysis of the work, "The Rhetoric of Hitler's "Battle"", which revealed an underlying message of aggressive intent.
The American journalist John Gunther said in 1940 that compared to autobiographies such as Leon Trotsky's "My Life" or Henry Adams's "The Education of Henry Adams", "Mein Kampf" was "vapid, vain, rhetorical, diffuse, prolix." However, he added that "it is a powerful and moving book, the product of great passionate feeling". He suggested that the book exhausted curious German readers, but its "ceaseless repetition of the argument, left impregnably in their minds, fecund and germinating".
In March 1940, British writer George Orwell reviewed a then-recently published uncensored translation of "Mein Kampf" for "The New English Weekly". Orwell suggested that the force of Hitler's personality shone through the often "clumsy" writing, capturing the magnetic allure of Hitler for many Germans. In essence, Orwell notes, Hitler offers only visions of endless struggle and conflict in the creation of "a horrible brainless empire" that "stretch[es] to Afghanistan or thereabouts". He wrote, "Whereas Socialism, and even capitalism in a more grudging way, have said to people 'I offer you a good time,' Hitler has said to them, 'I offer you struggle, danger, and death,' and as a result a whole nation flings itself at his feet." Orwell's review was written in the aftermath of the 1939 Molotov–Ribbentrop Pact, when Hitler made peace with the USSR after more than a decade of vitriolic rhetoric and threats between the two nations; with the pact in place, Orwell believed, Britain now faced the risk of a Nazi attack and must not underestimate the appeal of Hitler's ideas.
In his 1943 book "The Menace of the Herd", Austrian scholar Erik von Kuehnelt-Leddihn described Hitler's ideas in "Mein Kampf" and elsewhere as "a veritable "reductio ad absurdum" of 'progressive' thought" and betraying "a curious lack of original thought" that shows Hitler offered no innovative or original ideas but was merely "a "virtuoso" of commonplaces which he may or may not repeat in the guise of a 'new discovery.'" Hitler's stated aim, Kuehnelt-Leddihn writes, is to quash individualism in furtherance of political goals:
In his "The Second World War", published in several volumes in the late 1940s and early 1950s, Winston Churchill wrote that he felt that after Hitler's ascension to power, no other book than "Mein Kampf" deserved more intensive scrutiny.
The critic George Steiner has suggested that "Mein Kampf" can be seen as one of several books that resulted from the crisis of German culture following Germany's defeat in World War I, comparable in this respect to the philosopher Ernst Bloch's "The Spirit of Utopia" (1918), the historian Oswald Spengler's "The Decline of the West" (1918), the theologian Franz Rosenzweig's "The Star of Redemption" (1921), the theologian Karl Barth's "The Epistle to the Romans" (1922), and the philosopher Martin Heidegger's "Being and Time" (1927).
While Hitler was in power (1933–1945), "Mein Kampf" came to be available in three common editions. The first, the "Volksausgabe" or People's Edition, featured the original cover on the dust jacket and was navy blue underneath with a gold swastika eagle embossed on the cover. The "Hochzeitsausgabe", or Wedding Edition, in a slipcase with the seal of the province embossed in gold onto a parchment-like cover, was given free to marrying couples. In 1940, the "Tornister-Ausgabe", or Knapsack Edition, was released. This edition was a compact, but unabridged, version in a red cover and was released by the post office, available to be sent to loved ones fighting at the front. These three editions combined both volumes into the same book.
A special edition was published in 1939 in honour of Hitler's 50th birthday. This edition was known as the "Jubiläumsausgabe", or Anniversary Issue. It came in both dark blue and bright red boards with a gold sword on the cover. This work contained both volumes one and two. It was considered a deluxe version, relative to the smaller and more common "Volksausgabe".
The book could also be purchased as a two-volume set during Hitler's rule, and was available in soft cover and hardcover. The soft cover edition contained the original cover (as pictured at the top of this article). The hardcover edition had a leather spine with cloth-covered boards. The cover and spine contained an image of three brown oak leaves.
At the time of his suicide, Hitler's official place of residence was in Munich, which led to his entire estate, including all rights to "Mein Kampf", passing to the ownership of the state of Bavaria. The government of Bavaria, in agreement with the federal government of Germany, refused to allow any copying or printing of the book in Germany. It also opposed copying and printing in other countries, but with less success. Under German copyright law, the entire text entered the public domain on 1 January 2016, 70 years after the author's death.
Owning and buying the book in Germany is not an offence. Trading in old copies is lawful as well, unless it is done in such a fashion as to "promote hatred or war." In particular, the unmodified edition is not covered by §86 StGB that forbids dissemination of means of propaganda of unconstitutional organizations, since it is a "pre-constitutional work" and as such cannot be opposed to the free and democratic basic order, according to a 1979 decision of the Federal Court of Justice of Germany. Most German libraries carry heavily commented and excerpted versions of "Mein Kampf." In 2008, Stephan Kramer, secretary-general of the Central Council of Jews in Germany, not only recommended lifting the ban, but volunteered the help of his organization in editing and annotating the text, saying that it is time for the book to be made available to all online.
A variety of restrictions or special circumstances apply in other countries.
In 1934, the French government unofficially sponsored the publication of an unauthorized translation. It was meant as a warning and included a critical introduction by Marshal Lyautey ("Every Frenchman must read this book"). It was published by the far-right publisher Fernand Sorlot in an agreement with the activists of LICRA, who bought 5,000 copies to be offered to "influential people"; however, most of them treated the book as a casual gift and did not read it. The Nazi regime tried unsuccessfully to have it forbidden. Hitler, as the author, and Eher-Verlag, his German publisher, sued for copyright infringement in the Commercial Court of France. Hitler's lawsuit succeeded in having all copies seized, the print run broken up, and an injunction issued against booksellers offering any copies. However, a large quantity of books had already been shipped and remained available under the counter from Sorlot.
In 1938, Hitler licensed for France an authorized edition by Fayard, translated by François Dauture and Georges Blond, which lacked the original's threatening tone toward France. The French edition was 347 pages long, while the original ran to 687 pages, and it was titled "Ma doctrine" ("My Doctrine").
After the war, Fernand Sorlot re-edited, re-issued, and continued to sell the work, without permission from the state of Bavaria, to which the author's rights had defaulted.
In the 1970s, the rise of the extreme right in France, along with the growing number of Holocaust-denial works, placed "Mein Kampf" under judicial watch, and in 1978 LICRA entered a complaint in the courts against the publisher for inciting antisemitism. Sorlot received a "substantial fine", but the court also granted him the right to continue publishing the work, provided certain warnings and qualifiers accompanied the text.
On 1 January 2016, seventy years after the author's death, "Mein Kampf" entered the public domain in France.
A new edition was published in 2017 by Fayard, now part of the Groupe Hachette, with a critical introduction, like the edition published in 2016 in Germany by the "Institut für Zeitgeschichte", the Institute of Contemporary History based in Munich.
Since its first publication in India in 1928, "Mein Kampf" has gone through hundreds of editions and sold over 100,000 copies.
On 5 May 1995 a translation of "Mein Kampf" released by a small Latvian publishing house, "Vizītkarte", began appearing in bookstores, provoking a reaction from the Latvian authorities, who confiscated the approximately 2,000 copies that had made their way to bookstores and charged the director of the publishing house, Pēteris Lauva, with offences under the anti-racism law. Publication of "Mein Kampf" is currently forbidden in Latvia.
In April 2018 a number of Russian-language news sites (Baltnews, Zvezda, Sputnik, Komsomolskaya Pravda and Komprava among others) reported that Adolf Hitler had allegedly become more popular in Latvia than Harry Potter, referring to a Latvian online book trading platform ibook.lv, where "Mein Kampf" had appeared at the No. 1 position in "The Most Current Books in 7 Days" list.
Polygraph.info, which called the claim "false", found that ibook.lv was only the 878th most popular website and the 149th most popular shopping site in Latvia at the time, according to Alexa Internet. In addition, the website had only four copies on sale from individual users and no users wishing to purchase the book. The owner of ibook.lv pointed out that the book list was based not on actual sales but on page views, of which 70% in the case of "Mein Kampf" had come from anonymous and unregistered users she believed could be fake. The Ambassador of Latvia to the Russian Federation, Māris Riekstiņš, responded to the story by tweeting "everyone, who wishes to know what books are actually bought and read in Latvia, are advised to address the largest book stores @JanisRoze; @valtersunrapa; @zvaigzneabc". The BBC also judged the story to be fake news, adding that in the previous three years "Mein Kampf" had been requested for borrowing only 139 times across all libraries in Latvia, compared with around 25,000 requests for books about Harry Potter.
In the Netherlands the sale of "Mein Kampf" had been forbidden since World War II. In September 2018, however, the Dutch publisher Prometheus officially released an academic edition of the 2016 German translation with comprehensive introductions and annotations by Dutch historians. This marked the first time since World War II that the book had been widely available to the general public in the Netherlands.
In the Russian Federation, "Mein Kampf" has been published at least three times since 1992; the Russian text is also available on websites. In 2006 the Public Chamber of Russia proposed banning the book. In 2009 the St. Petersburg branch of the Russian Ministry of Internal Affairs requested the removal of an annotated and hyperlinked Russian translation of the book from a historiography website. On 13 April 2010, it was announced that "Mein Kampf" had been outlawed on grounds of promoting extremism.
"Mein Kampf" has been reprinted several times since 1945; in 1970, 1992, 2002 and 2010. In 1992 the Government of Bavaria tried to stop the publication of the book, and the case went to the Supreme Court of Sweden which ruled in favour of the publisher, stating that the book is protected by copyright, but that the copyright holder is unidentified (and not the State of Bavaria) and that the original Swedish publisher from 1934 had gone out of business. It therefore refused the Government of Bavaria's claim.
The only translation changes came in the 1970 edition, but they were only linguistic, based on a new Swedish standard.
"Mein Kampf" was widely available and growing in popularity in Turkey, even to the point where it became a bestseller, selling up to 100,000 copies in just two months in 2005. Analysts and commentators believe the popularity of the book to be related to a rise in nationalism and anti-U.S. sentiment. A columnist in Shalom stated this was a result of "what is happening in the Middle East, the Israeli-Palestinian problem and the war in Iraq." Doğu Ergil, a political scientist at Ankara University, said both far-right ultranationalists and extremist Islamists had found common ground - "not on a common agenda for the future, but on their anxieties, fears and hate".
In the United States, "Mein Kampf" can be found at many community libraries and can be bought, sold and traded in bookshops. The U.S. government seized the copyright in September 1942 during the Second World War under the Trading with the Enemy Act, and in 1979 Houghton Mifflin, the U.S. publisher of the book, bought the rights from the government. More than 15,000 copies are sold a year. In 2016, Houghton Mifflin Harcourt reported that it was having difficulty finding a charity that would accept profits from the sales of its version of "Mein Kampf", which it had promised to donate.
In 1999, the Simon Wiesenthal Center documented that the book was available in Germany via major online booksellers such as Amazon and Barnes & Noble. After a public outcry, both companies agreed to stop those sales to addresses in Germany. In March 2020 Amazon banned sales of new and second-hand copies of "Mein Kampf", and several other Nazi publications, on its platform. The book remains available on Barnes and Noble's website. It is also available in various languages, including German, at the Internet Archive. One of the first complete English translations was published by James Vincent Murphy in 1939. The Murphy translation of the book is freely available on Project Gutenberg Australia.
On 3 February 2010, the Institute of Contemporary History (IfZ) in Munich announced plans to republish an annotated version of the text, for educational purposes in schools and universities, in 2015. The book had last been published in Germany in 1945. The IfZ argued that a republication was necessary to get an authoritative annotated edition by the time the copyright ran out, which might open the way for neo-Nazi groups to publish their own versions. The Bavarian Finance Ministry opposed the plan, citing respect for victims of the Holocaust. It stated that permits for reprints would not be issued, at home or abroad. This would also apply to a new annotated edition. There was disagreement about the issue of whether the republished book might be banned as Nazi propaganda. The Bavarian government emphasized that even after expiration of the copyright, "the dissemination of Nazi ideologies will remain prohibited in Germany and is punishable under the penal code". However, the Bavarian Science Minister Wolfgang Heubisch supported a critical edition, stating in 2010 that, "Once Bavaria's copyright expires, there is the danger of charlatans and neo-Nazis appropriating this infamous book for themselves".
On 12 December 2013 the Bavarian government cancelled its financial support for an annotated edition. The IfZ, which was preparing the annotated edition, announced that it intended to proceed with publication after the copyright expired. The IfZ scheduled an edition of "Mein Kampf" for release in 2016.
Richard Verber, vice-president of the Board of Deputies of British Jews, stated in 2015 that the board recognized the academic and educational value of republishing. "We would, of course, be very wary of any attempt to glorify Hitler or to belittle the Holocaust in any way", Verber declared to "The Observer". "But this is not that. I do understand how some Jewish groups could be upset and nervous, but it seems it is being done from a historical point of view and to put it in context".
An annotated edition of "Mein Kampf" was published in Germany in January 2016 and sold out within hours on Amazon's German site. The book's publication led to public debate in Germany, and divided reactions from Jewish groups, with some supporting, and others opposing, the decision to publish. German officials had previously said they would limit public access to the text amid fears that its republication could stir neo-Nazi sentiment. Some bookstores stated that they would not stock the book. Dussmann, a Berlin bookstore, stated that one copy was available on the shelves in the history section, but that it would not be advertised and more copies would be available only on order. By January 2017, the German annotated edition had sold over 85,000 copies.
After the party's poor showing in the 1928 elections, Hitler believed that the reason for his loss was the public's misunderstanding of his ideas. He then retired to Munich to dictate a sequel to "Mein Kampf" to expand on its ideas, with more focus on foreign policy.
Only two copies of the 200-page manuscript were originally made, and only one of these was ever made public. The document was neither edited nor published during the Nazi era and remains known as "Zweites Buch", or "Second Book". To keep the document strictly secret, in 1935 Hitler ordered that it be placed in a safe in an air raid shelter. It remained there until being discovered by an American officer in 1945.
The authenticity of the document found in 1945 has been verified by Josef Berg, a former employee of the Nazi publishing house Eher Verlag, and Telford Taylor, a former brigadier general of the United States Army Reserve and Chief Counsel at the Nuremberg war-crimes trials.
In 1958, the "Zweites Buch" was found in the archives of the United States by American historian Gerhard Weinberg. Unable to find an American publisher, Weinberg turned to his mentor – Hans Rothfels at the Institute of Contemporary History in Munich, and his associate Martin Broszat – who published "Zweites Buch" in 1961. A pirated edition was published in English in New York in 1962. The first authoritative English edition was not published until 2003 ("Hitler's Second Book: The Unpublished Sequel to Mein Kampf," ).
|
https://en.wikipedia.org/wiki?curid=19644
|
MVS
Multiple Virtual Storage, more commonly called MVS, was the most commonly used operating system on the System/370 and System/390 IBM mainframe computers. IBM developed MVS, along with OS/VS1 and SVS, as a successor to OS/360. It is unrelated to IBM's other mainframe operating system lines, e.g., VSE, VM, TPF.
First released in 1974, MVS was extended by program products with new names multiple times, as described below.
At first IBM described MVS as simply a new release of OS/VS2, but it was, in fact, a major rewrite. OS/VS2 Release 1 was an upgrade of OS/360 MVT that retained most of the original code and, like MVT, was mainly written in assembly language. The MVS core was almost entirely written in Assembler XF, although a few modules were written in PL/S, but not the performance-sensitive ones, in particular not the Input/Output Supervisor (IOS). IBM's use of "OS/VS2" emphasized upwards compatibility: application programs that ran under MVT did not even need recompiling to run under MVS. The same Job Control Language files could be used unchanged; utilities and other non-core facilities like TSO ran unchanged. IBM and users almost unanimously called the new system MVS from the start, and IBM continued to use the term "MVS" in the naming of later "major" versions such as MVS/XA.
OS/360 MFT (Multiprogramming with a Fixed number of Tasks) provided multiprogramming: several memory partitions, each of a fixed size, were set up when the operating system was installed, and the operator could redefine them. For example, there could be a small partition, two medium partitions, and a large partition. If there were two large programs ready to run, one would have to wait until the other finished and vacated the large partition.
OS/360 MVT (Multiprogramming with a Variable number of Tasks) was an enhancement that further refined memory use. Instead of using fixed-size memory partitions, MVT allocated memory to regions for job steps as needed, provided enough "contiguous" physical memory was available. This was a significant advance over MFT's memory management, but had some weaknesses: if a job allocated memory dynamically (as most sort programs and database management systems do), the programmers had to estimate the job's maximum memory requirement and pre-define it for MVT. A job step that contained a mix of small and large programs wasted memory while the small programs ran. Most seriously, memory could become fragmented, i.e., the memory not used by current jobs could be divided into uselessly small chunks between the areas used by current jobs, and the only remedy was to wait until some current jobs finished before starting any new ones.
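The fragmentation problem is easy to see in a toy model. The sketch below is purely illustrative Python, not IBM code: with first-fit contiguous allocation, the storage left behind by finished jobs can be ample in total yet useless for a new large region.

```python
# Toy model of MVT-style contiguous region allocation (illustrative only).
def allocate(free_list, size):
    """First-fit: carve 'size' out of the first free hole big enough, else fail."""
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            free_list[i] = (start + size, length - size)
            return start
    return None  # no single contiguous hole is large enough

free_list = [(0, 16)]            # 16 units of storage, initially one free hole
a = allocate(free_list, 6)       # job A occupies 0..5
b = allocate(free_list, 4)       # job B occupies 6..9
c = allocate(free_list, 6)       # job C occupies 10..15
free_list += [(a, 6), (c, 6)]    # jobs A and C finish; their regions are freed
print(allocate(free_list, 8))    # None: 12 units are free, but not contiguously
```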
In the early 1970s IBM sought to mitigate these difficulties by introducing virtual memory (which IBM called "virtual storage"), which allowed programs to request address spaces larger than physical memory. The original implementations had a single virtual address space, shared by all jobs. OS/VS1 was OS/360 MFT within a single virtual address space; OS/VS2 SVS was OS/360 MVT within a single virtual address space. So OS/VS1 and SVS in principle had the same disadvantages as MFT and MVT, but the impacts were less severe because jobs could request much larger address spaces and the requests came out of a 16 MB pool even if physical storage was smaller.
In the mid-1970s IBM introduced MVS, which not only supported virtual storage that was larger than the available real storage, as did SVS, but also allowed an indefinite number of applications to run in different address spaces. Two concurrent programs might try to access the same virtual memory address, but the virtual memory system redirected these requests to different areas of physical memory. Each of these address spaces consisted of three areas: an operating system (one instance shared by all jobs), an application area unique for each application, and a shared virtual area used for various purposes, including inter-job communication. IBM promised that application areas would always be at least 8 MB. This made MVS the perfect solution for business problems that resulted from the need to run more applications.
MVS maximized processing potential by providing multiprogramming and multiprocessing capabilities. Like its MVT and OS/VS2 SVS predecessors, MVS supported multiprogramming; program instructions and associated data are scheduled by a control program and given processing cycles. Unlike a single-programming operating system, these systems maximize the use of the processing potential by dividing processing cycles among the instructions associated with several different concurrently running programs. This way, the control program does not have to wait for an I/O operation to complete before proceeding with another program's instructions. By executing the instructions for multiple programs, the computer is able to switch back and forth between active and inactive programs.
Early editions of MVS (mid-1970s) were among the first of the IBM OS series to support multiprocessor configurations, though the M65MP variant of OS/360 running on 360 Models 65 and 67 had provided limited multiprocessor support. The 360 Model 67 had also hosted the multiprocessor-capable TSS/360, MTS and CP-67 operating systems. Because multiprocessing systems can execute instructions simultaneously, they offer greater processing power than single-processor systems. As a result, MVS was able to address the business problems brought on by the need to process large amounts of data.
Multiprocessing systems are either loosely coupled, which means that each computer has access to a common workload, or tightly coupled, which means that the computers share the same real storage and are controlled by a single copy of the operating system. MVS retained both the loosely coupled multiprocessing of Attached Support Processor (ASP) and the tightly coupled multiprocessing of OS/360 Model 65 Multiprocessing. In tightly coupled systems, two CPUs shared concurrent access to the same memory (and copy of the operating system) and peripherals, providing greater processing power and a degree of graceful degradation if one CPU failed. In loosely coupled configurations each of a group of processors (single and/or tightly coupled) had its own memory and operating system but shared peripherals, and the operating system component JES3 allowed managing the whole group from one console. This provided greater resilience and let operators decide which processor should run which jobs from a central job queue. MVS JES3 gave users the opportunity to network together two or more data processing systems via shared disks and channel-to-channel adapters (CTCAs). This capability eventually became available to JES2 users as Multi-Access SPOOL (MAS).
MVS originally supported 24-bit addressing (i.e., up to 16 MB). As the underlying hardware progressed, it supported 31-bit (XA and ESA; up to 2048 MB) and now (as z/OS) 64-bit addressing. The most significant motives for the rapid upgrade to 31-bit addressing were the growth of large transaction-processing networks, mostly controlled by CICS, which ran in a single address space, and the DB2 relational database management system, which needed more than 8 MB of application address space to run efficiently. (Early versions were configured into two address spaces that communicated via the shared virtual area, but this imposed a significant overhead since all such communications had to be transmitted via the operating system.)
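The address-space sizes quoted above follow directly from the address widths; a quick arithmetic check (using the binary megabyte, 2^20 bytes):

```python
# Largest address space implied by each address width, in binary megabytes.
for bits in (24, 31, 64):
    print(f"{bits}-bit addressing: {2**bits // 2**20} MB")
# 24-bit: 16 MB, 31-bit: 2048 MB, 64-bit: 17592186044416 MB
```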
The main user interfaces to MVS are: Job Control Language (JCL), which was originally designed for batch processing but from the 1970s onwards was also used to start and allocate resources to long-running interactive jobs such as CICS; and TSO (Time Sharing Option), the interactive time-sharing interface, which was mainly used to run development tools and a few end-user information systems. ISPF is a TSO application for users on 3270-family terminals (and later, on VM as well), which allows the user to accomplish the same tasks as TSO's command line but in a menu- and form-oriented manner, with a full-screen editor and file browser. TSO's basic interface is the command line, although facilities were added later for form-driven interfaces.
MVS took a major step forward in fault tolerance with what IBM called "software recovery", built on the earlier STAE facility. IBM decided to do this after years of practical real-world experience with MVT in the business world. System failures were now having major impacts on customer businesses, and IBM decided to take a major design jump and assume that, despite the very best software development and testing techniques, 'problems WILL occur.' This profound assumption was pivotal in adding great percentages of fault-tolerance code to the system and likely contributed to the system's success in tolerating software and hardware failures. Statistical information is hard to come by to prove the value of these design features (how can you measure 'prevented' or 'recovered' problems?), but IBM has, in many dimensions, enhanced these fault-tolerant software recovery and rapid problem resolution features over time.
This design specified a hierarchy of error-handling programs, in system (kernel/'privileged') mode, called Functional Recovery Routines, and in user ('task' or 'problem program') mode, called "ESTAE" (Extended Specified Task Abnormal Exit routines) that were invoked in case the system detected an error (actually, hardware processor or storage error, or software error). Each recovery routine made the 'mainline' function reinvokable, captured error diagnostic data sufficient to debug the causing problem, and either 'retried' (reinvoke the mainline) or 'percolated' (escalated error processing to the next recovery routine in the hierarchy).
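A loose analogy in conventional exception-handling terms may help; the sketch below is not MVS code, and every name in it is invented for illustration. Each level captures diagnostics about the failure and then either retries its mainline or percolates the error to the next recovery routine up the hierarchy.

```python
# Illustrative analogy of "retry or percolate" recovery (all names invented).
def with_recovery(mainline, capture_diagnostics, max_retries=1):
    """Run mainline; on error capture diagnostic data, retry, else percolate."""
    for attempt in range(max_retries + 1):
        try:
            return mainline()
        except Exception as err:
            capture_diagnostics(err, attempt)   # data sufficient to debug later
            if attempt == max_retries:
                raise                            # percolate to the next level up

def flaky_mainline():
    flaky_mainline.calls = getattr(flaky_mainline, "calls", 0) + 1
    if flaky_mainline.calls == 1:
        raise RuntimeError("transient failure")
    return "ok"

log = lambda err, n: print(f"attempt {n}: captured {err!r}")
print(with_recovery(flaky_mainline, log))        # retried once, then "ok"
```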
Thus, with each error the system captured diagnostic data and attempted to perform a repair and keep the system up. The worst thing possible was to take down a user address space (a 'job') in the case of unrepaired errors. Though it was an initial design point, it was not until the most recent MVS version (z/OS) that a recovery program was not only guaranteed its own recovery routine, but that each recovery routine could have its own recovery routine. This recovery structure was embedded in the basic MVS control program, and programming facilities are available to and used by application program developers and third-party developers.
Practically, the MVS software recovery made problem debugging both easier and more difficult. Software recovery requires that programs leave 'tracks' of where they are and what they are doing, thus facilitating debugging—but the fact that processing progresses despite an error can overwrite the tracks. Early data capture at the time of the error maximizes debugging, and facilities exist for the recovery routines (task and system mode, both) to do this.
IBM included additional criteria for a major software problem that required IBM service. If a mainline component failed to initiate software recovery, that was considered a valid reportable failure. Also, if a recovery routine failed to collect diagnostic data significant enough that the original problem could be solved from the data it collected, IBM standards dictated that this fault was reportable and required repair. Thus, IBM standards, when rigorously applied, encouraged continuous improvement.
IBM introduced an on-demand hypervisor, a major serviceability tool, called Dynamic Support System (DSS), in the first release of MVS. This facility could be invoked to initiate a session to create diagnostic procedures, or invoke already-stored procedures. The procedures 'trapped' special events, such as the loading of a program, device I/O, system procedure calls, and then triggered the activation of the previously defined procedures. These procedures, which could be invoked recursively, allowed for reading and writing of data, and alteration of instruction flow. Program Event Recording hardware was used. Due to the overhead of this tool, it was removed from customer-available MVS systems. Program-Event Recording (PER) exploitation was performed by the enhancement of the diagnostic "SLIP" command with the introduction of the PER support (SLIP/Per) in SU 64/65 (1978).
Multiple copies of MVS (or other IBM operating systems) could share the same machine if that machine was controlled by VM/370. In this case VM/370 was the real operating system, and regarded the "guest" operating systems as applications with unusually high privileges. As a result of later hardware enhancements one instance of an operating system (either MVS, or VM with guests, or other) could also occupy a Logical Partition (LPAR) instead of an entire physical system.
Multiple MVS instances can be organized and collectively administered in a structure called a "systems complex" or "sysplex", introduced in September, 1990. Instances interoperate through a software component called a Cross-system Coupling Facility (XCF) and a hardware component called a "Hardware Coupling Facility" (CF or Integrated Coupling Facility, ICF, if co-located on the same mainframe hardware). Multiple sysplexes can be joined via standard network protocols such as IBM's proprietary Systems Network Architecture (SNA) or, more recently, via TCP/IP. The z/OS operating system (MVS' most recent descendant) also has native support to execute POSIX and Single UNIX Specification applications. The support began with MVS/SP V4R3, and IBM has obtained UNIX 95 certification for z/OS V1R2 and later.
The system is typically used in business and banking, and applications are often written in COBOL. COBOL programs were traditionally used with transaction processing systems like IMS and CICS. For a program running in CICS, special EXEC CICS statements are inserted in the COBOL source code. A preprocessor (translator) replaces those EXEC CICS statements with the appropriate COBOL code to call CICS before the program is compiled — not altogether unlike SQL used to call DB2. Applications can also be written in other languages such as C, C++, Java, assembly language, FORTRAN, BASIC, RPG, and REXX. Language support is packaged as a common component called "Language Environment" or "LE" to allow uniform debugging, tracing, profiling, and other language independent functions.
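As a rough sketch of what such a translate step does (this is not the actual CICS translator; the stub name and argument area below are invented for illustration), a preprocessor scans for EXEC CICS ... END-EXEC blocks and rewrites each one as an ordinary CALL before the compiler runs:

```python
import re

# Toy preprocessor: rewrite EXEC CICS blocks as CALLs (stub name is made up).
COBOL_SRC = """\
           EXEC CICS SEND TEXT FROM(MSG) LENGTH(20) END-EXEC.
           MOVE 'DONE' TO STATUS-FLAG.
"""

def translate(source):
    def repl(match):
        command = " ".join(match.group(1).split())
        # Emit a COBOL comment line recording the original command, then a
        # CALL to a hypothetical runtime stub with a hypothetical argument area.
        return ("      * translated from: EXEC CICS " + command + "\n"
                "           CALL 'CICSSTUB' USING CMD-AREA")
    return re.sub(r"EXEC\s+CICS\s+(.*?)\s+END-EXEC", repl, source, flags=re.S)

print(translate(COBOL_SRC))
```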
MVS systems are traditionally accessed by 3270 terminals or by PCs running 3270 emulators. However, many mainframe applications these days have custom web or GUI interfaces. The z/OS operating system has built-in support for TCP/IP. System management, done in the past with a 3270 terminal, is now done through the Hardware Management Console (HMC) and, increasingly, Web interfaces. Operator consoles are provided through 2074 emulators, so you are unlikely to see any S/390 or zSeries processor with a real 3270 connected to it.
The native character encoding scheme of MVS and its peripherals is EBCDIC, but the TR instruction made it easy to translate to other 7- and 8-bit codes. Over time IBM added hardware-accelerated services to perform translation to and between larger codes, hardware-specific service for Unicode transforms and software support of, e.g., ASCII, ISO/IEC 8859, UTF-8, UTF-16, and UTF-32. The software translation services take source and destination code pages as inputs.
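For instance, Python ships a codec for one common EBCDIC code page (cp037, EBCDIC US/Canada), which makes the round trip between EBCDIC bytes and Unicode text easy to demonstrate:

```python
# Round trip between Unicode text and EBCDIC (code page 037) bytes.
text = "MVS DATA SET"
ebcdic = text.encode("cp037")      # e.g. 'M' becomes the EBCDIC byte 0xD4
print(ebcdic.hex())
print(ebcdic.decode("cp037"))      # back to "MVS DATA SET"
```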
Files are properly called data sets in MVS. Names of those files are organized in "catalogs" that are VSAM files themselves.
Data set names (DSNs, mainframe term for filenames) are organized in a hierarchy whose levels are separated with dots, e.g. "DEPT01.SYSTEM01.FILE01". Each level in the hierarchy can be up to eight characters long. The total filename length is a maximum of 44 characters including dots. By convention, the components separated by the dots are used to organize files similarly to directories in other operating systems. For example, there were utility programs that performed similar functions to those of Windows Explorer (but without the GUI and usually in batch processing mode) - adding, renaming or deleting new elements and reporting all the contents of a specified element. However, unlike in many other systems, these levels are not usually actual directories but just a naming convention (like the original Macintosh File System, where folder hierarchy was an illusion maintained by the Finder). TSO supports a default prefix for files (similar to a "current directory" concept), and RACF supports setting up access controls based on filename patterns, analogous to access controls on directories on other platforms.
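Those naming rules are simple enough to capture in a short validator. The sketch below is deliberately simplified: real data set names also restrict which characters may appear and require each level to start with an alphabetic or national character, details omitted here.

```python
def is_plausible_dsn(name):
    """Simplified structural check of an MVS data set name (illustrative)."""
    if not name or len(name) > 44:                 # total length, dots included
        return False
    levels = name.split(".")
    return all(1 <= len(level) <= 8 for level in levels)

print(is_plausible_dsn("DEPT01.SYSTEM01.FILE01"))      # True
print(is_plausible_dsn("DEPT01.THISLEVELISTOOLONG"))   # False: level exceeds 8
```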
As with other members of the OS family, MVS' data sets were record-oriented. MVS inherited three main types from its predecessors: sequential data sets, indexed sequential (ISAM) data sets, and direct access (BDAM) data sets.
Sequential and ISAM datasets could store either fixed-length or variable length records, and all types could occupy more than one disk volume.
All of these are based on the VTOC disk structure.
Early IBM database management systems used various combinations of ISAM and BDAM datasets - usually BDAM for the actual data storage and ISAM for indexes.
In the early 1970s IBM's virtual memory operating systems introduced a new file management component, VSAM, which provided similar facilities: entry-sequenced data sets (ESDS), analogous to sequential and BDAM data sets, and key-sequenced data sets (KSDS), analogous to ISAM.
These VSAM formats became the basis of IBM's database management systems, IMS/VS and DB2 - usually ESDS for the actual data storage and KSDS for indexes.
VSAM also included a catalog component used for MVS' master catalog.
Partitioned data sets (PDS) were sequential data sets subdivided into "members" that could each be processed as sequential files in their own right (like a folder in a hierarchical file system). The most important use of PDSs was for program libraries - system administrators used the main PDS as a way to allocate disk space to a project and the project team then created and edited the members. Other uses of PDSs were libraries of frequently used job control procedures (PROCs), and "copy books" of programming language statements such as record definitions used by several programs.
Generation Data Groups (GDGs) are groups of like-named data sets, which can be referenced by absolute generation number, or by an offset from the most recent generation. They were originally designed to support grandfather-father-son backup procedures - if a file was modified, the changed version became the new "son", the previous "son" became the "father", the previous "father" became the "grandfather" and the previous "grandfather" was deleted. One could also set up GDGs with more than three generations, and some applications used GDGs to collect data from several sources and feed the information to one program - each collecting program created a new generation of the file and the final program read the whole group as a single sequential file (by not specifying a generation in the JCL).
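A small sketch of how relative GDG references resolve against the catalog of absolute generations; the GxxxxVyy suffix shown follows the usual MVS convention, but the bookkeeping here is greatly simplified.

```python
# Simplified model of a Generation Data Group (illustrative only).
class GDG:
    def __init__(self, base, limit=3):
        self.base, self.limit = base, limit
        self.generations, self.counter = [], 0

    def new_generation(self):            # what JCL refers to as BASE(+1)
        self.counter += 1
        self.generations.append(f"{self.base}.G{self.counter:04d}V00")
        if len(self.generations) > self.limit:
            self.generations.pop(0)      # oldest ("grandfather") rolls off
        return self.generations[-1]

    def relative(self, offset):          # 0 = newest, -1 = previous, ...
        return self.generations[offset - 1]

gdg = GDG("PAYROLL.MASTER")
for _ in range(4):
    gdg.new_generation()
print(gdg.relative(0))    # PAYROLL.MASTER.G0004V00 (newest)
print(gdg.relative(-1))   # PAYROLL.MASTER.G0003V00 (previous generation)
```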
Modern versions of MVS (e.g., z/OS) also support POSIX-compatible "slash" filesystems along with facilities for integrating the two filesystems. That is, the OS can make an MVS dataset appear as a file to a POSIX program or subsystem. These newer filesystems include Hierarchical File System (HFS) (not to be confused with Apple's Hierarchical File System) and zFS (not to be confused with Sun's ZFS).
Programs running on network-connected computers (such as the AS/400) can use local data management interfaces to transparently create, manage, and access VSAM record-oriented files by using client-server products implemented according to Distributed Data Management Architecture (DDM). DDM is also the base architecture for the MVS DB2 server that implements Distributed Relational Database Architecture (DRDA).
MVS has now evolved into z/OS; older MVS releases are no longer supported by IBM and, since 2007, only 64-bit z/OS releases are supported. z/OS supports running older 24-bit and 31-bit MVS applications alongside newer 64-bit applications.
MVS releases up to 3.8j (24-bit, released in 1981) were freely available and it is now possible to run the MVS 3.8j release in mainframe emulators for free.
MVS/370 is a generic term for all versions of the MVS operating system prior to MVS/XA. The System/370 architecture, at the time MVS was released, supported only 24-bit virtual addresses, so the MVS/370 operating system architecture is based on a 24-bit address. Because of this 24-bit address length, programs running under MVS/370 are each given 16 MB of contiguous virtual storage.
MVS/XA, or Multiple Virtual Storage/Extended Architecture, was a version of MVS that supported the 370-XA architecture, which expanded addresses from 24 bits to 31 bits, providing a 2 gigabyte addressable memory area. It also supported a 24-bit legacy addressing mode for older 24-bit applications (i.e. those that stored a 24-bit address in the lower 24 bits of a 32-bit word and utilized the upper 8 bits of that word for other purposes).
MVS/ESA (MVS/Enterprise Systems Architecture) was a version of MVS first introduced as MVS/SP Version 3 in February 1988. It was replaced by (renamed as) OS/390 in late 1995 and subsequently became z/OS.
MVS/ESA OpenEdition: an upgrade to Version 4 Release 3 of MVS/ESA, announced in February 1993, with support for POSIX and other standards. While the initial release only had National Institute of Standards and Technology (NIST) certification for Federal Information Processing Standard (FIPS) 151 compliance, subsequent releases were certified at higher levels and by other organizations, e.g. X/Open and its successor, The Open Group. It included about 1 million new lines of code, which provide an API shell, utilities, and an extended user interface. It works with a hierarchical file system provided by DFSMS (Data Facility System Managed Storage). The shell and utilities are based on Mortice Kern Systems' InterOpen products. Independent specialists estimated that it was over 80% open-systems-compliant, more than most Unix systems. DCE2 support was announced in February 1994, and many application development tools in March 1995. From mid-1995, as all of the open features became a standard part of vanilla MVS/ESA SP Version 5 Release 1, IBM stopped distinguishing OpenEdition from the operating system. Under OS/390 V2R6 it became UNIX System Services, and has kept that name under z/OS.
In late 1995 IBM bundled MVS with several program products and changed the name from MVS/ESA to OS/390.
The current level of MVS is marketed as z/OS.
Japanese mainframe manufacturers Fujitsu and Hitachi both repeatedly and illegally obtained IBM's MVS source code and internal documentation in one of the 20th century's most famous cases of industrial espionage. Fujitsu relied heavily on IBM's code in its MSP mainframe operating system, and likewise Hitachi did the same for its VOS3 operating system. MSP and VOS3 were heavily marketed in Japan, where they still hold a substantial share of the mainframe installed base, but also to some degree in other countries, notably Australia. Even IBM's bugs and documentation misspellings were faithfully copied. IBM cooperated with the U.S. Federal Bureau of Investigation in a sting operation, reluctantly supplying Fujitsu and Hitachi with proprietary MVS and mainframe hardware technologies during the course of multi-year investigations culminating in the early 1980s—investigations which implicated senior company managers and even some Japanese government officials. Amdahl, however, was not involved in Fujitsu's theft of IBM's intellectual property. Any communications from Amdahl to Fujitsu were through "Amdahl Only Specifications" which were scrupulously cleansed of any IBM IP or any references to IBM's IP.
Subsequent to the investigations, IBM reached multimillion-dollar settlements with both Fujitsu and Hitachi, collecting substantial fractions of both companies' profits for many years. Reliable reports indicate that the settlements exceeded US$500,000,000.
The three companies have long since amicably agreed to many joint business ventures. For example, in 2000 IBM and Hitachi collaborated on developing the IBM z900 mainframe model.
Because of this historical copying, MSP and VOS3 are properly classified as "forks" of MVS, and many third party software vendors with MVS-compatible products were able to produce MSP- and VOS3-compatible versions with little or no modification.
When IBM introduced its 64-bit z/Architecture mainframes in the year 2000, IBM also introduced the 64-bit z/OS operating system, the direct successor to OS/390 and MVS. Fujitsu and Hitachi opted not to license IBM's z/Architecture for their quasi-MVS operating systems and hardware systems, and so MSP and VOS3, while still nominally supported by their vendors, maintain most of MVS's 1980s architectural limitations to the present day. Since z/OS still supports MVS-era applications and technologies—indeed, z/OS still contains most of MVS's code, albeit greatly enhanced and improved over decades of evolution—applications (and operational procedures) running on MSP and VOS3 can move to z/OS much more easily than to other operating systems.
|
https://en.wikipedia.org/wiki?curid=19649
|
Monoid
In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single associative binary operation and an identity element.
Monoids are semigroups with identity. They occur in several branches of mathematics.
For example, the functions from a set into itself form a monoid with respect to function composition. More generally, in category theory, the morphisms of an object to itself form a monoid, and, conversely, a monoid may be viewed as a category with a single object.
In computer science and computer programming, the set of strings built from a given set of characters is a free monoid. Transition monoids and syntactic monoids are used in describing finite-state machines. Trace monoids and history monoids provide a foundation for process calculi and concurrent computing.
In theoretical computer science, the study of monoids is fundamental for automata theory (Krohn–Rhodes theory), and formal language theory (star height problem).
See Semigroup for the history of the subject, and some other general properties of monoids.
Suppose that "S" is a set and • is some binary operation , then "S" with • is a monoid if it satisfies the following two axioms:
In other words, a monoid is a semigroup with an identity element. It can also be thought of as a magma with associativity and identity. The identity element of a monoid is unique, and for this reason the identity is regarded as a constant, i.e., a 0-ary (or nullary) operation. The monoid is therefore characterized by specifying the triple ("S", •, "e").
Depending on the context, the symbol for the binary operation may be omitted, so that the operation is denoted by juxtaposition; for example, the monoid axioms may be written ("ab")"c" = "a"("bc") and "ea" = "ae" = "a". This notation does not imply that it is numbers being multiplied.
A monoid in which each element has an inverse is a group.
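For illustration (not part of the original text), the non-negative integers under addition and strings under concatenation both satisfy these axioms; the following Python snippet spot-checks associativity and identity on small samples:

```python
# Two everyday monoids: (int, +, 0) and (str, concatenation, "").
# Each monoid is given here simply as (operation, identity, sample elements).
monoids = {
    "integers under addition": (lambda a, b: a + b, 0, [3, 5, 7]),
    "strings under concatenation": (lambda a, b: a + b, "", ["foo", "bar", "baz"]),
}

for name, (op, e, (a, b, c)) in monoids.items():
    assert op(op(a, b), c) == op(a, op(b, c))   # associativity: (a•b)•c = a•(b•c)
    assert op(e, a) == a and op(a, e) == a      # identity:      e•a = a•e = a
    print(name, "passes the monoid axioms on this sample")
```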
A submonoid of a monoid ("M", •) is a subset "N" of "M" that is closed under the monoid operation and contains the identity element "e" of "M". Symbolically, "N" is a submonoid of "M" if "N" ⊆ "M", "x" • "y" ∈ "N" whenever "x", "y" ∈ "N", and "e" ∈ "N". "N" is thus a monoid under the binary operation inherited from "M".
A subset "S" of "M" is said to be a generator of "M" if "M" is the smallest set containing "S" that is closed under the monoid operation, or equivalently "M" is the result of applying the finitary closure operator to "S". If there is a generator of "M" that has finite cardinality, then "M" is said to be finitely generated. Not every set "S" will generate a monoid, as the generated structure may lack an identity element.
A monoid whose operation is commutative is called a commutative monoid (or, less commonly, an abelian monoid). Commutative monoids are often written additively. Any commutative monoid is endowed with its algebraic preordering ≤, defined by "x" ≤ "y" if there exists "z" such that "x" + "z" = "y". An order-unit of a commutative monoid "M" is an element "u" of "M" such that for any element "x" of "M", there exists "v" in the set generated by "u" such that "x" ≤ "v". This is often used in case "M" is the positive cone of a partially ordered abelian group "G", in which case we say that "u" is an order-unit of "G".
A monoid for which the operation is commutative for some, but not all elements is a trace monoid; trace monoids commonly occur in the theory of concurrent computation.
In a monoid, one can define positive integer powers of an element "x": "x"^1 = "x", and "x"^"n" = "x" • ... • "x" ("n" times) for "n" > 1. The rule of powers "x"^("n" + "p") = "x"^"n" • "x"^"p" is obvious.
From the definition of a monoid, one can show that the identity element "e" is unique. Then, for any "x", one can set "x"0 = "e" and the rule of powers is still true with nonnegative exponents.
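Because the operation is associative, the factors of "x"^"n" may be regrouped freely, so the power can be computed by repeated squaring instead of "n" − 1 successive multiplications. The Python sketch below is illustrative (the function name and interface are not from the article) and works for any monoid given as an operation and an identity element:

```python
def monoid_power(x, n, op, identity):
    """Compute x^n in a monoid (op, identity) by repeated squaring.

    Associativity is what makes regrouping the factors legitimate;
    n = 0 returns the identity, matching x^0 = e.
    """
    result = identity
    base = x
    while n > 0:
        if n & 1:
            result = op(result, base)
        base = op(base, base)
        n >>= 1
    return result

# 2^10 in the multiplicative monoid of integers, and "ab" repeated 3 times
print(monoid_power(2, 10, lambda a, b: a * b, 1))     # 1024
print(monoid_power("ab", 3, lambda a, b: a + b, ""))  # "ababab"
```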
It is possible to define invertible elements: an element "x" is called invertible if there exists an element "y" such that "x" • "y" = "e" and "y" • "x" = "e". The element "y" is called the inverse of "x". If "y" and "z" are inverses of "x", then by associativity "z" = "z" • "e" = "z" • ("x" • "y") = ("z" • "x") • "y" = "e" • "y" = "y". Thus inverses, if they exist, are unique.
If "y" is the inverse of "x", one can define negative powers of "x" by setting and ("n" times) for . And the rule of exponents is still verified for all integers . This is why the inverse of "x" is usually written . The set of all invertible elements in a monoid "M", together with the operation •, forms a group. In that sense, every monoid contains a group (possibly only the trivial group consisting of only the identity).
However, not every monoid sits inside a group. For instance, it is perfectly possible to have a monoid in which two elements "a" and "b" exist such that "a" • "b" = "a" holds even though "b" is not the identity element. Such a monoid cannot be embedded in a group, because in the group we could multiply both sides with the inverse of "a" and would get that "b" = "e", which is not true. A monoid has the cancellation property (or is cancellative) if for all "a", "b" and "c" in "M", "a" • "b" = "a" • "c" always implies "b" = "c" and "b" • "a" = "c" • "a" always implies "b" = "c". A commutative monoid with the cancellation property can always be embedded in a group via the Grothendieck construction. That is how the additive group of the integers (a group with operation +) is constructed from the additive monoid of natural numbers (a commutative monoid with operation + and the cancellation property). However, a non-commutative cancellative monoid need not be embeddable in a group.
If a monoid has the cancellation property and is "finite", then it is in fact a group. Proof: Fix an element "x" in the monoid. Since the monoid is finite, "x"^"n" = "x"^"m" for some "m" > "n" > 0. But then, by cancellation, "x"^("m" − "n") = "e" where "e" is the identity. Therefore, "x" • "x"^("m" − "n" − 1) = "e", so "x" has an inverse.
The right- and left-cancellative elements of a monoid each in turn form a submonoid (i.e. obviously include the identity and not so obviously are closed under the operation). This means that the cancellative elements of any commutative monoid can be extended to a group.
It turns out that the cancellation property is not required in order to perform the Grothendieck construction – commutativity is sufficient. However, if the original monoid has an absorbing element then its Grothendieck group is the trivial group. Hence the homomorphism is, in general, not injective.
An inverse monoid is a monoid where for every "a" in "M", there exists a unique "a"^(−1) in "M" such that "a" = "a" • "a"^(−1) • "a" and "a"^(−1) = "a"^(−1) • "a" • "a"^(−1). If an inverse monoid is cancellative, then it is a group.
In the opposite direction, a zerosumfree monoid is an additively written monoid in which "a" + "b" = 0 implies that "a" = 0 and "b" = 0: equivalently, that no element other than zero has an additive inverse.
Let "M" be a monoid, with the binary operation denoted by • and the identity element denoted by "e". Then a (left) "M"-act (or left act over "M") is a set "X" together with an operation which is compatible with the monoid structure as follows:
This is the analogue in monoid theory of a (left) group action. Right "M"-acts are defined in a similar way. A monoid with an act is also known as an operator monoid. Important examples include transition systems of semiautomata. A transformation semigroup can be made into an operator monoid by adjoining the identity transformation.
A homomorphism between two monoids ("M", •) and ("N", ∗) is a function "f" : "M" → "N" such that "f"("x" • "y") = "f"("x") ∗ "f"("y") for all "x", "y" in "M", and "f"("e""M") = "e""N",
where "e""M" and "e""N" are the identities on "M" and "N" respectively. Monoid homomorphisms are sometimes simply called monoid morphisms.
Not every semigroup homomorphism between monoids is a monoid homomorphism, since it may not map the identity to the identity of the target monoid, even though the image of the identity is an identity element for the image of the homomorphism. A monoid homomorphism is therefore a semigroup homomorphism between monoids that also maps the identity of the first monoid to the identity of the second monoid; this latter condition cannot be omitted.
In contrast, a semigroup homomorphism between groups is always a group homomorphism, as it necessarily preserves the identity (because, in a group, the identity is the only element "x" such that "x" • "x" = "x").
A bijective monoid homomorphism is called a monoid isomorphism. Two monoids are said to be isomorphic if there is a monoid isomorphism between them.
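A concrete illustration (not from the original text): the length function, from strings under concatenation to the non-negative integers under addition, is a monoid homomorphism, since it sends concatenation to addition and the empty string to 0. A quick Python check:

```python
# len : (strings, concatenation, "") -> (non-negative integers, +, 0)
# Homomorphism conditions: len(x + y) == len(x) + len(y) and len("") == 0.
samples = ["", "a", "mono", "oid"]
for x in samples:
    for y in samples:
        assert len(x + y) == len(x) + len(y)
assert len("") == 0
print("len is a monoid homomorphism on these samples")
```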
Monoids may be given a presentation, much in the same way that groups can be specified by means of a group presentation. One does this by specifying a set of generators Σ and a set of relations on the free monoid Σ∗: the (finite) binary relations on Σ∗ are extended to monoid congruences, and the quotient monoid is then constructed, as above.
Given a binary relation "R" ⊂ Σ∗ × Σ∗, one defines its symmetric closure as "R" ∪ "R"^(−1). This can be extended to a symmetric relation "E" ⊂ Σ∗ × Σ∗ by defining "x" ~ "y" if and only if "x" = "sut" and "y" = "svt" for some strings "u", "v", "s", "t" ∈ Σ∗ with ("u", "v") ∈ "R" ∪ "R"^(−1). Finally, one takes the reflexive and transitive closure of "E", which is then a monoid congruence.
In the typical situation, the relation "R" is simply given as a set of equations, so that formula_13. Thus, for example,
is the equational presentation for the bicyclic monoid, and
is the plactic monoid of degree 2 (it has infinite order). Elements of this plactic monoid may be written as formula_16 for integers "i", "j", "k", as the relations show that "ba" commutes with both "a" and "b".
Monoids can be viewed as a special class of categories. Indeed, the axioms required of a monoid operation are exactly those required of morphism composition when restricted to the set of all morphisms whose source and target is a given object. That is,
More precisely, given a monoid , one can construct a small category with only one object and whose morphisms are the elements of "M". The composition of morphisms is given by the monoid operation •.
Likewise, monoid homomorphisms are just functors between single object categories. So this construction gives an equivalence between the category of (small) monoids Mon and a full subcategory of the category of (small) categories Cat. Similarly, the category of groups is equivalent to another full subcategory of Cat.
In this sense, category theory can be thought of as an extension of the concept of a monoid. Many definitions and theorems about monoids can be generalised to small categories with more than one object. For example, a quotient of a category with one object is just a quotient monoid.
Monoids, just like other algebraic structures, also form their own category, Mon, whose objects are monoids and whose morphisms are monoid homomorphisms.
There is also a notion of monoid object which is an abstract definition of what is a monoid in a category. A monoid object in Set is just a monoid.
In computer science, many abstract data types can be endowed with a monoid structure. In a common pattern, a sequence of elements of a monoid is "folded" or "accumulated" to produce a final value. For instance, many iterative algorithms need to update some kind of "running total" at each iteration; this pattern may be elegantly expressed by a monoid operation. Alternatively, the associativity of monoid operations ensures that the operation can be parallelized by employing a prefix sum or similar algorithm, in order to utilize multiple cores or processors efficiently.
Given a sequence of values of type "M" with identity element "e" and associative operation •, the "fold" operation is defined recursively: the fold of the empty sequence is "e", and the fold of a sequence with first element "x" and remaining elements "xs" is "x" • fold("xs").
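A direct rendering of this definition is a fold seeded with the identity element; the following Python sketch (names are illustrative, not from the text) uses functools.reduce:

```python
from functools import reduce

def fold(values, op, identity):
    """Combine a sequence of monoid elements into a single value.

    fold([])         == identity
    fold([x, *rest]) == op(x, fold(rest))  -- well-defined because op is associative
    """
    return reduce(op, values, identity)

print(fold([1, 2, 3, 4], lambda a, b: a + b, 0))       # 10, a running total
print(fold(["a", "b", "c"], lambda a, b: a + b, ""))   # "abc"
print(fold([], lambda a, b: a + b, 0))                 # 0, the identity
```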
In addition, any data structure can be 'folded' in a similar way, given a serialization of its elements. For instance, the result of "folding" a binary tree might differ depending on pre-order vs. post-order tree traversal.
An application of monoids in computer science is the so-called MapReduce programming model (see Encoding Map-Reduce As A Monoid With Left Folding). MapReduce, in computing, consists of two or three operations. Given a dataset, "Map" consists of mapping arbitrary data to elements of a specific monoid. "Reduce" consists of folding those elements, so that in the end we produce just one element.
For example, if we have a multiset, in a program it is represented as a map from elements to their counts; the elements are called keys in this case. The number of distinct keys may be too large for one node, in which case the multiset is sharded. To finalize the reduction properly, the "Shuffling" stage regroups the data among the nodes. If this step is not needed, the whole Map/Reduce consists of mapping and reducing; both operations are parallelizable, the former due to its element-wise nature, the latter due to the associativity of the monoid.
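A word-count job is the classic illustration of this pattern. In the sketch below (illustrative only, not a real MapReduce framework), "Map" sends each document to a counter of word occurrences — an element of the commutative monoid of counters under addition — and "Reduce" folds those counters together; associativity lets the fold be split across shards in any grouping:

```python
from collections import Counter
from functools import reduce

documents = ["to be or not to be", "be here now"]

# Map: each document becomes an element of the monoid (Counter, +, Counter())
mapped = [Counter(doc.split()) for doc in documents]

# Reduce: fold with the monoid operation; the empty Counter is the identity
total = reduce(lambda a, b: a + b, mapped, Counter())
print(total)   # Counter({'be': 3, 'to': 2, ...})

# Because the operation is associative (and here commutative), partial
# reductions computed on separate shards combine to the same result.
shard1 = reduce(lambda a, b: a + b, mapped[:1], Counter())
shard2 = reduce(lambda a, b: a + b, mapped[1:], Counter())
assert shard1 + shard2 == total
```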
A complete monoid is a commutative monoid equipped with an infinitary sum operation formula_20 for any index set "I", such that the infinitary sums extend the finite sums of the monoid and are compatible with any partition of the index set.
A continuous monoid is an ordered commutative monoid in which every directed set has a least upper bound compatible with the monoid operation: "a" + sup "S" = sup("a" + "S") for every element "a" and directed subset "S".
These two concepts are closely related: a continuous monoid is a complete monoid in which the infinitary sum may be defined as the supremum of the finite partial sums,
where the supremum on the right runs over all finite subsets of "I" and each sum on the right is a finite sum in the monoid.
|
https://en.wikipedia.org/wiki?curid=19652
|
Global warming potential
Global warming potential (GWP) is the warming caused by any greenhouse gas, as a multiple of the warming caused by the same mass of carbon dioxide (CO2). GWP is 1 for CO2. For other gases it depends on the gas and the time frame. Some gases, like methane, have large GWP, since a ton of methane causes much more warming than a ton of CO2. Some gases, again like methane, break down over time, and their warming, or GWP, over the next 20 years is a bigger multiple of CO2 than their warming will be over 100 or 500 years. Values of GWP are estimated and updated for each time frame as methods improve.
Carbon dioxide equivalent (CO2e or CO2eq or CO2-e) is calculated from GWP. It can be measured in weight or concentration. For any amount of any gas, it is the amount of CO2 which would warm the earth as much as that amount of that gas. Thus it provides a common scale for measuring the climate effects of different gases. It is calculated as GWP times amount of the other gas. For example if a gas has GWP of 100, two tons of the gas have CO2e of 200 tons, and 1 part per million of the gas in the atmosphere has CO2e of 100 parts per million.
The gases subject to restrictions under the Paris Agreement are either rapidly increasing their concentrations in Earth's atmosphere or have a large GWP.
Carbon dioxide has a GWP of exactly 1 (since it is the baseline unit to which all other greenhouse gases are compared). Values for other gases are estimated relative to this baseline for a given time horizon.
The values given in the table assume the same mass of compound is analyzed; different ratios will result from the conversion of one substance to another. For instance, burning methane to carbon dioxide would reduce the global warming impact, but by a smaller factor than 25:1 because the mass of methane burned is less than the mass of carbon dioxide released (ratio 1:2.74). Starting with 1 tonne of methane, which has a GWP of 25, combustion yields 2.74 tonnes of CO2, each tonne of which has a GWP of 1. This is a net reduction of 22.26 tonnes of CO2-equivalent warming, reducing the global warming effect by a ratio of 25:2.74 (approximately 9 times).
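The arithmetic in this example can be checked directly; the short calculation below is only an illustration using the GWP value of 25 quoted above:

```python
gwp_ch4 = 25        # 100-year GWP of methane used in the example above
mass_ch4 = 1.0      # tonnes of methane before combustion
mass_co2 = 2.74     # tonnes of CO2 produced by burning 1 tonne of CH4 (mass ratio 1:2.74)

before = mass_ch4 * gwp_ch4   # 25 tonnes CO2-equivalent
after = mass_co2 * 1.0        # 2.74 tonnes CO2-equivalent (GWP of CO2 is 1)

print(before - after)         # ~22.26 tonnes CO2e avoided
print(before / after)         # ~9.1, the reduction factor 25:2.74
```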
The global warming potential of perfluorotributylamine (PFTBA) over a 100-year time horizon has been estimated to be approximately 7100. It has been used by the electrical industry since the mid-20th century for electronic testing and as a heat transfer agent. PFTBA has the highest radiative efficiency (relative effectiveness of greenhouse gases to restrict long-wave radiation from escaping back into space) of any molecule detected in the atmosphere to date. The researchers found an average of 0.18 parts per trillion of PFTBA in Toronto air samples, whereas carbon dioxide exists around 400 parts per million.
Under the Kyoto Protocol, the Conference of the Parties standardized international reporting by deciding (decision 2/CP.3) that the values of GWP calculated for the IPCC Second Assessment Report are to be used for converting the various greenhouse gas emissions into comparable CO2 equivalents when computing overall sources and "sinks" (absorption) of greenhouse gases.
A substance's GWP depends on the number of years (denoted by a subscript) over which the potential is calculated. A gas which is quickly removed from the atmosphere may initially have a large effect, but for longer time periods, as it has been removed, it becomes less important. Thus methane has a potential of 34 over 100 years (GWP100 = 34) but 86 over 20 years (GWP20 = 86); conversely sulfur hexafluoride has a GWP of 22,800 over 100 years but 16,300 over 20 years (IPCC Third Assessment Report). The GWP value depends on how the gas concentration decays over time in the atmosphere. This is often not precisely known and hence the values should not be considered exact. For this reason when quoting a GWP it is important to give a reference to the calculation.
The GWP for a mixture of gases can be obtained from the mass-fraction-weighted average of the GWPs of the individual gases.
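As a small illustration of that weighted average (the mass fractions below are made up, and the component GWPs are commonly cited 100-year values used only as an example):

```python
# Mass-fraction-weighted GWP of a gas mixture (hypothetical composition).
mixture = [
    # (mass fraction, 100-year GWP of the component)
    (0.90, 1),      # CO2
    (0.07, 25),     # CH4
    (0.03, 298),    # N2O
]

gwp_mixture = sum(fraction * gwp for fraction, gwp in mixture)
print(gwp_mixture)   # 0.90*1 + 0.07*25 + 0.03*298 = 11.59
```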
Commonly, a time horizon of 100 years is used by regulators (e.g., the California Air Resources Board).
Water vapour is one of the primary greenhouse gases, but several issues prevent its GWP from being calculated directly. It has a profound infrared absorption spectrum with more and broader absorption bands than CO2, and it also absorbs non-zero amounts of radiation in its weakly absorbing spectral regions. In addition, its concentration in the atmosphere depends on air temperature and water availability; using a global average temperature of ~16 °C, for example, creates an average humidity of ~18,000 ppm at sea level (CO2 is ~400 ppm, so the concentration ratio [H2O]/[CO2] is roughly 45). Unlike other greenhouse gases, water vapour does not decay in the environment, so an average over some time horizon, or some other measure consistent with "time dependent decay" (q.v. above), must be used in lieu of the time-dependent decay of artificial or excess CO2 molecules. Other issues complicating its calculation are the Earth's temperature distribution and the differing land masses of the Northern and Southern hemispheres.
The "Global Temperature change Potential" is another way to quantify the ratio change from a substance relative to that of CO2, in global mean surface temperature, used for a specific time span.
The GWP depends on the following factors:
A high GWP correlates with a large infrared absorption and a long atmospheric lifetime. The dependence of GWP on the wavelength of absorption is more complicated. Even if a gas absorbs radiation efficiently at a certain wavelength, this may not affect its GWP much if the atmosphere already absorbs most radiation at that wavelength. A gas has the most effect if it absorbs in a "window" of wavelengths where the atmosphere is fairly transparent. The dependence of GWP as a function of wavelength has been found empirically and published as a graph.
Because the GWP of a greenhouse gas depends directly on its infrared spectrum, the use of infrared spectroscopy to study greenhouse gases is centrally important in the effort to understand the impact of human activities on global climate change.
Just as radiative forcing provides a simplified means of comparing the various factors that are believed to influence the climate system to one another, global warming potentials (GWPs) are one type of simplified index based upon radiative properties that can be used to estimate the potential future impacts of emissions of different gases upon the climate system in a relative sense. GWP is based on a number of factors, including the radiative efficiency (infrared-absorbing ability) of each gas relative to that of carbon dioxide, as well as the decay rate of each gas (the amount removed from the atmosphere over a given number of years) relative to that of carbon dioxide.
The radiative forcing capacity (RF) is the amount of energy per unit area, per unit time, absorbed by the greenhouse gas, that would otherwise be lost to space. It can be expressed by the formula:
where the subscript "i" represents an interval of 10 inverse centimeters. Absi represents the integrated infrared absorbance of the sample in that interval, and Fi represents the RF for that interval.
The Intergovernmental Panel on Climate Change (IPCC) provides the generally accepted values for GWP, which changed slightly between 1996 and 2001. An exact definition of how GWP is calculated is to be found in the IPCC's 2001 Third Assessment Report. The GWP is defined as the ratio of the time-integrated radiative forcing from the instantaneous release of 1 kg of a trace substance relative to that of 1 kg of a reference gas; in the notation defined below, GWP_x = (∫_0^TH a_x [x(t)] dt) / (∫_0^TH a_r [r(t)] dt),
where TH is the time horizon over which the calculation is considered; a_x is the radiative efficiency due to a unit increase in atmospheric abundance of the substance (i.e., W m−2 kg−1) and [x(t)] is the time-dependent decay in abundance of the substance following an instantaneous release of it at time t = 0. The denominator contains the corresponding quantities for the reference gas (i.e., a_r and [r(t)]). The radiative efficiencies a_x and a_r are not necessarily constant over time. While the absorption of infrared radiation by many greenhouse gases varies linearly with their abundance, a few important ones display non-linear behaviour for current and likely future abundances (e.g., CO2, CH4, and N2O). For those gases, the relative radiative forcing will depend upon abundance and hence upon the future scenario adopted.
Since all GWP calculations are comparisons to CO2, whose radiative response is non-linear, that non-linearity affects all GWP values; assuming linearity would lead to lower GWPs for other gases than a more detailed approach gives. To clarify: while increasing CO2 has less and less effect on radiative absorption as its concentration rises, more powerful greenhouse gases such as methane and nitrous oxide absorb at thermal frequencies different from those of CO2 that are not as saturated, so rising concentrations of these gases are far more significant.
Carbon dioxide equivalent (CO2e or CO2eq or CO2-e) is calculated from GWP. It can be measured in weight or concentration. For any amount of any gas, it is the amount of CO2 which would warm the earth as much as that amount of that gas. Thus it provides a common scale for measuring the climate effects of different gases. It is calculated as GWP times amount of the other gas.
As weight, CO2e is the weight of CO2 which would warm the earth as much as a particular weight of some other gas;
it is calculated as GWP times the weight of the other gas. For example, if a gas has a GWP of 100, two tons of the gas have a CO2e of 200 tons, and 9 tons of the gas have a CO2e of 900 tons.
As concentration, CO2e is the concentration of CO2 which would warm the earth as much as a particular concentration of some other gas or of all gases and aerosols in the atmosphere; it is calculated as GWP times the concentration of the other gas(es). For example, a CO2e of 500 parts per million would reflect a mix of atmospheric gases which warm the earth as much as 500 parts per million of CO2 would warm it.
CO2e calculations depend on the time-scale chosen, typically 100 years or 20 years,
since gases decay in the atmosphere, or are absorbed naturally, at different rates.
The following units are commonly used:
For example, with a 20-year GWP of 86 for methane and 289 for nitrous oxide, emissions of 1 million tonnes of methane or nitrous oxide are equivalent to emissions of 86 or 289 million tonnes of carbon dioxide, respectively.
|
https://en.wikipedia.org/wiki?curid=12908
|
Grothendieck topology
In category theory, a branch of mathematics, a Grothendieck topology is a structure on a category "C" that makes the objects of "C" act like the open sets of a topological space. A category together with a choice of Grothendieck topology is called a site.
Grothendieck topologies axiomatize the notion of an open cover. Using the notion of covering provided by a Grothendieck topology, it becomes possible to define sheaves on a category and their cohomology. This was first done in algebraic geometry and algebraic number theory by Alexander Grothendieck to define the étale cohomology of a scheme. It has been used to define other cohomology theories since then, such as ℓ-adic cohomology, flat cohomology, and crystalline cohomology. While Grothendieck topologies are most often used to define cohomology theories, they have found other applications as well, such as to John Tate's theory of rigid analytic geometry.
There is a natural way to associate a site to an ordinary topological space, and Grothendieck's theory is loosely regarded as a generalization of classical topology. Under meager point-set hypotheses, namely sobriety, this is completely accurate—it is possible to recover a sober space from its associated site. However simple examples such as the indiscrete topological space show that not all topological spaces can be expressed using Grothendieck topologies. Conversely, there are Grothendieck topologies that do not come from topological spaces.
The term "Grothendieck topology" has changed in meaning. In it meant what is now called a Grothendieck pretopology, and some authors still use this old meaning. modified the definition to use sieves rather than covers. Much of the time this does not make much difference, as each Grothendieck pretopology determines a unique Grothendieck topology, though quite different pretopologies can give the same topology.
André Weil's famous Weil conjectures proposed that certain properties of equations with integral coefficients should be understood as geometric properties of the algebraic variety that they define. His conjectures postulated that there should be a cohomology theory of algebraic varieties that gives number-theoretic information about their defining equations. This cohomology theory was known as the "Weil cohomology", but using the tools he had available, Weil was unable to construct it.
In the early 1960s, Alexander Grothendieck introduced étale maps into algebraic geometry as algebraic analogues of local analytic isomorphisms in analytic geometry. He used étale coverings to define an algebraic analogue of the fundamental group of a topological space. Soon Jean-Pierre Serre noticed that some properties of étale coverings mimicked those of open immersions, and that consequently it was possible to make constructions that imitated the cohomology functor "H"1. Grothendieck saw that it would be possible to use Serre's idea to define a cohomology theory that he suspected would be the Weil cohomology. To define this cohomology theory, Grothendieck needed to replace the usual, topological notion of an open covering with one that would use étale coverings instead. Grothendieck also saw how to phrase the definition of covering abstractly; this is where the definition of a Grothendieck topology comes from.
The classical definition of a sheaf begins with a topological space "X". A sheaf associates information to the open sets of "X". This information can be phrased abstractly by letting "O"("X") be the category whose objects are the open subsets "U" of "X" and whose morphisms are the inclusion maps "V" → "U" of open sets "U" and "V" of "X". We will call such maps "open immersions", just as in the context of schemes. Then a presheaf on "X" is a contravariant functor from "O"("X") to the category of sets, and a sheaf is a presheaf that satisfies the gluing axiom (here including the separation axiom). The gluing axiom is phrased in terms of pointwise covering, i.e., formula_1 covers "U" if and only if formula_2. In this definition, formula_3 is an open subset of "X". Grothendieck topologies replace each formula_3 with an entire family of open subsets; in this example, formula_3 is replaced by the family of all open immersions formula_6. Such a collection is called a "sieve". Pointwise covering is replaced by the notion of a "covering family"; in the above example, the set of all formula_7 as "i" varies is a covering family of "U". Sieves and covering families can be axiomatized, and once this is done open sets and pointwise covering can be replaced by other notions that describe other properties of the space "X".
In a Grothendieck topology, the notion of a collection of open subsets of "U" stable under inclusion is replaced by the notion of a sieve. If "c" is any given object in "C", a sieve on "c" is a subfunctor of the functor Hom(−, "c"); (this is the Yoneda embedding applied to "c"). In the case of "O"("X"), a sieve "S" on an open set "U" selects a collection of open subsets of "U" that is stable under inclusion. More precisely, consider that for any open subset "V" of "U", "S"("V") will be a subset of Hom("V", "U"), which has only one element, the open immersion "V" → "U". Then "V" will be considered "selected" by "S" if and only if "S"("V") is nonempty. If "W" is a subset of "V", then there is a morphism "S"("V") → "S"("W") given by composition with the inclusion "W" → "V". If "S"("V") is non-empty, it follows that "S"("W") is also non-empty.
If "S" is a sieve on "X", and "f": "Y" → "X" is a morphism, then left composition by "f" gives a sieve on "Y" called the pullback of "S" along "f", denoted by "f"formula_8"S". It is defined as the fibered product "S" ×Hom(−, "X") Hom(−, "Y") together with its natural embedding in Hom(−, "Y"). More concretely, for each object "Z" of "C", "f"formula_8"S"("Z") = { "g": "Z" → "Y" | "fg" formula_10"S"("Z") }, and "f"formula_8"S" inherits its action on morphisms by being a subfunctor of Hom(−, "Y"). In the classical example, the pullback of a collection {"V"i} of subsets of "U" along an inclusion "W" → "U" is the collection {"V"i∩W}.
A Grothendieck topology "J" on a category "C" is a collection, "for each object c of C", of distinguished sieves on "c", denoted by "J"("c") and called covering sieves of "c". This selection will be subject to certain axioms, stated below. Continuing the previous example, a sieve "S" on an open set "U" in "O"("X") will be a covering sieve if and only if the union of all the open sets "V" for which "S"("V") is nonempty equals "U"; in other words, if and only if "S" gives us a collection of open sets that cover "U" in the classical sense.
The conditions we impose on a Grothendieck topology are:
The base change axiom corresponds to the idea that if {"Ui"} covers "U", then {"Ui" ∩ "V"} should cover "U" ∩ "V". The local character axiom corresponds to the idea that if {"Ui"} covers "U" and, for each "i", {"Vij"}, "j" ∈ "Ji", covers "Ui", then the collection {"Vij"} for all "i" and "j" should cover "U". Lastly, the identity axiom corresponds to the idea that any set is covered by all its possible subsets.
In fact, it is possible to put these axioms in another form where their geometric character is more apparent, assuming that the underlying category "C" contains certain fibered products. In this case, instead of specifying sieves, we can specify that certain collections of maps with a common codomain should cover their codomain. These collections are called covering families. If the collection of all covering families satisfies certain axioms, then we say that they form a Grothendieck pretopology. These axioms are:
For any pretopology, the collection of all sieves that contain a covering family from the pretopology is always a Grothendieck topology.
For categories with fibered products, there is a converse. Given a collection of arrows {"X""α" → "X"}, we construct a sieve "S" by letting "S"("Y") be the set of all morphisms "Y" → "X" that factor through some arrow "X""α" → "X". This is called the sieve generated by {"X""α" → "X"}. Now choose a topology. Say that {"X""α" → "X"} is a covering family if and only if the sieve that it generates is a covering sieve for the given topology. It is easy to check that this defines a pretopology.
(PT 3) is sometimes replaced by a weaker axiom:
(PT 3) implies (PT 3'), but not conversely. However, suppose that we have a collection of covering families that satisfies (PT 0) through (PT 2) and (PT 3'), but not (PT 3). These families generate a pretopology. The topology generated by the original collection of covering families is then the same as the topology generated by the pretopology, because the sieve generated by an isomorphism "Y" → "X" is Hom(−, "X"). Consequently, if we restrict our attention to topologies, (PT 3) and (PT 3') are equivalent.
Let "C" be a category and let "J" be a Grothendieck topology on "C". The pair ("C", "J") is called a site.
A presheaf on a category is a contravariant functor from "C" to the category of all sets. Note that for this definition "C" is not required to have a topology. A sheaf on a site, however, should allow gluing, just like sheaves in classical topology. Consequently, we define a sheaf on a site to be a presheaf "F" such that for all objects "X" and all covering sieves "S" on "X", the natural map Hom(Hom(−, "X"), "F") → Hom("S", "F"), induced by the inclusion of "S" into Hom(−, "X"), is a bijection. Halfway in between a presheaf and a sheaf is the notion of a separated presheaf, where the natural map above is required to be only an injection, not a bijection, for all sieves "S". A morphism of presheaves or of sheaves is a natural transformation of functors. The category of all sheaves on "C" is the topos defined by the site ("C", "J").
Using the Yoneda lemma, it is possible to show that a presheaf on the category "O"("X") is a sheaf on the topology defined above if and only if it is a sheaf in the classical sense.
Sheaves on a pretopology have a particularly simple description: For each covering family {"X""α" → "X"}, the diagram
must be an equalizer. For a separated presheaf, the first arrow need only be injective.
Similarly, one can define presheaves and sheaves of abelian groups, rings, modules, and so on. One can require either that a presheaf "F" is a contravariant functor to the category of abelian groups (or rings, or modules, etc.), or that "F" be an abelian group (ring, module, etc.) object in the category of all contravariant functors from "C" to the category of sets. These two definitions are equivalent.
Let C be any category. To define the discrete topology, also known as the biggest or chaotic topology, we declare all sieves to be covering sieves. If C has all fibered products, this is equivalent to declaring all families to be covering families. To define the indiscrete topology, we declare only the sieves of the form Hom(−, "X") to be covering sieves. The indiscrete topology is generated by the pretopology that has only isomorphisms for covering families. A sheaf on the indiscrete site is the same thing as a presheaf.
Let C be any category. The Yoneda embedding gives a functor Hom(−, "X") for each object "X" of C. The canonical topology is the biggest (finest) topology such that every representable presheaf, i.e. presheaf of the form Hom(−, "X"), is a sheaf. A covering sieve or covering family for this site is said to be "strictly universally epimorphic" because it consists of the legs of a colimit cone (under the full diagram on the domains of its constituent morphisms) and these colimits are stable under pullbacks along morphisms in C. A topology that is less fine than the canonical topology, that is, for which every covering sieve is strictly universally epimorphic, is called subcanonical. Subcanonical sites are exactly the sites for which every presheaf of the form Hom(−, "X") is a sheaf. Most sites encountered in practice are subcanonical.
We repeat the example that we began with above. Let "X" be a topological space. We defined "O"("X") to be the category whose objects are the open sets of "X" and whose morphisms are inclusions of open sets. Note that for an open set "U" and a sieve "S" on "U", the set "S"("V") contains either zero or one element for every open set "V". The covering sieves on an object "U" of "O"("X") are those sieves "S" satisfying the following condition:
This notion of cover matches the usual notion in point-set topology.
This topology can also naturally be expressed as a pretopology. We say that a family of inclusions {"V""α" formula_16 "U"} is a covering family if and only if the union formula_17"V""α" equals "U". This site is called the small site associated to a topological space "X".
Let "Spc" be the category of all topological spaces. Given any family of functions {"u""α" : "V""α" → "X"}, we say that it is a surjective family or that the morphisms "u""α" are jointly surjective if formula_17 "u""α"("V""α") equals "X". We define a pretopology on "Spc" by taking the covering families to be surjective families all of whose members are open immersions. Let "S" be a sieve on "Spc". "S" is a covering sieve for this topology if and only if:
Fix a topological space "X". Consider the comma category "Spc/X" of topological spaces with a fixed continuous map to "X". The topology on "Spc" induces a topology on "Spc/X". The covering sieves and covering families are almost exactly the same; the only difference is that now all the maps involved commute with the fixed maps to "X". This is the big site associated to a topological space X . Notice that "Spc" is the big site associated to the one point space. This site was first considered by Jean Giraud.
Let "M" be a manifold. "M" has a category of open sets "O"("M") because it is a topological space, and it gets a topology as in the above example. For two open sets "U" and "V" of "M", the fiber product "U" ×"M" "V" is the open set "U" ∩ "V", which is still in "O"("M"). This means that the topology on "O"("M") is defined by a pretopology, the same pretopology as before.
Let "Mfd" be the category of all manifolds and continuous maps. (Or smooth manifolds and smooth maps, or real analytic manifolds and analytic maps, etc.) "Mfd" is a subcategory of "Spc", and open immersions are continuous (or smooth, or analytic, etc.), so "Mfd" inherits a topology from "Spc". This lets us construct the big site of the manifold "M" as the site "Mfd/M". We can also define this topology using the same pretopology we used above. Notice that to satisfy (PT 0), we need to check that for any continuous map of manifolds "X" → "Y" and any open subset "U" of "Y", the fibered product "U" ×"Y" "X" is in "Mfd/M". This is just the statement that the preimage of an open set is open. Notice, however, that not all fibered products exist in "Mfd" because the preimage of a smooth map at a critical value need not be a manifold.
The category of schemes, denoted "Sch", has a tremendous number of useful topologies. A complete understanding of some questions may require examining a scheme using several different topologies. All of these topologies have associated small and big sites. The big site is formed by taking the entire category of schemes and their morphisms, together with the covering sieves specified by the topology. The small site over a given scheme is formed by only taking the objects and morphisms that are part of a cover of the given scheme.
The most elementary of these is the Zariski topology. Let "X" be a scheme. "X" has an underlying topological space, and this topological space determines a Grothendieck topology. The Zariski topology on "Sch" is generated by the pretopology whose covering families are jointly surjective families of scheme-theoretic open immersions. The covering sieves "S" for "Zar" are characterized by the following two properties:
Despite their outward similarities, the topology on "Zar" is "not" the restriction of the topology on "Spc"! This is because there are morphisms of schemes that are topologically open immersions but that are not scheme-theoretic open immersions. For example, let "A" be a non-reduced ring and let "N" be its ideal of nilpotents. The quotient map "A" → "A/N" induces a map Spec "A/N" → Spec "A", which is the identity on underlying topological spaces. To be a scheme-theoretic open immersion it must also induce an isomorphism on structure sheaves, which this map does not do. In fact, this map is a closed immersion.
The étale topology is finer than the Zariski topology. It was the first Grothendieck topology to be closely studied. Its covering families are jointly surjective families of étale morphisms. It is finer than the Nisnevich topology, but neither finer nor coarser than the "cdh" and l′ topologies.
There are two flat topologies, the "fppf" topology and the "fpqc" topology. "fppf" stands for "fidèlement plat de présentation finie", and in this topology, a morphism of affine schemes is a covering morphism if it is faithfully flat, of finite presentation, and is quasi-finite. "fpqc" stands for "fidèlement plat quasi-compact", and in this topology, a morphism of affine schemes is a covering morphism if it is faithfully flat. In both categories, a covering family is defined to be a family that is a cover on Zariski open subsets. In the fpqc topology, any faithfully flat and quasi-compact morphism is a cover. These topologies are closely related to descent. The "fpqc" topology is finer than all the topologies mentioned above, and it is very close to the canonical topology.
Grothendieck introduced crystalline cohomology to study the "p"-torsion part of the cohomology of characteristic "p" varieties. In the "crystalline topology", which is the basis of this theory, the underlying category has objects given by infinitesimal thickenings together with divided power structures. Crystalline sites are examples of sites with no final object.
There are two natural types of functors between sites. They are given by functors that are compatible with the topology in a certain sense.
If ("C", "J") and ("D", "K") are sites and "u" : "C" → "D" is a functor, then "u" is continuous if for every sheaf "F" on "D" with respect to the topology "K", the presheaf "Fu" is a sheaf with respect to the topology "J". Continuous functors induce functors between the corresponding topoi by sending a sheaf "F" to "Fu". These functors are called pushforwards. If formula_19 and formula_20 denote the topoi associated to "C" and "D", then the pushforward functor is formula_21.
"u""s" admits a left adjoint "u""s" called the pullback. "u""s" need not preserve limits, even finite limits.
In the same way, "u" sends a sieve on an object "X" of "C" to a sieve on the object "uX" of "D". A continuous functor sends covering sieves to covering sieves. If "J" is the topology defined by a pretopology, and if "u" commutes with fibered products, then "u" is continuous if and only if it sends covering sieves to covering sieves and if and only if it sends covering families to covering families. In general, it is "not" sufficient for "u" to send covering sieves to covering sieves (see SGA IV 3, 1.9.3).
Again, let ("C", "J") and ("D", "K") be sites and "v" : "C" → "D" be a functor. If "X" is an object of "C" and "R" is a sieve on "vX", then "R" can be pulled back to a sieve "S" as follows: A morphism "f" : "Z" → "X" is in "S" if and only if "v"("f") : "vZ" → "vX" is in "R". This defines a sieve. "v" is cocontinuous if and only if for every object "X" of "C" and every covering sieve "R" of "vX", the pullback "S" of "R" is a covering sieve on "X".
Composition with "v" sends a presheaf "F" on "D" to a presheaf "Fv" on "C", but if "v" is cocontinuous, this need not send sheaves to sheaves. However, this functor on presheaf categories, usually denoted formula_22, admits a right adjoint formula_23. Then "v" is cocontinuous if and only if formula_23 sends sheaves to sheaves, that is, if and only if it restricts to a functor formula_25. In this case, the composite of formula_22 with the associated sheaf functor is a left adjoint of "v"* denoted "v"*. Furthermore, "v"* preserves finite limits, so the adjoint functors "v"* and "v"* determine a geometric morphism of topoi formula_27.
A continuous functor "u" : "C" → "D" is a morphism of sites "D" → "C" ("not" "C" → "D") if "u""s" preserves finite limits. In this case, "u""s" and "u""s" determine a geometric morphism of topoi formula_27. The reasoning behind the convention that a continuous functor "C" → "D" is said to determine a morphism of sites in the opposite direction is that this agrees with the intuition coming from the case of topological spaces. A continuous map of topological spaces "X" → "Y" determines a continuous functor "O"("Y") → "O"("X"). Since the original map on topological spaces is said to send "X" to "Y", the morphism of sites is said to as well.
A particular case of this happens when a continuous functor admits a left adjoint. Suppose that "u" : "C" → "D" and "v" : "D" → "C" are functors with "u" right adjoint to "v". Then "u" is continuous if and only if "v" is cocontinuous, and when this happens, "u"^"s" is naturally isomorphic to "v"^* and "u"_"s" is naturally isomorphic to "v"_*. In particular, "u" is a morphism of sites.
|
https://en.wikipedia.org/wiki?curid=12910
|
Ghost in the Shell
Animation studio Production I.G has produced several anime adaptations of the series. These include the 1995 film of the same name and its sequel, "Ghost in the Shell 2: Innocence"; the 2002 television series "Ghost in the Shell: Stand Alone Complex" and its 2020 follow-up; and the "Ghost in the Shell: Arise" original video animation (OVA) series. In addition, an American-produced live-action film was released on March 31, 2017.
Shirow stated that he had always wanted the title of his manga to be "Ghost in the Shell", even in Japan, but his original publishers preferred "Mobile Armored Riot Police". He chose "Ghost in the Shell" in homage to Arthur Koestler's "The Ghost in the Machine", from which he also drew inspiration.
Primarily set in the mid-twenty-first century in a fictional Japanese city, the manga and the many anime adaptations follow the members of Public Security Section 9, a task-force of various professionals skilled at solving and preventing crime, most of them with some sort of police background. Political intrigue and counter-terrorism operations are standard fare for Section 9, but the various actions of corrupt officials, companies, and cyber-criminals in each scenario are unique and require the diverse skills of Section 9's staff to prevent a series of incidents from escalating.
In this post-cyberpunk iteration of a possible future, computer technology has advanced to the point that many members of the public possess cyberbrains, technology that allows them to interface their biological brain with various networks. The level of cyberization varies from simple minimal interfaces to almost complete replacement of the brain with cybernetic parts, in cases of severe trauma. This can also be combined with various levels of prostheses, with a fully prosthetic body enabling a person to become a cyborg. The main character of "Ghost in the Shell", Major Motoko Kusanagi, is such a cyborg, having had a terrible accident befall her as a child that ultimately required her to use a full-body prosthesis to house her cyberbrain. This high level of cyberization, however, opens the brain up to attacks from highly skilled hackers, with the most dangerous being those who will hack a person to bend to their whims.
The original "Ghost in the Shell" manga ran in Japan from April 1989 to November 1990 in Kodansha's manga anthology "Young Magazine", and was released in a "tankōbon" volume on October 5, 1991. "Ghost in the Shell 2: Man-Machine Interface" followed 1997 for 9 issues in "Young Magazine", and was collected in the "Ghost in the Shell: Solid Box" on December 1, 2000. Four stories from "Man-Machine Interface" that were not released in tankobon format from previous releases were later collected in "Ghost in the Shell 1.5: Human-Error Processor", and published by Kodansha on July 23, 2003. Several art books have also been published for the manga.
Two animated films based on the original manga have been released, both directed by Mamoru Oshii and animated by Production I.G. "Ghost in the Shell" was released in 1995 and follows the "Puppet Master" storyline from the manga. It was re-released in 2008 as "Ghost in the Shell 2.0" with new audio and updated 3D computer graphics in certain scenes. Its sequel, "Ghost in the Shell 2: Innocence", was released in 2004, with its story based on a chapter from the first manga.
On September 5, 2014, it was revealed by Production I.G. that a new "Ghost in the Shell" animated film, in Japanese, would be released in 2015 promising to show the "further evolution [of the series]". On January 8, 2015, a short teaser trailer was revealed for the project unveiling a redesigned Major more closely resembling her appearance from the older films, and a plot following the Arise continuity of the franchise. The trailer listed Kazuya Nomura as the director, Kazuchika Kise as the general director and character designer, Toru Okubo as the animation director, Tow Ubukata as the screenplay writer and Cornelius as the composer. The film premiered on June 20, 2015, in Japanese theaters.
In 2008, DreamWorks and producer Steven Spielberg acquired the rights to a live-action film adaptation of the original "Ghost in the Shell" manga. On January 24, 2014, Rupert Sanders was announced as director, with a screenplay by William Wheeler. In April 2016, the full cast was announced, which included Juliette Binoche, Chin Han, Lasarus Ratuere and Kaori Momoi, and Scarlett Johansson in the lead role; the casting of Johansson drew accusations of whitewashing. Principal photography on the film began on location in Wellington, New Zealand, on February 1, 2016. Filming wrapped in June 2016. "Ghost in the Shell" premiered in Tokyo on March 16, 2017, and was released in the United States on March 31, 2017, in 2D, 3D and IMAX 3D. It received mixed reviews, with praise for its visuals and Johansson's performance but criticism for its script.
In 2002, "Ghost in the Shell: Stand Alone Complex" premiered on Animax, presenting a new telling of "Ghost in the Shell" independent from the original manga, focusing on Section 9's investigation of the Laughing Man hacker. It was followed in 2004 by a second season titled "Ghost in the Shell: S.A.C. 2nd GIG", which focused on the Individual Eleven terrorist group. The primary storylines of both seasons were compressed into OVAs broadcast as "Ghost in the Shell: Stand Alone Complex The Laughing Man" in 2005 and "Ghost in the Shell: Stand Alone Complex Individual Eleven" in 2006. Also in 2006, "", featuring Section 9's confrontation with a hacker known as the Puppeteer, was broadcast, serving as a finale to the anime series. for the series and its films was composed by Yoko Kanno.
Kodansha and Production I.G announced on April 7, 2017 that Kenji Kamiyama and Shinji Aramaki would be co-directing a new "Kōkaku Kidōtai" anime production. On December 7, 2018, Netflix announced that it had acquired the worldwide streaming rights to the original net animation (ONA) anime series and that it would premiere on April 23, 2020. The series is in 3DCG, with Sola Digital Arts collaborating with Production I.G on the project. It was later revealed that Ilya Kuvshinov would handle character designs, and that the new series would have two seasons of 12 episodes each. For the first season, the opening theme song was “Fly with me”, performed by Daiki Tsuneta, while the ending theme was “Sustain++”, performed by Mili.
In addition to the anime, a series of published books, two separate manga adaptations, and several video games for consoles and mobile phones have been released for "Stand Alone Complex".
In 2013, a new iteration of the series titled "Ghost in the Shell: Arise" premiered, taking an original look at the "Ghost in the Shell" world, set before the original manga. It was released as a series of four original video animation (OVA) episodes (with limited theatrical releases) from 2013 to 2014, then recompiled as a 10-episode television series under the title "Kōkaku Kidōtai: Arise - Alternative Architecture". An additional fifth OVA titled "Pyrophoric Cult", originally premiering in the "Alternative Architecture" broadcast as two original episodes, was released on August 26, 2015. Kazuchika Kise served as the chief director of the series, with Tow Ubukata as head writer. Cornelius was brought onto the project to compose the score for the series, with the Major's new voice actress Maaya Sakamoto also providing vocals for certain tracks.
"Ghost in the Shell: The New Movie", also known as "Ghost in the Shell: Arise − The Movie" or "New Ghost in the Shell", is a 2015 film directed by Kazuya Nomura that serves as a finale to the "Ghost in the Shell: Arise" story arc. The film is a continuation to the plot of the "Pyrophoric Cult" episode of "Arise", and ties up loose ends from that arc.
A manga adaptation was serialized in Kodansha's "Young Magazine", which started on March 13 and ended on August 26, 2013.
"Ghost in the Shell" was developed by Exact and released for the PlayStation on July 17, 1997, in Japan by Sony Computer Entertainment. It is a third-person shooter featuring an original storyline where the character plays a rookie member of Section 9. The video game's soundtrack "Megatech Body" features various electronica artists.
Several video games were also developed to tie into the "Stand Alone Complex" television series, in addition to a first-person shooter by Nexon and Neople titled "", released in 2016.
"Ghost in the Shell" influenced a number of prominent filmmakers. The Wachowskis, creators of "The Matrix" and its sequels, showed it to producer Joel Silver, saying, "We wanna do that for real." "The Matrix" series took several concepts from the film, including the Matrix digital rain, which was inspired by the opening credits of "Ghost in the Shell", and the way characters access the Matrix through holes in the back of their necks. Other parallels have been drawn to James Cameron's "Avatar", Steven Spielberg's "A.I. Artificial Intelligence", and Jonathan Mostow's "Surrogates". James Cameron cited "Ghost in the Shell" as a source of inspiration, citing it as an influence on "Avatar".
Bungie's 2001 third-person action game "Oni" draws substantial inspiration from the setting and characters of "Ghost in the Shell". "Ghost in the Shell" also influenced video games such as the "Metal Gear Solid" series, "Deus Ex", and "Cyberpunk 2077".
|
https://en.wikipedia.org/wiki?curid=12914
|
Gauss–Legendre algorithm
The Gauss–Legendre algorithm is an algorithm to compute the digits of π. It is notable for being rapidly convergent, with only 25 iterations producing 45 million correct digits of π. Its drawback is that it is memory-intensive, so Machin-like formulas are sometimes used instead.
The method is based on the individual work of Carl Friedrich Gauss (1777–1855) and Adrien-Marie Legendre (1752–1833) combined with modern algorithms for multiplication and square roots. It repeatedly replaces two numbers by their arithmetic and geometric mean, in order to approximate their arithmetic-geometric mean.
The version presented below is also known as the Gauss–Euler, Brent–Salamin (or Salamin–Brent) algorithm; it was independently discovered in 1975 by Richard Brent and Eugene Salamin. It was used to compute the first 206,158,430,000 decimal digits of π from September 18 to 20, 1999, and the results were checked with Borwein's algorithm.
1. Initial value setting: a_0 = 1, b_0 = 1/√2, t_0 = 1/4, p_0 = 1.
2. Repeat the following instructions until the difference of a_n and b_n is within the desired accuracy: a_(n+1) = (a_n + b_n)/2, b_(n+1) = √(a_n b_n), t_(n+1) = t_n − p_n (a_n − a_(n+1))², p_(n+1) = 2 p_n.
3. π is then approximated as: π ≈ (a_(n+1) + b_(n+1))² / (4 t_(n+1)).
The first few iterations give rapidly improving approximations: the algorithm has quadratic convergence, which essentially means that the number of correct digits doubles with each iteration.
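A sketch of the iteration in Python, using the standard decimal module for extended precision; the digit count and number of iterations below are illustrative choices, not part of the original description:

```python
from decimal import Decimal, getcontext

def gauss_legendre_pi(digits=50, iterations=6):
    """Approximate pi with the Gauss-Legendre (Brent-Salamin) iteration."""
    getcontext().prec = digits + 10          # working precision with guard digits
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / Decimal(4)
    p = Decimal(1)
    for _ in range(iterations):              # correct digits roughly double each pass
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi())   # 3.14159265358979323846... (accurate to the working precision)
```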
The arithmetic–geometric mean of two numbers, a_0 and b_0, is found by calculating the limit of the sequences a_(n+1) = (a_n + b_n)/2 and b_(n+1) = √(a_n b_n),
which both converge to the same limit.
If a_0 = 1 and b_0 = cos φ, then the limit is π / (2·K(sin φ)), where K(k) is the complete elliptic integral of the first kind: K(k) = ∫_0^{π/2} (1 − k²·sin²θ)^{−1/2} dθ.
If c_0 = sin φ and c_{i+1} = a_i − a_{i+1}, then the sum Σ_{i≥0} 2^{i−1}·c_i² equals 1 − E(sin φ)/K(sin φ),
where E(k) is the complete elliptic integral of the second kind: E(k) = ∫_0^{π/2} (1 − k²·sin²θ)^{1/2} dθ.
Gauss knew of both of these results.
For φ and θ such that φ + θ = π/2, Legendre proved the identity: K(sin φ)·E(sin θ) + K(sin θ)·E(sin φ) − K(sin φ)·K(sin θ) = π/2.
The Gauss–Legendre algorithm can be proven without elliptic modular functions, using only integral calculus.
|
https://en.wikipedia.org/wiki?curid=12916
|
Great Internet Mersenne Prime Search
The Great Internet Mersenne Prime Search (GIMPS) is a collaborative project of volunteers who use freely available software to search for Mersenne prime numbers.
GIMPS was founded in 1996 by George Woltman, who also wrote the Prime95 client and its Linux port MPrime. Scott Kurowski wrote the back end PrimeNet server to demonstrate distributed computing software by Entropia, a company he founded in 1997. GIMPS is registered as Mersenne Research, Inc. with Kurowski as Executive Vice President and board director. GIMPS is said to be one of the first large scale distributed computing projects over the Internet for research purposes.
To date, the project has found a total of seventeen Mersenne primes, fifteen of which were the largest known prime number at their respective times of discovery. The largest known prime is 2^82,589,933 − 1 (or M82,589,933 for short) and was discovered on December 7, 2018 by Patrick Laroche.
The project relies primarily on the Lucas–Lehmer primality test as it is an algorithm that is both specialized for testing Mersenne primes and particularly efficient on binary computer architectures. There is also a trial division phase, used to rapidly eliminate many Mersenne numbers with small factors. Pollard's p − 1 algorithm is also used to search for smooth factors. In 2017, GIMPS adopted the Fermat primality test as an alternative option for primality testing.
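As an illustration of the Lucas–Lehmer test mentioned above, here is a minimal Python sketch; it is suited only to small exponents and omits the FFT-based multiplication, trial-division prefiltering, and P − 1 factoring that the actual GIMPS clients use.

```python
def is_mersenne_prime(p):
    """Lucas-Lehmer test for M_p = 2**p - 1, with p itself prime.

    A straightforward sketch; real GIMPS clients use FFT-based
    multiplication and many other optimizations for large p.
    """
    if p == 2:
        return True          # M_2 = 3 is prime, handled separately
    m = (1 << p) - 1         # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m  # Lucas-Lehmer recurrence modulo M_p
    # M_p is prime if and only if s == 0 after p - 2 steps
    return s == 0

# Small known cases: M_3, M_5, M_7 and M_13 are prime; M_11 is not.
print([p for p in (3, 5, 7, 11, 13) if is_mersenne_prime(p)])
```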
The project began in early January 1996, with a program that ran on i386 computers.
The name for the project was coined by Luther Welsh, one of its earlier searchers and the co-discoverer of the 29th Mersenne prime.
Within a few months, several dozen people had joined, and over a thousand by the end of the first year.
Joel Armengaud, a participant, discovered the primality of M1,398,269 on November 13, 1996.
GIMPS has a sustained average aggregate throughput of approximately 1.17 PetaFLOPS (PFLOPS). In November 2012, GIMPS maintained 95 TFLOPS, theoretically earning the GIMPS virtual computer a rank of 330 among the TOP500 most powerful known computer systems in the world. The preceding place was then held by an 'HP Cluster Platform 3000 BL460c G7' of Hewlett-Packard. As of the November 2014 TOP500 results, these old GIMPS numbers would no longer make the list.
Previously, this was approximately 50 TFLOPS in early 2010, 30 TFLOPS in mid-2008, 20 TFLOPS in mid-2006, and 14 TFLOPS in early 2004.
Although the GIMPS software's source code is publicly available, technically it is not free software, since it has a restriction that users must abide by the project's distribution terms.
Specifically, if the software is used to discover a prime number with at least 100,000,000 decimal digits, the user will only win $50,000 of the $150,000 prize offered by the Electronic Frontier Foundation.
Third-party programs for testing Mersenne numbers, such as Mlucas and Glucas (for non-x86 systems), do not have this restriction.
GIMPS also "reserves the right to change this EULA without notice and with reasonable retroactive effect".""
All Mersenne primes are of the form 2^p − 1, where "p" is a prime number itself. The smallest Mersenne prime in this table is 2^1,398,269 − 1, the project's first discovery.
The first column is the rank of the Mersenne prime in the (ordered) sequence of all Mersenne primes; GIMPS has found all known Mersenne primes beginning with the 35th.
Whenever a possible prime is reported to the server, it is verified first before it is announced. The importance of this was illustrated in 2003, when a false positive was reported to possibly be the 40th Mersenne prime but verification failed.
The official "discovery date" of a prime is the date that a human first noticed the result for the prime, which may differ from the date that the result was first reported to the server. For example, M74207281 was reported to the server on September 17, 2015, but the report was overlooked until January 7, 2016.
|
https://en.wikipedia.org/wiki?curid=12917
|
Game.com
The Game.com is a fifth-generation handheld game console released by Tiger Electronics in August 1997. A smaller version, the Game.com Pocket Pro, was released in mid-1999. The first version of the Game.com can be connected to a 14.4 kbit/s modem for Internet connectivity, hence its name referencing the top level domain .com. It was the first video game console to include a touchscreen and the first handheld console to include Internet connectivity. The Game.com sold less than 300,000 units and was discontinued in 2000 because of poor sales.
Tiger Electronics had previously introduced its R-Zone game console in 1995 – as a competitor to Nintendo's Virtual Boy – but the system was a failure. Prior to the R-Zone, Tiger had also manufactured handheld games consisting of LCD screens with imprinted graphics.
By February 1997, Tiger was planning to release a new game console, the handheld "game.com", as a direct competitor to Nintendo's portable Game Boy console. Prior to its release, Tiger Electronics stated that the Game.com would "change the gaming world as we know it," while a spokesperson stated that it would be "one of this summer's hits." The Game.com, the only new game console of the year, was on display at the Electronic Entertainment Expo (E3) in May 1997, with sales expected to begin in July. Dennis Lynch of the "Chicago Tribune" considered the Game.com to be the "most interesting hand-held device" on display at E3, describing it as a "sort of Game Boy for adults".
The Game.com was released in the United States in August 1997, with a retail price of $69.95, while an Internet-access cartridge was scheduled for release in October. "Lights Out" was included with the console as a pack-in game and Solitaire was built into the handheld itself. The console's release marked Tiger's largest product launch ever. Tiger also launched a website for the system at the domain "game.com". The Game.com was marketed with a television commercial in which a spokesperson insults gamers who ask questions about the console, while stating that it "plays more games than you idiots have brain cells"; GamesRadar stated that the advertisement "probably didn't help matters much". By the end of 1997, the console had been released in the United Kingdom, at a retail price of £79.99.
The Game.com came in a black-and-white color scheme, and featured a design similar to Sega's Game Gear console. The screen is larger than the Game Boy's and has higher resolution. The Game.com included a phone directory, a calculator, and a calendar, and its PDA features targeted an older audience. Tiger designed the console's features to be simple and cheap. The device was powered by four AA batteries, and an optional AC adapter was also available. One of the major peripherals that Tiger produced for the system was the compete.com serial cable, allowing players to connect their consoles to play multiplayer games. The console includes two game cartridge slots. In addition to reducing the need to swap out cartridges, this enabled Game.com games to include online elements, since both a game cartridge and the modem cartridge could be inserted at the same time.
The Game.com was the first video game console to feature a touchscreen and also the first handheld video game console to have Internet connectivity. The Game.com's black-and-white monochrome touchscreen measures approximately one and a half inches by two inches, and is divided into square zones that are imprinted onto the screen itself, to aid players in determining where to apply the stylus. The touchscreen lacks a backlight. The Game.com was also the first handheld gaming console to have internal memory, which is used to save information such as high scores and contact information.
Because of the original Game.com's poor sales, Tiger developed an updated version known as the Game.com Pocket Pro. The console was shown at the American International Toy Fair in February 1999, and was later shown along with several future games at E3 in May 1999. The Game.com Pocket Pro had been released by June 1999, with a retail price of $29.99. The new console was available in five different colors: green, orange, pink, purple, and teal.
The Pocket Pro was the only handheld console at that time to have a backlit screen, although it lacked color like its predecessor. The Pocket Pro was reduced in size from its predecessor to be equivalent to the Game Boy Pocket. The screen size was also reduced, and the new console featured only one cartridge slot. Unlike the original Game.com, the Pocket Pro required only two AA batteries. The Game.com Pocket Pro included a phone directory, a calendar and a calculator, but lacked Internet capabilities.
The Game.com Pocket Pro's primary competitor was the Game Boy Color. Despite several games based on popular franchises, the Game.com console line failed to sell in large numbers, and was discontinued in 2000 because of poor sales. The Game.com was a commercial failure, with less than 300,000 units sold, although the idea of a touchscreen would later be used successfully in the Nintendo DS, released in 2004.
Accessing the Internet required the use of an Internet cartridge and a modem, neither of which were included with the console. Email messages could be read and sent on the Game.com using the Internet cartridge, and the Game.com supported text-only web browsing through Internet service providers. Email messages could not be saved to the Game.com's internal memory. In addition to a Game.com-branded 14.4 kbit/s modem, Tiger also offered an Internet service provider through Delphi that was made to work specifically with the Game.com.
Tiger subsequently released the Web Link cartridge, allowing players to connect their system to a desktop computer. Using the Web Link cartridge, players could upload their high scores to the Game.com website for a chance to be listed on a webpage featuring the top high scores. None of the console's games made use of the Internet feature.
Several games were available for the Game.com at the time of its 1997 launch, in comparison to hundreds of games available for the Game Boy. Tiger planned to have a dozen games available by the end of 1997, and hoped to have as many as 50 games available in 1998, with all of them to be produced or adapted internally by Tiger. Some third parties expressed interest in developing for the system, but Tiger decided against signing any initially. Tiger secured licenses for several popular game series, including "Duke Nukem", "Resident Evil", and "Mortal Kombat Trilogy". Game prices initially ranged between $19 and $29. Cartridge size was in the 16 megabit range.
At the time of the Pocket Pro's 1999 release, the Game.com library consisted primarily of games intended for an older audience. Some games that were planned for release in 1999 would be exclusive to Game.com consoles. Game prices at that time ranged from $14 to $30. Twenty games were ultimately released for the Game.com, most of them developed internally by Tiger.
At the time of the Game.com's launch in 1997, Chris Johnston of VideoGameSpot believed that the console would have difficulty competing against the Game Boy. Johnston also believed that text-based Internet and email would attract only limited appeal, stating that such features were outdated. Johnston concluded that the Game.com "is a decent system, but Nintendo is just way too powerful in the industry." Chip and Jonathan Carter wrote that the console did not play action games as well as it did with other games, although they praised the console's various options and wrote, "Graphically, we'd have to say this has the potential to perform better than Game Boy. As for sound, Game.com delivers better than any other hand-held on the market." A team of four "Electronic Gaming Monthly" editors gave the Game.com scores of 5.5, 4.5, 5.0, and 4.0. They were impressed by the PDA features and touchscreen, but commented that the games library had thus far failed to deliver on the Game.com's great potential. They elaborated that while the non-scrolling games, particularly "Wheel of Fortune", were great fun and made good use of the touchscreen, the more conventional action games were disappointing and suffered from prominent screen blurring.
"Wisconsin State Journal" stated that the Game.com offered "some serious" advantages over the Game Boy, including its touchscreen. It was also stated that in comparison to the Game Boy, the Game.com's 8-bit processor provided "marginal improvements" in the quality of speed and graphics. The newspaper noted that the Game.com had a "tiny, somewhat blurry screen." "The Philadelphia Inquirer" wrote a negative review of the Game.com, particularly criticizing Internet connectivity issues. Also criticized was the system's lack of a backlit screen, as the use of exterior lighting could cause difficulty in viewing the screen, which was highly reflective.
Steven L. Kent, writing for the "Chicago Tribune", wrote that the console had an elegant design, as well as better sound and a higher-definition screen than the Game Boy: "Elegant design, however, has not translated into ideal game play. Though Tiger has produced fighting, racing and shooting games for Game.com, the games have noticeably slow frame rates. The racing game looks like a flickering silent picture show." Cameron Davis of VideoGames.com wrote, "Sure, this is no Game Boy Color-killer, but the Game.Com was never meant to be. To deride it by comparing it with more powerful and established formats would be a bit unfair". Davis also wrote, "The touch screen is pretty sensitive, but it works well - you won't need more than a few seconds to get used to it." However, he criticized the screen's squared zones: "more often than not it proves distracting when you are playing games that don't require it."
"GamePro" criticized the Pocket Pro's lack of screen color and its difficult controls, but considered its two best qualities to be its cheap price and a game library of titles exclusive to the console. "The Philadelphia Inquirer" also criticized the Pocket Pro's lack of a color screen, as well as "frustrating" gameplay caused by the "unresponsive" controls, including the stylus. The newspaper stated that, "Even at $29.99, the pocket.pro is no bargain."
Brett Alan Weiss of the website AllGame wrote, "The Game.com, the little system that (almost) could, constantly amazes me with the strength and scope of its sound effects. [...] It's astounding what power comes out of such a tiny little speaker." In 2004, Kent included the modem and "some PDA functionality" as the console's strengths, while listing its "Slow processor" and "lackluster library of games" as weaknesses. In 2006, "Engadget" stated that "You can't fault Tiger Electronics for their ambition," but wrote that the Game.com "didn't do any one thing particularly well", criticizing its text-only Internet access and stating that its "disappointing games were made even worse" by the "outdated" screen.
In 2009, "PC World" ranked the Game.com at number nine on its list of the 10 worst video game systems ever released, criticizing its Internet aspect, its game library, its low-resolution touchscreen, and its "Silly name that attempted to capitalize on Internet mania." However, "PC World" positively noted its "primitive" PDA features and its solitaire game, considered by the magazine to be the system's best game. In 2011, Mikel Reparaz of GamesRadar ranked the Game.com at number 3 on a list of 7 failed handheld consoles, writing that while the Game.com had several licensed games, it "doesn't actually mean much when they all look like cruddy, poorly animated Game Boy ports." Raparaz also stated that the Game.com "looked dated even by Game Boy standards," noting that the Game Boy Pocket had a sharper display screen. Reparaz stated that the Game.com's continuation into 2000 was a "pretty significant achievement" considering its competition from the Game Boy Color.
In 2013, Jeff Dunn of GamesRadar criticized the Game.com for its "blurry" and "imprecise" touchscreen, as well as its "limited and unwieldy" Internet and email interfaces. Dunn also criticized the "painful" Internet setup process, and stated that all of the console's available games were "ugly and horrible." Dunn noted, however, that the Game.com's Internet aspect was a "smart" feature. In 2016, Motherboard stated that the Game.com was "perhaps one of the worst consoles of all time," due largely to its low screen quality. In 2018, Nadia Oxford of USgamer noted the Game.com's "paper-thin" library of games and stated that the console "died in record time because it was poorly-made, to say the least."
|
https://en.wikipedia.org/wiki?curid=12919
|
General Packet Radio Service
General Packet Radio Service (GPRS) is a packet oriented mobile data standard on the 2G and 3G cellular communication network's global system for mobile communications (GSM). GPRS was established by the European Telecommunications Standards Institute (ETSI) in response to the earlier CDPD and i-mode packet-switched cellular technologies. It is now maintained by the 3rd Generation Partnership Project (3GPP).
GPRS is typically sold according to the total volume of data transferred during the billing cycle, in contrast with circuit switched data, which is usually billed per minute of connection time, or sometimes by one-third minute increments. Usage above the GPRS bundled data cap may be charged per MB of data, speed limited, or disallowed.
GPRS is a best-effort service, implying variable throughput and latency that depend on the number of other users sharing the service concurrently, as opposed to circuit switching, where a certain quality of service (QoS) is guaranteed during the connection. In 2G systems, GPRS provides data rates of 56–114 kbit/s. 2G cellular technology combined with GPRS is sometimes described as "2.5G", that is, a technology between the second (2G) and third (3G) generations of mobile telephony. It provides moderate-speed data transfer, by using unused time division multiple access (TDMA) channels in, for example, the GSM system. GPRS is integrated into GSM Release 97 and newer releases.
The GPRS core network allows 2G, 3G and WCDMA mobile networks to transmit IP packets to external networks such as the Internet. The GPRS system is an integrated part of the GSM network switching subsystem.
GPRS extends the GSM circuit-switched data capabilities and makes the following services possible:
If SMS over GPRS is used, an SMS transmission speed of about 30 SMS messages per minute may be achieved. This is much faster than using the ordinary SMS over GSM, whose SMS transmission speed is about 6 to 10 SMS messages per minute.
GPRS supports the following protocols:
When TCP/IP is used, each phone can have one or more IP addresses allocated. GPRS will store and forward the IP packets to the phone even during handover. TCP handles retransmission of any packets lost (e.g. due to a radio-noise-induced pause).
Devices supporting GPRS are grouped into three classes: Class A devices can be connected to GPRS and GSM services (voice, SMS) simultaneously; Class B devices can be attached to both services but can use only one at a time, with GPRS suspended during a voice call or SMS and resumed automatically afterwards; Class C devices must be switched manually between GPRS and GSM services.
Because a Class A device must service GPRS and GSM networks together, it effectively needs two radios. To avoid this hardware requirement, a GPRS mobile device may implement the dual transfer mode (DTM) feature. A DTM-capable mobile can handle both GSM packets and GPRS packets with network coordination to ensure both types are not transmitted at the same time. Such devices are considered pseudo-Class A, sometimes referred to as "simple class A". Some networks have supported DTM since 2007.
USB 3G/GPRS modems have a terminal-like interface over USB with V.42bis and RFC 1144 data compression. Some models include an external antenna connector. Modem cards for laptop PCs, or external USB modems, are available, similar in shape and size to a computer mouse or a pendrive.
A GPRS connection is established by reference to its access point name (APN). The APN defines the services such as wireless application protocol (WAP) access, short message service (SMS), multimedia messaging service (MMS), and Internet communication services such as email and World Wide Web access.
In order to set up a GPRS connection for a wireless modem, a user must specify an APN, optionally a user name and password, and very rarely an IP address, provided by the network operator.
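As a rough illustration of this setup, the sketch below issues the standard 3GPP AT commands that define a PDP context with an APN on a serial-attached GPRS modem. The device path, APN name, and baud rate are hypothetical placeholders, pyserial is assumed to be installed, and real modems often require additional operator-specific configuration.

```python
import serial  # pyserial; assumes the modem is exposed as a serial device

def send(port, cmd):
    """Send one AT command and return the modem's raw reply."""
    port.write((cmd + "\r").encode())
    return port.read(256).decode(errors="replace")

# Hypothetical device path and APN; replace with values from the network operator.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as modem:
    print(send(modem, "AT"))                                     # basic liveness check
    print(send(modem, 'AT+CGDCONT=1,"IP","internet.example"'))   # define PDP context with the APN
    print(send(modem, "AT+CGATT=1"))                             # attach to the GPRS service
    print(send(modem, "ATD*99#"))                                # start the data call (PPP)
```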
GSM or GPRS modules are similar to modems, with one difference: a modem is an external piece of equipment, whereas a GSM or GPRS module can be integrated within electrical or electronic equipment as an embedded piece of hardware. A GSM mobile, on the other hand, is a complete embedded system in itself, with embedded processors dedicated to providing a functional interface between the user and the mobile network.
The upload and download speeds that can be achieved in GPRS depend on a number of factors such as:
The multiple access methods used in GSM with GPRS are based on frequency division duplex (FDD) and TDMA. During a session, a user is assigned to one pair of up-link and down-link frequency channels. This is combined with time domain statistical multiplexing which makes it possible for several users to share the same frequency channel. The packets have constant length, corresponding to a GSM time slot. The down-link uses first-come first-served packet scheduling, while the up-link uses a scheme very similar to reservation ALOHA (R-ALOHA). This means that slotted ALOHA (S-ALOHA) is used for reservation inquiries during a contention phase, and then the actual data is transferred using dynamic TDMA with first-come first-served.
The channel encoding process in GPRS consists of two steps: first, a cyclic code is used to add parity bits, which are also referred to as the Block Check Sequence, followed by coding with a possibly punctured convolutional code. The Coding Schemes CS-1 to CS-4 specify the number of parity bits generated by the cyclic code and the puncturing rate of the convolutional code. In Coding Schemes CS-1 through CS-3, the convolutional code is of rate 1/2, i.e. each input bit is converted into two coded bits. In Coding Schemes CS-2 and CS-3, the output of the convolutional code is punctured to achieve the desired code rate. In Coding Scheme CS-4, no convolutional coding is applied. The following table summarises the options.
The least robust, but fastest, coding scheme (CS-4) is available near a base transceiver station (BTS), while the most robust coding scheme (CS-1) is used when the mobile station (MS) is further away from a BTS.
Using CS-4 it is possible to achieve a user speed of 20.0 kbit/s per time slot. However, using this scheme the cell coverage is 25% of normal. CS-1 can achieve a user speed of only 8.0 kbit/s per time slot, but has 98% of normal coverage. Newer network equipment can adapt the transfer speed automatically depending on the mobile location.
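To make the coding-scheme mechanics concrete, the sketch below shows a generic rate-1/2 convolutional encoder followed by puncturing of one bit in four, which raises the rate to 2/3. The generator polynomials and puncturing pattern are illustrative assumptions, not the exact parameters defined for CS-1 to CS-3 in the GSM channel-coding specification.

```python
def conv_encode(bits, g0=0b10011, g1=0b11011):
    """Rate-1/2 convolutional encoder: two output bits per input bit.

    g0 and g1 are illustrative constraint-length-5 generator polynomials
    (bit i of g taps the input bit shifted in i steps ago, bit 0 being the
    current bit); the actual GPRS polynomials are defined in the GSM spec.
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b11111        # shift in the new bit
        out.append(bin(state & g0).count("1") % 2)  # parity over taps of g0
        out.append(bin(state & g1).count("1") % 2)  # parity over taps of g1
    return out

def puncture(coded, pattern=(1, 1, 1, 0)):
    """Drop coded bits where the repeating pattern has a 0.

    Keeping 3 of every 4 coded bits raises the overall code rate from
    1/2 to 2/3, the same idea used by the higher GPRS coding schemes.
    """
    return [bit for i, bit in enumerate(coded) if pattern[i % len(pattern)]]

payload = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(payload)   # 16 coded bits for 8 input bits (rate 1/2)
sent = puncture(coded)         # 12 bits after puncturing (rate 2/3)
print(len(payload), len(coded), len(sent))
```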
In addition to GPRS, there are two other GSM technologies which deliver data services: circuit-switched data (CSD) and high-speed circuit-switched data (HSCSD). In contrast to the shared nature of GPRS, these instead establish a dedicated circuit (usually billed per minute). Some applications such as video calling may prefer HSCSD, especially when there is a continuous flow of data between the endpoints.
The following table summarises some possible configurations of GPRS and circuit switched data services.
The multislot class determines the speed of data transfer available in the uplink and downlink directions. It is a value between 1 and 45 which the network uses to allocate radio channels in the uplink and downlink direction. Multislot classes with values greater than 31 are referred to as high multislot classes.
A multislot allocation is represented as, for example, 5+2. The first number is the number of downlink timeslots and the second is the number of uplink timeslots allocated for use by the mobile station. A commonly used value is class 10 for many GPRS/EGPRS mobiles, which uses a maximum of 4 timeslots in the downlink direction and 2 timeslots in the uplink direction. However, a maximum of 5 timeslots can be used simultaneously across uplink and downlink. The network will automatically configure for either 3+2 or 4+1 operation depending on the nature of data transfer.
Some high end mobiles, usually also supporting UMTS, also support GPRS/EDGE multislot class 32. According to 3GPP TS 45.002 (Release 12), Table B.1, mobile stations of this class support 5 timeslots in downlink and 3 timeslots in uplink with a maximum number of 6 simultaneously used timeslots. If data traffic is concentrated in downlink direction the network will configure the connection for 5+1 operation. When more data is transferred in the uplink the network can at any time change the constellation to 4+2 or 3+3. Under the best reception conditions, i.e. when the best EDGE modulation and coding scheme can be used, 5 timeslots can carry a bandwidth of 5*59.2 kbit/s = 296 kbit/s. In uplink direction, 3 timeslots can carry a bandwidth of 3*59.2 kbit/s = 177.6 kbit/s.
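The per-direction arithmetic quoted above can be reproduced with a trivial calculation; the sketch below assumes the best-case figure of 59.2 kbit/s per EDGE timeslot used in the text and ignores radio conditions, coding-scheme changes, and protocol overhead.

```python
PER_SLOT_KBITS = 59.2  # best-case EDGE throughput per timeslot, as quoted above

def multislot_throughput(downlink_slots, uplink_slots):
    """Return (downlink, uplink) throughput in kbit/s for a slot allocation."""
    return downlink_slots * PER_SLOT_KBITS, uplink_slots * PER_SLOT_KBITS

# Configurations a multislot class 32 mobile might be given (5+1, 4+2, 3+3).
for dl, ul in [(5, 1), (4, 2), (3, 3)]:
    down, up = multislot_throughput(dl, ul)
    print(f"{dl}+{ul}: {down:.1f} kbit/s down, {up:.1f} kbit/s up")
```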
Each multislot class identifies the following:
The different multislot class specifications are detailed in Annex B of the 3GPP Technical Specification 45.002 (Multiplexing and multiple access on the radio path).
The maximum speed of a GPRS connection offered in 2003 was similar to a modem connection in an analog wire telephone network, about 32–40 kbit/s, depending on the phone used. Latency is very high; round-trip time (RTT) is typically about 600–700 ms and often reaches 1s. GPRS is typically prioritized lower than speech, and thus the quality of connection varies greatly.
Devices with latency/RTT improvements (via, for example, the extended UL TBF mode feature) are generally available. Also, network upgrades of features are available with certain operators. With these enhancements the active round-trip time can be reduced, resulting in significant increase in application-level throughput speeds.
GPRS opened in 2000 as a packet-switched data service embedded in the circuit-switched cellular radio network GSM. GPRS extends the reach of the fixed Internet by connecting mobile terminals worldwide.
The CELLPAC protocol, developed in 1991–1993, was the trigger point for the start in 1993 of the specification of standard GPRS by ETSI SMG. In particular, the CELLPAC Voice & Data functions introduced in a 1993 ETSI workshop contribution anticipated what later became the roots of GPRS. This workshop contribution is referenced in 22 GPRS-related US patents. Successor systems to GSM/GPRS such as W-CDMA (UMTS) and LTE rely on key GPRS functions for mobile Internet access as introduced by CELLPAC.
According to a study on history of GPRS development, Bernhard Walke and his student Peter Decker are the inventors of GPRS – the first system providing worldwide mobile Internet access.
|
https://en.wikipedia.org/wiki?curid=12920
|
Georgian architecture
Georgian architecture is the name given in most English-speaking countries to the set of architectural styles current between 1714 and 1830. It is named after the first four British monarchs of the House of Hanover—George I, George II, George III, and George IV—who reigned in continuous succession from August 1714 to June 1830. The style was revived in the late 19th century in the United States as Colonial Revival architecture and in the early 20th century in Great Britain as Neo-Georgian architecture; in both it is also called Georgian Revival architecture. In the United States the term "Georgian" is generally used to describe all buildings from the period, regardless of style; in Britain it is generally restricted to buildings that are "architectural in intention", and have stylistic characteristics that are typical of the period, though that covers a wide range.
The Georgian style is highly variable, but marked by symmetry and proportion based on the classical architecture of Greece and Rome, as revived in Renaissance architecture. Ornament is also normally in the classical tradition, but typically restrained, and sometimes almost completely absent on the exterior. The period brought the vocabulary of classical architecture to smaller and more modest buildings than had been the case before, replacing English vernacular architecture (or becoming the new vernacular style) for almost all new middle-class homes and public buildings by the end of the period.
Georgian architecture is characterized by its proportion and balance; simple mathematical ratios were used to determine the height of a window in relation to its width or the shape of a room as a double cube. Regularity, as with ashlar (uniformly cut) stonework, was strongly approved, imbuing symmetry and adherence to classical rules: the lack of symmetry, where Georgian additions were added to earlier structures remaining visible, was deeply felt as a flaw, at least before John Nash began to introduce it in a variety of styles. Regularity of housefronts along a street was a desirable feature of Georgian town planning. Until the start of the Gothic Revival in the early 19th century, Georgian designs usually lay within the Classical orders of architecture and employed a decorative vocabulary derived from ancient Rome or Greece.
In towns, which expanded greatly during the period, landowners turned into property developers, and rows of identical terraced houses became the norm. Even the wealthy were persuaded to live in these in town, especially if provided with a square of garden in front of the house. There was an enormous amount of building in the period, all over the English-speaking world, and the standards of construction were generally high. Where they have not been demolished, large numbers of Georgian buildings have survived two centuries or more, and they still form large parts of the core of cities such as London, Edinburgh, Dublin, Newcastle upon Tyne and Bristol.
The period saw the growth of a distinct and trained architectural profession; before the mid-century "the high-sounding title, 'architect' was adopted by anyone who could get away with it". This contrasted with earlier styles, which were primarily disseminated among craftsmen through the direct experience of the apprenticeship system. But most buildings were still designed by builders and landlords together, and the wide spread of Georgian architecture, and the Georgian styles of design more generally, came from dissemination through pattern books and inexpensive suites of engravings. Authors such as the prolific William Halfpenny (active 1723–1755) had editions in America as well as Britain.
A similar phenomenon can be seen in the commonality of housing designs in Canada and the United States (though of a wider variety of styles) from the 19th century down to the 1950s, using pattern books drawn up by professional architects that were distributed by lumber companies and hardware stores to contractors and homebuilders.
From the mid-18th century, Georgian styles were assimilated into an architectural vernacular that became part and parcel of the training of every architect, designer, builder, carpenter, mason and plasterer, from Edinburgh to Maryland.
Georgian succeeded the English Baroque of Sir Christopher Wren, Sir John Vanbrugh, Thomas Archer, William Talman, and Nicholas Hawksmoor; this in fact continued into at least the 1720s, overlapping with a more restrained Georgian style. The architect James Gibbs was a transitional figure: his earlier buildings are Baroque, reflecting the time he spent in Rome in the early 18th century, but he adjusted his style after 1720. Major architects to promote the change in direction from Baroque were Colen Campbell, author of the influential book "Vitruvius Britannicus" (1715–1725); Richard Boyle, 3rd Earl of Burlington and his protégé William Kent; Isaac Ware; Henry Flitcroft and the Venetian Giacomo Leoni, who spent most of his career in England.
Other prominent architects of the early Georgian period include James Paine, Robert Taylor, and John Wood, the Elder. The European Grand Tour became very common for wealthy patrons in the period, and Italian influence remained dominant, though at the start of the period Hanover Square, Westminster (1713 on), developed and occupied by Whig supporters of the new dynasty, seems to have deliberately adopted German stylistic elements in their honour, especially vertical bands connecting the windows.
The styles that resulted fall within several categories. In the mainstream of Georgian style were both Palladian architecture and its whimsical alternatives, Gothic and Chinoiserie, which were the English-speaking world's equivalent of European Rococo. From the mid-1760s a range of Neoclassical modes were fashionable, associated with the British architects Robert Adam, James Gibbs, Sir William Chambers, James Wyatt, George Dance the Younger, Henry Holland and Sir John Soane. John Nash was one of the most prolific architects of the late Georgian era, known as the Regency period; he was responsible for designing large areas of London. Greek Revival architecture was added to the repertory, beginning around 1750, but increasing in popularity after 1800. Leading exponents were William Wilkins and Robert Smirke.
In Britain, brick or stone are almost invariably used; brick is often disguised with stucco. In America and other colonies wood remained very common, as its availability and cost-ratio with the other materials was more favourable. Raked roofs were mostly covered in earthenware tiles until Richard Pennant, 1st Baron Penrhyn led the development of the slate industry in Wales from the 1760s, which by the end of the century had become the usual material.
Versions of revived Palladian architecture dominated English country house architecture. Houses were increasingly placed in grand landscaped settings, and large houses were generally made wide and relatively shallow, largely to look more impressive from a distance. The height was usually highest in the centre, and the Baroque emphasis on corner pavilions often found on the continent was generally avoided. In grand houses, an entrance hall led to steps up to a "piano nobile" or mezzanine floor where the main reception rooms were. Typically the basement area or "rustic", with kitchens, offices and service areas, as well as male guests with muddy boots, came some way above ground, and was lit by windows that were high on the inside, but just above ground level outside. A single block was typical, with perhaps a small court for carriages at the front marked off by railings and a gate, but rarely a stone gatehouse, or side wings around the court.
Windows in all types of buildings were large and regularly placed on a grid; this was partly to minimize window tax, which was in force throughout the period in the United Kingdom. Some windows were subsequently bricked-in. Their height increasingly varied between the floors, and they increasingly began below waist-height in the main rooms, making a small balcony desirable. Before this the internal plan and function of the rooms can generally not be deduced from the outside. To open these large windows the sash window, already developed by the 1670s, became very widespread. Corridor plans became universal inside larger houses.
Internal courtyards became more rare, except beside the stables, and the functional parts of the building were placed at the sides, or in separate buildings nearby hidden by trees. The views to and from the front and rear of the main block were concentrated on, with the side approaches usually much less important. The roof was typically invisible from the ground, though domes were sometimes visible in grander buildings. The roofline was generally clear of ornament except for a balustrade or the top of a pediment. Columns or pilasters, often topped by a pediment, were popular for ornament inside and out, and other ornament was generally geometrical or plant-based, rather than using the human figure.
Inside ornament was far more generous, and could sometimes be overwhelming. The chimneypiece continued to be the usual main focus of rooms, and was now given a classical treatment, and increasingly topped by a painting or a mirror. Plasterwork ceilings, carved wood, and bold schemes of wallpaint formed a backdrop to increasingly rich collections of furniture, paintings, porcelain, mirrors, and objets d'art of all kinds. Wood-panelling, very common since about 1500, fell from favour around the mid-century, and wallpaper included very expensive imports from China.
Smaller houses in the country, such as vicarages, were simple regular blocks with visible raked roofs, and a central doorway, often the only ornamented area. Similar houses, often referred to as "villas" became common around the fringes of the larger cities, especially London, and detached houses in towns remained common, though only the very rich could afford them in central London.
In towns even most better-off people lived in terraced houses, which typically opened straight onto the street, often with a few steps up to the door. There was often an open space, protected by iron railings, dropping down to the basement level, with a discreet entrance down steps off the street for servants and deliveries; this is known as the "area". This meant that the ground floor front was now removed and protected from the street and encouraged the main reception rooms to move there from the floor above. Where, as often, a new street or set of streets was developed, the road and pavements were raised up, and the gardens or yards behind the houses at a lower level, usually representing the original one.
Town terraced houses for all social classes remained resolutely tall and narrow, each dwelling occupying the whole height of the building. This contrasted with well-off continental dwellings, which had already begun to be formed of wide apartments occupying only one or two floors of a building; such arrangements were only typical in England when housing groups of bachelors, as in Oxbridge colleges, the lawyers in the Inns of Court or The Albany after it was converted in 1802. In the period in question, only in Edinburgh were working-class purpose-built tenements common, though lodgers were common in other cities. A curving crescent, often looking out at gardens or a park, was popular for terraces where space allowed. In early and central schemes of development, plots were sold and built on individually, though there was often an attempt to enforce some uniformity, but as development reached further out schemes were increasingly built as a uniform scheme and then sold.
The late Georgian period saw the birth of the semi-detached house, planned systematically, as a suburban compromise between the terraced houses of the city and the detached "villas" further out, where land was cheaper. There had been occasional examples in town centres going back to medieval times. Most early suburban examples are large, and in what are now the outer fringes of Central London, but were then in areas being built up for the first time. Blackheath, Chalk Farm and St John's Wood are among the areas contesting being the original home of the semi. Sir John Summerson gave primacy to the Eyre Estate of St John's Wood. A plan for this exists dated 1794, in which "the whole development consists of pairs of semi-detached houses ... So far as I know, this is the first recorded scheme of the kind". In fact the French Wars put an end to this scheme, but when the development was finally built it retained the semi-detached form, "a revolution of striking significance and far-reaching effect".
Until the Church Building Act 1818, the period saw relatively few churches built in Britain, which was already well-supplied, although in the later years of the period the demand for Non-conformist and Roman Catholic places of worship greatly increased. Anglican churches that were built were designed internally to allow maximum audibility, and visibility, for preaching, so the main nave was generally wider and shorter than in medieval plans, and often there were no side-aisles. Galleries were common in new churches. Especially in country parishes, the external appearance generally retained the familiar signifiers of a Gothic church, with a tower or spire, a large west front with one or more doors, and very large windows along the nave, but all with any ornament drawn from the classical vocabulary. Where funds permitted, a classical temple portico with columns and a pediment might be used at the west front. Interior decoration was generally chaste; however, walls often became lined with plaques and monuments to the more prosperous members of the congregation.
In the colonies new churches were certainly required, and generally repeated similar formulae. British Non-conformist churches were often more classical in mood, and tended not to feel the need for a tower or steeple.
The archetypal Georgian church is St Martin-in-the-Fields in London (1720), by Gibbs, who boldly added to the classical temple façade at the west end a large steeple on top of a tower, set back slightly from the main frontage. This formula shocked purists and foreigners, but became accepted and was very widely copied, at home and in the colonies, for example at St Andrew's Church, Chennai in India.
The 1818 Act allocated some public money for new churches required to reflect changes in population, and a commission to allocate it. Building of Commissioners' churches gathered pace in the 1820s, and continued until the 1850s. The early churches, falling into the Georgian period, show a high proportion of Gothic Revival buildings, along with the classically inspired.
Public buildings generally varied between the extremes of plain boxes with grid windows and Italian Late Renaissance palaces, depending on budget. Somerset House in London, designed by Sir William Chambers in 1776 for government offices, was as magnificent as any country house, though never quite finished, as funds ran out. Barracks and other less prestigious buildings could be as functional as the mills and factories that were growing increasingly large by the end of the period. But as the period came to an end many commercial projects were becoming sufficiently large, and well-funded, to become "architectural in intention", rather than having their design left to the lesser class of "surveyors".
Georgian architecture was widely disseminated in the English colonies during the Georgian era. American buildings of the Georgian period were very often constructed of wood with clapboards; even columns were made of timber, framed up, and turned on an oversized lathe. At the start of the period the difficulties of obtaining and transporting brick or stone made them a common alternative only in the larger cities, or where they were obtainable locally. Dartmouth College, Harvard University and the College of William and Mary offer leading examples of Georgian architecture in the Americas.
Unlike the Baroque style that it replaced, which was mostly used for palaces and churches, and had little representation in the British colonies, simpler Georgian styles were widely used by the upper and middle classes. Perhaps the best remaining house is the pristine Hammond-Harwood House (1774) in Annapolis, Maryland, designed by the colonial architect William Buckland and modelled on the Villa Pisani at Montagnana, Italy as depicted in Andrea Palladio's "I quattro libri dell'architettura" ("The Four Books of Architecture").
After independence, in the former American colonies, Federal-style architecture represented the equivalent of Regency architecture, with which it had much in common.
In Australia, the Old Colonial Georgian residential and non-residential styles were developed during the early colonial period.
After about 1840, Georgian conventions were slowly abandoned as a number of revival styles, including Gothic Revival, that had originated in the Georgian period developed and contended with one another in Victorian architecture; in the case of Gothic, the revival became better researched and closer to its originals. Neoclassical architecture remained popular, and was the opponent of Gothic in the Battle of the Styles of the early Victorian period. In the United States the Federalist Style contained many elements of Georgian style, but incorporated revolutionary symbols.
In the early decades of the twentieth century when there was a growing nostalgia for its sense of order, the style was revived and adapted and in the United States came to be known as the Colonial Revival. In Canada the United Empire Loyalists embraced Georgian architecture as a sign of their fealty to Britain, and the Georgian style was dominant in the country for most of the first half of the 19th century. The Grange, for example, a manor built in Toronto, was built in 1817. In Montreal, English-born architect John Ostell worked on a significant number of remarkable constructions in the Georgian style such as the Old Montreal Custom House and the Grand séminaire de Montréal.
The revived Georgian style that emerged in Britain at the beginning of the 20th century is usually referred to as Neo-Georgian; the work of Edwin Lutyens and Vincent Harris includes some examples. Versions of the Neo-Georgian style were commonly used in Britain for certain types of urban architecture until the late 1950s, Bradshaw Gass & Hope's Police Headquarters in Salford of 1958 being a good example. The British town of Welwyn Garden City, established in the 1920s, is an example of "pastiche" or Neo-Georgian development of the early 20th century in Britain. In both the United States and Britain, the Georgian style is still employed by architects like Quinlan Terry, Julian Bicknell, Robert Adam Architects, and Fairfax and Sammons for private residences. A debased form in commercial housing developments, especially in the suburbs, is known in the UK as mock-Georgian.
|
https://en.wikipedia.org/wiki?curid=12924
|
Goshen, Indiana
Goshen is a city in and the county seat of Elkhart County, Indiana, United States. It is the smaller of the two principal cities of the Elkhart-Goshen Metropolitan Statistical Area, which in turn is part of the South Bend-Elkhart-Mishawaka Combined Statistical Area. It is located in the northern part of Indiana near the Michigan border, in a region known as Michiana. Goshen is located 10 miles southeast of Elkhart, 25 miles southeast of South Bend, 120 miles east of Chicago, and 150 miles north of Indianapolis. The population was 31,719 at the 2010 census.
The city is known as a major recreational vehicle and accessories manufacturing center, the home of Goshen College, a small Mennonite liberal arts college, and the Elkhart County 4-H Fair, one of the largest county fairs in the United States.
Before the arrival of white colonists, the land that is today Goshen, Indiana was populated by Native Americans, specifically the Miami, Peoria, and Potawatomi peoples. These peoples inhabited this land for thousands of years. In 1830, the US Congress passed the Indian Removal Act, requiring all indigenous people to relocate west of the Mississippi River.
Goshen was platted in 1831. It was named after the Land of Goshen. The initial settlers consisted entirely of old stock "Yankee" immigrants, who were descended from the English Puritans who settled New England in the 1600s. The New England Yankee population that founded towns such as Goshen considered themselves the "chosen people", identified with the Israelites of the Old Testament, and thought of North America as their Canaan. They founded a large number of towns and counties across what is known as the Northern Tier of the upper Midwest. It was in this context that Goshen was named.
The Yankee migration to Indiana was a result of several factors, one of which was the overpopulation of New England. The old stock Yankee population had large families, often bearing up to ten children in one household. Most people were expected to have their own piece of land to farm, and due to the massive and nonstop population boom, land in New England became scarce as every son claimed his own farmstead. As a result, there was not enough land for every family to have a self-sustaining farm, and Yankee settlers began leaving New England for the Midwestern United States.
They were aided in this effort by the construction and completion of the Erie Canal which made traveling to the region much easier, causing an additional surge in migrants coming from New England. Added to this was the end of the Black Hawk War, which made the region much safer to travel through and settle in for white settlers. However, the Black Hawk War also forced the native people who called Goshen home for so long to leave. The 1833 Treaty of Chicago ultimately set the conditions that would force the Potawatomi in particular to leave the Midwest, Goshen included, in 1837. This forced exile is known today as the Potawatomi Trail of Death.
These settlers were primarily members of the Congregational Church, though due to the Second Great Awakening, many of them had converted to Methodism, and some had become Baptists before coming to what is now Indiana. The Congregational Church has subsequently gone through many divisions, and some factions, including those in Goshen, are now known as the Church of Christ and the United Church of Christ. When the New Englanders arrived in what is now Elkhart County there was nothing but dense virgin forest and wild prairie. They laid out farms, constructed roads, erected government buildings and established post routes.
On Palm Sunday, April 11, 1965, a large outbreak of tornadoes struck the Midwest. The most famous pair of tornadoes devastated the Midway Trailer Park (now inside the city limits of Goshen), and the Sunnyside Housing Addition in Dunlap, Indiana, but a smaller F4 tornado also struck neighborhoods on the southeast side of Goshen on the same day. Statewide, 137 Hoosiers died in the storms—55 of them in Elkhart County. Days later, President Lyndon B. Johnson visited the Dunlap site.
The Goshen Historic District, added in 1983 to the National Register of Historic Places, is bounded by Pike, RR, Cottage, Plymouth, Main, Purl, the Canal, and Second Streets, with the Elkhart County Courthouse at its center.
In April 2006, Goshen was the site of an immigration march. Officials estimated that 2,000 to 3,000 people marched from Linway Plaza to the County Courthouse.
For much of its history, Goshen was a "sundown town", forbidding African Americans from living in, or entering, the town, often under threat of violence. In March 2015, the city acknowledged this part of its past, apologizing and saying that it no longer condones such behavior.
The Elkhart County Courthouse, Fort Wayne Street Bridge, Goshen Carnegie Public Library, Goshen Historic District, William N. Violett House, and Violett-Martin House and Gardens are listed on the National Register of Historic Places.
Goshen is located at . The Elkhart River winds its way through the city and through a dam on the south side making the Goshen Dam Pond. Rock Run Creek also runs through town. The city is divided east/west by Main Street and north/south by Lincoln Avenue.
According to the United States Census Bureau, the city has a total area of , of which is land and is water.
In February 2018, the Elkhart River flooded as a result of heavy rain and snow melt. The river rose to a record 13.2 feet, damaging more than 300 structures and prompting evacuations. City government has responded to the increase in severe weather such as flooding, hail, and heavy rains with measures including stormwater management, and "an initiative to grow the town’s tree canopy by 45%." Goshen completed 92 solar projects in 2019. Goshen outranked Phoenix, Sacramento, Los Angeles, San Francisco, and Denver with its 2019 production of 116 watts of solar power per capita.
As of the census of 2010, there were 31,719 people, 11,344 households, and 7,580 families residing in the city. The population density was . There were 12,631 housing units at an average density of . The racial makeup of the city was 78.2% White, 2.6% African American, 0.5% Native American, 1.2% Asian, 14.8% from other races, and 2.7% from two or more races. Hispanic or Latino of any race were 28.1% of the population.
There were 11,344 households of which 36.1% had children under the age of 18 living with them, 47.4% were married couples living together, 13.1% had a female householder with no husband present, 6.3% had a male householder with no wife present, and 33.2% were non-families. 27.4% of all households were made up of individuals and 13.2% had someone living alone who was 65 years of age or older. The average household size was 2.67 and the average family size was 3.23.
The median age in the city was 32.4 years. 27.4% of residents were under the age of 18; 11.3% were between the ages of 18 and 24; 26.1% were from 25 to 44; 20% were from 45 to 64; and 14.9% were 65 years of age or older. The gender makeup of the city was 48.9% male and 51.1% female.
As of the census of 2000, there were 29,383 people, 10,675 households, and 7,088 families residing in the city. The population density was 2,227.7 people per square mile (860.1/km²). There were 11,264 housing units at an average density of 854.0 per square mile (329.7/km²). The racial makeup of the city was 83.15% White, 1.53% Black or African American, 0.26% Native American, 1.10% Asian, 0.02% Pacific Islander, 12.00% from other races, and 1.94% from two or more races. 19.33% of the population were Hispanic or Latino of any race.
There were 10,675 households out of which 32.6% had children under the age of 18 living with them, 50.8% were married couples living together, 10.1% had a female householder with no husband present, and 33.6% were non-families. 27.5% of all households were made up of individuals and 12.5% had someone living alone who was 65 years of age or older. The average household size was 2.61 and the average family size was 3.14.
In the city, the population was spread out with 25.9% under the age of 18, 12.9% from 18 to 24, 30.0% from 25 to 44, 17.6% from 45 to 64, and 13.6% who were 65 years of age or older. The median age was 32 years. For every 100 females, there were 100.6 males. For every 100 females age 18 and over, there were 97.7 males.
The median income for a household in the city was $39,383, and the median income for a family was $46,877. Males had a median income of $32,159 versus $23,290 for females. The per capita income for the city was $18,899. About 6.0% of families and 9.3% of the population were below the poverty line, including 11.8% of those under age 18 and 5.3% of those age 65 or over.
Industry in Goshen centers around the automotive and Recreational Vehicle business. There are automotive component manufacturers like Benteler; firms that build custom bodies onto chassis like Supreme, Independent Protection and Showhauler Trucks. RV manufacturing companies include Dutchmen, Forest River and Keystone.
The government consists of a mayor, a clerk treasurer, a city council, and a youth advisor. The mayor and clerk are elected in citywide vote. The city council consists of seven members. Five are elected from individual districts. Two are elected at-large. The youth advisor position was added in 2016 and is elected by the students of Goshen High School.
Goshen Community Schools serves the portion of the city in Elkhart Township. This system consists of seven elementary schools, Goshen Middle School, and Goshen High School.
In 2012, U.S. News & World Report ranked Goshen High School as the 12th best high school in Indiana, as well as in the top 6% of high schools in the entire country.
Additionally, Goshen is served by Bethany Christian Schools, a private Christian school for grades 4-12.
Small parts of the city of Goshen are covered by several other school districts, including Fairfield Community Schools, Middlebury Community Schools, Concord Community Schools, and WaNee Community Schools.
Goshen College, located on the south side of town, has a current enrollment of approximately 800, with 40% being male, and 60% being female. Tuition and fees for the 2017–2018 year were $33,200.
The town has a free lending library, the Goshen Public Library.
Goshen Municipal Airport is a public use airport located about 3.5 miles southeast of downtown Goshen. The Goshen Board of Aviation Commissioners owns the airport.
The Interurban Trolley bus connects Goshen to the nearby city of Elkhart and the unincorporated town of Dunlap via its Concord and Elkhart-Goshen routes. The routes serve Elkhart's Amtrak station, allowing passengers to connect to the "Capitol Limited" and "Lake Shore Limited" trains. Riders can also transfer to the North Pointe and Bittersweet/Mishawaka routes. The former allows riders to connect to Elkhart's Greyhound bus station, while the latter connects riders to the city of Mishawaka and the town of Osceola. The Bittersweet/Mishawaka route also allows them to transfer to TRANSPO Route 9 to connect to destinations throughout the South Bend-Goshen metropolitan region and the South Shore Line's South Bend International Airport station.
Goshen has seven parks and several greenways and trails winding through the city, one of which runs along the old Mill Race, a hydraulic canal that was once used to power a hydroelectric power plant. Plans drawn up in 2005 call for the plant to be reopened and for redevelopment to begin along the canal.
The Pumpkinvine Nature Trail runs from Goshen to Middlebury and Shipshewana, along the former Pumpkin Vine Railroad. The trail starts northeast of Goshen at Abshire Park and is one of the recreational highlights of Goshen. Together with the Maple City Greenway and the Millrace trail, it provides many miles of easily accessible trails for walking, running, and biking.
The Elkhart County Fairgrounds are also located in the city, where in late July, the Elkhart County 4-H Fair is held. It is the largest county fair in Indiana and one of the largest 4-H County Fairs in the United States.
The Goshen Air Show is also an annual event that takes place at the Goshen Municipal Airport.
In 2007, Downtown Goshen, Inc., a public-private partnership formed from the merger of Face of the City and the Downtown Action Team, started a First Fridays program. Occurring year round, First Fridays happens on the first Friday of each month with stores open until 9, music and other entertainment, and other events occurring within Goshen's downtown district.
The south side Wal-Mart is rumored to be the first Wal-Mart in the United States to provide a covered stable for its frequent Amish customers. The Amish built the stable with lumber and other supplies donated by Wal-Mart.
"Lonesome Jim" (2005) which was written by former resident James Strouse, directed by Steve Buscemi and starred Liv Tyler and Casey Affleck, was shot in Goshen.
Goshen has two sister cities as designated by Sister Cities International.
|
https://en.wikipedia.org/wiki?curid=12926
|
Gallipoli
The Gallipoli peninsula ("Chersónisos tis Kallípolis") is located in the southern part of East Thrace, the European part of Turkey, with the Aegean Sea to the west and the Dardanelles strait to the east.
Gallipoli is the Italian form of the Greek name "Καλλίπολις" ("Kallípolis"), meaning "Beautiful City", the original name of the modern town of Gelibolu. In antiquity, the peninsula was known as the Thracian Chersonese ("Thrakiké Chersónesos").
The peninsula runs in a south-westerly direction into the Aegean Sea, between the Dardanelles (formerly known as the Hellespont), and the Gulf of Saros (formerly the bay of Melas). In antiquity, it was protected by the Long Wall, a defensive structure built across the narrowest part of the peninsula near the ancient city of Agora. The isthmus traversed by the wall was only 36 stadia in breadth (about 6.5 km), but the length of the peninsula from this wall to its southern extremity, Cape Mastusia, was 420 stadia (about 77.5 km).
In ancient times, the Gallipoli Peninsula was known as the Thracian Chersonesus (from Greek "χερσόνησος", "peninsula") to the Greeks and later the Romans. It was the location of several prominent towns, including Cardia, Pactya, Callipolis (Gallipoli), Alopeconnesus, Sestos, Madytos, and Elaeus. The peninsula was renowned for its wheat. It also benefited from its strategic importance on the main route between Europe and Asia, as well as from its control of the shipping route from Crimea. The city of Sestos was the main crossing-point on the Hellespont.
According to Herodotus, the Thracian tribe of Dolonci (or "barbarians" according to Cornelius Nepos) held possession of Chersonesus before the Greek colonization. Then, settlers from Ancient Greece, mainly of Ionian and Aeolian stock, founded about 12 cities on the peninsula in the 7th century BC. The Athenian statesman Miltiades the Elder founded a major Athenian colony there around 560 BC. He took authority over the entire peninsula, building up its defences against incursions from the mainland. It eventually passed to his nephew, the more famous Miltiades the Younger, around 524 BC. The peninsula was abandoned to the Persians in 493 BC after the outbreak of the Greco-Persian Wars (499–478 BC).
The Persians were eventually expelled, after which the peninsula was for a time ruled over by Athens, which enrolled it into the Delian League in 478 BC. The Athenians established a number of cleruchies on the Thracian Chersonese and sent an additional 1,000 settlers around 448 BC. Sparta gained control after the decisive battle of Aegospotami in 404 BC, but the peninsula subsequently reverted to the Athenians. In the 4th century BC, the Thracian Chersonese became the focus of a bitter territorial dispute between Athens and Macedon, whose king Philip II sought possession. It was eventually ceded to Philip in 338 BC.
After the death of Philip's son Alexander the Great in 323 BC, the Thracian Chersonese became the object of contention among Alexander's successors. Lysimachus established his capital Lysimachia here. In 278 BC, Celtic tribes from Galatia in Asia Minor settled in the area. In 196 BC, the Seleucid king Antiochus III seized the peninsula. This alarmed the Greeks and prompted them to seek the aid of the Romans, who conquered the Thracian Chersonese, which they gave to their ally Eumenes II of Pergamon in 188 BC. At the extinction of the Attalid dynasty in 133 BC it passed again to the Romans, who from 129 BC administered it in the Roman province of Asia. It was subsequently made a state-owned territory ("ager publicus") and during the reign of the emperor Augustus it was imperial property.
The Thracian Chersonese was part of the Eastern Roman Empire from its foundation in 330 AD. In 443 AD, Attila the Hun invaded the Gallipoli Peninsula during one of the last stages of his grand campaign that year. He captured both Callipolis and Sestus. Aside from a brief period from 1204 to 1235, when it was controlled by the Republic of Venice, the Byzantine Empire ruled the territory until 1356. During the night between 1 and 2 March 1354, a strong earthquake destroyed the city of Gallipoli and its city walls, weakening its defenses.
Within a month after the devastating 1354 earthquake the Ottomans besieged and captured the town of Gallipoli, making it the first Ottoman stronghold in Europe and the staging area for Ottoman expansion across the Balkans. The Savoyard Crusade recaptured Gallipoli for Byzantium in 1366, but the beleaguered Byzantines were forced to hand it back in September 1376. The Greeks living there were allowed to continue their everyday activities. In the 19th century, Gallipoli was a district (kaymakamlik) in the Vilayet of Adrianople, with about thirty thousand inhabitants comprising Greeks, Turks, Armenians and Jews.
Gallipoli became a major encampment for British and French forces in 1854 during the Crimean War, and the harbour was also a stopping-off point between the western Mediterranean and Istanbul (formerly Constantinople).
In March 1854 British and French engineers constructed an 11.5 km line of defence to protect the peninsula from a possible Russian attack and so secure control of the route to the Mediterranean Sea.
Gallipoli did not experience any more wars until the First Balkan War, when the 1913 Battle of Bulair and several minor skirmishes took place there.
A dispatch on 7 July 1913 reported that Ottoman troops treated Gallipoli's Greeks "with marked depravity" as they "destroyed, looted, and burned all the Greek villages near Gallipoli". Ottoman forces sacked and completely destroyed many villages and killed some Greeks. The cause of this savagery on the part of the Turks was their fear that, if Thrace were declared autonomous, the Greek population might be found numerically superior to the Muslims.
The Turkish Government, under the pretext that a village was within the firing line, ordered its evacuation within three hours. The residents abandoned everything they possessed, left their village and went to Gallipoli. Seven Greek villagers who stayed two minutes beyond the three-hour limit allowed for the evacuation were shot by the soldiers. After the end of the Balkan War the exiles were allowed to return, but as the Government allowed only the Turks to rebuild their houses and furnish them, the exiled Greeks were compelled to remain in Gallipoli.
During World War I (1914-1918), French, British and colonial forces (Australian, New Zealand, Newfoundland, Irish and Indian) fought the Gallipoli campaign (1915-1916) in and near the peninsula, seeking to secure a sea route to relieve their eastern ally, Russia. The Ottomans set up defensive fortifications along the peninsula and contained the invading forces.
In early 1915, attempting to seize a strategic advantage in World War I by capturing Istanbul (formerly Constantinople), the British authorised an attack on the peninsula by French, British and British Empire forces. The first Australian troops landed at ANZAC Cove early in the morning of 25 April 1915. After eight months of heavy fighting the last Allied soldiers withdrew by 9 January 1916.
The campaign, one of the greatest Ottoman victories during the war, is considered by historians as a major Allied failure. Turks regard it as a defining moment in their nation's history: a final surge in the defence of the motherland as the Ottoman Empire crumbled. The struggle formed the basis for the Turkish War of Independence and the founding of the Republic of Turkey eight years later under President Mustafa Kemal Atatürk, who first rose to prominence as a commander at Gallipoli.
The Ottoman Empire instituted the Gallipoli Star as a military decoration in 1915 and awarded it throughout the rest of World War I.
The campaign was the first major military action of Australia and New Zealand (or Anzacs) as independent dominions, and is often considered to mark the birth of national consciousness in those nations. The date of the landing, 25 April, is known as "Anzac Day". It remains the most significant commemoration of military casualties and "returned soldiers" in Australia and New Zealand.
On the Allied side, one of the promoters of the expedition was Britain's First Lord of the Admiralty, Winston Churchill, whose bullish optimism damaged a reputation that took years to recover. It was popularly believed in England that there were more British and French casualties and deaths than among their Anzac comrades; Australians and New Zealanders resented perceived British incompetence and the alleged British propensity to use them wastefully as cannon fodder. No serious attempt was made to understand the nature of the terrain or the logistical support required for success against the Turkish army. The wishful thinking of generals, not grounded in reality, doomed the campaign before it began.
Prior to the Allied landings in April 1915, the Ottoman Empire deported Greek residents from Gallipoli and surrounding region and from the islands in the sea of Marmara, to the interior where they were at the mercy of hostile Turks. The Greeks had little time to pack and the Ottoman authorities permitted them to take only some bedding and the rest was handed over to the Government. The Turks also plundered Greek houses and properties. A testimony of a deportee described how the deportees were forced onto crowded steamers, standing-room only; how, on disembarking, men of military age were removed (for forced labour in the labour battalions of the Ottoman army) and how the rest were "scattered… among the farms like ownerless cattle".
The Metropolitan of Gallipoli wrote on 17 July 1915 that the extermination of the Christian refugees was methodical. He also mentions that "The Turks, like beasts of prey, immediately plundered all the Christians' property and carried it off. The inhabitants and refugees of my district are entirely without shelter, awaiting to be sent no one knows where ...". Many Greeks died from hunger and there were frequent cases of rape among women and young girls, as well as their forced conversion to Islam.
Greek troops occupied Gallipoli on 4 August 1920 during the Greco-Turkish War of 1919–22, considered part of the Turkish War of Independence. After the Armistice of Mudros of 30 October 1918 it became a Greek prefecture centre as "Kallipolis". However, Greece was forced to withdraw from Eastern Thrace after the Armistice of Mudanya of October 1922. Gallipoli was briefly handed over to British troops on 20 October 1922, but finally returned to Turkish rule on 26 November 1922.
In 1920, after the defeat of the Russian White army of General Pyotr Wrangel, a significant number of émigré soldiers and their families evacuated to Gallipoli from the Crimean Peninsula. From there, many went to European countries, such as Yugoslavia, where they found refuge.
There are now many cemeteries and war memorials on the Gallipoli peninsula.
Between 1923 and 1926 Gallipoli became the centre of Gelibolu Province, comprising the districts of Gelibolu, Eceabat, Keşan and Şarköy. After the dissolution of the province, it became a district centre in Çanakkale Province.
|
https://en.wikipedia.org/wiki?curid=12929
|
Gram stain
Gram stain or Gram staining, also called Gram's method, is a method of staining used to distinguish and classify bacterial species into two large groups: gram-positive bacteria and gram-negative bacteria. The name comes from the Danish bacteriologist Hans Christian Gram, who developed the technique.
Gram staining differentiates bacteria by the chemical and physical properties of their cell walls. Gram-positive cells have a thick layer of peptidoglycan in the cell wall that retains the primary stain, crystal violet. Gram-negative cells have a thinner peptidoglycan layer that allows the crystal violet to wash out on addition of ethanol. They are stained pink or red by the counterstain, commonly safranin or fuchsine. Lugol's iodine solution is always added after addition of crystal violet to strengthen the bonds of the stain with the cell membrane. Gram staining is almost always the first step in the preliminary identification of a bacterial organism. While Gram staining is a valuable diagnostic tool in both clinical and research settings, not all bacteria can be definitively classified by this technique. This gives rise to gram-variable and gram-indeterminate groups.
The method is named after its inventor, the Danish scientist Hans Christian Gram (1853–1938), who developed the technique while working with Carl Friedländer in the morgue of the city hospital in Berlin in 1884. Gram devised his technique not for the purpose of distinguishing one type of bacterium from another but to make bacteria more visible in stained sections of lung tissue. He published his method in 1884, and included in his short report the observation that the typhus bacillus did not retain the stain.
Gram staining is a bacteriological laboratory technique used to differentiate bacterial species into two large groups (gram-positive and gram-negative) based on the physical properties of their cell walls. Gram staining is not used to classify archaea (formerly archaebacteria), since these microorganisms yield widely varying responses that do not follow their phylogenetic groups.
The Gram stain is not an infallible tool for diagnosis, identification, or phylogeny, and it is of extremely limited use in environmental microbiology. It is used mainly to make a preliminary morphologic identification or to establish that there are significant numbers of bacteria in a clinical specimen. It cannot identify bacteria to the species level, and for most medical conditions, it should not be used as the sole method of bacterial identification. In clinical microbiology laboratories, it is used in combination with other traditional and molecular techniques to identify bacteria. Some organisms are gram-variable (meaning they may stain either negative or positive); some are not stained with either dye used in the Gram technique and are not seen. In a modern environmental or molecular microbiology lab, most identification is done using genetic sequences and other molecular techniques, which are far more specific and informative than differential staining.
Gram staining has been suggested to be as effective a diagnostic tool as PCR in one primary research report regarding gonorrhea.
Gram stains are performed on body fluid or biopsy when infection is suspected. Gram stains yield results much more quickly than culturing, and are especially important when infection would make an important difference in the patient's treatment and prognosis; examples are cerebrospinal fluid for meningitis and synovial fluid for septic arthritis.
Gram-positive bacteria have a thick mesh-like cell wall made of peptidoglycan (50–90% of cell envelope), and as a result are stained purple by crystal violet, whereas gram-negative bacteria have a thinner layer (10% of cell envelope), so do not retain the purple stain and are counter-stained pink by safranin. There are four basic steps of the Gram stain: application of the primary stain (crystal violet), addition of iodine, decolorization with alcohol or acetone, and counterstaining with safranin or fuchsine. Each step is described below.
Crystal violet (CV) dissociates in aqueous solutions into CV+ and chloride (Cl−) ions. These ions penetrate the cell wall of both gram-positive and gram-negative cells. The CV+ ion interacts with negatively charged components of bacterial cells and stains the cells purple.
Iodide (I− or I3−) interacts with CV+ and forms large complexes of crystal violet and iodine (CV–I) within the inner and outer layers of the cell. Iodine is often referred to as a mordant, but is a trapping agent that prevents the removal of the CV–I complex and, therefore, colors the cell.
When a decolorizer such as alcohol or acetone is added, it interacts with the lipids of the cell membrane. A gram-negative cell loses its outer lipopolysaccharide membrane, and the inner peptidoglycan layer is left exposed. The CV–I complexes are washed from the gram-negative cell along with the outer membrane. In contrast, a gram-positive cell becomes dehydrated from an ethanol treatment. The large CV–I complexes become trapped within the gram-positive cell due to the multilayered nature of its peptidoglycan. The decolorization step is critical and must be timed correctly; the crystal violet stain is removed from both gram-positive and negative cells if the decolorizing agent is left on too long (a matter of seconds).
After decolorization, the gram-positive cell remains purple and the gram-negative cell loses its purple color. Counterstain, which is usually positively charged safranin or basic fuchsine, is applied last to give decolorized gram-negative bacteria a pink or red color. Both gram-positive bacteria and gram-negative bacteria pick up the counterstain. The counterstain, however, is unseen on gram-positive bacteria because of the darker crystal violet stain.
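The classification logic implied by these four steps can be made concrete in a short illustrative sketch. The Python snippet below is a toy model only: the parameter names and the 30-second over-decolorization threshold are invented for illustration and are not laboratory values; it simply traces how wall thickness and decolorization timing determine a cell's final colour.

```python
# Toy illustration only: the names and the 30-second over-decolorization cut-off
# are invented for this sketch and are not laboratory parameters.

def gram_stain(thick_peptidoglycan: bool, decolorize_seconds: float = 5.0) -> str:
    """Trace the four Gram stain steps for a single cell and return its final colour."""
    # Step 1: crystal violet stains every cell purple.
    # Step 2: iodine forms large CV-I complexes inside the cell.
    cv_iodine_retained = True
    # Step 3: alcohol/acetone strips lipid membranes; thin-walled (gram-negative)
    # cells lose the CV-I complexes, and over-decolorizing washes them out of all cells.
    if not thick_peptidoglycan or decolorize_seconds > 30:
        cv_iodine_retained = False
    # Step 4: the counterstain (safranin or fuchsine) is visible only where
    # crystal violet has been washed out.
    return "purple (gram-positive)" if cv_iodine_retained else "pink/red (gram-negative)"

print(gram_stain(thick_peptidoglycan=True))                          # purple
print(gram_stain(thick_peptidoglycan=False))                         # pink/red
print(gram_stain(thick_peptidoglycan=True, decolorize_seconds=60))   # over-decolorized
```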
Gram-positive bacteria generally have a single membrane ("monoderm") surrounded by a thick peptidoglycan.
This rule is followed by two phyla: Firmicutes (except for the classes Mollicutes and Negativicutes) and the Actinobacteria. In contrast, members of the Chloroflexi (green non-sulfur bacteria) are monoderms but possess a thin or absent (class Dehalococcoidetes) peptidoglycan and can stain negative, positive or indeterminate; members of the "Deinococcus–Thermus" group stain positive but are diderms with a thick peptidoglycan.
Historically, the gram-positive forms made up the phylum Firmicutes, a name now used for the largest group. It includes many well-known genera such as "Lactobacillus, Bacillus", "Listeria", "Staphylococcus", "Streptococcus", "Enterococcus", and "Clostridium". It has also been expanded to include the Mollicutes, bacteria such as "Mycoplasma and Thermoplasma" that lack cell walls and so cannot be Gram-stained, but are derived from such forms.
Some bacteria have cell walls which are particularly adept at retaining stains. These will appear positive by Gram stain even though they are not closely related to other gram-positive bacteria. These are called acid-fast bacteria, and can only be differentiated from other gram-positive bacteria by special staining procedures.
Gram-negative bacteria generally possess a thin layer of peptidoglycan between two membranes ("diderms"). Most bacterial phyla are gram-negative, including the cyanobacteria, green sulfur bacteria, and most Proteobacteria (exceptions being some members of the Rickettsiales and the insect-endosymbionts of the Enterobacteriales).
Some bacteria, after staining with the Gram stain, yield a gram-variable pattern: a mix of pink and purple cells are seen. In cultures of "Bacillus, Butyrivibrio", and "Clostridium", a decrease in peptidoglycan thickness during growth coincides with an increase in the number of cells that stain gram-negative. In addition, in all bacteria stained using the Gram stain, the age of the culture may influence the results of the stain.
Gram-indeterminate bacteria do not respond predictably to Gram staining and, therefore, cannot be determined as either gram-positive or gram-negative. Examples include many species of "Mycobacterium", including "Mycobacterium bovis", "Mycobacterium leprae" and "Mycobacterium tuberculosis", the latter two of which are the causative agents of leprosy and tuberculosis, respectively.
The term "Gram staining" is derived from the surname of Hans Christian Gram; the eponym (Gram) is therefore capitalized but not the common noun (stain) as is usual for scientific terms. The initial letters of "gram-positive" and "gram-negative", which are eponymous adjectives, can be either capital "G" or lowercase "g", depending on what style guide (if any) governs the document being written. Lowercase style is used by the US Centers for Disease Control and Prevention and other style regimens such as the AMA style. Dictionaries may use lowercase, uppercase, or both. Uppercase "Gram-positive" or "Gram-negative" usage is also common in many scientific journal articles and publications. When articles are submitted to journals, each journal may or may not apply house style to the postprint version. Preprint versions contain whichever style the author happened to use. Even style regimens that use lowercase for the adjectives "gram-positive" and "gram-negative" still typically use capital for "Gram stain".
|
https://en.wikipedia.org/wiki?curid=12935
|
Gram-positive bacteria
In bacteriology, Gram-positive bacteria are bacteria that give a positive result in the Gram stain test, which is traditionally used to quickly classify bacteria into two broad categories according to their cell wall.
Gram-positive bacteria take up the crystal violet stain used in the test, and then appear to be purple-coloured when seen through an optical microscope. This is because the thick peptidoglycan layer in the bacterial cell wall retains the stain after it is washed away from the rest of the sample, in the decolorization stage of the test.
Conversely, Gram-negative bacteria cannot retain the violet stain after the decolorization step; alcohol used in this stage degrades the outer membrane of gram-negative cells, making the cell wall more porous and incapable of retaining the crystal violet stain. Their peptidoglycan layer is much thinner and sandwiched between an inner cell membrane and a bacterial outer membrane, causing them to take up the counterstain (safranin or fuchsine) and appear red or pink.
Despite their thicker peptidoglycan layer, gram-positive bacteria are more receptive to certain cell wall targeting antibiotics than Gram-negative bacteria, due to the absence of the outer membrane.
In general, gram-positive bacteria share a characteristic cell envelope: a single cytoplasmic lipid membrane surrounded by a thick, multilayered wall of peptidoglycan, usually bearing teichoic acids.
Only some species have a capsule, usually consisting of polysaccharides. Also, only some species are flagellated, and when they do have flagella they have only two basal body rings to support them, whereas gram-negative bacteria have four. Both gram-positive and gram-negative bacteria commonly have a surface layer called an S-layer. In gram-positive bacteria, the S-layer is attached to the peptidoglycan layer; in gram-negative bacteria, the S-layer is attached directly to the outer membrane. Specific to gram-positive bacteria is the presence of teichoic acids in the cell wall. Some of these are lipoteichoic acids, which have a lipid component in the cell membrane that can assist in anchoring the peptidoglycan.
Along with cell shape, Gram staining is a rapid method used to differentiate bacterial species. Such staining, together with growth requirement and antibiotic susceptibility testing, and other macroscopic and physiologic tests, forms the full basis for classification and subdivision of the bacteria (e.g., see figure and pre-1990 versions of "Bergey's Manual").
Historically, the kingdom Monera was divided into four divisions based primarily on Gram staining: Firmicutes (positive in staining), Gracilicutes (negative in staining), Mollicutes (neutral in staining) and Mendocutes (variable in staining). Based on 16S ribosomal RNA phylogenetic studies by the late microbiologist Carl Woese and colleagues at the University of Illinois, the monophyly of the gram-positive bacteria was challenged, with major implications for the therapeutic and general study of these organisms. Based on molecular studies of the 16S sequences, Woese recognised twelve bacterial phyla. Two of these were gram-positive and were divided on the proportion of guanine and cytosine (G + C) in their DNA. The high G + C phylum was made up of the Actinobacteria and the low G + C phylum contained the Firmicutes. The Actinobacteria include the "Corynebacterium", "Mycobacterium", "Nocardia" and "Streptomyces" genera. The (low G + C) Firmicutes have a G + C content of 45–60%, which is still lower than that of the Actinobacteria.
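As a concrete illustration of the guanine-plus-cytosine proportion underlying this split, the short Python sketch below computes the G + C fraction of a DNA string; the example sequence and the 60% cut-off are hypothetical values chosen only to show the arithmetic, not taxonomic standards.

```python
# Minimal sketch of the G + C calculation behind the "high G + C" / "low G + C"
# division described above. The 60% cut-off and example fragment are hypothetical.

def gc_content(sequence: str) -> float:
    """Return the fraction of G and C bases in a DNA sequence."""
    seq = sequence.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

example = "ATGGCGCGGCCATTAGCGCGTACG"   # made-up fragment for illustration
fraction = gc_content(example)
group = "high G + C (Actinobacteria-like)" if fraction > 0.60 else "low G + C (Firmicutes-like)"
print(f"G + C content: {fraction:.1%} -> {group}")
```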
Although bacteria are traditionally divided into two main groups, gram-positive and gram-negative, based on their Gram stain retention property, this classification system is ambiguous as it refers to three distinct aspects (staining result, envelope organization, taxonomic group), which do not necessarily coalesce for some bacterial species. The gram-positive and gram-negative staining response is also not a reliable characteristic as these two kinds of bacteria do not form phylogenetically coherent groups. However, although Gram staining response is an empirical criterion, its basis lies in the marked differences in the ultrastructure and chemical composition of the bacterial cell wall, defined by the absence or presence of an outer lipid membrane.
All gram-positive bacteria are bounded by a single-unit lipid membrane, and, in general, they contain a thick layer (20–80 nm) of peptidoglycan responsible for retaining the Gram stain. A number of other bacteria—that are bounded by a single membrane, but stain gram-negative due to either lack of the peptidoglycan layer, as in the Mycoplasmas, or their inability to retain the Gram stain because of their cell wall composition—also show close relationship to the Gram-positive bacteria. For the bacterial cells bounded by a single cell membrane, the term "monoderm bacteria" or "monoderm prokaryotes" has been proposed.
In contrast to gram-positive bacteria, all archetypical gram-negative bacteria are bounded by a cytoplasmic membrane and an outer cell membrane; they contain only a thin layer of peptidoglycan (2–3 nm) between these membranes. The presence of inner and outer cell membranes defines a new compartment in these cells: the periplasmic space or the periplasmic compartment. These bacteria have been designated as "diderm bacteria." The distinction between the monoderm and diderm bacteria is supported by conserved signature indels in a number of important proteins (viz. DnaK, GroEL). Of these two structurally distinct groups of bacteria, monoderms are indicated to be ancestral. Based upon a number of observations including that the gram-positive bacteria are the major producers of antibiotics and that, in general, gram-negative bacteria are resistant to them, it has been proposed that the outer cell membrane in gram-negative bacteria (diderms) has evolved as a protective mechanism against antibiotic selection pressure. Some bacteria, such as "Deinococcus", which stain gram-positive due to the presence of a thick peptidoglycan layer and also possess an outer cell membrane are suggested as intermediates in the transition between monoderm (gram-positive) and diderm (gram-negative) bacteria. The diderm bacteria can also be further differentiated between simple diderms lacking lipopolysaccharide, the archetypical diderm bacteria where the outer cell membrane contains lipopolysaccharide, and the diderm bacteria where outer cell membrane is made up of mycolic acid.
In general, gram-positive bacteria are monoderms and have a single lipid bilayer whereas gram-negative bacteria are diderms and have two bilayers. Some taxa lack peptidoglycan (such as the domain Archaea, the class Mollicutes, some members of the Rickettsiales, and the insect-endosymbionts of the Enterobacteriales) and are gram-variable. This, however, does not always hold true. The "Deinococcus-Thermus" bacteria have gram-positive stains, although they are structurally similar to gram-negative bacteria with two layers. The Chloroflexi have a single layer, yet (with some exceptions) stain negative. Two related phyla to the Chloroflexi, the TM7 clade and the Ktedonobacteria, are also monoderms.
Some Firmicute species are not gram-positive. These belong to the class Mollicutes (alternatively considered a class of the phylum Tenericutes), which lack peptidoglycan (gram-indeterminate), and the class Negativicutes, which includes Selenomonas and stain gram-negative. Additionally, a number of bacterial taxa (viz. Negativicutes, Fusobacteria, Synergistetes, and Elusimicrobia) that are either part of the phylum Firmicutes or branch in its proximity are found to possess a diderm cell structure. However, a conserved signature indel (CSI) in the HSP60 (GroEL) protein distinguishes all traditional phyla of gram-negative bacteria (e.g., Proteobacteria, Aquificae, Chlamydiae, Bacteroidetes, Chlorobi, Cyanobacteria, Fibrobacteres, Verrucomicrobia, Planctomycetes, Spirochetes, Acidobacteria, etc.) from these other atypical diderm bacteria, as well as other phyla of monoderm bacteria (e.g., Actinobacteria, Firmicutes, Thermotogae, Chloroflexi, etc.). The presence of this CSI in all sequenced species of conventional LPS (lipopolysaccharide)-containing gram-negative bacterial phyla provides evidence that these phyla of bacteria form a monophyletic clade and that no loss of the outer membrane from any species from this group has occurred.
In the classical sense, six gram-positive genera are typically pathogenic in humans. Two of these, "Streptococcus" and "Staphylococcus", are cocci (sphere-shaped). The remaining organisms are bacilli (rod-shaped) and can be subdivided based on their ability to form spores. The non-spore formers are "Corynebacterium" and "Listeria" (a coccobacillus), whereas "Bacillus" and "Clostridium" produce spores. The spore-forming bacteria can again be divided based on their respiration: "Bacillus" is a facultative anaerobe, while "Clostridium" is an obligate anaerobe. Also, "Rathayibacter", "Leifsonia", and "Clavibacter" are three gram-positive genera that cause plant disease. Gram-positive bacteria are capable of causing serious and sometimes fatal infections in newborn infants.
Transformation is one of three processes for horizontal gene transfer, in which exogenous genetic material passes from a donor bacterium to a recipient bacterium, the other two processes being conjugation (transfer of genetic material between two bacterial cells in direct contact) and transduction (injection of donor bacterial DNA by a bacteriophage virus into a recipient host bacterium). In transformation, the genetic material passes through the intervening medium, and uptake is completely dependent on the recipient bacterium.
As of 2014 about 80 species of bacteria were known to be capable of transformation, about evenly divided between gram-positive and gram-negative bacteria; the number might be an overestimate since several of the reports are supported by single papers. Transformation among gram-positive bacteria has been studied in medically important species such as "Streptococcus pneumoniae", "Streptococcus mutans", "Staphylococcus aureus" and "Streptococcus sanguinis" and in gram-positive soil bacterium "Bacillus subtilis, Bacillus cereus".
The adjectives "Gram-positive" and "Gram-negative" derive from the surname of Hans Christian Gram; as eponymous adjectives, their initial letter can be either capital "G" or lower-case "g", depending on which style guide (e.g., that of the CDC), if any, governs the document being written. This is further explained at "Gram staining § Orthographic note".
|
https://en.wikipedia.org/wiki?curid=12936
|
Gram-negative bacteria
Gram-negative bacteria are bacteria that do not retain the crystal violet stain used in the Gram staining method of bacterial differentiation. They are characterized by their cell envelopes, which are composed of a thin peptidoglycan cell wall sandwiched between an inner cytoplasmic cell membrane and a bacterial outer membrane.
Gram-negative bacteria are found everywhere, in virtually all environments on Earth that support life. The gram-negative bacteria include the model organism "Escherichia coli", as well as many pathogenic bacteria, such as "Pseudomonas aeruginosa", "Chlamydia trachomatis", and "Yersinia pestis". They are an important medical challenge, as their outer membrane protects them from many antibiotics (including penicillin); detergents that would normally damage the peptidoglycans of the (inner) cell membrane; and lysozyme, an antimicrobial enzyme produced by animals that forms part of the innate immune system. Additionally, the outer leaflet of this membrane comprises a complex lipopolysaccharide (LPS) whose lipid A component can cause a toxic reaction when these bacteria are lysed by immune cells. This toxic reaction can include fever, an increased respiratory rate, and low blood pressure — a life-threatening condition known as septic shock.
Several classes of antibiotics have been designed to target gram-negative bacteria, including aminopenicillins, ureidopenicillins, cephalosporins, beta-lactam/beta-lactamase inhibitor combinations (e.g. piperacillin-tazobactam), folate antagonists, quinolones, and carbapenems. Many of these antibiotics also cover gram-positive organisms. The drugs that specifically target gram-negative organisms include aminoglycosides, monobactams (aztreonam) and ciprofloxacin.
Gram-negative bacteria display a characteristic cell envelope: an inner cytoplasmic membrane, a thin peptidoglycan layer, and an outer membrane whose outer leaflet contains lipopolysaccharide (LPS).
Along with cell shape, Gram staining is a rapid diagnostic tool, and was once used to group species at the subdivision level of Bacteria.
Historically, the kingdom Monera was divided into four divisions based on Gram staining: Firmacutes (+), Gracillicutes (−), Mollicutes (0) and Mendocutes (var.).
Since 1987, the monophyly of the gram-negative bacteria has been disproven with molecular studies. However some authors, such as Cavalier-Smith still treat them as a monophyletic taxon (though not a clade; his definition of monophyly requires a single common ancestor but does not require holophyly, the property that all descendants be encompassed by the taxon) and refer to the group as a subkingdom "Negibacteria".
Bacteria are traditionally classified based on their Gram staining response into the gram-positive (or monoderm, "one membrane") and gram-negative (diderm, "two membranes") groups. It was traditionally thought that the groups represent lineages, i.e. the extra membrane only evolved once, such that gram-negative bacteria are more closely related to one another than to any gram-positive bacteria. While this is often true, the classification system breaks down in some cases, with lineage groupings not matching the staining result. Thus, Gram staining cannot be reliably used to assess familial relationships of bacteria. Nevertheless, staining often gives reliable information about the composition of the cell membrane, distinguishing between the presence or absence of an outer lipid membrane.
Of these two structurally distinct groups of prokaryotic organisms, monoderm prokaryotes are thought to be ancestral. Based upon a number of different observations including that the gram-positive bacteria are the major reactors to antibiotics and that gram-negative bacteria are, in general, resistant to them, it has been proposed that the outer cell membrane in gram-negative bacteria (diderms) evolved as a protective mechanism against antibiotic selection pressure. Some bacteria such as "Deinococcus", which stain gram-positive due to the presence of a thick peptidoglycan layer, but also possess an outer cell membrane are suggested as intermediates in the transition between monoderm (gram-positive) and diderm (gram-negative) bacteria. The diderm bacteria can also be further differentiated between simple diderms lacking lipopolysaccharide (LPS); the archetypical diderm bacteria, in which the outer cell membrane contains lipopolysaccharide; and the diderm bacteria, in which the outer cell membrane is made up of mycolic acid (e. g. "Mycobacterium").
The conventional LPS-"diderm" group of gram-negative bacteria (e.g., Proteobacteria, Aquificae, Chlamydiae, Bacteroidetes, Chlorobi, Cyanobacteria, Fibrobacteres, Verrucomicrobia, Planctomycetes, Spirochetes, Acidobacteria; "Hydrobacteria") is uniquely identified by a few conserved signature indels (CSIs) in the HSP60 (GroEL) protein. In addition, a number of bacterial taxa (including Negativicutes, Fusobacteria, Synergistetes, and Elusimicrobia) that are either part of the phylum Firmicutes (a monoderm group) or branch in its proximity are also found to possess a diderm cell structure. They lack the GroEL signature. The presence of this CSI in all sequenced species of conventional lipopolysaccharide-containing gram-negative bacterial phyla provides evidence that these phyla of bacteria form a monophyletic clade and that no loss of the outer membrane from any species from this group has occurred.
The proteobacteria are a major phylum of gram-negative bacteria, including "Escherichia coli" ("E. coli"), "Salmonella", "Shigella", and other Enterobacteriaceae, "Pseudomonas", "Moraxella", "Helicobacter", "Stenotrophomonas", "Bdellovibrio", acetic acid bacteria, "Legionella" etc. Other notable groups of gram-negative bacteria include the cyanobacteria, spirochaetes, green sulfur, and green non-sulfur bacteria.
Medically relevant gram-negative cocci include the four types that cause a sexually transmitted disease ("Neisseria gonorrhoeae"), a meningitis ("Neisseria meningitidis"), and respiratory symptoms ("Moraxella catarrhalis", "Haemophilus influenzae").
Medically relevant gram-negative bacilli include a multitude of species. Some of them cause primarily respiratory problems ("Klebsiella pneumoniae", "Legionella pneumophila", "Pseudomonas aeruginosa"), primarily urinary problems ("Escherichia coli", "Proteus mirabilis", "Enterobacter cloacae", "Serratia marcescens"), and primarily gastrointestinal problems ("Helicobacter pylori", "Salmonella enteritidis", "Salmonella typhi").
Gram-negative bacteria associated with hospital-acquired infections include "Acinetobacter baumannii", which causes bacteremia, secondary meningitis, and ventilator-associated pneumonia in hospital intensive-care units.
Transformation is one of three processes for horizontal gene transfer, in which exogenous genetic material passes from bacterium to another, the other two being conjugation (transfer of genetic material between two bacterial cells in direct contact) and transduction (injection of foreign DNA by a bacteriophage virus into the host bacterium). In transformation, the genetic material passes through the intervening medium, and uptake is completely dependent on the recipient bacterium.
As of 2014 about 80 species of bacteria were known to be capable of transformation, about evenly divided between gram-positive and gram-negative bacteria; the number might be an overestimate since several of the reports are supported by single papers. Transformation has been studied in medically important gram-negative bacteria species such as "Helicobacter pylori", "Legionella pneumophila", "Neisseria meningitidis", "Neisseria gonorrhoeae", "Haemophilus influenzae" and "Vibrio cholerae". It has also been studied in gram-negative species found in soil such as "Pseudomonas stutzeri", "Acinetobacter baylyi", and gram-negative plant pathogens such as "Ralstonia solanacearum" and "Xylella fastidiosa".
One of the several unique characteristics of gram-negative bacteria is the structure of the bacterial outer membrane. The outer leaflet of this membrane comprises a complex lipopolysaccharide (LPS) whose lipid portion acts as an endotoxin. If gram-negative bacteria enter the circulatory system, the LPS can cause a toxic reaction. This results in fever, an increased respiratory rate, and low blood pressure. This may lead to life-threatening septic shock.
The outer membrane protects the bacteria from several antibiotics, dyes, and detergents that would normally damage either the inner membrane or the cell wall (made of peptidoglycan). The outer membrane provides these bacteria with resistance to lysozyme and penicillin. The periplasmic space (the space between the two cell membranes) also contains enzymes which break down or modify antibiotics. Drugs commonly used to treat gram-negative infections include the amino, carboxy and ureido penicillins (ampicillin, amoxicillin, piperacillin, ticarcillin); these drugs may be combined with beta-lactamase inhibitors to combat the beta-lactamases present in the periplasmic space that can digest them. Other classes of drugs with gram-negative spectrum include cephalosporins, monobactams (aztreonam), aminoglycosides, quinolones, macrolides, chloramphenicol, folate antagonists, and carbapenems.
The pathogenic capability of gram-negative bacteria is often associated with certain components of their membrane, in particular, the LPS. In humans, the presence of LPS triggers an innate immune response, activating the immune system and producing cytokines (hormonal regulators). Inflammation is a common reaction to cytokine production, which can also produce host toxicity. The innate immune response to LPS, however, is not synonymous with pathogenicity, or the ability to cause disease.
The adjectives "Gram-positive" and "Gram-negative" derive from the surname of Hans Christian Gram, a Danish bacteriologist; as eponymous adjectives, their initial letter can be either capital "G" or lower-case "g", depending on which style guide (e.g., that of the CDC), if any, governs the document being written. This is further explained at "Gram staining § Orthographic note".
|
https://en.wikipedia.org/wiki?curid=12937
|
Greyhound
The Greyhound is a breed of dog, a sighthound which has been bred for coursing game and greyhound racing. It is also referred to as an English Greyhound. Since the rise in large-scale adoption of retired racing Greyhounds, the breed has seen a resurgence in popularity as a family pet.
According to Merriam-Webster, a Greyhound is "any of a breed of tall slender graceful smooth-coated dogs characterized by swiftness and keen sight", as well as "any of several related dogs", such as the Italian Greyhound.
It is a gentle and intelligent breed whose combination of long, powerful legs, deep chest, flexible spine and slim build allows it to reach average race speeds exceeding . The Greyhound can reach a full speed of within , or six strides from the boxes, traveling at almost for the first of a race.
Males are usually tall at the withers, and weigh on average . Females tend to be smaller, with shoulder heights ranging from and weights from , although weights can be above and below these average weights. Greyhounds have very short fur, which is easy to maintain. There are approximately thirty recognized color forms, of which variations of white, brindle, fawn, black, red and blue (gray) can appear uniquely or in combination. Greyhounds are dolichocephalic, with a skull which is relatively long in comparison to its breadth, and an elongated muzzle.
Greyhounds can be aloof and indifferent to strangers, but are affectionate with those they come to know. They are generally very docile, lazy, easy-going, and calm.
Greyhounds wear muzzles during racing, which can lead some to believe they are aggressive dogs, but this is not true. Muzzles are worn to prevent injuries resulting from dogs nipping one another during or immediately after a race, when the 'hare' has disappeared out of sight and the dogs are no longer racing but remain excited.
Contrary to popular belief, adult Greyhounds do not need extended periods of daily exercise, as they are bred for sprinting rather than endurance. Greyhound puppies that have not been taught how to utilize their energy, however, can be hyperactive and destructive if not given an outlet, and therefore require more experienced handlers.
Greyhound owners consider Greyhounds wonderful pets. They are very loving, and enjoy the company of their humans and other dogs. Whether a Greyhound will enjoy the company of other small animals, such as cats, depends on the individual dog's personality. Greyhounds will typically chase small animals; those lacking a high 'prey drive' will be able to coexist happily with toy dog breeds and/or cats. Many owners describe their Greyhounds as "45-mile-per-hour couch potatoes".
Greyhounds live most happily as pets in quiet environments. They do well in families with children, as long as the children are taught to treat the dog properly with politeness and appropriate respect. Greyhounds have a sensitive nature, and gentle commands work best as training methods.
Occasionally, a Greyhound may bark; however, Greyhounds are generally not barkers, which is beneficial in suburban environments, and are usually as friendly to strangers as they are with their own families.
A very common misconception regarding Greyhounds is that they are hyperactive. This is usually not the case with retired racing Greyhounds. Greyhounds can live comfortably as apartment dogs, as they do not require much space and sleep almost 18 hours per day. Due to their calm temperament, Greyhounds can make better "apartment dogs" than smaller, more active breeds.
Many Greyhound adoption groups recommend that owners keep their Greyhounds on a leash whenever outdoors, except in fully enclosed areas. This is due to their prey drive, their speed, and the assertion that Greyhounds have no road sense. In some jurisdictions, it is illegal for Greyhounds to be allowed off-leash even in off-leash dog parks. Due to their size and strength, adoption groups recommend that fences be between 4 and 6 feet tall, to prevent Greyhounds from jumping over them. As with most rehomed breeds, Greyhounds that are adopted after racing tend to need time to adjust to their new lives with a human family. Many guides and books have been published to aid Greyhound owners in helping their pet get comfortable in their new home.
The original primary use of Greyhounds, both in the British Isles and on the Continent of Europe, was in the coursing of deer for meat and sport; later, specifically in Britain, they specialized in competition hare coursing. Some Greyhounds are still used for coursing, although artificial lure sports like lure coursing and racing are far more common and popular. Many leading 300- to 550-yard sprinters have bloodlines traceable back through Irish sires, within a few generations of racers that won events such as the Irish Coursing Derby or the Irish Cup.
Until the early twentieth century, Greyhounds were principally bred and trained for hunting and coursing. During the 1920s, modern greyhound racing was introduced into the United States, England (1926), Northern Ireland (1927), Scotland (1927) and the Republic of Ireland (1927). Australia also has a significant racing culture.
In the United States, aside from professional racing, many Greyhounds enjoy success on the amateur race track. Organizations like the Large Gazehound Racing Association (LGRA) and the National Oval Track Racing Association (NOTRA) provide opportunities for Greyhounds to compete.
Historically, the Greyhound has, since its first appearance as a hunting type and breed, enjoyed a specific degree of fame and definition in Western literature, heraldry and art as the most elegant or noble companion and hunter of the canine world. In modern times, the professional racing industry, with its large numbers of track-bred greyhounds, as well as international adoption programs aimed at re-homing dogs has redefined the breed as a sporting dog that will supply friendly companionship in its retirement. This has been prevalent in recent years due to track closures in the United States. Outside the racing industry and coursing community, the Kennel Clubs' registered breed still enjoys a modest following as a show dog and pet.
Greyhounds are typically a healthy and long-lived breed, and hereditary illness is rare. Some Greyhounds have been known to develop esophageal achalasia, gastric dilatation volvulus (also known as bloat), and osteosarcoma. If exposed to "E. coli", they may develop Alabama rot. Because the Greyhound's lean physique makes it ill-suited to sleeping on hard surfaces, owners of both racing and companion Greyhounds generally provide soft bedding; without bedding, Greyhounds are prone to develop painful skin sores. The average lifespan of a Greyhound is 10 to 14 years.
Due to the Greyhound's unique physiology and anatomy, a veterinarian who understands the issues relevant to the breed is generally needed when the dogs need treatment, particularly when anesthesia is required. Greyhounds cannot metabolize barbiturate-based anesthesia in the same way that other breeds can because their livers have lower amounts of oxidative enzymes. Greyhounds demonstrate unusual blood chemistry, which can be misread by veterinarians not familiar with the breed and can result in an incorrect diagnosis.
Greyhounds are very sensitive to insecticides. Many vets do not recommend the use of flea collars or flea spray on Greyhounds if the product is pyrethrin-based. Products like Advantage, Frontline, Lufenuron, and Amitraz are safe for use on Greyhounds, however, and are very effective in controlling fleas and ticks.
Greyhounds have higher levels of red blood cells than other breeds. Since red blood cells carry oxygen to the muscles, this higher level allows the hound to move larger quantities of oxygen faster from the lungs to the muscles. Conversely, Greyhounds have lower levels of platelets than other breeds. Veterinary blood services often use Greyhounds as universal blood donors.
Greyhounds do not have undercoats and thus are less likely to trigger dog allergies in humans (they are sometimes incorrectly referred to as "hypoallergenic"). The lack of an undercoat, coupled with a general lack of body fat, also makes Greyhounds more susceptible to extreme temperatures (both hot and cold); because of this, they must be housed inside. Some Greyhounds are susceptible to corns on their paw pads; a variety of methods are used to treat them.
The key to the speed of a Greyhound can be found in its light but muscular build, large heart, highest percentage of fast twitch muscle of any breed, double suspension gallop, and extreme flexibility of its spine. "Double suspension rotary gallop" describes the fastest running gait of the Greyhound in which all four feet are free from the ground in two phases, contracted and extended, during each full stride.
The ancient skeletal remains of a dog identified as being of the greyhound/saluki form were excavated at Tell Brak in modern Syria and dated to approximately 4,000 years before present.
While the Greyhound is similar in appearance to the Saluki or Sloughi, DNA sequencing indicates that it is more closely related to herding dogs. This suggests that Greyhounds are either progenitors to or descendants of herding types. Historical literature by Arrian on the vertragus (from the Latin "vertragus", a word of Celtic origin), the first recorded sighthound in Europe and a possible antecedent of the Greyhound, suggested that its origin lies with the Celts from Eastern Europe or Eurasia. A systematic archaeozoological survey of the British Isles published in 1974 ruled out the existence of a true greyhound type in Britain prior to the Roman occupation, a finding confirmed in 2000. Written evidence from the early period of the Roman occupation, the Vindolanda tablets (No. 594), demonstrates that the occupying troops from Continental Europe either had the vertragus with them in the North of England or certainly knew of it and its hunting use.
All modern pedigree Greyhounds derive from the Greyhound stock recorded and registered first in private studbooks in the 18th century, then in public studbooks in the 19th century, which ultimately were registered with coursing, racing, and kennel club authorities of the United Kingdom. Historically, these sighthounds were used primarily for hunting in the open where their pursuit speed and keen eyesight were essential.
The name "Greyhound" is generally believed to come from the Old English "grighund". "Hund" is the antecedent of the modern "hound", but the meaning of "grig" is undetermined, other than in reference to dogs in Old English and Old Norse. Its origin does not appear to have any common root with the modern word "grey" for color, and indeed the Greyhound is seen with a wide variety of coat colors. The lighter colors, patch-like markings and white appeared in the breed that was once ordinarily grey in color. The Greyhound is the only dog mentioned by name in the Bible; many versions, including the King James version, name the Greyhound as one of the "four things stately" in the Proverbs. However, some newer biblical translations, including The New International Version, have changed this to "strutting rooster", which appears to be an alternative translation of the Hebrew term "mothen zarzir". However, the Douay–Rheims Bible translation from the late 4th-century Latin Vulgate into English translates this term as "a cock."
According to Pokorny the English name "Greyhound" does not mean "grey dog/hound", but simply "fair dog". Subsequent words have been derived from the Proto-Indo-European root *g'her- "shine, twinkle": English "grey", Old High German "gris" "grey, old," Old Icelandic "griss" "piglet, pig," Old Icelandic "gryja" "to dawn," "gryjandi" "morning twilight," Old Irish "grian" "sun," Old Church Slavonic "zorja" "morning twilight, brightness." The common sense of these words is "to shine; bright."
In 1928, the first winner of Best in Show at Crufts was breeder/owner Mr. H. Whitley's Greyhound "Primley Sceptre" (No. 584, pp. 19 & 121).
A group of greyhounds is called a "leash," or sometimes a "brace."
|
https://en.wikipedia.org/wiki?curid=12938
|
Geometric algebra
The geometric algebra (GA) of a vector space is an algebra over a field, noted for its multiplication operation called the geometric product on a space of elements called multivectors, which contains both the scalars F and the vector space V. Mathematically, a geometric algebra may be defined as the Clifford algebra of a vector space with a quadratic form. Clifford's contribution was to define a new product, the geometric product, that united the Grassmann and Hamilton algebras into a single structure. Adding the dual of the Grassmann exterior product (the "meet") allows the use of the Grassmann–Cayley algebra, and a conformal version of the latter together with a conformal Clifford algebra yields a conformal geometric algebra (CGA) providing a framework for classical geometries. In practice, these and several derived operations allow a correspondence of elements, subspaces and operations of the algebra with geometric interpretations.
The scalars and vectors have their usual interpretation, and make up distinct subspaces of a GA. Bivectors provide a more natural representation of pseudovector quantities in vector algebra such as oriented area, oriented angle of rotation, torque, angular momentum, electromagnetic field and the Poynting vector. A trivector can represent an oriented volume, and so on. An element called a blade may be used to represent a subspace of V and orthogonal projections onto that subspace. Rotations and reflections are represented as elements. Unlike vector algebra, a GA naturally accommodates any number of dimensions and any quadratic form such as in relativity.
Examples of geometric algebras applied in physics include the spacetime algebra (and the less common algebra of physical space) and the conformal geometric algebra. Geometric calculus, an extension of GA that incorporates differentiation and integration, can be used to formulate other theories such as complex analysis, differential geometry, e.g. by using the Clifford algebra instead of differential forms. Geometric algebra has been advocated, most notably by David Hestenes and Chris Doran, as the preferred mathematical framework for physics. Proponents claim that it provides compact and intuitive descriptions in many areas including classical and quantum mechanics, electromagnetic theory and relativity. GA has also found use as a computational tool in computer graphics and robotics.
The geometric product was first briefly mentioned by Hermann Grassmann, who was chiefly interested in developing the closely related exterior algebra. In 1878, William Kingdon Clifford greatly expanded on Grassmann's work to form what are now usually called Clifford algebras in his honor (although Clifford himself chose to call them "geometric algebras"). For several decades, geometric algebras went somewhat ignored, greatly eclipsed by the vector calculus then newly developed to describe electromagnetism. The term "geometric algebra" was repopularized in the 1960s by Hestenes, who advocated its importance to relativistic physics.
There are a number of different ways to define a geometric algebra. Hestenes's original approach was axiomatic, "full of geometric significance" and equivalent to the universal Clifford algebra.
Given a finite-dimensional quadratic space formula_2 over a field formula_1 with a symmetric bilinear form (the "inner product", e.g. the Euclidean or Lorentzian metric) formula_6, the geometric algebra for this quadratic space is the Clifford algebra formula_7. As usual in this domain, for the remainder of this article, only the real case, formula_8, will be considered. The notation formula_9 (respectively formula_10) will be used to denote a geometric algebra for which the bilinear form formula_11 has the signature formula_12 (respectively formula_13).
The essential product in the algebra is called the "geometric product", and the product in the contained exterior algebra is called the "exterior product" (frequently called the "outer product" and less often the "wedge product"). It is standard to denote these respectively by juxtaposition (i.e., suppressing any explicit multiplication symbol) and the symbol formula_14. The above definition of the geometric algebra is abstract, so we summarize the properties of the geometric product by the following set of axioms. The geometric product has the following properties, for formula_15:
The exterior product has the same properties, except that the last property above is replaced by formula_25 for formula_26.
Note that in the last property above, the real number formula_27 need not be nonnegative if formula_11 is not positive-definite. An important property of the geometric product is the existence of elements having a multiplicative inverse. For a vector formula_23, if formula_30 then formula_31 exists and is equal to formula_32. A nonzero element of the algebra does not necessarily have a multiplicative inverse. For example, if formula_33 is a vector in formula_2 such that formula_35, the element formula_36 is both a nontrivial idempotent element and a nonzero zero divisor, and thus has no inverse.
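Restating the inverse just described in conventional symbols (a standard identity, written out here because the article's displayed formulas are replaced by placeholders):

```latex
a^{-1} \;=\; \frac{a}{a^{2}}, \qquad a\,a^{-1} \;=\; \frac{a\,a}{a^{2}} \;=\; 1,
\qquad\text{provided } a^{2} \neq 0 .
```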
It is usual to identify formula_37 and formula_2 with their images under the natural embeddings formula_39 and formula_40. In this article, this identification is assumed. Throughout, the terms "scalar" and "vector" refer to elements of formula_37 and formula_2 respectively (and of their images under this embedding).
The geometric product of any two vectors formula_23 and formula_44 may be written as the sum of a symmetric product and an antisymmetric product:
Thus we can define the "inner product" of vectors as
so that the symmetric product can be written as
Conversely, formula_11 is completely determined by the algebra. The antisymmetric part is the exterior product of the two vectors, the product of the contained exterior algebra:
Then by simple addition:
The inner and exterior products are associated with familiar concepts from standard vector algebra. Geometrically, formula_23 and formula_44 are parallel if their geometric product is equal to their inner product, whereas formula_23 and formula_44 are perpendicular if their geometric product is equal to their exterior product. In a geometric algebra for which the square of any nonzero vector is positive, the inner product of two vectors can be identified with the dot product of standard vector algebra. The exterior product of two vectors can be identified with the signed area enclosed by a parallelogram the sides of which are the vectors. The cross product of two vectors in formula_57 dimensions with positive-definite quadratic form is closely related to their exterior product.
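Written out with conventional symbols (a standard identity, supplied here in place of the article's placeholder formulas), the decomposition above reads:

```latex
a \cdot b = \tfrac{1}{2}\,(ab + ba), \qquad
a \wedge b = \tfrac{1}{2}\,(ab - ba), \qquad
ab = a \cdot b + a \wedge b .
```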
Most instances of geometric algebras of interest have a nondegenerate quadratic form. If the quadratic form is fully degenerate, the inner product of any two vectors is always zero, and the geometric algebra is then simply an exterior algebra. Unless otherwise stated, this article will treat only nondegenerate geometric algebras.
The exterior product is naturally extended as an associative bilinear binary operator between any two elements of the algebra, satisfying the identities
where the sum is over all permutations of the indices, with formula_59 the sign of the permutation, and formula_60 are vectors (not general elements of the algebra). Since every element of the algebra can be expressed as the sum of products of this form, this defines the exterior product for every pair of elements of the algebra. It follows from the definition that the exterior product forms an alternating algebra.
A multivector that is the exterior product of formula_61 linearly independent vectors is called a "blade", and is said to be of grade formula_61. A multivector that is the sum of blades of grade formula_61 is called a (homogeneous) multivector of grade formula_61. From the axioms, with closure, every multivector of the geometric algebra is a sum of blades.
Consider a set of formula_61 linearly independent vectors formula_66 spanning an formula_61-dimensional subspace of the vector space. With these, we can define a real symmetric matrix (in the same way as a Gramian matrix)
By the spectral theorem, formula_69 can be diagonalized to a diagonal matrix formula_70 by an orthogonal matrix formula_71 via
Define a new set of vectors formula_73, known as orthogonal basis vectors, to be those transformed by the orthogonal matrix:
Since orthogonal transformations preserve inner products, it follows that formula_75 and thus the formula_73 are perpendicular. In other words, the geometric product of two distinct vectors formula_77 is completely specified by their exterior product, or more generally
Therefore, every blade of grade formula_61 can be written as a geometric product of formula_61 vectors. More generally, if a degenerate geometric algebra is allowed, then the orthogonal matrix is replaced by a block matrix that is orthogonal in the nondegenerate block, and the diagonal matrix has zero-valued entries along the degenerate dimensions. If the new vectors of the nondegenerate subspace are normalized according to
then these normalized vectors must square to formula_82 or formula_83. By Sylvester's law of inertia, the total number of formula_82s and the total number of formula_83s along the diagonal matrix is invariant. By extension, the total number formula_86 of these vectors that square to formula_82 and the total number formula_88 that square to formula_83 is invariant. (The total number of basis vectors that square to zero is also invariant, and may be nonzero if the degenerate case is allowed.) We denote this algebra formula_90. For example, formula_91 models formula_57-dimensional Euclidean space, formula_93 relativistic spacetime and formula_94 a conformal geometric algebra of a formula_57-dimensional space.
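A minimal numerical sketch of the orthogonalization step just described: form the Gram matrix of a spanning set, diagonalize it with an orthogonal matrix, and read off the signature from the signs of the squared lengths of the new vectors. The function and variable names (`orthogonalize`, `metric`) are illustrative, not from the article, and numpy is assumed.

```python
import numpy as np

def orthogonalize(vectors, metric):
    """Given rows `vectors` (k x n) and a symmetric metric (n x n),
    return vectors e_i that are mutually orthogonal w.r.t. the metric,
    together with their squared lengths (whose signs give the signature)."""
    V = np.asarray(vectors, dtype=float)
    A = V @ metric @ V.T            # Gram-like matrix A_ij = g(b_i, b_j)
    eigvals, O = np.linalg.eigh(A)  # A = O diag(eigvals) O^T, with O orthogonal
    E = O.T @ V                     # new vectors e_i = sum_j O_ji b_j
    return E, eigvals

# Example: two vectors spanning a plane in 3D Euclidean space
metric = np.eye(3)
E, squares = orthogonalize([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]], metric)
print(np.round(E @ metric @ E.T, 10))  # diagonal matrix: the e_i are orthogonal
print(squares)                          # both positive -> signature (2, 0)
```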
The set of all possible products of formula_96 orthogonal basis vectors with indices in increasing order, including formula_18 as the empty product, forms a basis for the entire geometric algebra (an analogue of the PBW theorem). For example, the following is a basis for the geometric algebra formula_98:
A basis formed this way is called a canonical basis for the geometric algebra, and any other orthogonal basis for formula_2 will produce another canonical basis. Each canonical basis consists of formula_101 elements. Every multivector of the geometric algebra can be expressed as a linear combination of the canonical basis elements. If the canonical basis elements are formula_102 with formula_103 being an index set, then the geometric product of any two multivectors is
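As a concrete illustration of the counting just described, a canonical basis can be enumerated by listing all index subsets of the orthogonal basis vectors in increasing order. This is only a sketch of the bookkeeping; the labels e1, e2, ... are illustrative.

```python
from itertools import combinations

def canonical_basis(n):
    """Labels of the 2**n canonical basis blades of an n-dimensional GA:
    the empty product (the scalar 1) and all products of distinct basis
    vectors with indices in increasing order."""
    labels = []
    for k in range(n + 1):
        for idx in combinations(range(1, n + 1), k):
            labels.append("1" if not idx else "".join(f"e{i}" for i in idx))
    return labels

print(canonical_basis(3))
# ['1', 'e1', 'e2', 'e3', 'e1e2', 'e1e3', 'e2e3', 'e1e2e3']  -> 2**3 = 8 elements
```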
The terminology "formula_105-vector" is often encountered to describe multivectors containing elements of only one grade. In higher dimensional space, some such multivectors are not blades (cannot be factored into the exterior product of formula_105 vectors). By way of example, formula_107 in formula_108 cannot be factored; typically, however, such elements of the algebra do not yield to geometric interpretation as objects, although they may represent geometric quantities such as rotations. Only formula_109 and formula_96-vectors are always blades in formula_96-space.
Using an orthogonal basis, a graded vector space structure can be established. Elements of the geometric algebra that are scalar multiples of formula_18 are grade-formula_113 blades and are called "scalars". Multivectors that are in the span of formula_114 are grade-formula_18 blades and are the ordinary vectors. Multivectors in the span of formula_116 are grade-formula_117 blades and are the bivectors. This terminology continues through to the last grade of formula_96-vectors. Alternatively, grade-formula_96 blades are called pseudoscalars, grade-formula_120 blades pseudovectors, etc. Many of the elements of the algebra are not graded by this scheme since they are sums of elements of differing grade. Such elements are said to be of "mixed grade". The grading of multivectors is independent of the basis chosen originally.
This is a grading as a vector space, but not as an algebra. Because the product of an formula_61-blade and an formula_122-blade is contained in the span of formula_113 through formula_124-blades, the geometric algebra is a filtered algebra.
A multivector formula_125 may be decomposed with the grade-projection operator formula_126, which outputs the grade-formula_61 portion of formula_125. As a result:
As an example, the geometric product of two vectors formula_130 since formula_131 and formula_132 and formula_133, for formula_134 other than formula_113 and formula_117.
The decomposition of a multivector formula_125 may also be split into those components that are even and those that are odd:
This is the result of forgetting structure from a formula_140-graded vector space to a formula_141-graded vector space. The geometric product respects this coarser grading. Thus, in addition to being a formula_141-graded vector space, the geometric algebra is a formula_141-graded algebra, or superalgebra.
Restricting to the even part, the product of two even elements is also even. This means that the even multivectors define an "even subalgebra". The even subalgebra of an formula_96-dimensional geometric algebra is isomorphic (without preserving either filtration or grading) to a full geometric algebra of formula_120 dimensions. Examples include formula_146 and formula_147.
Geometric algebra represents subspaces of formula_2 as blades, and so they coexist in the same algebra with vectors from formula_2. A formula_105-dimensional subspace formula_151 of formula_2 is represented by taking an orthogonal basis formula_153 and using the geometric product to form the blade formula_154. There are multiple blades representing formula_151; all those representing formula_151 are scalar multiples of formula_157. These blades can be separated into two sets: positive multiples of formula_157 and negative multiples of formula_157. The positive multiples of formula_157 are said to have "the same orientation" as formula_157, and the negative multiples the "opposite orientation".
Blades are important since geometric operations such as projections, rotations and reflections depend on the factorability via the exterior product that (the restricted class of) formula_96-blades provide but that (the generalized class of) grade-formula_96 multivectors do not when formula_164.
Unit pseudoscalars are blades that play important roles in GA. A unit pseudoscalar for a non-degenerate subspace formula_151 of formula_2 is a blade that is the product of the members of an orthonormal basis for formula_151. It can be shown that if formula_168 and formula_169 are both unit pseudoscalars for formula_151, then formula_171 and formula_172. If one does not choose an orthonormal basis for formula_151, then the Plücker embedding gives a vector in the exterior algebra, but only up to scaling. Using the vector space isomorphism between the geometric algebra and the exterior algebra, this gives the equivalence class of formula_174 for all formula_175. Orthonormality removes this ambiguity except for the signs above.
Suppose the geometric algebra formula_176 with the familiar positive definite inner product on formula_177 is formed. Given a plane (formula_117-dimensional subspace) of formula_177, one can find an orthonormal basis formula_180 spanning the plane, and thus find a unit pseudoscalar formula_181 representing this plane. The geometric product of any two vectors in the span of formula_182 and formula_183 lies in formula_184, that is, it is the sum of a formula_113-vector and a formula_117-vector.
By the properties of the geometric product, formula_187. The resemblance to the imaginary unit is not incidental: the subspace formula_188 is formula_37-algebra isomorphic to the complex numbers. In this way, a copy of the complex numbers is embedded in the geometric algebra for each 2-dimensional subspace of formula_2 on which the quadratic form is definite.
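Written out with explicit orthonormal vectors e1, e2 in place of the article's placeholders, the computation behind this isomorphism is the standard one:

```latex
(e_1 e_2)^2 = e_1 e_2 e_1 e_2 = -\,e_1 e_1 e_2 e_2 = -1,
\qquad
\operatorname{span}\{\,1,\; e_1 e_2\,\} \;\cong\; \mathbb{C}.
```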
It is sometimes possible to identify the presence of an imaginary unit in a physical equation. Such units arise from one of the many quantities in the real algebra that square to formula_83, and these have geometric significance because of the properties of the algebra and the interaction of its various subspaces.
In formula_98, a further familiar case occurs. Given a canonical basis consisting of orthonormal vectors formula_193 of formula_2, the set of "all" formula_117-vectors is spanned by
Labelling these formula_134, formula_198 and formula_105 (momentarily deviating from our uppercase convention), the subspace generated by formula_113-vectors and formula_117-vectors is exactly formula_202. This set is seen to be the even subalgebra of formula_98, and furthermore is isomorphic as an formula_37-algebra to the quaternions, another important algebraic system.
Let formula_205 be a basis of formula_2, i.e. a set of formula_96 linearly independent vectors that span the formula_96-dimensional vector space formula_2. The basis that is dual to formula_205 is the set of elements of the dual vector space formula_211 that forms a biorthogonal system with this basis, thus being the elements denoted formula_212 satisfying
where formula_214 is the Kronecker delta.
Given a nondegenerate quadratic form on formula_2, formula_211 becomes naturally identified with formula_2, and the dual basis may be regarded as elements of formula_2, but are not in general the same set as the original basis.
Given further a GA of formula_2, let
be the pseudoscalar (which does not necessarily square to formula_221) formed from the basis formula_205. The dual basis vectors may be constructed as
where the formula_224 denotes that the formula_134th basis vector is omitted from the product.
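For a nondegenerate metric, the dual (reciprocal) basis can also be computed numerically by inverting the Gram matrix; for vectors identified with their duals this agrees with the pseudoscalar construction above. The following is a sketch assuming numpy, with illustrative names.

```python
import numpy as np

def reciprocal_basis(basis, metric):
    """Rows of `basis` are vectors b_1..b_n; returns vectors b^1..b^n
    satisfying g(b^i, b_j) = delta_ij (the Kronecker delta)."""
    B = np.asarray(basis, dtype=float)
    G = B @ metric @ B.T          # Gram matrix G_ij = g(b_i, b_j)
    return np.linalg.inv(G) @ B   # b^i = sum_j (G^{-1})_ij b_j

basis = [[1.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 1.0]]
metric = np.eye(3)
dual = reciprocal_basis(basis, metric)
print(np.round(dual @ metric @ np.asarray(basis).T, 10))  # identity matrix
```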
It is common practice to extend the exterior product on vectors to the entire algebra. This may be done through the use of the grade projection operator:
This generalization is consistent with the above definition involving antisymmetrization. Another generalization related to the exterior product is the commutator product:
The regressive product (usually referred to as the "meet") is the dual of the exterior product (or "join" in this context). The dual specification of elements permits, for blades formula_125 and formula_229, the intersection (or meet) where the duality is to be taken relative to the smallest grade blade containing both formula_125 and formula_229 (the join).
with formula_168 the unit pseudoscalar of the algebra. The regressive product, like the exterior product, is associative.
The inner product on vectors can also be generalized, but in more than one non-equivalent way. The paper gives a full treatment of several different inner products developed for geometric algebras and their interrelationships, and the notation is taken from there. Many authors use the same symbol as for the inner product of vectors for their chosen extension (e.g. Hestenes and Perwass). No consistent notation has emerged.
Among these several different generalizations of the inner product on vectors are:
A number of identities incorporating the contractions are valid without restriction of their inputs.
For example,
Benefits of using the left contraction as an extension of the inner product on vectors include that the identity formula_244 is extended to formula_245 for any vector formula_23 and multivector formula_229, and that the projection operation formula_248 is extended to formula_249 for any blade formula_229 and any multivector formula_125 (with a minor modification to accommodate null formula_229, given below).
Although a versor is easier to work with because it can be directly represented in the algebra as a multivector, versors are a subgroup of linear functions on multivectors, which can still be used when necessary. The geometric algebra of an formula_96-dimensional vector space is spanned by a basis of formula_101 elements. If a multivector is represented by a formula_255 real column matrix of coefficients of a basis of the algebra, then all linear transformations of the multivector can be expressed as the matrix multiplication by a formula_256 real matrix. However, such a general linear transformation allows arbitrary exchanges among grades, such as a "rotation" of a scalar into a vector, which has no evident geometric interpretation.
A general linear transformation from vectors to vectors is of interest. With the natural restriction to preserving the induced exterior algebra, the "outermorphism" of the linear transformation is the unique extension of the versor (a condition such as mapping formula_18 to formula_18 is usually added to ensure that this extension is unique). If formula_257 is a linear function that maps vectors to vectors, then its outermorphism is the function that obeys the rule
for a blade, extended to the whole algebra through linearity.
Although a lot of attention has been placed on CGA, it should be noted that GA is not just one algebra; it is one of a family of algebras with the same essential structure.
formula_91 may be considered as an extension or completion of vector algebra. "From Vectors to Geometric Algebra" covers basic analytic geometry and gives an introduction to stereographic projection.
The even subalgebra of formula_260 is isomorphic to the complex numbers, as may be seen by writing a vector formula_261 in terms of its components in an orthonormal basis and left multiplying by the basis vector formula_262, yielding
where we identify formula_264 since
Similarly, the even subalgebra of formula_91 with basis formula_267 is isomorphic to the quaternions as may be seen by identifying formula_268, formula_269 and formula_270.
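One common identification realizing this isomorphism is shown below. Sign conventions differ between authors, so this is a sketch of one standard choice rather than necessarily the identification intended by the article's placeholders.

```latex
i \;\leftrightarrow\; e_3 e_2, \qquad
j \;\leftrightarrow\; e_1 e_3, \qquad
k \;\leftrightarrow\; e_2 e_1,
\qquad\text{so that}\qquad
i^{2} = j^{2} = k^{2} = ijk = -1 .
```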
Every associative algebra has a matrix representation; replacing the three Cartesian basis vectors by the Pauli matrices gives a representation of formula_91:
Dotting the "Pauli vector" (a dyad):
In physics, the main applications are the geometric algebra of Minkowski 3+1 spacetime, called spacetime algebra (STA), and, less commonly, the algebra of physical space (APS).
While in STA points of spacetime are represented simply by vectors, in APS, points of formula_280-dimensional spacetime are instead represented by paravectors: a formula_57-dimensional vector (space) plus a formula_18-dimensional scalar (time).
In spacetime algebra the electromagnetic field tensor has a bivector representation formula_283. Here, the formula_284 is the unit pseudoscalar (or four-dimensional volume element), formula_285 is the unit vector in time direction, and formula_286 and formula_229 are the classic electric and magnetic field vectors (with a zero time component). Using the four-current formula_288, Maxwell's equations then become
In geometric calculus, juxtaposition of vectors such as in formula_289 indicates the geometric product, and it can be decomposed into parts as formula_290. Here formula_157 is the covector derivative in any spacetime and reduces to formula_292 in flat spacetime. formula_293 plays a role in Minkowski formula_294-spacetime that is synonymous with the role of formula_292 in Euclidean formula_57-space, and is related to the d'Alembertian by formula_297. Indeed, given an observer represented by a future-pointing timelike vector formula_285, we have
Boosts in this Lorentzian metric space have the same expression formula_301 as rotation in Euclidean space, where formula_302 is the bivector generated by the time and the space directions involved, whereas in the Euclidean case it is the bivector generated by the two space directions, strengthening the "analogy" to almost identity.
The Dirac matrices are a representation of formula_93, showing the equivalence with matrix representations used by physicists.
The first model here is formula_304, the GA version of homogeneous coordinates used in projective geometry. Here a vector represents a point and an outer product of vectors an oriented length, yet we may work with the algebra in just the same way as in formula_91. However, a useful inner product cannot be defined in the space, and so there is no geometric product either, leaving only the outer product and non-metric uses of duality such as meet and join.
Nevertheless, there has been investigation of 4-dimensional alternatives to the full 5-dimensional CGA for limited geometries such as rigid body movements. A selection of these can be found in Part IV of "Guide to Geometric Algebra in Practice". Note that the algebra formula_306 appears as a subalgebra of CGA by selecting just one null vector and dropping the other and further that the "motor algebra" (isomorphic to dual quaternions) is the even subalgebra of formula_306.
A compact description of the current state of the art is provided by , which also includes further references, in particular to . Other useful references are and .
Working within GA, Euclidean space formula_308 (along with a conformal point at infinity) is embedded projectively in the CGA formula_309 via the identification of Euclidean points with formula_18-d subspaces in the formula_294-d null cone of the formula_312-d CGA vector subspace. This allows all conformal transformations to be done as rotations and reflections and is covariant, extending incidence relations of projective geometry to circles and spheres.
Specifically, we add orthogonal basis vectors formula_313 and formula_314 such that formula_315 and formula_316 to the basis of the vector space that generates formula_317 and identify null vectors
This procedure has some similarities to the procedure for working with homogeneous coordinates in projective geometry and in this case allows the modeling of Euclidean transformations of formula_321 as orthogonal transformations of a subset of formula_322.
A fast changing and fluid area of GA, CGA is also being investigated for applications to relativistic physics.
Two potential candidates are currently under investigation as the foundation for affine and projective geometry in three dimensions, formula_323 and formula_324, which include representations for shears and non-uniform scaling, as well as quadric surfaces and conic sections.
A new research model, Quadric Conformal Geometric Algebra (QCGA) formula_325, is an extension of CGA dedicated to quadric surfaces. The idea is to represent the objects in low-dimensional subspaces of the algebra. QCGA is capable of constructing quadric surfaces either using control points or implicit equations. Moreover, QCGA can compute the intersection of quadric surfaces, as well as the surface tangent and normal vectors at a point that lies on the quadric surface.
For any vector formula_23 and any invertible vector formula_327,
where the projection of formula_23 onto formula_327 (or the parallel part) is
and the rejection of formula_23 from formula_327 (or the orthogonal part) is
Using the concept of a formula_105-blade formula_229 as representing a subspace of formula_2 and every multivector ultimately being expressed in terms of vectors, this generalizes to projection of a general multivector onto any invertible formula_105-blade formula_229 as
with the rejection being defined as
The projection and rejection generalize to null blades formula_229 by replacing the inverse formula_343 with the pseudoinverse formula_344 with respect to the contractive product. The outcome of the projection coincides in both cases for non-null blades. For null blades formula_229, the definition of the projection given here with the first contraction rather than the second being onto the pseudoinverse should be used, as only then is the result necessarily in the subspace represented by formula_229.
The projection generalizes through linearity to general multivectors formula_125. The projection is not linear in formula_229 and does not generalize to objects formula_229 that are not blades.
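In ordinary Euclidean space, the vector-onto-vector case described above reduces to familiar componentwise formulas. The sketch below (assuming numpy, with illustrative names) checks that the projection, equal to (a · m) m⁻¹ in GA terms, and the rejection, equal to (a ∧ m) m⁻¹, sum back to the original vector; the code works with components rather than multivectors.

```python
import numpy as np

def project_reject(a, m):
    """Split vector a into its component parallel to m (the projection)
    and the remainder orthogonal to m (the rejection); m must be
    invertible, i.e. m . m != 0."""
    a, m = np.asarray(a, float), np.asarray(m, float)
    proj = (a @ m) / (m @ m) * m     # (a . m) m / (m . m)  =  (a . m) m^{-1}
    rej = a - proj                   # the orthogonal remainder
    return proj, rej

a = np.array([2.0, 1.0, -1.0])
m = np.array([1.0, 1.0, 0.0])
proj, rej = project_reject(a, m)
print(proj, rej)
print(np.isclose(rej @ m, 0.0), np.allclose(proj + rej, a))  # True True
```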
Simple reflections in a hyperplane are readily expressed in the algebra through conjugation with a single vector. These serve to generate the group of general rotoreflections and rotations.
The reflection formula_350 of a vector formula_351 along a vector formula_327, or equivalently in the hyperplane orthogonal to formula_327, is the same as negating the component of a vector parallel to formula_327. The result of the reflection will be
This is not the most general operation that may be regarded as a reflection when the dimension formula_164. A general reflection may be expressed as the composite of any odd number of single-axis reflections. Thus, a general reflection formula_357 of a vector formula_23 may be written
where
If we define the reflection along a non-null vector formula_327 of the product of vectors as the reflection of every vector in the product along the same vector, we get for any product of an odd number of vectors that, by way of example,
and for the product of an even number of vectors that
Using the concept of every multivector ultimately being expressed in terms of vectors, the reflection of a general multivector formula_125 using any reflection versor formula_366 may be written
where formula_368 is the automorphism of reflection through the origin of the vector space (formula_369) extended through linearity to the whole algebra.
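A small numerical sketch of the single-vector reflection described above: negating the component of a vector parallel to m while keeping the orthogonal part, which in GA notation is the standard map a ↦ −m a m⁻¹. Again the code works componentwise rather than with multivectors, and all names are illustrative.

```python
import numpy as np

def reflect_along(a, m):
    """Reflect vector a along m (i.e. in the hyperplane orthogonal to m):
    the parallel component is negated, the orthogonal component kept.
    Equals -m a m^{-1} when written with the geometric product."""
    a, m = np.asarray(a, float), np.asarray(m, float)
    parallel = (a @ m) / (m @ m) * m
    return a - 2.0 * parallel

a = np.array([3.0, 1.0, 2.0])
m = np.array([0.0, 0.0, 1.0])           # reflect in the x-y plane
print(reflect_along(a, m))              # [ 3.  1. -2.]
```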
If we have a product of vectors formula_370 then we denote the reverse as
As an example, assume that formula_372 we get
Scaling formula_374 so that formula_375 then
so formula_377 leaves the length of formula_378 unchanged. We can also show that
so the transformation formula_377 preserves both length and angle. It therefore can be identified as a rotation or rotoreflection; formula_374 is called a rotor if it is a proper rotation (as it is if it can be expressed as a product of an even number of vectors) and is an instance of what is known in GA as a "versor".
There is a general method for rotating a vector involving the formation of a multivector of the form formula_382 that produces a rotation formula_383 in the plane and with the orientation defined by a formula_117-blade formula_385.
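For a Euclidean plane with unit bivector B (so that B² = −1), this multivector can be written as an exponential, giving the usual rotor formula. This is a sketch of one common convention; the sign and the placement of the half-angle vary between authors.

```latex
R \;=\; e^{-B\theta/2} \;=\; \cos\tfrac{\theta}{2} \;-\; B \sin\tfrac{\theta}{2},
\qquad
x' \;=\; R\, x\, \widetilde{R},
```

where the tilde denotes the reverse of R and the vector x is rotated by the angle θ in the plane of B, with the orientation given by B.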
Rotors are a generalization of quaternions to formula_96-dimensional spaces.
A formula_105-versor is a multivector that can be expressed as the geometric product of formula_105 invertible vectors. Unit quaternions (originally called versors by Hamilton) may be identified with rotors in 3D space in much the same way as real 2D rotors subsume complex numbers; for the details refer to Dorst.
Some authors use the term “versor product” to refer to the frequently occurring case where an operand is "sandwiched" between operators. The descriptions for rotations and reflections, including their outermorphisms, are examples of such sandwiching. These outermorphisms have a particularly simple algebraic form. Specifically, a mapping of vectors of the form
Since both operators and operand are versors there is potential for alternative examples such as rotating a rotor or reflecting a spinor always provided that some geometrical or physical significance can be attached to such operations.
By the Cartan–Dieudonné theorem, every isometry can be expressed as a composition of reflections in hyperplanes; since composed reflections provide rotations, it follows that orthogonal transformations are versors.
In group terms, for a real, non-degenerate formula_9, having identified the group formula_392 as the group of all invertible elements of formula_393, Lundholm gives a proof that the "versor group" formula_394 (the set of invertible versors) is equal to the Lipschitz group formula_395 (also known as the Clifford group, although Lundholm deprecates this usage).
Lundholm defines the formula_396, formula_397, and formula_398 subgroups, generated by unit vectors, and in the case of formula_397 and formula_398, only an even number of such vector factors can be present.
Spinors are defined as elements of the even subalgebra of a real GA; an analysis of the GA approach to spinors is given by Francis and Kosowsky.
For vectors formula_274 and formula_275 spanning a parallelogram we have
with the result that formula_404 is linear in the product of the "altitude" and the "base" of the parallelogram, that is, its area.
Similar interpretations are true for any number of vectors spanning an formula_96-dimensional parallelotope; the exterior product of vectors formula_406, that is formula_407, has a magnitude equal to the volume of the formula_96-parallelotope. An formula_96-vector does not necessarily have a shape of a parallelotope – this is a convenient visualization. It could be any shape, although the volume equals that of the parallelotope.
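In Euclidean terms the magnitude statement above reads as the standard identity below, where θ is the angle between the two vectors:

```latex
\lVert a \wedge b \rVert \;=\; \lVert a\rVert\,\lVert b\rVert\,\lvert\sin\theta\rvert ,
```

which is exactly the area of the parallelogram spanned by a and b.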
We may define the line parametrically by formula_410 where formula_86 and formula_412 are position vectors for points P and T and formula_378 is the direction vector for the line.
Then
so
and
The mathematical description of rotational forces such as torque and angular momentum often makes use of the cross product of vector calculus in three dimensions with a convention of orientation (handedness).
The cross product can be viewed in terms of the exterior product allowing a more natural geometric interpretation of the cross product as a bivector using the dual relationship
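In three Euclidean dimensions the dual relationship referred to above is usually written as follows, with I = e₁e₂e₃ the unit pseudoscalar. This is the standard convention, supplied here in explicit notation because the article's displayed formula is missing.

```latex
a \wedge b \;=\; I\,(a \times b),
\qquad
a \times b \;=\; -\,I\,(a \wedge b).
```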
For example, torque is generally defined as the magnitude of the perpendicular force component times distance, or work per unit angle.
Suppose a circular path in an arbitrary plane containing orthonormal vectors formula_419 and formula_420 is parameterized by angle.
By designating the unit bivector of this plane as the imaginary number
this path vector can be conveniently written in complex exponential form
and the derivative with respect to angle is
So the torque, the rate of change of work formula_151, due to a force formula_1, is
Unlike the cross product description of torque, formula_429, the geometric algebra description does not introduce a vector in the normal direction; such a vector does not exist in two dimensions and is not unique in more than three dimensions. The unit bivector describes the plane and the orientation of the rotation, and the sense of the rotation is relative to the angle between the vectors formula_430 and formula_431.
Geometric calculus extends the formalism to include differentiation and integration including differential geometry and differential forms.
Essentially, the vector derivative is defined so that the GA version of Green's theorem is true,
and then one can write
as a geometric product, effectively generalizing Stokes' theorem (including the differential form version of it).
In formula_434 when formula_125 is a curve with endpoints formula_23 and formula_44, then
reduces to
or the fundamental theorem of integral calculus.
Also developed are the concept of vector manifold and geometric integration theory (which generalizes differential forms).
Although the connection of geometry with algebra dates back at least to Euclid's "Elements" in the third century B.C. (see Greek geometric algebra), GA in the sense used in this article was not developed until 1844, when it was used in a "systematic way" to describe the geometrical properties and "transformations" of a space. In that year, Hermann Grassmann introduced the idea of a geometrical algebra in full generality as a certain calculus (analogous to the propositional calculus) that encoded all of the geometrical information of a space. Grassmann's algebraic system could be applied to a number of different kinds of spaces, chief among them Euclidean space, affine space, and projective space. Following Grassmann, in 1878 William Kingdon Clifford examined Grassmann's algebraic system alongside the quaternions of William Rowan Hamilton. From his point of view, the quaternions described certain "transformations" (which he called "rotors"), whereas Grassmann's algebra described certain "properties" (or "Strecken" such as length, area, and volume). His contribution was to define a new product, the "geometric product", on an existing Grassmann algebra, which realized the quaternions as living within that algebra. Subsequently, Rudolf Lipschitz in 1886 generalized Clifford's interpretation of the quaternions and applied them to the geometry of rotations in formula_96 dimensions. Later these developments would lead other 20th-century mathematicians to formalize and explore the properties of the Clifford algebra.
Nevertheless, another revolutionary development of the 19th-century would completely overshadow the geometric algebras: that of vector analysis, developed independently by Josiah Willard Gibbs and Oliver Heaviside. Vector analysis was motivated by James Clerk Maxwell's studies of electromagnetism, and specifically the need to express and manipulate conveniently certain differential equations. Vector analysis had a certain intuitive appeal compared to the rigors of the new algebras. Physicists and mathematicians alike readily adopted it as their geometrical toolkit of choice, particularly following the influential 1901 textbook "Vector Analysis" by Edwin Bidwell Wilson, following lectures of Gibbs.
In more detail, there have been three approaches to geometric algebra: quaternionic analysis, initiated by Hamilton in 1843 and geometrized as rotors by Clifford in 1878; geometric algebra, initiated by Grassmann in 1844; and vector analysis, developed out of quaternionic analysis in the late 19th century by Gibbs and Heaviside. The legacy of quaternionic analysis in vector analysis can be seen in the use of formula_134, formula_198, formula_105 to indicate the basis vectors of formula_321: these are thought of as the purely imaginary quaternions. From the perspective of geometric algebra, the even subalgebra of the spacetime algebra is isomorphic to the GA of 3D Euclidean space, and the quaternions are isomorphic to the even subalgebra of the GA of 3D Euclidean space, which unifies the three approaches.
Progress on the study of Clifford algebras quietly advanced through the twentieth century, although largely due to the work of abstract algebraists such as Hermann Weyl and Claude Chevalley. The "geometrical" approach to geometric algebras has seen a number of 20th-century revivals. In mathematics, Emil Artin's "Geometric Algebra" discusses the algebra associated with each of a number of geometries, including affine geometry, projective geometry, symplectic geometry, and orthogonal geometry. In physics, geometric algebras have been revived as a "new" way to do classical mechanics and electromagnetism, together with more advanced topics such as quantum mechanics and gauge theory. David Hestenes reinterpreted the Pauli and Dirac matrices as vectors in ordinary space and spacetime, respectively, and has been a primary contemporary advocate for the use of geometric algebra.
In computer graphics and robotics, geometric algebras have been revived in order to efficiently represent rotations and other transformations. For applications of GA in robotics (screw theory, kinematics and dynamics using versors), computer vision, control and neural computing (geometric learning) see Bayro (2010).
GA is a very application-oriented subject. There is a reasonably steep initial learning curve associated with it, but this can be eased somewhat by the use of applicable software. The following is a list of freely available software that does not require ownership of commercial software or purchase of any commercial products for this purpose:
Software allowing script creation and including sample visualizations, manual and GA introduction.
For programmers, this is a code generator with support for C, C++, C# and Java.
English translations of early books and papers
Research groups
|
https://en.wikipedia.org/wiki?curid=12939
|
George Benson
George Washington Benson (born March 22, 1943) is an American guitarist, singer, and songwriter. He began his professional career at the age of 21 as a jazz guitarist.
A former child prodigy, Benson first came to prominence in the 1960s, playing soul jazz with Jack McDuff and others. He then launched a successful solo career, alternating between jazz, pop, R&B singing, and scat singing. His album "Breezin'" was certified triple-platinum, hitting no. 1 on the "Billboard" album chart in 1976. His concerts were well attended through the 1980s, and he still has a large following. Benson has been honored with a star on the Hollywood Walk of Fame.
Benson was born and raised in the Hill District in Pittsburgh, Pennsylvania. At the age of seven, he first played the ukulele in a corner drug store, for which he was paid a few dollars. At the age of eight, he played guitar in an unlicensed nightclub on Friday and Saturday nights, but the police soon closed the club down. At the age of nine, he started to record. Out of the four sides he cut, two were released: "She Makes Me Mad" backed with "It Should Have Been Me", with RCA-Victor in New York; although one source indicates this record was released under the name "Little Georgie", the 45rpm label is printed with the name George Benson. The single was produced by Leroy Kirkland for RCA's rhythm and blues label, Groove Records. As he has stated in an interview, Benson's introduction to show business affected his schooling. When this was discovered, together with the failure of his single, his guitar was impounded. After he spent time in a juvenile detention centre, his stepfather made him a new guitar.
Benson attended and graduated from Schenley High School. As a youth he learned how to play straight-ahead instrumental jazz during a relationship performing for several years with organist Jack McDuff. One of his many early guitar heroes was country-jazz guitarist Hank Garland. At the age of 21, he recorded his first album as leader, "The New Boss Guitar", featuring McDuff. Benson's next recording was "It's Uptown with the George Benson Quartet", including Lonnie Smith on organ and Ronnie Cuber on baritone saxophone. Benson followed it up with "The George Benson Cookbook", also with Lonnie Smith and Ronnie Cuber on baritone and drummer Marion Booker. Miles Davis employed Benson in the mid-1960s, featuring his guitar on "Paraphernalia" on his 1968 Columbia release, "Miles in the Sky" before going to Verve Records.
Benson then signed with Creed Taylor's jazz label CTI Records, where he recorded several albums, with jazz heavyweights guesting, to some success, mainly in the jazz field. His 1974 release, "Bad Benson", climbed to the top spot in the "Billboard" jazz chart, while the follow-ups, "Good King Bad" (#51 Pop album) and "Benson and Farrell" (with Joe Farrell), both reached the jazz top-three sellers. Benson also did a version of The Beatles's 1969 album "Abbey Road" called "The Other Side of Abbey Road", also released in 1969, and a version of "White Rabbit", originally written and recorded by San Francisco rock group Great Society, and made famous by Jefferson Airplane. Benson played on numerous sessions for other CTI artists during this time, including Freddie Hubbard and Stanley Turrentine, notably on the latter's acclaimed album "Sugar".
By the mid-to-late 1970s, as he recorded for Warner Bros. Records, a whole new audience began to discover Benson. With the 1976 release "Breezin'", Benson sang a lead vocal on the track "This Masquerade" (notable also for the lush, romantic piano intro and solo by Jorge Dalto), which became a huge pop hit and won a Grammy Award for Record of the Year. (He had sung vocals infrequently on albums earlier in his career, notably his rendition of "Here Comes the Sun" on the "Other Side of Abbey Road" album.) The rest of the album is instrumental, including his rendition of the 1975 Jose Feliciano composition "Affirmation".
In 1976, Benson toured with soul singer Minnie Riperton, who had been diagnosed with terminal breast cancer earlier that year and, in addition, appeared as a guitarist and backup vocalist on Stevie Wonder's song "Another Star" from Wonder's album "Songs in the Key of Life".
That same year, 1976, the top-selling album "Breezin'" was released on the Warner Bros. label, featuring the Bobby Womack-penned title track and the Leon Russell-penned "This Masquerade", which is now a jazz standard. Both tracks won Grammy awards that year, and the LP put Benson into the musical limelight in both the USA and Europe. Until this time, Benson had been discouraged from using his singing skills, mainly because the company's decision makers felt he was not competent enough vocally and should stick to playing the guitar; the album's success clearly proved them wrong.
He also recorded the original version of "The Greatest Love of All" for the 1977 Muhammad Ali bio-pic, "The Greatest", which was later covered by Whitney Houston as "Greatest Love of All". During this time Benson recorded with the German conductor Claus Ogerman. The live take of "On Broadway", recorded a few months later from the 1978 release "Weekend in L.A.", also won a Grammy. He has worked with Freddie Hubbard on a number of his albums throughout the 1960s, 1970s and 1980s.
The Qwest record label (a subsidiary of Warner Bros., run by Quincy Jones) released Benson's breakthrough pop album "Give Me The Night", produced by Jones. Benson made it into the pop and R&B top ten with the song "Give Me the Night" (written by former Heatwave keyboardist Rod Temperton). He had many hit singles such as "Love All the Hurt Away", "Turn Your Love Around", "Inside Love", "Lady Love Me", "20/20", "Shiver", "Kisses in the Moonlight". More importantly, Quincy Jones encouraged Benson to search his roots for further vocal inspiration, and he rediscovered his love for Nat Cole, Ray Charles and Donny Hathaway in the process, influencing a string of further vocal albums into the 1990s. Despite returning to his jazz and guitar playing most recently, this theme was reflected again much later in Benson's 2000 release "Absolute Benson", featuring a cover of one of Hathaway's most notable songs, "The Ghetto". Benson accumulated three other platinum LPs and two gold albums.
In 1990, Benson was awarded an Honorary Doctorate of Music from the Berklee College of Music.
To commemorate the long relationship between Benson and Ibanez and to celebrate 30 years of collaboration on the GB Signature Models, Ibanez created the GB30TH, a limited-edition model with a gold-foil finish inspired by the traditional Japanese Garahaku art form. In 2009, Benson was recognized by the National Endowment for the Arts as a Jazz Master, the nation's highest honor in jazz. Benson performed at the 49th edition of the Ohrid Summer Festival in North Macedonia on July 25, 2009, and presented his tribute show "An Unforgettable Tribute to Nat King Cole" as part of the Istanbul International Jazz Festival in Turkey on July 27. In the fall of 2009, Benson finished recording an album entitled "Songs and Stories" with Marcus Miller, producer John Burk, and session musicians David Paich and Steve Lukather. As a part of the promotion for his album "Songs and Stories", Benson has appeared or performed on "The Tavis Smiley Show", "Jimmy Kimmel Live!" and "Late Night with Jimmy Fallon".
He performed at the Java Jazz Festival March 4–6, 2011. In 2011, Benson released the album "Guitar Man", revisiting his 1960s/early-1970s guitar-playing roots with a 12-song collection of covers of both jazz and pop standards produced by John Burk.
In June 2013, Benson released his fourth album for Concord, "", which included Wynton Marsalis, Idina Menzel, Till Brönner, and Judith Hill. In September, he returned to perform at Rock in Rio festival, in Rio de Janeiro, 35 years after his first performance at this festival, which was then the inaugural one.
In July 2016, Benson participated as a mentor in the Sky Arts program "Guitar Star" in the search for the UK and Republic of Ireland's most talented guitarist.
In May 2018, Benson was featured on the Gorillaz single "Humility".
On July 12, 2018, it was announced that Benson had signed to Mascot Label Group.
On June 25, 2019, "The New York Times Magazine" listed George Benson among hundreds of musicians whose material was destroyed in the 2008 Universal fire.
Benson has been married to Johnnie Lee since 1965 and has seven children. Benson describes his music as focusing more on love and romance, due to his commitment to his family and religious practices, with Benson being a Jehovah's Witness. Benson has been a resident of Englewood, New Jersey.
List of Grammy Awards received by George Benson
|
https://en.wikipedia.org/wiki?curid=12945
|
Grigory Barenblatt
Grigory Isaakovich Barenblatt (10 July 1927 – 22 June 2018) was a Russian mathematician.
Barenblatt graduated in 1950 from Moscow State University, Department of Mechanics and Mathematics. He received his Ph.D. in 1953 from Moscow State University under the supervision of A. N. Kolmogorov.
Barenblatt also received a D.Sc. from Moscow State University in 1957. He was an emeritus Professor in Residence at the Department of Mathematics of the University of California, Berkeley, and a mathematician at the Department of Mathematics, Lawrence Berkeley National Laboratory. He was G. I. Taylor Professor of Fluid Mechanics at the University of Cambridge from 1992 to 1994, and subsequently Emeritus G. I. Taylor Professor of Fluid Mechanics. His areas of research were:
|
https://en.wikipedia.org/wiki?curid=12946
|
Grammatical tense
In grammar, tense is a category that expresses time reference, typically with reference to the moment of speaking. Tenses are usually manifested by the use of specific forms of verbs, particularly in their conjugation patterns.
The main tenses found in many languages include the past, present, and future. Some languages have only two distinct tenses, such as past and nonpast, or future and nonfuture. There are also tenseless languages, like most of the Chinese languages, though they can possess a future and nonfuture system, which is typical of Sino-Tibetan languages. Recent work by Bittner and Tonnhauser has described the different ways in which tenseless languages nonetheless mark time. On the other hand, some languages make finer tense distinctions, such as remote vs recent past, or near vs remote future.
Tenses generally express time relative to the moment of speaking. In some contexts, however, their meaning may be relativized to a point in the past or future which is established in the discourse (the moment being spoken about). This is called "relative" (as opposed to "absolute") tense. Some languages have different verb forms or constructions which manifest relative tense, such as pluperfect ("past-in-the-past") and "future-in-the-past".
Expressions of tense are often closely connected with expressions of the category of aspect; sometimes what are traditionally called tenses (in languages such as Latin) may in modern analysis be regarded as combinations of tense with aspect. Verbs are also often conjugated for mood, and since in many cases the three categories are not manifested separately, some languages may be described in terms of a combined tense–aspect–mood (TAM) system.
The English noun "tense" comes from Old French "tens" "time" (spelled "temps" in modern French through deliberate archaisation), from Latin "time". It is not related to the adjective "tense", which comes from Latin "tensus", the perfect passive participle of "tendere" "stretch".
In modern linguistic theory, tense is understood as a category that expresses (grammaticalizes) time reference; namely one which, using grammatical means, places a state or action in time. Nonetheless, in many descriptions of languages, particularly in traditional European grammar, the term "tense" is applied to verb forms or constructions that express not merely position in time, but also additional properties of the state or action – particularly aspectual or modal properties.
The category of aspect expresses how a state or action relates to time – whether it is seen as a complete event, an ongoing or repeated situation, etc. Many languages make a distinction between perfective aspect (denoting complete events) and imperfective aspect (denoting ongoing or repeated situations); some also have other aspects, such as a perfect aspect, denoting a state following a prior event. Some of the traditional "tenses" express time reference together with aspectual information. In Latin and French, for example, the imperfect denotes past time in combination with imperfective aspect, while other verb forms (the Latin perfect, and the French "passé composé" or "passé simple") are used for past time reference with perfective aspect.
The category of mood is used to express modality, which includes such properties as uncertainty, evidentiality, and obligation. Commonly encountered moods include the indicative, subjunctive, and conditional. Mood can be bound up with tense, aspect, or both, in particular verb forms. Hence certain languages are sometimes analysed as having a single tense–aspect–mood (TAM) system, without separate manifestation of the three categories.
The term "tense", then, particularly in less formal contexts, is sometimes used to denote any combination of tense proper, aspect, and mood. As regards English, there are many verb forms and constructions which combine time reference with continuous and/or perfect aspect, and with indicative, subjunctive or conditional mood. Particularly in some English language teaching materials, some or all of these forms can be referred to simply as tenses (see below).
Particular tense forms need not always carry their basic time-referential meaning in every case. A present tense form may sometimes refer to the past (as in the historical present), a past tense form may sometimes refer to the non-past (as in some English conditional sentences), and so on.
Not all languages have tense: tenseless languages include Chinese and Dyirbal. Some languages have all three basic tenses (the past, present, and future), while others have only two: some have past and nonpast tenses, the latter covering both present and future times (as in Arabic, Japanese, and in English in some analyses), whereas others such as Greenlandic and Quechua have future and nonfuture. Some languages have four or more tenses, making finer distinctions either in the past (e.g. remote vs. recent past) or in the future (e.g. near vs. remote future). The six-tense language Kalaw Lagaw Ya of Australia has the remote past, the recent past, the today past, the present, the today/near future and the remote future. The Amazonian Cubeo language has a historical past tense, used for events perceived as historical.
Tenses that refer specifically to "today" are called hodiernal tenses; these can be either past or future. Apart from Kalaw Lagaw Ya, another language which features such tenses is Mwera, a Bantu language of Tanzania. It is also suggested that in 17th-century French, the "passé composé" served as a hodiernal past. Tenses which contrast with hodiernals, by referring to the past before today or the future after today, are called pre-hodiernal and post-hodiernal respectively. Some languages also have a crastinal tense, a future tense referring specifically to tomorrow (found in some Bantu languages); or a hesternal tense, a past tense referring specifically to yesterday (although this name is also sometimes used to mean pre-hodiernal). A tense for after tomorrow is thus called post-crastinal, and one for before yesterday is called pre-hesternal.
Another tense found in some languages, including Luganda, is the persistive tense, used to indicate that a state or ongoing action is still the case (or, in the negative, is no longer the case). Luganda also has tenses meaning "so far" and "not yet".
Some languages have special tense forms that are used to express relative tense. Tenses that refer to the past relative to the time under consideration are called "anterior"; these include the pluperfect (for the past relative to a past time) and the future perfect (for the past relative to a future time). Similarly, "posterior" tenses refer to the future relative to the time under consideration, as with the English "future-in-the-past": "(he said that) he would go." Relative tense forms are also sometimes analysed as combinations of tense with aspect: the perfect aspect in the anterior case, or the prospective aspect in the posterior case.
Some languages have cyclic tense systems. This is a form of temporal marking where tense is given relative to a reference point or reference span. In the Burarra language, for example, events that occurred earlier on the day of speaking are marked with the same verb forms as events that happened in the far past, while events that happened yesterday (compared to the moment of speech) are marked with the same forms as events in the present. This can be thought of as a system where events are marked as prior or contemporaneous to points of reference on a time line.
Tense is normally indicated by the use of a particular verb form – either an inflected form of the main verb, or a multi-word construction, or both in combination. Inflection may involve the use of affixes, such as the "-ed" ending that marks the past tense of English regular verbs, but can also entail stem modifications, such as ablaut, as found in the strong verbs in English and other Germanic languages, or reduplication. Multi-word tense constructions often involve auxiliary verbs or clitics. Examples which combine both types of tense marking include the French "passé composé", which has an auxiliary verb together with the inflected past participle form of the main verb; and the Irish past tense, where the proclitic "do" (in various surface forms) appears in conjunction with the affixed or ablaut-modified past tense form of the main verb.
As has already been mentioned, indications of tense are often bound up with indications of other verbal categories, such as aspect and mood. The conjugation patterns of verbs often also reflect agreement with categories pertaining to the subject, such as person, number and gender. It is consequently not always possible to identify elements that mark any specific category, such as tense, separately from the others.
A few languages have been shown to mark tense information (as well as aspect and mood) on nouns. This may be called nominal TAM.
Languages that do not have grammatical tense, such as Chinese, express time reference chiefly by lexical means – through adverbials, time phrases, and so on. (The same is done in tensed languages, to supplement or reinforce the time information conveyed by the choice of tense.) Time information is also sometimes conveyed as a secondary feature by markers of other categories, as with the Chinese aspect markers "le" and "guo", which in most cases place an action in past time. However, much time information is conveyed implicitly by context – it is therefore not always necessary, when translating from a tensed to a tenseless language, say, to express explicitly in the target language all of the information conveyed by the tenses in the source.
Latin is traditionally described as having six tenses (the Latin for "tense" being "tempus", plural "tempora"):
Of these, the imperfect and perfect can be considered to represent a past tense combined with imperfective and perfective aspect respectively (the first is used for habitual or ongoing past actions or states, and the second for completed actions). The pluperfect and future perfect are relative tenses, referring to the past relative to a past time or relative to a future time.
Latin verbs are conjugated for tense (and aspect) together with mood (indicative, subjunctive, and sometimes imperative) and voice (active or passive). Most forms are produced by inflecting the verb stem, with endings that also depend on the person and number of the subject. Some of the passive forms are produced using a participle together with a conjugated auxiliary verb. For details of the forms, see Latin conjugation.
The tenses of Ancient Greek are similar, but with a three-way aspect contrast in the past: the aorist, the perfect and the imperfect. The aorist was the "simple past", while the imperfect denoted uncompleted action in the past, and the perfect was used for past events having relevance to the present.
The study of modern languages has been greatly influenced by the grammar of the Classical languages, since early grammarians, often monks, had no other reference point to describe their language. Latin terminology is often used to describe modern languages, sometimes with a change of meaning, as with the application of "perfect" to forms in English that do not necessarily have perfective meaning, or the words "Imperfekt" and "Perfekt" to German past tense forms that mostly lack any relationship to the aspects implied by those terms.
English has only two morphological tenses: the present (or non-past), as in "he goes", and the past (or preterite), as in "he went". The non-past usually references the present, but sometimes references the future (as in "the bus leaves tomorrow"). (It also sometimes references the past, however, in what is called the historical present.)
Constructions with the modal auxiliary verbs "will" and "shall" also frequently refer to the future (although they have other uses as well); these are often described as the English future tense. Less commonly, forms with the auxiliaries "would" and (rarely) "should" are described as a relative tense, the future-in-the-past. (The same forms are used for the conditional mood, and for various other meanings.)
The present and past are distinguished by verb form, using either ablaut ("sing(s)" ~ "sang") or suffix ("walk(s)" ~ "walked"). (For details, see English verbs).
English also has continuous (progressive) aspect and perfect aspect; these together produce four aspectual types: simple, continuous, perfect, and perfect continuous. Each of these can combine with the tenses to produce a large set of different constructions, mostly involving one or more auxiliary verbs together with a participle or infinitive:
In some contexts, particularly in English language teaching, the tense–aspect combinations in the above table may be referred to simply as tenses. For details of the uses of these constructions, as well as additional verb forms representing different grammatical moods, see Uses of English verb forms.
Proto-Indo-European verbs had present, perfect (stative), imperfect and aorist forms – these can be considered as representing two tenses (present and past) with different aspects. Most languages in the Indo-European family have developed systems either with two morphological tenses (present or "non-past", and past) or with three (present, past and future). The tenses often form part of entangled tense–aspect–mood conjugation systems. Additional tenses, tense–aspect combinations, etc. can be provided by compound constructions containing auxiliary verbs.
The Germanic languages (which include English) have present (non-past) and past tenses formed morphologically, with future and other additional forms made using auxiliaries. In standard German, the compound past "(Perfekt)" has replaced the simple morphological past in most contexts.
The Romance languages (descendants of Latin) have past, present and future morphological tenses, with additional aspectual distinction in the past. French is an example of a language where, as in German, the simple morphological perfective past "(passé simple)" has mostly given way to a compound form "(passé composé)".
Irish, a Celtic language, has past, present and future tenses (see Irish conjugation). The past contrasts perfective and imperfective aspect, and some verbs retain such a contrast in the present. Classical Irish had a three-way aspectual contrast of simple–perfective–imperfective in the past and present tenses.
Persian, an Indo-Iranian language, has past and non-past forms, with additional aspectual distinctions. The future can be expressed using an auxiliary, but almost never in informal contexts.
In the Slavic languages, verbs are intrinsically perfective or imperfective. In Russian and some other languages in the group, perfective verbs have past and future tenses, while imperfective verbs have past, present and future, the imperfective future being a compound tense in most cases. The future tense of perfective verbs is formed in the same way as the present tense of imperfective verbs. However, in South Slavic languages, there may be a greater variety of forms – Bulgarian, for example, has present, past (both "imperfect" and "aorist") and future tenses, for both perfective and imperfective verbs, as well as perfect forms made with an auxiliary (see Bulgarian verbs).
Finnish and Hungarian, both members of the Uralic language family, have morphological present (non-past) and past tenses. The Hungarian verb "van" ("to be") also has a future form.
Turkish verbs conjugate for past, present and future, with a variety of aspects and moods.
Arabic verbs have past and non-past; future can be indicated by a prefix.
Korean verbs have a variety of affixed forms which can be described as representing present, past and future tenses, although they can alternatively be considered to be aspectual. Similarly, Japanese verbs are described as having present and past tenses, although they may be analysed as aspects. Chinese and many other East Asian languages generally lack inflection and are considered to be tenseless languages, although they may have aspect markers which convey certain information about time reference.
For examples of languages with a greater variety of tenses, see the section on possible tenses, above. Fuller information on tense formation and usage in particular languages can be found in the articles on those languages and their grammars.
Rapa is the French Polynesian language of the island of Rapa Iti. Verbs in the indigenous Old Rapa occur with a marker known as TAM (tense, aspect, mood), which can be followed by directional or deictic particles. Among these markers there are three tense markers: imperfective, progressive, and perfective, corresponding roughly to before, currently, and after the moment of speech. However, the specific TAM marker and the type of deictic or directional particle that follows it together determine different tense meanings.
Imperfective: denotes actions that have not yet occurred but will occur; expressed by TAM e.
Example:
Progressive: also expressed by TAM e; denotes actions that are currently happening when used with the deictic na, and actions that were just witnessed and are still happening when used with the deictic ra.
Example:
Perfective: denotes actions that have already occurred or have finished; marked by TAM ka.
Example:
In Old Rapa there are also other types of tense markers known as Past, Imperative, and Subjunctive.
Past
TAM i marks past action. It is rarely used as a matrix TAM and is more frequently observed in past embedded clauses.
Imperative
The imperative is marked in Old Rapa by TAM a. A second person subject is implied by the direct command of the imperative.
For a more polite form, rather than a straightforward command, the imperative TAM a is used with the adverbial kānei. Kānei is attested only in imperative structures and was translated by the French as “please”.
It is also used in a more impersonal form, for example when addressing a pesky neighbor.
Subjunctive
The subjunctive in Old Rapa is marked by kia and can also be used in expressions of desire.
The Tokelauan language is a tenseless language. The language uses the same words for all three tenses; the phrase E liliu mai au i te Aho Tōnai literally translates to Come back / me / on Saturday, but the translation becomes ‘I am coming back on Saturday’.
Wuvulu-Aua does not have an explicit tense, but rather tense is conveyed by mood, aspect markers, and time phrases. Wuvulu speakers use a realis mood to convey past tense as speakers can be certain about events that have occurred. In some cases, realis mood is used to convey present tense — often to indicate a state of being. Wuvulu speakers use an irrealis mood to convey future tense.
Tense in Wuvulu-Aua may also be implied by using time adverbials and aspectual markings. Wuvulu contains three verbal markers to indicate sequence of events. The preverbal adverbial "loʔo" 'first' indicates the verb occurs before any other. The postverbal morphemes "liai" and "linia" are the respective intransitive and transitive suffixes indicating a repeated action. The postverbal morphemes "li" and "liria" are the respective intransitive and transitive suffixes indicating a completed action.
Mortlockese uses tense markers such as "mii" and to denote the present tense state of a subject, "aa" to denote a present tense state that an object has changed to from a different, past state, "kɞ" to describe something that has already been completed, "pɞ" and "lɛ" to denote future tense, "pʷapʷ" to denote a possible action or state in future tense, and "sæn/mwo" for something that has not happened yet. Each of these markers is used in conjunction with the subject proclitics except for the markers "aa" and "mii". Additionally, the marker "mii" can be used with any type of intransitive verb.
|
https://en.wikipedia.org/wiki?curid=12947
|
Grammatical aspect
Aspect is a grammatical category that expresses how an action, event, or state, denoted by a verb, extends over time. Perfective aspect is used in referring to an event conceived as bounded and unitary, without reference to any flow of time during ("I helped him"). Imperfective aspect is used for situations conceived as existing continuously or repetitively as time flows ("I was helping him"; "I used to help people").
Further distinctions can be made, for example, to distinguish states and ongoing actions (continuous and progressive aspects) from repetitive actions (habitual aspect).
Certain aspectual distinctions express a relation in time between the event and the time of reference. This is the case with the perfect aspect, which indicates that an event occurred prior to (but has continuing relevance at) the time of reference: "I have eaten"; "I had eaten"; "I will have eaten".
Different languages make different grammatical aspectual distinctions; some (such as Standard German; see below) do not make any. The marking of aspect is often conflated with the marking of tense and mood (see tense–aspect–mood). Aspectual distinctions may be restricted to certain tenses: in Latin and the Romance languages, for example, the perfective–imperfective distinction is marked in the past tense, by the division between preterites and imperfects. Explicit consideration of aspect as a category first arose out of study of the Slavic languages; here verbs often occur in pairs, with two related verbs being used respectively for imperfective and perfective meanings.
The concept of grammatical aspect should not be confused with perfect and imperfect "verb forms"; the meanings of the latter terms are somewhat different, and in some languages, the common names used for verb forms may not follow the actual aspects precisely.
The Indian linguist Yaska (c. 7th century BCE) dealt with grammatical aspect, distinguishing actions that are processes ("bhāva"), from those where the action is considered as a completed whole ("mūrta"). This is the key distinction between the imperfective and perfective. Yaska also applied this distinction to a verb versus an action nominal.
Grammarians of the Greek and Latin languages also showed an interest in aspect, but the idea did not enter into the modern Western grammatical tradition until the 19th century via the study of the grammar of the Slavic languages. The earliest use of the term recorded in the Oxford English Dictionary dates from 1853.
Aspect is often confused with the closely related concept of tense, because they both convey information about time. While tense relates the time of referent to some other time, commonly the speech event, aspect conveys other temporal information, such as duration, completion, or frequency, as it relates to the time of action. Thus tense refers to "temporally when" while aspect refers to "temporally how". Aspect can be said to describe the texture of the time in which a situation occurs, such as a single point of time, a continuous range of time, a sequence of discrete points in time, etc., whereas tense indicates its location in time.
For example, consider the following sentences: "I eat", "I am eating", "I have eaten", and "I have been eating". All are in the present tense, indicated by the present-tense verb of each sentence ("eat", "am", and "have"). Yet since they differ in aspect each conveys different information or points of view as to how the action pertains to the present.
Grammatical aspect is a "formal" property of a language, distinguished through overt inflection, derivational affixes, or independent words that serve as grammatically required markers of those aspects. For example, the K'iche' language spoken in Guatemala has the inflectional prefixes "k"- and "x"- to mark incompletive and completive aspect; Mandarin Chinese has the aspect markers -"le" 了, -"zhe" 着, "zài"- 在, and -"guò" 过 to mark the perfective, durative stative, durative progressive, and experiential aspects, and also marks aspect with adverbs; and English marks the continuous aspect with the verb "to be" coupled with present participle and the perfect with the verb "to have" coupled with past participle. Even languages that do not mark aspect morphologically or through auxiliary verbs, however, can convey such distinctions by the use of adverbs or other syntactic constructions.
Grammatical aspect is distinguished from lexical aspect or "aktionsart", which is an inherent feature of verbs or verb phrases and is determined by the nature of the situation that the verb describes.
The most fundamental aspectual distinction, represented in many languages, is between perfective aspect and imperfective aspect. This is the basic aspectual distinction in the Slavic languages.
It semantically corresponds to the distinction between the morphological forms known respectively as the aorist and imperfect in Greek, the preterite and imperfect in Spanish, the simple past (passé simple) and imperfect in French, and the perfect and imperfect in Latin (from the Latin "perfectus", meaning "completed").
Essentially, the perfective aspect looks at an event as a complete action, while the imperfective aspect views an event as the process of unfolding or a repeated or habitual event (thus corresponding to the progressive/continuous aspect for events of short-term duration and to habitual aspect for longer terms).
For events of short durations in the past, the distinction often coincides with the distinction in the English language between the simple past "X-ed," as compared to the progressive "was X-ing". Compare "I wrote the letters this morning" (i.e. finished writing the letters: an action completed) and "I was writing letters this morning" (the letters may still be unfinished).
In describing longer time periods, English needs context to maintain the distinction between the habitual ("I called him often in the past" – a habit that has no point of completion) and perfective ("I called him once" – an action completed), although the construct "used to" marks both habitual aspect and past tense and can be used if the aspectual distinction otherwise is not clear.
Sometimes, English has a lexical distinction where other languages may use the distinction in grammatical aspect. For example, the English verbs "to know" (the state of knowing) and "to find out" (knowing viewed as a "completed action") correspond to the imperfect and perfect forms of the equivalent verbs in French and Spanish, "savoir" and "saber". This is also true when the sense of verb "to know" is "to know somebody", in this case opposed in aspect to the verb "to meet" (or even to the construction "to get to know"). These correspond to imperfect and perfect forms of "conocer" in Spanish. In German, on the other hand, the distinction is also lexical (as in English) through verbs "kennen" and "kennenlernen", although the semantic relation between both forms is much more straightforward since "kennen" means "to know" and "lernen" means "to learn".
The Germanic languages combine the concept of aspect with the concept of tense. Although English largely separates tense and aspect formally, its aspects (neutral, progressive, perfect, progressive perfect, and [in the past tense] habitual) do not correspond very closely to the distinction of perfective vs. imperfective that is found in most languages with aspect. Furthermore, the separation of tense and aspect in English is not maintained rigidly. One instance of this is the alternation, in some forms of English, between sentences such as "Have you eaten?" and "Did you eat?".
Like tense, aspect is a way that verbs represent time. However, rather than locating an event or state in time, the way tense does, aspect describes "the internal temporal constituency of a situation", or in other words, aspect is a way "of conceiving the flow of the process itself". English aspectual distinctions in the past tense include "I went, I used to go, I was going, I had gone"; in the present tense "I lose, I am losing, I have lost, I have been losing, I am going to lose"; and with the future modal "I will see, I will be seeing, I will have seen, I am going to see". What distinguishes these aspects within each tense is not (necessarily) when the event occurs, but how the time in which it occurs is viewed: as complete, ongoing, consequential, planned, etc.
In most dialects of Ancient Greek, aspect is indicated uniquely by verbal morphology. For example, the very frequently used aorist, though a functional preterite in the indicative mood, conveys historic or 'immediate' aspect in the subjunctive and optative. The perfect in all moods is used as an aspectual marker, conveying the sense of a resultant state. E.g. – I see (present); – I saw (aorist); – I am in a state of having seen = I know (perfect).
Many Sino-Tibetan languages, like Mandarin, lack grammatical tense but are rich in aspect (Heine, Kuteva 2010, p. 10).
There is a distinction between grammatical aspect, as described here, and lexical aspect. Other terms for the pair "lexical vs. grammatical" include: "situation vs. viewpoint" and "inner vs. outer". Lexical aspect, also known as aktionsart, is an inherent property of a verb or verb-complement phrase, and is not marked formally. The distinctions made as part of lexical aspect are different from those of grammatical aspect. Typical distinctions are between states ("I owned"), activities ("I shopped"), accomplishments ("I painted a picture"), achievements ("I bought"), and punctual, or semelfactive, events ("I sneezed"). These distinctions are often relevant syntactically. For example, states and activities, but not usually achievements, can be used in English with a prepositional "for"-phrase describing a time duration: "I had a car for five hours", "I shopped for five hours", but not "*I bought a car for five hours". Lexical aspect is sometimes called "Aktionsart", especially by German and Slavic linguists. Lexical or situation aspect is marked in Athabaskan languages.
One of the factors in situation aspect is telicity. Telicity might be considered a kind of lexical aspect, except that it is typically not a property of a verb in isolation, but rather a property of an entire verb "phrase". Achievements, accomplishments and semelfactives have telic situation aspect, while states and activities have atelic situation aspect.
The other factor in situation aspect is duration, which is also a property of a verb phrase. Accomplishments, states, and activities have duration, while achievements and semelfactives do not.
In some languages, aspect and time are very clearly separated, making them much more distinct to their speakers. There are a number of languages that mark aspect much more saliently than time. Prominent in this category are Chinese and American Sign Language, which both differentiate many aspects but rely exclusively on optional time-indicating terms to pinpoint an action with respect to time. In other language groups, for example in most modern Indo-European languages (except Slavic languages), aspect has become almost entirely conflated, in the verbal morphological system, with time.
In Russian, aspect is more salient than tense in narrative. Russian, like other Slavic languages, uses different lexical entries for the different aspects, whereas other languages mark them morphologically, and still others with auxiliaries (e.g., English).
In literary Arabic ( "al-fuṣḥā") the verb has two aspect-tenses: perfective (past), and imperfective (non-past). There is some disagreement among grammarians whether to view the distinction as a distinction in aspect, or tense, or both. The past verb ( "al-fiʿl al-māḍī") denotes an event ( "ḥadaṯ") completed in the past, but it says nothing about the relation of this past event to present status. For example, "waṣala", "arrived", indicates that arrival occurred in the past without saying anything about the present status of the arriver – maybe they stuck around, maybe they turned around and left, etc. – nor about the aspect of the past event except insofar as completeness can be considered aspectual. This past verb is clearly similar if not identical to the Greek aorist, which is considered a tense but is more of an aspect marker. In the Arabic, aorist aspect is the logical consequence of past tense. By contrast, the "Verb of Similarity" ( "al-fiʿl al-muḍāriʿ"), so called because of its resemblance to the active participial noun, is considered to denote an event in the present or future without committing to a specific aspectual sense beyond the incompleteness implied by the tense: ("yaḍribu", he strikes/is striking/will strike/etc.). Those are the only two "tenses" in Arabic (not counting "amr", command or imperative, which the tradition counts as denoting future events.) At least that's the way the tradition sees it. To explicitly mark aspect, Arabic uses a variety of lexical and syntactic devices.
Contemporary Arabic dialects are another matter. One major change from al-fuṣḥā is the use of a prefix particle ( "bi" in Egyptian and Levantine dialects—though it may have a slightly different range of functions in each dialect) to explicitly mark progressive, continuous, or habitual aspect: , "bi-yiktib", he is now writing, writes all the time, etc.
Aspect can mark the stage of an action. The prospective aspect is a combination of tense and aspect that indicates the action is in preparation to take place. The inceptive aspect identifies the beginning stage of an action (e.g. Esperanto uses "ek-", e.g. "Mi ekmanĝas", "I am beginning to eat".) and inchoative and ingressive aspects identify a change of state ("The flowers started blooming") or the start of an action ("He started running"). Aspects of stage continue through progressive, pausative, resumptive, cessive, and terminative.
Important qualifications:
The English tense–aspect system has two morphologically distinct tenses, present and past. No marker of a future tense exists on the verb in English; the futurity of an event may be expressed through the use of the auxiliary verbs "will" and "shall", by a present form plus an adverb, as in "tomorrow we go to New York City", or by some other means. Past is distinguished from present–future, in contrast, with internal modifications of the verb. These two tenses may be modified further for progressive aspect (also called "continuous" aspect), for the perfect, or for both. These two aspectual forms are also referred to as BE +ING and HAVE +EN, respectively, which avoids what may be unfamiliar terminology.
Aspects of the present tense:
(While many elementary discussions of English grammar classify the present perfect as a past tense, it relates the action to the present time. One cannot say of someone now deceased that he "has eaten" or "has been eating". The present auxiliary implies that he is in some way "present" (alive), even if the action denoted is completed (perfect) or partially completed (progressive perfect).)
Aspects of the past tense:
Aspects can also be marked on non-finite forms of the verb: "(to) be eating" (infinitive with progressive aspect), "(to) have eaten" (infinitive with perfect aspect), "having eaten" (present participle or gerund with perfect aspect), etc. The perfect infinitive can further be governed by modal verbs to express various meanings, mostly combining modality with past reference: "I should have eaten" etc. In particular, the modals "will" and "shall" and their subjunctive forms "would" and "should" are used to combine future or hypothetical reference with aspectual meaning:
The uses of the progressive and perfect aspects are quite complex. They may refer to the viewpoint of the speaker:
But they can have other illocutionary forces or additional modal components:
For further discussion of the uses of the various tense–aspect combinations, see Uses of English verb forms.
English expresses some other aspectual distinctions with other constructions. "Used to" + VERB is a past habitual, as in "I used to go to school," and "going to / gonna" + VERB is a prospective, a future situation highlighting current intention or expectation, as in "I'm going to go to school next year."
The aspectual systems of certain dialects of English, such as African-American Vernacular English (see for example habitual be), and of creoles based on English vocabulary, such as Hawaiian Creole English, are quite different from those of standard English, and often reflect a more elaborate paradigm of aspectual distinctions (often at the expense of tense). The following table, appearing originally in Green (2002) shows the possible aspectual distinctions in AAVE in their prototypical, negative and stressed/emphatic affirmative forms:
Although Standard German does not have aspects, many Upper German languages, all West Central German languages, and some more vernacular German languages do make one aspectual distinction, and so do the colloquial languages of many regions, the so-called German regiolects. While officially discouraged in schools and seen as 'bad language', local English teachers like the distinction, because it corresponds well with the English continuous form. It is formed by the conjugated auxiliary verb "sein" ("to be") followed by the preposition "am" and the infinitive, or the nominalized verb. The latter two are phonetically indistinguishable; in writing, capitalization differs: "Ich war am essen" vs. "Ich war am Essen" (I was eating, compared to the Standard German approximation: "Ich war beim Essen"); yet these forms are not standardized and thus are relatively infrequently written down or printed, even in quotations or direct speech.
In Tyrolean and other Bavarian regiolects the prefix *da can be found, which forms perfective aspects. "I hu's gleant" (Ich habe es gelernt = I learnt it) vs. "I hu's daleant" (*Ich habe es DAlernt = I succeeded in learning).
In Dutch (a West Germanic language), two types of continuous form are used. Both types are considered Standard Dutch.
The first type is very similar to the non-standard German type. It is formed by the conjugated auxiliary verb "zijn" ("to be"), followed by "aan het" and the gerund (which in Dutch matches the infinitive). For example:
The second type is formed by one of the conjugated auxiliary verbs "liggen" ("to lie"), "zitten" ("to sit"), "hangen" ("to hang"), "staan" ("to stand") or "lopen" ("to walk"), followed by the preposition "te" and the infinitive. The conjugated verbs indicate the stance of the subject performing or undergoing the action.
Sometimes the meaning of the auxiliary verb is diminished to 'being engaged in'. Take for instance these examples:
In these cases, there is generally an undertone of irritation.
The Slavic languages make a clear distinction between perfective and imperfective aspects; it was in relation to these languages that the modern concept of aspect originally developed.
In Slavic languages, a given verb is, in itself, either perfective or imperfective. Consequently each language contains many pairs of verbs, corresponding to each other in meaning, except that one expresses perfective aspect and the other imperfective. (This may be considered a form of lexical aspect.) Perfective verbs are commonly formed from imperfective ones by the addition of a prefix, or else the imperfective verb is formed from the perfective one by modification of the stem or ending. Suppletion also plays a small role. Perfective verbs cannot generally be used with the meaning of a present tense – their present-tense forms in fact have future reference. An example of such a pair of verbs, from Polish, is given below:
In at least the East Slavic and West Slavic languages, there is a three-way aspect differentiation for verbs of motion with the determinate imperfective, indeterminate imperfective, and perfective. The two forms of imperfective can be used in all three tenses (past, present, and future), but the perfective can only be used with past and future. The indeterminate imperfective expresses habitual aspect (or motion in no single direction), while the determinate imperfective expresses progressive aspect. The difference corresponds closely to that between the English "I (regularly) go to school" and "I am going to school (now)". The three-way difference is given below for the Russian basic (unprefixed) verbs of motion.
When prefixes are attached to Russian verbs of motion they become more or less normal imperfective/perfective pairs, with the indeterminate imperfective becoming the prefixed imperfective and the determinate imperfective becoming the prefixed perfective. For example, prefix "при-" "pri-" + indeterminate "ходи́ть" "khodít'" = "приходи́ть" "prikhodít'" (to arrive (on foot), impf.); and prefix "при-" "pri-" + determinate "идти́" "idtí" = "прийти" "prijtí" (to arrive (on foot), pf.).
Modern Romance languages merge the concepts of aspect and tense but consistently distinguish perfective and imperfective aspects in the past tense. This derives directly from the way the Latin language used to render both aspects and "consecutio temporum".
Italian language example using the verb "mangiare" ("to eat"):
Mood: "indicativo" (indicative)
The "imperfetto"/"trapassato prossimo" contrasts with the "passato remoto"/"trapassato remoto" in that "imperfetto" renders an imperfective (continuous) past while "passato remoto" expresses an aorist (punctual/historical) past.
Other aspects in Italian are rendered with other periphrases, like prospective ("io sto per mangiare" "I'm about to eat", "io starò per mangiare" "I shall be about to eat"), or continuous/progressive ("io sto mangiando" "I'm eating", "io starò mangiando" "I shall be eating").
Finnish and Estonian, among others, have a grammatical aspect contrast of telicity between telic and atelic. Telic sentences signal that the intended goal of an action is achieved. Atelic sentences do not signal whether any such goal has been achieved. The aspect is indicated by the case of the object: accusative is telic and partitive is atelic. For example, the (implicit) purpose of shooting is to kill, such that:
In rare cases corresponding telic and atelic forms can be unrelated by meaning.
Derivational suffixes exist for various aspects. Examples:
There are derivational suffixes for verbs, which carry frequentative, momentane, causative, and inchoative aspect meanings. Also, pairs of verbs differing only in transitivity exist.
The Rapa language (Reo Rapa) was not created by the combination of two languages, but through the introduction of Tahitian to the monolingual Rapa community. Old Rapa words are still used for grammar and sentence structure, but most common words were replaced by Tahitian words. Rapa is similar to English as they both have specific tense words such as "did" or "do".
The Hawaiian language conveys aspect as follows:
The Wuvulu language is a minority language of the Pacific. The Wuvulu verbal aspect is hard to organize because of its number of morpheme combinations and the interaction of semantics between morphemes. Perfective, imperfective negation, simultaneous, and habitual are the four aspect markers in the Wuvulu language.
There are three types of aspects one must consider when analyzing the Tokelauan language: inherent aspect, situation aspect, and viewpoint aspect.
The inherent aspect describes the purpose of a verb and what separates verbs from one another. According to Vendler, inherent aspect can be categorized into four different types: activities, achievements, accomplishments, and states. Simple activities include verbs such as pull, jump, and punch. Some achievements are continue and win. Drive-a-car is an accomplishment while hate is an example of a state. Another way to recognize a state inherent aspect is to note whether or not it changes. For example, if someone were to hate vegetables because they are allergic, this state of hate is unchanging and thus, a state inherent aspect. On the other hand, an achievement, unlike a state, only lasts for a short amount of time. Achievement is the highpoint of an action.
Another type of aspect is situation aspect. Situation aspect describes what one is experiencing in his or her life through a given circumstance; it is therefore his or her understanding of the situation. Situation aspects are abstract notions that are not physically tangible, and they are based upon one's point of view. For example, a professor may say that a student who arrives a minute before each class starts is a punctual student. Based upon the professor's judgment of what punctuality is, he or she may make that assumption about the situation with the student. Situation aspect is first divided into states and occurrences; occurrences are then subdivided into processes and events, and events, in turn, into accomplishments and achievements.
The third type of aspect is viewpoint aspect. Viewpoint aspect can be likened to situation aspect in that both take one's inferences into consideration. However, viewpoint aspect diverges from situation aspect because it concerns how one decides to view or see an event. A classic example is the glass metaphor: is the glass half full or half empty? The choice of half full represents an optimistic viewpoint, while the choice of half empty represents a pessimistic one. Viewpoint aspect separates not only into negative and positive, but into different points of view generally; having two people describe a painting can bring about two different viewpoints. One may describe a situation aspect as perfect or imperfect. A perfect situation aspect entails an event with no reference to time, while an imperfect situation aspect makes reference to time in the observation.
Aspect in Torau is marked with post-verbal particles or clitics. While the system for marking the imperfective aspect is complex and highly developed, it is unclear if Torau marks the perfective and neutral viewpoints. The imperfective clitics index one of the core arguments, usually the nominative subject, and follow the rightmost element in a syntactic structure larger than the word. The two distinct forms for marking the imperfective aspect are "(i)sa-" and "e-". While more work needs to be done on this language, the preliminary hypothesis is that "(i)sa-" encodes the stative imperfective and "e-" encodes the active imperfective. It is also important to note that reduplication always cooccurs with "e-", but it usually does not with "(i)sa-." This example below shows these two imperfective aspect markers giving different meanings to similar sentences.
"Pita ma-to mate=sa-la."
" " Peter RL.3SGS-PST be.dead=IPFV-3SGS
‘Peter was dead.’
"Pita ma-to maa≈mate=e-la."
Peter RF.3SG-PST RD≈be.dead=IPFV-3SGS
‘Peter was dying.’
In Torau, the suffix -"to", which must attach to a preverbal particle, may indicate similar meaning to the perfective aspect. In realis clauses, this suffix conveys an event that is entirely in the past and no longer occurring. When "-to" is used in irrealis clauses, the speaker conveys that the event will definitely occur (Palmer, 2007). Although this suffix isn’t explicitly stated as a perfective viewpoint marker, the meaning that it contributes is very similar to the perfective viewpoint.
Like many Austronesian languages, the verbs of the Malay language follow a system of affixes to express changes in meaning. To express the aspects, Malay uses a number of auxiliary verbs:
Like many Austronesian languages, the verbs of the Philippine languages follow a complex system of affixes to express subtle changes in meaning. However, the verbs in this family of languages are conjugated to express the aspects and not the tenses. Though many of the Philippine languages do not have a fully codified grammar, most of them follow the verb aspects that are demonstrated by Filipino or Tagalog.
Creole languages typically use the unmarked verb for timeless habitual aspect, or for stative aspect, or for perfective aspect in the past. Invariant pre-verbal markers are often used. Non-stative verbs typically can optionally be marked for the progressive, habitual, completive, or irrealis aspect. The progressive in English-based Atlantic Creoles often uses "de" (from English "be"). Jamaican Creole uses "a" (from English "are") or "de" for the present progressive and a combination of the past time marker ("did" , "behn" , "ehn" or "wehn") and the progressive marker ("a" or "de") for the past progressive (e.g. "did a" or "wehn de"). Haitian Creole uses the progressive marker "ap". Some Atlantic Creoles use one marker for both the habitual and progressive aspects. In Tok Pisin, the optional progressive marker follows the verb. Completive markers tend to come from superstrate words like "done" or "finish", and some creoles model the future/irrealis marker on the superstrate word for "go".
American Sign Language (ASL) is similar to many other sign languages in that it has no grammatical tense but many verbal aspects produced by modifying the base verb sign.
An example is illustrated with the verb TELL. The basic form of this sign is produced with the initial posture of the index finger on the chin, followed by a movement of the hand and finger tip toward the indirect object (the recipient of the telling). Inflected into the unrealized inceptive aspect ("to be just about to tell"), the sign begins with the hand moving from in front of the trunk in an arc to the initial posture of the base sign (i.e., index finger touching the chin) while inhaling through the mouth, dropping the jaw, and directing eye gaze toward the verb's object. The posture is then held rather than moved toward the indirect object. During the hold, the signer also stops the breath by closing the glottis. Other verbs (such as "look at", "wash the dishes", "yell", "flirt") are inflected into the unrealized inceptive aspect similarly: The hands used in the base sign move in an arc from in front of the trunk to the initial posture of the underlying verb sign while inhaling, dropping the jaw, and directing eye gaze toward the verb's object (if any), but subsequent movements and postures are dropped as the posture and breath are held.
Other aspects in ASL include the following: stative, inchoative ("to begin to..."), predispositional ("to tend to..."), susceptative ("to... easily"), frequentative ("to... often"), protractive ("to... continuously"), incessant ("to... incessantly"), durative ("to... for a long time"), iterative ("to... over and over again"), intensive ("to... very much"), resultative ("to... completely"), approximative ("to... somewhat"), semblitive ("to appear to..."), increasing ("to... more and more"). Some aspects combine with others to create yet finer distinctions.
Aspect is unusual in ASL in that transitive verbs derived for aspect lose their grammatical transitivity. They remain semantically transitive, typically assuming an object made prominent using a topic marker or mentioned in a previous sentence. See Syntax in ASL for details.
The following aspectual terms are found in the literature. Approximate English equivalents are given.
|
https://en.wikipedia.org/wiki?curid=12948
|
Glucose
Glucose is a simple sugar with the molecular formula C6H12O6. Glucose is the most abundant monosaccharide, a subcategory of carbohydrates. Glucose is mainly made by plants and most algae during photosynthesis from water and carbon dioxide, using energy from sunlight, where it is used to make cellulose in cell walls, the most abundant carbohydrate. In energy metabolism, glucose is the most important source of energy in all organisms. Glucose for metabolism is partially stored as a polymer, in plants mainly as starch and amylopectin and in animals as glycogen. Glucose circulates in the blood of animals as blood sugar. The naturally occurring form of glucose is D-glucose, while L-glucose is produced synthetically in comparatively small amounts and is of lesser importance. Glucose is a monosaccharide containing six carbon atoms and an aldehyde group, and is therefore referred to as an aldohexose. The glucose molecule can exist in an open-chain (acyclic) as well as ring (cyclic) form, the latter being the result of an intramolecular reaction between the aldehyde C atom and the C-5 hydroxyl group to form an intramolecular hemiacetal. In water solution both forms are in equilibrium, and at pH 7 the cyclic form is predominant. Glucose occurs naturally and is found in fruits and other parts of plants in its free state. In animals glucose arises from the breakdown of glycogen in a process known as glycogenolysis.
Glucose, as intravenous sugar solution, is on the World Health Organization's List of Essential Medicines, the most important medications needed in a basic health system.
The name glucose derives through the French from the Greek ('glukos'), which means "sweet", in reference to must, the sweet, first press of grapes in the making of wine. The suffix "-ose" is a chemical classifier, denoting a sugar.
Glucose was first isolated from raisins in 1747 by the German chemist Andreas Marggraf. Glucose was discovered in grapes by Johann Tobias Lowitz in 1792 and recognized as different from cane sugar (sucrose). Glucose is the term coined by Jean Baptiste Dumas in 1838, which has prevailed in the chemical literature. Friedrich August Kekulé proposed the term dextrose (from Latin dexter = right), because in aqueous solution of glucose, the plane of linearly polarized light is turned to the right. In contrast, D-fructose (a ketohexose) and L-glucose turn linearly polarized light to the left. The earlier notation according to the rotation of the plane of linearly polarized light ("d" and "l"-nomenclature) was later abandoned in favor of the D- and L-notation, which refers to the absolute configuration of the asymmetric center farthest from the carbonyl group, in concordance with the configuration of D- or L-glyceraldehyde.
Since glucose is a basic necessity of many organisms, a correct understanding of its chemical makeup and structure contributed greatly to a general advancement in organic chemistry. This understanding occurred largely as a result of the investigations of Emil Fischer, a German chemist who received the 1902 Nobel Prize in Chemistry for his findings. The synthesis of glucose established the structure of organic material and consequently formed the first definitive validation of Jacobus Henricus van 't Hoff's theories of chemical kinetics and the arrangements of chemical bonds in carbon-bearing molecules. Between 1891 and 1894, Fischer established the stereochemical configuration of all the known sugars and correctly predicted the possible isomers, applying van 't Hoff's theory of asymmetrical carbon atoms. The names initially referred to the natural substances. Their enantiomers were given the same name with the introduction of systematic nomenclatures, taking into account absolute stereochemistry (e.g. Fischer nomenclature, / nomenclature).
For the discovery of the metabolism of glucose Otto Meyerhof received the Nobel Prize in Physiology or Medicine in 1922. Hans von Euler-Chelpin was awarded the Nobel Prize in Chemistry along with Arthur Harden in 1929 for their "research on the fermentation of sugar and their share of enzymes in this process". In 1947, Bernardo Houssay (for his discovery of the role of the pituitary gland in the metabolism of glucose and the derived carbohydrates) as well as Carl and Gerty Cori (for their discovery of the conversion of glycogen from glucose) received the Nobel Prize in Physiology or Medicine. In 1970, Luis Leloir was awarded the Nobel Prize in Chemistry for the discovery of glucose-derived sugar nucleotides in the biosynthesis of carbohydrates.
With six carbon atoms, it is classed as a hexose, a subcategory of the monosaccharides. D-Glucose is one of the sixteen aldohexose stereoisomers. The D-isomer, D-glucose, also known as dextrose, occurs widely in nature, but the L-isomer, L-glucose, does not. Glucose can be obtained by hydrolysis of carbohydrates such as milk sugar (lactose), cane sugar (sucrose), maltose, cellulose, glycogen, etc. It is commonly manufactured commercially from cornstarch by hydrolysis via pressurized steaming at controlled pH in a jet, followed by further enzymatic depolymerization. Unbonded glucose is one of the main ingredients of honey. All forms of glucose are colorless and easily soluble in water, acetic acid, and several other solvents. They are only sparingly soluble in methanol and ethanol.
Glucose is a monosaccharide with formula C6H12O6 or H−(C=O)−(CHOH)5−H, whose five hydroxyl (OH) groups are arranged in a specific way along its six-carbon backbone. Glucose is usually present in solid form as a monohydrate with a closed pyran ring (dextrose hydrate). In aqueous solution, on the other hand, it exists to a small extent as the open-chain form and is present predominantly as α- or β-pyranose, which interconvert by mutarotation. From aqueous solutions, the three known forms can be crystallized: α-glucopyranose, β-glucopyranose and β-glucopyranose hydrate. Glucose is a building block of the disaccharides lactose and sucrose (cane or beet sugar), of oligosaccharides such as raffinose and of polysaccharides such as starch and amylopectin, glycogen or cellulose. The glass transition temperature of glucose is 31 °C and the Gordon–Taylor constant (an experimentally determined constant for the prediction of the glass transition temperature for different mass fractions of a mixture of two substances) is 4.5.
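As an illustration of how that constant is used (a sketch based on the standard two-component form of the Gordon–Taylor relation, with water assumed here as the second component and temperatures taken in kelvin), the glass transition temperature of a glucose–water mixture can be estimated as

T_{g,\mathrm{mix}} = \frac{w_1 T_{g,1} + k\, w_2 T_{g,2}}{w_1 + k\, w_2}

where w_1 and w_2 are the mass fractions of glucose and water, T_{g,1} and T_{g,2} are their individual glass transition temperatures, and k \approx 4.5 is the Gordon–Taylor constant quoted above.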
In its fleeting open-chain form, the glucose molecule has an open (as opposed to cyclic) and unbranched backbone of six carbon atoms, C-1 through C-6; where C-1 is part of an aldehyde group H(C=O)−, and each of the other five carbons bears one hydroxyl group −OH. The remaining bonds of the backbone carbons are satisfied by hydrogen atoms −H. Therefore, glucose is both a hexose and an aldose, or an aldohexose. The aldehyde group makes glucose a reducing sugar giving a positive reaction with the Fehling test.
Each of the four carbons C-2 through C-5 is a stereocenter, meaning that its four bonds connect to four different substituents. (Carbon C-2, for example, connects to −(C=O)H, −OH, −H, and −(CHOH)4H.) In -glucose, these four parts must be in a specific three-dimensional arrangement. Namely, when the molecule is drawn in the Fischer projection, the hydroxyls on C-2, C-4, and C-5 must be on the right side, while that on C-3 must be on the left side.
The positions of those four hydroxyls are exactly reversed in the Fischer diagram of L-glucose. D- and L-glucose are two of the 16 possible aldohexoses; the other 14 are allose, altrose, galactose, gulose, idose, mannose, and talose, each with two enantiomers, "D-" and "L-".
The linear form of glucose makes up less than 0.02% of the glucose molecules in a water solution. The rest is one of two cyclic forms of glucose that are formed when the hydroxyl group on carbon 5 (C-5) bonds to the aldehyde carbon 1 (C-1).
In solutions, the open-chain form of glucose (either D- or L-) exists in equilibrium with several cyclic isomers, each containing a ring of carbons closed by one oxygen atom. In aqueous solution, however, more than 99% of glucose molecules, at any given time, exist as pyranose forms. The open-chain form is limited to about 0.25%, and furanose forms exist in negligible amounts. The terms "glucose" and "D-glucose" are generally used for these cyclic forms as well. The ring arises from the open-chain form by an intramolecular nucleophilic addition reaction between the aldehyde group (at C-1) and either the C-4 or C-5 hydroxyl group, forming a hemiacetal linkage, −C(OH)H−O−.
The reaction between C-1 and C-5 yields a six-membered heterocyclic system called a pyranose, which is a monosaccharide sugar (hence "-ose") containing a derivatised pyran skeleton. The (much rarer) reaction between C-1 and C-4 yields a five-membered furanose ring, named after the cyclic ether furan. In either case, each carbon in the ring has one hydrogen and one hydroxyl attached, except for the last carbon (C-4 or C-5) where the hydroxyl is replaced by the remainder of the open molecule (which is −(C(CH2OH)HOH)−H or −(CHOH)−H respectively).
The ring-closing reaction makes carbon C-1 chiral, too, since its four bonds lead to −H, to −OH, to carbon C-2, and to the ring oxygen. These four parts of the molecule may be arranged around C-1 (the anomeric carbon) in two distinct ways, designated by the prefixes "α-" and "β-". When a glucopyranose molecule is drawn in the Haworth projection, the designation "α-" means that the hydroxyl group attached to C-1 and the −CH2OH group at C-5 lie on opposite sides of the ring's plane (a "trans" arrangement), while "β-" means that they are on the same side of the plane (a "cis" arrangement). Therefore, the open-chain isomer D-glucose gives rise to four distinct cyclic isomers: α-D-glucopyranose, β-D-glucopyranose, α-D-glucofuranose, and β-D-glucofuranose. These five structures exist in equilibrium and interconvert, and the interconversion is much more rapid with acid catalysis.
The other open-chain isomer, L-glucose, similarly gives rise to four distinct cyclic forms of L-glucose, each the mirror image of the corresponding form of D-glucose.
The rings are not planar, but are twisted in three dimensions. The glucopyranose ring (α or β) can assume several non-planar shapes, analogous to the "chair" and "boat" conformations of cyclohexane. Similarly, the glucofuranose ring may assume several shapes, analogous to the "envelope" conformations of cyclopentane.
In the solid state, only the glucopyranose forms are observed, forming colorless crystalline solids that are highly soluble in water and acetic acid but poorly soluble in methanol and ethanol. They melt at ("α") and ("β"), and decompose starting at 188 °C with release of various volatile products, ultimately leaving a residue of carbon.
However, some derivatives of glucofuranose, such as 1,2-"O"-isopropylidene-D-glucofuranose, are stable and can be obtained pure as crystalline solids. For example, reaction of α-D-glucose with "para"-tolylboronic acid reforms the normal pyranose ring to yield the 4-fold ester α-D-glucofuranose-1,2∶3,5-bis("p"-tolylboronate).
Each glucose isomer is subject to rotational isomerism. Within the cyclic form of glucose, rotation may occur around the O6-C6-C5-O5 torsion angle, termed the "ω"-angle, to form three staggered rotamer conformations called "gauche"-"gauche" (gg), "gauche"-"trans" (gt) and "trans"-"gauche" (tg). There is a tendency for the "ω"-angle to adopt a "gauche" conformation, a tendency that is attributed to the gauche effect.
Mutarotation consists of a temporary reversal of the ring-forming reaction, resulting in the open-chain form, followed by a reforming of the ring. The ring closure step may use a different −OH group than the one recreated by the opening step (thus switching between pyranose and furanose forms), or the new hemiacetal group created on C-1 may have the same or opposite handedness as the original one (thus switching between the α and β forms). Thus, though the open-chain form is barely detectable in solution, it is an essential component of the equilibrium.
The open-chain form is thermodynamically unstable, and it spontaneously isomerizes to the cyclic forms. (Although the ring closure reaction could in theory create four- or three-atom rings, these would be highly strained, and are not observed in practice.) In solutions at room temperature, the four cyclic isomers interconvert over a time scale of hours, in a process called mutarotation. Starting from any proportions, the mixture converges to a stable ratio of α:β 36:64. The ratio would be α:β 11:89 if it were not for the influence of the anomeric effect. Mutarotation is considerably slower at temperatures close to .
Whether in water or in the solid form, D-(+)-glucose is dextrorotatory, meaning it will rotate the direction of polarized light clockwise as seen looking toward the light source. The effect is due to the chirality of the molecules, and indeed the mirror-image isomer, L-(−)-glucose, is levorotatory (rotates polarized light counterclockwise) by the same amount. The strength of the effect is different for each of the five tautomers.
Note that the D- prefix does not refer directly to the optical properties of the compound. It indicates that the C-5 chiral center has the same handedness as that of D-glyceraldehyde (which was so labeled because it is dextrorotatory). The fact that D-glucose is dextrorotatory is a combined effect of its four chiral centers, not just of C-5; and indeed some of the other D-aldohexoses are levorotatory.
The conversion between the two anomers can be observed in a polarimeter, since pure α-D-glucose has a specific rotation angle of +112.2°·ml/(dm·g) and pure β-D-glucose one of +17.5°·ml/(dm·g). When equilibrium has been reached after a certain time due to mutarotation, the angle of rotation is +52.7°·ml/(dm·g). Adding acid or base greatly accelerates this transformation. The equilibration takes place via the open-chain aldehyde form.
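As a rough consistency check using only the rotation values quoted above, the equilibrium fraction x of the α anomer can be estimated by linear interpolation between the two pure-anomer rotations:

112.2\,x + 17.5\,(1 - x) = 52.7 \quad\Rightarrow\quad x = \frac{52.7 - 17.5}{112.2 - 17.5} \approx 0.37

that is, roughly 37% α and 63% β at equilibrium, in line with the α:β ratio of about 36:64 mentioned above for mutarotation.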
In dilute sodium hydroxide or other dilute bases, the monosaccharides mannose, glucose and fructose interconvert (via a Lobry de Bruyn–Alberda–van Ekenstein transformation), so that a balance between these isomers is formed. This reaction proceeds via an enediol:
Dextrose is commonly used in homemade rockets built by amateur rocketeers. The dextrose is commonly mixed with a solid oxidizer such as potassium nitrate to make "rocket candy" or the weaker KNDX propellant.
Glucose is the most abundant monosaccharide. Glucose is also the most widely used aldohexose in most living organisms. One possible explanation for this is that glucose has a lower tendency than other aldohexoses to react nonspecifically with the amine groups of proteins. This reaction—glycation—impairs or destroys the function of many proteins, e.g. in glycated hemoglobin. Glucose's low rate of glycation can be attributed to its having a more stable cyclic form compared to other aldohexoses, which means it spends less time than they do in its reactive open-chain form. The reason glucose has the most stable cyclic form of all the aldohexoses is that its hydroxy groups (with the exception of the hydroxy group on the anomeric carbon of D-glucose) are in the equatorial position. Presumably, glucose is the most abundant natural monosaccharide because it is less glycated with proteins than other monosaccharides. Another hypothesis is that glucose, being the only D-aldohexose that has all five hydroxy substituents in the equatorial position in the form of β-D-glucose, is more readily accessible to chemical reactions, for example for esterification or acetal formation. For this reason, D-glucose is also a highly preferred building block in natural polysaccharides (glycans). Polysaccharides that are composed solely of glucose are termed glucans.
Glucose is produced by plants through photosynthesis, using sunlight, water and carbon dioxide, and can be used by all living organisms as an energy and carbon source. However, most glucose does not occur in its free form, but in the form of its polymers, i.e. lactose, sucrose, starch and others, which are energy reserve substances, and cellulose and chitin, which are components of the cell walls of plants and of fungi and arthropods, respectively. These polymers are degraded to glucose during food intake by animals, fungi and bacteria using enzymes. All animals are also able to produce glucose themselves from certain precursors as the need arises. Nerve cells, cells of the renal medulla and erythrocytes depend on glucose for their energy production. In adult humans, there are about 18 g of glucose, of which about 4 g are present in the blood. Approximately 180 to 220 g of glucose are produced in the liver of an adult in 24 hours.
Many of the long-term complications of diabetes (e.g., blindness, kidney failure, and peripheral neuropathy) are probably due to the glycation of proteins or lipids. In contrast, enzyme-regulated addition of sugars to protein is called glycosylation and is essential for the function of many proteins.
Ingested glucose initially binds to the receptor for sweet taste on the tongue in humans. This complex of the proteins T1R2 and T1R3 makes it possible to identify glucose-containing food sources. Glucose mainly comes from food (about 300 g per day are produced by conversion of food), but it is also synthesized from other metabolites in the body's cells. In humans, the breakdown of glucose-containing polysaccharides begins in part already during chewing, by means of the amylase contained in saliva, and is continued by maltase, lactase and sucrase on the brush border of the small intestine. Glucose is a building block of many carbohydrates and can be split off from them using certain enzymes. Glucosidases, a subgroup of the glycosidases, first catalyze the hydrolysis of long-chain glucose-containing polysaccharides, removing terminal glucose. In turn, disaccharides are mostly degraded by specific glycosidases to glucose. The names of the degrading enzymes are often derived from the particular poly- and disaccharide; inter alia, for the degradation of polysaccharide chains there are amylases (named after amylose, a component of starch), cellulases (named after cellulose), chitinases (named after chitin) and more. Furthermore, for the cleavage of disaccharides, there are maltase, lactase, sucrase, trehalase and others. In humans, about 70 genes are known that code for glycosidases. They have functions in the digestion and degradation of glycogen, sphingolipids, mucopolysaccharides and poly(ADP-ribose). Humans do not produce cellulases or chitinases, but the bacteria in the gut flora do.
To pass into or out of cells and cell compartments, glucose requires special transport proteins from the major facilitator superfamily. In the small intestine (more precisely, in the jejunum), glucose is taken up into the intestinal epithelial cells by a secondary active transport mechanism, sodium ion-glucose symport, via the sodium/glucose cotransporter 1 (SGLT1). Transfer across the basolateral side of the intestinal epithelial cells occurs via the glucose transporter GLUT2, which also mediates uptake into liver cells, kidney cells, cells of the islets of Langerhans, nerve cells, astrocytes and tanycytes. Glucose enters the liver via the portal vein and is stored there as glycogen. In the liver cell, it is phosphorylated by glucokinase at position 6 to glucose-6-phosphate, which cannot leave the cell. With the help of glucose-6-phosphatase, glucose-6-phosphate is converted back into glucose exclusively in the liver, if necessary, so that it is available for maintaining a sufficient blood glucose concentration. In other cells, uptake happens by passive transport through one of the 14 GLUT proteins. In these other cell types, phosphorylation occurs through a hexokinase, whereupon glucose can no longer diffuse out of the cell.
The glucose transporter GLUT1 is produced by most cell types and is of particular importance for nerve cells and pancreatic β-cells. GLUT3 is highly expressed in nerve cells. Glucose from the bloodstream is taken up via GLUT4 by muscle cells (of the skeletal muscle and heart muscle) and fat cells. GLUT14 is expressed exclusively in the testes. Excess glucose is broken down and converted into fatty acids, which are stored as triacylglycerides. In the kidneys, glucose in the urine is reabsorbed via SGLT1 and SGLT2 in the apical cell membranes and passed on via GLUT2 in the basolateral cell membranes. About 90% of kidney glucose reabsorption is via SGLT2 and about 3% via SGLT1.
In plants and some prokaryotes, glucose is a product of photosynthesis. Glucose is also formed by the breakdown of polymeric forms of glucose like glycogen (in animals and fungi) or starch (in plants). The cleavage of glycogen is termed glycogenolysis, the cleavage of starch is called starch degradation.
The metabolic pathway that begins with molecules containing two to four carbon atoms (C) and ends in the glucose molecule containing six carbon atoms is called gluconeogenesis and occurs in all living organisms. The smaller starting materials are the result of other metabolic pathways. Ultimately almost all biomolecules come from the assimilation of carbon dioxide in plants during photosynthesis. The free energy of formation of α-D-glucose is −917.2 kilojoules per mole. In humans, gluconeogenesis occurs in the liver and kidney, but also in other cell types. In the liver about 150 g of glycogen are stored, in skeletal muscle about 250 g. However, the glucose released in muscle cells upon cleavage of the glycogen cannot be delivered to the circulation, because glucose is phosphorylated there by hexokinase and glucose-6-phosphatase, which would remove the phosphate group, is not expressed. Unlike glucose, glucose-6-phosphate has no transport protein. Gluconeogenesis allows the organism to build up glucose from other metabolites, including lactate or certain amino acids, while consuming energy. The renal tubular cells can also produce glucose.
In humans, glucose is metabolised by glycolysis and the pentose phosphate pathway. Glycolysis is used by all living organisms, with small variations, and all organisms generate energy from the breakdown of monosaccharides. In the further course of metabolism, glucose can be completely degraded via oxidative decarboxylation, the Krebs cycle (synonym "citric acid cycle") and the respiratory chain to water and carbon dioxide. If there is not enough oxygen available for this, glucose degradation in animals occurs anaerobically to lactate via lactic acid fermentation and releases less energy. Muscular lactate enters the liver through the bloodstream in mammals, where gluconeogenesis occurs (Cori cycle). With a high supply of glucose, the metabolite acetyl-CoA from the Krebs cycle can also be used for fatty acid synthesis. Glucose is also used to replenish the body's glycogen stores, which are mainly found in liver and skeletal muscle. These processes are hormonally regulated.
In other living organisms, other forms of fermentation can occur. The bacterium "Escherichia coli" can grow on nutrient media containing glucose as the sole carbon source. In some bacteria and, in modified form, also in archaea, glucose is degraded via the Entner-Doudoroff pathway.
Use of glucose as an energy source in cells is by either aerobic respiration, anaerobic respiration, or fermentation. The first step of glycolysis is the phosphorylation of glucose by a hexokinase to form glucose 6-phosphate. The main reason for the immediate phosphorylation of glucose is to prevent its diffusion out of the cell as the charged phosphate group prevents glucose 6-phosphate from easily crossing the cell membrane. Furthermore, addition of the high-energy phosphate group activates glucose for subsequent breakdown in later steps of glycolysis. At physiological conditions, this initial reaction is irreversible.
In anaerobic respiration, one glucose molecule produces a net gain of two ATP molecules (four ATP molecules are produced during glycolysis through substrate-level phosphorylation, but two are required by enzymes used during the process). In aerobic respiration, a molecule of glucose is much more profitable in that a maximum net production of 30 or 32 ATP molecules (depending on the organism) through oxidative phosphorylation is generated.
Tumor cells often grow comparatively quickly and consume an above-average amount of glucose by glycolysis, which leads to the formation of lactate, the end product of fermentation in mammals, even in the presence of oxygen. This effect is called the Warburg effect. To support the increased uptake of glucose in tumors, various SGLTs and GLUTs are overproduced.
In yeast, ethanol is fermented at high glucose concentrations, even in the presence of oxygen (which normally leads to respiration but not to fermentation). This effect is called the Crabtree effect.
Glucose is a ubiquitous fuel in biology. It is used as an energy source in organisms from bacteria to humans, through either aerobic respiration, anaerobic respiration (in bacteria), or fermentation. Glucose is the human body's key source of energy, through aerobic respiration, providing about 3.75 kilocalories (16 kilojoules) of food energy per gram. Breakdown of carbohydrates (e.g., starch) yields mono- and disaccharides, most of which are glucose. Through glycolysis and later in the reactions of the citric acid cycle and oxidative phosphorylation, glucose is oxidized to eventually form carbon dioxide and water, yielding energy mostly in the form of ATP. The insulin reaction, and other mechanisms, regulate the concentration of glucose in the blood. The physiological caloric value of glucose is, depending on the source, 16.2 kilojoules per gram or 15.7 kJ/g (3.74 kcal/g). The high availability of carbohydrates from plant biomass has led, especially in microorganisms, to the evolution of a variety of methods for utilizing glucose as an energy and carbon store. These pathways differ in which end products can no longer be used for energy production. The presence of individual genes, and their gene products, the enzymes, determines which reactions are possible. The metabolic pathway of glycolysis is used by almost all living beings. An essential difference in the use of glycolysis is the recovery of NADPH as a reductant for anabolism that would otherwise have to be generated indirectly.
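As a quick unit check (simple arithmetic, not an additional measurement), the two ways of stating the caloric value quoted above are consistent:

$$3.75\ \mathrm{kcal/g} \times 4.184\ \mathrm{kJ/kcal} \approx 15.7\ \mathrm{kJ/g},$$

so the 3.74–3.75 kcal/g and 15.7 kJ/g figures describe the same value, while the 16.2 kJ/g figure stems from a different source, as noted.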
Glucose and oxygen supply almost all the energy for the brain, so its availability influences psychological processes. When glucose is low, psychological processes requiring mental effort (e.g., self-control, effortful decision-making) are impaired. In the brain, which is dependent on glucose and oxygen as the major source of energy, the glucose concentration is usually 4 to 6 mM (5 mM equals 90 mg/dL), but decreases to 2 to 3 mM when fasting. Confusion occurs below 1 mM and coma at lower levels.
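The equivalence quoted above (5 mM ≈ 90 mg/dL) is a straightforward conversion using the molar mass of glucose (about 180 g/mol); the arithmetic, not taken from the source, is:

$$5\ \mathrm{mmol/L} \times 180\ \mathrm{mg/mmol} = 900\ \mathrm{mg/L} = 90\ \mathrm{mg/dL}.$$

The same factor (dividing mg/dL by 18 to obtain mM) links the fasting range of 70 to 100 mg/dL to the roughly 4 to 5.5 mM mentioned below.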
The glucose in the blood is called blood sugar. Blood sugar levels are regulated by glucose-binding nerve cells in the hypothalamus. In addition, glucose in the brain binds to glucose receptors of the reward system in the nucleus accumbens. The binding of glucose to the sweet receptor on the tongue induces a release of various hormones of energy metabolism, either through glucose or through other sugars, leading to an increased cellular uptake and lower blood sugar levels. Artificial sweeteners do not lower blood sugar levels.
The blood sugar content of a healthy person in the short-term fasting state, e.g. after overnight fasting, is about 70 to 100 mg/dL of blood (4 to 5.5 mM). In blood plasma, the measured values are about 10–15% higher. In addition, the values in the arterial blood are higher than the concentrations in the venous blood, since glucose is absorbed into the tissue during the passage of the capillary bed. Also in the capillary blood, which is often used for blood sugar determination, the values are sometimes higher than in the venous blood. The glucose content of the blood is regulated by the hormones insulin, the incretins and glucagon. Insulin lowers the glucose level, glucagon increases it. Furthermore, the hormones adrenaline, thyroxine, glucocorticoids, somatotropin and adrenocorticotropin lead to an increase in the glucose level. There is also a hormone-independent regulation, which is referred to as glucose autoregulation. After food intake, the blood sugar concentration increases. Values over 180 mg/dL in venous whole blood are pathological and are termed hyperglycemia; values below 40 mg/dL are termed hypoglycemia. When needed, glucose is released into the bloodstream by glucose-6-phosphatase from glucose-6-phosphate originating from liver and kidney glycogen, thereby regulating the homeostasis of the blood glucose concentration. In ruminants, the blood glucose concentration is lower (60 mg/dL in cattle and 40 mg/dL in sheep), because the carbohydrates are converted more extensively by their gut flora into short-chain fatty acids.
Some glucose is converted to lactic acid by astrocytes, which is then utilized as an energy source by brain cells; some glucose is used by intestinal cells and red blood cells, while the rest reaches the liver, adipose tissue and muscle cells, where it is absorbed and stored as glycogen (under the influence of insulin). Liver cell glycogen can be converted to glucose and returned to the blood when insulin is low or absent; muscle cell glycogen is not returned to the blood because of a lack of enzymes. In fat cells, glucose is used to power reactions that synthesize some fat types and have other purposes. Glycogen is the body's "glucose energy storage" mechanism, because it is much more "space efficient" and less reactive than glucose itself.
As a result of its importance in human health, glucose is an analyte in glucose tests that are common medical blood tests. Eating or fasting prior to taking a blood sample has an effect on analyses for glucose in the blood; a high fasting glucose blood sugar level may be a sign of prediabetes or diabetes mellitus.
The glycemic index is an indicator of the speed of resorption and conversion to blood glucose levels from ingested carbohydrates, measured as the area under the curve of blood glucose levels after consumption in comparison to glucose (glucose is defined as 100). The clinical importance of the glycemic index is controversial, as foods with high fat contents slow the resorption of carbohydrates and lower the glycemic index, e.g. ice cream. An alternative indicator is the insulin index, measured as the impact of carbohydrate consumption on the blood insulin levels. The glycemic load is an indicator for the amount of glucose added to blood glucose levels after consumption, based on the glycemic index and the amount of consumed food.
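As a small illustration of the definition above, the glycemic load of a portion can be computed from the glycemic index and the grams of available carbohydrate. The following is a minimal sketch in C using the standard formula GL = GI × carbohydrate (g) / 100; the function name and example values are illustrative, not taken from the source.

```c
#include <stdio.h>

/* Glycemic load of a food portion: GL = GI * available carbohydrate (g) / 100.
   Illustrative helper, not part of any cited standard. */
static double glycemic_load(double glycemic_index, double carbs_g)
{
    return glycemic_index * carbs_g / 100.0;
}

int main(void)
{
    /* Pure glucose is the reference food with GI = 100;
       a 10 g portion therefore has a glycemic load of 10. */
    printf("GL = %.1f\n", glycemic_load(100.0, 10.0));
    return 0;
}
```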
Organisms use glucose as a precursor for the synthesis of several important substances. Starch, cellulose, and glycogen ("animal starch") are common glucose polymers (polysaccharides). Some of these polymers (starch or glycogen) serve as energy stores, while others (cellulose and chitin, which is made from a derivative of glucose) have structural roles. Oligosaccharides of glucose combined with other sugars serve as important energy stores. These include lactose, the predominant sugar in milk, which is a glucose-galactose disaccharide, and sucrose, another disaccharide which is composed of glucose and fructose. Glucose is also added onto certain proteins and lipids in a process called glycosylation. This is often critical for their functioning. The enzymes that join glucose to other molecules usually use phosphorylated glucose to power the formation of the new bond by coupling it with the breaking of the glucose-phosphate bond.
Other than its direct use as a monomer, glucose can be broken down to synthesize a wide variety of other biomolecules. This is important, as glucose serves both as a primary store of energy and as a source of organic carbon. Glucose can be broken down and converted into lipids. It is also a precursor for the synthesis of other important molecules such as vitamin C (ascorbic acid). In living organisms, glucose is converted to several other chemical compounds that are the starting material for various metabolic pathways. Among them, all other monosaccharides such as fructose (via the polyol pathway), mannose (the epimer of glucose at position 2), galactose (the epimer at position 4), fucose, various uronic acids and the amino sugars are produced from glucose. In addition to the phosphorylation to glucose-6-phosphate, which is part of glycolysis, glucose can be oxidized during its degradation to glucono-1,5-lactone. Glucose is used in some bacteria as a building block in trehalose or dextran biosynthesis and in animals as a building block of glycogen. Glucose can also be converted by bacterial xylose isomerase to fructose. In addition, glucose metabolites produce all nonessential amino acids, sugar alcohols such as mannitol and sorbitol, fatty acids, cholesterol and nucleic acids. Finally, glucose is used as a building block in the glycosylation of proteins to glycoproteins, glycolipids, peptidoglycans, glycosides and other substances (catalyzed by glycosyltransferases) and can be cleaved from them by glycosidases.
Diabetes is a metabolic disorder in which the body is unable to regulate blood glucose levels, either because of a lack of insulin in the body or because cells in the body fail to respond properly to insulin. Each of these situations can be brought about by persistently elevated blood glucose levels, through pancreatic burnout and insulin resistance. The pancreas is the organ responsible for the secretion of the hormones insulin and glucagon. Insulin is a hormone that regulates glucose levels, allowing the body's cells to absorb and use glucose. Without it, glucose cannot enter the cell and therefore cannot be used as fuel for the body's functions. If the pancreas is exposed to persistently elevated blood glucose levels, the insulin-producing cells in the pancreas could be damaged, causing a lack of insulin in the body. Insulin resistance occurs when the pancreas tries to produce more and more insulin in response to persistently elevated blood glucose levels. Eventually, the rest of the body becomes resistant to the insulin that the pancreas is producing, thereby requiring more insulin to achieve the same blood glucose-lowering effect and forcing the pancreas to produce even more insulin to overcome the resistance. This vicious cycle contributes to pancreatic burnout and the progression of diabetes.
To monitor the body's response to blood glucose-lowering therapy, glucose levels can be measured. Blood glucose monitoring can be performed by multiple methods, such as the fasting glucose test, which measures the level of glucose in the blood after 8 hours of fasting. Another test is the 2-hour glucose tolerance test (GTT): for this test, the person has a fasting glucose test done, then drinks a 75-gram glucose drink and is retested. This test measures the ability of the person's body to process glucose. Over time, the blood glucose levels should decrease as insulin allows glucose to be taken up by cells and to exit the blood stream.
Individuals with diabetes or other conditions that result in low blood sugar often carry small amounts of sugar in various forms. One sugar commonly used is glucose, often in the form of glucose tablets (glucose pressed into a tablet shape sometimes with one or more other ingredients as a binder), hard candy, or sugar packet.
Most dietary carbohydrates contain glucose, either as their only building block (as in the polysaccharides starch and glycogen), or together with another monosaccharide (as in the disaccharides sucrose and lactose). Unbound glucose is one of the main ingredients of honey.
Glucose is produced industrially from starch by enzymatic hydrolysis using glucoamylase or by the use of acids. Enzymatic hydrolysis has largely displaced acid-catalyzed hydrolysis. The result is glucose syrup (enzymatically produced, with more than 90% glucose in the dry matter) with an annual worldwide production volume of 20 million tonnes (as of 2011). This is the reason for the former common name "starch sugar". The amylases most often come from "Bacillus licheniformis" or "Bacillus subtilis" (strain MN-385), which are more thermostable than the originally used enzymes. Starting in 1982, pullulanases from "Aspergillus niger" were used in the production of glucose syrup to convert amylopectin into amylose (debranching), thereby increasing the yield of glucose. The reaction is carried out at pH 4.6–5.2 and a temperature of 55–60 °C. Corn syrup has between 20% and 95% glucose in the dry matter. The Japanese form of glucose syrup, Mizuame, is made from sweet potato or rice starch. Maltodextrin contains about 20% glucose.
Many crops can be used as the source of starch. Maize, rice, wheat, cassava, potato, barley, sweet potato, corn husk and sago are all used in various parts of the world. In the United States, corn starch (from maize) is used almost exclusively. Some commercial glucose occurs as a component of invert sugar, a roughly 1:1 mixture of glucose and fructose that is produced from sucrose. In principle, cellulose could be hydrolysed to glucose, but this process is not yet commercially practical.
In the USA, almost exclusively corn (more precisely, corn syrup) is used as the glucose source for the production of isoglucose, a mixture of glucose and fructose, since fructose has a higher sweetening power at the same physiological calorific value of 374 kilocalories per 100 g. The annual world production of isoglucose is 8 million tonnes (as of 2011). When made from corn syrup, the final product is high-fructose corn syrup (HFCS).
Glucose is mainly used for the production of fructose and in the production of glucose-containing foods. In foods, it is used as a sweetener, humectant, to increase the volume and to create a softer mouthfeel.
Various sources of glucose, such as grape juice (for wine) or malt (for beer), are used for fermentation to ethanol during the production of alcoholic beverages. Most soft drinks in the US use HFCS-55 (with a fructose content of 55% in the dry mass), while most other HFCS-sweetened foods in the US use HFCS-42 (with a fructose content of 42% in the dry mass). In neighboring Mexico, by contrast, soft drinks are sweetened with cane sugar, which has a higher sweetening power. In addition, glucose syrup is used, inter alia, in the production of confectionery such as candies, toffee and fondant. Typical chemical reactions of glucose when heated under water-free conditions are caramelization and, in the presence of amino acids, the Maillard reaction.
In addition, various organic acids can be biotechnologically produced from glucose, for example by fermentation with "Clostridium thermoaceticum" to produce acetic acid, with "Penicillium notatum" for the production of araboascorbic acid, with "Rhizopus delemar" for the production of fumaric acid, with "Aspergillus niger" for the production of gluconic acid, with "Candida brumptii" to produce isocitric acid, with "Aspergillus terreus" for the production of itaconic acid, with "Pseudomonas fluorescens" for the production of 2-ketogluconic acid, with "Gluconobacter suboxydans" for the production of 5-ketogluconic acid, with "Aspergillus oryzae" for the production of kojic acid, with "Lactobacillus delbrueckii" for the production of lactic acid, with "Lactobacillus brevis" for the production of malic acid, with "Propionibacter shermanii" for the production of propionic acid, with "Pseudomonas aeruginosa" for the production of pyruvic acid and with "Gluconobacter suboxydans" for the production of tartaric acid.
Specifically, when a glucose molecule is to be detected at a certain position in a larger molecule, nuclear magnetic resonance spectroscopy, X-ray crystallography analysis or lectin immunostaining is performed with concanavalin A reporter enzyme conjugate (that binds only glucose or mannose).
These reactions have only historical significance:
The Fehling test is a classic method for the detection of aldoses. Due to mutarotation, glucose is always present to a small extent as an open-chain aldehyde. By adding the Fehling reagents (Fehling's solution I and Fehling's solution II), the aldehyde group is oxidized to a carboxylic acid, while the Cu2+ tartrate complex is reduced to Cu+, forming a brick-red precipitate (Cu2O).
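The underlying redox chemistry can be summarized by the usual textbook net equation (not quoted from the source), in which the open-chain aldehyde group is oxidized while copper(II) is reduced:

$$\mathrm{R{-}CHO} + 2\,\mathrm{Cu^{2+}} + 5\,\mathrm{OH^-} \longrightarrow \mathrm{R{-}COO^-} + \mathrm{Cu_2O}\downarrow + 3\,\mathrm{H_2O}.$$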
In the Tollens test, after addition of ammoniacal AgNO3 to the sample solution, Ag+ is reduced by glucose to elemental silver.
In Barfoed's test, a solution of dissolved copper acetate, sodium acetate and acetic acid is added to the solution of the sugar to be tested and subsequently heated in a water bath for a few minutes. Glucose and other monosaccharides rapidly produce a reddish color and reddish brown copper(I) oxide (Cu2O).
As a reducing sugar, glucose reacts in the Nylander's test.
Upon heating a dilute potassium hydroxide solution with glucose to 100 °C, a strong reddish browning and a caramel-like odor develop. Concentrated sulfuric acid dissolves dry glucose without blackening at room temperature, forming sugar sulfuric acid. In a yeast solution, alcoholic fermentation produces carbon dioxide in the ratio of 2.0454 parts by mass of glucose to one part of CO2 (each glucose molecule yields two molecules of CO2). Glucose forms a black mass with stannous chloride. In an ammoniacal silver solution, glucose (as well as lactose and dextrin) leads to the deposition of silver. In an ammoniacal lead acetate solution, white lead glycoside is formed in the presence of glucose, which becomes less soluble on cooking and turns brown. In an ammoniacal copper solution, yellow copper oxide hydrate is formed with glucose at room temperature, while red copper oxide is formed during boiling (the same happens with dextrin, except with an ammoniacal copper acetate solution). With Hager's reagent, glucose forms mercury oxide during boiling. An alkaline bismuth solution is used to precipitate elemental, black-brown bismuth with glucose. Glucose boiled in an ammonium molybdate solution turns the solution blue. A solution with indigo carmine and sodium carbonate destains when boiled with glucose.
In concentrated solutions of glucose with a low proportion of other carbohydrates, its concentration can be determined with a polarimeter. For sugar mixtures, the concentration can be determined with a refractometer, for example in the Oechsle determination in the course of the production of wine.
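The polarimetric determination rests on the standard relation between observed rotation, specific rotation, path length and concentration (standard notation, not taken from the source):

$$c = \frac{\alpha_{\mathrm{obs}}}{[\alpha]\cdot l},$$

where α_obs is the measured rotation, [α] the specific rotation (+52.7°·ml/(dm·g) for an equilibrated glucose solution), l the path length in dm and c the concentration in g/ml; an observed rotation of +5.27° in a 1 dm tube thus corresponds to about 0.1 g/ml.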
The enzyme glucose oxidase (GOx) converts glucose into gluconic acid and hydrogen peroxide while consuming oxygen. Another enzyme, peroxidase, catalyzes a chromogenic reaction (Trinder reaction) of phenol with 4-aminoantipyrine to a purple dye.
The test-strip method employs the above-mentioned enzymatic conversion of glucose to gluconic acid to form hydrogen peroxide. The reagents are immobilised on a polymer matrix, the so-called test strip, which assumes a more or less intense color. This can be measured reflectometrically at 510 nm with the aid of an LED-based handheld photometer. This allows routine blood sugar determination by laymen. In addition to the reaction of phenol with 4-aminoantipyrine, new chromogenic reactions have been developed that allow photometry at higher wavelengths (550 nm, 750 nm).
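For illustration only, a handheld meter of the kind described above maps the measured optical signal to a concentration through a stored calibration curve. The sketch below assumes a simple two-point linear calibration in C; the constants and function name are hypothetical and not taken from any real instrument, which typically uses a nonlinear, factory-determined curve.

```c
#include <stdio.h>

/* Hypothetical two-point linear calibration: signals s_lo and s_hi were
   recorded for known glucose concentrations c_lo and c_hi (e.g. in mg/dL);
   an unknown sample's signal is converted by linear interpolation. */
static double glucose_from_signal(double signal,
                                  double s_lo, double c_lo,
                                  double s_hi, double c_hi)
{
    double slope = (c_hi - c_lo) / (s_hi - s_lo);
    return c_lo + slope * (signal - s_lo);
}

int main(void)
{
    /* Illustrative calibration points and one sample reading. */
    double c = glucose_from_signal(0.42, 0.10, 50.0, 0.90, 250.0);
    printf("estimated glucose: %.0f mg/dL\n", c);
    return 0;
}
```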
The electroanalysis of glucose is also based on the enzymatic reaction mentioned above. The produced hydrogen peroxide can be amperometrically quantified by anodic oxidation at a potential of 600 mV. The GOx is immobilised on the electrode surface or in a membrane placed close to the electrode. Precious metals such as platinum or gold are used in electrodes, as well as carbon nanotube electrodes, which e.g. are doped with boron. Cu–CuO nanowires are also used as enzyme-free amperometric electrodes. This way a detection limit of 50 µmol/L has been achieved. A particularly promising method is the so-called "enzyme wiring". In this case, the electron flowing during the oxidation is transferred directly from the enzyme via a molecular wire to the electrode.
There are a variety of other chemical sensors for measuring glucose. Given the importance of glucose analysis in the life sciences, numerous optical probes have also been developed for saccharides based on the use of boronic acids, which are particularly useful for intracellular sensing applications where other (optical) methods are not usable or are usable only with limitations. In addition to the organic boronic acid derivatives, which often bind highly specifically to the 1,2-diol groups of sugars, there are also other probe concepts, classified by functional mechanism, which use selective glucose-binding proteins (e.g. concanavalin A) as a receptor. Furthermore, methods have been developed which indirectly detect the glucose concentration via the concentration of metabolised products, e.g. by the consumption of oxygen using fluorescence-optical sensors. Finally, there are enzyme-based concepts that use the intrinsic absorbance or fluorescence of (fluorescence-labeled) enzymes as reporters.
Glucose can be quantified by copper iodometry.
In particular, for the analysis of complex mixtures containing glucose, e.g. in honey, chromatographic methods such as high performance liquid chromatography and gas chromatography are often used in combination with mass spectrometry. Taking into account the isotope ratios, it is also possible to reliably detect honey adulteration by added sugars with these methods. Derivatisation using silylation reagents is commonly used. Also, the proportions of di- and trisaccharides can be quantified.
Glucose uptake in cells of organisms is measured with 2-deoxy-D-glucose or fluorodeoxyglucose. (18F)fluorodeoxyglucose is used as a tracer in positron emission tomography in oncology and neurology, where it is by far the most commonly used diagnostic agent.
|
https://en.wikipedia.org/wiki?curid=12950
|
George Pólya
George Pólya (December 13, 1887 – September 7, 1985) was a Hungarian mathematician. He was a professor of mathematics from 1914 to 1940 at ETH Zürich and from 1940 to 1953 at Stanford University. He made fundamental contributions to combinatorics, number theory, numerical analysis and probability theory. He is also noted for his work in heuristics and mathematics education. He has been described as one of The Martians.
Pólya was born in Budapest, Austria-Hungary to Anna Deutsch and Jakab Pólya, Hungarian Jews who had converted to the Roman Catholic faith in 1886. Although his parents were religious and he was baptized into the Roman Catholic Church, George Pólya grew up to be an agnostic. He was a professor of mathematics from 1914 to 1940 at ETH Zürich in Switzerland and from 1940 to 1953 at Stanford University. He remained Stanford Professor Emeritus for the rest of his life and career. He worked on a range of mathematical topics, including series, number theory, mathematical analysis, geometry, algebra, combinatorics, and probability. He was an Invited Speaker of the ICM in 1928 at Bologna, in 1936 at Oslo, and in 1950 at Cambridge, Massachusetts.
He died in Palo Alto, California, United States.
Early in his career, Pólya wrote with Gábor Szegő two influential problem books "Problems and Theorems in Analysis" ("I: Series, Integral Calculus, Theory of Functions" and "II: Theory of Functions. Zeros. Polynomials. Determinants. Number Theory. Geometry"). Later in his career, he spent considerable effort to identify systematic methods of problem-solving to further discovery and invention in mathematics for students, teachers, and researchers. He wrote five books on the subject: "How to Solve It", "Mathematics and Plausible Reasoning" ("Volume I: Induction and Analogy in Mathematics", and "Volume II: Patterns of Plausible Inference"), and "Mathematical Discovery: On Understanding, Learning, and Teaching Problem Solving" (volumes 1 and 2).
In "How to Solve It", Pólya provides general heuristics for solving a gamut of problems, including both mathematical and non-mathematical problems. The book includes advice for teaching students of mathematics and a mini-encyclopedia of heuristic terms. It was translated into several languages and has sold over a million copies. Russian physicist Zhores I. Alfyorov (Nobel laureate in 2000) praised it, noting that he was a fan. The Australian-American mathematician Terence Tao used the book to prepare for the International Mathematical Olympiad. The book is still used in mathematical education. Douglas Lenat's Automated Mathematician and Eurisko artificial intelligence programs were inspired by Pólya's work.
In addition to his works directly addressing problem solving, Pólya wrote another short book called "Mathematical Methods in Science", based on a 1963 work supported by the National Science Foundation, edited by Leon Bowden, and published by the Mathematical Association of America (MAA) in 1977. As Pólya notes in the preface, Bowden carefully followed a tape recording of a course Pólya gave several times at Stanford in order to put the book together. Pólya notes in the preface "that the following pages will be useful, yet they should not be regarded as a finished expression."
There are three prizes named after Pólya, causing occasional confusion of one for another. In 1969 the Society for Industrial and Applied Mathematics (SIAM) established the George Pólya Prize, given alternately in two categories for "a notable application of combinatorial theory" and for "a notable contribution in another area of interest to George Pólya." In 1976 the Mathematical Association of America (MAA) established the George Pólya Award "for articles of expository excellence" published in the "College Mathematics Journal". In 1987 the London Mathematical Society (LMS) established the Pólya Prize for "outstanding creativity in, imaginative exposition of, or distinguished contribution to, mathematics within the United Kingdom."
A mathematics center has been named in Pólya's honor at the University of Idaho in Moscow, Idaho. The mathematics center focuses mainly on tutoring students in the subjects of algebra and calculus.
Stanford University has a Polya Hall named in his honor. It was built while he was still teaching and he complained to his students that it made people think he was dead.
|
https://en.wikipedia.org/wiki?curid=12955
|
OpenGL Utility Toolkit
The OpenGL Utility Toolkit (GLUT) is a library of utilities for OpenGL programs, which primarily perform system-level I/O with the host operating system. Functions performed include window definition, window control, and monitoring of keyboard and mouse input. Routines for drawing a number of geometric primitives (both in solid and wireframe mode) are also provided, including cubes, spheres and the Utah teapot. GLUT also has some limited support for creating pop-up menus.
GLUT was written by Mark J. Kilgard, author of "OpenGL Programming for the X Window System" and "The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics", while he was working for Silicon Graphics Inc.
The two aims of GLUT are to allow the creation of rather portable code between operating systems (GLUT is cross-platform) and to make learning OpenGL easier. Getting started with OpenGL programming while using GLUT often takes only a few lines of code and does not require knowledge of operating system–specific windowing APIs.
All GLUT functions start with the glut prefix (for example, glutPostRedisplay marks the current window as needing to be redrawn).
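To illustrate the "few lines of code" point above, here is a minimal sketch of a GLUT program in C that opens a window and draws a single triangle; the window title, size and geometry are arbitrary choices, and error handling is omitted.

```c
#include <GL/glut.h>

/* Redraw callback: clear the window and draw one triangle. */
static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);                       /* initialize GLUT, parse args  */
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB); /* single-buffered RGB window   */
    glutInitWindowSize(400, 300);
    glutCreateWindow("GLUT example");
    glutDisplayFunc(display);                    /* register the redraw callback */
    glutMainLoop();                              /* hand control to GLUT         */
    return 0;
}
```

On a typical X11 system it can be built with, for example, "cc triangle.c -lglut -lGL"; the exact header path and link flags vary by platform.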
The original GLUT library by Mark Kilgard supports the X Window System (GLX) and was ported to Microsoft Windows (WGL) by Nate Robins. Additionally, macOS ships with a GLUT framework that supports its own NSGL/CGL.
Kilgard's GLUT library is no longer maintained, and its license did not permit the redistribution of modified versions of the library. This spurred the need for free software or open source reimplementations of the API from scratch. The first such library was FreeGLUT, which aims to be a reasonably close reproduction, though introducing a small number of new functions to deal with GLUT's limitations. OpenGLUT, a fork of FreeGLUT, adds a number of new features to the original API, but work on it ceased in May 2005.
Mark Kilgard maintains a GitHub repository for GLUT; the glut.h header file in it retains the library's original license.
Some of GLUT's original design decisions made it hard for programmers to perform desired tasks. This led many to create non-canon patches and extensions to GLUT. Some free software or open source reimplementations also include fixes.
The original GLUT library had a number of notable design limitations. Since it is no longer maintained (having been essentially replaced by the open-source FreeGLUT), these issues remain unresolved in the original GLUT.
|
https://en.wikipedia.org/wiki?curid=12956
|
Giovanni Boccaccio
Giovanni Boccaccio (16 June 1313 – 21 December 1375) was an Italian writer, poet, correspondent of Petrarch, and an important Renaissance humanist. Boccaccio wrote a number of notable works, including "The Decameron" and "On Famous Women". He wrote his imaginative literature mostly in Tuscan vernacular, as well as other works in Latin, and is particularly noted for his realistic dialogue which differed from that of his contemporaries, medieval writers who usually followed formulaic models for character and plot.
The details of Boccaccio's birth are uncertain. He was born in Florence or in a village near Certaldo where his family was from. He was the son of Florentine merchant Boccaccino di Chellino and an unknown woman; he was likely born out of wedlock. Boccaccio's stepmother was called Margherita de' Mardoli.
Boccaccio grew up in Florence. His father worked for the Compagnia dei Bardi and, in the 1320s, married Margherita dei Mardoli, who was of a well-to-do family. Boccaccio may have been tutored by Giovanni Mazzuoli and received from him an early introduction to the works of Dante. In 1326, his father was appointed head of a bank and moved with his family to Naples. Boccaccio was an apprentice at the bank but disliked the banking profession. He persuaded his father to let him study law at the "Studium" (the present-day University of Naples), where he studied canon law for the next six years. He also pursued his interest in scientific and literary studies.
His father introduced him to the Neapolitan nobility and the French-influenced court of Robert the Wise (the king of Naples) in the 1330s. At this time, he fell in love with a married daughter of the king, who is portrayed as "Fiammetta" in many of Boccaccio's prose romances, including "Il Filocolo" (1338). Boccaccio became a friend of fellow Florentine Niccolò Acciaioli, and benefited from his influence as the administrator, and perhaps the lover, of Catherine of Valois-Courtenay, widow of Philip I of Taranto. Acciaioli later became counselor to Queen Joanna I of Naples and, eventually, her "Grand Seneschal".
It seems that Boccaccio enjoyed law no more than banking, but his studies allowed him the opportunity to study widely and make good contacts with fellow scholars. His early influences included Paolo da Perugia (a curator and author of a collection of myths called the "Collectiones"), humanists Barbato da Sulmona and Giovanni Barrili, and theologian Dionigi di Borgo San Sepolcro.
In Naples, Boccaccio began what he considered his true vocation of poetry. Works produced in this period include "Il Filostrato" and "Teseida" (the sources for Chaucer's "Troilus and Criseyde" and "The Knight's Tale", respectively), "The Filocolo" (a prose version of an existing French romance), and "La caccia di Diana" (a poem in "terza rima" listing Neapolitan women). The period featured considerable formal innovation, including possibly the introduction of the Sicilian octave, where it influenced Petrarch.
Boccaccio returned to Florence in early 1341, avoiding the plague of 1340 in that city, but also missing the visit of Petrarch to Naples in 1341. He had left Naples due to tensions between the Angevin king and Florence. His father had returned to Florence in 1338, where he had gone bankrupt. His mother may have died shortly afterward (this is uncertain, as her identity is unknown; see above). Boccaccio continued to work, although dissatisfied with his return to Florence, producing "Comedia delle ninfe fiorentine" in 1341 (also known as "Ameto"), a mix of prose and poems, completing the fifty-canto allegorical poem "Amorosa visione" in 1342, and "Fiammetta" in 1343. The pastoral piece "Ninfale fiesolano" probably also dates from this time. In 1343, Boccaccio's father remarried to Bice del Bostichi. His children by his first marriage had all died, but he had another son named Iacopo in 1344.
In Florence, the overthrow of Walter of Brienne brought about the government of "popolo minuto" ("small people", workers). It diminished the influence of the nobility and the wealthier merchant classes and assisted in the relative decline of Florence. The city was hurt further in 1348 by the Black Death, which killed some three-quarters of the city's population, later represented in the "Decameron".
From 1347, Boccaccio spent much time in Ravenna, seeking new patronage and, despite his claims, it is not certain whether he was present in plague-ravaged Florence. His stepmother died during the epidemic and his father was closely associated with the government efforts as minister of supply in the city. His father died in 1349 and Boccaccio was forced into a more active role as head of the family.
Boccaccio began work on "The Decameron" around 1349. It is probable that the structures of many of the tales date from earlier in his career, but the choice of a hundred tales and the frame-story "lieta brigata" of three men and seven women dates from this time. The work was largely complete by 1352. It was Boccaccio's final effort in literature and one of his last works in Tuscan vernacular; the only other substantial work was "Corbaccio" (dated to either 1355 or 1365). Boccaccio revised and rewrote "The Decameron" in 1370–1371. This manuscript has survived to the present day.
From 1350, Boccaccio became closely involved with Italian humanism (although less of a scholar) and also with the Florentine government. His first official mission was to Romagna in late 1350. He revisited that city-state twice and also was sent to Brandenburg, Milan and Avignon. He also pushed for the study of Greek, housing Barlaam of Calabria, and encouraging his tentative translations of works by Homer, Euripides, and Aristotle. In these years, he also took minor orders.
In October 1350, he was delegated to greet Francesco Petrarch as he entered Florence and also to have Petrarch as a guest at Boccaccio's home, during his stay. The meeting between the two was extremely fruitful and they were friends from then on, Boccaccio calling Petrarch his teacher and "magister". Petrarch at that time encouraged Boccaccio to study classical Greek and Latin literature. They met again in Padua in 1351, Boccaccio on an official mission to invite Petrarch to take a chair at the university in Florence. Although unsuccessful, the discussions between the two were instrumental in Boccaccio writing the "Genealogia deorum gentilium"; the first edition was completed in 1360 and this remained one of the key reference works on classical mythology for over 400 years. It served as an extended defense for the studies of ancient literature and thought. Despite the Pagan beliefs at its core, Boccaccio believed that much could be learned from antiquity. Thus, he challenged the arguments of clerical intellectuals who wanted to limit access to classical sources to prevent any moral harm to Christian readers. The revival of classical antiquity became a foundation of the Renaissance, and his defense of the importance of ancient literature was an essential requirement for its development. The discussions also formalized Boccaccio's poetic ideas. Certain sources also see a conversion of Boccaccio by Petrarch from the open humanist of the "Decameron" to a more ascetic style, closer to the dominant fourteenth century ethos. For example, he followed Petrarch (and Dante) in the unsuccessful championing of an archaic and deeply allusive form of Latin poetry. In 1359, following a meeting with Pope Innocent VI and further meetings with Petrarch, it is probable that Boccaccio took some kind of religious mantle. There is a persistent (but unsupported) tale that he repudiated his earlier works as profane in 1362, including "The Decameron".
In 1360, Boccaccio began work on "De mulieribus claris", a book offering biographies of one hundred and six famous women, that he completed in 1374.
A number of Boccaccio's close friends and other acquaintances were executed or exiled in the purge following the failed coup of 1361. Although he was not directly linked to the conspiracy, it was in this year that Boccaccio left Florence to reside in Certaldo, where he became less involved in government affairs. He did not undertake further missions for Florence until 1365, and traveled to Naples and then on to Padua and Venice, where he met up with Petrarch in grand style at Palazzo Molina, Petrarch's residence as well as the home of Petrarch's library. He later returned to Certaldo. He met Petrarch only once again, in Padua in 1368. Upon hearing of the death of Petrarch (19 July 1374), Boccaccio wrote a commemorative poem, including it in his collection of lyric poems, the "Rime".
He returned to work for the Florentine government in 1365, undertaking a mission to Pope Urban V. The papacy returned to Rome from Avignon in 1367, and Boccaccio was again sent to Urban, offering congratulations. He also undertook diplomatic missions to Venice and Naples.
Of his later works, the moralistic biographies gathered as "De casibus virorum illustrium" (1355–74) and "De mulieribus claris" (1361–1375) were most significant. Other works include a dictionary of geographical allusions in classical literature, "De montibus, silvis, fontibus, lacubus, fluminibus, stagnis seu paludibus, et de nominibus maris liber". He gave a series of lectures on Dante at the Santo Stefano church in 1373, and these resulted in his final major work, the detailed "Esposizioni sopra la Commedia di Dante". Boccaccio and Petrarch were also two of the most educated people of the early Renaissance in the field of archaeology.
Boccaccio's change in writing style in the 1350s was due in part to meeting with Petrarch, but it was mostly due to poor health and a premature weakening of his physical strength. It also was due to disappointments in love. Some such disappointment could explain why Boccaccio came suddenly to write in a bitter "Corbaccio" style, having previously written always in praise of women and love. Petrarch describes how Pietro Petrone (a Carthusian monk) on his death bed in 1362 sent another Carthusian (Gioacchino Ciani) to urge him to renounce his worldly studies. Petrarch then dissuaded Boccaccio from burning his own works and selling off his personal library, letters, books, and manuscripts. Petrarch even offered to purchase Boccaccio's library, so that it would become part of Petrarch's library. However, upon Boccaccio's death, his entire collection was given to the monastery of Santo Spirito, in Florence, where it still resides.
His final years were troubled by illnesses, some relating to obesity and what often is described as dropsy, severe edema that would be described today as congestive heart failure. He died on 21 December 1375 in Certaldo, where he is buried.
See Consoli's bibliography for an exhaustive listing.
|
https://en.wikipedia.org/wiki?curid=12957
|
Giuseppe Verdi
Giuseppe Fortunino Francesco Verdi (9 or 10 October 1813 – 27 January 1901) was an Italian opera composer. He was born near Busseto to a provincial family of moderate means, and developed a musical education with the help of a local patron. Verdi came to dominate the Italian opera scene after the era of Vincenzo Bellini, Gaetano Donizetti, and Gioachino Rossini, whose works significantly influenced him.
In his early operas, Verdi demonstrated a sympathy with the Risorgimento movement which sought the unification of Italy. He also participated briefly as an elected politician. The chorus "Va, pensiero" from his early opera "Nabucco" (1842), and similar choruses in later operas, were much in the spirit of the unification movement, and the composer himself became esteemed as a representative of these ideals. An intensely private person, Verdi, however, did not seek to ingratiate himself with popular movements and as he became professionally successful was able to reduce his operatic workload and sought to establish himself as a landowner in his native region. He surprised the musical world by returning, after his success with the opera "Aida" (1871), with three late masterpieces: his Requiem (1874), and the operas "Otello" (1887) and "Falstaff" (1893).
His operas remain extremely popular, especially the three peaks of his 'middle period': "Rigoletto, Il trovatore" and "La traviata", and the 2013 bicentenary of his birth was widely celebrated in broadcasts and performances.
Verdi, the first child of Carlo Giuseppe Verdi (1785–1867) and Luigia Uttini (1787–1851), was born at their home in Le Roncole, a village near Busseto, then in the Département Taro and within the borders of the First French Empire following the annexation of the Duchy of Parma and Piacenza in 1808. The baptismal register, prepared on 11 October 1813, lists his parents Carlo and Luigia as "innkeeper" and "spinner" respectively. Additionally, it lists Verdi as being "born yesterday", but since days were often considered to begin at sunset, this could have meant either 9 or 10 October. Following his mother, Verdi always celebrated his birthday on 9 October, the day he himself believed he was born.
Verdi had a younger sister, Giuseppa, who died aged 17 in 1833. She is said to have been his closest friend during childhood. From the age of four, Verdi was given private lessons in Latin and Italian by the village schoolmaster, Baistrocchi, and at six he attended the local school. After learning to play the organ, he showed so much interest in music that his parents finally provided him with a spinet. Verdi's gift for music was already apparent by 1820–21 when he began his association with the local church, serving in the choir, acting as an altar boy for a while, and taking organ lessons. After Baistrocchi's death, Verdi, at the age of eight, became the official paid organist.
The music historian Roger Parker points out that both of Verdi's parents "belonged to families of small landowners and traders, certainly not the illiterate peasants from which Verdi later liked to present himself as having emerged... Carlo Verdi was energetic in furthering his son's education...something which Verdi tended to hide in later life... [T]he picture emerges of youthful precocity eagerly nurtured by an ambitious father and of a sustained, sophisticated and elaborate formal education."
In 1823, when he was 10, Verdi's parents arranged for the boy to attend school in Busseto, enrolling him in a "Ginnasio"—an upper school for boys—run by Don Pietro Seletti, while they continued to run their inn at Le Roncole. Verdi returned to Busseto regularly to play the organ on Sundays, covering the distance of several kilometres on foot. At age 11, Verdi received schooling in Italian, Latin, the humanities, and rhetoric. By the time he was 12, he began lessons with Ferdinando Provesi, "maestro di cappella" at San Bartolomeo, director of the municipal music school and co-director of the local "Società Filarmonica" (Philharmonic Society). Verdi later stated: "From the ages of 13 to 18 I wrote a motley assortment of pieces: marches for band by the hundred, perhaps as many little "sinfonie" that were used in church, in the theatre and at concerts, five or six concertos and sets of variations for pianoforte, which I played myself at concerts, many serenades, cantatas (arias, duets, very many trios) and various pieces of church music, of which I remember only a "Stabat Mater"." This information comes from the "Autobiographical Sketch" which Verdi dictated to the publisher Giulio Ricordi late in life, in 1879, and remains the leading source for his early life and career. Written, understandably, with the benefit of hindsight, it is not always reliable when dealing with issues more contentious than those of his childhood.
The other director of the Philharmonic Society was Antonio Barezzi, a wholesale grocer and distiller, who was described by a contemporary as a "manic dilettante" of music. The young Verdi did not immediately become involved with the Philharmonic. By June 1827, he had graduated with honours from the "Ginnasio" and was able to focus solely on music under Provesi. By chance, when he was 13, Verdi was asked to step in as a replacement to play in what became his first public event in his home town; he was an immediate success mostly playing his own music to the surprise of many and receiving strong local recognition.
By 1829–30, Verdi had established himself as a leader of the Philharmonic: "none of us could rival him" reported the secretary of the organisation, Giuseppe Demaldè. An eight-movement cantata, "I deliri di Saul", based on a drama by Vittorio Alfieri, was written by Verdi when he was 15 and performed in Bergamo. It was acclaimed by both Demaldè and Barezzi, who commented: "He shows a vivid imagination, a philosophical outlook, and sound judgment in the arrangement of instrumental parts." In late 1829, Verdi had completed his studies with Provesi, who declared that he had no more to teach him. At the time, Verdi had been giving singing and piano lessons to Barezzi's daughter Margherita; by 1831, they were unofficially engaged.
Verdi set his sights on Milan, then the cultural capital of northern Italy, where he applied unsuccessfully to study at the Conservatory. Barezzi made arrangements for him to become a private pupil of Vincenzo Lavigna, who had been "maestro concertatore" at La Scala, and who described Verdi's compositions as "very promising". Lavigna encouraged Verdi to take out a subscription to La Scala, where he heard Maria Malibran in operas by Gioachino Rossini and Vincenzo Bellini. Verdi began making connections in the Milanese world of music that were to stand him in good stead. These included an introduction by Lavigna to an amateur choral group, the "Società Filarmonica", led by Pietro Massini. Attending the "Società" frequently in 1834, Verdi soon found himself functioning as rehearsal director (for Rossini's "La cenerentola") and continuo player. It was Massini who encouraged him to write his first opera, originally titled "Rocester", to a libretto by the journalist Antonio Piazza.
In mid-1834, Verdi sought to acquire Provesi's former post in Busseto but without success. But with Barezzi's help he did obtain the secular post of "maestro di musica". He taught, gave lessons, and conducted the Philharmonic for several months before returning to Milan in early 1835. By the following July, he obtained his certification from Lavigna. Eventually in 1835 Verdi became director of the Busseto school with a three-year contract. He married Margherita in May 1836, and by March 1837, she had given birth to their first child, Virginia Maria Luigia on 26 March 1837. Icilio Romano followed on 11 July 1838. Both the children died young, Virginia on 12 August 1838, Icilio on 22 October 1839.
In 1837, the young composer asked for Massini's assistance to stage his opera in Milan. The La Scala impresario, Bartolomeo Merelli, agreed to put on "Oberto" (as the reworked opera was now called, with a libretto rewritten by Temistocle Solera) in November 1839. It achieved a respectable 13 additional performances, following which Merelli offered Verdi a contract for three more works.
While Verdi was working on his second opera "Un giorno di regno", Margherita died of encephalitis at the age of 26. Verdi adored his wife and children and was devastated by their deaths. "Un giorno", a comedy, was premiered only a few months later. It was a flop and only given the one performance. Following its failure, it is claimed Verdi vowed never to compose again, but in his "Sketch" he recounts how Merelli persuaded him to write a new opera.
Verdi was to claim that he gradually began to work on the music for "Nabucco", the libretto of which had originally been rejected by the composer Otto Nicolai: "This verse today, tomorrow that, here a note, there a whole phrase, and little by little the opera was written", he later recalled. By the autumn of 1841 it was complete, originally under the title "Nabucodonosor". Well received at its first performance on 9 March 1842, "Nabucco" underpinned Verdi's success until his retirement from the theatre, twenty-nine operas (including some revised and updated versions) later. At its revival in La Scala for the 1842 autumn season it was given an unprecedented (and later unequalled) total of 57 performances; within three years it had reached (among other venues) Vienna, Lisbon, Barcelona, Berlin, Paris and Hamburg; in 1848 it was heard in New York, in 1850 in Buenos Aires. Porter comments that "similar accounts...could be provided to show how widely and rapidly all [Verdi's] other successful operas were disseminated."
A period of hard work for Verdi—with the creation of twenty operas (excluding revisions and translations)—followed over the next sixteen years, culminating in "Un ballo in maschera". This period was not without its frustrations and setbacks for the young composer, and he was frequently demoralised. In April 1845, in connection with "I due Foscari", he wrote: "I am happy, no matter what reception it gets, and I am utterly indifferent to everything. I cannot wait for these next three years to pass. I have to write six operas, then "addio" to everything." In 1858 Verdi complained: "Since "Nabucco", you may say, I have never had one hour of peace. Sixteen years in the galleys."
After the initial success of "Nabucco", Verdi settled in Milan, making a number of influential acquaintances. He attended the "Salotto Maffei", Countess Clara Maffei's salons in Milan, becoming her lifelong friend and correspondent. A revival of "Nabucco" followed in 1842 at La Scala where it received a run of fifty-seven performances, and this led to a commission from Merelli for a new opera for the 1843 season. "I Lombardi alla prima crociata" was based on a libretto by Solera and premiered in February 1843. Inevitably, comparisons were made with "Nabucco"; but one contemporary writer noted: "If ["Nabucco"] created this young man's reputation, "I Lombardi" served to confirm it."
Verdi paid close attention to his financial contracts, making sure he was appropriately remunerated as his popularity increased. For "I Lombardi" and "Ernani" (1844) in Venice he was paid 12,000 lire (including supervision of the productions); "Attila" and "Macbeth" (1847) each brought him 18,000 lire. His contracts with the publishers Ricordi in 1847 were very specific about the amounts he was to receive for new works, first productions, musical arrangements, and so on. He began to use his growing prosperity to invest in land near his birthplace. In 1844 he purchased Il Pulgaro, 62 acres (23 hectares) of farmland with a farmhouse and outbuildings, providing a home for his parents from May 1844. Later that year, he also bought the Palazzo Cavalli (now known as the Palazzo Orlandi) on the via Roma, Busseto's main street. In May 1848, Verdi signed a contract for land and houses at Sant'Agata in Busseto, which had once belonged to his family. It was here he built his own house, completed in 1880, now known as the Villa Verdi, where he lived from 1851 until his death.
In March 1843, Verdi visited Vienna (where Gaetano Donizetti was musical director) to oversee a production of "Nabucco". The older composer, recognising Verdi's talent, noted in a letter of January 1844: "I am very, very happy to give way to people of talent like Verdi... Nothing will prevent the good Verdi from soon reaching one of the most honourable positions in the cohort of composers." Verdi travelled on to Parma, where the Teatro Regio di Parma was producing "Nabucco" with Strepponi in the cast. For Verdi the performances were a personal triumph in his native region, especially as his father, Carlo, attended the first performance. Verdi remained in Parma for some weeks beyond his intended departure date. This fuelled speculation that the delay was due to Verdi's interest in Giuseppina Strepponi (who stated that their relationship began in 1843). Strepponi was in fact known for her amorous relationships (and many illegitimate children) and her history was an awkward factor in their relationship until they eventually agreed on marriage.
After successful stagings of "Nabucco" in Venice (with twenty-five performances in the 1842/43 season), Verdi began negotiations with the impresario of La Fenice to stage "I Lombardi", and to write a new opera. Eventually, Victor Hugo's "Hernani" was chosen, with Francesco Maria Piave as librettist. "Ernani" was successfully premiered in 1844 and within six months had been performed at twenty other theatres in Italy, and also in Vienna. The writer Andrew Porter notes that for the next ten years, Verdi's life "reads like a travel diary—a timetable of visits...to bring new operas to the stage or to supervise local premieres". La Scala premiered none of these new works, except for "Giovanna d'Arco". Verdi "never forgave the Milanese for their reception of "Un giorno di regno"".
During this period, Verdi began to work more consistently with his librettists. He relied on Piave again for "I due Foscari", performed in Rome in November 1844, then on Solera once more for "Giovanna d'Arco", at La Scala in February 1845, while in August that year he was able to work with Salvadore Cammarano on "Alzira" for the Teatro di San Carlo in Naples. Solera and Piave worked together on "Attila" for La Fenice (March 1846).
In April 1844, Verdi took on Emanuele Muzio, eight years his junior, as a pupil and amanuensis. He had known him since about 1828 as another of Barezzi's protégés. Muzio, who in fact was Verdi's only pupil, became indispensable to the composer. He reported to Barezzi that Verdi "has a breadth of spirit, of generosity, a wisdom". In November 1846, Muzio wrote of Verdi: "If you could see us, I seem more like a friend, rather than his pupil. We are always together at dinner, in the cafes, when we play cards...; all in all, he doesn't go anywhere without me at his side; in the house we have a big table and we both write there together, and so I always have his advice." Muzio was to remain associated with Verdi, assisting in the preparation of scores and transcriptions, and later conducting many of his works in their premiere performances in the US and elsewhere outside Italy. He was chosen by Verdi as one of the executors of his will, but predeceased the composer in 1890.
After a period of illness Verdi began work on "Macbeth" in September 1846. He dedicated the opera to Barezzi: "I have long intended to dedicate an opera to you, as you have been a father, a benefactor and a friend for me. It was a duty I should have fulfilled sooner if imperious circumstances had not prevented me. Now, I send you "Macbeth", which I prize above all my other operas, and therefore deem worthier to present to you." In 1997 Martin Chusid wrote that "Macbeth" was the only one of Verdi's operas of his "early period" to remain regularly in the international repertoire, although in the 21st century "Nabucco" has also entered the lists.
Strepponi's voice declined and her engagements dried up in the 1845 to 1846 period, and she returned to live in Milan whilst retaining contact with Verdi as his "supporter, promoter, unofficial adviser, and occasional secretary" until she decided to move to Paris in October 1846. Before she left Verdi gave her a letter that pledged his love. On the envelope, Strepponi wrote: "5 or 6 October 1846. They shall lay this letter on my heart when they bury me."
Verdi had completed "I masnadieri" for London by May 1847 except for the orchestration. This he left until the opera was in rehearsal, since he wanted to hear "la [Jenny] Lind and modify her role to suit her more exactly". Verdi agreed to conduct the premiere on 22 July 1847 at Her Majesty's Theatre, as well as the second performance. Queen Victoria and Prince Albert attended the first performance, and for the most part, the press was generous in its praise.
For the next two years, except for two visits to Italy during periods of political unrest, Verdi was based in Paris. Within a week of returning to Paris in July 1847, he received his first commission from the Paris Opéra. Verdi agreed to adapt "I Lombardi" to a new French libretto; the result was "Jérusalem", which contained significant changes to the music and structure of the work (including an extensive ballet scene) to meet Parisian expectations. Verdi was awarded the Order of Chevalier of the Legion of Honour. To satisfy his outstanding publishing contracts, Verdi dashed off "Il Corsaro". Budden comments "In no other opera of his does Verdi appear to have taken so little interest "before" it was staged."
On hearing the news of the "Cinque Giornate", the "Five Days" of street fighting that took place between 18 and 22 March 1848 and temporarily drove the Austrians out of Milan, Verdi travelled there, arriving on 5 April. He discovered that Piave was now "Citizen Piave" of the newly proclaimed Republic of San Marco. Writing a patriotic letter to him in Venice, Verdi concluded "Banish every petty municipal idea! We must all extend a fraternal hand, and Italy will yet become the first nation of the world...I am drunk with joy! Imagine that there are no more Germans here!!"
Verdi had been admonished by the poet Giuseppe Giusti for turning away from patriotic subjects, the poet pleading with him to "do what you can to nourish the [sorrow of the Italian people], to strengthen it, and direct it to its goal." Cammarano suggested adapting Joseph Méry's 1828 play "La Bataille de Toulouse", which he described as a story "that should stir every man with an Italian soul in his breast". The premiere was set for late January 1849. Verdi travelled to Rome before the end of 1848. He found that city on the verge of becoming a (short-lived) republic, which commenced within days of "La battaglia di Legnano"'s enthusiastically received premiere. In the spirit of the time were the tenor hero's final words, "Whoever dies for the fatherland cannot be evil-minded".
Verdi had intended to return to Italy in early 1848, but was prevented by work and illness, as well as, most probably, by his increasing attachment to Strepponi. Verdi and Strepponi left Paris in July 1849, the immediate cause being an outbreak of cholera, and Verdi went directly to Busseto to continue work on completing his latest opera, "Luisa Miller", for a production in Naples later in the year.
Verdi was committed to the publisher Giovanni Ricordi for an opera—which became "Stiffelio"—for Trieste in the spring of 1850; and, subsequently, following negotiations with La Fenice, developed a libretto with Piave and wrote the music for "Rigoletto" (based on Victor Hugo's "Le roi s'amuse") for Venice in March 1851. This was the first of a sequence of three operas (followed by "Il trovatore" and "La traviata") which were to cement his fame as a master of opera.
The failure of "Stiffelio" (attributable not least to the censors of the time taking offence at the taboo subject of the supposed adultery of a clergyman's wife and interfering with the text and roles) incited Verdi to take pains to rework it, although even in the completely recycled version of "Aroldo" (1857) it still failed to please. "Rigoletto", with its intended murder of royalty, and its sordid attributes, also upset the censors. Verdi would not compromise: "What does the sack matter to the police? Are they worried about the effect it will produce?...Do they think they know better than I?...I see the hero has been made no longer ugly and hunchbacked!! Why? A singing hunchback...why not?...I think it splendid to show this character as outwardly deformed and ridiculous, and inwardly passionate and full of love. I chose the subject for these very qualities...if they are removed I can no longer set it to music."
Verdi substituted a Duke for the King, and the public response and subsequent success of the opera all over Italy and Europe fully vindicated the composer. Aware that the melody of the Duke's song "La donna è mobile" ("Woman is fickle") would become a popular hit, Verdi excluded it from orchestral rehearsals for the opera, and rehearsed the tenor separately.
For several months Verdi was preoccupied with family matters. These stemmed from the way in which the citizens of Busseto were treating Giuseppina Strepponi, with whom he was living openly in an unmarried relationship. She was shunned in the town and at church, and while Verdi appeared indifferent, she was certainly not. Furthermore, Verdi was concerned about the administration of his newly acquired property at Sant'Agata. A growing estrangement between Verdi and his parents was perhaps also attributable to Strepponi (the suggestion that this situation was sparked by the birth of a child to Verdi and Strepponi which was given away as a foundling lacks any firm evidence). In January 1851, Verdi broke off relations with his parents, and in April they were ordered to leave Sant'Agata; Verdi found new premises for them and helped them financially to settle into their new home. It may not be coincidental that all six Verdi operas written in the period 1849–53 ("La battaglia, Luisa Miller, Stiffelio, Rigoletto, Il trovatore" and "La traviata"), have, uniquely in his oeuvre, heroines who are, in the opera critic Joseph Kerman's words, "women who come to grief because of sexual transgression, actual or perceived". Kerman, like the psychologist Gerald Mendelssohn, sees this choice of subjects as being influenced by Verdi's uneasy passion for Strepponi.
Verdi and Strepponi moved into Sant'Agata on 1 May 1851. May also brought an offer for a new opera from La Fenice, which Verdi eventually realised as "La traviata". That was followed by an agreement with the Rome Opera company to present "Il trovatore" for January 1853. Verdi now had sufficient earnings to retire, had he wished to. He had reached a stage where he could develop his operas as he wished, rather than be dependent on commissions from third parties. "Il trovatore" was in fact the first opera he wrote without a specific commission (apart from "Oberto"). At around the same time he began to consider creating an opera from Shakespeare's "King Lear". After first (1850) seeking a libretto from Cammarano (which never appeared), Verdi later (1857) commissioned one from Antonio Somma, but this proved intractable, and no music was ever written. Verdi began work on "Il trovatore" after the death of his mother in June 1851. The fact that this is "the one opera of Verdi's which focuses on a mother rather than a father" is perhaps related to her death.
In the winter of 1851–52 Verdi decided to go to Paris with Strepponi, where he concluded an agreement with the Opéra to write what became "Les vêpres siciliennes", his first original work in the style of grand opera. In February 1852, the couple attended a performance of Alexandre Dumas "fils"'s play "The Lady of the Camellias"; Verdi immediately began to compose music for what would later become "La traviata".
After his visit to Rome for "Il trovatore" in January 1853, Verdi worked on completing "La traviata", but with little hope of its success, due to his lack of confidence in any of the singers engaged for the season. Furthermore, the management insisted that the opera be given a historical, not a contemporary setting. The premiere in March 1853 was indeed a failure: Verdi wrote: "Was the fault mine or the singers'? Time will tell." Subsequent productions (following some rewriting) throughout Europe over the following two years fully vindicated the composer; Roger Parker has written ""Il trovatore" consistently remains one of the three or four most popular operas in the Verdian repertoire: but it has never pleased the critics".
In the eleven years up to and including "Traviata", Verdi had written sixteen operas. Over the next eighteen years (up to "Aida"), he wrote only six new works for the stage. Verdi was happy to return to Sant'Agata and, in February 1856, was reporting a "total abandonment of music; a little reading; some light occupation with agriculture and horses; that's all". A couple of months later, writing in the same vein to Countess Maffei he stated: "I'm not doing anything. I don't read. I don't write. I walk in the fields from morning to evening, trying to recover, so far without success, from the stomach trouble caused me by "I vespri siciliani". Cursed operas!" An 1858 letter by Strepponi to the publisher Léon Escudier describes the kind of lifestyle that increasingly appealed to the composer: "His love for the country has become a mania, madness, rage, and fury—anything you like that is exaggerated. He gets up almost with the dawn, to go and examine the wheat, the maize, the vines, etc...Fortunately our tastes for this sort of life coincide, except in the matter of sunrise, which he likes to see up and dressed, and I from my bed."
Nonetheless on 15 May, Verdi signed a contract with La Fenice for an opera for the following spring. This was to be "Simon Boccanegra". The couple stayed in Paris until January 1857 to deal with these proposals, and also the offer to stage the translated version of "Il trovatore" as a grand opera. Verdi and Strepponi travelled to Venice in March for the premiere of "Simon Boccanegra", which turned out to be "a fiasco" (as Verdi reported, although on the second and third nights, the reception improved considerably).
With Strepponi, Verdi went to Naples early in January 1858 to work with Somma on the libretto of the opera "Gustave III", which over a year later would become "Un ballo in maschera". By this time, Verdi had begun to write about Strepponi as "my wife" and she was signing her letters as "Giuseppina Verdi". Verdi raged against the stringent requirements of the Neapolitan censor stating: "I'm drowning in a sea of troubles. It's almost certain that the censors will forbid our libretto." With no hope of seeing his "Gustavo III" staged as written, he broke his contract. This resulted in litigation and counter-litigation; with the legal issues resolved, Verdi was free to present the libretto and musical outline of "Gustave III" to the Rome Opera. There, the censors demanded further changes; at this point, the opera took the title "Un ballo in maschera".
Arriving in Sant'Agata in March 1859 Verdi and Strepponi found the nearby city of Piacenza occupied by about 6,000 Austrian troops who had made it their base, to combat the rise of Italian interest in unification in the Piedmont region. In the ensuing Second Italian War of Independence the Austrians abandoned the region and began to leave Lombardy, although they remained in control of the Venice region under the terms of the armistice signed at Villafranca. Verdi was disgusted at this outcome: "[W]here then is the independence of Italy, so long hoped for and promised?...Venice is not Italian? After so many victories, what an outcome... It is enough to drive one mad" he wrote to Clara Maffei.
Verdi and Strepponi now decided on marriage; they travelled to Collonges-sous-Salève, a village then part of Piedmont. On 29 August 1859 the couple were married there, with only the coachman who had driven them there and the church bell-ringer as witnesses. At the end of 1859, Verdi wrote to his friend Cesare De Sanctis "[Since completing "Ballo"] I have not made any more music, I have not seen any more music, I have not thought anymore about music. I don't even know what colour my last opera is, and I almost don't remember it." He began to remodel Sant'Agata, which took most of 1860 to complete and on which he continued to work for the next twenty years. This included major work on a square room that became his workroom, his bedroom, and his office.
Having achieved some fame and prosperity, Verdi began in 1859 to take an active interest in Italian politics. His early commitment to the Risorgimento movement is difficult to estimate accurately; in the words of the music historian Philip Gossett "myths intensifying and exaggerating [such] sentiment began circulating" during the nineteenth century. An example is the claim that when the "Va, pensiero" chorus in "Nabucco" was first sung in Milan, the audience, responding with nationalistic fervour, demanded an encore. As encores were expressly forbidden by the government at the time, such a gesture would have been extremely significant. But in fact the piece encored was not "Va, pensiero" but the hymn "Immenso Jehova".
The growth of the "identification of Verdi's music with Italian nationalist politics" perhaps began in the 1840s. In 1848, the nationalist leader Giuseppe Mazzini (whom Verdi had met in London the previous year) requested Verdi (who complied) to write a patriotic hymn. The opera historian Charles Osborne describes the 1849 "La battaglia di Legnano" as "an opera with a purpose" and maintains that "while parts of Verdi's earlier operas had frequently been taken up by the fighters of the Risorgimento...this time the composer had given the movement its own opera." It was not until 1859 in Naples, and only then spreading throughout Italy, that the slogan "Viva Verdi" was used as an acronym for "Viva Vittorio Emanuele Re D'Italia" ("Viva Victor Emmanuel King of Italy"), Victor Emmanuel being then the king of Piedmont. After Italy was unified in 1861, many of Verdi's early operas were increasingly re-interpreted as Risorgimento works with hidden Revolutionary messages that perhaps had not been originally intended by either the composer or his librettists.
In 1859, Verdi was elected as a member of the new provincial council, and was appointed to head a group of five who would meet with King Vittorio Emanuele II in Turin. They were enthusiastically greeted along the way and in Turin Verdi himself received much of the publicity. On 17 October Verdi met with Cavour, the architect of the initial stages of Italian unification. Later that year the government of Emilia was subsumed under the United Provinces of Central Italy, and Verdi's political life temporarily came to an end. Whilst still maintaining nationalist feelings, he declined in 1860 the office of provincial council member to which he had been elected "in absentia". Cavour however was anxious to convince a man of Verdi's stature that running for political office was essential to strengthening and securing Italy's future. The composer confided to Piave some years later that "I accepted on the condition that after a few months I would resign." Verdi was elected on 3 February 1861 for the town of Borgo San Donnino (Fidenza) to the Parliament of Piedmont-Sardinia in Turin (which from March 1861 became the Parliament of the Kingdom of Italy), but following the death of Cavour in 1861, which deeply distressed him, he scarcely attended. Later, in 1874, Verdi was appointed a member of the Italian Senate, but did not participate in its activities.
In the months following the staging of "Ballo", Verdi was approached by several opera companies seeking a new work or making offers to stage one of his existing ones, but refused them all. But when, in December 1860, an approach was made from Saint Petersburg's Imperial Theatre, the offer of 60,000 francs plus all expenses was doubtless a strong incentive. Verdi came up with the idea of adapting the 1835 Spanish play "Don Alvaro o la fuerza del sino" by Angel Saavedra, which became "La forza del destino", with Piave writing the libretto. The Verdis arrived in St. Petersburg in December 1861 for the premiere, but casting problems meant that it had to be postponed.
Returning via Paris from Russia on 24 February 1862, Verdi met two young Italian writers, the twenty-year-old Arrigo Boito and Franco Faccio. Verdi had been invited to write a piece of music for the 1862 International Exhibition in London, and charged Boito with writing a text, which became the "Inno delle nazioni". Boito, as a supporter of the grand opera of Giacomo Meyerbeer and an opera composer in his own right, was later in the 1860s critical of Verdi's "reliance on formula rather than form", incurring the composer's wrath. Nevertheless, he was to become Verdi's close collaborator in his final operas. The St. Petersburg premiere of "La forza" finally took place in September 1862, and Verdi received the Order of St. Stanislaus.
A revival of "Macbeth" in Paris in 1865 was not a success, but he obtained a commission for a new work, "Don Carlos", based on the play "Don Carlos" by Friedrich Schiller. He and Giuseppina spent late 1866 and much of 1867 in Paris, where they heard, and did not warm to, Giacomo Meyerbeer's last opera, "L'Africaine", and Richard Wagner's overture to "Tannhäuser." The opera's premiere in 1867 drew mixed comments. While the critic Théophile Gautier praised the work, the composer Georges Bizet was disappointed at Verdi's changing style: "Verdi is no longer Italian. He is following Wagner."
During the 1860s and 1870s, Verdi paid great attention to his estate around Busseto, purchasing additional land, dealing with unsatisfactory (in one case, embezzling) stewards, installing irrigation, and coping with variable harvests and economic slumps. In 1867, both Verdi's father Carlo, with whom he had restored good relations, and his early patron and father-in-law Antonio Barezzi, died. Verdi and Giuseppina decided to adopt Carlo's great-niece Filomena Maria Verdi, then seven years old, as their own child. She was to marry in 1878 the son of Verdi's friend and lawyer Angelo Carrara, and her family eventually became the heirs of Verdi's estate.
"Aida" was commissioned by the Egyptian government for the opera house built by the Khedive Isma'il Pasha to celebrate the opening of the Suez Canal in 1869. The opera house actually opened with a production of "Rigoletto". The prose libretto in French by Camille du Locle, based on a scenario by the Egyptologist Auguste Mariette, was transformed to Italian verse by Antonio Ghislanzoni. Verdi was offered the enormous sum of 150,000 francs for the opera (even though he confessed that Ancient Egypt was "a civilization I have never been able to admire"), and it was first performed in Cairo in 1871. Verdi spent much of 1872 and 1873 supervising the Italian productions of "Aida" at Milan, Parma and Naples, effectively acting as producer and demanding high standards and adequate rehearsal time. During the rehearsals for the Naples production he wrote his string quartet, the only chamber music by him to survive, and the only major work in the form by an Italian of the 19th century.
In 1869, Verdi had been asked to compose a section for a requiem mass in memory of Gioachino Rossini. He compiled and completed the requiem, but its performance was abandoned (and its premiere did not take place until 1988). Five years later, Verdi reworked his "Libera Me" section of the Rossini Requiem and made it a part of his Requiem honouring Alessandro Manzoni, who had died in 1873. The complete Requiem was first performed at the cathedral in Milan on the anniversary of Manzoni's death on 22 May 1874. The "spinto" soprano Teresa Stolz (1834–1902), who had sung in La Scala productions from 1865 onwards, was the soloist in the first and many later performances of the Requiem; in February 1872, she had created Aida in its European premiere in Milan. She became closely associated personally with Verdi (exactly how closely remains conjectural), to Giuseppina Verdi's initial disquiet; but the women were reconciled and Stolz remained a companion of Verdi after Giuseppina's death in 1897 until his own death.
Verdi conducted his Requiem in Paris, London and Vienna in 1875 and in Cologne in 1876. It seemed that it would be his last work. In the words of his biographer John Rosselli, it "confirmed him as the unique presiding genius of Italian music. No fellow composer...came near him in popularity or reputation". Verdi, now in his sixties, initially seemed to withdraw into retirement. He deliberately shied away from opportunities to publicise himself or to become involved with new productions of his works, but secretly he began work on "Otello", which Boito (to whom the composer had been reconciled by Ricordi) had proposed to him privately in 1879. The composition was delayed by a revision of "Simon Boccanegra" which Verdi undertook with Boito, produced in 1881, and a revision of "Don Carlos". Even when "Otello" was virtually completed, Verdi teased "Shall I finish it? Shall I have it performed? Hard to tell, even for me." As news leaked out, Verdi was pressed by opera houses across Europe with enquiries; eventually the opera was triumphantly premiered at La Scala in February 1887.
Following the success of "Otello" Verdi commented, "After having relentlessly massacred so many heroes and heroines, I have at last the right to laugh a little." He had considered a variety of comic subjects but had found none of them wholly suitable and confided his ambition to Boito. The librettist said nothing at the time but secretly began work on a libretto based on "The Merry Wives of Windsor" with additional material taken from "Henry IV, Part 1" and "Part 2". Verdi received the draft libretto probably in early July 1889 after he had just read Shakespeare's play: "Benissimo! Benissimo!... No one could have done better than you", he wrote back to Boito. But he still had doubts: his age, his health (which he admitted was good) and his ability to complete the project: "If I were not to finish the music?". If the project failed, it would have been a waste of Boito's time, and would have distracted him from completing his own new opera. Finally on 10 July 1889 he wrote again: "So be it! So let's do "Falstaff"! For now, let's not think of obstacles, of age, of illnesses!" Verdi emphasised the need for secrecy, but continued "If you are in the mood, then start to write." Later he wrote to Boito (capitals and exclamation marks are Verdi's own): "What joy to be able to say to the public: HERE WE ARE AGAIN!!! COME AND SEE US!"
The first performance of "Falstaff" took place at La Scala on 9 February 1893. For the first night, official ticket prices were thirty times higher than usual. Royalty, aristocracy, critics and leading figures from the arts all over Europe were present. The performance was a huge success; numbers were encored, and at the end the applause for Verdi and the cast lasted an hour. That was followed by a tumultuous welcome when the composer, his wife and Boito arrived at the Grand Hotel de Milan. Even more hectic scenes ensued when he went to Rome in May for the opera's premiere at the Teatro Costanzi, when crowds of well-wishers at the railway station initially forced Verdi to take refuge in a tool-shed. He witnessed the performance from the Royal Box at the side of King Umberto and the Queen.
In his last years Verdi undertook a number of philanthropic ventures, publishing in 1894 a song for the benefit of earthquake victims in Sicily, and from 1895 onwards planning, building and endowing a rest-home for retired musicians in Milan, the Casa di Riposo per Musicisti, and building a hospital at Villanova sull'Arda, close to Busseto. His last major composition, the choral set of "Four sacred pieces", was published in 1898. In 1900 he was deeply upset at the assassination of King Umberto and sketched a setting of a poem in his memory but was unable to complete it. While staying at the Grand Hotel, Verdi suffered a stroke on 21 January 1901. He gradually grew more feeble over the next week, during which Stolz cared for him, and died on 27 January at the age of 87.
Verdi was initially buried in a private ceremony at Milan's Cimitero Monumentale. A month later, his body was moved to the crypt of the Casa di Riposo. On this occasion, "Va, pensiero" from "Nabucco" was conducted by Arturo Toscanini with a chorus of 820 singers. A huge crowd was in attendance, estimated at 300,000. Boito wrote to a friend, in words which recall the mysterious final scene of "Don Carlos", "[Verdi] sleeps like a King of Spain in his Escurial, under a bronze slab that completely covers him."
Not all of Verdi's personal qualities were amiable. John Rosselli concluded after writing his biography that "I do not very much like the man Verdi, in particular the autocratic rentier-cum-estate owner, part-time composer, and seemingly full-time grumbler and reactionary critic of the later years", yet admits that like other writers, he must "admire him, warts and all...a deep integrity runs beneath his life, and can be felt even when he is being unreasonable or wrong."
Budden suggests that "With Verdi...the man and the artist in many ways developed side by side." Ungainly and awkward in society in his early years, "as he became a man of property and underwent the civilizing influence of Giuseppina...[he] acquired assurance and authority." He also learnt to keep himself to himself, never discussing his private life and maintaining, when it suited his convenience, legends about his supposed 'peasant' origins, his materialism and his indifference to criticism. Gerald Mendelsohn describes the composer as "an intensely private man who deeply resented efforts to inquire into his personal affairs. He regarded journalists and would-be biographers, as well as his neighbors in Busseto and the operatic public at large, as an intrusive lot, against whose prying attentions he needed constantly to defend himself."
Verdi was similarly never explicit about his religious beliefs. Anti-clerical by nature in his early years, he nonetheless built a chapel at Sant'Agata, but is rarely recorded as going to church. Strepponi wrote in 1871 "I won't say [Verdi] is an atheist, but he is not much of a believer." Rosselli comments that in the Requiem "The prospect of Hell appears to rule...[the Requiem] is troubled to the end," and offers little consolation.
"See also List of compositions by Giuseppe Verdi and individual articles on the works."
The writer Friedrich Schiller (four of whose plays were adapted as operas by Verdi) distinguished two types of artist in his 1795 essay "On Naïve and Sentimental Poetry". The philosopher Isaiah Berlin ranked Verdi in the 'naïve' category—"They are not...self-conscious. They do not...stand aside to contemplate their creations and express their own feelings...They are able...if they have genius, to embody their vision fully." (The 'sentimentals' seek to recreate nature and natural feelings on their own terms—Berlin instances Richard Wagner—"offering not peace, but a sword".) Verdi's operas are not written according to an aesthetic theory, or with a purpose to change the tastes of their audiences. In conversation with a German visitor in 1887 he is recorded as saying that, whilst "there was much to be admired in [Wagner's operas] "Tannhäuser" and "Lohengrin"...in his recent operas [Wagner] seemed to be overstepping the bounds of what can be expressed in music. For him "philosophical" music was incomprehensible." Although Verdi's works belong, as Rosselli admits "to the most artificial of genres...[they] ring emotionally true: truth and directness make them exciting, often hugely so."
The earliest study of Verdi's music, published in 1859 by the Italian critic Abramo Basevi, already distinguished four periods in Verdi's music. The early, 'grandiose' period ended, according to Basevi, with "La battaglia di Legnano" (1849), and a 'personal' style began with the next opera "Luisa Miller." These two operas are generally agreed today by critics to mark the division between Verdi's 'early' and 'middle' periods. The 'middle' period is felt to end with "La traviata" (1853) and "Les vêpres siciliennes" (1855), with a 'late' period commencing with "Simon Boccanegra" (1857) and running through to "Aida" (1871). The last two operas, "Otello" and "Falstaff", together with the Requiem and the "Four Sacred Pieces," then represent a 'final' period.
Verdi was to claim in his "Sketch" that during his early training with Lavigna "I did nothing but canons and fugues...No-one taught me orchestration or how to handle dramatic music." He is known to have written a variety of music for the Busseto Philharmonic society, including vocal music, band music and chamber works, (and including an alternative overture to Rossini's "Barber of Seville") but few of these works survive. (He may have given instructions before his death to destroy his early works).
Verdi uses in his early operas (and, in his own stylized versions, throughout his later work) the standard elements of Italian opera of the period, referred to by the opera writer Julian Budden as the 'Code Rossini', after the composer who established through his work and popularity the accepted templates of these forms; they were also used by the composers dominant during Verdi's early career, Bellini, Donizetti and Saverio Mercadante. Amongst the essential elements are the aria, the duet, the ensemble, and the finale sequence of an act. The aria format, centred on a soloist, typically involved three sections: a slow introduction, marked typically cantabile or adagio; a "tempo di mezzo", which might involve chorus or other characters; and a cabaletta, an opportunity for bravura singing by the soloist. The duet was similarly formatted. Finales, covering climactic sequences of action, used the various forces of soloists, ensemble and chorus, usually culminating with an exciting stretto section. Verdi was to develop these and the other formulae of the generation preceding him with increasing sophistication during his career.
The operas of the early period show Verdi learning by doing and gradually establishing mastery over the different elements of opera. "Oberto" is poorly structured, and the orchestration of the first operas is generally simple, sometimes even basic. The musicologist Richard Taruskin suggests "the most striking effect in the early Verdi operas, and the one most obviously allied to the mood of the Risorgimento, was the big choral number sung—crudely or sublimely, according to the ear of the beholder—in unison." The success of "Va, pensiero" in "Nabucco" (which Rossini approvingly denoted as "a grand aria sung by sopranos, contraltos, tenors and basses") was replicated in the similar "O Signor, dal tetto natio" in "I lombardi" and in 1844 in the chorus "Si ridesti il Leon di Castiglia" in "Ernani", the battle hymn of the conspirators seeking freedom. In "I due Foscari" Verdi first uses recurring themes identified with main characters; here and in future operas the accent moves away from the 'oratorio' characteristics of the first operas towards individual action and intrigue.
From this period onwards Verdi also develops his instinct for "tinta" (literally 'colour'), a term which he used for characterising elements of an individual opera score—Parker gives as an example "the rising 6th that begins so many lyric pieces in "Ernani"". "Macbeth", even in its original 1847 version, shows many original touches; characterization by key (the Macbeths themselves generally singing in sharp keys, the witches in flat keys), a preponderance of minor key music, and highly original orchestration. In the 'dagger scene' and the duet following the murder of Duncan, the forms transcend the 'Code Rossini' and propel the drama in a compelling fashion. Verdi was to comment in 1868 that Rossini and his followers missed "the golden thread that binds all the parts together and, rather than a set of numbers without coherence, makes an opera". "Tinta" was for Verdi this "golden thread", an essential unifying factor in his works.
The writer David Kimbell states that in "Luisa Miller" and "Stiffelio" (the earliest operas of this period) there appears to be a "growing freedom in the large scale structure...and an acute attention to fine detail". Others echo those feelings. Julian Budden expresses the impact of "Rigoletto" and its place in Verdi's output as follows: "Just after 1850 at the age of 38, Verdi closed the door on a period of Italian opera with "Rigoletto". The so-called "ottocento" in music is finished. Verdi will continue to draw on certain of its forms for the next few operas, but in a totally new spirit." One example of Verdi's wish to move away from "standard forms" appears in his feelings about the structure of "Il trovatore". To his librettist, Cammarano, Verdi plainly states in a letter of April 1851 that if there were no standard forms—"cavatinas, duets, trios, choruses, finales, etc. ... and if you could avoid beginning with an opening chorus...", he would be quite happy.
Two external factors had their impacts on Verdi's compositions of this period. One is that with increasing reputation and financial security he no longer needed to commit himself to the productive treadmill, had more freedom to choose his own subjects, and had more time to develop them according to his own ideas. In the years 1849 to 1859 he wrote eight new operas, compared with fourteen in the previous ten years.
Another factor was the changed political situation; the failure of the 1848 revolutions led both to some diminution of the Risorgimento ethos (at least initially) and a significant increase in theatre censorship. This is reflected both in Verdi's choices of plots dealing more with personal relationships than political conflict, and in a (partly consequent) dramatic reduction in the operas of this period in the number of choruses (of the type which had first made him famous)—not only are there on average 40% fewer choruses in the 'middle' period operas compared to the 'early' period, but whereas virtually all the 'early' operas commence with a chorus, only one ("Luisa Miller") of the 'middle' period operas begins this way. Instead, Verdi experiments with a variety of means, e.g. a stage band ("Rigoletto"), an aria for bass ("Stiffelio"), a party scene ("La traviata"). Chusid also notes Verdi's increasing tendency to replace full-scale overtures with shorter orchestral introductions. Parker comments that "La traviata", the last opera of the 'middle' period, is "again a new adventure. It gestures towards a level of 'realism'...the contemporary world of waltzes pervades the score, and the heroine's death from disease is graphically depicted in the music." Verdi's increasing command of musical highlighting of changing moods and relationships is exemplified in Act III of "Rigoletto", where the Duke's flippant song "La donna è mobile" is followed immediately by the quartet "Bella figlia dell'amore", contrasting the rapacious Duke and his inamorata with the (concealed) indignant Rigoletto and his grieving daughter. Taruskin asserts this is "the most famous ensemble Verdi ever composed".
Chusid notes Strepponi's description of the operas of the 1860s and 1870s as being "modern", whereas Verdi described the pre-1849 works as "the cavatina operas", as further indication that "Verdi became increasingly dissatisfied with the older, familiar conventions of his predecessors that he had adopted at the outset of his career." Parker sees a physical differentiation of the operas from "Les vêpres siciliennes" (1855) to "Aida" (1871) in that they are significantly longer, and have larger cast-lists, than previous works. They also reflect a shift towards the French genre of grand opera, notable in more colorful orchestration, counterpointing of serious and comic scenes, and greater spectacle. The opportunities for transforming Italian opera by utilising such resources appealed to him. For a commission from the Paris Opéra he expressly demanded a libretto from Eugène Scribe, the favorite librettist of Meyerbeer, telling him: "I want—in fact, I must have—a grandiose, impassioned and original subject." The result was "Les vêpres siciliennes", and the scenarios of "Simon Boccanegra" (1857), "Un ballo in maschera" (1859), "La forza del destino" (1862), "Don Carlos" (1865) and "Aida" (1872) all meet the same criteria. Porter notes that "Un ballo" marks an almost complete synthesis of Verdi's style with the grand opera hallmarks, such that "huge spectacle is not mere decoration but essential to the drama...musical and theatrical lines remain taut [and] the characters still sing as warmly, passionately and personally as in "Il trovatore"."
When the composer Ferdinand Hiller asked Verdi whether he preferred "Aida" or "Don Carlos", Verdi replied that "Aida" had "more bite and (if you'll forgive the word), more "theatricality"". During the rehearsals for the Naples production of "Aida" Verdi amused himself by writing his only string quartet, a sprightly work which shows in its last movement that he had not lost the skill for fugue-writing that he had learned with Lavigna.
Verdi's three last major works continued to show new development in conveying drama and emotion. The first to appear, in 1874, was his Requiem, scored for operatic forces but by no means an "opera in ecclesiastical dress" (the words in which Hans von Bülow condemned it before even hearing it). Although in the Requiem Verdi puts to use many of the techniques he learned in opera, its musical forms and emotions are not those of the stage. Verdi's tone painting at the opening of the Requiem is vividly described by the Italian composer Ildebrando Pizzetti, writing in 1941: "in [the words] murmured by an invisible crowd over the slow swaying of a few simple chords, you straightaway sense the fear and sadness of a vast multitude before the mystery of death. In the [following] "Et lux perpetuum" the melody spreads its wings...before falling back on itself...you hear a sigh for consolation and eternal peace."
By the time "Otello" premièred in 1887, more than 15 years after "Aida", the operas of Verdi's contemporary Richard Wagner (who had died in 1883) had begun their ascendancy in popular taste, and many sought or identified Wagnerian aspects in Verdi's latest composition. Budden points out that there is little in the music of "Otello" that relates to the "verismo" opera of the younger Italian composers, and little if anything which can be construed as a homage to the New German School. Nonetheless there is still much originality, building on the strengths which Verdi had already demonstrated: the powerful storm which opens the opera "in medias res", the recollection of the love duet of Act I in Otello's dying words (more an aspect of "tinta" than "leitmotif"), and imaginative touches of harmony in Iago's "Era la notte" (Act II).
Finally, six years later, appeared "Falstaff", Verdi's only comedy apart from the early, ill-fated "Un giorno di regno". In this work Roger Parker writes that:
Although Verdi's operas brought him a popular following, not all contemporary critics approved of his work. The English critic Henry Chorley allowed in 1846 that "he is the only modern man...having a style—for better or worse", but found all his output unacceptable. "[His] faults [are] grave ones, calculated to destroy and degrade taste beyond those of any Italian composer in the long list" wrote Chorley, whilst conceding that "howsoever incomplete may have been his training, howsoever mistaken his aspirations may have proved...he "has" aspired." But by the time of Verdi's death, 55 years later, his reputation was assured, and the 1910 edition of Grove's Dictionary pronounced him "one of the greatest and most popular opera composers of the nineteenth century".
Verdi had no pupils apart from Muzio, and no school of composers sought to follow his style, which, however much it reflected his own musical direction, was rooted in the period of his own youth. By the time of his death, "verismo" was the accepted style of young Italian composers. The New York Metropolitan Opera frequently staged "Rigoletto, Trovatore" and "Traviata" during this period and featured "Aida" in every season from 1898 to 1945. Interest in the operas reawakened in mid-1920s Germany and this sparked a revival in England and elsewhere. From the 1930s onward there began to appear scholarly biographies and publications of documentation and correspondence.
In 1959 the Istituto di Studi Verdiani (from 1989 the Istituto Nazionale di Studi Verdiani) was founded in Parma and became a leading centre for research and publication of Verdi studies, and in the 1970s the American Institute for Verdi Studies was founded at New York University.
Historians have debated how political Verdi's operas were. In particular, the "Chorus of the Hebrew Slaves" (known as "Va, pensiero") from the third act of the opera "Nabucco" was used as an anthem for Italian patriots, who were seeking to unify their country and free it from foreign control in the years up to 1861 (the chorus's theme of exiles singing about their homeland, and its lines such as "O mia patria, si bella e perduta" / "O my country, so lovely and so lost", were thought to have resonated with many Italians). Beginning in Naples in 1859 and spreading throughout Italy, the slogan "Viva VERDI" was used as an acronym for "Viva Vittorio Emanuele Re D'Italia" ("Long live Victor Emmanuel King of Italy"), referring to Victor Emmanuel II. Marco Pizzo argues that after 1815, music became a political tool, and many songwriters expressed ideals of freedom and equality. Pizzo claims that Verdi was part of this movement, for his operas were inspired by the love of country and the struggle for Italian independence, and speak to the sacrifice of patriots and exiles. George Martin claims Verdi was "the greatest artist" of the Risorgimento. "Throughout his work its values, its issues recur constantly, and he expressed them with great power".
But Mary Ann Smart argues that music critics at the time seldom mentioned any political themes. Likewise, Roger Parker argues that the political dimension of Verdi's operas was exaggerated by nationalistic historians looking for a hero in the late 19th century.
From the 1850s onwards, Verdi's operas displayed few patriotic themes because of the heavy censorship by the absolutist regime in power. Verdi later became disillusioned with politics, but he took a personally active part in the political events of the Risorgimento and was elected to the first Italian parliament in 1861.
Three Italian conservatories, the Milan Conservatory and those in Turin and Como, are named after Verdi, as are many Italian theatres.
Verdi's hometown of Busseto displays Luigi Secchi's 1913 statue of a seated Verdi, next to the Teatro Verdi built in his honour in the 1850s. It is one of many statues of the composer in Italy. The Giuseppe Verdi Monument, a 1906 marble memorial sculpted by Pasquale Civiletti, is located in Verdi Square in Manhattan, New York City. The monument includes a statue of Verdi himself and life-sized statues of four characters from his operas (Aida, Otello, and Falstaff from the operas of the same names, and Leonora from "Il trovatore").
Verdi has been the subject of a number of film and stage works. These include the 1938 film directed by Carmine Gallone, "Giuseppe Verdi", starring Fosco Giachetti; the 1982 miniseries, "The Life of Verdi", directed by Renato Castellani, where Verdi was played by Ronald Pickup, with narration by Burt Lancaster in the English version; and the 1985 play "After Aida" by Julian Mitchell. He is a character in the 2011 opera "Risorgimento!" by Italian composer Lorenzo Ferrero, written to commemorate the 150th anniversary of the Italian unification of 1861.
Verdi's operas are frequently staged around the world. All of his operas are available in recordings in a number of versions, and on DVD – Naxos Records offers a complete boxed set.
Modern productions may differ substantially from those originally envisaged by the composer. Jonathan Miller's 1982 version of "Rigoletto" for English National Opera, set in the world of modern American mafiosi, received critical plaudits. But the same company's staging in 2002 of "Un ballo in maschera" as "A Masked Ball", directed by Calixto Bieito, including "satanic sex rituals, homosexual rape, [and] a demonic dwarf", got a general critical thumbs down.
Meanwhile, the music of Verdi can still evoke a range of cultural and political resonances. Excerpts from the Requiem were featured at the funeral of Diana, Princess of Wales in 1997. On 12 March 2011, during a performance of "Nabucco" at the Opera di Roma celebrating 150 years of Italian unification, the conductor Riccardo Muti paused after "Va pensiero" and turned to address the audience (which included the then Italian Prime Minister, Silvio Berlusconi) to complain about cuts in state funding of culture; the audience then joined in a repeat of the chorus. In 2014, the pop singer Katy Perry appeared at the Grammy Awards wearing a dress designed by Valentino, embroidered with the music of "Dell'invito trascorsa è già l'ora" from the start of "La traviata". The bicentenary of Verdi's birth in 2013 was celebrated in numerous events around the world, both in performances and broadcasts.
|
https://en.wikipedia.org/wiki?curid=12958
|
German Navy
The German Navy ("Deutsche Marine", officially "Marine") is the navy of Germany and part of the unified "Bundeswehr" (Federal Defense), the German Armed Forces. The German Navy was originally known as the "Bundesmarine" (Federal Navy) from 1956 to 1995, when "Deutsche Marine" (German Navy) became the unofficial name following the 1990 incorporation of the East German "Volksmarine" (People's Navy). It is deeply integrated into the NATO alliance. Its primary mission is the protection of Germany's territorial waters and maritime infrastructure as well as its sea lines of communication. Apart from this, the German Navy participates in peacekeeping operations, renders humanitarian assistance and disaster relief, and takes part in anti-piracy operations.
The German Navy traces its roots back to the "Reichsflotte" (Imperial Fleet) of the revolutionary era of 1848–52. The "Reichsflotte" was the first German navy to sail under the black-red-gold flag. Founded on 14 June 1848 by the orders of the democratically elected Frankfurt Parliament, the "Reichsflotte"'s brief existence ended with the failure of the revolution and it was disbanded on 2 April 1852; thus, the modern-day navy celebrates its birthday on 14 June.
Between May 1945 and 1956, the German Mine Sweeping Administration and its successor organizations, made up of former members of Nazi Germany's "Kriegsmarine" (War Navy), became something of a transition stage for the navy, allowing the future "Marine" to draw on recently experienced personnel upon its formation. Also, from 1949 to 1952 the US Navy maintained the Naval Historical Team in Bremerhaven. This group of former "Kriegsmarine" officers, acting as historical and tactical consultants to the Americans, was significant in establishing a German element in the NATO senior naval staff. In 1956, with West Germany's accession to NATO, the "Bundesmarine" (Federal Navy), as the navy was known colloquially, was formally established. In the same year the East German "Volkspolizei See" (literally People's Police Sea) became the "Volksmarine" (People's Navy). During the Cold War all of the German Navy's combat vessels were assigned to NATO's Allied Forces Baltic Approaches naval command, NAVBALTAP.
With the accession of East Germany to the Federal Republic of Germany in 1990, the "Volksmarine", along with the whole National People's Army, became part of the "Bundeswehr". Since 1995 the name "German Navy" has been used in international contexts, while the official name since 1956 remains "Marine" without any additions. As of April 2020, the strength of the navy was 16,704 men and women.
A number of German naval forces have operated in different periods.
German warships permanently participate in all four NATO Maritime Groups. The German Navy is also engaged in operations against international terrorism such as Operation Enduring Freedom and NATO Operation Active Endeavour.
At present, the largest operation in which the German Navy is participating is UNIFIL off the coast of Lebanon. The German contribution to this operation is two frigates, four fast attack craft, and two auxiliary vessels. The naval component of UNIFIL has been under German command.
The navy is operating a number of development and testing installations as part of an inter-service and international network. Among these is the Centre of Excellence for Operations in Confined and Shallow Waters (COE CSW), an affiliated centre of Allied Command Transformation. The COE CSW was established in April 2007 and officially accredited by NATO on 26 May 2009. It is co-located with the staff of the German Flotilla 1 in Kiel whose Commander is double-hatted as Director, COE CSW.
In total, there are about 65 commissioned ships in the German Navy, including 10 frigates, 5 corvettes, 2 minesweepers, 10 minehunters, 6 submarines, 11 replenishment ships and 20 miscellaneous auxiliary vessels. The combined displacement of these ships is about 220,000 tonnes.
In addition, the German Navy and the Royal Danish Navy cooperate in the "Ark Project". Under this agreement, the Ark Project is responsible for the strategic sealift of the German armed forces, maintaining the full-time charter of three roll-on/roll-off cargo and troop ships ready for deployment. These ships are also kept available for use by other European NATO countries. The three vessels have a combined displacement of 60,000 tonnes.
Including these ships, the total displacement available to the "Deutsche Marine" is about 280,000 tonnes.
Procurement of Joint Support Ships (either two JSS800 for an amphibious group of 800 soldiers, or three smaller JSS400) was planned during the 1995–2010 period, but the programme now appears to have been abandoned, not having been mentioned in two recent defence reviews. The larger ships would have been tasked for strategic troop transport and amphibious operations, and were to displace 27,000 to 30,000 tons for 800 soldiers. The German Navy will use the Joint Support Ship HNLMS Karel Doorman (A833) of the Royal Netherlands Navy as part of the integration of the German Navy's marines ("Seebataillon") into the Royal Netherlands Marine Corps as of 2016.
The naval air arm of the German Navy is called the "Marinefliegerkommando". The "Marinefliegerkommando" operates 56 aircraft.
The German Navy is commanded by the Inspector of the Navy ("Inspekteur der Marine") supported by the Navy Command ("Marinekommando") in Rostock.
|
https://en.wikipedia.org/wiki?curid=12960
|
GÉANT
GÉANT is the pan-European data network for the research and education community. It interconnects national research and education networks (NRENs) across Europe, enabling collaboration on projects ranging from biological science to earth observation to arts and culture. The GÉANT project combines a high-bandwidth, high-capacity 50,000 km network with a growing range of services that allow researchers to collaborate wherever they are located. Services include identity and trust, multi-domain monitoring (perfSONAR MDM), dynamic circuits and roaming via the eduroam service.
Together with European NRENs, GÉANT connects 50 million users in over 10,000 institutions. Through links to research networks in other regions (such as Internet2 and ESnet in the USA, AfricaConnect in Africa, TEIN in Asia-Pacific and RedCLARA in Latin America), GÉANT enables collaboration between researchers in over half the world’s countries.
Co-funded by the European Commission and Europe’s NRENs, the GÉANT network was built and is operated by the GÉANT Association. The GÉANT project is a collaboration between 41 partners: 38 European NRENs, and NORDUnet (representing the five Nordic countries).
The GÉANT project began in November 2000 and entered full production operation in December 2001, fully replacing a network called TEN-155. Originally due to finish in October 2004, it was subsequently extended until April 2005.
The second generation network, named GÉANT2, began in September 2004 and continued through 2009, growing the network to 30 national networks in 34 countries.
The next GÉANT project (GN3) began on 1 April 2009 and continued until April 2013. It was then superseded by the GN3plus project, which was scheduled to run for two years and was funded under the EC's Seventh Research Framework Programme (often referred to as FP7).
The project is now in its fourth iteration (GN4).
As well as providing the high-bandwidth links across Europe, the GÉANT network also acts as a testbed for new technology.
It was the first "hybrid" network deployed on an international scale, combining routed IP and switched infrastructure. This enables the network to offer general traffic alongside virtual "private" network paths for projects, such as the Large Hadron Collider, which have particular requirements involving dedicated bandwidth, security and flexibility.
GÉANT has supported native IPv6 since 2002 and multicast IPv6 since 2004. It is involved in network research in areas such as carrier-class network technologies, photonic switching, federated network architectures and virtualisation.
In 2013 a substantial network migration programme was completed, meaning users could be offered multiple 100 Gbit/s links, with the core network supporting 500 Gbit/s and a network design that will support up to 8 Tbit/s.
Already, over one petabyte of data is transferred every day via the GÉANT backbone network.
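As a rough, illustrative back-of-the-envelope check (not a figure published by the GÉANT project), one petabyte per day corresponds to an average rate of roughly 93 Gbit/s, which fits comfortably within a single 100 Gbit/s link and well below the 500 Gbit/s core capacity mentioned above:

```python
# Illustrative calculation only: the implied average bit rate of
# "one petabyte per day" (assuming decimal units, 1 PB = 10^15 bytes).
bytes_per_day = 1e15
bits_per_day = bytes_per_day * 8
seconds_per_day = 24 * 60 * 60  # 86,400 s

avg_gbit_per_s = bits_per_day / seconds_per_day / 1e9
print(f"Average throughput: {avg_gbit_per_s:.1f} Gbit/s")  # ~92.6 Gbit/s
```

Peak traffic is of course far higher than this daily average, which is one reason the backbone is provisioned well beyond it.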
The full list of NREN project partners is available on the website.
GÉANT also links to research networks in other world regions.
These links not only support international research collaboration but also aid projects that deliver societal benefit, such as e-health, telemedicine and weather forecasting/disaster warning systems. Allowing researchers to work within their own countries also helps stem migration from less developed countries, helping to bridge the digital divide.
GÉANT is used by many research communities.
|
https://en.wikipedia.org/wiki?curid=12961
|
Gamma-Hydroxybutyric acid
"gamma"-Hydroxybutyric acid or γ-Hydroxybutyric acid (GHB), also known as 4-hydroxybutanoic acid, is a naturally occurring neurotransmitter and a psychoactive drug. It is a precursor to GABA, glutamate, and glycine in certain brain areas. It acts on the GHB receptor and is a weak agonist at the GABAB receptor. GHB has been used in a medical setting as a general anesthetic and as a treatment for cataplexy, narcolepsy, and alcoholism. It is also used illegally as an intoxicant, as an athletic performance enhancer, as a date rape drug, and as a recreational drug.
It is commonly used in the form of a salt, such as sodium γ-hydroxybutyrate (NaGHB, sodium oxybate, or Xyrem) or potassium γ-hydroxybutyrate (KGHB, potassium oxybate). GHB is also produced as a result of fermentation, and is found in small quantities in some beers and wines, beef and small citrus fruits.
Succinic semialdehyde dehydrogenase deficiency is a disease that causes GHB to accumulate in the blood.
The only common medical uses for GHB today are in the treatment of narcolepsy and, more rarely, alcoholism, although its use for alcoholism is not supported by evidence from randomized controlled trials. It is sometimes used off-label for the treatment of fibromyalgia. GHB is the active ingredient of the prescription medication sodium oxybate (Xyrem). Sodium oxybate is approved by the U.S. Food and Drug Administration for the treatment of cataplexy associated with narcolepsy and excessive daytime sleepiness (EDS) associated with narcolepsy.
GHB has been shown to reliably increase slow-wave sleep and decrease the tendency for REM sleep in modified multiple sleep latency tests.
GHB is a central nervous system depressant used as an intoxicant. It has many street names. Its effects have been described anecdotally as comparable with ethanol (alcohol) and MDMA use, such as euphoria, disinhibition, enhanced libido and empathogenic states. At higher doses, GHB may induce nausea, dizziness, drowsiness, agitation, visual disturbances, depressed breathing, amnesia, unconsciousness, and death. One potential cause of death from GHB consumption is polydrug toxicity. Co-administration with other CNS depressants such as alcohol or benzodiazepines can result in an additive effect (potentiation), as they all bind to gamma-aminobutyric acid (or "GABA") receptor sites. The effects of GHB can last from 1.5 to 4 hours, or longer if large doses have been consumed. Consuming GHB with alcohol can cause respiratory arrest and vomiting in combination with unrousable sleep, which can contribute to a lethal outcome.
Recreational doses of 1–2 g generally provide a feeling of euphoria, and larger doses create deleterious effects such as reduced motor function and drowsiness. The sodium salt of GHB has a salty taste. Other salt forms such as calcium GHB and magnesium GHB have also been reported, but the sodium salt is by far the most common.
Some prodrugs, such as γ-butyrolactone (GBL), convert to GHB in the stomach and blood stream. Other prodrugs, such as 1,4-butanediol (1,4-B), have their own toxicity concerns. GBL and 1,4-B are normally found as pure liquids, but they can be mixed with other more harmful solvents when intended for industrial use (e.g. as paint stripper or varnish thinner).
GHB can be manufactured with little knowledge of chemistry, as it involves the mixing of its two precursors, GBL and an alkali hydroxide such as sodium hydroxide, to form the GHB salt. Due to the ease of manufacture and the availability of its precursors, it is not usually produced in illicit laboratories like other synthetic drugs, but in private homes by low-level producers.
GHB is "colourless and odourless".
A 2006 report commissioned by a UK parliamentary committee found the use of GHB to be less dangerous than tobacco and alcohol in terms of physical harm, dependence and social harm.
GHB has been used as a club drug, apparently starting in the 1990s, as small doses of GHB can act as a euphoriant and are believed to be aphrodisiac. Slang terms for GHB include "liquid ecstasy", "lollipops", "liquid X" or "liquid E" due to its tendency to produce euphoria and sociability and its use in the dance party scene.
By 2009 this use had diminished, possibly due to efforts to control the distribution of GHB and its analogs, or to its narrow dosing range and the adverse effects of overdose, including confusion, dizziness, blurred vision, hot/cold flushes, profuse sweating, vomiting, and loss of consciousness. The downward trend was still apparent in 2012.
Some athletes have used GHB or its analogs because they have been marketed as anabolic agents, although there is no evidence that GHB builds muscle or improves performance.
GHB became known to the general public as a date rape drug by the late 1990s. GHB is colourless and odourless and has been described as "very easy to add to drinks". When consumed, the victim will quickly feel groggy and sleepy and may become unconscious. Upon recovery they may have an impaired ability to recall events that occurred during the period of intoxication. In these situations, gathering evidence and identifying the perpetrator of the rape is often difficult.
It is also difficult to establish how often GHB is used to facilitate rape as it is difficult to detect in a urine sample after a day, and many victims may only recall the rape some time after its occurrence; however, a 2006 study suggested that there was "no evidence to suggest widespread date rape drug use" in the UK, and that less than 2% of cases involved GHB, while 17% involved cocaine, and a survey in the Netherlands published in 2010 found that the proportion of drug-related rape where GHB was used appeared to be greatly overestimated by the media.
There have been several high-profile cases of GHB being used as a date rape drug that received national attention in the United States. In early 1999, a 15-year-old girl, Samantha Reid of Rockwood, Michigan, died from GHB poisoning. Reid's death inspired the legislation titled the "Hillory J. Farias and Samantha Reid Date-Rape Drug Prohibition Act of 2000", the law that made GHB a Schedule I controlled substance.
GHB can be detected in hair. Hair testing can be a useful tool in court cases or for the victim's own information. Most over-the-counter urine test kits test only for date rape drugs that are benzodiazepines, and GHB is not a benzodiazepine. To detect GHB in urine, the sample must be taken within four hours of GHB ingestion, and cannot be tested at home.
In humans, GHB has been shown to reduce the elimination rate of alcohol. This may explain the respiratory arrest that has been reported after ingestion of both drugs. A review of the details of 194 deaths attributed to or related to GHB over a ten-year period found that most were from respiratory depression caused by interaction with alcohol or other drugs.
One publication has investigated 226 deaths attributed to GHB. Of 226 deaths included, 213 had a cardiorespiratory arrest and 13 had fatal accidents. Seventy-one deaths (34%) had no co-intoxicants. Postmortem blood GHB was 18–4400 mg/L (median=347) in deaths negative for co-intoxicants.
One report has suggested that sodium oxybate overdose might be fatal, based on deaths of three patients who had been prescribed the drug. However, for two of the three cases, post-mortem GHB concentrations were 141 and 110 mg/L, which is within the expected range of concentrations for GHB after death, and the third case was a patient with a history of intentional drug overdose. The toxicity of GHB has been an issue in criminal trials, as in the death of Felicia Tang, where the defense argued that death was due to GHB, not murder.
GHB is produced in the body in very small amounts, and blood levels may climb after death to levels in the range of 30–50 mg/L. Levels higher than this are found in GHB deaths. Levels lower than this may be due to GHB or to postmortem endogenous elevations.
In multiple studies, GHB has been found to impair spatial memory, working memory, learning and memory in rats with chronic administration. These effects are associated with decreased NMDA receptor expression in the cerebral cortex and possibly other areas as well. In addition, the neurotoxicity appears to be caused by oxidative stress.
Although there have been reported fatalities due to GHB withdrawal, reports are inconclusive and further research is needed. A common problem is that GHB does not leave traces in the body after a short period of time, complicating diagnosis and research. Addiction occurs when repeated drug use disrupts the normal balance of brain circuits that control rewards, memory and cognition, ultimately leading to compulsive drug taking.
Rats forced to consume massive doses of GHB will intermittently prefer GHB solution to water; however, it was noted that "no rat showed any sign of withdrawal when GHB was finally removed at the end of the 20-week period" or during periods of voluntary abstinence.
GHB has also been associated with a withdrawal syndrome of insomnia, anxiety, and tremor that usually resolves within three to twenty-one days. The withdrawal syndrome can be severe, producing acute delirium, and may require hospitalization in an intensive care unit for management. Management of GHB dependence involves considering the person's age, comorbidity and the pharmacological pathways of GHB. The mainstay of treatment for severe withdrawal is supportive care and benzodiazepines for control of acute delirium, but larger doses are often required compared to acute delirium of other causes (e.g. > 100 mg/d of diazepam). Baclofen has been suggested as an alternative or adjunct to benzodiazepines based on anecdotal evidence and some animal data. However, there is less experience with the use of baclofen for GHB withdrawal, and additional research in humans is needed. Baclofen was first suggested as an adjunct because benzodiazepines do not affect GABAB receptors and therefore have no cross-tolerance with GHB, while baclofen, which works via GABAB receptors, is cross-tolerant with GHB and may be more effective in alleviating withdrawal effects of GHB.
GHB withdrawal is not widely discussed in textbooks and some psychiatrists, general practitioners, and even hospital emergency physicians may not be familiar with this withdrawal syndrome.
Overdose of GHB can sometimes be difficult to treat because of its multiple effects on the body. GHB tends to cause rapid unconsciousness at doses above 3500 mg, with single doses over 7000 mg often causing life-threatening respiratory depression, and higher doses still inducing bradycardia and cardiac arrest. Other side-effects include convulsions (especially when combined with stimulants), and nausea/vomiting (especially when combined with alcohol).
The greatest life threat due to GHB overdose (with or without other substances) is respiratory arrest. Other relatively common causes of death due to GHB ingestion include aspiration of vomitus, positional asphyxia, and trauma sustained while intoxicated (e.g., motor vehicle accidents while driving under the influence of GHB). The risk of aspiration pneumonia and positional asphyxia can be reduced by laying the patient down in the recovery position. People are most likely to vomit as they become unconscious, and as they wake up. It is important to keep the victim awake and moving; the victim must not be left alone due to the risk of death through vomiting. Frequently the victim will be in a good mood but this does not mean the victim is not in danger. GHB overdose is a medical emergency and immediate assessment in an emergency department is needed.
Convulsions from GHB can be treated with the benzodiazepines diazepam or lorazepam. Even though these benzodiazepines are also CNS depressants, they primarily modulate GABAA receptors whereas GHB is primarily a GABAB receptor agonist, and so do not worsen CNS depression as much as might be expected.
Because of the faster and more complete absorption of GBL relative to GHB, its dose-response curve is steeper, and overdoses of GBL tend to be more dangerous and problematic than overdoses involving only GHB or 1,4-B. Any GHB/GBL overdose is a medical emergency and should be cared for by appropriately trained personnel.
A newer synthetic drug SCH-50911, which acts as a selective GABAB antagonist, quickly reverses GHB overdose in mice. However, this treatment has yet to be tried in humans, and it is unlikely that it will be researched for this purpose in humans due to the illegal nature of clinical trials of GHB and the lack of medical indemnity coverage inherent in using an untested treatment for a life-threatening overdose.
GHB may be quantitated in blood or plasma to confirm a diagnosis of poisoning in hospitalized patients, to provide evidence in an impaired driving case, or to assist in a medicolegal death investigation. Blood or plasma GHB concentrations are usually in a range of 50–250 mg/L in persons receiving the drug therapeutically (during general anesthesia), 30–100 mg/L in those arrested for impaired driving, 50–500 mg/L in acutely intoxicated patients and 100–1000 mg/L in victims of fatal overdosage. Urine is often the preferred specimen for routine drug abuse monitoring purposes. Both γ-butyrolactone (GBL) and 1,4-butanediol are converted to GHB in the body.
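The concentration bands quoted above overlap considerably, so a single measurement rarely identifies the context on its own. The following minimal sketch (illustrative only, not a clinical or forensic tool; the band names and limits are simply those quoted in the preceding paragraph) shows how a measured value compares against them:

```python
# Illustrative readability aid: compare a measured GHB blood/plasma
# concentration (mg/L) against the reference bands quoted in the text.
# The bands overlap, so this is not a diagnostic tool.
REFERENCE_BANDS = {
    "therapeutic (general anesthesia)": (50, 250),
    "arrested for impaired driving": (30, 100),
    "acutely intoxicated patients": (50, 500),
    "victims of fatal overdosage": (100, 1000),
}

def matching_bands(concentration_mg_per_l):
    """Return every quoted band that the measured value falls within."""
    return [
        label
        for label, (low, high) in REFERENCE_BANDS.items()
        if low <= concentration_mg_per_l <= high
    ]

print(matching_bands(80))  # a value of 80 mg/L falls within three of the four bands
```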
In January 2016, it was announced that scientists had developed a way to detect GHB, among other substances, in saliva.
Cells produce GHB by reduction of succinic semialdehyde via succinic semialdehyde reductase (SSR). This enzyme appears to be induced by cAMP levels, meaning substances that elevate cAMP, such as forskolin and vinpocetine, may increase GHB synthesis and release. Conversely, endogenous GHB production in those taking valproic acid will be inhibited via inhibition of the conversion from succinic semialdehyde to GHB. People with the disorder known as succinic semialdehyde dehydrogenase deficiency, also known as γ-hydroxybutyric aciduria, have elevated levels of GHB in their urine, blood plasma and cerebrospinal fluid.
The precise function of GHB in the body is not clear. It is known, however, that the brain expresses a large number of receptors that are activated by GHB. These receptors are excitatory, however, and therefore not responsible for the sedative effects of GHB; they have been shown to elevate the principal excitatory neurotransmitter, glutamate. The benzamide antipsychotics—amisulpride, nemonapride, etc.—have been shown to bind to these GHB-activated receptors in vivo. Other antipsychotics were tested and were not found to have an affinity for this receptor.
GHB is a precursor to GABA, glutamate, and glycine in certain brain areas.
In spite of its demonstrated neurotoxicity (see the relevant section above), GHB has neuroprotective properties, and has been found to protect cells from hypoxia.
GHB is also produced as a result of fermentation and so is found in small quantities in some beers and wines, in particular fruit wines. The amount found in wine is pharmacologically insignificant and not sufficient to produce psychoactive effects.
GHB has at least two distinct binding sites in the central nervous system. GHB is an agonist at the newly characterized GHB receptor, which is excitatory, and it is a weak agonist at the GABAB receptor, which is inhibitory. GHB is a naturally occurring substance that acts in a similar fashion to some neurotransmitters in the mammalian brain. GHB is probably synthesized from GABA in GABAergic neurons, and released when the neurons fire.
GHB has been found to activate oxytocinergic neurons in the supraoptic nucleus.
If taken orally, GABA itself does not effectively cross the blood–brain barrier.
GHB induces the accumulation of either a derivative of tryptophan or tryptophan itself in the extracellular space, possibly by increasing tryptophan transport across the blood–brain barrier. The blood content of certain neutral amino-acids, including tryptophan, is also increased by peripheral GHB administration. GHB-induced stimulation of tissue serotonin turnover may be due to an increase in tryptophan transport to the brain and in its uptake by serotonergic cells. As the serotonergic system may be involved in the regulation of sleep, mood, and anxiety, the stimulation of this system by high doses of GHB may be involved in certain neuropharmacological events induced by GHB administration.
However, at therapeutic doses, GHB reaches much higher concentrations in the brain and activates GABAB receptors, which are primarily responsible for its sedative effects. GHB's sedative effects are blocked by GABAB antagonists.
The role of the GHB receptor in the behavioural effects induced by GHB is more complex. GHB receptors are densely expressed in many areas of the brain, including the cortex and hippocampus, and these are the receptors that GHB displays the highest affinity for. There has been somewhat limited research into the GHB receptor; however, there is evidence that activation of the GHB receptor in some brain areas results in the release of glutamate, the principal excitatory neurotransmitter. Drugs that selectively activate the GHB receptor cause absence seizures in high doses, as do GHB and GABA(B) agonists.
Activation of both the GHB receptor and GABA(B) is responsible for the addictive profile of GHB. GHB's effect on dopamine release is biphasic. Low concentrations stimulate dopamine release via the GHB receptor. Higher concentrations inhibit dopamine release via GABA(B) receptors as do other GABA(B) agonists such as baclofen and phenibut. After an initial phase of inhibition, dopamine release is then increased via the GHB receptor. Both the inhibition and increase of dopamine release by GHB are inhibited by opioid antagonists such as naloxone and naltrexone. Dynorphin may play a role in the inhibition of dopamine release via kappa opioid receptors.
This explains the paradoxical mix of sedative and stimulatory properties of GHB, as well as the so-called "rebound" effect, experienced by individuals using GHB as a sleeping agent, wherein they awake suddenly after several hours of GHB-induced deep sleep. That is to say that, over time, the concentration of GHB in the system decreases below the threshold for significant GABAB receptor activation and activates predominantly the GHB receptor, leading to wakefulness.
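One way to visualise this "rebound" is with a toy model in which a single dose decays over time: while the concentration is above a (hypothetical) level needed for substantial GABAB activation the net effect is sedation, and once it falls into a lower range where mainly the GHB receptor is engaged the net effect shifts towards wakefulness. The half-life and thresholds below are invented purely for illustration and are not pharmacological data:

```python
import math

# Toy model of the "rebound" effect described above. All numbers are
# invented for illustration; they are not pharmacological parameters.
HALF_LIFE_H = 0.75        # hypothetical elimination half-life (hours)
GABAB_THRESHOLD = 100.0   # hypothetical level for sedation via GABA-B
GHB_R_THRESHOLD = 10.0    # hypothetical level for GHB-receptor effects

def concentration(c0, hours):
    """Simple exponential decay of the initial concentration c0."""
    return c0 * math.exp(-math.log(2) * hours / HALF_LIFE_H)

def dominant_effect(level):
    if level >= GABAB_THRESHOLD:
        return "sedation (GABA-B dominant)"
    if level >= GHB_R_THRESHOLD:
        return "wakefulness (GHB receptor dominant)"
    return "negligible"

for t in range(6):
    level = concentration(300.0, t)
    print(f"t={t} h  level={level:6.1f}  ->  {dominant_effect(level)}")
```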
Recently, analogs of GHB, such as 4-hydroxy-4-methylpentanoic acid (UMB68) have been synthesised and tested on animals, in order to gain a better understanding of GHB's mode of action. Analogues of GHB such as 3-methyl-GHB, 4-methyl-GHB, and 4-phenyl-GHB have been shown to produce similar effects to GHB in some animal studies, but these compounds are even less well researched than GHB itself. Of these analogues, only 4-methyl-GHB (γ-hydroxyvaleric acid, GHV) and a prodrug form γ-valerolactone (GVL) have been reported as drugs of abuse in humans, and on the available evidence seem to be less potent but more toxic than GHB, with a particular tendency to cause nausea and vomiting.
Other prodrug ester forms of GHB have also rarely been encountered by law enforcement, including 1,4-butanediol diacetate (BDDA/DABD), methyl-4-acetoxybutanoate (MAB), and ethyl-4-acetoxybutanoate (EAB), but these are, in general, covered by analogue laws in jurisdictions where GHB is illegal, and little is known about them beyond their delayed onset and longer duration of action. The intermediate compound γ-hydroxybutyraldehyde (GHBAL) is also a prodrug for GHB; however, as with all aliphatic aldehydes this compound is caustic and is strong-smelling and foul-tasting; actual use of this compound as an intoxicant is likely to be unpleasant and result in severe nausea and vomiting.
Both of the metabolic breakdown pathways shown for GHB can run in either direction, depending on the concentrations of the substances involved, so the body can make its own GHB either from GABA or from succinic semialdehyde. Under normal physiological conditions, the concentration of GHB in the body is rather low, and the pathways would run in the reverse direction to what is shown here to produce endogenous GHB. However, when GHB is consumed for recreational or health promotion purposes, its concentration in the body is much higher than normal, which changes the enzyme kinetics so that these pathways operate to metabolise GHB rather than producing it.
Alexander Zaytsev worked on this chemical family and published work on it in 1874. The first extended research into GHB and its use in humans was conducted in the early 1960s by Dr. Henri Laborit for use in studying the neurotransmitter GABA. It was studied in a range of uses, including obstetric surgery and during childbirth and as an anxiolytic; there were anecdotal reports of it having antidepressant and aphrodisiac effects as well. It was also studied as an intravenous anesthetic agent and was marketed for that purpose starting in 1964 in Europe, but it was not widely adopted as it caused seizures; as of 2006 that use was still authorized in France and Italy but not widely employed. It was also studied as a treatment for alcohol addiction; while the evidence for this use is weak, sodium oxybate is marketed for this use in Italy.
GHB and sodium oxybate were also studied for use in narcolepsy from the 1960s onwards.
In May 1990 GHB was introduced as a dietary supplement and was marketed to bodybuilders for help with weight control, as a sleep aid, and as a "replacement" for l-tryptophan, which was removed from the market in November 1989 when batches contaminated with trace impurities were found to cause eosinophilia-myalgia syndrome (although eosinophilia-myalgia syndrome is also tied to tryptophan overload). In 2001 tryptophan supplement sales were allowed to resume, and in 2005 the FDA ban on tryptophan supplement importation was lifted. By November 1989, 57 cases of illness caused by the GHB supplements had been reported to the Centers for Disease Control and Prevention, with people having taken up to three teaspoons of GHB; there were no deaths, but nine people needed care in an intensive care unit. The FDA issued a warning in November 1990 that the sale of GHB was illegal. GHB continued to be manufactured and sold illegally, and it and its analogs were adopted as a club drug and came to be used as a date rape drug; the DEA made seizures and the FDA reissued warnings several times throughout the 1990s.
At the same time, research on the use of GHB in the form of sodium oxybate had formalized, as a company called Orphan Medical had filed an investigational new drug application and was running clinical trials with the intention of gaining regulatory approval for use to treat narcolepsy.
A popular children's toy, Bindeez (also known as Aqua Dots, in the United States), produced by Melbourne company Moose, was banned in Australia in early November 2007 when it was discovered that 1,4-butanediol (1,4-B), which is metabolized into GHB, had been substituted for the non-toxic plasticiser 1,5-pentanediol in the bead manufacturing process. Three young children were hospitalized as a result of ingesting a large number of the beads, and the toy was recalled.
In the United States, GHB was placed on Schedule I of the Controlled Substances Act in March 2000. However, when used in sodium oxybate under an IND or NDA from the US FDA, it is considered a Schedule III substance but with Schedule I trafficking penalties, making it one of several drugs that are listed in multiple schedules.
On 20 March 2001, the UN Commission on Narcotic Drugs placed GHB in Schedule IV of the 1971 Convention on Psychotropic Substances.
In the UK, GHB was made a Class C drug in June 2003. In October 2013 the ACMD recommended moving it from Schedule 4 to Schedule 2 in line with UN recommendations. Their report concluded that the minimal use of Xyrem in the UK meant that prescribers would be minimally inconvenienced by the rescheduling. This advice was followed, and GHB was moved to Schedule 2 on 7 January 2015.
In Hong Kong, GHB is regulated under Schedule 1 of Hong Kong's Chapter 134 "Dangerous Drugs Ordinance". It can only be used legally by health professionals and for university research purposes. The substance can be given by pharmacists under a prescription. Anyone who supplies the substance without prescription can be fined HK$10,000. The penalty for trafficking or manufacturing the substance is a HK$150,000 fine and life imprisonment. Possession of the substance for consumption without license from the Department of Health is illegal with a HK$100,000 fine or five years of jail time.
In Canada, GHB has been a Schedule I controlled substance since 6 November 2012 (the same schedule that contains heroin and cocaine). Prior to that date, it was a Schedule III controlled substance (the same schedule that contains amphetamines and LSD).
In New Zealand and Australia, GHB, 1,4-B, and GBL are all Class B illegal drugs, along with any possible esters, ethers, and aldehydes. GABA itself is also listed as an illegal drug in these jurisdictions, which seems unusual given its failure to cross the blood–brain barrier, but there was a perception among legislators that all known analogues should be covered as far as this was possible. Attempts to circumvent the illegal status of GHB have led to the sale of derivatives such as 4-methyl-GHB (γ-hydroxyvaleric acid, GHV) and its prodrug form γ-valerolactone (GVL), but these are also covered under the law by virtue of being "substantially similar" to GHB or GBL, and so the importation, sale, possession and use of these compounds is also considered illegal.
In Chile, GHB is a controlled drug under the law (psychotropic substances and narcotics).
In Norway and in Switzerland, GHB is considered a narcotic and is only available by prescription under the trade name Xyrem (Union Chimique Belge S.A.).
Sodium oxybate is also used therapeutically in Italy under the brand name Alcover for treatment of alcohol withdrawal and dependence.
|
https://en.wikipedia.org/wiki?curid=12962
|
Giordano Bruno
Giordano Bruno (born Filippo Bruno, January or February 1548 – 17 February 1600) was an Italian Dominican friar, philosopher, mathematician, poet, cosmological theorist, and Hermetic occultist. He is known for his cosmological theories, which conceptually extended the then-novel Copernican model. He proposed that the stars were distant suns surrounded by their own planets, and he raised the possibility that these planets might foster life of their own, a philosophical position known as cosmic pluralism. He also insisted that the universe is infinite and could have no "centre".
Starting in 1593, Bruno was tried for heresy by the Roman Inquisition on charges of denial of several core Catholic doctrines, including eternal damnation, the Trinity, the divinity of Christ, the virginity of Mary, and transubstantiation. Bruno's pantheism was not taken lightly by the church, nor was his teaching of the transmigration of the soul (reincarnation). The Inquisition found him guilty, and he was burned at the stake in Rome's Campo de' Fiori in 1600. After his death, he gained considerable fame, being particularly celebrated by 19th- and early 20th-century commentators who regarded him as a martyr for science, although historians agree that his heresy trial was not a response to his astronomical views but rather a response to his philosophical and religious views. Bruno's case is still considered a landmark in the history of free thought and the emerging sciences.
In addition to cosmology, Bruno also wrote extensively on the art of memory, a loosely organised group of mnemonic techniques and principles. Historian Frances Yates argues that Bruno was deeply influenced by Arab astrology (particularly the philosophy of Averroes), Neoplatonism, Renaissance Hermeticism, and Genesis-like legends surrounding the Egyptian god Thoth. Other studies of Bruno have focused on his qualitative approach to mathematics and his application of the spatial concepts of geometry to language.
Born Filippo Bruno in Nola (a "comune" in the modern-day province of Naples, in the Southern Italian region of Campania, then part of the Kingdom of Naples) in 1548, he was the son of Giovanni Bruno, a soldier, and Fraulissa Savolino. In his youth he was sent to Naples to be educated. He was tutored privately at the Augustinian monastery there, and attended public lectures at the Studium Generale. At the age of 17, he entered the Dominican Order at the monastery of San Domenico Maggiore in Naples, taking the name Giordano, after Giordano Crispo, his metaphysics tutor. He continued his studies there, completing his novitiate, and became an ordained priest in 1572 at age 24. During his time in Naples he became known for his skill with the art of memory and on one occasion travelled to Rome to demonstrate his mnemonic system before Pope Pius V and Cardinal Rebiba. In his later years Bruno claimed that the Pope accepted his dedication to him of the lost work "On The Ark of Noah" at this time.
While Bruno was distinguished for outstanding ability, his taste for free thinking and forbidden books soon caused him difficulties. Given the controversy he caused in later life it is surprising that he was able to remain within the monastic system for eleven years. In his testimony to Venetian inquisitors during his trial, many years later, he says that proceedings were twice taken against him for having cast away images of the saints, retaining only a crucifix, and for having recommended controversial texts to a novice. Such behaviour could perhaps be overlooked, but Bruno's situation became much more serious when he was reported to have defended the Arian heresy, and when a copy of the banned writings of Erasmus, annotated by him, was discovered hidden in the convent privy. When he learned that an indictment was being prepared against him in Naples he fled, shedding his religious habit, at least for a time.
Bruno first went to the Genoese port of Noli, then to Savona, Turin and finally to Venice, where he published his lost work "On the Signs of the Times" with the permission (so he claimed at his trial) of the Dominican Remigio Nannini Fiorentino. From Venice he went to Padua, where he met fellow Dominicans who convinced him to wear his religious habit again. From Padua he went to Bergamo and then across the Alps to Chambéry and Lyon. His movements after this time are obscure.
In 1579 he arrived in Geneva. As D.W. Singer, a Bruno biographer, notes, "The question has sometimes been raised as to whether Bruno became a Protestant, but it is intrinsically most unlikely that he accepted membership in Calvin's communion." During his Venetian trial he told inquisitors that while in Geneva he told the Marchese de Vico of Naples, who was notable for helping Italian refugees in Geneva, "I did not intend to adopt the religion of the city. I desired to stay there only that I might live at liberty and in security." Bruno had a pair of breeches made for himself, and the Marchese and others apparently made Bruno a gift of a sword, hat, cape and other necessities for dressing himself; in such clothing Bruno could no longer be recognised as a priest. Things apparently went well for Bruno for a time, as he entered his name in the Rector's Book of the University of Geneva in May 1579. But in keeping with his personality he could not long remain silent. In August he published an attack on the work of a distinguished professor. He and the printer were promptly arrested. Rather than apologising, Bruno insisted on continuing to defend his publication. He was refused the right to take sacrament. Though this right was eventually restored, he left Geneva.
He went to France, arriving first in Lyon, and thereafter settling for a time (1580–1581) in Toulouse, where he took his doctorate in theology and was elected by students to lecture in philosophy. It seems he also attempted at this time to return to Catholicism, but was denied absolution by the Jesuit priest he approached. When religious strife broke out in the summer of 1581, he moved to Paris. There he held a cycle of thirty lectures on theological topics and also began to gain fame for his prodigious memory. Bruno's feats of memory were based, at least in part, on his elaborate system of mnemonics, but some of his contemporaries found it easier to attribute them to magical powers. His talents attracted the benevolent attention of the king Henry III. The king summoned him to the court. Bruno subsequently reported "I got me such a name that King Henry III summoned me one day to discover from me if the memory which I possessed was natural or acquired by magic art. I satisfied him that it did not come from sorcery but from organised knowledge; and, following this, I got a book on memory printed, entitled "The Shadows of Ideas", which I dedicated to His Majesty. Forthwith he gave me an Extraordinary Lectureship with a salary."
In Paris, Bruno enjoyed the protection of his powerful French patrons. During this period, he published several works on mnemonics, including "De umbris idearum" ("On the Shadows of Ideas", 1582), "Ars Memoriae" ("The Art of Memory", 1582), and "Cantus Circaeus" ("Circe's Song", 1582). All of these were based on his mnemonic models of organised knowledge and experience, as opposed to the simplistic logic-based mnemonic techniques of Petrus Ramus then becoming popular. Bruno also published a comedy summarizing some of his philosophical positions, titled "Il Candelaio" ("The Torchbearer", 1582). In the 16th century dedications were, as a rule, approved beforehand, and hence were a way of placing a work under the protection of an individual. Given that Bruno dedicated various works to the likes of King Henry III, Sir Philip Sidney, Michel de Castelnau (French Ambassador to England), and possibly Pope Pius V, it is apparent that this wanderer had risen sharply in status and moved in powerful circles.
In April 1583, Bruno went to England with letters of recommendation from Henry III as a guest of the French ambassador, Michel de Castelnau. There he became acquainted with the poet Philip Sidney (to whom he dedicated two books) and other members of the Hermetic circle around John Dee, though there is no evidence that Bruno ever met Dee himself. He also lectured at Oxford, and unsuccessfully sought a teaching position there. His views were controversial, notably with John Underhill, Rector of Lincoln College and subsequently bishop of Oxford, and George Abbot, who later became Archbishop of Canterbury. Abbot mocked Bruno for supporting "the opinion of Copernicus that the earth did go round, and the heavens did stand still; whereas in truth it was his own head which rather did run round, and his brains did not stand still", and found Bruno had both plagiarised and misrepresented Ficino's work, leading Bruno to return to the continent.
Nevertheless, his stay in England was fruitful. During that time Bruno completed and published some of his most important works, the six "Italian Dialogues", including the cosmological tracts "La Cena de le Ceneri" ("The Ash Wednesday Supper", 1584), "De la Causa, Principio et Uno" ("On Cause, Principle and Unity", 1584), "De l'Infinito, Universo e Mondi" ("On the Infinite, Universe and Worlds", 1584) as well as "Lo Spaccio de la Bestia Trionfante" ("The Expulsion of the Triumphant Beast", 1584) and "De gl' Heroici Furori" ("On the Heroic Frenzies", 1585). Some of these were printed by John Charlewood. Some of the works that Bruno published in London, notably "The Ash Wednesday Supper", appear to have given offense. Once again, Bruno's controversial views and tactless language lost him the support of his friends. John Bossy has advanced the theory that, while staying in the French Embassy in London, Bruno was also spying on Catholic conspirators, under the pseudonym "Henry Fagot", for Sir Francis Walsingham, Queen Elizabeth's Secretary of State.
Bruno is sometimes cited as being the first to propose that the universe is infinite, which he did during his time in England, but an English scientist, Thomas Digges, put forth this idea in a published work in 1576, some eight years earlier than Bruno. An infinite universe and the possibility of alien life had also been earlier suggested by German Catholic Cardinal Nicholas of Cusa in "On Learned Ignorance" published in 1440.
In October 1585, after the French embassy in London was attacked by a mob, Bruno returned to Paris with Castelnau, finding a tense political situation. Moreover, his 120 theses against Aristotelian natural science and his pamphlets against the mathematician Fabrizio Mordente soon put him in ill favour. In 1586, following a violent quarrel about Mordente's invention, the differential compass, he left France for Germany.
In Germany he failed to obtain a teaching position at Marburg, but was granted permission to teach at Wittenberg, where he lectured on Aristotle for two years. However, with a change of intellectual climate there, he was no longer welcome, and went in 1588 to Prague, where he obtained 300 taler from Rudolf II, but no teaching position. He went on to serve briefly as a professor in Helmstedt, but had to flee again when he was excommunicated by the Lutherans.
During this period he produced several Latin works, dictated to his friend and secretary Girolamo Besler, including "De Magia" ("On Magic"), "Theses De Magia" ("Theses on Magic") and "De Vinculis in Genere" ("A General Account of Bonding"). All these were apparently transcribed or recorded by Besler (or Bisler) between 1589 and 1590. He also published "De Imaginum, Signorum, Et Idearum Compositione" ("On the Composition of Images, Signs and Ideas", 1591).
In 1591 he was in Frankfurt. Apparently, during the Frankfurt Book Fair, he received an invitation to Venice from the local patrician Giovanni Mocenigo, who wished to be instructed in the art of memory, and also heard of a vacant chair in mathematics at the University of Padua. At the time the Inquisition seemed to be losing some of its strictness, and because the Republic of Venice was the most liberal state in the Italian Peninsula, Bruno was lulled into making the fatal mistake of returning to Italy.
He went first to Padua, where he taught briefly, and applied unsuccessfully for the chair of mathematics, which was given instead to Galileo Galilei one year later. Bruno accepted Mocenigo's invitation and moved to Venice in March 1592. For about two months he served as an in-house tutor to Mocenigo. When Bruno announced his plan to leave Venice to his host, the latter, who was unhappy with the teachings he had received and had apparently come to dislike Bruno, denounced him to the Venetian Inquisition, which had Bruno arrested on 22 May 1592. Among the numerous charges of blasphemy and heresy brought against him in Venice, based on Mocenigo's denunciation, was his belief in the plurality of worlds, as well as accusations of personal misconduct. Bruno defended himself skilfully, stressing the philosophical character of some of his positions, denying others and admitting that he had had doubts on some matters of dogma. The Roman Inquisition, however, asked for his transfer to Rome. After several months of argument, the Venetian authorities reluctantly consented and Bruno was sent to Rome in February 1593.
During the seven years of his trial in Rome, Bruno was held in confinement, lastly in the Tower of Nona. Some important documents about the trial are lost, but others have been preserved, among them a summary of the proceedings that was rediscovered in 1940. The numerous charges against Bruno, based on some of his books as well as on witness accounts, included blasphemy, immoral conduct, and heresy in matters of dogmatic theology, and involved some of the basic doctrines of his philosophy and cosmology. Luigi Firpo has speculated about the specific charges made against Bruno by the Roman Inquisition.
Bruno defended himself as he had in Venice, insisting that he accepted the Church's dogmatic teachings, but trying to preserve the basis of his philosophy. In particular, he held firm to his belief in the plurality of worlds, although he was admonished to abandon it. His trial was overseen by the Inquisitor Cardinal Bellarmine, who demanded a full recantation, which Bruno eventually refused. On 20 January 1600, Pope Clement VIII declared Bruno a heretic, and the Inquisition issued a sentence of death. According to the correspondence of Gaspar Schopp of Breslau, he is said to have made a threatening gesture towards his judges and to have replied: "Maiori forsan cum timore sententiam in me fertis quam ego accipiam" ("Perhaps you pronounce this sentence against me with greater fear than I receive it").
He was turned over to the secular authorities. On Ash Wednesday, 17 February 1600, in the Campo de' Fiori (a central Roman market square), with his "tongue imprisoned because of his wicked words", he was hung upside down naked before finally being burned at the stake. His ashes were thrown into the Tiber river. All of Bruno's works were placed on the "Index Librorum Prohibitorum" in 1603.
The inquisition cardinals who judged Giordano Bruno were Cardinal Bellarmino (Bellarmine), Cardinal Madruzzo (Madruzzi), Camillo Cardinal Borghese (later Pope Paul V), Domenico Cardinal Pinelli, Pompeio Cardinal Arrigoni, Cardinal Sfondrati, Pedro Cardinal De Deza Manuel and Cardinal Santorio (Archbishop of Santa Severina, Cardinal-Bishop of Palestrina).
The measures taken to prevent Bruno continuing to speak have resulted in his becoming a symbol for free thought and speech in present-day Rome, where an annual memorial service takes place close to the spot where he was executed.
The earliest likeness of Bruno is an engraving published in 1715 and cited by Salvestrini as "the only known portrait of Bruno". Salvestrini suggests that it is a re-engraving made from a now lost original. This engraving has provided the source for later images.
The records of Bruno's imprisonment by the Venetian inquisition in May 1592 describe him as a man "of average height, with a hazel-coloured beard and the appearance of being about forty years of age".
Alternatively, a passage in a work by George Abbot indicates that Bruno was of diminutive stature: "When that Italian Didapper, who intituled himselfe Philotheus Iordanus Brunus Nolanus, magis elaboratae Theologiae Doctor, &c. with a name longer than his body...". The word "didapper" used by Abbot is a derisive term which at the time meant "a small diving waterfowl".
In the first half of the 15th century, Nicholas of Cusa challenged the then widely accepted philosophies of Aristotelianism, envisioning instead an infinite universe whose centre was everywhere and circumference nowhere, and moreover teeming with countless stars. He also predicted that neither were the rotational orbits circular nor were their movements uniform.
In the second half of the 16th century, the theories of Copernicus (1473–1543) began diffusing through Europe. Copernicus conserved the idea of planets fixed to solid spheres, but considered the apparent motion of the stars to be an illusion caused by the rotation of the Earth on its axis; he also preserved the notion of an immobile centre, but it was the Sun rather than the Earth. Copernicus also argued the Earth was a planet orbiting the Sun once every year. However he maintained the Ptolemaic hypothesis that the orbits of the planets were composed of perfect circles—deferents and epicycles—and that the stars were fixed on a stationary outer sphere.
Despite the widespread publication of Copernicus' work "De revolutionibus orbium coelestium", during Bruno's time most educated Catholics subscribed to the Aristotelian geocentric view that the Earth was the centre of the universe, and that all heavenly bodies revolved around it. The ultimate limit of the universe was the "primum mobile", whose diurnal rotation was conferred upon it by a transcendental God, not part of the universe (although, as the kingdom of heaven, adjacent to it), a motionless prime mover and first cause. The fixed stars were part of this celestial sphere, all at the same fixed distance from the immobile Earth at the center of the sphere. Ptolemy had numbered these at 1,022, grouped into 48 constellations. The planets were each fixed to a transparent sphere.
Few astronomers of Bruno's time accepted Copernicus's heliocentric model. Among those who did were the Germans Michael Maestlin (1550–1631), Christoph Rothmann, Johannes Kepler (1571–1630); the Englishman Thomas Digges, author of "A Perfit Description of the Caelestial Orbes"; and the Italian Galileo Galilei (1564–1642).
In 1584, Bruno published two important philosophical dialogues ("La Cena de le Ceneri" and "De l'infinito universo et mondi") in which he argued against the planetary spheres (Christoph Rothmann did the same in 1586 as did Tycho Brahe in 1587) and affirmed the Copernican principle.
In particular, to support the Copernican view and oppose the objection according to which the motion of the Earth would be perceived by means of the motion of winds, clouds etc., in "La Cena de le Ceneri" Bruno anticipates some of the arguments of Galilei on the relativity principle. Note that he also uses the example now known as Galileo's ship.
Theophilus – [...] air through which the clouds and winds move are parts of the Earth, [...] to mean under the name of Earth the whole machinery and the entire animated part, which consists of dissimilar parts; so that the rivers, the rocks, the seas, the whole vaporous and turbulent air, which is enclosed within the highest mountains, should belong to the Earth as its members, just as the air [does] in the lungs and in other cavities of animals by which they breathe, widen their arteries, and other similar effects necessary for life are performed. The clouds, too, move through accidents in the body of the Earth and are in its bowels as are the waters. [...]
With the Earth move [...] all things that are on the Earth. If, therefore, from a point outside the Earth something were thrown upon the Earth, it would lose, because of the latter's motion, its straightness as would be seen on the ship [...] moving along a river, if someone on point C of the riverbank were to throw a stone along a straight line, and would see the stone miss its target by the amount of the velocity of the ship's motion. But if someone were placed high on the mast of that ship, move as it may however fast, he would not miss his target at all, so that the stone or some other heavy thing thrown downward would not come along a straight line from the point E which is at the top of the mast, or cage, to the point D which is at the bottom of the mast, or at some point in the bowels and body of the ship. Thus, if from the point D to the point E someone who is inside the ship would throw a stone straight up, it would return to the bottom along the same line however far the ship moved, provided it was not subject to any pitch and roll."
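The argument in this passage is, in modern terms, the Galilean relativity principle: because a stone released from the mast shares the ship's uniform horizontal velocity, it lands at the foot of the mast whatever the ship's speed. A minimal numerical sketch of that idealised claim (no air resistance, constant ship velocity; the numbers are arbitrary) is:

```python
# Minimal illustration of the "Galileo's ship" argument: a stone dropped
# from the mast keeps the ship's horizontal velocity, so it lands at the
# foot of the mast regardless of how fast the ship is moving.
G = 9.81            # gravitational acceleration, m/s^2
MAST_HEIGHT = 20.0  # metres (arbitrary)

def landing_offset_from_mast(ship_speed):
    """Horizontal gap between the stone and the mast foot at landing."""
    fall_time = (2 * MAST_HEIGHT / G) ** 0.5
    stone_x = ship_speed * fall_time  # stone carries the ship's velocity
    mast_x = ship_speed * fall_time   # the mast foot moves with the ship
    return stone_x - mast_x           # zero in the idealised case

for v in (0.0, 5.0, 20.0):
    print(f"ship speed {v:4.1f} m/s -> offset {landing_offset_from_mast(v):.3f} m")
```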
Bruno's infinite universe was filled with a substance—a "pure air", aether, or "spiritus"—that offered no resistance to the heavenly bodies which, in Bruno's view, rather than being fixed, moved under their own impetus (momentum). Most dramatically, he completely abandoned the idea of a hierarchical universe.
The universe is then one, infinite, immobile... It is not capable of comprehension and therefore is endless and limitless, and to that extent infinite and indeterminable, and consequently immobile.
Bruno's cosmology distinguishes between "suns" which produce their own light and heat, and have other bodies moving around them; and "earths" which move around suns and receive light and heat from them. Bruno suggested that some, if not all, of the objects classically known as fixed stars are in fact suns. According to astrophysicist Steven Soter, he was the first person to grasp that "stars are other suns with their own planets."
Bruno wrote that other worlds "have no less virtue nor a nature different from that of our Earth" and, like Earth, "contain animals and inhabitants".
During the late 16th century, and throughout the 17th century, Bruno's ideas were held up for ridicule, debate, or inspiration. Margaret Cavendish, for example, wrote an entire series of poems against "atoms" and "infinite worlds" in "Poems and Fancies" in 1664. Bruno's true, if partial, vindication would have to wait for the implications and impact of Newtonian cosmology.
Bruno's overall contribution to the birth of modern science is still controversial. Some scholars follow Frances Yates stressing the importance of Bruno's ideas about the universe being infinite and lacking geocentric structure as a crucial crossing point between the old and the new. Others see in Bruno's idea of multiple worlds instantiating the infinite possibilities of a pristine, indivisible One, a forerunner of Everett's many-worlds interpretation of quantum mechanics.
While many academics note Bruno's theological position as pantheism, several have described it as pandeism, and some also as panentheism. Physicist and philosopher Max Bernhard Weinstein in his "Welt- und Lebensanschauungen, Hervorgegangen aus Religion, Philosophie und Naturerkenntnis" ("World and Life Views, Emerging From Religion, Philosophy and Nature"), wrote that the theological model of pandeism was strongly expressed in the teachings of Bruno, especially with respect to the vision of a deity for which "the concept of God is not separated from that of the universe." However, Otto Kern takes exception to what he considers Weinstein's overbroad assertions that Bruno, as well as other historical philosophers such as John Scotus Eriugena, Anselm of Canterbury, Nicholas of Cusa, Mendelssohn, and Lessing, were pandeists or leaned towards pandeism. "Discover" editor Corey S. Powell also described Bruno's cosmology as pandeistic, writing that it was "a tool for advancing an animist or Pandeist theology", and this assessment of Bruno as a pandeist was agreed with by science writer Michael Newton Keas, and "The Daily Beast" writer David Sessions.
The Vatican has published few official statements about Bruno's trial and execution. In 1942, Cardinal Giovanni Mercati, who discovered a number of lost documents relating to Bruno's trial, stated that the Church was perfectly justified in condemning him. On the 400th anniversary of Bruno's death, in 2000, Cardinal Angelo Sodano declared Bruno's death to be a "sad episode" but, despite his regret, he defended Bruno's prosecutors, maintaining that the Inquisitors "had the desire to serve freedom and promote the common good and did everything possible to save his life". In the same year, Pope John Paul II made a general apology for "the use of violence that some have committed in the service of truth".
Some authors have characterised Bruno as a "martyr of science", suggesting parallels with the Galileo affair which began around 1610. "It should not be supposed," writes A. M. Paterson of Bruno and his "heliocentric solar system", that he "reached his conclusions via some mystical revelation...His work is an essential part of the scientific and philosophical developments that he initiated." Paterson echoes Hegel in writing that Bruno "ushers in a modern theory of knowledge that understands all natural things in the universe to be known by the human mind through the mind's dialectical structure".
Ingegno writes that Bruno embraced the philosophy of Lucretius, "aimed at liberating man from the fear of death and the gods." Characters in Bruno's "Cause, Principle and Unity" desire "to improve speculative science and knowledge of natural things," and to achieve a philosophy "which brings about the perfection of the human intellect most easily and eminently, and most closely corresponds to the truth of nature."
Other scholars oppose such views, and claim Bruno's martyrdom to science to be exaggerated, or outright false. For Yates, while "nineteenth century liberals" were thrown "into ecstasies" over Bruno's Copernicanism, "Bruno pushes Copernicus' scientific work back into a prescientific stage, back into Hermeticism, interpreting the Copernican diagram as a hieroglyph of divine mysteries."
According to historian Mordechai Feingold, "Both admirers and critics of Giordano Bruno basically agree that he was pompous and arrogant, highly valuing his opinions and showing little patience with anyone who even mildly disagreed with him." Discussing Bruno's experience of rejection when he visited Oxford University, Feingold suggests that "it might have been Bruno's manner, his language and his self-assertiveness, rather than his ideas" that caused offence.
In his "Lectures on the History of Philosophy" Hegel writes that Bruno's life represented "a bold rejection of all Catholic beliefs resting on mere authority."
Alfonso Ingegno states that Bruno's philosophy "challenges the developments of the Reformation, calls into question the truth-value of the whole of Christianity, and claims that Christ perpetrated a deceit on mankind... Bruno suggests that we can now recognise the universal law which controls the perpetual becoming of all things in an infinite universe." A. M. Paterson says that, while we no longer have a copy of the official papal condemnation of Bruno, his heresies included "the doctrine of the infinite universe and the innumerable worlds" and his beliefs "on the movement of the earth".
Michael White notes that the Inquisition may have pursued Bruno early in his life on the basis of his opposition to Aristotle, interest in Arianism, reading of Erasmus, and possession of banned texts. White considers that Bruno's later heresy was "multifaceted" and may have rested on his conception of infinite worlds. "This was perhaps the most dangerous notion of all... If other worlds existed with intelligent beings living there, did they too have their visitations? The idea was quite unthinkable."
Frances Yates rejects what she describes as the "legend that Bruno was prosecuted as a philosophical thinker, was burned for his daring views on innumerable worlds or on the movement of the earth." Yates however writes that "the Church was... perfectly within its rights if it included philosophical points in its condemnation of Bruno's heresies" because "the philosophical points were quite inseparable from the heresies."
According to the "Stanford Encyclopedia of Philosophy", "in 1600 there was no official Catholic position on the Copernican system, and it was certainly not a heresy. When [...] Bruno [...] was burned at the stake as a heretic, it had nothing to do with his writings in support of Copernican cosmology."
The website of the Vatican Apostolic Archive, discussing a summary of legal proceedings against Bruno in Rome, states: "In the same rooms where Giordano Bruno was questioned, for the same important reasons of the relationship between science and faith, at the dawning of the new astronomy and at the decline of Aristotle's philosophy, sixteen years later, Cardinal Bellarmino, who then contested Bruno's heretical theses, summoned Galileo Galilei, who also faced a famous inquisitorial trial, which, luckily for him, ended with a simple abjuration."
Following the 1870 Capture of Rome by the newly created Kingdom of Italy and the end of the Church's temporal power over the city, the erection of a monument to Bruno on the site of his execution became feasible. The monument was sharply opposed by the clerical party, but was finally erected by the Rome Municipality and inaugurated in 1889.
A statue of a stretched human figure standing on its head, designed by Alexander Polzin and depicting Bruno's death at the stake, was placed in Potsdamer Platz station in Berlin on 2 March 2008.
Retrospective iconography of Bruno shows him with a Dominican cowl but not tonsured. Edward Gosselin has suggested that it is likely Bruno kept his tonsure at least until 1579, and it is possible that he wore it again thereafter.
An idealised animated version of Bruno appears in the first episode of the 2014 television series "Cosmos: A Spacetime Odyssey". In this depiction, Bruno is shown with a more modern look, without a tonsure and wearing clerical robes without his hood. "Cosmos" presents Bruno as an impoverished philosopher who was ultimately executed due to his refusal to recant his belief in other worlds, a portrayal that was criticised by some as simplistic or historically inaccurate. Corey S. Powell, of "Discover" magazine, says of Bruno, "A major reason he moved around so much is that he was argumentative, sarcastic, and drawn to controversy...He was a brilliant, complicated, difficult man."
The 2016 song "Roman Sky" by hard rock band Avenged Sevenfold focuses on the death of Bruno.
The song "Anima Mundi" by Massimiliano Larocca and the album "Numen Lumen" by the neofolk group Hautville, which draws on Bruno's lyrics, were also dedicated to the philosopher.
Algernon Charles Swinburne wrote a poem honouring Giordano Bruno in 1889, when the statue of Bruno was constructed in Rome.
Czeslaw Milosz evokes the story and image of Giordano Bruno in his poem "Campo Dei Fiori" (Warsaw 1943).
Randall Jarrell's poem "The Emancipators" addresses Bruno, along with Galileo and Newton, as an originator of the modern scientific-industrial world.
Heather McHugh depicted Bruno as the central figure of a story told (at dinner, by an "underestimated" travel guide) to a group of contemporary American poets in Rome. The poem (originally published in McHugh's collection "Hinge & Sign", a nominee for the National Book Award, and subsequently reprinted widely) channels the very question of ars poetica, or meta-meaning itself, through the embedded narrative of the suppression of Bruno's words, silenced towards the end of his life both literally and literarily.
Louis L'Amour wrote "To Giordano Bruno", a poem published in "Smoke From This Altar" (1990).
Bruno and his theory of "the coincidence of contraries" ("coincidentia oppositorum") play an important role in James Joyce's novel "Finnegans Wake". Joyce wrote in a letter to his patroness, Harriet Shaw Weaver, "His philosophy is a kind of dualism – every power in nature must evolve an opposite in order to realise itself and opposition brings reunion". Amongst his numerous allusions to Bruno in the novel, including his trial and torture, Joyce plays upon Bruno's notion of "coincidentia oppositorum" by applying his name to word puns such as "Browne and Nolan" (the name of Dublin printers) and "brownesberrow in nolandsland".
Giordano Bruno features as the hero in a series of historical crime novels by S.J. Parris (a pseudonym of Stephanie Merritt). In order these are "Heresy", "Prophecy", "Sacrilege", "Treachery", "Conspiracy" and "Execution".
"The Last Confession" by Morris West (posthumously published) is a fictional autobiography of Bruno, ostensibly written shortly before his execution.
In 1973 the biographical drama "Giordano Bruno" was released, an Italian/French movie directed by Giuliano Montaldo, starring Gian Maria Volonté as Bruno.
The Giordano Bruno Foundation (German: Giordano-Bruno-Stiftung) is a non-profit foundation based in Germany that pursues the "Support of Evolutionary Humanism". It was founded by entrepreneur Herbert Steffen in 2004. The Giordano Bruno Foundation is critical of religious fundamentalism and nationalism.
The SETI League makes an annual award honouring the memory of Giordano Bruno to a deserving person or persons who have made a significant contribution to the practice of SETI (the search for extraterrestrial intelligence). The award was proposed by sociologist Donald Tarter in 1995 on the 395th anniversary of Bruno's death. The trophy presented is called a Bruno.
The 22 km impact crater Giordano Bruno on the far side of the Moon is named in his honour, as are the main-belt asteroids 5148 Giordano and 13223 Cenaceneri; the latter is named after his philosophical dialogue "La Cena de le Ceneri" ("The Ash Wednesday Supper") (see above).
Radio broadcasting station 2GB in Sydney, Australia is named for Bruno. The two letters "GB" in the call sign were chosen to honour Bruno, who was much admired by Theosophists who were the original holders of the station's licence.
Hans Werner Henze set his large-scale cantata for orchestra, choir and four soloists, "Novae de infinito laudes", to Italian texts by Bruno; it was recorded in 1972 at the Salzburg Festival and reissued on CD (Orfeo C609 031B).
|
https://en.wikipedia.org/wiki?curid=12963
|
Geddy Lee
Geddy Lee Weinrib (born Gary Lee Weinrib; July 29, 1953), known professionally as Geddy Lee, is a Canadian musician, singer, and songwriter. He is best known as the lead vocalist, bassist, and keyboardist for the Canadian rock group Rush. Lee joined what would become Rush in September 1968, at the request of his childhood friend Alex Lifeson, replacing original bassist and frontman Jeff Jones. Lee's first (and, so far, only) solo effort, "My Favourite Headache", was released in 2000.
An award-winning musician, Lee's style, technique, and skill on the bass guitar have inspired many rock musicians such as Cliff Burton of Metallica, Steve Harris of Iron Maiden, John Myung of Dream Theater, Les Claypool of Primus, and Tim Commerford of Rage Against the Machine and Audioslave. Along with his Rush bandmates – guitarist Alex Lifeson and drummer Neil Peart – Lee was made an Officer of the Order of Canada on May 9, 1996. The trio was the first rock band to be so honoured, as a group. In 2013, the group was inducted into the Rock and Roll Hall of Fame after 14 years of eligibility; they were nominated overwhelmingly in the Hall's first selection via fan ballot. Lee is ranked 13th by "Hit Parader" on their list of the 100 Greatest Heavy Metal Vocalists of All Time.
Lee was born on July 29, 1953, in Willowdale (North York), Toronto, Ontario, to Morris (1920–1965) and Mary Weinrib (née Manya Rubenstein). His parents were Jewish Holocaust survivors from Poland who had survived the ghetto in their hometown of Starachowice, followed by imprisonment at Auschwitz and later at the Dachau and Bergen-Belsen concentration camps, during the Holocaust and World War II. They were about 13 years old when they were initially imprisoned at Auschwitz. "It was kind of surreal pre-teen shit," says Lee, describing how his father bribed guards to bring his mother shoes. After a period, his mother was transferred to Bergen-Belsen and his father to Dachau. When the war ended four years later and the Allies liberated the camps, Morris set out in search of Mary and found her at a displaced persons camp. They married there and eventually emigrated to Canada.
Lee's father was a skilled musician who died young, forcing Lee's mother to find outside work to support three children. Lee feels that not having parents at home during those years was probably a factor in his becoming a musician: "It was a terrible blow that I lost him, but the course of my life changed because my mother couldn't control us." He said that losing his father at such an early age made him aware of how "quickly life can disappear", which inspired him from then on to get the most out of his life and music.
He turned his basement into practice space for a band he formed with high-school friends. After the band began earning income from small performances at high-school shows or other events, he decided to drop out of high school and play rock and roll professionally. His mother was devastated when he told her, and he still feels that he owes her for the disappointments in her life. "All the shit I put her through," he says, "on top of the fact that she just lost her husband. I felt like I had to make sure that it was worth it. I wanted to show her that I was a professional, that I was working hard, and wasn't just a fuckin' lunatic."
"Jweekly" featured Lee's reflections on his mother's experiences as a refugee, and of his own Jewish heritage. Lee's name, "Geddy", was derived from his mother's heavily accented pronunciation of his given first name, "Gary". This was picked up by his friends in school, leading Lee to adopt it as his stage name and later his legal name.
After Rush had become a widely recognized rock group, Lee told the story about his mother's early life to the group's drummer and lyricist, Neil Peart, who then wrote the lyrics to "Red Sector A", inspired by her ordeal. The song, for which Lee wrote the music, was released on the band's 1984 album "Grace Under Pressure".
Lee began playing music in school when he was 10 or 11, and got his first acoustic guitar at 14. In school, he first played drums, trumpet and clarinet. However, learning to play instruments in school wasn't satisfying to Lee, and he took basic piano lessons on his own. His interest increased dramatically after listening to some of the popular rock groups at the time. His early influences included Jack Bruce of Cream, John Entwistle of The Who, Jeff Beck, and Procol Harum. "I was mainly interested in early British progressive rock," said Lee. "That's how I learned to play bass, emulating Jack Bruce and people like that." Bruce's style of music was also noticed by Lee, who liked that "his sound was distinctive – it wasn't boring." Lee has also been influenced by Paul McCartney, Chris Squire, and James Jamerson.
Beginning in 1969, Rush played professionally in coffeehouses, at high school dances, and at various outdoor recreational events. By 1971, they were playing mostly original songs in small clubs and bars, including Toronto's Gasworks and Abbey Road Pub. Lee describes the group during these early years as being "weekend warriors", holding down jobs during the weekdays and playing music on weekends: "We longed to break out of the boring surrounding of the suburbs and the endless similarities... the shopping plazas and all that stuff... the music was a vehicle for us to speak out." He claims that in the beginning they were simply "a straightforward rock band."
Short of money, they began opening concerts at venues such as Toronto's Victory Burlesque Theatre for the glam rock band New York Dolls. By 1972, Rush began performing full-length concerts, consisting mostly of original songs, in cities including Toronto and Detroit. As they gained more recognition, they began performing as an opening act for groups such as Aerosmith, Kiss, and Blue Öyster Cult.
Like Cream, Rush followed the model of a "power trio", with Lee both playing bass and singing. Lee's vocals produced a distinctive "countertenor" falsetto and a resonant sound. Lee possessed a three-octave vocal range, spanning baritone, tenor, alto, and mezzo-soprano pitch ranges, although it has significantly decreased with age. Lee's playing style is widely regarded for his use of high treble and very hard playing of the strings, and for utilizing the bass as a lead instrument, often contrapuntal to Lifeson's guitar. In the 1970s and early 1980s, Lee mostly used a Rickenbacker 4001 bass, with a very noticeable grit in his tone. During the band's "synth era" in the mid-1980s, Lee used Steinberger and later Wal basses, with the latter having more of a "jazzy" tone, according to Lee. From 1993's "Counterparts" onward, Lee began using the Fender Jazz Bass almost exclusively, returning to his trademark high treble sound. Lee had first used the Jazz Bass during the recording of "Moving Pictures", on songs such as "Tom Sawyer."
After a number of early albums and increasing popularity, Rush's status as a rock group soared over the following five years as they consistently toured worldwide and produced successful albums, including "2112" (1976), "A Farewell to Kings" (1977), "Hemispheres" (1978), "Permanent Waves" (1980), and "Moving Pictures" (1981). Lee began adding synthesizers in 1977, with the release of "A Farewell to Kings". The additional sounds expanded the group's "textural capabilities", states keyboard critic Greg Armbruster, and allowed the trio to produce an orchestrated and more complex progressive rock music style. It also gave Lee the ability to play bass at the same time, as he could control the synthesizer with foot pedals. In 1981, he won "Keyboard" magazine's poll as "Best New Talent." By the 1984 album "Grace Under Pressure", Lee was surrounding himself with stacks of keyboards on stage.
By the 1980s, Rush had become one of the "biggest rock bands on the planet", selling out arena seats when touring. Lee was known for his dynamic stage movements. According to music critic Tom Mulhern, writing in 1980, "it's dazzling to see so much sheer energy expended without a nervous breakdown." By 1996, with their Test for Echo Tour, they began performing without an opening act, their shows lasting nearly three hours.
Music industry writer Christopher Buttner, who interviewed Lee in 1996, described him as a prodigy and "role model" for what every musician wants to be, noting his proficiency on stage. Buttner cited Lee's ability to vary time signatures, play multiple keyboards, use bass pedal controllers and control sequencers, all while singing lead vocals into as many as three microphones. Buttner adds that few musicians of any instrument "can juggle half of what Geddy can do without literally falling on their ass." As a result, notes Mulhern, Lee's instrumentation was the "pulse" of the group and created a "one-man rhythm section", which complemented guitarist Alex Lifeson and percussionist Neil Peart. Bass instructor Allan Slutsky, or "Dr Licks", credits Lee's "biting, high-end bass lines and creative synthesizer work" for helping the group become "one of the most innovative" of all the groups that play arena rock. By 1989, "Guitar Player" magazine had already designated Lee the "Best Rock Bass" player in its readers' poll for the previous five years.
Bass players who have cited Lee as an influence include Cliff Burton of Metallica, Steve Harris of Iron Maiden, John Myung of Dream Theater, and Les Claypool of Primus.
"My Favourite Headache", Lee's first and to date only solo album, was released on November 14, 2000, while Rush was on a hiatus following the deaths of Peart's wife and daughter. Musicians associated with the project include sometime Rush contributor Ben Mink, Soundgarden and Pearl Jam drummer Matt Cameron, and others.
The bulk of Lee's work in music has been with Rush (see Rush discography). However, Lee has also contributed to a body of work outside of his involvement with the band through guest appearances and album production. In 1981, Lee was the featured guest for the hit song "Take Off" and its included comedic commentary with Bob and Doug McKenzie (played by Rick Moranis and Dave Thomas, respectively) for the McKenzie Brothers' comedy album "Great White North". While Rush has had great success selling albums, "Take Off" is the highest charting single on the "Billboard" Hot 100 of Lee's career.
In 1982, Lee produced the first (and only) album from Toronto new wave band Boys Brigade. On the 1985 album "We Are the World", by humanitarian consortium USA for Africa, Lee recorded guest vocals for the song "Tears Are Not Enough". Lee sang "O Canada", the Canadian national anthem, at Baltimore's Camden Yards for the 1993 Major League Baseball All-Star Game.
Another version of "O Canada", with a rock arrangement, was recorded by Lee and Lifeson for the soundtrack of the 1999 film "".
Lee also plays bass on Canadian rock band I Mother Earth's track "Good for Sule", which is featured on the group's 1999 album "Blue Green Orange".
Lee was an interview subject in the documentary films "" and "", and has appeared in multiple episodes of the VH1 Classic series "Metal Evolution".
Along with his bandmates, Lee was a guest musician on the Max Webster song "Battle Scar", from the 1980 album "Universal Juveniles".
Lee appeared in Broken Social Scene's music video for their 2006 single "Fire Eye'd Boy", judging the band while they perform various musical tasks, and in 2006, Lee joined Lifeson's supergroup, the Big Dirty Band, to provide songs accompanying "".
In 2013, Lee made a brief cameo appearance as himself in the "How I Met Your Mother" season eight episode "P.S. I Love You".
In 2017, Lee performed with Yes during the band's Rock and Roll Hall of Fame induction, playing bass for the song "Roundabout."
In 2018, Lee published "Geddy Lee's Big Beautiful Book of Bass", which highlights his collection of over 250 basses along with interviews with some of the leading bass players and bass technicians.
In 2020, Lee provided guest vocals to an all-star Canadian rendition of the late Bill Withers song "Lean on Me" during the TV special "Stronger Together, Tous Ensemble", a Canadian benefit performance simulcast by every major television network in Canada as a benefit for Food Banks Canada during the COVID-19 pandemic.
Lee married Nancy Young in 1976. They have a son, Julian, and a daughter, Kyla. He is an avid collector of watches and wine, with a wine collection of 5,000 bottles. He takes annual trips to France, where he indulges in cheese and wine. In 2011, a charitable foundation he supports, Grapes for Humanity, created the Geddy Lee Scholarship for students of winemaking at Niagara College.
He is also a longtime baseball fan. His favourite team while growing up was the Detroit Tigers, and he later became a fan of the Toronto Blue Jays after they were established. In the 1980s, Lee began reading the works of Bill James, particularly "The Bill James Baseball Abstracts", which led to an interest in sabermetrics and participation in a fantasy baseball keeper league. He collects baseball memorabilia, once donating part of his collection to the Negro Leagues Baseball Museum, and threw the ceremonial first pitch to inaugurate the 2013 Toronto Blue Jays season. Lee sang the Canadian national anthem before the 1993 MLB All-Star Game. In 2016, Lee planned to produce an independent film about baseball in Italy.
Lee has described himself as a Jewish atheist, explaining to an interviewer, "I consider myself a Jew as a race, but not so much as a religion. I'm not down with religion at all. I’m a Jewish atheist, if that's possible."
Lee has varied his equipment list continually throughout his career.
In 1998, Fender released the Geddy Lee Jazz Bass, available in Black and 3-Color Sunburst (as of 2009). This signature model is a recreation of Lee's favourite bass, a 1972 Fender Jazz that he bought in a pawn shop in Kalamazoo, Michigan in 1978. In 2015, Fender released a revised USA model of his signature bass.
In the early 1970s, Lee's main instrument was a 1960s Fender Precision Bass, which he later had sanded down into a teardrop shape and refinished. He switched to a modified Rickenbacker 4001 in the mid-1970s to emulate the tone of Yes bassist Chris Squire. He also used Steinberger and Wal basses throughout the 1980s.
For Rush's 2010 tour, Lee used two Orange AD200 bass heads together with two OBC410 4x10 bass cabinets.
Over the years, Lee has used synthesizers from Oberheim (Eight-voice, OB-1, OB-X, OB-Xa), PPG (Wave 2.2 and 2.3), Roland (Jupiter 8, D-50, XV-5080, and Fantom X7), Moog (Minimoog, Taurus pedals, Little Phatty), and Yamaha (DX7, KX76). Lee used sequencers early in their development and has continued to use similar innovations as they have developed over the years. Lee has also made use of digital samplers. Combined, these electronic devices have supplied many memorable keyboard sounds, such as the "growl" in "Tom Sawyer" and the percussive melody in the chorus of "The Spirit of Radio."
Beginning with the 1993 album "Counterparts", Rush reduced most keyboard- and synthesizer-derived sounds in their compositions. This reached a peak on the 2002 album "Vapor Trails", Rush's first since 1975's "Caress of Steel" to not feature any keyboards or synthesizers. On the 2007 album "Snakes & Arrows", Lee sparingly adds a Mellotron and bass pedals. However, it does not mark a return to a keyboard-heavy sound for the band. Much like "Vapor Trails", the music is primarily recorded with multiple layers of guitars, bass, drums and percussion.
Newer advances in synthesizer and sampler technology have allowed Lee to store familiar sounds from his old synthesizers alongside new ones in combination synthesizer/samplers, such as the Roland XV-5080. For live shows in 2002 and 2004, Lee and his keyboard technician used the playback capabilities of the XV-5080 to generate virtually all of Rush's keyboard sounds to date, as well as additional complex sound passages that previously required several machines at once to produce.
When playing live, Lee and his bandmates recreate their songs as accurately as possible with digital samplers. Using these samplers, the band members are able to recreate, in real-time, the sounds of non-traditional instruments, accompaniments, vocal harmonies, and other sound "events" that are familiar to those who have heard Rush songs from their albums.
To trigger these sounds in real-time, Lee uses MIDI controllers, placed at the locations on the stage where he has a microphone stand. Lee uses two types of MIDI controllers: one type resembles a traditional synthesizer keyboard on a stand (Yamaha KX76). The second type is a large foot-pedal keyboard, placed on the stage floor (Korg MPK-130, Roland PK-5). Combined, they enable Lee to use his free hands and feet to trigger sounds in electronic equipment that has been placed off-stage. It is with this technology that Lee and his bandmates are able to present their arrangements in a live setting with the level of complexity and fidelity that fans have come to expect, and without the need to resort to the use of backing tracks or employing an additional band member. A notable exception of this was during the "Clockwork Angels Tour", when a string ensemble played string parts, which were originally arranged and conducted by David Campbell on "Clockwork Angels".
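As a rough illustration of the general principle rather than a description of Rush's actual rig, a MIDI controller simply transmits short event messages that an off-stage sampler maps to stored sounds. The following minimal Python sketch, assuming the third-party "mido" library, a working MIDI backend, and at least one connected output port, sends the kind of note-on/note-off pair that a foot-pedal press would generate:

# Minimal sketch: sending a MIDI note event that an off-stage sampler could
# map to a stored sound. Assumes the third-party "mido" library (with a
# backend such as python-rtmidi) and at least one available MIDI output port.
import time
import mido

port = mido.open_output()          # open the default MIDI output port
trigger_note = 60                  # hypothetical note number mapped to a sample

port.send(mido.Message('note_on', note=trigger_note, velocity=100))
time.sleep(0.5)                    # hold the triggered sound briefly
port.send(mido.Message('note_off', note=trigger_note))
port.close()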
Lee's (and his bandmates') use of MIDI controllers to trigger sampled instruments and audio events is visible throughout the "" concert DVD (2005).
From the "Snakes and Arrows" tour onwards, Lee used a Roland Fantom X7 and a Moog Little Phatty synthesizer.
Beginning in 1996, Lee stopped using traditional bass amplifiers on stage, opting to have the bass guitar signals input directly to the touring front-of-house console, to improve control and balance of sound reinforcement. Faced with the dilemma of what to do with the empty space left behind by the lack of large amplifier cabinets, Lee chose to decorate his side of the stage with unusual items.
For the 1996–1997 "Test for Echo Tour", Lee's side sported a fully stocked old-fashioned household refrigerator. For the 2002 "Vapor Trails" tour, Lee lined his side of the stage with three coin-operated Maytag dryers. Other large appliances appeared later in the same space. For visual effect they were "miked" by the sound crew, just as a real amplifier would be. Rush's crew loaded the dryers with specially-designed Rush-themed T-shirts, different from the shirts on sale to the general public. At the close of each show, Lee and Lifeson tossed these T-shirts into the audience. The dryers can be seen while watching the "Rush in Rio" DVD, the "R40" DVD, and the "R30: 30th Anniversary World Tour" DVD. For the band's tour, one of the three dryers was replaced with a rotating shelf-style vending machine. It too was fully stocked and operational during shows. For the R40 tour, there were four dryers, as opposed to the usual three dryers.
The Snakes & Arrows Tour prominently featured three Henhouse brand rotisserie chicken ovens on stage complete with an attendant in a chef's hat and apron to "tend" the chickens during shows. For the 2010–2011 Time Machine Tour, Lee's side of the stage featured a steampunk-inspired combination Time Machine and Sausage Maker, with an attendant occasionally throwing material into its feed hopper during the show. During the 2012–2013 Clockwork Angels Tour, Lee used a different steampunk device called a "Geddison" as a backdrop. This was composed of a giant old-style phonograph horn, an oversized model brain in a jar, a set of brass horns, and a working popcorn popper. The 2015 R40 tour combined several of these elements together, with the exception of the chicken ovens used on the "Snakes and Arrows" tour.
|
https://en.wikipedia.org/wiki?curid=12964
|
Geologic time scale
The geologic time scale (GTS) is a system of chronological dating that relates geological strata (stratigraphy) to time. It is used by geologists, paleontologists, and other Earth scientists to describe the timing and relationships of events that have occurred during Earth's history, including the times at which different organisms were fossilised. The table of geologic time spans presented here agrees with the nomenclature, dates and standard color codes set forth by the International Commission on Stratigraphy (ICS).
The primary defined divisions of time are the "eons": in sequence, the Hadean (when the Earth and Moon formed), the Archean (when the first lifeforms, single-celled organisms, emerged), the Proterozoic (when life diversified, though not to the extent seen later; part of this eon has been nicknamed the "boring billion"), and the Phanerozoic (when life as we know it emerged, though its earliest forms would hardly be recognisable today).
The first three of these can be referred to collectively as the Precambrian supereon, so named because it precedes the Cambrian and its "Cambrian explosion", an event that marked the massive diversification of multicellular, or "complex", lifeforms.
Eons are divided into eras, which are in turn divided into periods, epochs and ages; the present day, for example, falls within the Phanerozoic eon, Cenozoic era, Quaternary period and Holocene epoch.
Corresponding to eons, eras, periods, epochs and ages, the terms "eonothem", "erathem", "system", "series" and "stage" are used to refer to the layers of rock that belong to these stretches of geologic time in Earth's history.
Geologists qualify these units as "early", "mid", and "late" when referring to time, and "lower", "middle", and "upper" when referring to the corresponding rocks. For example, the Lower Jurassic Series in chronostratigraphy corresponds to the Early Jurassic Epoch in geochronology. The adjectives are capitalized when the subdivision is formally recognized, and lower case when not; thus "early Miocene" but "Early Jurassic."
Evidence from radiometric dating indicates that Earth is about 4.54 billion years old. The geology or "deep time" of Earth's past has been organized into various units according to events which took place. Different spans of time on the GTS are usually marked by corresponding changes in the composition of strata which indicate major geological or paleontological events, such as mass extinctions. For example, the boundary between the Cretaceous period and the Paleogene period is defined by the Cretaceous–Paleogene extinction event, which marked the demise of the non-avian dinosaurs and many other groups of life. Older time spans, which predate the reliable fossil record (before the Proterozoic eon), are defined by their absolute age.
Geologic units from the same time but different parts of the world often look different and contain different fossils, so the same time-span was historically given different names in different locales. For example, in North America, the Lower Cambrian is called the Waucoban series that is then subdivided into zones based on succession of trilobites. In East Asia and Siberia, the same unit is split into Alexian, Atdabanian, and Botomian stages. A key aspect of the work of the International Commission on Stratigraphy is to reconcile this conflicting terminology and define universal horizons that can be used around the world.
Some other planets and moons in the Solar System have sufficiently rigid structures to have preserved records of their own histories, for example, Venus, Mars and the Earth's Moon. Dominantly fluid planets, such as the gas giants, do not preserve their history in a comparable manner. Apart from the Late Heavy Bombardment, events on other planets probably had little direct influence on the Earth, and events on Earth had correspondingly little effect on those planets. Construction of a time scale that links the planets is, therefore, of only limited relevance to the Earth's time scale, except in a Solar System context. The existence, timing, and terrestrial effects of the Late Heavy Bombardment are still a matter of debate.
In Ancient Greece, Aristotle (384–322 BCE) observed that fossils of seashells in rocks resembled those found on beaches – he inferred that the fossils in rocks were formed by organisms, and he reasoned that the positions of land and sea had changed over long periods of time. Leonardo da Vinci (1452–1519) concurred with Aristotle's interpretation that fossils represented the remains of ancient life.
The 11th-century Persian polymath Avicenna (Ibn Sina, died 1037) and the 13th-century Dominican bishop Albertus Magnus (died 1280) extended Aristotle's explanation into a theory of a petrifying fluid. Avicenna also first proposed one of the principles underlying geologic time scales, the law of superposition of strata, while discussing the origins of mountains in "The Book of Healing" (1027). The Chinese naturalist Shen Kuo (1031–1095) also recognized the concept of "deep time".
In the late 17th century Nicholas Steno (1638–1686) pronounced the principles underlying geologic (geological) time scales. Steno argued that rock layers (or strata) were laid down in succession, and that each represents a "slice" of time. He also formulated the law of superposition, which states that any given stratum is probably older than those above it and younger than those below it. While Steno's principles were simple, applying them proved challenging. Steno's ideas also led to other important concepts geologists use today, such as relative dating. Geologists built on these principles over the course of the 18th century.
The Neptunist theories popular at this time (expounded by Abraham Werner (1749–1817) in the late 18th century) proposed that all rocks had precipitated out of a single enormous flood. A major shift in thinking came when James Hutton presented his "Theory of the Earth; or, an Investigation of the Laws Observable in the Composition, Dissolution, and Restoration of Land Upon the Globe"
before the Royal Society of Edinburgh in March and April 1785. John McPhee asserts that "as things appear from the perspective of the 20th century, James Hutton in those readings became the founder of modern geology".
Hutton proposed that the interior of Earth was hot, and that this heat was the engine which drove the creation of new rock: land was eroded by air and water and deposited as layers in the sea; heat then consolidated the sediment into stone, and uplifted it into new lands. This theory, known as "Plutonism", stood in contrast to the "Neptunist" flood-oriented theory.
The first serious attempts to formulate a geologic time scale that could be applied anywhere on Earth were made in the late 18th century. The most influential of those early attempts (championed by Werner, among others) divided the rocks of Earth's crust into four types: Primary, Secondary, Tertiary, and Quaternary. Each type of rock, according to the theory, formed during a specific period in Earth history. It was thus possible to speak of a "Tertiary Period" as well as of "Tertiary Rocks." Indeed, "Tertiary" (now Paleogene and Neogene) remained in use as the name of a geological period well into the 20th century and "Quaternary" remains in formal use as the name of the current period.
The identification of strata by the fossils they contained, pioneered by William Smith, Georges Cuvier, Jean d'Omalius d'Halloy, and Alexandre Brongniart in the early 19th century, enabled geologists to divide Earth history more precisely. It also enabled them to correlate strata across national (or even continental) boundaries. If two strata (however distant in space or different in composition) contained the same fossils, chances were good that they had been laid down at the same time. Detailed studies between 1820 and 1850 of the strata and fossils of Europe produced the sequence of geologic periods still used today.
Early work on developing the geologic time scale was dominated by British geologists, and the names of the geologic periods reflect that dominance. The "Cambrian" (from Cambria, the classical name for Wales) and the "Ordovician" and "Silurian", named after ancient Welsh tribes, were periods defined using stratigraphic sequences from Wales. The "Devonian" was named for the English county of Devon, and the name "Carboniferous" was an adaptation of "the Coal Measures", the old British geologists' term for the same set of strata. The "Permian" was named after Perm, Russia, because it was defined using strata in that region by Scottish geologist Roderick Murchison. However, some periods were defined by geologists from other countries. The "Triassic" was named in 1834 by the German geologist Friedrich von Alberti for the three distinct layers (red beds, capped by chalk, followed by black shales) found throughout Germany and Northwest Europe, which he called the "Trias" (Latin for triad). The "Jurassic" was named by the French geologist Alexandre Brongniart for the extensive marine limestone exposures of the Jura Mountains. The "Cretaceous" (from Latin "creta" meaning "chalk") as a separate period was first defined by Belgian geologist Jean d'Omalius d'Halloy in 1822, using strata in the Paris basin and named for the extensive beds of chalk (calcium carbonate deposited by the shells of marine invertebrates) found in Western Europe.
British geologists were also responsible for the grouping of periods into eras and the subdivision of the Tertiary and Quaternary periods into epochs. In 1841 John Phillips published the first global geologic time scale based on the types of fossils found in each era. Phillips' scale helped standardize the use of terms like "Paleozoic" ("old life") which he extended to cover a larger period than it had in previous usage, and "Mesozoic" ("middle life") which he invented.
When William Smith and Sir Charles Lyell first recognized that rock strata represented successive time periods, time scales could be estimated only very imprecisely since estimates of rates of change were uncertain. While creationists had been proposing dates of around six or seven thousand years for the age of Earth based on the Bible, early geologists were suggesting millions of years for geologic periods, and some were even suggesting a virtually infinite age for Earth. Geologists and paleontologists constructed the geologic table based on the relative positions of different strata and fossils, and estimated the time scales based on studying rates of various kinds of weathering, erosion, sedimentation, and lithification. Until the discovery of radioactivity in 1896 and the development of its geological applications through radiometric dating during the first half of the 20th century, the ages of various rock strata and the age of Earth were the subject of considerable debate.
The first geologic time scale that included absolute dates was published in 1913 by the British geologist Arthur Holmes. He greatly furthered the newly created discipline of geochronology and published the world-renowned book "The Age of the Earth" in which he estimated Earth's age to be at least 1.6 billion years.
In 1977, the "Global Commission on Stratigraphy" (now the International Commission on Stratigraphy) began to define global references known as GSSP (Global Boundary Stratotype Sections and Points) for geologic periods and faunal stages. The commission's work is described in the 2012 geologic time scale of Gradstein et al. A UML model for how the timescale is structured, relating it to the GSSP, is also available.
Popular culture and a growing number of scientists use the term "Anthropocene" informally to label the current epoch in which we are living. The term was coined by Paul Crutzen and Eugene Stoermer in 2000 to describe the current time in which humans have had an enormous impact on the environment. It has evolved to describe an "epoch" starting some time in the past and on the whole defined by anthropogenic carbon emissions and production and consumption of plastic goods that are left in the ground.
Critics of this term say that it should not be used because it is difficult, if not nearly impossible, to define a specific time when humans started influencing the rock strata, and thus to define the start of such an epoch. Others say that humans have not even started to leave their biggest impact on Earth, and therefore the Anthropocene has not even started yet.
The ICS has not officially approved the term. The Anthropocene Working Group met in Oslo in April 2016 to consolidate evidence supporting the argument for the Anthropocene as a true geologic epoch. Evidence was evaluated and the group voted to recommend "Anthropocene" as the new geological age in August 2016.
Should the International Commission on Stratigraphy approve the recommendation, the proposal to adopt the term will have to be ratified by the International Union of Geological Sciences before its formal adoption as part of the geologic time scale.
The following table summarizes the major events and characteristics of the periods of time making up the geologic time scale. This table is arranged with the most recent geologic periods at the top, and the most ancient at the bottom. The height of each table entry does not correspond to the duration of each subdivision of time.
The content of the table is based on the current official geologic time scale of the International Commission on Stratigraphy (ICS), with the epoch names altered to the early/late format from lower/upper as recommended by the ICS when dealing with chronostratigraphy.
The ICS also provides an online, interactive version of this chart at https://stratigraphy.org/timescale/. It is based on a service delivering a machine-readable Resource Description Framework/Web Ontology Language representation of the timescale, which is available through the GeoSciML project of the Commission for the Management and Application of Geoscience Information, both as a web service and at a SPARQL end-point.
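As a purely illustrative sketch of what querying such a machine-readable service can look like, the following Python snippet uses the SPARQLWrapper library; the endpoint URL and the generic query are placeholders, since the actual service address and vocabulary are not listed here:

# Minimal sketch of querying a SPARQL endpoint for timescale data.
# The endpoint URL below is a placeholder, not the real service address.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://example.org/sparql")  # hypothetical endpoint
sparql.setQuery("""
    SELECT ?subject ?predicate ?object
    WHERE { ?subject ?predicate ?object }
    LIMIT 5
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["subject"]["value"], row["predicate"]["value"], row["object"]["value"])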
Note that the table is not to scale: although the Phanerozoic eon appears longer than the rest, it spans only about 500 million years, while the previous three eons (the Precambrian supereon) collectively span over 3.5 billion years. This discrepancy reflects how few well-documented events are known from the first three eons compared with our own eon, the Phanerozoic.
The ICS's "Geologic Time Scale 2012" book which includes the new approved time scale also displays a proposal to substantially revise the Precambrian time scale to reflect important events such as the formation of the Earth or the Great Oxidation Event, among others, while at the same time maintaining most of the previous chronostratigraphic nomenclature for the pertinent time span. (See also Period (geology)#Structure.)
|
https://en.wikipedia.org/wiki?curid=12967
|
Gambler's fallacy
The gambler's fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the erroneous belief that if a particular event has occurred more frequently than normal in the past, it is less likely to happen in the future (or vice versa), when it has otherwise been established that the probability of such events does not depend on what has happened in the past. Such events, having the quality of historical independence, are referred to as statistically independent. The fallacy is commonly associated with gambling, where it may be believed, for example, that the next dice roll is more than usually likely to be six because there have recently been fewer than the usual number of sixes.
The term "Monte Carlo fallacy" originates from the best known example of the phenomenon, which occurred in the Monte Carlo Casino in 1913.
The gambler's fallacy can be illustrated by considering the repeated toss of a fair coin. The outcomes in different tosses are statistically independent and the probability of getting heads on a single toss is 1/2 (one in two). The probability of getting two heads in two tosses is 1/4 (one in four) and the probability of getting three heads in three tosses is 1/8 (one in eight). In general, if "Ai" is the event where toss "i" of a fair coin comes up heads, then each "Ai" has probability 1/2, and the probability of getting heads on every one of "n" tosses is (1/2) raised to the power "n".
If after tossing four heads in a row, the next coin toss also came up heads, it would complete a run of five successive heads. Since the probability of a run of five successive heads is 1/32 (one in thirty-two), a person might believe that the next flip would be more likely to come up tails rather than heads again. This is incorrect and is an example of the gambler's fallacy. The event "5 heads in a row" and the event "first 4 heads, then a tail" are equally likely, each having probability 1/32. Since the first four tosses have turned up heads, the probability that the next toss is a head is 1/2.
While a run of five heads has a probability of 1/32 = 0.03125 (a little over 3%), the misunderstanding lies in not realizing that this is the case "only before the first coin is tossed". After the first four tosses, the results are no longer unknown, so their probabilities are at that point equal to 1 (100%). The reasoning that a fifth toss is more likely to be tails because the previous four tosses were heads, with a run of luck in the past influencing the odds in the future, forms the basis of the fallacy.
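This reasoning can also be checked by brute force. The following minimal Python sketch (illustrative only, assuming a fair coin) enumerates all 32 equally likely outcomes of five tosses and confirms that, among the sequences beginning with four heads, the fifth toss is heads exactly half the time:

# Enumerate all 2**5 equally likely outcomes of five fair-coin tosses and
# condition on the first four being heads (illustrative sketch only).
from itertools import product

sequences = list(product("HT", repeat=5))              # 32 sequences, each with probability 1/32
starts_with_four_heads = [s for s in sequences if s[:4] == ("H",) * 4]

print(len(starts_with_four_heads))                     # 2 sequences begin with four heads
fifth_is_head = sum(s[4] == "H" for s in starts_with_four_heads)
print(fifth_is_head / len(starts_with_four_heads))     # 0.5: the fifth toss is still 50/50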
If a fair coin is flipped 21 times, the probability of 21 heads is 1 in 2,097,152. The probability of flipping a head after having already flipped 20 heads in a row is 1/2. Assuming a fair coin, the probability of 20 heads followed by a tail is 0.5^20 × 0.5 = 0.5^21, and the probability of 20 heads followed by another head is likewise 0.5^20 × 0.5 = 0.5^21.
The probability of getting 20 heads then 1 tail, and the probability of getting 20 heads then another head, are both 1 in 2,097,152. When flipping a fair coin 21 times, the outcome is equally likely to be 21 heads as 20 heads and then 1 tail. These two outcomes are as likely as any of the other combinations that can be obtained from 21 flips of a coin. All of the 21-flip combinations have probabilities equal to 0.5^21, or 1 in 2,097,152. Assuming that a change in the probability will occur as a result of the outcome of prior flips is incorrect because every outcome of a 21-flip sequence is as likely as the other outcomes. In accordance with Bayes' theorem, the likely outcome of each flip is the probability of the fair coin, which is 1/2.
The fallacy leads to the incorrect notion that previous failures will create an increased probability of success on subsequent attempts. For a fair 16-sided die, the probability of each outcome occurring is 1/16 (6.25%). If a win is defined as rolling a 1, the probability of a 1 occurring at least once in 16 rolls is 1 - (15/16)^16, or about 64.4%.
The probability of a loss on the first roll is 15/16 (93.75%). According to the fallacy, the player should have a higher chance of winning after one loss has occurred. The probability of at least one win in the 15 remaining rolls is now 1 - (15/16)^15, or about 62.0%.
By losing one toss, the player's probability of winning has in fact dropped by about two percentage points. With 5 losses and 11 rolls remaining, the probability of winning drops to around 0.5 (50%). The probability of at least one win does not increase after a series of losses; indeed, the probability of success "actually decreases", because there are fewer trials left in which to win. The probability of winning will eventually equal the probability of winning a single toss, which is 1/16 (6.25%), and occurs when only one toss is left.
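The figures quoted above follow from the complement rule: the chance of at least one win in the remaining rolls is one minus the chance of losing every one of them. A short Python sketch of that calculation, assuming a fair 16-sided die:

# Probability of rolling at least one "1" in the remaining rolls of a fair
# 16-sided die (a sketch of the rounded figures quoted above).
def p_at_least_one_win(rolls_left: int, sides: int = 16) -> float:
    return 1 - ((sides - 1) / sides) ** rolls_left

print(round(p_at_least_one_win(16), 3))   # ~0.644 before any roll
print(round(p_at_least_one_win(15), 3))   # ~0.620 after one loss
print(round(p_at_least_one_win(11), 3))   # ~0.508 with 11 rolls remaining
print(round(p_at_least_one_win(1), 4))    # 0.0625 with a single roll left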
After a consistent tendency towards tails, a gambler may also decide that tails has become a more likely outcome. This is a rational and Bayesian conclusion, bearing in mind the possibility that the coin may not be fair; it is not a fallacy. Believing the odds to favor tails, the gambler sees no reason to change to heads. However it is a fallacy that a sequence of trials carries a memory of past results which tend to favor or disfavor future outcomes.
The inverse gambler's fallacy described by Ian Hacking is a situation where a gambler entering a room and seeing a person rolling a double six on a pair of dice may erroneously conclude that the person must have been rolling the dice for quite a while, as they would be unlikely to get a double six on their first attempt.
Researchers have examined whether a similar bias exists for inferences about unknown past events based upon known subsequent events, calling this the "retrospective gambler's fallacy".
An example of a retrospective gambler's fallacy would be to observe multiple successive "heads" on a coin toss and conclude from this that the previously unknown flip was "tails". Real world examples of retrospective gambler's fallacy have been argued to exist in events such as the origin of the Universe. In his book "Universes", John Leslie argues that "the presence of vastly many universes very different in their characters might be our best explanation for why at least one universe has a life-permitting character". Daniel M. Oppenheimer and Benoît Monin argue that "In other words, the 'best explanation' for a low-probability event is that it is only one in a multiple of trials, which is the core intuition of the reverse gambler's fallacy." Philosophical arguments are ongoing about whether such arguments are or are not a fallacy, arguing that the occurrence of our universe says nothing about the existence of other universes or trials of universes. Three studies involving Stanford University students tested the existence of a retrospective gamblers' fallacy. All three studies concluded that people have a gamblers' fallacy retrospectively as well as to future events. The authors of all three studies concluded their findings have significant "methodological implications" but may also have "important theoretical implications" that need investigation and research, saying "[a] thorough understanding of such reasoning processes requires that we not only examine how they influence our predictions of the future, but also our perceptions of the past."
In 1796, Pierre-Simon Laplace described in "A Philosophical Essay on Probabilities" the ways in which men calculated their probability of having sons: "I have seen men, ardently desirous of having a son, who could learn only with anxiety of the births of boys in the month when they expected to become fathers. Imagining that the ratio of these births to those of girls ought to be the same at the end of each month, they judged that the boys already born would render more probable the births next of girls." The expectant fathers feared that if more sons were born in the surrounding community, then they themselves would be more likely to have a daughter. This essay by Laplace is regarded as one of the earliest descriptions of the fallacy.
After having multiple children of the same sex, some parents may believe that they are due to have a child of the opposite sex. While the Trivers–Willard hypothesis predicts that birth sex is dependent on living conditions, stating that more male children are born in good living conditions, while more female children are born in poorer living conditions, the probability of having a child of either sex is still regarded as near 0.5 (50%).
Perhaps the most famous example of the gambler's fallacy occurred in a game of roulette at the Monte Carlo Casino on August 18, 1913, when the ball fell in black 26 times in a row. This was an extremely uncommon occurrence: the probability of a sequence of either red or black occurring 26 times in a row on a single-zero wheel is (18/37)^25 (the first spin sets the colour and each of the remaining 25 spins must match it), or around 1 in 66.6 million, assuming the mechanism is unbiased. Gamblers lost millions of francs betting against black, reasoning incorrectly that the streak was causing an imbalance in the randomness of the wheel, and that it had to be followed by a long streak of red.
The gambler's fallacy does not apply in situations where the probability of different events is not independent. In such cases, the probability of future events can change based on the outcome of past events, such as the statistical permutation of events. An example is when cards are drawn from a deck without replacement. If an ace is drawn from a deck and not reinserted, the next draw is less likely to be an ace and more likely to be of another rank. The probability of drawing another ace, assuming that it was the first card drawn and that there are no jokers, has decreased from 4/52 (7.69%) to 3/51 (5.88%), while the probability for each other rank has increased from 4/52 (7.69%) to 4/51 (7.84%). This effect allows card counting systems to work in games such as blackjack.
In most illustrations of the gambler's fallacy and the reverse gambler's fallacy, the trial (e.g. flipping a coin) is assumed to be fair. In practice, this assumption may not hold. For example, if a coin is flipped 21 times, the probability of 21 heads with a fair coin is 1 in 2,097,152. Since this probability is so small, if it happens, it may well be that the coin is somehow biased towards landing on heads, or that it is being controlled by hidden magnets, or similar. In this case, the smart bet is "heads" because Bayesian inference from the empirical evidence — 21 heads in a row — suggests that the coin is likely to be biased toward heads. Bayesian inference can be used to show that when the long-run proportion of different outcomes is unknown but exchangeable (meaning that the random process from which the outcomes are generated may be biased but is equally likely to be biased in any direction) and that previous observations demonstrate the likely direction of the bias, the outcome which has occurred the most in the observed data is the most likely to occur again.
For example, if the "a priori" probability of a biased coin is say 1%, and assuming that such a biased coin would come down heads say 60% of the time, then after 21 heads the probability of a biased coin has increased to about 32%.
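The 32% figure can be reproduced directly from Bayes' theorem. A minimal Python sketch using the numbers assumed in the text (a 1% prior probability of a biased coin, a 60% heads probability for such a coin, and 21 observed heads):

# Bayesian update for a possibly biased coin, using the example numbers above:
# prior P(biased) = 1%, a biased coin lands heads 60% of the time, 21 heads observed.
prior_biased = 0.01
prior_fair = 1 - prior_biased

p_heads_biased = 0.60
p_heads_fair = 0.50
n_heads = 21

# Likelihood of 21 consecutive heads under each hypothesis.
like_biased = p_heads_biased ** n_heads
like_fair = p_heads_fair ** n_heads

# Posterior probability that the coin is biased (Bayes' theorem).
posterior_biased = (prior_biased * like_biased) / (
    prior_biased * like_biased + prior_fair * like_fair
)
print(f"P(biased | 21 heads) = {posterior_biased:.2f}")   # about 0.32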
The opening scene of the play "Rosencrantz and Guildenstern Are Dead" by Tom Stoppard discusses these issues as one man continually flips heads and the other considers various possible explanations.
If external factors are allowed to change the probability of the events, the gambler's fallacy may not hold. For example, a change in the game rules might favour one player over the other, improving his or her win percentage. Similarly, an inexperienced player's success may decrease after opposing teams learn about and play against their weaknesses. This is another example of bias.
The gambler's fallacy arises out of a belief in a law of small numbers, leading to the erroneous belief that small samples must be representative of the larger population. According to the fallacy, streaks must eventually even out in order to be representative. Amos Tversky and Daniel Kahneman first proposed that the gambler's fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic, which states that people evaluate the probability of a certain event by assessing how similar it is to events they have experienced before, and how similar the events surrounding those two processes are. According to this view, "after observing a long run of red on the roulette wheel, for example, most people erroneously believe that black will result in a more representative sequence than the occurrence of an additional red", so people expect that a short run of random outcomes should share properties of a longer run, specifically in that deviations from average should balance out. When people are asked to make up a random-looking sequence of coin tosses, they tend to make sequences where the proportion of heads to tails stays closer to 0.5 in any short segment than would be predicted by chance, a phenomenon known as insensitivity to sample size. Kahneman and Tversky interpret this to mean that people believe short sequences of random events should be representative of longer ones. The representativeness heuristic is also cited behind the related phenomenon of the clustering illusion, according to which people see streaks of random events as being non-random when such streaks are actually much more likely to occur in small samples than people expect.
The gambler's fallacy can also be attributed to the mistaken belief that gambling, or even chance itself, is a fair process that can correct itself in the event of streaks, known as the just-world hypothesis. Other researchers believe that belief in the fallacy may be the result of a mistaken belief in an internal locus of control. When a person believes that gambling outcomes are the result of their own skill, they may be more susceptible to the gambler's fallacy because they reject the idea that chance could overcome skill or talent.
Some researchers believe that it is possible to define two types of gambler's fallacy: type one and type two. Type one is the classic gambler's fallacy, where individuals believe that a particular outcome is due after a long streak of another outcome. Type two gambler's fallacy, as defined by Gideon Keren and Charles Lewis, occurs when a gambler underestimates how many observations are needed to detect a favorable outcome, such as watching a roulette wheel for a length of time and then betting on the numbers that appear most often. For events with a high degree of randomness, detecting a bias that will lead to a favorable outcome takes an impractically large amount of time and is very difficult, if not impossible, to do. The two types differ in that type one wrongly assumes that gambling conditions are fair and perfect, while type two assumes that the conditions are biased, and that this bias can be detected after a certain amount of time.
Another variety, known as the retrospective gambler's fallacy, occurs when individuals judge that a seemingly rare event must come from a longer sequence than a more common event does. For example, people believe that an imaginary sequence of die rolls is more than three times as long when a set of three sixes is observed than when only two sixes are observed. This effect can be observed in isolated instances, or even sequentially. Another example: on hearing that a teenager has unprotected sex and becomes pregnant on a given night, people conclude that she has been engaging in unprotected sex for longer than if they hear that she had unprotected sex but did not become pregnant, even though the probability of becoming pregnant from each act of intercourse is independent of how much intercourse preceded it.
Another psychological perspective states that the gambler's fallacy can be seen as the counterpart to basketball's hot-hand fallacy, in which people tend to predict the same outcome as the previous event - known as positive recency - resulting in a belief that a high scorer will continue to score. In the gambler's fallacy, people predict the opposite outcome of the previous event - negative recency - believing that since the roulette wheel has landed on black on the previous six occasions, it is due to land on red on the next spin. Ayton and Fischer have theorized that people display positive recency for the hot-hand fallacy because the fallacy deals with human performance, and that people do not believe that an inanimate object can become "hot." Human performance is not perceived as random, and people are more likely to continue streaks when they believe that the process generating the results is nonrandom. When a person exhibits the gambler's fallacy, they are more likely to exhibit the hot-hand fallacy as well, suggesting that one construct is responsible for the two fallacies.
The difference between the two fallacies is also found in economic decision-making. A study by Huber, Kirchler, and Stockl in 2010 examined how the hot hand and the gambler's fallacy are exhibited in the financial market. The researchers gave their participants a choice: they could either bet on the outcome of a series of coin tosses, use an expert opinion to sway their decision, or choose a risk-free alternative instead for a smaller financial reward. Participants turned to the expert opinion to make their decision 24% of the time based on their past experience of success, which exemplifies the hot-hand. If the expert was correct, 78% of the participants chose the expert's opinion again, as opposed to 57% doing so when the expert was wrong. The participants also exhibited the gambler's fallacy, with their selection of either heads or tails decreasing after noticing a streak of either outcome. This experiment helped bolster Ayton and Fischer's theory that people put more faith in human performance than they do in seemingly random processes.
While the representativeness heuristic and other cognitive biases are the most commonly cited cause of the gambler's fallacy, research suggests that there may also be a neurological component. Functional magnetic resonance imaging has shown that after losing a bet or gamble, known as riskloss, the frontoparietal network of the brain is activated, resulting in more risk-taking behavior. In contrast, there is decreased activity in the amygdala, caudate, and ventral striatum after a riskloss. Activation in the amygdala is negatively correlated with gambler's fallacy, so that the more activity exhibited in the amygdala, the less likely an individual is to fall prey to the gambler's fallacy. These results suggest that gambler's fallacy relies more on the prefrontal cortex, which is responsible for executive, goal-directed processes, and less on the brain areas that control affective decision-making.
The desire to continue gambling or betting is controlled by the striatum, which supports a choice-outcome contingency learning method. The striatum processes the errors in prediction and the behavior changes accordingly. After a win, the positive behavior is reinforced and after a loss, the behavior is conditioned to be avoided. In individuals exhibiting the gambler's fallacy, this choice-outcome contingency method is impaired, and they continue to make risks after a series of losses.
The gambler's fallacy is a deep-seated cognitive bias and can be very hard to overcome. Educating individuals about the nature of randomness has not always proven effective in reducing or eliminating any manifestation of the fallacy. Participants in a study by Beach and Swensson in 1967 were shown a shuffled deck of index cards with shapes on them, and were instructed to guess which shape would come next in a sequence. The experimental group of participants was informed about the nature and existence of the gambler's fallacy, and were explicitly instructed not to rely on run dependency to make their guesses. The control group was not given this information. The response styles of the two groups were similar, indicating that the experimental group still based their choices on the length of the run sequence. This led to the conclusion that instructing individuals about randomness is not sufficient in lessening the gambler's fallacy.
An individual's susceptibility to the gambler's fallacy may decrease with age. A study by Fischbein and Schnarch in 1997 administered a questionnaire to five groups: students in grades 5, 7, 9, 11, and college students specializing in teaching mathematics. None of the participants had received any prior education regarding probability. The question asked was: "Ronni flipped a coin three times and in all cases heads came up. Ronni intends to flip the coin again. What is the chance of getting heads the fourth time?" The results indicated that the older the students were, the less likely they were to answer "smaller than the chance of getting tails", the response that indicates a negative recency effect. 35% of the 5th graders, 35% of the 7th graders, and 20% of the 9th graders exhibited the negative recency effect. Only 10% of the 11th graders answered this way, and none of the college students did. Fischbein and Schnarch theorized that an individual's tendency to rely on the representativeness heuristic and other cognitive biases can be overcome with age.
Another possible solution comes from Roney and Trick, Gestalt psychologists who suggest that the fallacy may be eliminated as a result of grouping. When a future event such as a coin toss is described as part of a sequence, no matter how arbitrarily, a person will automatically consider the event as it relates to the past events, resulting in the gambler's fallacy. When a person considers every event as independent, the fallacy can be greatly reduced.
Roney and Trick told participants in their experiment that they were betting on either two blocks of six coin tosses, or on two blocks of seven coin tosses. The fourth, fifth, and sixth tosses all had the same outcome, either three heads or three tails. The seventh toss was grouped with either the end of one block, or the beginning of the next block. Participants exhibited the strongest gambler's fallacy when the seventh trial was part of the first block, directly after the sequence of three heads or tails. The researchers pointed out that the participants that did not show the gambler's fallacy showed less confidence in their bets and bet fewer times than the participants who picked with the gambler's fallacy. When the seventh trial was grouped with the second block, and was perceived as not being part of a streak, the gambler's fallacy did not occur.
Roney and Trick argued that instead of teaching individuals about the nature of randomness, the fallacy could be avoided by training people to treat each event as if it is a beginning and not a continuation of previous events. They suggested that this would prevent people from gambling when they are losing, in the mistaken hope that their chances of winning are due to increase based on an interaction with previous events.
Studies have found that asylum judges, loan officers, baseball umpires and lotto players employ the gambler's fallacy consistently in their decision-making.
|
https://en.wikipedia.org/wiki?curid=12970
|
Gasparo Contarini
Gasparo Contarini (16 October 1483 – 24 August 1542) was an Italian diplomat, cardinal and Bishop of Belluno. He was one of the first proponents of dialogue with Protestants after the Reformation.
He was born in Venice, the eldest son of Alvise Contarini, of the ancient noble House of Contarini, and his wife Polissena Malpiero. After a thorough scientific and philosophical training at the University of Padua, he began his career in the service of his native city. From September 1520 to August 1525 he was the Republic's ambassador to Charles V, with whom Venice was soon at war, instructed to defend the Republic's alliance with Francis I of France. Though he participated at the Diet of Worms, April 1521, he never saw or spoke with Martin Luther. He accompanied Charles in the Netherlands and Spain.
Contarini was in Spain when the Magellan–Elcano circumnavigation returned in 1522, bringing with them a cargo of spices from the East as well as a scientific curiosity. Although the sailors had carefully recorded every day of the three-year journey since they left Seville, the ship's log was one day earlier than the actual date when they returned to Seville. Contarini was the first European to give a correct explanation of this phenomenon. Since the ship had sailed westward around the world, in the same direction as the apparent motion of the sun in the sky, the sailors had experienced one fewer sunrise than a stationary observer.
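The arithmetic behind Contarini's explanation can be sketched as follows; this is an illustrative model only, assuming an idealised three-year voyage, 24-hour solar days, and exactly one westward circuit, with figures chosen for the example rather than taken from the expedition's log.

    # Illustrative sketch of the "lost day" Contarini explained (assumed numbers).
    # A traveller's local solar time falls behind the home port's clock by one
    # full day for every complete westward circuit of the globe.
    voyage_days_at_port = 3 * 365   # solar days counted by a stationary observer in Seville
    westward_circuits = 1           # the expedition sailed once around the world, westward

    sunrises_at_port = voyage_days_at_port
    sunrises_on_board = voyage_days_at_port - westward_circuits

    print(sunrises_at_port)    # 1095 sunrises seen in Seville
    print(sunrises_on_board)   # 1094 sunrises seen on board: the log ends one day "early"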
He participated at the Congress of Ferrara in 1526 as the Republic's representative; at the Congress the League of Cognac was formed against the Emperor, allying France with Venice and several states of Italy. Later, after the Sack of Rome (1527), he assisted in reconciling the emperor with Clement VII, whose release he had obtained, and with the Republic of Bologna. Upon his return to Venice, he was made a senator and a member of the Great Council.
In 1535, Paul III unexpectedly made the secular diplomat a cardinal in order to bind an able man of evangelical disposition to the Roman interests. Contarini accepted, but in his new position did not exhibit his former independence. At the time he was promoted to cardinal, May 21, 1535, he was still a layman; already in October 1536 he was appointed Bishop of Belluno. One of the fruits of his diplomatic activity is his "De magistratibus et republica Venetorum".
As Cardinal, Contarini figured among the most prominent of the "Spirituali", the leaders of the movement for reform within the Roman church. In April 1536 Paul III appointed a commission to devise ways for a reformation, with Contarini presiding. Paul III received Contarini's "Consilium de Emendanda Ecclesia" favorably, and it was circulated among the cardinalate, but it remained a dead letter. In a letter to his friend Cardinal Reginald Pole (dated 11 November 1538), Contarini says that his hopes had been wakened anew by the pope's attitude. He and his friends, who formed the Catholic evangelical movement of the Spirituali, believed that everything would be accomplished once the abuses in church life had been put away. Contarini's part in the report is shown by his letters to the pope, in which he complained of the schism in the church, of simony and flattery in the papal court, but above all of papal tyranny. Paul's successor Paul IV, once a member of the commission, later placed the report, with its least grateful passages, on the "Index Librorum Prohibitorum".
In 1541 Cardinal Contarini was papal legate at the Conference of Regensburg, the diet and religious debate marking the culmination of attempts to restore religious unity in Germany by means of conferences. There everything was unfavorable; the Catholic states were bitter, the Evangelicals were distant. Contarini's instructions, though apparently free, were in fact full of papal reservations. But the papal party had gladly sent him, thinking that through him a union in doctrine could be brought about, while the interest of Rome could be attended to later. Though the princes stood aloof, the theologians and the emperor were for peace, so the main articles were put forth in a formula, Evangelical in thought and Catholic in expression. The papal legate had revised the Catholic proposal and assented to the formula agreed upon. All gave their approval, even Johann Eck, though he later regretted it.
Contarini's theological advisor was Tommaso Badia; his own position is shown in a treatise on justification, composed at Regensburg, which in essential points is Evangelical, differing only in the omission of the negative side and in being interwoven with the teaching of Aquinas. Meanwhile, the papal policy had changed, and Contarini was compelled to follow his leader. He advised the emperor, after the conference had broken up, not to renew it, but to submit everything to the pope.
Ignatius Loyola acknowledged that Cardinal Contarini was largely responsible for the papal approbation of the Society of Jesus, on September 27, 1540. Meanwhile, Rome had drifted further into reaction, and Contarini died while legate at Bologna, at a time when the Inquisition had driven many of his friends and fellows in conviction into exile.
Contarini's book "De magistratibus et republica venetorum" (Paris, 1543) is an important source for the study of sixteenth- and seventeenth-century Venice's unique system of government. It was published in an English translation in 1599. This magisterial work, written during his time as an ambassador to Charles V, extols the various institutions of the Venetian state in a manner designed to emphasize harmony, fairness and serenity. Historians have demonstrated that this text represents Contarini's idealization of Venetian reality. Probably written for a foreign, courtly audience, this work functions as the source for the everlasting propagation of the "myth of Venice" as a stable, unchanging and prosperous society.
His depiction of how members of the council were elected to the senate, for example, aimed to emphasise the way the electoral system prevented factionalism from occurring, instead making sure that “public benefits are largely extended among the citizens” rather than narrowly amongst “one family”. An elaborate lottery is described as giving the maximum amount of chance in appointing patricians to particular offices, and care is taken to point out if two of one family are standing for similar posts. Fairness is further emphasised in Contarini's constant references to the equality the members of the council enjoyed. They “sit down where it pleases them, for there is no place appointed to any”, and they “with oath promise to do their utmost diligence, that the laws may be observed”. He creates an image of disparate individuals, with factions broken up by the guiding hand of the law, working to ensure those in positions of importance are fairly chosen from their number and without the capacity to serve the interests of a smaller group.
Contarini's depiction of the Doge lucidly demonstrates the way in which this figure embodies both the conscious illusion of a resplendent monarchical ruler and an equally conscious demonstration of a regime that wishes to portray itself as ruled by many limiting the powers of one. This calculated duality means that Contarini's doge, which the second book of De magistratibus is almost entirely devoted to discussing, represents the closest point in his text to what actually occurred, because the Doge served as a literal embodiment of the idealisation of the reality of Venetian politics. For Contarini, this duality almost defines the greatness of the Venetian constitution. The Doge is the “heart”, under which “all are comprised”. Contarini places him in the centre of his body metaphor, making him synecdochical for the city and the people that reside within it. This means he is to ensure that the disparate, competing interests of the city beat in time with one another, creating in the process the “perfection of civil agreement”. His job as a conductor, rather than a ruler, means therefore that the role takes on the aspect of representative of the entire city. Contarini's description of his vestments, privileges and rituals can therefore be compared to Marin Sanudo’s description of the physical spaces of Venice in his essay "In Praise of Venice". Both are designed to extol the virtues of the entire city by describing representative parts. This is apparent in the way both authors treat the chapel of St. Mark. Patron saints were hugely important in terms of civic self-identification in renaissance Italy. Contarini emphasizes this, saying that he is “with exceeding honour solemnized of the Venetians”. His description of the Doge's close relationship with the saint, through the “solemn pomp” with which he attends mass at the saint's chapel, attaches him to the aforementioned “exceeding honour”, in a similar fashion to the way in which Sanudo glorifies Venice as a whole by constantly referring to the beauty and worth of St. Mark's square and chapel as part of his panoramic praise of the city.
At the same time, however, Contarini's overall purpose is, of course, the glorification of the republican nature of his city. Therefore, he cannot avoid referring to “the other side” of the Doge's figure when discussing his “royal appearing show”. Things like the “kingly ornaments” which were “always purple garments or cloth of gold”, both very ostentatious assertions of wealth and power, were to ensure he was “recompensed” for his “limitation of authority”. Contarini thus openly concludes that the Doge is a combination of myth and reality, saying that “in everything you may see the show of a king, but his authority is nothing”. Indeed, as Edward Muir points out, “by the sixteenth century virtually every word, gesture and act that the doge made in public was subject to legal and ceremonial regulation”. He could not buy expensive jewels, own property outside Venice or the Veneto, display his insignia outside the Ducal Palace, decorate his apartment as he wanted, receive people in his ducal dress, send official letters, or have close ties with guilds, amongst a great many other restrictions. Legally, therefore, power in Venice came from the numerous councils, not the figurehead. The Doge thus becomes a brazen republican statement. Venice drew attention to a princely, magnificently adorned figurehead, only to direct most executive power to councils of her citizens.
|
https://en.wikipedia.org/wiki?curid=12975
|
Gastroenterology
Gastroenterology is the branch of medicine focused on the digestive system and its disorders.
Diseases affecting the gastrointestinal tract, which includes the organs from the mouth to the anus along the alimentary canal, are the focus of this speciality. Physicians practicing in this field are called gastroenterologists. They have usually completed about eight years of pre-medical and medical education, a year-long internship (if this is not a part of the residency), three years of an internal medicine residency, and three years in a gastroenterology fellowship. Gastroenterologists perform a number of diagnostic and therapeutic procedures including colonoscopy, esophagogastroduodenoscopy (EGD), endoscopic retrograde cholangiopancreatography (ERCP), endoscopic ultrasound (EUS), and liver biopsy. Some gastroenterology trainees will complete a "fourth year" (although this is often their seventh year of graduate medical education) in transplant hepatology, advanced interventional endoscopy, inflammatory bowel disease, motility, or other topics.
Advanced endoscopy, sometimes called interventional or surgical endoscopy, is a sub-specialty of gastroenterology which focuses on advanced endoscopic techniques for the treatment of pancreatic, hepatobiliary, and gastrointestinal disease. Interventional gastroenterologists typically undergo an additional year of rigorous training in advanced endoscopic techniques including endoscopic retrograde cholangiopancreatography, endoscopic ultrasound guided diagnostic and interventional procedures, and advanced resection techniques including endoscopic mucosal resection and endoscopic submucosal dissection. Some advanced endoscopists also perform endoscopic bariatric procedures.
Hepatology, or hepatobiliary medicine, encompasses the study of the liver, pancreas, and biliary tree, and is traditionally considered a sub-specialty of gastroenterology, while proctology encompasses disorders of the anus, rectum, and colon and is considered a sub-specialty of general surgery.
Citing from Egyptian papyri, John F. Nunn identified significant knowledge of gastrointestinal diseases among practicing physicians during the periods of the pharaohs. Irynakhty, of the tenth dynasty, 2125 B.C., was a court physician specializing in gastroenterology, sleeping, and proctology.
Among ancient Greeks, Hippocrates attributed digestion to concoction. Galen's concept of the stomach having four "faculties" was widely accepted up to modernity in the seventeenth century.
Knowledge of the field advanced further through the eighteenth, nineteenth, twentieth, and twenty-first centuries. Diseases of the digestive system are classified in the International Classification of Disease (ICD 2007)/WHO classification, under MeSH subject headings, and in the National Library of Medicine catalogue (NLM classification 2006).
In the United States, gastroenterology is an internal medicine subspecialty certified by the American Board of Internal Medicine (ABIM) and the American Osteopathic Board of Internal Medicine (AOBIM).
|
https://en.wikipedia.org/wiki?curid=12976
|
Gulag
The Gulag or GULAG (Russian: ГУЛАГ, an acronym for "Glavnoe Upravlenie LAGerei", 'Main Directorate of Camps') was the government agency in charge of the Soviet network of forced-labour camps set up by order of Vladimir Lenin, reaching its peak during Joseph Stalin's rule from the 1930s to the early 1950s. English-language speakers also use the word "gulag" to refer to all forced-labour camps that existed in the Soviet Union, including camps that existed in the post-Stalin era.
The Gulag is recognised as a major instrument of political repression in the Soviet Union. The camps housed a wide range of convicts, from petty criminals to political prisoners, large numbers of whom were convicted by simplified procedures, such as by NKVD troikas or by other instruments of extrajudicial punishment. In 1918–22, the agency was administered by Cheka, followed by the GPU (1922–23), OGPU (1923–34), later by the NKVD (1934–46), and in the final years by the Ministry of Internal Affairs (MVD). The Solovki prison camp, the first corrective labor camp constructed after the revolution, was established in 1918 and legalised by a decree, "On the creation of the forced-labour camps" on April 15, 1919.
The internment system grew rapidly, reaching a population of 100,000 in the 1920s. According to Nicolas Werth, the yearly mortality rate in the Soviet concentration camps strongly varied, reaching 5% (1933) and 20% (1942–1943) while dropping considerably in the post-war years (about 1 to 3% per year at the beginning of the 1950s). In 1956 the mortality rate dropped to 0.4%. The emergent consensus among scholars who utilize official archival data is that of the 18 million who were sent to the Gulag from 1930 to 1953, roughly 1.5 to 1.7 million perished there or as a result of their detention. However, some historians question the reliability of such data and instead rely heavily on literary sources that come to higher estimations. Archival researchers have found "no plan of destruction" of the gulag population and no statement of official intent to kill them, and prisoner releases vastly exceeded the number of deaths in the Gulag.
Almost immediately following the death of Stalin, the Soviet establishment took steps in dismantling the Gulag system. A general amnesty was declared in the immediate aftermath of Stalin's death, though it was limited to non-political prisoners and political prisoners sentenced to not more than 5 years. Shortly thereafter Nikita Khrushchev was elected as General Secretary of the Communist Party of the Soviet Union, initiating the processes of de-Stalinization and the Khrushchev Thaw, triggering a mass release and rehabilitation of political prisoners. The Gulag system ended definitively six years later on 25 January 1960, when the remains of the administration were dissolved by Khrushchev. The legal practice of sentencing convicts to penal labour, though restrained, was not fully abolished and continues to this day, although to a far more limited capacity, in the Russian Federation.
Aleksandr Solzhenitsyn, winner of the Nobel Prize in Literature, who survived eight years of Gulag incarceration, gave the term its international repute with the publication of "The Gulag Archipelago" in 1973. The author likened the scattered camps to "a chain of islands," and as an eyewitness he described the Gulag as a system where people were worked to death. In March 1940, there were 53 Gulag camp directorates (colloquially referred to simply as "camps") and 423 labour colonies in the Soviet Union. Many mining and industrial towns and cities in northern and eastern Russia and in Kazakhstan such as Karaganda, Norilsk, Vorkuta and Magadan, were originally blocks of camps built by prisoners and subsequently run by ex-prisoners.
Some suggest that 14 million people were imprisoned in the Gulag labour camps from 1929 to 1953 (the estimates for the period 1918–1929 are more difficult to calculate). Other calculations, by historian Orlando Figes, refer to 25 million prisoners of the Gulag in 1928–1953. A further 6–7 million were deported and exiled to remote areas of the USSR, and 4–5 million passed through labour colonies, plus 3.5 million who were already in, or who had been sent to, labour settlements. According to some estimates, the total population of the camps varied from 510,307 in 1934 to 1,727,970 in 1953. According to other estimates, at the beginning of 1953 the total number of prisoners in prison camps was more than 2.4 million of which more than 465,000 were political prisoners.
The institutional analysis of the Soviet concentration system is complicated by the formal distinction between GULAG and GUPVI.
GUPVI (ГУПВИ) was the Main Administration for Affairs of Prisoners of War and Internees, a department of the NKVD (later MVD) in charge of the handling of foreign civilian internees and POWs (prisoners of war) in the Soviet Union during and in the aftermath of World War II (1939–1953). In many ways the GUPVI system was similar to the GULAG. Its major function was the organisation of foreign forced labour in the Soviet Union. The top management of GUPVI came from the GULAG system. The major noted distinction from the GULAG was the absence of convicted criminals in the GUPVI camps. Otherwise the conditions in both camp systems were similar: hard labour, poor nutrition and living conditions, and a high mortality rate.
In the view of Soviet political prisoners such as Solzhenitsyn, all foreign civilian detainees and foreign POWs were imprisoned in the GULAG, and the surviving foreign civilians and POWs themselves considered that they had been prisoners of the GULAG. According to estimates, in total, during the whole period of the existence of GUPVI there were over 500 POW camps (within the Soviet Union and abroad), which imprisoned over 4,000,000 POWs.
Most Gulag inmates were not political prisoners, although significant numbers of political prisoners could be found in the camps at any one time. Petty crimes and jokes about the Soviet government and officials were punishable by imprisonment. About half of political prisoners in the Gulag camps were imprisoned without trial; official data suggest that there were over 2.6 million sentences to imprisonment on cases investigated by the secret police throughout 1921–53. The GULAG was reduced in size following Stalin's death in 1953, in a period known as the Khrushchev Thaw.
In 1960, the Ministerstvo Vnutrennikh Del (MVD) ceased to function as the Soviet-wide administration of the camps in favour of individual republic MVD branches. The centralised detention facilities temporarily ceased functioning.
Although the term "Gulag" originally referred to a government agency, in English and many other languages the acronym acquired the qualities of a common noun, denoting "the Soviet system of prison-based, unfree labour".
Even more broadly, "Gulag" has come to mean the Soviet repressive system itself, the set of procedures that prisoners once called the "meat-grinder": the arrests, the interrogations, the transport in unheated cattle cars, the forced labour, the destruction of families, the years spent in exile, the early and unnecessary deaths.
Western authors use the term "Gulag" to denote all the prisons and internment camps in the Soviet Union. The term's contemporary usage is at times notably not directly related to the USSR, such as in the expression "North Korea's Gulag" for camps operational today.
The word "Gulag" was not often used in Russian, either officially or colloquially; the predominant terms were "the camps" (лагеря, "lagerya") and "the zone" (зона, "zona"), usually singular, for the labour camp system and for the individual camps. The official term, "corrective labour camp", was suggested for official use by the politburo of the Communist Party of the Soviet Union in the session of July 27, 1929.
The Tsar and the Russian Empire used both forced exile and forced labour as forms of judicial punishment. Katorga, a category of punishment reserved for those convicted of the most serious crimes, had many of the features associated with labour-camp imprisonment: confinement, simplified facilities (as opposed to prisons), and forced labour, usually involving hard, unskilled or semi-skilled work. According to historian Anne Applebaum, katorga was not a common sentence; approximately 6,000 katorga convicts were serving sentences in 1906 and 28,600 in 1916. Under the Imperial Russian penal system, those convicted of less serious crimes were sent to corrective prisons and also made to work. Forced exile to Siberia had been in use since the seventeenth century for a wide range of offenses and was a common punishment for political dissidents and revolutionaries. In the nineteenth century, the members of the failed Decembrist revolt, Polish nobles who resisted Russian rule, and members of various socialist revolutionary groups, including Bolsheviks such as Sergo Ordzhonikidze, Leon Trotsky, and Joseph Stalin were all sent into exile. Convicts serving labour sentences and exiles were sent to the underpopulated areas of Siberia and the Russian Far East – regions that had few towns or food sources and lacked any organised transportation systems. Despite the isolated conditions, there were prisoners who successfully escaped to populated areas. Stalin himself escaped three of the four times he was sent into exile. From these times, Siberia gained its fearful connotation of punishment, which was further enhanced by the Soviet GULAG system. The Bolsheviks' own experiences with exile and forced labour provided them with a model on which to base their system, including the importance of strict enforcement.
During 1920–50, the leaders of the Communist Party and the Soviet state considered repression to be a tool that was to be used for securing the normal functioning of the Soviet state system, as well as for preserving and strengthening the positions within their social base, the working class (when the Bolsheviks took power, peasants represented 80% of the population). In the midst of the Russian Civil War, Lenin and the Bolsheviks established a "special" prison camp system, separate from the traditional prison system and under the control of the Cheka. These camps, as Lenin envisioned them, had a distinctly political purpose. These early camps of the GULAG system were introduced in order to isolate and eliminate class-alien, socially dangerous, disruptive, suspicious, and other disloyal elements, whose deeds and thoughts were not contributing to the strengthening of the dictatorship of the proletariat. Forced labour as a "method of reeducation" was applied in the Solovki prison camp as early as the 1920s, based on Trotsky's experiments with forced labour camps for Czech war prisoners from 1918 and his proposals to introduce "compulsory labor service" voiced in "Terrorism and Communism". Various categories of prisoners were defined: petty criminals, POWs of the Russian Civil War, officials accused of corruption, sabotage and embezzlement, political enemies, dissidents and other people deemed dangerous for the state. In the first decade of Soviet rule, the judicial and penal systems were neither unified nor coordinated, and there was a distinction between criminal prisoners and political or "special" prisoners. The "traditional" judicial and prison system, which dealt with criminal prisoners, was first overseen by the People's Commissariat of Justice until 1922, after which it was overseen by the People's Commissariat of Internal Affairs, also known as the NKVD. The Cheka and its successor organisations, the GPU or State Political Directorate and the OGPU, oversaw political prisoners and the "special" camps to which they were sent. In April of 1929, the judicial distinctions between criminal and political prisoners were eliminated, and control of the entire Soviet penal system was turned over to the OGPU. In 1928 there were 30,000 individuals interned; the authorities were opposed to compelled labour. In 1927 the official in charge of prison administration wrote:
The exploitation of prison labour, the system of squeezing "golden sweat" from them, the organisation of production in places of confinement, which while profitable from a commercial point of view is fundamentally lacking in corrective significance – these are entirely inadmissible in Soviet places of confinement.
The legal base and the guidance for the creation of the system of "corrective labour camps" (, ), the backbone of what is commonly referred to as the "Gulag", was a secret decree from the Sovnarkom of July 11, 1929, about the use of penal labour that duplicated the corresponding appendix to the minutes of the Politburo meeting of June 27, 1929.
One of the founders of the Gulag system was Naftaly Frenkel. In 1923 he was arrested for illegally crossing borders and smuggling, and was sentenced to 10 years' hard labour at Solovki, which later came to be known as the "first camp of the Gulag". While serving his sentence he wrote a letter to the camp administration detailing a number of "productivity improvement" proposals, including the infamous system of labour exploitation whereby the inmates' food rations were linked to their rate of production, a proposal known as the nourishment scale (шкала питания). This notorious you-eat-as-you-work system would often kill weaker prisoners within weeks and caused countless casualties. The letter caught the attention of a number of high communist officials, including Genrikh Yagoda, and Frenkel soon went from being an inmate to becoming a camp commander and an important Gulag official. His proposals soon saw widespread adoption in the Gulag system.
Having first appeared as an instrument and place for isolating counter-revolutionary and criminal elements, the Gulag, because of its principle of "correction by forced labour", quickly became, in fact, an independent branch of the national economy built on the cheap labour force provided by prisoners. This points to one more important reason for the constancy of the repressive policy, namely the state's interest in an unremitting supply of cheap labour that could be used forcibly, mainly in the extreme conditions of the east and north. The Gulag possessed both punitive and economic functions.
The Gulag was officially established on April 25, 1930 as the ULAG by the OGPU order 130/63 in accordance with the Sovnarkom order 22 p. 248 dated April 7, 1930. It was renamed as the Gulag in November of that year.
The hypothesis that economic considerations were responsible for mass arrests during the period of Stalinism has been refuted on the grounds of former Soviet archives that have become accessible since the 1990s, although some archival sources also tend to support an economic hypothesis. In any case, the development of the camp system followed economic lines. The growth of the camp system coincided with the peak of the Soviet industrialisation campaign. Most of the camps established to accommodate the masses of incoming prisoners were assigned distinct economic tasks. These included the exploitation of natural resources and the colonisation of remote areas, as well as the realisation of enormous infrastructural facilities and industrial construction projects. The plan to achieve these goals with "special settlements" instead of labor camps was dropped after the revealing of the Nazino affair in 1933; subsequently the Gulag system was expanded.
The 1931–32 archives indicate the Gulag had approximately 200,000 prisoners in the camps; while in 1935, approximately 800,000 were in camps and 300,000 in colonies (annual averages).
In the early 1930s, a tightening of Soviet penal policy caused significant growth of the prison camp population. During the Great Purge of 1937–38, mass arrests caused another increase in inmate numbers. Hundreds of thousands of persons were arrested and sentenced to long prison terms on the grounds of one of the multiple passages of the notorious Article 58 of the Criminal Codes of the Union republics, which defined punishment for various forms of "counterrevolutionary activities". Under NKVD Order No. 00447, tens of thousands of Gulag inmates were executed in 1937–38 for "continuing counterrevolutionary activities".
Between 1934 and 1941, the number of prisoners with higher education increased more than eight times, and the number of prisoners with high education increased five times. This resulted in their increased share in the overall composition of the camp prisoners. Among the camp prisoners, the number and share of the intelligentsia grew at the quickest pace. Distrust, hostility, and even hatred for the intelligentsia were common characteristics of the Soviet leaders. Information regarding imprisonment trends and consequences for the intelligentsia derives from the extrapolations of Viktor Zemskov from a collection of data on prison camp population movements.
The gulag was an administrative body that watched over the camps; eventually its name would be used for these camps retrospectively. After Lenin's death in 1924, Stalin was able to take control of the government and began to form the gulag system. On June 27, 1929 the Politburo created a system of self-supporting camps that would eventually replace the existing prisons around the country. These camps were meant to receive inmates whose prison sentences exceeded three years. Prisoners with sentences shorter than three years were to remain in the prison system, which was still under the purview of the NKVD. The purpose of the new camps was to colonise the remote and inhospitable environments throughout the Soviet Union. These changes took place around the same time that Stalin started to institute collectivisation and rapid industrial development. Collectivisation resulted in a large-scale purge of peasants and so-called Kulaks. The Kulaks were supposedly wealthy (compared with other Soviet peasants) and were considered capitalists by the state, and by extension enemies of socialism. The term would also become associated with anyone who opposed or even seemed unsatisfied with the Soviet government.
By late 1929 Stalin had begun a program known as "dekulakisation". Stalin demanded that the kulak class be completely wiped out, resulting in the imprisonment and execution of Soviet peasants. In a mere four months, 60,000 people were sent to the camps and another 154,000 exiled. This was only the beginning of the "dekulakisation" process, however. In 1931 alone 1,803,392 people were exiled.
Although these massive relocations succeeded in placing a large pool of potential forced labour where it was needed, that is about all they succeeded at. The "special settlers", as the Soviet government referred to them, lived on starvation-level rations, many starved to death in the camps, and anyone healthy enough to escape tried to do just that. The government was thus left issuing rations to a group of people from whom it was getting hardly any use, at a continuing cost to the state. The Unified State Political Administration (OGPU) quickly realised the problem and began to reform the "dekulakisation" process. To help prevent mass escapes, the OGPU recruited people within the colonies to stop those who attempted to leave, and set up ambushes around known escape routes. The OGPU also attempted to improve living conditions in these camps so that people would not be driven to escape, and Kulaks were promised that they would regain their rights after five years. Even these revisions ultimately failed to resolve the problem, and the "dekulakisation" process failed to provide the government with a steady forced labour force. Even so, these prisoners were comparatively fortunate to be in the gulag in the early 1930s: they were relatively well off compared with what prisoners would have to go through in the gulag's final years.
On the eve of World War II, Soviet archives indicate a combined camp and colony population upwards of 1.6 million in 1939, according to V. P. Kozlov. Anne Applebaum and Steven Rosefielde estimate that 1.2 to 1.5 million people were in the Gulag system's prison camps and colonies when the war started.
After the German invasion of Poland that marked the start of World War II in Europe, the Soviet Union invaded and annexed eastern parts of the Second Polish Republic. In 1940 the Soviet Union occupied Estonia, Latvia, Lithuania, Bessarabia (now the Republic of Moldova) and Bukovina. According to some estimates, hundreds of thousands of Polish citizens and inhabitants of the other annexed lands, regardless of their ethnic origin, were arrested and sent to the Gulag camps. However, according to the official data, the total number of sentences for political and anti-state (espionage, terrorism) crimes in USSR in 1939–41 was 211,106.
Approximately 300,000 Polish prisoners of war were captured by the USSR during and after the "Polish Defensive War". Almost all of the captured officers and a large number of ordinary soldiers were then murdered (see Katyn massacre) or sent to the Gulag. Of the 10,000–12,000 Poles sent to Kolyma in 1940–41, most of them prisoners of war, only 583 men survived; they were released in 1942 to join the Polish Armed Forces in the East. Of General Anders' 80,000 evacuees from the Soviet Union gathered in Great Britain, only 310 volunteered to return to Soviet-controlled Poland in 1947.
During the Great Patriotic War, Gulag populations declined sharply due to a steep rise in mortality in 1942–43. In the winter of 1941 a quarter of the Gulag's population died of starvation. 516,841 prisoners died in prison camps in 1941–43, from a combination of their harsh working conditions and the famine caused by the German invasion. This period accounts for about half of all gulag deaths, according to Russian statistics.
In 1943, the term "katorga works" () was reintroduced. They were initially intended for Nazi collaborators, but then other categories of political prisoners (for example, members of deported peoples who fled from exile) were also sentenced to "katorga works". Prisoners sentenced to "katorga works" were sent to Gulag prison camps with the most harsh regime and many of them perished.
Up until World War II, the Gulag system expanded dramatically to create a Soviet "camp economy". Right before the war, forced labour provided 46.5% of the nation's nickel, 76% of its tin, 40% of its cobalt, 40.5% of its chrome-iron ore, 60% of its gold, and 25.3% of its timber. And in preparation for war, the NKVD put up many more factories and built highways and railroads.
The Gulag quickly switched to production of arms and supplies for the army after fighting began. At first, transportation remained a priority. In 1940 the NKVD focused most of its energy on railroad construction. This would prove extremely important when the German advance into the Soviet Union started in 1941. In addition, factories converted to produce ammunition, uniforms, and other supplies. Moreover, the NKVD gathered skilled workers and specialists from throughout the Gulag into 380 special colonies which produced tanks, aircraft, armaments, and ammunition.
Despite its low capital costs, the camp economy suffered from serious flaws. For one, actual productivity almost never matched estimates: the estimates proved far too optimistic. In addition, scarcity of machinery and tools plagued the camps, and the tools that the camps did have quickly broke. The Eastern Siberian Trust of the Chief Administration of Camps for Highway Construction destroyed ninety-four trucks in just three years. But the greatest problem was simple – forced labour was less efficient than free labour. In fact, prisoners in the Gulag were, on average, half as productive as free labourers in the USSR at the time, which may be partially explained by malnutrition.
To make up for this disparity, the NKVD worked prisoners harder than ever. To meet rising demand, prisoners worked longer and longer hours, and on lower food-rations than ever before. A camp administrator said in a meeting: "There are cases when a prisoner is given only four or five hours out of twenty-four for rest, which significantly lowers his productivity." In the words of a former Gulag prisoner: "By the spring of 1942, the camp ceased to function. It was difficult to find people who were even able to gather firewood or bury the dead." The scarcity of food stemmed in part from the general strain on the entire Soviet Union, but also lack of central aid to the Gulag during the war. The central government focused all its attention on the military, and left the camps to their own devices. In 1942 the Gulag set up the Supply Administration to find their own food and industrial goods. During this time, not only did food become scarce, but the NKVD limited rations in an attempt to motivate the prisoners to work harder for more food, a policy that lasted until 1948.
In addition to food shortages, the Gulag suffered from labour scarcity at the beginning of the war. The Great Terror of 1936–1938 had provided a large supply of free labour, but by the start of World War II the purges had slowed down. In order to complete all of their projects, camp administrators moved prisoners from project to project. To improve the situation, laws were implemented in mid-1940 that allowed giving short camp sentences (4 months or a year) to those convicted of petty theft, hooliganism, or labour-discipline infractions. By January 1941 the Gulag workforce had increased by approximately 300,000 prisoners. But in 1942 serious food shortages began, and camp populations dropped again. The camps lost still more prisoners to the war effort. (The Soviet Union went into total war footing in June 1941.) Many labourers received early releases so that they could be drafted and sent to the front.
Even as the pool of workers shrank, demand for outputs continued to grow rapidly. As a result, the Soviet government pushed the Gulag to "do more with less". With fewer able-bodied workers and few supplies from outside the camp system, camp administrators had to find a way to maintain production. The solution they found was to push the remaining prisoners still harder. The NKVD employed a system of setting unrealistically high production goals, straining resources in an attempt to encourage higher productivity. As the Axis armies pushed into Soviet territory from June 1941 on, labour resources became further strained, and many of the camps had to evacuate out of Western Russia. From the beginning of the war to halfway through 1944, 40 camps were set up, and 69 were disbanded. During evacuations, machinery received priority, leaving prisoners to reach safety on foot. The speed of Operation Barbarossa's advance prevented the evacuation of all labourers in good time, and the NKVD massacred many to prevent them from falling into German hands. While this practice denied the Germans a source of free labour, it also further restricted the Gulag's capacity to keep up with the Red Army's demands. When the tide of the war turned, however, and the Soviets started pushing the Axis invaders back, fresh batches of labourers replenished the camps. As the Red Army recaptured territories from the Germans, an influx of Soviet ex-POWs greatly increased the Gulag population.
After World War II the number of inmates in prison camps and colonies, again, rose sharply, reaching approximately 2.5 million people by the early 1950s (about 1.7 million of whom were in camps).
When the war in Europe ended in May 1945, as many as two million former Russian citizens were forcefully repatriated into the USSR. On February 11, 1945, at the conclusion of the Yalta Conference, the United States and United Kingdom signed a Repatriation Agreement with the Soviet Union. One interpretation of this agreement resulted in the forcible repatriation of all Soviets. British and U.S. civilian authorities ordered their military forces in Europe to deport to the Soviet Union up to two million former residents of the Soviet Union, including persons who had left the Russian Empire and established different citizenship years before. The forced repatriation operations took place from 1945–47.
Multiple sources state that Soviet POWs, on their return to the Soviet Union, were treated as traitors (see Order No. 270). According to some sources, over 1.5 million surviving Red Army soldiers imprisoned by the Germans were sent to the Gulag. However, that is a confusion with two other types of camps. During and after World War II, freed POWs went to special "filtration" camps. Of these, by 1944, more than 90 percent were cleared, and about 8 percent were arrested or condemned to penal battalions. In 1944, they were sent directly to reserve military formations to be cleared by the NKVD. Further, in 1945, about 100 filtration camps were set up for repatriated Ostarbeiter, POWs, and other displaced persons, and these camps processed more than 4,000,000 people. By 1946, the major part of the population of these camps had been cleared by the NKVD and either sent home or conscripted (see table for details). 226,127 out of 1,539,475 POWs were transferred to the NKVD, i.e. the Gulag.
After Nazi Germany's defeat, ten NKVD-run "special camps" subordinate to the Gulag were set up in the Soviet Occupation Zone of post-war Germany. These "special camps" were former Stalags, prisons, or Nazi concentration camps such as Sachsenhausen (special camp number 7) and Buchenwald (special camp number 2). According to German government estimates "65,000 people died in those Soviet-run camps or in transportation to them." According to German researchers, Sachsenhausen, where 12,500 Soviet era victims have been uncovered, should be seen as an integral part of the Gulag system.
Yet the major reason for the post-war increase in the number of prisoners was the tightening of legislation on property offences in summer 1947 (at this time there was a famine in some parts of the Soviet Union, claiming about 1 million lives), which resulted in hundreds of thousands of convictions to lengthy prison terms, sometimes on the basis of cases of petty theft or embezzlement. At the beginning of 1953 the total number of prisoners in prison camps was more than 2.4 million of which more than 465,000 were political prisoners.
The state continued to maintain the extensive camp system for a while after Stalin's death in March 1953, although the period saw the grip of the camp authorities weaken, and a number of conflicts and uprisings occur ("see" Bitch Wars; Kengir uprising; Vorkuta uprising).
The amnesty in March 1953 was limited to non-political prisoners and for political prisoners sentenced to not more than 5 years, therefore mostly those convicted for common crimes were then freed. The release of political prisoners started in 1954 and became widespread, and also coupled with mass rehabilitations, after Nikita Khrushchev's denunciation of Stalinism in his Secret Speech at the 20th Congress of the CPSU in February 1956.
The "Gulag" institution was closed by the MVD order No 020 of January 25, 1960 but forced labor colonies for political and criminal prisoners continued to exist. Political prisoners continued to be kept in one of the most famous camps Perm-36 until 1987 when it was closed. (See also Foreign forced labor in the Soviet Union.)
The Russian penal system, despite reforms and a reduction in prison population, informally or formally continues many practices endemic to the "Gulag" system, including forced labour, inmates policing inmates, and prisoner intimidation.
In the late 2000s, some human rights activists accused authorities of gradual removal of Gulag remembrance from places such as Perm-36 and Solovki prison camp.
Prior to the dissolution of the Soviet Union, estimates of Gulag victims ranged from 2.3 to 17.6 million (see the History of Gulag population estimates section). Mortality in Gulag camps in 1934–40 was 4–6 times higher than the Soviet average. Post-1991 research by historians accessing archival materials brought this range down considerably. According to a 1993 study of archival Soviet data, a total of 1,053,829 people died in the Gulag from 1934 to 1953. However, taking into account that it was common practice to release prisoners who were suffering from incurable diseases or were near death, combined statistics on mortality "in the camps" and mortality "caused by the camps" give a probable figure of around 1.6 million. In contrast, Anatoly Vishnevsky estimated that the total number of those who died in imprisonment in 1930–53 was at least 1.76 million, about half of these deaths occurring in 1941–43 following the German invasion. If prisoner deaths in labour colonies and special settlements are included, the death toll rises to 2,749,163 according to J. Otto Pohl, although he stresses that this estimate is incomplete and does not cover all prisoner categories for every year.
In her recent study, Golfo Alexopoulos attempted to challenge this consensus figure by encompassing those whose lives were shortened due to GULAG conditions. Alexopoulos concluded from her research that a systematic practice of the Gulag was to release sick prisoners on the verge of death, and that all prisoners who received the health classification "invalid," "light physical labour," "light individualised labour," or "physically defective" (which, according to Alexopoulos, together encompassed at least one third of all inmates who passed through the Gulag) died, or had their lives shortened, as a result of detention in the Gulag, either in captivity or shortly after release. GULAG mortality estimated in this way yields a figure of 6 million deaths. Historian Orlando Figes and Russian writer Vadim Erlikman have posited similar estimates. The estimate of Alexopoulos, however, has obvious methodological difficulties and is supported by misinterpreted evidence, such as presuming that the hundreds of thousands of prisoners "directed to other places of detention" in 1948 was a euphemism for releasing prisoners on the verge of death into labour colonies, when it really referred to internal transport in the Gulag rather than release.
The tentative historical consensus among archival researchers and historians with access to such data is that, of the 18 million people who passed through the Gulag from 1930 to 1953, at least 1.5 to 1.7 million perished as a result of their detention, though some historians believe the actual death toll is "somewhat higher."
Certificates of death in the Gulag system for the period from 1930 to 1956.
Living and working conditions in the camps varied significantly across time and place, depending, among other things, on the impact of broader events (World War II, countrywide famines and shortages, waves of terror, sudden influx or release of large numbers of prisoners). However, to one degree or another, the large majority of prisoners at most times faced meagre food rations, inadequate clothing, overcrowding, poorly insulated housing, poor hygiene, and inadequate health care. Most prisoners were compelled to perform harsh physical labour. In most periods and economic branches, the degree of mechanisation of work processes was significantly lower than in civilian industry: tools were often primitive, and machinery, where it existed, was in short supply. Officially established work hours were in most periods longer, and days off fewer, than for civilian workers.
Andrei Vyshinsky, procurator of the Soviet Union, wrote a memorandum to NKVD chief Nikolai Yezhov in 1938 which stated:
Among the prisoners there are some so ragged and lice-ridden that they pose a sanitary danger to the rest. These prisoners have deteriorated to the point of losing any resemblance to human beings. Lacking food…they collect orts [refuse] and, according to some prisoners, eat rats and dogs.
In general, the central administrative bodies showed a discernible interest in maintaining the labour force of prisoners in a condition allowing the fulfilment of construction and production plans handed down from above. Besides a wide array of punishments for prisoners refusing to work (which, in practice, were sometimes applied to prisoners that were too enfeebled to meet production quota), they instituted a number of positive incentives intended to boost productivity. These included monetary bonuses (since the early 1930s) and wage payments (from 1950 onward), cuts of individual sentences, general early-release schemes for norm fulfilment and overfulfilment (until 1939, again in selected camps from 1946 onward), preferential treatment, and privileges for the most productive workers (shock workers or Stakhanovites in Soviet parlance).
A distinctive incentive scheme that included both coercive and motivational elements and was applied universally in all camps consisted of standardised "nourishment scales": the size of the inmates' ration depended on the percentage of the work quota delivered. Naftaly Frenkel is credited with the introduction of this policy. While it was effective in compelling many prisoners to work harder, for many it had the opposite effect, accelerating exhaustion and sometimes causing the death of prisoners unable to fulfil the high production quotas.
Immediately after the German attack on the Soviet Union in June 1941, conditions in the camps worsened drastically: quotas were increased, rations were cut, and medical supplies all but ran out, all of which led to a sharp increase in mortality. The situation slowly improved in the final period of the war and after its end.
Considering the overall conditions and their influence on inmates, it is important to distinguish three major strata of Gulag inmates:
A severe famine of 1931–1933 swept across many different regions in the Soviet Union. During this time, it is estimated that around six to seven million people starved to death. On 7 August 1932, a new edict drafted by Stalin specified a minimum sentence of ten years or execution for theft from collective farms or of cooperative property. Over the next few months, prosecutions rose fourfold. A large share of cases prosecuted under the law were for the theft of small quantities of grain worth less than fifty rubles. The law was later relaxed on 8 May 1933. Overall, during the first half of 1933, prisons saw more new incoming inmates than the three previous years combined.
Prisoners in the camps faced harsh working conditions. One Soviet report stated that, in early 1933, up to 15% of the prison population in Soviet Uzbekistan died monthly. During this time, prisoners were getting around worth of food a day. Many inmates attempted to flee, causing an upsurge in coercive and violent measures. Camps were directed "not to spare bullets". The bodies of inmates who tried to escape were commonly displayed in the courtyards of the camps, and the administrators would forcibly escort the inmates around the dead bodies as a message. By 1934, lack of food and outbreaks of disease had begun to destabilise the Gulag system, and it was not until the famine ended that the system started to stabilise.
The convicts in such camps were actively involved in all kinds of labour, one of them being logging ("lesopoval"). The logging work area formed a square surrounded by a forest clearing, so all attempts to exit or escape from it were easily observed from the four towers set at each of its corners.
Locals who captured a runaway were given rewards. It is also said that camps in colder areas were less concerned with finding escaped prisoners, as they would die anyway in the severely cold winters. In such cases, prisoners who did escape without being shot were often found dead kilometres away from the camp.
In the early days of Gulag, the locations for the camps were chosen primarily for the isolated conditions involved. Remote monasteries in particular were frequently reused as sites for new camps. The site on the Solovetsky Islands in the White Sea is one of the earliest and also most noteworthy, taking root soon after the Revolution in 1918. The colloquial name for the islands, "Solovki", entered the vernacular as a synonym for the labor camp in general. It was presented to the world as an example of the new Soviet method for "re-education of class enemies" and reintegrating them through labor into Soviet society. Initially the inmates, largely Russian intelligentsia, enjoyed relative freedom (within the natural confinement of the islands). Local newspapers and magazines were published and even some scientific research was carried out (e.g., a local botanical garden was maintained but unfortunately later lost completely). Eventually Solovki turned into an ordinary Gulag camp; in fact some historians maintain that it was a pilot camp of this type. In 1929 Maxim Gorky visited the camp and published an apology for it. The report of Gorky's trip to Solovki was included in the cycle of impressions titled "Po Soiuzu Sovetov," Part V, subtitled "Solovki." In the report, Gorky wrote that "camps such as 'Solovki' were absolutely necessary."
With the new emphasis on Gulag as the means of concentrating cheap labour, new camps were then constructed throughout the Soviet sphere of influence, wherever the economic task at hand dictated their existence (or was designed specifically to avail itself of them, such as the White Sea-Baltic Canal or the Baikal Amur Mainline), including facilities in big cities — parts of the famous Moscow Metro and the Moscow State University new campus were built by forced labor. Many more projects during the rapid industrialisation of the 1930s, war-time and post-war periods were fulfilled on the backs of convicts. The activity of Gulag camps spanned a wide cross-section of Soviet industry. In 1933 Gorky organised a trip of 120 writers and artists to the White Sea–Baltic Canal; 36 of them wrote a propaganda book about its construction, which was published in 1934 and destroyed in 1937.
The majority of Gulag camps were positioned in extremely remote areas of northeastern Siberia (the best known clusters are "Sevvostlag" ("The North-East Camps") along Kolyma river and "Norillag" near Norilsk) and in the southeastern parts of the Soviet Union, mainly in the steppes of Kazakhstan ("Luglag", "Steplag", "Peschanlag"). A very precise map was made by the Memorial Foundation. These were vast and sparsely inhabited regions with no roads (in fact, the construction of the roads themselves was assigned to the inmates of specialised railway camps) or sources of food, but rich in minerals and other natural resources (such as timber). However, camps were generally spread throughout the entire Soviet Union, including the European parts of Russia, Belarus, and Ukraine. There were several camps outside the Soviet Union, in Czechoslovakia, Hungary, Poland, and Mongolia, which were under the direct control of the Gulag.
Not all camps were fortified; some in Siberia were marked only by posts. Escape was deterred by the harsh elements, as well as tracking dogs that were assigned to each camp. While during the 1920s and 1930s native tribes often aided escapees, many of the tribes were also victimised by escaped thieves. Tantalised by large rewards as well, they began aiding authorities in the capture of Gulag inmates. Camp guards were given stern incentive to keep their inmates in line at all costs; if a prisoner escaped under a guard's watch, the guard would often be stripped of his uniform and become a Gulag inmate himself. Further, if an escaping prisoner was shot, guards could be fined amounts that were often equivalent to one or two weeks' wages.
In some cases, teams of inmates were dropped off in new territory with a limited supply of resources and left to set up a new camp or die. Sometimes it took several waves of colonists before any one group survived to establish the camp.
The area along the Indigirka river was known as "the Gulag inside the Gulag". In 1926, the Oimiakon (Оймякон) village in this region registered the record low temperature of −71.2 °C (−96 °F).
Under the supervision of Lavrenty Beria who headed both NKVD and the Soviet atom bomb program until his demise in 1953, thousands of "zeks" (Gulag inmates) were used to mine uranium ore and prepare test facilities on Novaya Zemlya, Vaygach Island, Semipalatinsk, among other sites.
Throughout the history of the Soviet Union, there were at least 476 separate camp administrations. The Russian researcher Galina Ivanova stated that,
to date, Russian historians have discovered and described 476 camps that existed at different times on the territory of the USSR. It is well known that practically every one of them had several branches, many of which were quite large. In addition to the large numbers of camps, there were no less than 2,000 colonies. It would be virtually impossible to reflect the entire mass of Gulag facilities on a map that would also account for the various times of their existence.
Since many of these existed only for short periods, the number of camp administrations at any given point was lower. It peaked in the early 1950s, when there were more than 100 camp administrations across the Soviet Union. Most camp administrations oversaw several single camp units, some as many as dozens or even hundreds. The infamous complexes were those at Kolyma, Norilsk, and Vorkuta, all in arctic or subarctic regions. However, prisoner mortality in Norilsk in most periods was actually lower than across the camp system as a whole.
According to historian Stephen Barnes, there exist four major ways of looking at the origins and functions of the Gulag:
Hannah Arendt argues that as part of a totalitarian system of government, the camps of the Gulag system were experiments in "total domination." In her view, the goal of a totalitarian system was not merely to establish limits on liberty, but rather to abolish liberty entirely in service of its ideology. She argues that the Gulag system was not merely political repression because the system survived and grew long after Stalin had wiped out all serious political resistance. Although the various camps were initially filled with criminals and political prisoners, eventually they were filled with prisoners who were arrested irrespective of anything relating to them as individuals, but rather only on the basis of their membership in some ever shifting category of imagined threats to the state.
She also argues that the function of the Gulag system was not truly economic. Although the Soviet government deemed them all "forced labour" camps, this in fact highlighted that the work in the camps was deliberately pointless, since all Russian workers could be subject to forced labour. The only real economic purpose they typically served was financing the cost of their own supervision. Otherwise the work performed was generally useless, either by design or made that way through extremely poor planning and execution; some workers even preferred more difficult work if it was actually productive. She differentiated between "authentic" forced-labour camps, concentration camps, and "annihilation camps". In authentic labour camps, inmates worked in "relative freedom and are sentenced for limited periods." Concentration camps had extremely high mortality rates but were still "essentially organised for labour purposes." Annihilation camps were those where the inmates were "systematically wiped out through starvation and neglect." She criticises other commentators' conclusion that the purpose of the camps was a supply of cheap labour. According to her, the Soviets were able to liquidate the camp system without serious economic consequences, showing that the camps were not an important source of labour and were overall economically irrelevant.
Arendt argues that together with the systematised, arbitrary cruelty inside the camps, this served the purpose of total domination by eliminating the idea that the arrestees had any political or legal rights. Morality was destroyed by maximising cruelty and by organising the camps internally to make the inmates and guards complicit. The terror resulting from operation of the Gulag system caused people outside of the camps to cut all ties with anyone who was arrested or purged and to avoid forming ties with others for fear of being associated with anyone who was targeted. As a result, the camps were essential as the nucleus of a system that destroyed individuality and dissolved all social bonds. Thereby, the system attempted to eliminate any capacity for resistance or self-directed action in the greater population.
Statistical reports made by the OGPU-NKVD-MGB-MVD between the 1930s and 1950s are kept in the State Archive of the Russian Federation, formerly called the Central State Archive of the October Revolution (CSAOR). These documents were highly classified and inaccessible. Amid glasnost and democratisation in the late 1980s, Viktor Zemskov and other Russian researchers managed to gain access to the documents and published the highly classified statistical data collected by the OGPU-NKVD-MGB-MVD relating to the number of Gulag prisoners, special settlers, etc. In 1995, Zemskov wrote that foreign scientists had been admitted to the restricted-access collection of these documents in the State Archive of the Russian Federation since 1992. However, only one historian, namely Zemskov, was admitted to these archives, and later the archives were again "closed", according to Leonid Lopatnikov.
While considering the reliability of the primary data provided by corrective labor institutions, two circumstances have to be taken into account. On the one hand, their administrations had no interest in understating the number of prisoners in their reports, because doing so would have automatically reduced the food supply plan for camps, prisons, and corrective labor colonies; less food would have raised mortality, which in turn would have wrecked the Gulag's vast production program. On the other hand, overstating the number of prisoners did not comply with departmental interests either, because it would have entailed an equally impossible increase in the production tasks set by planning bodies. In those days, people were held strictly responsible for non-fulfilment of the plan. It seems that the resultant of these objective departmental interests was a sufficient degree of reliability in the reports.
Between 1990 and 1992, the first precise statistical data on the Gulag, based on the Gulag archives, were published by Viktor Zemskov. These have generally been accepted by leading Western scholars, despite a number of inconsistencies found in these statistics. Not all of the conclusions Zemskov drew from his data have been generally accepted, however. Sergei Maksudov argued that although literary sources, for example the books of Lev Razgon or Aleksandr Solzhenitsyn, did not grasp the total number of the camps well and markedly exaggerated their size, Viktor Zemskov, who published many documents of the NKVD and KGB, was far from understanding the essence of the Gulag and the nature of socio-political processes in the country. He added that, without distinguishing the degree of accuracy and reliability of particular figures, without making a critical analysis of sources, and without comparing new data with already known information, Zemskov absolutises the published materials by presenting them as the ultimate truth. As a result, Maksudov charges, Zemskov's attempts to make generalised statements with reference to a particular document do not, as a rule, hold water.
In response, Zemskov wrote that the charge that he did not compare new data with already known information could not be called fair. In his words, the trouble with most Western authors is that they do not benefit from such comparisons. Zemskov added that when he tried not to overuse the juxtaposition of new information with the "old", it was only out of a sense of delicacy, so as not to traumatise once again the researchers whose works had used incorrect figures, as became clear after the publication of the OGPU-NKVD-MGB-MVD statistics.
According to French historian Nicolas Werth, the mountains of material in the Gulag archives, which are stored in the funds of the State Archive of the Russian Federation and have been steadily brought to light over the last fifteen years, represent only a very small part of the bureaucratic prose of immense size left over the decades of "creativity" by the "dull and reptile" organisation managing the Gulag. In many cases, local camp archives, which had been stored in sheds, barracks, or other rapidly disintegrating buildings, simply disappeared, in the same way as most of the camp buildings did.
In 2004 and 2005, some archival documents were published in the edition "Istoriya Stalinskogo Gulaga. Konets 1920-kh — Pervaya Polovina 1950-kh Godov. Sobranie Dokumentov v 7 Tomakh" ("The History of Stalin's Gulag. From the Late 1920s to the First Half of the 1950s. Collection of Documents in Seven Volumes"), wherein each of its seven volumes covered a particular issue indicated in the title of the volume:
The edition contains the brief introductions by the two "patriarchs of the Gulag science", Robert Conquest and Aleksandr Solzhenitsyn, and 1431 documents, the overwhelming majority of which were obtained from funds of the State Archive of the Russian Federation.
During the decades before the dissolution of the USSR, debates about the population size of the GULAG failed to arrive at generally accepted figures; wide-ranging estimates were offered, and the bias toward the higher or lower side was sometimes ascribed to the political views of the particular author. Some of those earlier estimates (both high and low) are shown in the table below.
The glasnost political reforms in the late 1980s and the subsequent dissolution of the USSR led to the release of a large amount of formerly classified archival documents, including new demographic and NKVD data. Analysis of the official GULAG statistics by Western scholars immediately demonstrated that, despite their inconsistency, they do not support previously published higher estimates. Importantly, the released documents made it possible to clarify the terminology used to describe different categories of the forced labour population, because the interchangeable use of the terms "forced labour", "GULAG", and "camps" by early researchers had led to significant confusion and significant inconsistencies in the earlier estimates. Archival studies revealed several components of the NKVD penal system in the Stalinist USSR: prisons, labour camps, labour colonies, various "settlements" (exile), and non-custodial forced labour. Although most of them fit the definition of forced labour, only labour camps and labour colonies were associated with punitive forced labour in detention. Forced labour camps ("GULAG camps") were hard-regime camps whose inmates were serving terms of more than three years. As a rule, they were situated in remote parts of the USSR, and labour conditions there were extremely hard; they formed the core of the GULAG system. The inmates of "corrective labour colonies" served shorter terms; these colonies were located in less remote parts of the USSR and were run by local NKVD administrations. Preliminary analysis of the GULAG camp and colony statistics (see the chart on the right) demonstrated that the population reached its maximum before World War II, then dropped sharply, partly owing to massive releases and partly owing to high wartime mortality, and then gradually increased until the end of the Stalin era, reaching its overall maximum in 1953, when the combined population of GULAG camps and labour colonies amounted to 2,625,000.
The results of these archival studies convinced many scholars, including Robert Conquest and Stephen Wheatcroft, to reconsider their earlier estimates of the size of the GULAG population, although the "high numbers" of arrests and deaths are not radically different from earlier estimates. Scholars such as Rosefielde and Vishnevsky point to several inconsistencies in the archival data; Rosefielde, for instance, notes that the archival figure of 1,196,369 for the combined population of the Gulag and labor colonies on December 31, 1936 is less than half the 2.75 million labor camp population given to the Census Board by the NKVD for the 1937 census. Conquest, for his part, cited Beria's report to the Politburo stating that there were almost 7 million prisoners in the labor camps at the end of 1938, more than three times the archival figure for that year, as well as an official report to Stalin by the Soviet Minister of State Security in 1952 stating that there were 12 million prisoners in the labor camps. Nevertheless, it is generally believed that the archival data provide more reliable and detailed information than the indirect data and literary sources available to scholars during the Cold War era.
These data allowed scholars to conclude that during the period 1928–53, about 14 million prisoners passed through the system of GULAG "labour camps" and 4–5 million passed through the "labour colonies". These figures, however, reflect the number of convictions rather than of individuals, and do not take into account the fact that a significant share of Gulag inmates were convicted more than once, so these statistics somewhat overstate the actual number of people imprisoned. On the other hand, during some periods of Gulag history the official figures for the GULAG population reflected the camps' capacity rather than the actual number of inmates, so the actual figures were about 15% higher in, for example, 1946.
The Gulag spanned nearly four decades of Soviet and East European history and affected millions of individuals. Its cultural impact was enormous.
The Gulag has become a major influence on contemporary Russian thinking, and an important part of modern Russian folklore. Many songs by the author-performers known as the "bards", most notably Vladimir Vysotsky and Alexander Galich, neither of whom ever served time in the camps, described life inside the Gulag and glorified the life of "zeks". Words and phrases which originated in the labor camps became part of the Russian/Soviet vernacular in the 1960s and 1970s.
The memoirs of Alexander Dolgun, Aleksandr Solzhenitsyn, Varlam Shalamov and Yevgenia Ginzburg, among others, became a symbol of defiance in Soviet society. These writings harshly chastised the Soviet people for their tolerance and apathy regarding the Gulag, but at the same time provided a testament to the courage and resolve of those who were imprisoned.
Another cultural phenomenon in the Soviet Union linked with the Gulag was the forced migration of many artists and other people of culture to Siberia. This resulted in a Renaissance of sorts in places like Magadan, where, for example, the quality of theatre production was comparable to Moscow's and Eddie Rosner played jazz.
Many eyewitness accounts of Gulag prisoners have been published:
Soviet documents show that the goals of the Gulag included the colonisation of sparsely populated remote areas. To this end, the notion of "free settlement" was introduced.
When well-behaved prisoners had served the majority of their terms, they could be released for "free settlement" (вольное поселение, "volnoye poseleniye") outside the confinement of the camp. They were known as "free settlers", not to be confused with "exile settlers". In addition, persons who had served their full term but were denied the free choice of place of residence were recommended for assignment to "free settlement" and given land in the general vicinity of the place of confinement.
The gulag inherited this approach from the katorga system.
It is estimated that of the 40,000 people collecting state pensions in Vorkuta, 32,000 are trapped former gulag inmates, or their descendants.
Persons who had served a term in a camp or prison were restricted from taking a wide range of jobs. Concealment of a previous imprisonment was a triable offence. Persons who served terms as "politicals" were nuisances for the "First Departments" (the outlets of the secret police at all enterprises and institutions), because former "politicals" had to be monitored.
Many people who were released from camps were restricted from settling in larger cities.
Both Moscow and St. Petersburg have memorials to the victims of the Gulag made of boulders from the Solovki camp — the first prison camp in the Gulag system. Moscow's memorial is on Lubyanka Square, the site of the headquarters of the NKVD. People gather at these memorials every year on the Day of Victims of the Repression (October 30).
Moscow has the State Gulag Museum whose first director was Anton Antonov-Ovseyenko. In 2015, another museum dedicated to the Gulag was opened in Moscow.
|
https://en.wikipedia.org/wiki?curid=12980
|
Geiger counter
A Geiger counter is an instrument used for detecting and measuring ionizing radiation. Also known as a Geiger–Muller counter (or Geiger–Müller counter), it is widely used in applications such as radiation dosimetry, radiological protection, experimental physics, and the nuclear industry.
It detects ionizing radiation such as alpha particles, beta particles, and gamma rays using the ionization effect produced in a Geiger–Müller tube, which gives its name to the instrument. In wide and prominent use as a hand-held radiation survey instrument, it is perhaps one of the world's best-known radiation detection instruments.
The original detection principle was realized in 1908, at the Victoria University of Manchester, but it was not until the development of the Geiger–Müller tube in 1928 that the Geiger counter could be produced as a practical instrument. Since then, it has been very popular due to its robust sensing element and relatively low cost. However, there are limitations in measuring high radiation rates and the energy of incident radiation.
A Geiger counter consists of a Geiger–Müller tube (the sensing element which detects the radiation) and the processing electronics, which displays the result.
The Geiger–Müller tube is filled with an inert gas such as helium, neon, or argon at low pressure, to which a high voltage is applied. The tube briefly conducts electrical charge when a particle or photon of incident radiation makes the gas conductive by ionization. The ionization is considerably amplified within the tube by the Townsend discharge effect to produce an easily measured detection pulse, which is fed to the processing and display electronics. This large pulse from the tube makes the Geiger counter relatively cheap to manufacture, as the subsequent electronics are greatly simplified. The electronics also generate the high voltage, typically 400–900 volts, that has to be applied to the Geiger–Müller tube to enable its operation. To stop the discharge in the Geiger–Müller tube a little halogen gas or organic material (alcohol) is added to the gas mixture.
There are two types of detected radiation readout: counts or radiation dose. The counts display is the simplest and is the number of ionizing events detected displayed either as a count rate, such as "counts per minute" or "counts per second", or as a total number of counts over a set time period (an integrated total). The counts readout is normally used when alpha or beta particles are being detected. More complex to achieve is a display of radiation dose rate, displayed in a unit such as the sievert which is normally used for measuring gamma or X-ray dose rates. A Geiger–Müller tube can detect the presence of radiation, but not its energy, which influences the radiation's ionizing effect. Consequently, instruments measuring dose rate require the use of an energy compensated Geiger–Müller tube, so that the dose displayed relates to the counts detected. The electronics will apply known factors to make this conversion, which is specific to each instrument and is determined by design and calibration.
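To make that conversion concrete, the following is a minimal sketch of how accumulated counts might be turned into an indicated dose rate. The calibration factor used here is a purely hypothetical, assumed value; the real factor is specific to each instrument and determined by its design and calibration, as described above.

```python
# Minimal sketch (assumed values, not a real instrument's calibration):
# converting counts observed over an interval into an indicated dose rate,
# as an energy-compensated instrument's electronics might do.

CPM_PER_MICROSIEVERT_PER_HOUR = 120.0  # hypothetical calibration factor

def dose_rate_usv_per_h(counts: int, interval_s: float) -> float:
    """Return the indicated dose rate in microsieverts per hour."""
    counts_per_minute = counts * 60.0 / interval_s
    return counts_per_minute / CPM_PER_MICROSIEVERT_PER_HOUR

# Example: 300 counts accumulated over a 60-second integration window.
print(f"{dose_rate_usv_per_h(300, 60.0):.2f} uSv/h")  # -> 2.50 uSv/h
```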
The readout can be analog or digital, and modern instruments offer serial communications with a host computer or network.
There is usually an option to produce audible clicks representing the number of ionization events detected. This is the distinctive sound normally associated with handheld or portable Geiger counters. The purpose of this is to allow the user to concentrate on manipulating the instrument while retaining auditory feedback on the radiation rate.
There are two main limitations of the Geiger counter. Because the output pulse from a Geiger–Müller tube is always of the same magnitude (regardless of the energy of the incident radiation), the tube cannot differentiate between radiation types. Secondly, it cannot measure high radiation rates accurately because of the "dead time" of the tube. This is an insensitive period after each ionization of the gas during which any further incident radiation will not result in a count, so the indicated rate is lower than the actual one. Typically the dead time will reduce indicated count rates above about 10^4 to 10^5 counts per second, depending on the characteristics of the tube being used. While some counters have circuitry which can compensate for this, ion chamber instruments are preferred for accurate measurements at high radiation rates.
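As an illustration of the dead-time limitation, a widely used non-paralyzable correction estimates the true rate as n = m / (1 − mτ), where m is the measured rate and τ the dead time. The sketch below uses assumed values for both, not figures from any particular tube's datasheet.

```python
# Minimal sketch of the non-paralyzable dead-time correction
# n = m / (1 - m * tau): m is the measured count rate (counts per second)
# and tau is the tube's dead time in seconds. Values are assumed.

def true_count_rate(measured_cps: float, dead_time_s: float) -> float:
    """Estimate the true event rate from the measured count rate."""
    insensitive_fraction = measured_cps * dead_time_s
    if insensitive_fraction >= 1.0:
        raise ValueError("measured rate too high for this correction model")
    return measured_cps / (1.0 - insensitive_fraction)

tau = 100e-6  # hypothetical dead time of 100 microseconds
for m in (1_000, 5_000, 9_000):
    print(f"measured {m} cps -> estimated true rate {true_count_rate(m, tau):.0f} cps")
```

The correction grows rapidly as the measured rate approaches 1/τ, which is why ion chambers are preferred at very high rates.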
The intended detection application of a Geiger counter dictates the tube design used. Consequently, there are a great many designs, but they can be generally categorized as "end-window", windowless "thin-walled", "thick-walled", and sometimes hybrids of these types.
The first historical uses of the Geiger principle were for the detection of alpha and beta particles, and the instrument is still used for this purpose today. For alpha particles and low energy beta particles, the "end-window" type of Geiger–Müller tube has to be used, as these particles have a limited range and are easily stopped by a solid material. Therefore, the tube requires a window which is thin enough to allow as many as possible of these particles through to the fill gas. The window is usually made of mica with a density of about 1.5–2.0 mg/cm².
Alpha particles have the shortest range, and to detect these the window should ideally be within 10 mm of the radiation source due to alpha particle attenuation. However, the Geiger–Müller tube produces a pulse output which is the same magnitude for all detected radiation, so a Geiger counter with an end window tube cannot distinguish between alpha and beta particles. A skilled operator can use varying distance from a radiation source to differentiate between alpha and high energy beta particles.
The "pancake" Geiger–Müller tube is a variant of the end-window probe, but designed with a larger detection area to make checking quicker. However, the pressure of the atmosphere against the low pressure of the fill gas limits the window size due to the limited strength of the window membrane.
Some beta particles can also be detected by a thin-walled "windowless" Geiger–Müller tube, which has no end-window, but allows high energy beta particles to pass through the tube walls. Although the tube walls have a greater stopping power than a thin end-window, they still allow these more energetic particles to reach the fill gas.
End-window Geiger counters are still used as a general purpose, portable, radioactive contamination measurement and detection instrument, owing to their relatively low cost, robustness and their relatively high detection efficiency; particularly with high energy beta particles. However, for discrimination between alpha and beta particles or provision of particle energy information, scintillation counters or proportional counters should be used. Those instrument types are manufactured with much larger detector areas, which means that checking for surface contamination is quicker than with a Geiger counter.
Geiger counters are widely used to detect gamma radiation and X-rays, collectively known as photons, and for this the windowless tube is used. However, detection efficiency is low compared to that for alpha and beta particles.
The article on the Geiger–Müller tube carries a more detailed account of the techniques used to detect photon radiation. For high energy photons the tube relies on the interaction of the radiation with the tube wall, usually a high Z material such as chrome steel of 1–2 mm thickness to produce electrons within the tube wall. These enter and ionize the fill gas.
This is necessary as the low-pressure gas in the tube has little interaction with higher energy photons. However, as photon energies decrease to low levels, there is greater gas interaction and the share of direct gas interaction increases. At very low energies (less than 25 keV), direct gas ionisation dominates and a steel tube attenuates the incident photons. Consequently, at these energies, a typical tube design is a long tube with a thin wall, which has a larger gas volume to give an increased chance of direct interaction of a particle with the fill gas.
Above these low energy levels, there is a considerable variance in response to different photon energies of the same intensity, and a steel-walled tube employs what is known as "energy compensation" in the form of filter rings around the naked tube which attempts to compensate for these variations over a large energy range. A chrome steel G-M tube is about 1% efficient over a wide range of energies.
A variation of the Geiger tube is used to measure neutrons, where the gas used is boron trifluoride or helium-3 and a plastic moderator is used to slow the neutrons. This creates an alpha particle inside the detector and thus neutrons can be counted.
The term "Geiger counter" is commonly used to mean a hand-held survey type meter, however the Geiger principle is in wide use in installed "area gamma" alarms for personnel protection, and in process measurement and interlock applications.
A Geiger tube is still the sensing device, but the processing electronics will have a higher degree of sophistication and reliability than that used in a hand held survey meter.
For hand-held units there are two fundamental physical configurations: the "integral" unit with both detector and electronics in the same unit, and the "two-piece" design which has a separate detector probe and an electronics module connected by a short cable.
In the 1930s a mica window was added to the cylindrical design allowing low-penetration radiation to pass through with ease.
The integral unit allows single-handed operation, so the operator can use the other hand for personal security in challenging monitoring positions, but the two piece design allows easier manipulation of the detector, and is commonly used for alpha and beta surface contamination monitoring where careful manipulation of the probe is required or the weight of the electronics module would make operation unwieldy. A number of different sized detectors are available to suit particular situations, such as placing the probe in small apertures or confined spaces.
Gamma and X-Ray detectors generally use an "integral" design so the Geiger–Müller tube is conveniently within the electronics enclosure. This can easily be achieved because the casing usually has little attenuation, and is employed in ambient gamma measurements where distance from the source of radiation is not a significant factor. However, to facilitate more localised measurements such as "surface dose", the position of the tube in the enclosure is sometimes indicated by targets on the enclosure so an accurate measurement can be made with the tube at the correct orientation and a known distance from the surface.
There is a particular type of gamma instrument known as a "hot spot" detector which has the detector tube on the end of a long pole or flexible conduit. These are used to measure high radiation gamma locations whilst protecting the operator by means of distance shielding.
Particle detection of alpha and beta can be used in both integral and two-piece designs. A pancake probe (for alpha/beta) is generally used to increase the area of detection in two-piece instruments whilst being relatively light weight. In integral instruments using an end window tube there is a window in the body of the casing to prevent shielding of particles. There are also hybrid instruments which have a separate probe for particle detection and a gamma detection tube within the electronics module. The detectors are switchable by the operator, depending on the radiation type that is being measured.
In the United Kingdom the National Radiological Protection Board issued a user guidance note on selecting the best portable instrument type for the radiation measurement application concerned. This covers all radiation protection instrument technologies and includes a guide to the use of G-M detectors.
In 1908 Hans Geiger, under the supervision of Ernest Rutherford at the Victoria University of Manchester (now the University of Manchester), developed an experimental technique for detecting alpha particles that would later be used to develop the Geiger–Müller tube in 1928. This early counter was only capable of detecting alpha particles and was part of a larger experimental apparatus. The fundamental ionization mechanism used was discovered by John Sealy Townsend between 1897 and 1901, and is known as the Townsend discharge, which is the ionization of molecules by ion impact.
It was not until 1928 that Geiger and Walther Müller (a PhD student of Geiger) developed the sealed Geiger–Müller tube which used basic ionization principles previously used experimentally. Small and rugged, not only could it detect alpha and beta radiation as prior models had done, but also gamma radiation.
|
https://en.wikipedia.org/wiki?curid=12984
|
General Synod
The General Synod is the title of the governing body of some church organizations.
In the Church of England, the General Synod, which was established in 1970 (replacing the Church Assembly), is the legislative body of the Church.
In the Episcopal Church in the United States of America, the equivalent is General Convention.
General Synods of other churches within the Anglican Communion
The United Church of Christ based in the United States also calls their main governing body a General Synod. It meets every two years and consists of over 600 delegates from various congregations and conferences.
The National Baptist Churches of the USA calls their main governing body a General Synod. It meets annually setting the theological and missional direction for the denomi-network.
The Associate Reformed Presbyterian Church has as its highest Church court the General Synod. The ARP General Synod meets yearly (in recent years, it has, almost without exception, been held at Bonclarken). The delegates to the General Synod of the ARP Church are the elder representatives elected from each church's Session and all ministers from all presbyteries that comprise the Church (excluding ministers and elders from the independent ARP Synods of Mexico and Pakistan).
The Evangelical Church of Augsburg and Helvetic Confession in Austria and the United Evangelical Lutheran Church of Germany each call their main legislative bodies Generalsynode. In the Evangelical Church in Prussia the legislating body was called Generalsynode between 1846 and 1953.
The governing body of the Reformed Church in America, a Calvinist denomination in the United States and Canada, is known as the General Synod.
"Kirkemøtet", the governing body of the Church of Norway is normally translated to General Synod. It convenes once a year, and consists of 85 representatives, of whom seven or eight are sent from each of the dioceses.
The Batak Christian Protestant Church (BPCP), or "Huria Kristen Batak Protestan" (abbreviated HKBP), held a twice-yearly General Synod (Sinode Godang) to discuss matters in the HKBP and to elect the new "Ephorus" (or board) for the HKBP. The first General Synod of the HKBP was held in 1922.
In the North American Lutheran tradition, General Synod refers to a church body which existed from 1820–1918. See Evangelical Lutheran General Synod of the United States of North America.
|
https://en.wikipedia.org/wiki?curid=12985
|
Gerrymandering
Gerrymandering is a practice intended to establish an unfair political advantage for a particular party or group by manipulating district boundaries; it is most commonly used in first-past-the-post electoral systems.
Two principal tactics are used in gerrymandering: "cracking" (i.e. diluting the voting power of the opposing party's supporters across many districts) and "packing" (concentrating the opposing party's voting power in one district to reduce their voting power in other districts). A third tactic, shown in the top-left diagram in the graphic, is homogenization of all districts (essentially a form of cracking where the majority party uses its superior numbers to guarantee the minority party never attains a majority in any district).
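As a hypothetical illustration of these two tactics (the vote totals and district plans below are invented, not drawn from any real election), the following sketch shows how the same set of voters can yield different seat totals depending on whether the minority party's supporters are packed into one district or cracked across all of them.

```python
# Toy illustration with invented numbers: 50 party-A voters and 40 party-B
# voters split into three 30-voter districts in two different ways, showing
# how packing and cracking B's supporters changes the seat outcome.

def seats(districts):
    """Each district is (A votes, B votes); return (A seats, B seats)."""
    a_seats = sum(1 for a, b in districts if a > b)
    b_seats = sum(1 for a, b in districts if b > a)
    return a_seats, b_seats

# Plan 1: B's voters are packed into one district they win overwhelmingly.
packed = [(5, 25), (22, 8), (23, 7)]
# Plan 2: B's voters are cracked evenly and never reach a majority anywhere.
cracked = [(17, 13), (17, 13), (16, 14)]

print("packed plan ->", seats(packed))    # (2, 1): A wins two of three seats
print("cracked plan ->", seats(cracked))  # (3, 0): A wins every seat
```

In both plans party A holds roughly 56% of the vote, yet the choice of boundaries alone decides whether party B holds one seat or none.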
In addition to its use in achieving desired electoral results for a particular party, gerrymandering may be used to help or hinder a particular demographic, such as a political, ethnic, racial, linguistic, religious, or class group, as in Northern Ireland, where boundaries were constructed to guarantee Protestant Unionist majorities. The U.S. federal voting district boundaries that produce a majority of constituents representative of African-American or other racial minorities are known as "majority-minority districts". Gerrymandering can also be used to protect incumbents. Wayne Dawkins describes it as politicians picking their voters instead of voters picking their politicians.
The term "gerrymandering" is named after Elbridge Gerry (pronounced like "Gary"), who, as Governor of Massachusetts in 1812, signed a bill that created a partisan district in the Boston area that was compared to the shape of a mythological salamander. The term has negative connotations and gerrymandering is almost always considered a corruption of the democratic process. The resulting district is known as a "gerrymander" (). The word is also a verb for the process.
The word gerrymander (originally written Gerry-mander) was used for the first time in the Boston Gazette (not to be confused with the original "Boston Gazette") on 26 March 1812. The word was created in reaction to a redrawing of Massachusetts state senate election districts under Governor Elbridge Gerry. In 1812, Gerry signed a bill that redistricted Massachusetts to benefit his Democratic-Republican Party. When mapped, one of the contorted districts in the Boston area was said to resemble the shape of a mythological salamander.
The original gerrymander, and original 1812 gerrymander cartoon, depict the Essex South state senatorial district for the legislature of The Commonwealth of Massachusetts.
Gerrymander is a portmanteau of the governor's last name and the word "salamander".
The redistricting was a notable success for Gerry's Democratic-Republican Party. Although in the 1812 election both the Massachusetts House and the governorship were won comfortably by Federalists, costing Gerry his job, the redistricted state Senate remained firmly in Democratic-Republican hands.
The author of the term gerrymander may never be definitively established. Historians widely believe that the Federalist newspaper editors Nathan Hale, and Benjamin and John Russell coined the term, but the historical record does not have definitive evidence as to who created or uttered the word for the first time.
Appearing with the term, and helping spread and sustain its popularity, was a political cartoon depicting a strange animal with claws, wings and a dragon-like head satirizing the map of the oddly shaped district. This cartoon was most likely drawn by Elkanah Tisdale, an early 19th-century painter, designer, and engraver who was living in Boston at the time. Tisdale had the engraving skills to cut the woodblocks to print the original cartoon. These woodblocks survive and are preserved in the Library of Congress.
The word "gerrymander" was reprinted numerous times in Federalist newspapers in Massachusetts, New England, and nationwide during the remainder of 1812. This suggests some organized activity of the Federalists to disparage Governor Gerry in particular, and the growing Democratic-Republican party in general. "Gerrymandering" soon began to be used to describe not only the original Massachusetts example, but also other cases of district shape manipulation for partisan gain in other states. According to the "Oxford English Dictionary," the word's acceptance was marked by its publication in a dictionary (1848) and in an encyclopedia (1868). Since the letter "g" of the eponymous "Gerry" is pronounced with a hard g as in "get", the word "gerrymander" was originally pronounced . However, pronunciation as , with a soft g as in "gentle," has become the accepted pronunciation.
From time to time, other names are given the "-mander" suffix to tie a particular effort to a particular politician or group. These include the 1852 "Henry-mandering", "Jerrymander" (referring to California Governor Jerry Brown), "Perrymander" (a reference to Texas Governor Rick Perry), and "Tullymander" (after the Irish politician James Tully).
The primary goals of gerrymandering are to maximize the effect of supporters' votes and to minimize the effect of opponents' votes. A partisan gerrymander's main purpose is to influence not only the districting statute but the entire corpus of legislative decisions enacted in its path.
These goals can be accomplished in a number of ways:
These tactics are typically combined in some form, creating a few "forfeit" seats for packed voters of one type in order to secure more seats and greater representation for voters of another type. This results in candidates of one party (the one responsible for the gerrymandering) winning by small majorities in most of the districts, and another party winning by a large majority in only a few of the districts.
Gerrymandering is effective because of the wasted vote effect. "Wasted votes" are votes that did not contribute to electing a candidate, either because they were in excess of the bare minimum needed for victory or because the candidate lost. By moving geographic boundaries, the incumbent party packs opposition voters into a few districts they will already win, wasting the extra votes. Other districts are more tightly constructed with the opposition party allowed a bare minority count, thereby wasting all the minority votes for the losing candidate. These districts constitute the majority of districts and are drawn to produce a result favoring the incumbent party.
A quantitative measure of the effect of gerrymandering is the efficiency gap, computed from the difference in the wasted votes for two different political parties summed over all the districts. Citing in part an efficiency gap of 11.69% to 13%, a U.S. District Court in 2016 ruled against the 2011 drawing of Wisconsin legislative districts. In the 2012 election for the state legislature, that gap in wasted votes meant that one party had 48.6% of the two-party votes but won 61% of the 99 districts.
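A minimal sketch of that calculation, using invented district totals rather than the Wisconsin figures, is shown below: wasted votes are all votes cast for a losing candidate plus the winner's votes beyond the bare majority needed to win, and the efficiency gap is the difference between the two parties' wasted votes divided by the total votes cast.

```python
# Minimal sketch of the efficiency-gap calculation with invented district
# totals (not the Wisconsin data). Wasted votes: all votes for the losing
# candidate, plus the winner's votes above the bare majority needed to win.

def efficiency_gap(districts):
    """districts: list of (party_a_votes, party_b_votes); no ties assumed."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        total += a + b
        needed = (a + b) // 2 + 1          # bare majority in this district
        if a > b:
            wasted_a += a - needed         # A's surplus votes
            wasted_b += b                  # all of B's votes
        else:
            wasted_b += b - needed
            wasted_a += a
    return (wasted_a - wasted_b) / total   # positive values favour party B

# Hypothetical four-district plan: A wins 51.25% of votes but 1 of 4 seats.
plan = [(70, 30), (45, 55), (45, 55), (45, 55)]
print(f"efficiency gap: {efficiency_gap(plan):+.1%}")  # -> +28.0%
```

Under the sign convention used in this sketch, a large positive gap indicates a plan that disadvantages party A, the party whose votes are disproportionately wasted.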
While the wasted vote effect is strongest when a party wins by narrow margins across multiple districts, gerrymandering narrow margins can be risky when voters are less predictable. To minimize the risk of demographic or political shifts swinging a district to the opposition, politicians can create more packed districts, leading to more comfortable margins in unpacked ones.
Some political science research suggests that, contrary to common belief, gerrymandering does not decrease electoral competition, and can even increase it. Some say that, rather than packing the voters of their party into uncompetitive districts, party leaders tend to prefer to spread their party's voters into multiple districts, so that their party can win a larger number of races. (See scenario (c) in the box.) This may lead to increased competition. Instead of gerrymandering, some researchers find that other factors, such as partisan polarization and the incumbency advantage, have driven the recent decreases in electoral competition. Similarly, a 2009 study found that "congressional polarization is primarily a function of the differences in how Democrats and Republicans represent the same districts rather than a function of which districts each party represents or the distribution of constituency preferences."
These findings are, however, a matter of some dispute. While gerrymandering may not decrease electoral competition in all cases, there are certainly instances where gerrymandering does reduce such competition.
One state in which gerrymandering has arguably had an adverse effect on electoral competition is California. In 2000, a bipartisan redistricting effort redrew congressional district lines in ways that all but guaranteed incumbent victories; as a result, California saw only one congressional seat change hands between 2000 and 2010. In response to this obvious gerrymandering, a 2010 referendum in California gave the power to redraw congressional district lines to the California Citizens Redistricting Commission, which had been created to draw California State Senate and Assembly districts by another referendum in 2008. In stark contrast to the redistricting efforts that followed the 2000 census, the redistricting commission has created a number of the most competitive congressional districts in the country.
The effect of gerrymandering for incumbents is particularly advantageous, as incumbents are far more likely to be reelected under conditions of gerrymandering. For example, in 2002, according to political scientists Norman Ornstein and Thomas Mann, only four challengers were able to defeat incumbent members of the U.S. Congress, the lowest number in modern American history. Incumbents are likely to be of the majority party orchestrating a gerrymander, and incumbents are usually easily renominated in subsequent elections, including incumbents among the minority.
Mann, a Senior Fellow of Governance Studies at the Brookings Institution, has also noted that "Redistricting is a deeply political process, with incumbents actively seeking to minimize the risk to themselves (via bipartisan gerrymanders) or to gain additional seats for their party (via partisan gerrymanders)". The bipartisan gerrymandering that Mann mentions refers to the fact that legislators often also draw distorted legislative districts even when such redistricting does not provide an advantage to their party.
Gerrymandering of state legislative districts can effectively guarantee an incumbent's victory by 'shoring up' a district with higher levels of partisan support, without disproportionately benefiting a particular political party. This can be highly problematic from a governance perspective, because forming districts to ensure high levels of partisanship often leads to higher levels of partisanship in legislative bodies. If a substantial number of districts are designed to be polarized, then those districts' representation will also likely act in a heavily partisan manner, which can create and perpetuate partisan gridlock.
This demonstrates that gerrymandering can have a deleterious effect on the principle of democratic accountability. With uncompetitive seats/districts reducing the fear that incumbent politicians may lose office, they have less incentive to represent the interests of their constituents, even when those interests conform to majority support for an issue across the electorate as a whole. Incumbent politicians may look out more for their party's interests than for those of their constituents.
Gerrymandering can affect campaign costs for district elections. If districts become increasingly stretched out, candidates must pay increased costs for transportation and trying to develop and present campaign advertising across a district. The incumbent's advantage in securing campaign funds is another benefit of his or her having a gerrymandered secure seat.
Gerrymandering also has significant effects on the representation received by voters in gerrymandered districts. Because gerrymandering can be designed to increase the number of wasted votes among the electorate, the relative representation of particular groups can be drastically altered from their actual share of the voting population. This effect can significantly prevent a gerrymandered system from achieving proportional and descriptive representation, as the winners of elections are increasingly determined by who is drawing the districts rather than the preferences of the voters.
Gerrymandering may be advocated to improve representation within the legislature among otherwise underrepresented minority groups by packing them into a single district. This can be controversial, as it may lead to those groups' remaining marginalized in the government as they become confined to a single district. Candidates outside that district no longer need to represent them to win elections.
As an example, much of the redistricting conducted in the United States in the early 1990s involved the intentional creation of additional "majority-minority" districts where racial minorities such as African Americans were packed into the majority. This "maximization policy" drew support by both the Republican Party (who had limited support among African Americans and could concentrate their power elsewhere) and by minority representatives elected as Democrats from these constituencies, who then had safe seats.
The 2012 election provides a number of examples as to how partisan gerrymandering can adversely affect the descriptive function of states' congressional delegations. In Pennsylvania, for example, Democratic candidates for the House of Representatives received 83,000 more votes than Republican candidates, yet the Republican-controlled redistricting process in 2010 resulted in Democrats losing to their Republican counterparts in 13 out of Pennsylvania's 18 districts.
In the seven states where Republicans had complete control over the redistricting process, Republican House candidates received 16.7 million votes and Democratic House candidates received 16.4 million votes. The redistricting resulted in Republican victories in 73 out of the 107 affected seats; in those 7 states, Republicans received 50.4% of the votes but won in over 68% of the congressional districts. While it is but one example of how gerrymandering can have a significant effect on election outcomes, this kind of disproportional representation of the public will seems to be problematic for the legitimacy of democratic systems, regardless of one's political affiliation.
In Michigan, redistricting was carried out by a Republican-controlled legislature in 2011. Federal congressional districts were designed so that cities such as Battle Creek, Grand Rapids, Jackson, Kalamazoo, Lansing, and East Lansing were attached to districts with large conservative-leaning hinterlands, essentially diluting the Democratic votes of those cities in congressional elections. Since 2010, none of those cities has been within a district in which a Democratic nominee for the House of Representatives has a reasonable chance of winning, short of a Democratic landslide.
Gerrymandering can also be done to help incumbents as a whole, effectively turning every district into a packed one and greatly reducing the potential for competitive elections. This is particularly likely to occur when the minority party has significant obstruction power—unable to enact a partisan gerrymander, the legislature instead agrees on ensuring their own mutual reelection.
In an unusual occurrence in 2000, for example, the two dominant parties in the state of California cooperatively redrew both state and Federal legislative districts to preserve the status quo, ensuring the electoral safety of the politicians from unpredictable voting by the electorate. This move proved completely effective, as no State or Federal legislative office changed party in the 2004 election, although 53 congressional, 20 state senate, and 80 state assembly seats were potentially at risk.
In 2006, the term "70/30 district" came to signify the redrawing of two evenly split (i.e. 50/50) districts into one 70/30 district for each party. The resulting districts gave each party a guaranteed seat and retained their respective power bases.
Prison-based gerrymandering occurs when prisoners are counted as residents of a particular district, increasing the district's population with non-voters for purposes of political apportionment. Critics argue that this violates the principle of one person, one vote because, although many prisoners come from (and return to) urban communities, they are counted as "residents" of the rural districts that contain large prisons, thereby artificially inflating the political representation of districts with prisons at the expense of voters in districts without them. Others contend that prisoners should not be counted as residents of their original districts when they do not reside there and are not legally eligible to vote.
Due to the perceived issues associated with gerrymandering and its effect on competitive elections and democratic accountability, numerous countries have enacted reforms making the practice either more difficult or less effective. Countries such as the U.K., Australia, Canada and most of those in Europe have transferred responsibility for defining constituency boundaries to neutral or cross-party bodies. In Spain, constituency boundaries have been constitutionally fixed since 1978.
In the United States, however, such reforms are controversial and frequently meet particularly strong opposition from groups that benefit from gerrymandering. In a more neutral system, they might lose considerable influence.
The most commonly advocated electoral reform proposal targeted at gerrymandering is to change the redistricting process. Under these proposals, an independent and presumably objective commission is created specifically for redistricting, rather than having the legislature do it.
This is the system used in the United Kingdom, where the independent boundary commissions determine the boundaries for constituencies in the House of Commons and the devolved legislatures, subject to ratification by the body in question (almost always granted without debate). A similar situation exists in Australia where the independent Australian Electoral Commission and its state-based counterparts determine electoral boundaries for federal, state and local jurisdictions.
To help ensure neutrality, members of a redistricting agency may be appointed from relatively apolitical sources such as retired judges or longstanding members of the civil service, possibly with requirements for adequate representation among competing political parties. Additionally, members of the board can be denied access to information that might aid in gerrymandering, such as the demographic makeup or voting patterns of the population.
As a further constraint, consensus requirements can be imposed to ensure that the resulting district map reflects a wider perception of fairness, such as a requirement for a supermajority approval of the commission for any district proposal. Consensus requirements, however, can lead to deadlock, such as occurred in Missouri following the 2000 census. There, the equally numbered partisan appointees were unable to reach consensus in a reasonable time, and consequently the courts had to determine district lines.
In the U.S. state of Iowa, the nonpartisan Legislative Services Bureau (LSB, akin to the U.S. Congressional Research Service) determines boundaries of electoral districts. Aside from satisfying federally mandated contiguity and population equality criteria, the LSB mandates unity of counties and cities. Consideration of political factors such as location of incumbents, previous boundary locations, and political party proportions is specifically forbidden. Since Iowa's counties are chiefly regularly shaped polygons, the LSB process has led to districts that follow county lines.
In 2005, the U.S. state of Ohio had a ballot measure to create an independent commission whose first priority was competitive districts, a sort of "reverse gerrymander". A complex mathematical formula was to be used to determine the competitiveness of a district. The measure failed voter approval chiefly due to voter concerns that communities of interest would be broken up.
In 2017, the Open Our Democracy Act of 2017 was submitted to the US House of Representatives by Rep. Delaney as a means to implement non-partisan redistricting.
When a single political party controls both legislative houses of a state during redistricting, both Democrats and Republicans have displayed a marked propensity for couching the process in secrecy; in May 2010, for example, the Republican National Committee held a redistricting training session in Ohio where the theme was "Keep it Secret, Keep it Safe". The need for increased transparency in redistricting processes is clear; a 2012 investigation by The Center for Public Integrity reviewed every state's redistricting processes for both transparency and potential for public input, and ultimately assigned 24 states grades of either D or F.
In response to these types of problems, redistricting transparency legislation has been introduced in the US Congress a number of times in recent years, including the Redistricting Transparency Acts of 2010, 2011, and 2013. Such policy proposals aim to increase the transparency and responsiveness of the redistricting systems in the US. The case for increasing transparency in redistricting processes is based largely on the premise that lawmakers would be less inclined to draw gerrymandered districts if they were forced to defend them in a public forum.
Because gerrymandering relies on the wasted-vote effect, the use of a different voting system with fewer wasted votes can help reduce gerrymandering. In particular, the use of multi-member districts alongside voting systems establishing proportional representation, such as the single transferable vote, can reduce wasted votes and gerrymandering. Semi-proportional voting systems such as the single non-transferable vote or cumulative voting are relatively simple and similar to first past the post, and can also reduce the proportion of wasted votes and thus potential gerrymandering. Electoral reformers have advocated all three as replacement systems.
Electoral systems with various forms of proportional representation are now found in nearly all European countries, resulting in multi-party systems (with many parties represented in the parliaments) with higher voter turnout, fewer wasted votes, and a wider variety of political opinions represented.
Electoral systems with election of just one winner in each district (i.e., "winner-takes-all" electoral systems) and no proportional distribution of extra mandates to smaller parties tend to create two-party systems (Duverger's law). In these, just two parties effectively compete in the national elections and thus the national political discussions are forced into a narrow two-party frame, where loyalty and forced statements inside the two parties distort the political debate.
If a proportional or semi-proportional voting system is used then increasing the number of winners in any given district will reduce the number of wasted votes. This can be accomplished both by merging separate districts together and by increasing the total size of the body to be elected. Since gerrymandering relies on exploiting the wasted vote effect, increasing the number of winners per district can reduce the potential for gerrymandering in proportional systems. Unless all districts are merged, however, this method cannot eliminate gerrymandering entirely.
In contrast to proportional methods, if a nonproportional voting system with multiple winners (such as block voting) is used, then increasing the size of the elected body while keeping the number of districts constant will not reduce the amount of wasted votes, leaving the potential for gerrymandering the same. While merging districts together under such a system can reduce the potential for gerrymandering, doing so also amplifies the tendency of block voting to produce landslide victories, creating a similar effect to gerrymandering by concentrating wasted votes among the opposition and denying them representation.
If a system of single-winner elections is used, then increasing the size of the elected body will implicitly increase the number of districts to be created. This change can actually make gerrymandering easier when raising the number of single-winner elections, as opposition groups can be more efficiently packed into smaller districts without accidentally including supporters, further increasing the number of wasted votes amongst the opposition.
Another way to avoid gerrymandering is simply to stop redistricting altogether and use existing political boundaries such as state, county, or provincial lines. While this prevents future gerrymandering, any existing advantage may become deeply ingrained. The United States Senate, for instance, has more competitive elections than the House of Representatives due to the use of existing state borders rather than gerrymandered districts—Senators are elected by their entire state, while Representatives are elected in legislatively drawn districts.
The use of fixed districts creates an additional problem, however, in that fixed districts do not take into account changes in population. Individual voters can come to have very different degrees of influence on the legislative process. This malapportionment can greatly affect representation after long periods of time or large population movements. In the United Kingdom during the Industrial Revolution, several constituencies that had been fixed since they gained representation in the Parliament of England became so small that they could be won with only a handful of voters ("rotten boroughs"). Similarly, in the U.S. the state legislature of Alabama refused to redistrict for more than 60 years, despite major changes in population patterns. By 1960 less than a quarter of the state's population controlled the majority of seats in the legislature. This practice of using fixed districts for state legislatures was effectively banned in the United States after the "Reynolds v. Sims" Supreme Court decision in 1964, establishing a rule of one man, one vote, but the practice remains very much alive for the United States Senate since states now have vastly different populations.
Another means to reduce gerrymandering is to create objective, precise criteria to which any district map must comply. Courts in the United States, for instance, have ruled that congressional districts must be contiguous in order to be constitutional. This, however, is not a particularly binding constraint, as very narrow strips of land with few or no voters in them may be used to connect separate regions for inclusion in one district, as is the case in Illinois's 4th congressional district.
Depending on the distribution of voters for a particular party, metrics that maximize compactness can be opposed to metrics that minimize the efficiency gap. For example, in the United States, voters registered with the Democratic Party tend to be concentrated in cities, potentially resulting in a large number of "wasted" votes if compact districts are drawn around city populations. Neither of these metrics take into consideration other possible goals, such as proportional representation based on other demographic characteristics (such as race, ethnicity, gender, or income), maximizing competitiveness of elections (the greatest number of districts where party affiliation is 50/50), avoiding splits of existing government units (like cities and counties), and ensuring representation of major interest groups (like farmers or voters in a specific transportation corridor), though any of these could be incorporated into a more complicated metric.
One method is to define a minimum district-to-convex-polygon ratio. To use this method, every proposed district is circumscribed by the smallest possible convex polygon (its convex hull; think of stretching a rubber band around the outline of the district). Then, the area of the district is divided by the area of the polygon; or, if at the edge of the state, by the portion of the area of the polygon within state boundaries.
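As a rough illustration of this metric, the ratio can be computed with standard computational-geometry tools. The sketch below is a minimal example, assuming the shapely library and a hypothetical district outline; it simply divides the district's area by the area of its convex hull, optionally clipping the hull to a state boundary as described above.

```python
# Minimal sketch of the district-to-convex-hull compactness ratio.
# Assumes the shapely library; the district coordinates are hypothetical.
from shapely.geometry import Polygon

def convex_hull_ratio(district, state=None):
    """Return district area divided by the area of its convex hull.

    If a state boundary is given, only the part of the hull lying inside
    the state is counted, as described for districts on a state edge.
    """
    hull = district.convex_hull
    if state is not None:
        hull = hull.intersection(state)
    return district.area / hull.area

# A deliberately non-compact, C-shaped district scores well below 1.0.
district = Polygon([(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (4, 3), (4, 4), (0, 4)])
print(round(convex_hull_ratio(district), 3))  # 0.625
```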
The advantages of this method are that it allows a certain amount of human intervention to take place (thus solving the Colorado problem of splitline districting); it allows the borders of the district to follow existing jagged subdivisions, such as neighbourhoods or voting districts (something isoperimetric rules would discourage); and it allows concave coastline districts, such as the Florida gulf coast area. It would mostly eliminate bent districts, but still permit long, straight ones. However, since human intervention is still allowed, the gerrymandering issues of packing and cracking would still occur, just to a lesser extent.
The Center for Range Voting has proposed a way to draw districts by a simple algorithm. The algorithm uses only the shape of the state, the number N of districts wanted, and the population distribution as inputs. Slightly simplified, it works recursively: if only one district is required, stop; otherwise divide the required number of districts as evenly as possible into two parts, find the shortest straight line that splits the state's population in that same ratio, and repeat the procedure on each of the two resulting pieces.
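A minimal sketch of this recursion is shown below. It makes several simplifying assumptions that are not part of the original proposal: the state is represented only by census-block centroids with populations, candidate lines are searched over a coarse grid of directions, and a cut's length is approximated from the blocks lying near it rather than measured against the real state boundary.

```python
# Highly simplified sketch of a shortest-splitline-style recursion.
# Assumptions: blocks are (x, y, population) centroids; candidate cuts are
# perpendicular to a coarse grid of directions; a cut's length is only
# approximated by the spread of blocks near it. A real implementation
# would measure the actual chord through the state boundary.
import math

def split_once(blocks, left_share):
    """Split blocks into two groups holding roughly left_share of the
    population, choosing the cut with the shortest approximate length."""
    total = sum(p for _, _, p in blocks)
    best = None
    for step in range(36):                           # candidate directions
        theta = math.pi * step / 36
        d = (math.cos(theta), math.sin(theta))
        ordered = sorted(blocks, key=lambda b: b[0] * d[0] + b[1] * d[1])
        acc, cut = 0.0, len(ordered)
        for i, (_, _, p) in enumerate(ordered):
            acc += p
            if acc >= left_share * total:
                cut = i + 1
                break
        near = ordered[max(0, cut - 5):cut + 5]      # blocks close to the cut
        perp = [-b[0] * d[1] + b[1] * d[0] for b in near]
        length = max(perp) - min(perp) if perp else 0.0   # approximate cut length
        if best is None or length < best[0]:
            best = (length, ordered[:cut], ordered[cut:])
    return best[1], best[2]

def splitline_districts(blocks, n):
    """Recursively split blocks into n districts of roughly equal population."""
    if n == 1:
        return [blocks]
    a = (n + 1) // 2                                 # larger half of the count
    left, right = split_once(blocks, a / n)
    return splitline_districts(left, a) + splitline_districts(right, n - a)
```

Given a list of hypothetical `(x, y, population)` centroids and a district count, `splitline_districts` returns a partition of the blocks; no tie-breaking rules or exact geometry are attempted in this sketch.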
This district-drawing algorithm has the advantages of simplicity, ultra-low cost, a single possible result (and thus no possibility of human interference), lack of intentional bias, and simple boundaries that do not meander needlessly. It has the disadvantage of ignoring geographic features such as rivers, cliffs, and highways, and cultural features such as tribal boundaries; this oversight causes it to produce districts different from those a human would draw, although the resulting boundaries remain very simple.
While most districts produced by the method will be fairly compact and either roughly rectangular or triangular, some of the resulting districts can still be long and narrow strips (or triangles) of land.
Like most automatic redistricting rules, the shortest splitline algorithm will fail to create majority-minority districts, for both ethnic and political minorities, if the minority populations are not very compact. This might reduce minority representation.
Another criticism of the system is that splitline districts sometimes divide and diffuse the voters in a large metropolitan area. This condition is most likely to occur when one of the first splitlines cuts through the metropolitan area. It is often considered a drawback of the system because residents of the same agglomeration are assumed to be a community of common interest. This is most evident in the splitline allocation of Colorado.
As of July 2007, shortest-splitline redistricting pictures, based on the results of the 2000 census, are available for all 50 states.
It is possible to define a specific minimum isoperimetric quotient, proportional to the ratio between the area and the square of the perimeter of any given congressional voting district. Although technologies presently exist to define districts in this manner, there are no rules in place mandating their use, and no national movement to implement such a policy. One problem with the simplest version of this rule is that it would prevent incorporation of jagged natural boundaries, such as rivers or mountains; when such boundaries are required, such as at the edge of a state, certain districts may not be able to meet the required minima. One way of avoiding this problem is to allow districts which share a border with a state border to replace that border with a polygon or semi-circle enclosing the state boundary as a kind of virtual boundary definition, but using the actual perimeter of the district whenever this occurs inside the state boundaries. Enforcing a minimum isoperimetric quotient would encourage districts with a high ratio between area and perimeter.
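As an illustration, the quotient is straightforward to compute from a district's area and perimeter. The snippet below uses one common normalisation, 4πA / P² (often called the Polsby–Popper score), which equals 1 for a circle; the text does not prescribe a particular scaling, so this choice is an assumption, and the example figures are hypothetical.

```python
# Minimal sketch: an isoperimetric quotient for a district, normalised as
# 4*pi*A / P**2, which is 1 for a circle and approaches 0 for long,
# narrow, or jagged shapes.
import math

def isoperimetric_quotient(area, perimeter):
    return 4 * math.pi * area / perimeter ** 2

# A 5x5 square district versus a thin 25x1 rectangle of the same area.
print(round(isoperimetric_quotient(25.0, 20.0), 3))   # square, ~0.785
print(round(isoperimetric_quotient(25.0, 52.0), 3))   # thin rectangle, ~0.116
```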
The efficiency gap is a simply calculated measure that can show the effects of gerrymandering. It measures wasted votes for each party: the sum of votes cast in losing districts (losses due to cracking) and excess votes cast in winning districts (losses due to packing). The difference between the two parties' wasted votes is divided by the total votes cast, and the resulting percentage is the efficiency gap.
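For a concrete two-party illustration, the calculation can be sketched as follows; the district vote totals are hypothetical, and a bare majority of the two-party vote is assumed to be the winning threshold.

```python
# Minimal sketch of the efficiency gap for a two-party race.
# Wasted votes: all votes for the loser of a district, plus the winner's
# votes beyond the bare majority needed to win that district.
def efficiency_gap(districts):
    """districts: list of (votes_A, votes_B) per district."""
    wasted_a = wasted_b = total = 0.0
    for a, b in districts:
        total += a + b
        threshold = (a + b) / 2          # votes needed for a bare majority
        if a > b:
            wasted_a += a - threshold    # A's surplus votes
            wasted_b += b                # all of B's votes are wasted
        else:
            wasted_b += b - threshold
            wasted_a += a
    return (wasted_a - wasted_b) / total # positive: plan disadvantages party A

# Hypothetical plan: party A is packed into one district and cracked elsewhere.
plan = [(85, 15), (40, 60), (40, 60), (40, 60)]
print(round(efficiency_gap(plan), 3))    # 0.275
```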
The introduction of modern computers alongside the development of elaborate voter databases and special districting software has made gerrymandering a far more precise science. Using such databases, political parties can obtain detailed information about every household including political party registration, previous campaign donations, and the number of times residents voted in previous elections and combine it with other predictors of voting behavior such as age, income, race, or education level. With this data, gerrymandering politicians can predict the voting behavior of each potential district with an astonishing degree of precision, leaving little chance for creating an accidentally competitive district.
On the other hand, the introduction of modern computers would also let the United States Census Bureau calculate voting districts based solely on compactness and equal population. This could be done easily using its Block Centers based on the Global Positioning System rather than street addresses. With such data, gerrymandering politicians would no longer be in charge of the process, allowing competitive districts again.
Online web apps such as Dave's Redistricting have allowed users to simulate redistricting states into legislative districts as they wish. According to Bradlee, the software was designed to "put power in people's hands," and so that they "can see how the process works, so it's a little less mysterious than it was 10 years ago."
Markov chain Monte Carlo (MCMC) methods can measure the extent to which redistricting plans favor a particular party or group in an election, and can support automated redistricting simulators.
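A toy sketch of the idea is given below, under many simplifying assumptions that are not tied to any particular published sampler: districts are assignments of equal-population cells on a small grid, each proposal flips one boundary cell to a neighbouring district, and a proposal is accepted only if every district stays contiguous and within one cell of the ideal population. Analysts then compare an enacted plan's partisan statistics against the ensemble of plans such a chain visits; real samplers use far more careful proposals and scoring.

```python
# Toy sketch of a flip-style MCMC over district assignments on a grid.
# Assumptions: cells have equal population, districts must stay contiguous,
# and district sizes may deviate by at most one cell from the ideal size.
import random
from collections import Counter

ROWS, COLS, N_DISTRICTS = 6, 6, 3

def neighbors(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < ROWS and 0 <= c + dc < COLS:
            yield (r + dr, c + dc)

def contiguous(assign, district):
    """Check that all cells of a district form one connected region."""
    cells = [c for c, d in assign.items() if d == district]
    seen, stack = {cells[0]}, [cells[0]]
    while stack:
        cur = stack.pop()
        for nb in neighbors(cur):
            if nb not in seen and assign.get(nb) == district:
                seen.add(nb)
                stack.append(nb)
    return len(seen) == len(cells)

def step(assign):
    """Propose flipping one boundary cell to an adjacent district."""
    cell = random.choice(list(assign))
    options = {assign[nb] for nb in neighbors(cell)} - {assign[cell]}
    if not options:
        return assign
    new = dict(assign)
    new[cell] = random.choice(sorted(options))
    sizes = Counter(new.values())
    ideal = ROWS * COLS / N_DISTRICTS
    balanced = all(abs(s - ideal) <= 1 for s in sizes.values())
    if balanced and all(contiguous(new, d) for d in range(N_DISTRICTS)):
        return new                      # accept the proposal
    return assign                       # reject, keep the current plan

# Start from vertical stripes and run a short chain of flip proposals.
assign = {(r, c): c * N_DISTRICTS // COLS for r in range(ROWS) for c in range(COLS)}
for _ in range(2000):
    assign = step(assign)
print(Counter(assign.values()))         # district sizes after the walk
```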
Gerrymandering is most likely to emerge in majoritarian systems, where the country is divided into several voting districts and the candidate with the most votes wins each district. If the ruling party is in charge of drawing the district lines, it can abuse the fact that in a majoritarian system all votes that do not go to the winning candidate are essentially irrelevant to the composition of the new government. Even though gerrymandering can be used in other voting systems, it has the most significant impact on voting outcomes in first-past-the-post systems. Partisan redrawing of district lines is particularly harmful to democratic principles in majoritarian two-party systems. In general, two-party systems tend to be more polarized than proportional systems. Possible consequences of gerrymandering in such a system are an amplification of political polarization and a lack of representation of minorities, as a large part of the constituency is not represented in policy making. However, not every state using a first-past-the-post system is confronted with the negative impacts of gerrymandering. Some countries, such as Australia, Canada and the UK, authorize non-partisan organizations to set constituency boundaries in an attempt to prevent gerrymandering.
The introduction of a proportional system is often proposed as the most effective solution to partisan gerrymandering. In such systems the entire constituency is represented in proportion to its votes. Although voting districts can form part of a proportional system, redrawing district lines does not benefit a party, as those districts are mainly of organizational value.
In mixed systems that combine proportional and majoritarian voting principles, gerrymandering remains an obstacle that states have to guard against, although the advantage a political actor can potentially gain from redrawing district lines is much smaller than in majoritarian systems. In mixed systems, voting districts are mostly used to prevent elected parliamentarians from becoming too detached from their constituencies. The principle that determines representation in parliament is usually the proportional aspect of the voting system: seats are allocated to each party in proportion to its overall vote. In most mixed systems, winning a voting district merely means that a candidate is guaranteed a seat in parliament, and does not increase a party's overall share of seats. However, gerrymandering can still be used to manipulate the outcome in individual voting districts. In most democracies with a mixed system, non-partisan institutions are in charge of drawing district lines, and gerrymandering is therefore a less common phenomenon.
Gerrymandering should not be confused with malapportionment, whereby the number of eligible voters per elected representative can vary widely without relation to how the boundaries are drawn. Nevertheless, the "-mander" suffix has been applied to particular malapportionments. Sometimes political representatives use both gerrymandering and malapportionment to try to maintain power.
Several western democracies, notably the Netherlands, Slovakia and Slovenia employ an electoral system with only one (nationwide) voting district for election of national representatives. This virtually precludes gerrymandering. Other European countries such as Austria, Czechia or Sweden, among many others, have electoral districts with fixed boundaries (usually one district for each administrative division). The number of representatives for each district can change after a census due to population shifts, but their boundaries do not change. This also effectively eliminates gerrymandering.
Additionally, many countries where the president is directly elected by the citizens (e.g. France, Poland, among others) use only one electoral district for presidential election, despite using multiple districts to elect representatives.
The 1962 Bahamian general election was likely influenced by gerrymandering.
Gerrymandering has not typically been considered a problem in the Australian electoral system largely because drawing of electoral boundaries has typically been done by non-partisan electoral commissions. There have been historical cases of malapportionment, whereby the distribution of electors to electorates was not in proportion to the population in several states. For example, Sir Thomas Playford was Premier of South Australia from 1938 to 1965 as a result of a system of malapportionment, which became known as the Playmander, despite it not strictly speaking involving a gerrymander. More recently the nominally independent South Australian Electoral Districts Boundaries Commission has been accused of favouring the Australian Labor Party, as the party has been able to form government in four of the last seven elections, despite receiving a lower two-party preferred vote.
In Queensland, malapportionment combined with a gerrymander under Premier Sir Joh Bjelke-Petersen became nicknamed the Bjelkemander in the 1970s and 1980s. Under the system, electoral boundaries were drawn so that rural electorates had as few as half as many voters as metropolitan ones, and regions with high levels of support for the opposition Labor Party were concentrated into fewer electorates, allowing Bjelke-Petersen's Country Party (later National Party) led Coalition government to remain in power despite attracting substantially less than 50% of the vote. In the 1986 election, for example, the National Party received 39.64% of the first-preference vote and won 49 seats (in the 89-seat Parliament), whilst the Labor opposition received 41.35% but won only 30 seats. Despite this, the Liberals and Nationals still received a greater combined share of the vote than the Labor opposition, although the system also worked against Liberal representation.
Early in Canadian history, both the federal and provincial levels used gerrymandering to try to maximize partisan power. When Alberta and Saskatchewan were admitted to Confederation in 1905, their original district boundaries were set forth in the respective Alberta and Saskatchewan Acts. Federal Liberal cabinet members devised the boundaries to ensure the election of provincial Liberal governments.
Since responsibility for drawing federal and provincial electoral boundaries was handed over to independent agencies, the problem has largely been eliminated at those levels of government. Manitoba was the first province to authorize a non-partisan group to define constituency boundaries in the 1950s. In 1964, the federal government delegated the drawing of boundaries for federal electoral districts to the non-partisan agency Elections Canada which answers to Parliament rather than the government of the day.
As a result, gerrymandering is not generally a major issue in Canada except at the civic level. Although city wards are recommended by independent agencies, city councils occasionally overrule them. That is much more likely if the city is not homogenous and different neighborhoods have sharply different opinions about city policy direction.
In 2006, a controversy arose in Prince Edward Island over the provincial government's decision to throw out an electoral map drawn by an independent commission. Instead, they created two new maps. The government adopted the second of them, which was designed by the caucus of the governing party. Opposition parties and the media attacked Premier Pat Binns for what they saw as gerrymandering of districts. Among other things, the government adopted a map that ensured that every current Member of the Legislative Assembly from the premier's party had a district to run in for re-election, but in the original map, several had been redistricted. However, in the 2007 provincial election only seven of 20 incumbent Members of the Legislative Assembly were re-elected (seven did not run for re-election), and the government was defeated.
The military government which ruled Chile from 1973 to 1990 was ousted in a national plebiscite in October 1988. Opponents of General Augusto Pinochet voted NO to remove him from power and to trigger democratic elections, while supporters (mostly from the right-wing) voted YES to keep him in office for another eight years.
Five months prior to the plebiscite, the regime published a law regulating future elections and referendums, but the configuration of electoral districts and the manner in which Congress seats would be awarded were only added to the law seven months after the referendum.
For the Chamber of Deputies (lower house), 60 districts were drawn by grouping (mostly) neighboring communes (the smallest administrative subdivision in the country) within the same region (the largest). It was established that two deputies would be elected per district, with the most voted coalition needing to outpoll its closest rival by a margin of more than 2-to-1 to take both seats. The results of the 1988 plebiscite show that neither the "NO" side nor the "YES" side outpolled the other by said margin in any of the newly established districts. They also showed that the vote/seat ratio was lower in districts which supported the "YES" side and higher in those where the "NO" was strongest. In spite of this, at the 1989 parliamentary election, the center-left opposition was able to capture both seats (the so-called "doblaje") in twelve out of 60 districts, winning control of 60% of the Chamber.
Senate constituencies were created by grouping all lower-chamber districts in a region, or by dividing a region into two constituencies of contiguous lower-chamber districts. The 1980 Constitution allocated a number of seats to appointed senators, making it harder for one side to change the Constitution by itself. The opposition won 22 senate seats in the 1989 election, taking both seats in three out of 19 constituencies, controlling 58% of the elected Senate, but only 47% of the full Senate. The unelected senators were eliminated in the 2005 constitutional reforms, but the electoral map has remained largely untouched (two new regions were created in 2007, one of which altered the composition of two senatorial constituencies; the first election to be affected by this minor change took place in 2013).
France is one of the few countries to let legislatures redraw the map with no check. In practice, the legislature sets up an executive commission. Districts called "arrondissements" were used in the Third Republic and under the Fifth Republic they are called "circonscriptions". During the Third Republic, some reforms of arrondissements, which were also used for administrative purposes, were largely suspected to have been arranged to favor the kingmaker in the Assembly, the Parti radical.
The dissolution of the Seine and Seine-et-Oise départements by de Gaulle was seen as a case of gerrymandering to counter communist influence around Paris.
In the modern regime, there have been three redistricting designs: in 1958 (at the regime change), in 1987 (by Charles Pasqua) and in 2010 (by Alain Marleix), each time under a conservative government. Pasqua's map was regarded as a particularly effective gerrymander, yielding 80% of the seats with 58% of the vote in 1993 and forcing the Socialists in the 1997 snap election to enter multiple pacts with smaller parties in order to win again, this time as a coalition. In 2010, the Sarkozy government created 12 districts for expatriates.
The Constitutional Council was asked twice by the opposition to rule on gerrymandering, but it never considered partisan disproportions. However, it forced the Marleix committee to respect an 80–120% population ratio, ending a tradition dating back to the Revolution in which "départements", however small in population, would send at least two MPs.
Gerrymandering in France is also done against regionalist parties. "Départements" are always used even if they split urban areas or larger identity territories, and smaller identity divisions are avoided.
When the electoral districts in Germany were redrawn in 2000, the ruling center-left Social Democratic Party (SPD) was accused of gerrymandering to marginalize the left-wing PDS party. The SPD combined traditional PDS strongholds in eastern Berlin with new districts made up of more populous areas of western Berlin, where the PDS had very limited following.
After having won four seats in Berlin in the 1998 national election, the PDS was able to retain only two seats altogether in the 2002 elections. Under German electoral law, a political party has to win either more than five percent of the votes or at least three directly elected seats to qualify for top-up seats under the Additional Member System. The PDS vote fell below five percent, and thus the party failed to qualify for top-up seats and was confined to just two members of the Bundestag, the German federal parliament (elected representatives are always allowed to hold their seats as individuals). Had it won a third constituency, the PDS would have gained at least 25 additional seats, which would have been enough to hold the balance of power in the Bundestag.
In the election of 2005, The Left (successor of the PDS) gained 8.7% of the votes and thus qualified for top-up seats.
The number of Bundestag seats of parties which previously got over 5% of the votes cannot be affected very much by gerrymandering, because seats are awarded to these parties on a proportional basis. However, when a party wins so many districts in any one of the 16 federal states that those seats alone exceed its proportional share of the vote in that state, the districting does have some influence on larger parties: those extra seats, called "Überhangmandate", remain. In the Bundestag election of 2009, Angela Merkel's CDU/CSU gained 24 such extra seats, while no other party gained any; this skewed the result so much that the Federal Constitutional Court of Germany issued two rulings declaring the existing election laws invalid and requiring the Bundestag to pass a new law limiting such extra seats to no more than 15. In 2013, the Federal Constitutional Court ruled on the constitutionality of Überhangmandate, which from then on have to be balanced by seats added in proportion to the second vote of each party, making it impossible for one party to hold more seats than earned by its proportional share of the vote.
Gerrymandering has been rather common in Greek history since organized parties with national ballots only appeared after the 1926 Constitution. The only case before that was the creation of the Piraeus electoral district in 1906, in order to give the Theotokis party a safe district.
The most infamous case of gerrymandering was in the 1956 election. While in previous elections the districts were based on the prefecture level (νομός), for 1956 the country was split in districts of varying sizes, some being the size of prefectures, some the size of sub-prefectures (επαρχία) and others somewhere in between. In small districts the winning party would take all seats, in intermediate size, it would take most and there was proportional representation in the largest districts. The districts were created in such a way that small districts were those that traditionally voted for the right while large districts were those that voted against the right.
This system has become known as the three-phase (τριφασικό) system or the baklava system (because, as baklava is split into full pieces and corner pieces, the country was also split into disproportionate pieces). The opposition, being composed of the center and the left, formed a coalition with the sole intent of changing the electoral law and then calling new elections. Even though the centrist and leftist opposition won the popular vote (1,620,007 votes against 1,594,992), the right-wing ERE won the majority of seats (165 to 135) and was to lead the country for the next two years.
In Hong Kong, functional constituencies are demarcated by the government and defined in statutes, making them prone to gerrymandering. The functional constituency for the information technology sector was particularly criticized for gerrymandering and vote-planting.
There are also gerrymandering concerns in the constituencies of district councils.
In 2011, Fidesz politician János Lázár proposed a redesign of Hungarian voting districts; considering the territorial results of previous elections, this redesign would favor right-wing politics, according to the opposition. The law has since been passed by the Fidesz-majority Parliament. Formerly, it took twice as many votes to gain a seat in some election districts as in some others.
Until the 1980s Dáil boundaries in Ireland were drawn not by an independent commission but by government ministers. Successive arrangements by governments of all political characters have been attacked as gerrymandering. Ireland uses the single transferable vote, and as well as the actual boundaries drawn, the main tool of gerrymandering has been the number of seats per constituency used, with three-seat constituencies normally benefiting the strongest parties in an area, whereas four-seat constituencies normally help smaller parties.
In 1947 the rapid rise of new party Clann na Poblachta threatened the position of the governing party Fianna Fáil. The government of Éamon de Valera introduced the Electoral (Amendment) Act 1947, which increased the size of the Dáil from 138 to 147 and increased the number of three-seat constituencies from fifteen to twenty-two. The result was described by the journalist and historian Tim Pat Coogan as "a blatant attempt at gerrymander which no Six County Unionist could have bettered." The following February the 1948 general election was held and Clann na Poblachta secured ten seats instead of the nineteen they would have received proportional to their vote.
In the mid-1970s, the Minister for Local Government, James Tully, attempted to arrange the constituencies to ensure that the governing Fine Gael–Labour Party National Coalition would win a parliamentary majority. The Electoral (Amendment) Act 1974 was planned as a major reversal of previous gerrymandering by Fianna Fáil (then in opposition). Tully ensured that there were as many as possible three-seat constituencies where the governing parties were strong, in the expectation that the governing parties would each win a seat in many constituencies, relegating Fianna Fáil to one out of three.
In areas where the governing parties were weak, four-seat constituencies were used so that the governing parties had a strong chance of still winning two. The election results created substantial change, as there was a larger than expected collapse in the vote. Fianna Fáil won a landslide victory in the 1977 Irish general election, two out of three seats in many cases, relegating the National Coalition parties to fight for the last seat. Consequently, the term "Tullymandering" was used to describe the phenomenon of a failed attempt at gerrymandering.
Gerrymandering has been hypothesized in the constituencies drawn under the 2017 electoral law, the so-called Rosatellum.
From the years 1981 until 2005, Kuwait was divided into 25 electoral districts in order to over-represent the government's supporters (the 'tribes'). In July 2005, a new law for electoral reforms was approved which prevented electoral gerrymandering by cutting the number of electoral districts from 25 to 5.
The government of Kuwait found that 5 electoral districts resulted in a powerful parliament with the majority representing the opposition. A new law was crafted by the government of Kuwait and signed by the Amir to gerrymander the districts to 10 allowing the government's supporters to regain the majority.
The practice of gerrymandering has been around in the country since its independence in 1957. The ruling coalition at that time, "Barisan Nasional" (BN; English: "National Front"), has been accused of controlling the election commission by revising the boundaries of constituencies. For example, during the 13th General Election in 2013, Barisan Nasional won 60% of the seats in the Malaysian Parliament despite only receiving 47% of the popular vote. Malapportionment has also been used at least since 1974, when it was observed that in one state alone (Perak), the parliamentary constituency with the most voters had more than ten times as many voters as the one with the fewest voters. These practices finally failed BN in the 14th General Election on 9 May 2018, when the opposing "Pakatan Harapan" (PH; English: "Alliance of Hope") won despite perceived efforts of gerrymandering and malapportionment from the incumbent.
In Malta, the Labour Party won the 1981 election because of gerrymandering, even though the Nationalist Party received the most votes. A 1987 constitutional amendment prevented that situation from recurring.
After the restoration of democracy in 1990, gerrymandering has frequently been practiced in Nepali politics with a view to gaining electoral advantage. It was most often practiced by the Nepali Congress, which remained in power for most of that time. Learning from this, constituencies were reshaped for the Constituent Assembly election, and the opposition now wins elections.
Congressional districts in the Philippines were originally based on an ordinance from the 1987 Constitution, which was created by the Constitutional Commission, which was ultimately based on legislative districts as they were drawn in 1907. The same constitution gave Congress of the Philippines the power to legislate new districts, either through a national redistricting bill or piecemeal redistricting per province or city. Congress has never passed a national redistricting bill since the approval of the 1987 constitution, while it has incrementally created 34 new districts, out of the 200 originally created in 1987.
This allows Congress to create new districts once a place reaches 250,000 inhabitants, the minimum required for its creation. With this, local dynasties, through congressmen, can exert influence in the district-making process by creating bills carving new districts from old ones. In time, as the population of the Philippines increases, these districts, or groups of it, will be the basis of carving new provinces out of existing ones.
An example was in Camarines Sur, where two districts were divided into three districts, allegedly favoring the Andaya and Arroyo families; it caused Rolando Andaya and Dato Arroyo, who would otherwise have run against each other, to run in separate districts, with one district allegedly not even surpassing the 250,000-population minimum. The Supreme Court later ruled that the 250,000-population minimum does not apply to an additional district in a province. The resulting splits would later be the cause of another gerrymander, in which the province would be split into a new province called Nueva Camarines; the bill was defeated in the Senate in 2013.
In recent decades, critics have accused the ruling People's Action Party (PAP) of unfair electoral practices to maintain significant majorities in the Parliament of Singapore. Among the complaints are that the government uses gerrymandering. The Elections Department was established as part of the executive branch under the Prime Minister of Singapore, rather than as an independent body. Critics have accused it of giving the ruling party the power to decide polling districts and polling sites through electoral engineering, based on poll results in previous elections.
Members of opposition parties claim that the Group Representation Constituency system is "synonymous to gerrymandering", pointing out examples of Cheng San GRC and Eunos GRC which were dissolved by the Elections Department with voters redistributed to other constituencies after opposition parties gained ground in elections.
Until the establishment of the Second Spanish Republic in 1931, Spain used both single-member and multi-member constituencies in general elections. Multi-member constituencies were used only in some big cities. Gerrymandering examples included the districts of Vilademuls and Torroella de Montgrí in Catalonia, which were created in order to prevent the Federal Democratic Republican Party from winning a seat in Figueres or La Bisbal and to secure a seat for the dynastic parties. Since 1931, the constituency boundaries have matched the province boundaries.
After the Francoist dictatorship, during the transition to democracy, these fixed provincial constituencies were reestablished in Section 68.2 of the current 1978 Spanish Constitution, so gerrymandering is impossible in general elections. There are not "winner-takes-all" elections in Spain except for the tiny territories of Ceuta and Melilla (which only have one representative each); everywhere else the number of representatives assigned to a constituency is proportional to its population and calculated according to a national law, so tampering with under- or over-representation is difficult too.
European, some regional, and municipal elections are held under single at-large multi-member constituencies with proportional representation, so gerrymandering is not possible in those elections either.
Sri Lanka's new local government election process has prompted talk of gerrymandering since its inception. Although that talk has mostly concerned the ward level, it is also seen in some local council areas.
In the most recent election of 2010, there were numerous examples of gerrymandering throughout the entire country of Sudan. A report from the Rift Valley Institute uncovered violations of Sudan's electoral law, where constituencies were created that were well below and above the required limit. According to Sudan's National Elections Act of 2008, no constituency can have a population that is 15% greater or less than the average constituency size. The Rift Valley Report uncovered a number of constituencies that are in violation of this rule. Examples include constituencies in Jonglei, Warrap, South Darfur, and several other states.
Gerrymandering was used in the city of İstanbul in the 2009 municipal elections. Just before the election, İstanbul was divided into new districts: large low-income neighborhoods were bundled with rich neighborhoods in order to win the municipal elections.
Gerrymandering (Irish: "Claonroinnt") is widely considered to have been introduced after the establishment of Home Rule in Northern Ireland in 1921, favouring Unionists who tended to be Protestant, to the detriment of Nationalists who were mostly Catholic. Some critics and supporters spoke at the time of "A Protestant Parliament for a Protestant People". This passed also into local government. Stephen Gwynn had noted as early as 1911 that since the introduction of the Local Government (Ireland) Act 1898:
In Armagh there are 68,000 Protestants, 56,000 Catholics. The County Council has twenty-two Protestants and eight Catholics. In Tyrone, Catholics are a majority of the population, 82,000 against 68,000; but the electoral districts have been so arranged that Unionists return sixteen as against thirteen Nationalists (one a Protestant). This Council gives to the Unionists two to one majority on its Committees, and out of fifty-two officials employs only five Catholics. In Antrim, which has the largest Protestant majority (196,000 to 40,000), twenty-six Unionists and three Catholics are returned. Sixty officers out of sixty-five are good Unionists and Protestants.
In the 1920s and 1930s, the Ulster Unionist Party created new electoral boundaries for the Londonderry County Borough Council to ensure election of a Unionist council in a city where Nationalists had a large majority and had won previous elections. Initially local parties drew the boundaries, but in the 1930s the province-wide government redrew them to reinforce the gerrymander. However, in the 1967 election, Unionists won 35.5% of the votes and received 60% of the seats, while Nationalists got 27.4% of the votes but received 40% of the seats. This meant that both the Unionist and Nationalist parties were over-represented, while the Northern Ireland Labour Party and Independents (amounting to more than 35% of the votes cast) were severely under-represented.
From the outset, Northern Ireland had installed the single transferable vote (STV) system in order to secure fair elections in terms of proportional representation in its Parliament. After two elections under that system, in 1929 Stormont changed the electoral system to be the same as the rest of the United Kingdom: a single-member first past the post system. The only exception was for the election of four Stormont MPs to represent the Queen's University of Belfast. Some scholars believe that the boundaries were gerrymandered to under-represent Nationalists. Other geographers and historians, for instance Professor John H. Whyte, disagree. They have argued that the electoral boundaries for the Parliament of Northern Ireland were not gerrymandered to a greater level than that produced by any single-winner election system, and that the actual number of Nationalist MPs barely changed under the revised system (it went from 12 to 11 and later went back up to 12). Most observers have acknowledged that the change to a single-winner system was a key factor, however, in stifling the growth of smaller political parties, such as the Northern Ireland Labour Party and Independent Unionists.
After Westminster reintroduced direct rule in 1973, it restored the single transferable vote (STV) for elections to the Northern Ireland Assembly in the following year, using the same definitions of constituencies as for the Westminster Parliament. Currently, in Northern Ireland, all elections use STV except those for positions in the Westminster Parliament, which follow the pattern in the rest of the United Kingdom by using "first past the post."
The number of electors in a United Kingdom constituency can vary considerably, with the smallest constituency currently (2017 electoral register) having fewer than a fifth of the electors of the largest (Scotland's Na h-Eileanan an Iar (21,769 constituents) and Orkney and Shetland (34,552), compared to England's North West Cambridgeshire (93,223) and Isle of Wight (110,697)). This variation has resulted from several factors.
Under the Sixth Periodic Review of Westminster constituencies, the Coalition government planned to review and redraw the parliamentary constituency boundaries for the House of Commons of the United Kingdom. The review and redistricting was to be carried out by the four UK boundary commissions to produce a reduction from 650 to 600 seats and more uniform constituency sizes, such that a constituency was to have no fewer than 70,583 and no more than 80,473 electors. The process was intended to address historic malapportionment and to be complete by 2015. Preliminary reports suggested that the areas set to lose the fewest seats historically tended to vote Conservative, while less populous and deindustrialised regions such as Wales, which would lose a larger proportion of their seats, tended to have more Labour and Liberal Democrat voters, so the changes would have partially corrected the existing malapportionment. An opposition (Labour) motion to suspend the review until after the next general election was tabled in the House of Lords and a vote called in the United Kingdom House of Commons in January 2013; the motion passed with the help of the Liberal Democrats, going back on an election pledge. A new review is now in progress, and a draft of the new boundaries has been published.
The United States, among the first with an elected representative government, is believed to be the source of the term "Gerrymander."
The word "gerrymander" is a portmanteau of salamander and Gerry, the last name of an early Massachusetts Governor. "Gerrymander" became the accepted term for the practice after a 1812 caricature satirized the bizarre shape of a district in Essex County, Massachusetts as a dragon-like "monster" and Federalist newspapers editors and others at the time likened the ungainly misshapen district to a salamander.
As a historical footnote, a story often retold in the past but now disproven stated that, while Patrick Henry and his Anti-Federalist allies were in control of the Virginia House of Delegates in 1788, they drew the boundaries of Virginia's 5th congressional district in an unsuccessful attempt to keep James Madison out of the U.S. House of Representatives via the candidacy of James Monroe.
The practice of gerrymandering the borders of new states continued past the Civil War and into the late 19th century. The Republican Party used its control of Congress to secure the admission of more states in territories friendly to their party—the admission of Dakota Territory as two states instead of one being a notable example. By the rules for representation in the Electoral College, each new state carried at least three electoral votes regardless of its population.
All redistricting in the United States has been contentious because it has been controlled by political parties vying for power. As a consequence of the decennial census required by the United States Constitution, districts for members of the House of Representatives typically need to be redrawn whenever the number of members in a state changes. In many states, state legislatures redraw boundaries for state legislative districts at the same time.
State legislatures have used gerrymandering along racial lines both to decrease and increase minority representation in state governments and congressional delegations. In Ohio, a conversation between Republican officials was recorded that demonstrated that redistricting was being done to aid their political candidates. Furthermore, the discussions assessed the race of voters as a factor in redistricting, on the premise that African-Americans tend to back Democratic candidates. Republicans apparently removed approximately 13,000 African-American voters from the district of Jim Raussen, a Republican candidate for the House of Representatives, in an attempt to tip the scales in what was once a competitive district for Democratic candidates.
With the Civil Rights Movement and passage of the Voting Rights Act of 1965, federal enforcement and protections of suffrage for all citizens were enacted. Gerrymandering for the purpose of reducing the political influence of a racial or ethnic minority group was prohibited. After the Voting Rights Act of 1965 was passed, some states created "majority-minority" districts to enhance minority voting strength. This practice, also called "affirmative gerrymandering", was supposed to redress historic discrimination and ensure that ethnic minorities would gain some seats and representation in government. In some states, bipartisan gerrymandering is the norm. State legislators from both parties sometimes agree to draw congressional district boundaries in a way that ensures the re-election of most or all incumbent representatives from both parties.
Rather than allowing more political influence, some states have shifted redistricting authority from politicians and given it to non-partisan redistricting commissions. The states of Washington, Arizona, and California have created standing committees for redistricting following the 2010 census. It has been argued, however, that in California's case gerrymandering still continued despite this change. Rhode Island and New Jersey have developed ad hoc committees, tied to new census data, for the past two decennial reapportionments. Florida's amendments 5 and 6, meanwhile, established rules for the creation of districts but did not mandate an independent commission.
International election observers from the Organization for Security and Co-operation in Europe Office for Democratic Institutions and Human Rights, who were invited to observe and report on the 2004 national elections, expressed criticism of the U.S. congressional redistricting process and made a recommendation that the procedures be reviewed to ensure genuine competitiveness of Congressional election contests.
In 2015, an analyst reported that the two major parties differ in the way they redraw districts. The Democrats construct coalition districts of liberals and minorities together with conservatives which results in Democratic-leaning districts. The Republicans tend to place liberals all together in a district, conservatives in others, creating clear partisan districts.
In June 2019, the United States Supreme Court ruled in "Lamone v. Benisek" and "Rucho v. Common Cause" that federal courts lacked jurisdiction to hear challenges over partisan gerrymandering.
Prior to the 26 September 2010 legislative elections, gerrymandering took place via an addendum to the electoral law by the National Assembly of Venezuela. In the subsequent election, Hugo Chávez's political party, the United Socialist Party of Venezuela drew 48% of the votes overall, while the opposition parties (the Democratic Unity Roundtable and the Fatherland for All parties) drew 52% of the votes. However, due to the re-allocation of electoral legislative districts prior to the election, Chávez's United Socialist Party of Venezuela was awarded over 60% of the spots in the National Assembly (98 deputies), while 67 deputies were elected for the two opposition parties combined.
In a play on words, the use of race-conscious procedures in jury selection has been termed "jurymandering".
|
https://en.wikipedia.org/wiki?curid=12987
|
Gin
Gin is a distilled alcoholic drink that derives its predominant flavour from juniper berries ("Juniperus communis"). Gin is one of the broadest categories of spirits, all of various origins, styles, and flavour profiles, that revolve around juniper as a common ingredient.
Gin began its life as a medicinal liquor made by monks in Italy, swiftly followed by other monks and alchemists across Europe – particularly in Southern France, Flanders and the Netherlands, where gin is often incorrectly believed to have been invented – who distilled grapes and grains to provide aqua vitae. It then became an object of commerce in the spirits industry. Gin emerged in England after the introduction of jenever, a Dutch and Belgian liquor which had originally been a medicine. Although this development had been taking place since the early 17th century, gin became widespread after the William of Orange-led Glorious Revolution of 1688 and the subsequent import restrictions on French brandy.
Gin today is produced in subtly different ways, from a wide range of herbal ingredients, giving rise to a number of distinct styles and brands. After juniper, gin tends to be flavoured with herbal, spice, floral or fruit botanicals, often in combination. It is most commonly consumed mixed with tonic water. Gin is also often used as a base spirit to produce flavoured gin-based liqueurs, such as sloe gin, traditionally by the addition of fruit, flavourings and sugar.
The earliest known written reference to jenever appears in the 13th-century encyclopaedic work "Der Naturen Bloeme" (Bruges), with the earliest printed recipe for jenever dating from 16th-century work "Een Constelijck Distileerboec" (Antwerp).
The physician Franciscus Sylvius has been falsely credited with the invention of gin in the mid-17th century, although the existence of jenever is confirmed in Philip Massinger's play "The Duke of Milan" (1623), when Sylvius would have been about nine years old. It is further claimed that English soldiers who provided support in Antwerp against the Spanish in 1585, during the Eighty Years' War, were already drinking jenever for its calming effects before battle, from which the term "Dutch courage" is believed to have originated. According to some unconfirmed accounts, gin originated in Italy.
By the mid-17th century, numerous small Dutch and Flemish distillers had popularized the re-distillation of malted barley spirit or malt wine with juniper, anise, caraway, coriander, etc., which were sold in pharmacies and used to treat such medical problems as kidney ailments, lumbago, stomach ailments, gallstones, and gout. Gin emerged in England in varying forms by the early 17th century, and at the time of the Restoration enjoyed a brief resurgence. Gin became vastly more popular as an alternative to brandy when William III, II & I and Mary II became co-sovereigns of England, Scotland and Ireland after leading the Glorious Revolution. Particularly in crude, inferior forms, it was more likely to be flavoured with turpentine. Historian Angela McShane has described it as a "Protestant drink", as its rise was brought about by a Protestant king fuelling his armies fighting the Catholic Irish and French.
Gin drinking in England rose significantly after the government allowed unlicensed gin production, and at the same time imposed a heavy duty on all imported spirits such as French brandy. This created a larger market for poor-quality barley that was unfit for brewing beer, and in 1695–1735 thousands of gin-shops sprang up throughout England, a period known as the Gin Craze. Because of the low price of gin, when compared with other drinks available at the same time, and in the same geographic location, gin began to be consumed regularly by the poor. Of the 15,000 drinking establishments in London, not including coffee shops and drinking chocolate shops, over half were gin shops. Beer maintained a healthy reputation as it was often safer to drink the brewed ale than unclean plain water. Gin, though, was blamed for various social problems, and it may have been a factor in the higher death rates which stabilized London's previously growing population. The reputation of the two drinks was illustrated by William Hogarth in his engravings "Beer Street and Gin Lane" (1751), described by the BBC as "arguably the most potent anti-drug poster ever conceived." The negative reputation of gin survives today in the English language, in terms like "gin mills" or the American phrase "gin joints" to describe disreputable bars, or "gin-soaked" to refer to drunks. The epithet "mother's ruin" is a common British name for gin, the origin of which is the subject of ongoing debate.
The Gin Act 1736 imposed high taxes on retailers and led to riots in the streets. The prohibitive duty was gradually reduced and finally abolished in 1742. The Gin Act 1751 was more successful, however; it forced distillers to sell only to licensed retailers and brought gin shops under the jurisdiction of local magistrates. Gin in the 18th century was produced in pot stills, and was somewhat sweeter than the London gin known today.
In London in the early 18th century, much gin was distilled legally in residential houses (there were estimated to be 1,500 residential stills in 1726) and was often flavoured with turpentine to generate resinous woody notes in addition to the juniper. As late as 1913, "Webster's Dictionary" states without further comment, "'common gin' is usually flavoured with turpentine".
Another common variation was to distill in the presence of sulphuric acid. Although the acid itself does not distil, it imparts the additional aroma of diethyl ether to the resulting gin. Sulphuric acid subtracts one water molecule from two ethanol molecules to create diethyl ether, which also forms an azeotrope with ethanol, and therefore distils with it. The result is a sweeter spirit, and one that may have possessed additional analgesic or even intoxicating effects – see Paracelsus.
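To make the chemistry explicit, the overall reaction described above can be written out as follows; this is a simplified scheme sketched for this summary rather than taken from any cited source, with sulphuric acid acting as catalyst and dehydrating agent rather than being consumed:

    % Acid-catalysed condensation of ethanol to diethyl ether
    \[
      2\,\mathrm{CH_{3}CH_{2}OH}
      \;\xrightarrow{\ \mathrm{H_{2}SO_{4}}\ }\;
      \mathrm{(CH_{3}CH_{2})_{2}O} \;+\; \mathrm{H_{2}O}
    \]

Counting atoms confirms the statement in the text: two ethanol molecules give up one molecule of water between them, leaving diethyl ether, which then co-distils with the remaining ethanol.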
Dutch or Belgian gin, also known as "jenever" or "genever", evolved from malt wine spirits, and is a distinctly different drink from later styles of gin. Schiedam, a city in the province of South Holland, is famous for its "jenever"-producing history, as is Hasselt in the Belgian province of Limburg. The "oude" (old) style of "jenever" remained very popular throughout the 19th century, when it was referred to as "Holland" or "Geneva" gin in popular, American, pre-Prohibition bartender guides.
The 18th century gave rise to a style of gin referred to as "Old Tom gin", which is a softer, sweeter style of gin, often containing sugar. Old Tom gin faded in popularity by the early 20th century.
The invention and development of the column still (1826 and 1831) made the distillation of neutral spirits practical, thus enabling the creation of the "London dry" style that evolved later in the 19th century.
In tropical British colonies gin was used to mask the bitter flavour of quinine, which was the only effective anti-malarial compound. Quinine was dissolved in carbonated water to form tonic water; the resulting cocktail is the gin and tonic, although modern tonic water contains only a trace of quinine as a flavouring. Gin is a common base spirit for many mixed drinks, including the martini. Secretly produced "bathtub gin" was available in the speakeasies and "blind pigs" of Prohibition-era America as a result of its relatively simple production.
Sloe gin is traditionally described as a liqueur made by infusing sloes (the fruit of the blackthorn) in gin, although modern versions are almost always compounded from neutral spirits and flavourings. Similar infusions are possible with other fruits, such as damsons. Another popular gin-based liqueur with a longstanding history is Pimm's No. 1 Cup (25% alcohol by volume (ABV)), which is a fruit cup flavoured with citrus and spices.
The National Jenever Museums are located in Hasselt, Belgium, and Schiedam, the Netherlands.
Since 2013 gin has been in a period of ascendancy worldwide, with many new brands and producers entering the category, leading to strong growth, innovation and change. More recently gin-based liqueurs have been popularised, reaching a market outside that of traditional gin drinkers, including fruit-flavoured and usually coloured "Pink gin", Rhubarb gin, Spiced gin, Violet gin, Blood orange gin and Sloe gin. Surging popularity and unchecked competition have led consumers to conflate gin with gin liqueurs, and many products straddle, push or break the boundaries of established definitions in a period of genesis for the industry.
The name "gin" is a shortened form of the older English word "genever", related to the French word "genièvre" and the Dutch word "jenever". All ultimately derive from "juniperus", the Latin for juniper.
Although many different styles of gin have evolved, it is legally differentiated into four categories in the European Union.
In the United States, "gin" is defined as an alcoholic beverage of no less than 40% ABV (80 proof) that possesses the characteristic flavour of juniper berries. Gin produced only through the redistillation of botanicals can be further distinguished and marketed as "distilled gin".
The Canadian Food and Drug Regulation recognises gin with three different definitions (Genever, Gin, London or Dry gin) that loosely approximate the US definitions. Whereas a more detailed regulation is provided for Holland gin or genever, no distinction is made between compounded gin and distilled gin. Either compounded or distilled gin can be labelled as Dry Gin or London Dry Gin if it does not contain any sweetening agents; Genever and Gin may contain no more than two per cent sweetening agents.
Some legal classifications (protected denomination of origin) define gin as only originating from specific geographical areas without any further restrictions (e.g. Plymouth gin (PGI now lapsed), Ostfriesischer Korngenever, Slovenská borovička, Kraški Brinjevec, etc.), while other common descriptors refer to classic styles that are culturally recognised but not legally defined (e.g. Old Tom gin). Sloe gin is also worth mentioning: although technically a gin-based liqueur, it is unique in that the EU spirit drink regulations stipulate that the colloquial term "sloe gin" can legally be used without the "liqueur" suffix when certain production criteria are met.
Several different techniques for the production of gin have evolved since its early origins, reflecting ongoing modernization in distillation and flavouring techniques. As a result of this evolution, gins can be broadly differentiated into three basic styles.
Popular botanicals or flavouring agents for gin, besides the required juniper, often include citrus elements, such as lemon and bitter orange peel, as well as a combination of other spices, which may include any of anise, angelica root and seed, orris root, licorice root, cinnamon, almond, cubeb, savory, lime peel, grapefruit peel, dragon eye (longan), saffron, baobab, frankincense, coriander, grains of paradise, nutmeg, cassia bark or others. The different combinations and concentrations of these botanicals in the distillation process cause the variations in taste among gin products.
Chemical research has begun to identify the various chemicals that are extracted in the distillation process and contribute to gin's flavouring. For example, juniper monoterpenes come from juniper berries. Citrus flavours come from chemicals such as limonene, gamma-terpinene, and linalool. Spice-like flavours come from chemicals such as sabinene, delta-3-carene, and para-cymene.
In 2018, more than half the growth in the UK Gin category was contributed by flavoured gin.
According to the Canadian Food and Drug Regulation, gin is produced through the redistillation of alcohol from juniper berries, or from a mixture of more than one such redistilled product.
A well-known gin cocktail is the martini, traditionally made with gin and dry vermouth, and numerous other notable cocktails also use gin as their base.
|
https://en.wikipedia.org/wiki?curid=12988
|
Gall–Peters projection
The Gall–Peters projection is a rectangular map projection that maps all areas such that they have the correct sizes relative to each other. Like any equal-area projection, it achieves this goal by distorting most shapes. The projection is a particular example of the cylindrical equal-area projection with latitudes 45° north and south as the regions on the map that have no distortion.
The projection is named after James Gall and Arno Peters. Gall is credited with describing the projection in 1855 at a science convention. He published a paper on it in 1885. Peters brought the projection to a wider audience beginning in the early 1970s by means of the "Peters World Map". The name "Gall–Peters projection" seems to have been used first by Arthur H. Robinson in a pamphlet put out by the American Cartographic Association in 1986.
Maps based on the projection are promoted by UNESCO, and they are also widely used by British schools. The U.S. state of Massachusetts and Boston Public Schools began phasing in these maps in March 2017, becoming the first public school district and state in the United States to adopt Gall–Peters maps as their standard.
The Gall–Peters projection achieved notoriety in the late 20th century as the centerpiece of a controversy about the political implications of map design.
The projection is conventionally defined as:

x = πRλ / (180°√2), y = √2 R sin φ

where "λ" is the longitude from the central meridian in degrees, "φ" is the latitude, and "R" is the radius of the globe used as the model of the earth for projection. For longitude given in radians, remove the π/180° factor.
Stripping out unit conversion and uniform scaling, the formulae may be written:

x = Rλ, y = 2R sin φ

where "λ" is the longitude from the central meridian (in radians), "φ" is the latitude, and "R" is the radius of the globe used as the model of the earth for projection. Hence the sphere is mapped onto the vertical cylinder, and the cylinder is stretched to double its length. The stretch factor, 2 in this case, is what distinguishes the variations of cylindric equal-area projection.
The various specializations of the cylindric equal-area projection differ only in the ratio of the vertical to horizontal axis. This ratio determines the "standard parallel" of the projection, which is the parallel at which there is no distortion and along which distances match the stated scale. There are always two standard parallels on the cylindric equal-area projection, each at the same distance north and south of the equator. The standard parallels of the Gall–Peters are 45° N and 45° S. Several other specializations of the equal-area cylindric have been described, promoted, or otherwise named.
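As a concrete illustration of the relationship between the stretch factor and the standard parallel, the following minimal Python sketch implements the cylindrical equal-area family described above; the function name, the unit-sphere default and the sample values are assumptions made for this example, and the Gall–Peters case is recovered by choosing a standard parallel of 45°.

    import math

    def cylindrical_equal_area(lon_deg, lat_deg, std_parallel_deg=45.0, radius=1.0):
        """Project longitude/latitude (in degrees) to map coordinates (x, y).

        Members of the cylindrical equal-area family differ only in the
        standard parallel; 45 degrees gives the Gall-Peters projection,
        i.e. x = R*lambda/sqrt(2), y = sqrt(2)*R*sin(phi), lambda in radians.
        """
        lam = math.radians(lon_deg)                    # longitude from the central meridian
        phi = math.radians(lat_deg)
        k = math.cos(math.radians(std_parallel_deg))   # horizontal compression factor
        return radius * lam * k, radius * math.sin(phi) / k

    # Gall-Peters: true scale along the 45th parallels north and south
    print(cylindrical_equal_area(90.0, 45.0))   # approximately (1.111, 1.000)

Choosing a standard parallel of 0° in the same sketch gives the Lambert cylindrical equal-area projection, while other values give the other named specializations mentioned above.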
The Gall–Peters projection was first described in 1855 by clergyman James Gall, who presented it along with two other projections at the Glasgow meeting of the British Association for the Advancement of Science (the BA). He gave it the name "orthographic" and formally published his work in 1885 in the "Scottish Geographical Magazine". The projection is suggestive of the Orthographic projection in that distances between parallels of the Gall–Peters are a constant multiple of the distances between the parallels of the orthographic. That constant is √2.
The name "Gall–Peters projection" seems to have been used first by Arthur H. Robinson in a pamphlet put out by the American Cartographic Association in 1986. Before 1973 it had been known, when referred to at all, as the "Gall orthographic" or "Gall's orthographic." Most Peters supporters refer to it only as the "Peters projection." During the years of controversy the cartographic literature tended to mention both attributions, settling on one or the other for the purposes of the article. In recent years "Gall–Peters" seems to dominate.
In 1967, Arno Peters, a German filmmaker, devised a map projection identical to Gall's orthographic projection and presented it in 1973 as a "new invention". He promoted it as a superior alternative to the Mercator projection, which was suited to navigation but also used commonly in world maps. The Mercator projection increasingly inflates the sizes of regions according to their distance from the equator. This inflation results, for example, in a representation of Greenland that is larger than Africa, which has a geographic area 14 times greater than Greenland's. Since much of the technologically underdeveloped world lies near the equator, these countries appear smaller on a Mercator and therefore, according to Peters, seem less significant. On Peters's projection, by contrast, areas of equal size on the globe are also equally sized on the map. By using his "new" projection, Peters argued that poorer, less powerful nations could be restored to their rightful proportions. This reasoning has been picked up by many educational and religious bodies, leading to adoption of the Gall–Peters projection among some socially concerned groups, including Oxfam, National Council of Churches, New Internationalist magazine, and the Mennonite Central Committee. However, Peters's choice of 45° N/S for the standard parallels means that the regions displayed with highest accuracy include Europe and the US, and not the tropics.
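To put a rough number on the size inflation at issue, the following short sketch (in the same language as the earlier example, assuming a simple spherical model; the latitude used to stand in for central Greenland is an illustrative choice) computes the Mercator projection's local area inflation, which any equal-area projection such as the Gall–Peters holds at 1 everywhere.

    import math

    def mercator_area_inflation(lat_deg):
        """Local area scale factor of the spherical Mercator projection.

        Mercator stretches both east-west and north-south lengths by
        sec(latitude), so local areas are inflated by sec(latitude) squared.
        """
        return 1.0 / math.cos(math.radians(lat_deg)) ** 2

    print(mercator_area_inflation(0.0))    # 1.0 at the equator (most of Africa)
    print(mercator_area_inflation(72.0))   # roughly 10.5 around central Greenland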
Peters's original description of the projection for his map contained a geometric error that, taken literally, implies standard parallels of 46°02′ N/S. However the text accompanying the description made it clear that he had intended the standard parallels to be 45° N/S, making his projection identical to Gall's orthographic. In any case, the difference is negligible in a world map.
At first, the cartographic community largely ignored Peters's foray into cartography. The preceding century had already witnessed many campaigns for new projections with little visible result. Just twenty years earlier, for example, Trystan Edwards described and promoted his own eponymous projection, disparaging the Mercator, and recommending his projection as "the" solution. Peters's projection differed from Edwards's only in height-to-width ratio. More problematic, Peters's projection was identical to one that was already over a century old, though he probably did not realize it. That projection—Gall's orthographic—passed unnoticed when it was announced in 1855.
Beyond the lack of novelty in the projection itself, the claims Peters made about the projection were also familiar to cartographers. Just as in the case of Peters, earlier projections generally were promoted as alternatives to the Mercator. Inappropriate use of the Mercator projection in world maps and the size disparities figuring prominently in Peters's arguments against the Mercator projection had been remarked upon for centuries and quite commonly in the 20th century. As early as 1943, Stewart notes this phenomenon and compares the quest for the perfect projection to "squaring the circle or making pi come out even" because the mathematics that governs map projections just does not permit development of a map projection that is objectively significantly better than the hundreds already devised. Even Peters's politicized interpretation of the common use of Mercator was nothing new, with Kelloway's 1946 text mentioning a similar controversy.
Cartographers had long despaired over publishers' inapt use of the Mercator. A 1943 "New York Times" editorial stated that "The time has come to discard [the Mercator] for something that represents the continents and directions less deceptively ... Although its usage ... has diminished ... it is still highly popular as a wall map apparently in part because, as a rectangular map, it fills a rectangular wall space with more map, and clearly because its familiarity breeds more popularity." Because of the lack of novelty both in the projection Peters devised and in the rhetoric surrounding its promotion, the cartographic community had no reason to think Peters would succeed any more than Edwards or his predecessors had.
Peters, however, launched his campaign in a different world from that of Edwards. He announced his map at a time when themes of social justice resonated strongly in academia and politics. Suggesting "cartographic imperialism", Peters found ready audiences. The campaign was bolstered by the claim that the Peters projection was the only "area-correct" map. Other claims included "absolute angle conformality", "no extreme distortions of form", and "totally distance-factual".
All of those claims were erroneous. Some of the oldest projections are equal-area (the sinusoidal projection is also known as the "Mercator equal-area projection"), and hundreds have been described, refuting any implication that Peters's map is special in that regard. In any case, Mercator was not the pervasive projection Peters made it out to be: a wide variety of projections has always been used in world maps. Peters's chosen projection suffers extreme distortion in the polar regions, as any cylindrical projection must, and its distortion along the equator is considerable. Several scholars have remarked on the irony of the projection's undistorted presentation of the mid latitudes, including Peters's native Germany, at the expense of the low latitudes, which host more of the technologically underdeveloped nations. The claim of distance fidelity is particularly problematic: Peters's map lacks distance fidelity everywhere except along the 45th parallels north and south, and then only in the direction of those parallels. No world projection is good at preserving distances everywhere; Peters's and all other cylindric projections are especially bad in that regard because east-west distances inevitably balloon toward the poles.
The cartographic community met Peters's 1973 press conference with amusement and mild exasperation, but little activity beyond a few articles commenting on the technical aspects of Peters's claims. In the ensuing years, however, it became clear that Peters and his map were no flash in the pan. By 1980 many cartographers had turned overtly hostile to his claims, particularly in response to an attack Peters made on the profession in "The New Cartography".
This attack galled the cartographic community. Their most emphatic refutation of Peters's assertions was the long list of cartographers who, over the preceding century, had formally expressed frustration at publishers' overuse of the Mercator, as noted above. Many of those cartographers had already developed projections they explicitly promoted as alternatives to the Mercator, including the most influential American cartographers of the twentieth century: John Paul Goode (Goode homolosine projection), Erwin Raisz (Armadillo projection), and Arthur H. Robinson (Robinson projection). Hence the cartographic community viewed Peters's narrative as ahistorical and mean-spirited.
The two camps never made any real attempts toward reconciliation. The Peters camp largely ignored the protests of the cartographers. Peters maintained there should be "one map for one world"—his—and did not acknowledge the prior art of Gall until the controversy had largely run its course, late in his life. While Peters likely reinvented the projection independently, his unscholarly conduct and refusal to engage the cartographic community undoubtedly contributed to the polarization and impasse.
Frustrated by some very visible successes and mounting publicity stirred up by the industry that had sprung up around the Peters map, the cartographic community began to plan more coordinated efforts to restore balance, as they saw it. The 1980s saw a flurry of literature directed against the Peters phenomenon. Though Peters's map was not singled out, the controversy motivated the American Cartographic Association (now Cartography and Geographic Information Society) to produce a series of booklets (including "Which Map Is Best") designed to educate the public about map projections and distortion in maps. In 1989 and 1990, after some internal debate, seven North American geographic organizations adopted a resolution that rejected all rectangular world maps, a category that includes both the Mercator and the Gall–Peters projections.
One map society, the North American Cartographic Information Society (NACIS), declined to endorse the 1989 resolution, although no reasons were given.
The geographic and cartographic communities did not unanimously disparage the Peters World Map. Some cartographers, including J. Brian Harley, have credited the Peters phenomenon with demonstrating the social implications of map projections, at the very least. Crampton sees the condemnation from the cartographic community as reactionary and perhaps demonstrative of immaturity in the profession, given that all maps are political. Denis Wood sees the map as one of many useful tools. Lastly, Terry Hardaker of Oxford Cartographers Limited, sympathetic to Peters's mission, became the map's official cartographer when Peters, overwhelmed by the technical aspects of cartography, sought to pass on those responsibilities.
|
https://en.wikipedia.org/wiki?curid=12990
|
Gram Parsons
Ingram Cecil Connor III (November 5, 1946 – September 19, 1973), known professionally as Gram Parsons, was an American singer, songwriter, guitarist and pianist. Parsons recorded as a solo artist and with the International Submarine Band, the Byrds and the Flying Burrito Brothers. He popularized what he called "Cosmic American Music", a hybrid of country, rhythm and blues, soul, folk, and rock.
Parsons was born in Winter Haven, Florida and developed an interest in country music while attending Harvard University. He founded the International Submarine Band in 1966, but the group disbanded prior to the 1968 release of its debut album, "Safe at Home". Parsons joined The Byrds in early 1968 and played a pivotal role in the making of the seminal "Sweetheart of the Rodeo" album. After leaving the group in late 1968, Parsons and fellow Byrd Chris Hillman formed The Flying Burrito Brothers in 1969; the band released its debut, "The Gilded Palace of Sin", the same year. The album was well received but failed commercially. After a sloppy cross-country tour, the band hastily recorded "Burrito Deluxe". Parsons was fired from the band before the album's release in early 1970. Emmylou Harris assisted him on vocals for his first solo record, "GP", released in 1973. Although it received enthusiastic reviews, the release failed to chart. His next album, "Grievous Angel", peaked at number 195 on the "Billboard" chart. His health deteriorated due to several years of drug abuse and he died in 1973 at the age of 26.
Parsons's relatively short career was described by AllMusic as "enormously influential" for country and rock, "blending the two genres to the point that they became indistinguishable from each other." He has been credited with helping to found the country rock and alt-country genres. His posthumous honors include the Americana Music Association "President's Award" for 2003 and a ranking at No. 87 on "Rolling Stone"'s list of the "100 Greatest Artists of All Time."
Ingram Cecil Connor III was born on November 5, 1946, in Winter Haven, Florida, to Ingram Cecil "Coon Dog" Connor II (1917–1958) and Avis (née Snively) Connor (1923–1965). The Connors normally resided at their main residence in Waycross, Georgia, but Avis returned to her hometown in Florida to give birth. She was the daughter of citrus fruit magnate John A. Snively, who held extensive properties in Winter Haven and in Waycross. Gram's father, Ingram Connor II, was a famous World War II flying ace, decorated with the Air Medal, who was present at the 1941 attack on Pearl Harbor. Biographer David Meyer characterized these parents as loving; he wrote in "Twenty Thousand Roads" that they are "remembered as affectionate parents and a loving couple".
However, he also notes that "unhappiness was eating away at the Connor family": Avis suffered from depression, and both parents were alcoholics. Ingram Connor II committed suicide two days before Christmas in 1958, devastating the 12-year-old Gram and his younger sister, also named Avis. Avis subsequently married Robert Parsons, who adopted Gram and his sister; they took his surname.
Gram Parsons briefly attended the prestigious Bolles School in Jacksonville, Florida, before transferring to the public Winter Haven High School; after failing his junior year, he returned to Bolles (which had converted from a military to a liberal arts curriculum amid the incipient Vietnam War). For a time, the family found a stability of sorts. They were torn apart in early 1965, when Robert became embroiled in an extramarital affair and Avis' heavy drinking led to her death from cirrhosis on June 5, 1965, the day of Gram's graduation from Bolles.
As his family was disintegrating around him, Parsons developed strong musical interests, particularly after seeing Elvis Presley perform in concert on February 22, 1956, in Waycross. Five years later, barely in his teens, he played in rock and roll cover bands such as the Pacers and the Legends, headlining in clubs owned by his stepfather in the Winter Haven/Polk County area. By the age of 16, he graduated to folk music, and in 1963 he teamed up with his first professional outfit, the Shilohs, in Greenville, South Carolina. Heavily influenced by The Kingston Trio and The Journeymen, the band played hootenannies, coffee houses and high school auditoriums; as Parsons was still enrolled in prep school, he only performed with the group in select engagements. Forays into New York City (where Parsons briefly lived with a female folk singer in a loft on Houston Street) included a performance at Florida's exhibition in the 1964 New York World's Fair and regular appearances at the Café Rafio on Bleecker Street in Greenwich Village in the summer of 1964. Although John Phillips (an acquaintance of Shiloh George Wrigley) arranged an exploratory meeting with Albert Grossman, the impresario balked at booking the group for a Christmas engagement at The Bitter End when he discovered that the Shilohs were still high school students. Following a recording session at the radio station of Bob Jones University, the group reached a creative impasse amid the emergence of folk rock and dissolved in the spring of 1965.
Despite his middling grades and test scores, Parsons was admitted to Harvard University's class of 1969 on the basis of a strong admissions essay. Although he claimed to have studied theology (an oblique reference to his close friendship with his residential tutor, Harvard Divinity School graduate student Jet Thomas) in subsequent interviews, Parsons seldom attended his general education courses before departing in early 1966 after one semester. He did not become seriously interested in country music until his time at Harvard, where he heard Merle Haggard for the first time.
In 1966, he and other musicians from the Boston folk scene formed a group called the International Submarine Band. After briefly residing in the Kingsbridge section of the Bronx, they relocated to Los Angeles the following year. Following several lineup changes, the band signed to Lee Hazlewood's LHI Records, where they spent late 1967 recording "Safe at Home". The album contains one of Parsons' best-known songs, "Luxury Liner", and an early version of "Do You Know How It Feels", which he revised later in his career. "Safe at Home" would remain unreleased until mid-1968, by which time the International Submarine Band had broken up.
By 1968, Parsons had come to the attention of The Byrds' bassist, Chris Hillman, via business manager Larry Spector as a possible replacement band member following the departures of David Crosby and Michael Clarke from the group in late 1967. Parsons had been acquainted with Hillman since the pair had met in a bank during 1967 and in February 1968 he passed an audition for the band, being initially recruited as a jazz pianist but soon switching to rhythm guitar and vocals.
Although Parsons was an equal contributor to the band, he was not regarded as a full member of The Byrds by the band's record label, Columbia Records. Consequently, when the Byrds' Columbia recording contract was renewed on February 29, 1968, it was only original members Roger McGuinn and Chris Hillman who signed it. Parsons, like fellow new recruit Kevin Kelley, was hired as a sideman and received a salary from McGuinn and Hillman. In later years, this led Hillman to state, "Gram was hired. He was not a member of The Byrds, ever. He was on salary, that was the only way we could get him to turn up." However, these comments overlook the fact that Parsons, like Kelley, was considered a bona fide member of the band during 1968 and, as such, was given equal billing alongside McGuinn, Hillman, and Kelley on the "Sweetheart of the Rodeo" album and in contemporary press coverage of the band.
"Sweetheart of the Rodeo" was originally conceived by band leader Roger McGuinn as a sprawling, double album history of American popular music. It was to begin with bluegrass music, then move through country and western, jazz, rhythm and blues, and rock music, before finally ending with the most advanced (for the time) form of electronic music. However, as recording plans were made, Parsons exerted a controlling influence over the group, persuading the other members to leave Los Angeles and record the album in Nashville, Tennessee. Along the way, McGuinn's original album concept was jettisoned in favor of a fully fledged country project, which included Parsons' songs such as "One Hundred Years from Now" and "Hickory Wind", along with compositions by Bob Dylan, Woody Guthrie, Merle Haggard, and others.
Recording sessions for "Sweetheart of the Rodeo" commenced at Columbia Records' recording studios in the Music Row area of Nashville on March 9, 1968. Midway through, the sessions moved to Columbia Studios, Hollywood, Los Angeles. They finally came to a close on May 27, 1968. However, Parsons was still under contract to LHI Records and consequently, Hazlewood contested Parsons' appearance on the album and threatened legal action. As a result, McGuinn ended up replacing three of Parsons' lead vocals with his own singing on the finished album, a move that still rankled Parsons as late as 1973, when he told Cameron Crowe in an interview that McGuinn "erased it and did the vocals himself and fucked it up." However, Parsons is still featured as lead vocalist on the songs "You're Still on My Mind", "Life in Prison", and "Hickory Wind".
While in England with The Byrds in the summer of 1968, Parsons left the band due to his concerns over a planned concert tour of South Africa, and after speaking to Mick Jagger and Keith Richards about the tour, he cited opposition to that country's apartheid policies. There has been some doubt expressed by Hillman over the sincerity of Parsons' protest. It appears that Parsons was mostly apolitical, although he did refer to one of the younger African-American butlers in the Connor household as being "like a brother" to him in an interview. During this period Parsons became acquainted with Mick Jagger and Keith Richards of The Rolling Stones. Before Parsons' departure from The Byrds, he had accompanied the two Rolling Stones to Stonehenge (along with McGuinn and Hillman) in the English county of Wiltshire. Immediately after leaving the band, Parsons stayed at Richards' house and the pair developed a close friendship over the next few years, with Parsons reintroducing the guitarist to country music. According to Stones' confidant and close friend of Parsons, Phil Kaufman, the two would sit around for hours playing obscure country records and trading off on various songs with their guitars.
Returning to Los Angeles, Parsons sought out Hillman, and the two formed The Flying Burrito Brothers with bassist Chris Ethridge and pedal steel player Sneaky Pete Kleinow. Their 1969 album "The Gilded Palace of Sin" marked the culmination of Parsons' post-1966 musical vision: a modernized variant of the Bakersfield sound that was popularized by Buck Owens amalgamated with strands of soul and psychedelic rock. The band appeared on the album cover wearing Nudie suits emblazoned with all sorts of hippie accoutrements, including marijuana, Tuinal and Seconal-inspired patches on Parsons' suit. Along with the Parsons-Hillman originals "Christine's Tune" and "Sin City" were versions of the soul music classics "The Dark End of the Street" and "Do Right Woman", the latter featuring David Crosby on high harmony. The album's original songs were the result of a very productive songwriting partnership between Parsons and Hillman, who were sharing a bachelor pad in the San Fernando Valley during this period. The atypically pronounced (for Parsons) gospel-soul influence on this album likely evolved from the ecumenical tastes of bassist Chris Ethridge (who co-wrote "Hot Burrito No. 1 [I'm Your Toy]" and "Hot Burrito No. 2" with Parsons) and frequent jamming with Delaney & Bonnie and Richards during the album's gestation.
Original drummer Eddie Hoh (best known for his work with The Monkees and Al Kooper) proved to be unable to perform adequate takes due to an incipient substance abuse problem and was dismissed after two songs, leading the group to record the remainder of the album with a variety of session drummers, including former International Submarine Band drummer Jon Corneal (who briefly joined the group as an official member, appearing on a plurality of the tracks) and Popeye Phillips of Dr. Hook & the Medicine Show. Before commencing live performances, the group ultimately settled upon original Byrds drummer Michael Clarke. Technically maladroit in comparison to his predecessors, Clarke's striking physical appearance proved to be the primary criterion in this decision; an associate of the band would later recall that "the Burritos had to be pretty" and "Corneal didn't fit" from that standpoint.
While unsuccessful from a commercial standpoint, the album was measured by rock critic Robert Christgau as "an ominous, obsessive, tongue-in-cheek country-rock synthesis, absorbing rural and urban, traditional and contemporary, at point of impact." Embarking on a cross-country tour via train, as Parsons suffered from periodic bouts of fear of flying, the group squandered most of their money in a perpetual poker game and received bewildered reactions in most cities. Parsons was frequently indulging in massive quantities of psilocybin and cocaine, so his performances were erratic at best, while much of the band's repertoire consisted of vintage honky-tonk and soul standards with few originals. Perhaps the most successful appearance occurred in Philadelphia, where the group opened for the reconstituted Byrds. Midway through their set, Parsons joined the headline act and fronted his former group on renditions of "Hickory Wind" and "You Don't Miss Your Water". The other Burritos surfaced with the exception of Clarke, and the joint aggregation played several songs, including "Long Black Veil" and "Goin' Back".
The Flying Burrito Brothers appeared at the Sky River Rock Festival in Tenino, Washington, at the end of August.
After returning to Los Angeles, the group recorded "The Train Song", written during an increasingly infrequent songwriting session on the train and produced by 1950s R&B legends Larry Williams and Johnny "Guitar" Watson. Despite a request from the Burritos that the remnants of their publicity budget be diverted to promotion of the single, it also flopped. During this period, Ethridge realized that he did not share Parsons' and Hillman's affinity for country music, precipitating his departure shortly thereafter. He was replaced by lead guitarist Bernie Leadon, while Hillman reverted to bass.
By this time, Parsons's own use of drugs had increased so much that new songs were rare and much of his time was diverted to partying with the Stones, who briefly relocated to America in the summer of 1969 to finish their forthcoming "Let It Bleed" album and prepare for an autumn cross-country tour, their first series of regular live engagements in over two years. As they prepared to play the nation's largest basketball arenas and early stadium concerts, the Burritos played to dwindling nightclub audiences; on one occasion, Jagger had to beseech Parsons to fulfill an obligation to his group. As Parsons "became a trust-fund baby when he came of age," he was still receiving about $30,000 per year (equivalent to $210,000 in 2018) from his family trust during this period, "distinguishing him from his many hungry, hard-scrabble peers."
However, the singer's dedication to the Rolling Stones was rewarded when the Burrito Brothers were booked as one of the acts at the infamous Altamont Music Festival. Playing a short set including "Six Days on the Road" and "Bony Moronie", Parsons left on one of the final helicopters and attempted to seduce Michelle Phillips. "Six Days..." was included in "Gimme Shelter", a documentary of the event.
With mounting debt incurred, A&M hoped to recoup some of their losses by marketing the Burritos as a straight country group. To this end, manager Jim Dickson instigated a loose session where the band recorded several honky tonk staples from their live act, contemporary pop covers in a countrified vein ("To Love Somebody", "Lodi", "I Shall Be Released", "Honky Tonk Women"), and Larry Williams' "Bony Moronie". This was soon scrapped in favor of a second album of originals on an extremely reduced budget.
Faced with a dearth of new material, most of the album was hastily written in the studio by Leadon, Hillman, and Parsons, with two "Gilded Palace of Sin" outtakes thrown into the mix. The resulting album, entitled "Burrito Deluxe", was released in April 1970. Although it is considered less inspired than its predecessor, it is notable for the Parsons-Hillman-Leadon song "Older Guys" and for its take on Jagger and Richards' "Wild Horses", the first recording released of this famous song. Parsons was inspired to cover the song after hearing an advance tape of the "Sticky Fingers" track sent to Kleinow, who was scheduled to overdub a pedal steel part; although Kleinow's part was not included on the released Rolling Stones version, it is available on bootlegs. Ultimately—and to the chagrin of Hillman, who was not keen on the song amid the band's creative malaise—Jagger and Richards consented to the cover version.
Like its predecessor, "Burrito Deluxe" underperformed commercially but also failed to carry the critical cachet of the debut. Disenchanted with the band, Parsons left the Burritos in mutual agreement with Hillman, who was long fatigued by his friend's unprofessionalism. Under Hillman's direction, the group recorded one more studio album before dissolving in the autumn of 1971.
In a recent interview with "American Songwriter" Chris Hillman explained that "[t]he greatest legacy of the Flying Burrito Brothers and Gram is we were the alternative country band. We couldn't get on country radio and we couldn't get on rock radio! We were the outlaw country band for a brief period."
Parsons signed a solo deal with A&M Records and moved in with producer Terry Melcher in early 1970. Melcher, who had worked with The Byrds and The Beach Boys, was a member of the successful duo Bruce & Terry, also known as The Rip Chords. The two shared a mutual penchant for cocaine and heroin, and as a result, the sessions were largely unproductive, with Parsons eventually losing interest in the project. "Terry loved Gram and wanted to produce him ... But neither of them could get anything done," recalled writer and mutual friend Eve Babitz. "Long lost, the tapes from this session have gathered a legendary patina," writes David Meyer. The recording stalled, and the master tapes were checked out, but there is conflict as to whether "Gram ... or Melcher took them".
He then accompanied the Rolling Stones on their 1971 U.K. tour in the hope of being signed to the newly formed Rolling Stones Records; by this juncture, Parsons and Richards had mulled the possibility of recording a duo album. Moving into Villa Nellcôte with the guitarist during the sessions for "Exile on Main Street" that commenced thereafter, Parsons remained in a consistently incapacitated state and frequently quarreled with his much younger girlfriend, aspiring actress Gretchen Burrell. Eventually, Parsons was asked to leave by Anita Pallenberg, Richards' longtime domestic partner. Decades later, Richards suggested in his memoir that Jagger may have been the impetus for Parsons' departure because Richards was spending so much time playing music with Parsons. Rumors have persisted that he appears somewhere on the legendary album, and while Richards concedes that it is very likely he is among the chorus of singers on "Sweet Virginia", this has never been substantiated. Parsons attempted to rekindle his relationship with the band on their 1972 American tour to no avail.
After leaving the Stones' camp, Parsons married Burrell in 1971 at his stepfather's New Orleans estate. Allegedly, the relationship was far from stable, with Burrell cutting a needy and jealous figure while Parsons quashed her burgeoning film career. Many of the singer's closest associates and friends claim that Parsons was preparing to commence divorce proceedings at the time of his death; the couple had already separated by this point.
Parsons and Burrell enjoyed the most idyllic time of their relationship in the second half of 1971, visiting old cohorts like Ian Dunlop and Family/Blind Faith/Traffic member Ric Grech in England. With the assistance of Grech and one of the bassist's friends, a doctor who also dabbled in country music and is now known as Hank Wangford, Parsons eventually stopped taking heroin; a previous treatment suggested by William Burroughs proved unsuccessful.
He returned to the US for a one-off concert with the Burritos, and at Hillman's request went to hear Emmylou Harris sing in a small club in Washington, D.C. They befriended each other and, within a year, he asked her to join him in Los Angeles for another attempt to record his first solo album. It came as a surprise to many when Parsons was enthusiastically signed to Reprise Records by Mo Ostin in mid-1972. The ensuing "GP" (1973) featured several members of Elvis Presley's TCB Band, led by lead guitarist James Burton. It included six new songs from a creatively revitalized Parsons alongside several country covers, including Tompall Glaser's "Streets of Baltimore" and George Jones' "That's All It Took".
Parsons, by now featuring Harris as his duet partner, toured across the United States as Gram Parsons and the Fallen Angels in February–March 1973. Unable to afford the services of the TCB Band for a month, the group featured the talents of Colorado-based rock guitarist Jock Bartley (soon to climb to fame with Firefall), veteran Nashville session musician Neil Flanz on pedal steel, eclectic bassist Kyle Tullis (best known for his work with Dolly Parton and Larry Coryell) and former Mountain drummer N.D. Smart. The touring party also included Gretchen Parsons—by this point extremely envious of Harris—and Harris' young daughter. Coordinating the spectacle as road manager was Phil Kaufman, who had served time with Charles Manson on Terminal Island in the mid-sixties and first met Parsons while working for the Stones in 1968. Kaufman ensured that the performer stayed away from substance abuse, limiting his alcohol intake during shows and throwing out any drugs smuggled into hotel rooms. At first, the band was under-rehearsed and played poorly; however, they improved markedly with steady gigging and received rapturous responses at several leading countercultural venues, including Armadillo World Headquarters in Austin, Texas, Max's Kansas City in New York City, and Liberty Hall in Houston, Texas (where Neil Young and Linda Ronstadt sat in for a filmed performance). According to a number of sources, it was Harris who forced the band to practice and work up an actual set list. Nevertheless, the tour failed to galvanize sales of "GP", which never charted in the "Billboard" 200.
For his next and final album, 1974's posthumously released "Grievous Angel", he again used Harris and members of the TCB Band for the sessions. The record received even more enthusiastic reviews than had "GP", and has since attained classic status. Its most celebrated song is a Parsons-Harris duet cover of "Love Hurts," a song that remains in Harris' solo repertoire. Notable Parsons-penned songs included "$1000 Wedding," a holdover from the Burrito Brothers era, and "Brass Buttons," a 1965 opus that addressed his mother's alcoholism. A new version of "Hickory Wind" was included, while "Ooh Las Vegas," co-written with Grech, dated from the "GP" sessions. Although Parsons only contributed two new songs to the album ("In My Hour of Darkness" and "Return of the Grievous Angel"), he was highly enthused with his new sound and seemed to have finally adopted a diligent mindset to his musical career, limiting his intake of alcohol and opiates during most of the sessions.
Before recording, Parsons and Harris played a preliminary four-show mini-tour as the headline act in a June 1973 Warner Brothers country rock package with the New Kentucky Colonels and Country Gazette. A shared backing band included former Byrds lead guitarist and Kentucky Colonel Clarence White, Pete Kleinow, and Chris Ethridge. On July 14, 1973, White was killed by a drunk driver in Palmdale, California, while loading equipment in his car for a concert with the New Kentucky Colonels. At White's funeral, Parsons and Bernie Leadon launched into an impromptu touching rendition of "Farther Along"; that evening, Parsons reportedly informed Phil Kaufman of his final wish: to be cremated in Joshua Tree. Despite the almost insurmountable setback, Parsons, Harris, and the other musicians decided to continue with plans for a fall tour.
In the summer of 1973, Parsons' Topanga Canyon home burned to the ground, the result of a stray cigarette. Nearly all of his possessions were destroyed with the exception of a guitar and a prized Jaguar automobile. The fire proved to be the last straw in the relationship between Burrell and Parsons, who moved into a spare room in Kaufman's house. While not recording, he frequently hung out and jammed with members of New Jersey–based country rockers Quacky Duck and His Barnyard Friends and the proto-punk Jonathan Richman & the Modern Lovers, who were represented by former Byrds manager Eddie Tickner.
Before formally breaking up with Burrell, Parsons already had a woman waiting in the wings. While recording, he saw a photo of a beautiful woman at a friend's home and was instantly smitten. The woman turned out to be Margaret Fisher, a high school sweetheart of the singer from his Waycross, Georgia, days. Like Parsons, Fisher had drifted west and became established in the Bay Area rock scene. A meeting was arranged and the two instantly rekindled their relationship, with Fisher dividing her weeks between Los Angeles and San Francisco at Parsons' expense.
In the late 1960s, Parsons became enamored of and began to vacation at Joshua Tree National Monument in southeastern California, where he frequently partook in psychedelics and reportedly experienced several UFO sightings. After splitting from Burrell, Parsons often spent his weekends in the area with Margaret Fisher and Phil Kaufman, with whom he had been living. Scheduled to resume touring in October 1973, Parsons decided to go on another recuperative excursion on September 17. Accompanying him were Fisher, personal assistant Michael Martin, and Dale McElroy, Martin's girlfriend. Kaufman later declared that Parsons' attorney was preparing divorce papers for him to serve to Burrell while the singer remained in Joshua Tree on September 20.
During the trip, Parsons often retreated to the desert, while the group visited bars in the nearby hamlet of Yucca Valley, California, on both nights of their stay. Parsons consumed large amounts of alcohol and barbiturates. On September 18, Martin drove back to Los Angeles to resupply the group with marijuana. That night, after challenging Fisher and McElroy to drink with him (Fisher did not like alcohol and McElroy was recovering from a bout of hepatitis), he said, "I'll drink for the three of us," and proceeded to drink six double tequilas. They then returned to the Joshua Tree Inn, where Parsons purchased morphine from an unknown young woman. After being injected by her in room #8, he overdosed. Fisher gave Parsons an ice-cube suppository and, later, a cold shower. Instead of moving Parsons around the room, she put him to bed and went out to buy coffee in the hope of reviving him, leaving McElroy to stand watch. As his respiration became irregular and later ceased, McElroy attempted resuscitation. Her efforts failed and Fisher, watching from outside, was visibly alarmed. After further failed attempts, they decided to call an ambulance. Parsons was declared dead on his arrival at High Desert Memorial Hospital at 12:15 a.m. on September 19, 1973, in Yucca Valley. The official cause of death was an overdose of morphine and alcohol.
According to Fisher in the 2005 biography "Grievous Angel: An Intimate Biography of Gram Parsons," the amount of morphine consumed by Parsons would be lethal to three regular users; thus, he had likely overestimated his tolerance in light of his diminished intake despite his extensive experience with opiates. Keith Richards stated in the 2004 documentary film "Fallen Angel" that Parsons understood the danger of combining opiates and alcohol and should have known better. Upon Parsons' death, Fisher and McElroy were returned to Los Angeles by Kaufman, who dispersed the remnants of Parsons' drugs in the desert.
Before his death, Parsons stated that he wanted his body cremated at Joshua Tree and his ashes spread over Cap Rock, a prominent natural feature there; however, Parsons' stepfather Bob organized a private ceremony back in New Orleans and neglected to invite any of his friends from the music industry. Two accounts state that Bob Parsons stood to inherit Gram's share of his grandfather's estate if he could prove that Gram was a resident of Louisiana, explaining his eagerness to have him buried there.
To fulfill Parsons' funeral wishes, Kaufman and a friend stole his body from Los Angeles International Airport and, in a borrowed hearse, drove it to Joshua Tree. Upon reaching the Cap Rock section of the park, they attempted to cremate Parsons' body by pouring five gallons of gasoline into the open coffin and throwing a lit match inside, producing an enormous fireball. By one account, the police gave chase but, as the telling puts it, the men "were unencumbered by sobriety," and they escaped. Another account holds that the police did not give chase, but that Kaufman and his friend were stopped for a suspected open-container or drunk-driving violation and somehow avoided arrest at the scene.
The two were arrested several days later. Since there was no law against stealing a dead body, they were only fined $750 for stealing the coffin and were not prosecuted for leaving his charred remains in the desert. What remained of Parsons' body was eventually buried in Garden of Memories Cemetery in Metairie, Louisiana.
The site of Parsons' cremation is today known as The Cap Rock Parking Lot. A local myth in circulation brings Parsons fans out to a large rock flake known to rock climbers as "The Gram Parsons Memorial Hand Traverse". This myth was popularized when someone added a slab marking Parsons' cremation to the memorial rock. The slab has since been removed by the U.S. National Park Service and relocated to the Joshua Tree Inn. There is no monument at Cap Rock noting Parsons' cremation at the site. Joshua Tree park guides are given the option to tell the story of Parsons' cremation during tours, but there is no mention of the act in official maps or brochures. Fans regularly assemble simple rock structures and writings on the rock, which the park service periodically removes. The area that fans regularly graffiti in memory of Gram Parsons is also a Native American cultural site with rich tribal history.
Stephen Thomas Erlewine of AllMusic describes Parsons as "enormously influential" for both country and rock, "blending the two genres to the point that they became indistinguishable from each other. ... His influence could still be heard well into the next millennium." In his 2005 essay on Parsons for "Rolling Stone" magazine's "100 Greatest Artists" list, Keith Richards notes that Parsons' recorded music output was "pretty minimal." Nevertheless, Richards claims that Parsons' "effect on country music is enormous" and adds that this is "why we're talking about him now."
The 2003 film "Grand Theft Parsons" stars Johnny Knoxville as Phil Kaufman and chronicles a farcical version of the theft of Parsons' corpse. In 2006, the Gandulf Hennig-directed documentary film titled "Gram Parsons: Fallen Angel" was released.
Emmylou Harris has continued to champion Parsons' work throughout her career, covering a number of his songs over the years, including "Hickory Wind", "Wheels", "Sin City", "Luxury Liner", and "Hot Burrito No. 2". Harris's songs "Boulder to Birmingham", from her 1975 album "Pieces of the Sky", and "The Road", from her 2011 album "Hard Bargain", are tributes to Parsons. In addition, her 1985 album "The Ballad of Sally Rose" is an original concept album that includes many allusions to Parsons in its narrative. The song "My Man", written by Bernie Leadon and performed by the Eagles on their album "On the Border", is a tribute to Gram Parsons. Both Leadon and Parsons were members of the Flying Burrito Brothers during the late 1960s and early 1970s.
The 1973 album "Crazy Eyes" by Poco pays homage to Parsons: Richie Furay composed the title track in his honor and sings one of Parsons' own compositions, "Brass Buttons." The album was released four days before Parsons died.
A music festival called Gram Fest or the Cosmic American Music Festival was held annually in Parsons' honor in Joshua Tree, California, between 1996 and 2006. The show featured songs written by Gram Parsons and Gene Clark as well as influential songs and musical styles from other artists of that era, and performers were also encouraged to showcase their own material; the underlying aim of the event was to inspire performers to take these musical styles to the next stage of the creative process. Past concerts featured such notable artists as Sneaky Pete Kleinow, Chris Ethridge, Spooner Oldham, John Molo, Jack Royerton, Gib Guilbeau, Counting Crows, Bob Warford, Rosie Flores, David Lowery, Barry and Holly Tashian, George Tomsco, Jann Browne, Lucinda Williams, Polly Parsons, the "Road Mangler" Phil Kaufman, Ben Fong-Torres, Victoria Williams, Mark Olson, and Sid Griffin, along with many other bands over the two- or three-day event. In addition, the Gram Parsons Tribute, in Waycross, Georgia, is a music festival remembering Parsons in the town in which he grew up. Additional tributes spring up every year, among the more recent being the Southern California "Gram On!" celebration of Parsons' life and legacy, held by The Rickenbastards in July 2013.
In February 2008, Parsons' protégée, Emmylou Harris, was inducted into the Country Music Hall of Fame. Despite his influence, however, Parsons has yet to be inducted. Radley Balko has written that "Parsons may be the most influential artist yet to be inducted to either the Rock and Roll or Country Music Hall(s) of Fame. And it's a damned shame." The Gram Parsons Petition Project (now Gram Parsons InterNational) was begun in May 2008 in support of an ongoing drive to induct Parsons into the Country Music Hall of Fame. On September 19, 2008, the 35th anniversary of Parsons' death, it was first presented to the Country Music Association (CMA) and the Hall as a "List of Supporters" together with the official Nomination Proposal. The online List of Supporters reached 10,000 on the 40th anniversary of his death, with nearly 14,000 currently listed. Annual Gram Parsons InterNational concerts in Nashville and various other cities, now in their 12th year, support the petition cause.
In November 2009, the musical theatre production "Grievous Angel: The Legend of Gram Parsons" premiered, starring Anders Drerup as Gram Parsons and Kelly Prescott as Emmylou Harris. Directed by Michael Bate and co-written by Bate and David McDonald, the production was inspired by a March 1973 interview that Bate conducted with Parsons, which became Parsons' last recorded conversation.
In 2012, Swedish folk duo First Aid Kit released the single "Emmylou" from the album "The Lion's Roar". The song's chorus is a lyrical acknowledgment of the Gram Parsons and Emmylou Harris singing partnership and of the romantic relationship between them that never fully developed before his death.
In the fall of 2012, Florida festival promoter and musician Randy Judy presented his bio-musical "Farther Along – The Music and Life of Gram Parsons" at Magnoliafest at the Spirit of the Suwannee Music Park.
A Cleveland, Ohio area band, New Soft Shoe, performs as a tribute band to Parsons' music.
A St. Paul, Minnesota band, The Gilded Palace Sinners, is another Parsons tribute group.
|
https://en.wikipedia.org/wiki?curid=12991
|
Go-fast boat
A go-fast boat is a small, fast boat designed with a long narrow platform and a planing hull to enable it to reach high speeds.
During the era of Prohibition in the United States, these boats joined the ranks of "rum-runners" transferring illegal liquor from larger vessels waiting outside US territorial waters to the mainland. The high speed of such craft enabled them to avoid interception by the Coast Guard. More recently, the term "cigarette boat" has replaced the term "rum-runner". Cigarette boats of the present era, dating from the 1960s, owe much of their design to boats built for offshore powerboat racing, particularly by designer and builder Donald Aronow. During this period, these boats were used by drug smugglers to transfer drugs across the Caribbean to the United States.
A typical go-fast is laid-up using a combination of fibreglass, kevlar and carbon fibre, using a deep "V" style offshore racing hull ranging from long, narrow in beam, and equipped with two or more powerful engines, often totalling more than . The boats can typically travel at speeds over in calm waters, over in choppy waters, and maintain in the average Caribbean seas. They are heavy enough to cut through higher waves, although slower.
Reflecting their racing heritage, accommodations on these boats, which carry five passengers or fewer, are minimal. A small, low cabin under the foredeck is typical, much smaller than on a motor yacht of similar size. Beyond racing, most buyers choose these boats for their mystique, immense power, high top speeds, and sleek shape.
These boats are difficult to detect by radar except on flat, calm seas or at close range. The United States Coast Guard and the DEA found them to be stealthy, fast, seaworthy, and very difficult to intercept using conventional craft. Because of this, coast guard services have developed their own high-speed craft and use helicopters equipped with anti-materiel rifles to disable the engines of fleeing boats. The US Coast Guard's go-fast boat is a rigid-hulled inflatable boat (RHIB) equipped with radar and powerful engines. The RHIB is armed with several types of non-lethal weapons and an M240 general-purpose machine gun.
|
https://en.wikipedia.org/wiki?curid=12992
|
Glasgow City Chambers
The City Chambers or Municipal Buildings in Glasgow, Scotland, has functioned as the headquarters of Glasgow City Council since 1996, and of preceding forms of municipal government in the city since 1889. It is located on the eastern side of the city's George Square. It is a Category A listed building.
The need for a new city chambers had been apparent since the 18th century, with the old Tolbooth at Glasgow Cross becoming insufficient for the purposes of civic government in a growing town with greater political responsibilities. In 1814, the Tolbooth was sold – with the exception of the steeple, which still remains – and the council chambers moved to Jail Square in the Saltmarket, near Glasgow Green. Subsequent moves were made to Wilson Street and Ingram Street. In the early 1880s, City Architect John Carrick was asked to identify a suitable site for a purpose-built City Council Chambers. Carrick identified the east side of George Square, which was then bought.
Following a design competition, the building was designed by the Scottish architect William Young in the Victorian style and construction started in 1882. The building was inaugurated by Queen Victoria in August 1888 and the first council meeting held within the chambers took place in October 1889. An extension connected by pairs of archways across John Street was completed in 1912 and Exchange House in George Street was completed in the mid-1980s.
The new City Chambers initially housed Glasgow Town Council from 1888 to 1895, when that body was replaced by Glasgow Corporation. The building remained the Corporation's headquarters until the Corporation was replaced by Glasgow District Council, under the wider Strathclyde Regional Council, in May 1975. It then remained the Glasgow District Council headquarters until the abolition of the Strathclyde Region led to the formation of Glasgow City Council in April 1996.
The building is in the Beaux-Arts style, an interpretation of Renaissance Classicism incorporating Italianate styles with a vast range of ornate decoration, used to express the wealth and industrial export-led economic prosperity of the Second City of the Empire. The exterior sculpture, by James Alexander Ewing, included the central Jubilee Pediment as its centrepiece. Although originally intended to feature a figure symbolising Glasgow 'with the Clyde at her feet sending her manufactures to all the world', the Pediment was redesigned to celebrate Queen Victoria's Golden Jubilee. It depicts Victoria enthroned, surrounded by emblematic figures of Scotland, England, Ireland and Wales, alongside the colonies of the British Empire. Ewing also designed the apex sculptures of Truth, Riches, and Honour, and the statues of The Four Seasons on the Chambers' tower. The central apex figure of Truth is popularly known as Glasgow's Statue of Liberty, because of its close resemblance to the similarly posed, but very much larger, statue in New York harbour.
The entrance hall of the Chambers displays a mosaic of the city's coat of arms on the floor. The arms reflect legends about Glasgow's patron saint, Saint Mungo, and include four emblems – the bird, tree, bell, and fish – remembered in a traditional verse.
The ornate banqueting hall is decorated with huge murals by Scottish painters. The room hosted Nelson Mandela and Sir Alex Ferguson when they received the Freedom of the City in 1993 and 1999 respectively. The Council Chamber is clad in Spanish mahogany paneling, and its windows are made of Venetian stained glass.
The building was used as a stand-in for the Kremlin in the film "An Englishman Abroad" in 1983 and as the Vatican in "Heavenly Pursuits" in 1986. It was also used for the film "The House of Mirth" in 2000 and featured more recently in the television series "Outlander".
|
https://en.wikipedia.org/wiki?curid=12994
|
Gone with the Wind (novel)
Gone with the Wind is a novel by American writer Margaret Mitchell, first published in 1936. The story is set in Clayton County and Atlanta, both in Georgia, during the American Civil War and Reconstruction Era. It depicts the struggles of young Scarlett O'Hara, the spoiled daughter of a well-to-do plantation owner, who must use every means at her disposal to claw her way out of poverty following Sherman's destructive "March to the Sea". This historical novel features a coming-of-age story, with the title taken from a poem written by Ernest Dowson.
"Gone with the Wind" was popular with American readers from the outset and was the top American fiction bestseller in 1936 and 1937. As of 2014, a Harris poll found it to be the second favorite book of American readers, just behind the Bible. More than 30 million copies have been printed worldwide.
"Gone with the Wind" is a controversial reference point for subsequent writers of the South, both black and white. Scholars at American universities refer to, interpret, and study it in their writings. The novel has been absorbed into American popular culture.
Mitchell received the Pulitzer Prize for Fiction for the book in 1937. It was adapted into the 1939 film of the same name, which is considered one of the greatest films ever made. "Gone with the Wind" is the only novel by Mitchell published during her lifetime.
Born in 1900 in Atlanta, Georgia, Margaret Mitchell was a Southerner and writer throughout her life. She grew up hearing stories about the American Civil War and the Reconstruction from her Irish-American grandmother, who had endured its suffering. Her forceful and intellectual mother was a suffragist who fought for the rights of women to vote.
As a young woman, Mitchell found love with an army lieutenant. He was killed in World War I, and she would carry his memory for the remainder of her life. After studying at Smith College for a year, during which time her mother died from the 1918 pandemic flu, Mitchell returned to Atlanta. She married, but her husband was an abusive bootlegger. Mitchell took a job writing feature articles for the "Atlanta Journal" at a time when Atlanta debutantes of her class did not work. After divorcing her first husband, she married again, this time to a man who shared her interest in writing and literature. He had also been the best man at her first wedding.
Margaret Mitchell began writing "Gone with the Wind" in 1926 to pass the time while recovering from a slow-healing auto-crash injury. In April 1935, Harold Latham of Macmillan, an editor looking for new fiction, read her manuscript and saw that it could be a best-seller. After Latham agreed to publish the book, Mitchell worked for another six months checking the historical references and rewriting the opening chapter several times. Mitchell and her husband John Marsh, a copy editor by trade, edited the final version of the novel. Mitchell wrote the book's final moments first and then wrote the events that led up to them. "Gone with the Wind" was published in June 1936.
The author tentatively titled the novel "Tomorrow is Another Day", from its last line. Other proposed titles included "Bugles Sang True", "Not in Our Stars", and "Tote the Weary Load". The title Mitchell finally chose is from the first line of the third stanza of the poem "Non Sum Qualis Eram Bonae sub Regno Cynarae" by Ernest Dowson:
Scarlett O'Hara uses the title phrase when she wonders to herself if her home on a plantation called "Tara" is still standing, or if it had "gone with the wind which had swept through Georgia." In a general sense, the title is a metaphor for the demise of a way of life in the South prior to the Civil War. When taken in the context of Dowson's poem about "Cynara," the phrase "gone with the wind" alludes to erotic loss. The poem expresses the regrets of someone who has lost his feelings for his "old passion," Cynara. Dowson's Cynara, a name that comes from the Greek word for artichoke, represents a lost love.
It is also possible that the author was influenced by the connection of the phrase "gone with the wind" with Tara in a line of James Joyce's "Ulysses", in the chapter "Aeolus".
"Gone with the Wind" takes place in the southern United States in the state of Georgia during the American Civil War (1861–1865) and the Reconstruction Era (1865–1877). The novel unfolds against the backdrop of rebellion seven southern states initially, including Georgia, have declared their secession from the United States (the "Union") and formed the Confederate States of America (the "Confederacy"), after Abraham Lincoln was elected president. The Union refuses to accept secession and no compromise is found as war approaches.
The novel opens on April 15, 1861, at "Tara," a plantation owned by Gerald O'Hara, an Irish immigrant who has become a successful planter, and his wife, Ellen Robillard O'Hara, from a coastal aristocratic family of French descent. Their 16-year-old daughter, Scarlett, is not beautiful, but men seldom realize it once they are caught up in her charm. All the talk is of the coming Civil War.
There are brief but vivid descriptions of the South as it began and grew, with backgrounds of the main characters: the stylish and highbrow French, the gentlemanly English, and the forced-to-flee and looked-down-upon Irish. Scarlett learns that one of her many beaux, Ashley Wilkes, will soon be engaged to his cousin, Melanie Hamilton. She is heart-stricken. The next day at the Wilkeses' barbecue at Twelve Oaks, Scarlett tells Ashley she loves him, and he admits he cares for her. However, he knows he would not be happy if married to her because of their personality differences. She loses her temper with him, and he silently takes it.
Rhett Butler, who has a reputation as a rogue, had been alone in the library when Ashley and Scarlett entered and felt it wiser to stay unseen during the argument. Rhett applauds Scarlett for the "unladylike" spirit she displayed with Ashley. Infuriated and humiliated, she tells Rhett, "You aren't fit to wipe his boots!"
After rejoining the other party guests, she learns that war has been declared and the men are going to enlist. Seeking revenge, Scarlett accepts a marriage proposal from Melanie's brother, Charles Hamilton. They marry two weeks later. Charles dies of pneumonia following the measles two months after the war begins. As a young widow, Scarlett gives birth to her first child, Wade Hampton Hamilton, named after his father's general. She is bound by custom to wear black and avoid conversation with young men. Scarlett feels restricted by these conventions and bitterly misses her life as a young, unmarried woman.
Aunt Pittypat is living with Melanie in Atlanta and invites Scarlett to stay with them, as she was Charles' wife. In Atlanta, Scarlett's spirits revive, and she is busy with hospital work and sewing circles for the Confederate Army. Scarlett encounters Rhett Butler again at a benefit dance, where he is dressed like a dandy. Although Rhett believes the war is a lost cause, he is blockade running for profit. The men must bid for a dance with a lady, and Rhett bids "one hundred fifty dollars-in gold" for a dance with Scarlett. They waltz to the tune of "When This Cruel War is Over," and Scarlett sings the words.
Others at the dance are shocked that Rhett would bid for a widow and that she would accept the dance while still wearing black (or widow's weeds). Melanie defends her, arguing she is supporting the cause for which Melanie's husband, Ashley, is fighting.
At Christmas (1863), Ashley is granted a furlough from the army. Melanie becomes pregnant with their first child.
The war is going badly for the Confederacy. By September 1864, Atlanta is besieged from three sides. The city becomes desperate and hundreds of wounded Confederate soldiers pour in. Melanie goes into labor with only the inexperienced Scarlett to assist, as all the doctors are attending the soldiers. Prissy, a young slave, cries out in despair and fear, "De Yankees is comin!" In the chaos, Scarlett, left to fend for herself, cries for the comfort and safety of her mother and Tara. The tattered Confederate States Army sets flame to Atlanta and abandons it to the Union Army.
Melanie gives birth to a boy, Beau, with Scarlett's assistance. Scarlett then finds Rhett and begs him to take herself, Wade, Melanie, Beau, and Prissy to Tara. Rhett laughs at the idea but steals an emaciated horse and a small wagon, and they follow the retreating army out of Atlanta.
Part way to Tara, Rhett has a change of heart and abandons Scarlett to enlist in the army (he later recounts that when they learned he had attended West Point, they put him in the artillery, which may have saved his life). Scarlett then makes her way to Tara, where she is welcomed on the steps by her father, Gerald. Things have drastically changed: Scarlett's mother is dead, her father has lost his mind with grief, her sisters are sick with typhoid fever, the field slaves have left after Emancipation, the Yankees have burned all the cotton, and there is no food in the house. Scarlett avows that she and her family will survive and never be hungry again.
The long, tiring struggle for survival begins, with Scarlett working in the fields. There are hungry people to feed and little food. There is the ever-present threat of the Yankees, who steal and burn. At one point, a Yankee soldier trespasses on Tara, and it is implied that he would steal from the house and possibly rape Scarlett and Melanie. Scarlett kills him with Charles' pistol, and sees that Melanie had also prepared to fight him with a sword.
After the war, a long succession of Confederate soldiers returning home stops at Tara to find food and rest. Eventually, Ashley returns from the war, his idealistic view of the world shattered.
Life at Tara slowly begins to recover, but then new taxes are levied on the plantation. Scarlett knows only one man with enough money to help her: Rhett Butler. She looks for him in Atlanta only to learn that he is in jail. Rhett refuses to give money to Scarlett, and leaving the jailhouse in fury, she runs into Frank Kennedy, who runs a store in Atlanta and is betrothed to Scarlett's sister, Suellen. Realizing Frank also has money, Scarlett hatches a plot and tells Frank that Suellen will not marry him. Frank succumbs to Scarlett's charms and marries her two weeks later, knowing he has done "something romantic and exciting for the first time in his life." Always wanting her to be happy and radiant, Frank gives Scarlett the money to pay the taxes.
While Frank has a cold and is pampered by Aunt Pittypat, Scarlett goes over the accounts at Frank's store and finds that many owe him money. Scarlett is now terrified about the taxes and decides money, a lot of it, is needed. She takes control of the store, and her business practices leave many Atlantans resentful of her. With a loan from Rhett, she buys a sawmill and runs it herself, all scandalous conduct. To Frank's relief, Scarlett learns she is pregnant, which curtails her "unladylike" activities for a while. She convinces Ashley to come to Atlanta and manage the mill, all the while still in love with him. At Melanie's urging, Ashley takes the job. Melanie becomes the center of Atlanta society, and Scarlett gives birth to Ella Lorena: "Ella for her grandmother Ellen, and Lorena because it was the most fashionable name of the day for girls."
Georgia is under martial law, and life has taken on a new and more frightening tone. For protection, Scarlett keeps Frank's pistol tucked in the upholstery of the buggy. Her trips alone to and from the mill take her past a shanty town where criminal elements live. While on her way home one evening, she is accosted by two men who try to rob her, but she escapes with the help of Big Sam, the black former foreman from Tara. Frank and the Ku Klux Klan raid the shanty town to avenge the attack on his wife, and Frank is shot dead during the raid. Scarlett is a widow again.
To keep the raiders from being arrested, Rhett puts on a charade. He walks into the Wilkeses' home with Hugh Elsing and Ashley, singing and pretending to be drunk. Yankee officers outside question Rhett, and he says he and the other men had been at Belle Watling's brothel that evening, a story Belle later confirms to the officers. The men are indebted to Rhett, and his Scallawag reputation among them improves a notch, but the men's wives, except Melanie, are livid at owing their husbands' lives to Belle Watling.
Frank Kennedy lies in a casket in the quiet stillness of the parlor in Aunt Pittypat's home. Scarlett is remorseful. She is swigging brandy from Aunt Pitty's swoon bottle when Rhett comes to call. She tells him tearfully, "I'm afraid I'll die and go to hell." He says, "Maybe there isn't a hell." Before she can cry any further, he asks her to marry him, saying, "I always intended having you, one way or another." She says she doesn't love him and doesn't want to be married again. However, he kisses her passionately, and in the heat of the moment, she agrees to marry him. One year later, Scarlett and Rhett announce their engagement, which becomes the talk of the town.
Mr. and Mrs. Butler honeymoon in New Orleans, spending lavishly. Upon returning to Atlanta, they stay in the bridal suite at the National Hotel while their new home on Peachtree Street is being built. Scarlett chooses a modern Swiss chalet style home like the one she saw in "Harper's Weekly", with red wallpaper, thick red carpet, and black walnut furniture. Rhett describes it as an "architectural horror". Shortly after they move into their new home, the sardonic jabs between them turn into full-blown quarrels. Scarlett wonders why Rhett married her. Then "with real hate in her eyes", she tells Rhett she will have a baby, which she does not want.
Wade is seven years old in 1869 when his half-sister, Eugenie Victoria, named after two queens, is born. She has blue eyes like Gerald O'Hara, and Melanie nicknames her, "Bonnie Blue," in reference to the Bonnie Blue Flag of the Confederacy.
When Scarlett is feeling well again, she makes a trip to the mill and talks to Ashley, who is alone in the office. In their conversation, she comes away believing Ashley still loves her and is jealous of her intimate relations with Rhett, which excites her. She returns home and tells Rhett she does not want more children. From then on, they sleep separately, and when Bonnie is two years old, she sleeps in a little bed beside Rhett (with the light on all night because she is afraid of the dark). Rhett turns his attention toward Bonnie, dotes on her, spoils her, and worries about her reputation when she is older.
Melanie is giving a surprise birthday party for Ashley. Scarlett goes to the mill to keep Ashley there until party time, a rare opportunity for her to see him alone. When she sees him, she feels "sixteen again, a little breathless and excited." Ashley tells her how pretty she looks, and they reminisce about the days when they were young and talk about their lives now. Suddenly, Scarlett's eyes fill with tears, and Ashley holds her head against his chest. Ashley sees his sister, India Wilkes, standing in the doorway. Before the party has even begun, a rumor of an affair between Ashley and Scarlett spreads, and Rhett and Melanie hear it. Melanie refuses to accept any criticism of her sister-in-law, and India Wilkes is banished from the Wilkeses' home for it, causing a rift in the family.
Rhett, more drunk than Scarlett has ever seen him, returns home from the party long after Scarlett. His eyes are bloodshot, and his mood is dark and violent. He enjoins Scarlett to drink with him. Not wanting him to know she is fearful of him, she throws back a drink and gets up from her chair to go back to her bedroom. He stops her and pins her shoulders to the wall. She tells him he is jealous of Ashley, and Rhett accuses her of "crying for the moon" over Ashley. He tells her they could have been happy together saying, "for I loved you and I know you." He then takes her in his arms and carries her up the stairs to her bedroom, where it is strongly implied that he rapes her—or, possibly, that they have consensual sex following the argument.
The next morning, Rhett leaves for Charleston and New Orleans with Bonnie. Scarlett finds herself missing him, but she is still unsure if Rhett loves her, having said it while drunk. She learns she is pregnant with her fourth child.
When Rhett returns, Scarlett waits for him at the top of the stairs. She wonders if Rhett will kiss her, but to her irritation, he does not. He says she looks pale. She says it's because she is pregnant. He sarcastically asks if the father is Ashley. She calls Rhett a cad and tells him no woman would want his baby. He says, "Cheer up, maybe you'll have a miscarriage." She lunges at him, but he dodges, and she tumbles backwards down the stairs. She is seriously ill for the first time in her life, having lost her child and broken her ribs. Rhett is remorseful, believing he has killed her. When Melanie goes to him in order to give him an update on Scarlett’s condition, he asks if she's asked for him. When Melanie replies she hasn't, he breaks down, finally realizing that Scarlett never really loved him. Sobbing and drunk, he buries his head in Melanie's lap and almost confesses that Scarlett truly loves Ashley, but stops himself at the last moment.
Scarlett, who is thin and pale, goes to Tara, taking Wade and Ella with her, to regain her strength and vitality from "the green cotton fields of home." When she returns healthy to Atlanta, she sells the mills to Ashley. She finds Rhett's attitude has noticeably changed. He is sober, kinder, polite—and seemingly indifferent. Though she misses the old Rhett at times, Scarlett is content to leave well enough alone.
Bonnie is four years old in 1873. Spirited and willful, she has her father wrapped around her finger and giving in to her every demand. Even Scarlett is jealous of the attention Bonnie gets. Rhett rides his horse around town with Bonnie in front of him, but Mammy insists it is not fitting for a girl to ride a horse with her dress flying up. Rhett heeds her words and buys Bonnie a Shetland pony, whom she names "Mr. Butler," and teaches her to ride sidesaddle. Then Rhett pays a boy named Wash twenty-five cents to teach Mr. Butler to jump over wood bars. When Mr. Butler is able to get his fat legs over a one-foot bar, Rhett puts Bonnie on the pony, and soon Mr. Butler is leaping bars and Aunt Melly's rose bushes.
Wearing her blue velvet riding habit with a red feather in her black hat, Bonnie pleads with her father to raise the bar to one-and-a-half feet. He gives in, warning her not to come crying if she falls. Bonnie yells to her mother, "Watch me take this one!" The pony gallops towards the wood bar, but trips over it. Bonnie breaks her neck in the fall, and dies.
In the dark days and months following Bonnie's death, Rhett is often drunk and disheveled, while Scarlett, though also deeply bereaved, seems to hold up under the strain. When Melanie Wilkes, who is pregnant again, dies unexpectedly a short time later, Rhett decides he wants only the calm dignity of the genial South he once knew in his youth and leaves Atlanta to find it. Scarlett, who finally realizes she loves him on the night of Melanie's death, confesses to him, only for Rhett to say that he gave up on her after her miscarriage, when she did not ask for him, and to deliver the book's most famous line: "My dear, I don't give a damn." Meanwhile, Scarlett dreams of the love that has eluded her for so long. However, she still has Tara and knows she can win Rhett back, because "tomorrow is another day."
Margaret Mitchell arranged "Gone with the Wind" chronologically, basing it on the life and experiences of the main character, Scarlett O'Hara, as she grew from adolescence into adulthood. During the time span of the novel, from 1861 to 1873, Scarlett ages from sixteen to twenty-eight years. This is a type of "Bildungsroman", a novel concerned with the moral and psychological growth of the protagonist from youth to adulthood (coming-of-age story). Scarlett's development is affected by the events of her time. Mitchell used a smooth linear narrative structure. The novel is known for its exceptional "readability". The plot is rich with vivid characters.
"Gone with the Wind" is often placed in the literary subgenre of the historical romance novel. Pamela Regis has argued that is more appropriately classified as a historical novel, as it does not contain all of the elements of the romance genre. The novel has also been described as an early classic of the erotic historical genre, because it is thought to contain some degree of pornography.
Slavery in the United States in "Gone with the Wind" is a backdrop to a story that is essentially about other things. Southern plantation fiction (also known as Anti-Tom literature, in reference to reactions to Harriet Beecher Stowe's anti-slavery novel, "Uncle Tom's Cabin" of 1852) from the mid-19th century, culminating in "Gone with the Wind", is written from the perspective and values of the slaveholder and tends to present slaves as docile and happy.
The characters in the novel are organized into two basic groups along class lines: the white planter class, such as Scarlett and Ashley, and the black house servant class. The slaves depicted in "Gone with the Wind" are primarily loyal house servants, such as Mammy, Pork, Prissy, and Uncle Peter. House servants are the highest "caste" of slaves in Mitchell's caste system. They choose to stay with their masters after the Emancipation Proclamation of 1863 and subsequent Thirteenth Amendment of 1865 sets them free. Of the servants who stayed at Tara, Scarlett thinks, "There were qualities of loyalty and tirelessness and love in them that no strain could break, no money could buy."
The field slaves make up the lower class in Mitchell's caste system. The field slaves from the Tara plantation and the foreman, Big Sam, are taken away by Confederate soldiers to dig ditches and never return to the plantation. Mitchell wrote that other field slaves were "loyal" and "refused to avail themselves of the new freedom", but the novel has no field slaves who stay on the plantation to work after they have been emancipated.
American William Wells Brown escaped from slavery and published his memoir, or slave narrative, in 1847. He wrote of the disparity in conditions between the house servant and the field hand:
During the time that Mr. Cook was overseer, I was a house servant—a situation preferable to a field hand, as I was better fed, better clothed, and not obliged to rise at the ringing bell, but about an half hour after. I have often laid and heard the crack of the whip, and the screams of the slave.
Although the novel is more than 1,000 pages long, the character of Mammy never considers what her life might be like away from Tara. She recognizes her freedom to come and go as she pleases, saying, "Ah is free, Miss Scarlett. You kain sen' me nowhar Ah doan wanter go," but Mammy remains duty-bound to "Miss Ellen's chile." (No other name for Mammy is given in the novel.)
Eighteen years before the publication of "Gone with the Wind", an article titled, "The Old Black Mammy," written in the "Confederate Veteran" in 1918, discussed the romanticized view of the mammy character persisting in Southern literature:
... for her faithfulness and devotion, she has been immortalized in the literature of the South; so the memory of her will never pass, but live on in the tales that are told of those "dear dead days beyond recall".
Micki McElya, in her book "Clinging to Mammy", suggests the myth of the faithful slave, in the figure of Mammy, lingered because white Americans wished to live in a world in which African Americans were not angry over the injustice of slavery.
The best-selling anti-slavery novel, "Uncle Tom's Cabin" by Harriet Beecher Stowe, published in 1852, is mentioned briefly in "Gone with the Wind" as being accepted by the Yankees as "revelation second only to the Bible". The enduring interest of both "Uncle Tom's Cabin" and "Gone with the Wind" has resulted in lingering stereotypes of 19th-century black slaves. "Gone with the Wind" has become a reference point for subsequent writers about the South, both black and white alike.
The southern belle is an archetype for a young woman of the antebellum American South upper class. The southern belle was believed to be physically attractive but, more importantly, personally charming with sophisticated social skills. She is subject to the correct code of female behavior. The novel's heroine, Scarlett O'Hara, charming though not beautiful, is a classic southern belle.
For young Scarlett, the ideal southern belle is represented by her mother, Ellen O'Hara. In "A Study in Scarlett", published in "The New Yorker", Claudia Roth Pierpont wrote:
The Southern belle was bred to conform to a subspecies of the nineteenth-century "lady"... For Scarlett, the ideal is embodied in her adored mother, the saintly Ellen, whose back is never seen to rest against the back of any chair on which she sits, whose broken spirit everywhere is mistaken for righteous calm ...
However, Scarlett is not always willing to conform. Kathryn Lee Seidel, in her book, "The Southern Belle in the American Novel", wrote:
... part of her does try to rebel against the restraints of a code of behavior that relentlessly attempts to mold her into a form to which she is not naturally suited.
The figure of a pampered southern belle, Scarlett lives through an extreme reversal of fortune and wealth, and survives to rebuild Tara and her self-esteem. Her bad belle traits (Scarlett's deceitfulness, shrewdness, manipulation, and superficiality), in contrast to Melanie's good belle traits (trust, self-sacrifice, and loyalty), enable her to survive in the post-war South and pursue her main interest, which is to make enough money to survive and prosper. Although Scarlett was "born" around 1845, she is portrayed to appeal to modern-day readers for her passionate and independent spirit, determination and obstinate refusal to feel defeated.
Marriage was supposed to be the goal of all southern belles, as women's status was largely determined by that of their husbands. All social and educational pursuits were directed towards it. Despite the Civil War and loss of a generation of eligible men, young ladies were still expected to marry. By law and Southern social convention, household heads were adult, white propertied males, and all white women and all African Americans were thought to require protection and guidance because they lacked the capacity for reason and self-control.
The Atlanta Historical Society has produced a number of "Gone with the Wind" exhibits, among them a 1994 exhibit titled, "Disputed Territories: "Gone with the Wind" and Southern Myths". The exhibit asked, "Was Scarlett a Lady?", finding that historically most women of the period were not involved in business activities as Scarlett was during Reconstruction, when she ran a sawmill. White women performed traditional jobs such as teaching and sewing, and generally disliked work outside the home.
During the Civil War, Southern women played a major role as volunteer nurses working in makeshift hospitals. Many were middle- and upper-class women who had never worked for wages or seen the inside of a hospital. One such nurse was Ada W. Bacot, a young widow who had lost two children. Bacot came from a wealthy South Carolina plantation family that owned 87 slaves.
In the fall of 1862, Confederate laws were changed to permit women to be employed in hospitals as members of the Confederate Medical Department. Twenty-seven-year-old nurse Kate Cumming from Mobile, Alabama, described the primitive hospital conditions in her journal:
They are in the hall, on the gallery, and crowded into very small rooms. The foul air from this mass of human beings at first made me giddy and sick, but I soon got over it. We have to walk, and when we give the men any thing kneel, in blood and water; but we think nothing of it at all.
The Civil War came to an end on April 26, 1865 when Confederate General Johnston surrendered his armies in the Carolinas Campaign to Union General Sherman. Several battles are mentioned or depicted in "Gone with the Wind".
The Atlanta Campaign (May–September 1864) took place in northwest Georgia and the area around Atlanta.
Confederate General Johnston fights and retreats from Dalton (May 7–13) to Resaca (May 13–15) to Kennesaw Mountain (June 27). Union General Sherman suffers heavy losses to the entrenched Confederate army. Unable to pass through Kennesaw, Sherman swings his men around to the Chattahoochee River where the Confederate army is waiting on the opposite side of the river. Once again, General Sherman flanks the Confederate army, forcing Johnston to retreat to Peachtree Creek (July 20), five miles northeast of Atlanta.
The Savannah Campaign was conducted in Georgia during November and December 1864.
Although Abraham Lincoln is mentioned in the novel fourteen times, no reference is made to his assassination on April 14, 1865.
Ashley Wilkes is the beau ideal of Southern manhood. A planter by inheritance, Ashley knew the Confederate cause had died. Ashley's name signifies paleness. His "pallid skin literalizes the idea of Confederate death."
He contemplates leaving Georgia for New York City. Had he gone North, he would have joined numerous other ex-Confederate transplants there. Ashley, embittered by war, tells Scarlett he has been "in a state of suspended animation" since the surrender. He feels he is not "shouldering a man's burden" at Tara and believes he is "much less than a man—much less, indeed, than a woman".
A "young girl's dream of the Perfect Knight", Ashley is like a young girl himself. With his "poet's eye", Ashley has a "feminine sensitivity". Scarlett is angered by the "slur of effeminacy flung at Ashley" when her father tells her the Wilkes family was "born queer". (Mitchell's use of the word "queer" is for its sexual connotation because queer, in the 1930s, was associated with homosexuality.) Ashley's effeminacy is associated with his appearance, his lack of forcefulness, and sexual impotency. He rides, plays poker, and drinks like "proper men", but his heart is not in it, Gerald claims. The embodiment of castration, Ashley wears the head of Medusa on his cravat pin.
Scarlett's love interest, Ashley Wilkes, lacks manliness, and her husbands—the "calf-like" Charles Hamilton, and the "old-maid in britches", Frank Kennedy—are unmanly as well. Mitchell is critiquing masculinity in southern society since Reconstruction. Even Rhett Butler, the well-groomed dandy, is effeminate or "gay-coded." Charles, Frank and Ashley represent the impotence of the post-war white South. Its power and influence have been diminished.
The word "scallawag" is defined as a loafer, a vagabond, or a rogue. Scallawag had a special meaning after the Civil War as an epithet for a white Southerner who accepted and supported Republican reforms. Mitchell defines scallawags as "Southerners who had turned Republican very profitably." Rhett Butler is accused of being a "damned Scallawag." In addition to scallawags, Mitchell portrays other types of scoundrels in the novel: Yankees, carpetbaggers, Republicans, prostitutes, and overseers. In the early years of the Civil War, Rhett is called a "scoundrel" for his "selfish gains" profiteering as a blockade-runner.
As a scallawag, Rhett is despised. He is the "dark, mysterious, and slightly malevolent hero loose in the world". Literary scholars have identified elements of Mitchell's first husband, Berrien "Red" Upshaw, in the character of Rhett. Another scholar sees the image of Italian actor Rudolph Valentino, whom Margaret Mitchell interviewed as a young reporter for "The Atlanta Journal". Fictional hero Rhett Butler has a "swarthy face, flashing teeth and dark alert eyes". He is a "scamp, blackguard, without scruple or honor."
If "Gone with the Wind" has a theme it is that of survival. What makes some people come through catastrophes and others, apparently just as able, strong, and brave, go under? It happens in every upheaval. Some people survive; others don't. What qualities are in those who fight their way through triumphantly that are lacking in those that go under? I only know that survivors used to call that quality 'gumption.' So I wrote about people who had gumption and people who didn't.
— Margaret Mitchell, 1936
The sales of Margaret Mitchell's novel in the summer of 1936, as the nation was recovering from the Great Depression and at the virtually unprecedented high price of three dollars, reached about one million by the end of December. The book was a bestseller by the time reviews began to appear in national magazines. Herschel Brickell, a critic for the "New York Evening Post", lauded Mitchell for the way she "tosses out the window all the thousands of technical tricks our novelists have been playing with for the past twenty years."
Ralph Thompson, a book reviewer for "The New York Times", was critical of the length of the novel, and wrote in June 1936: "I happen to feel that the book would have been infinitely better had it been edited down to say, 500 pages, but there speaks the harassed daily reviewer as well as the would-be judicious critic. Very nearly every reader will agree, no doubt, that a more disciplined and less prodigal piece of work would have more nearly done justice to the subject-matter." Some reviewers compared the book to William Thackeray's "Vanity Fair" and Leo Tolstoy's "War and Peace". Mitchell herself claimed Charles Dickens as an inspiration and called "Gone with the Wind" a "'Victorian' type novel."
Helen Keller, whose father had owned slaves and fought as a Confederate captain and who had later supported the NAACP and the ACLU, read the 12-volume Braille edition.
The book brought her fond memories of her early childhood in the South, but she also felt sadness comparing those memories with what she had since learned about the South.
"Gone with the Wind" has been criticized for its stereotypical and derogatory portrayal of African Americans in the 19th century South. Former field hands during the early days of Reconstruction are described behaving "as creatures of small intelligence might naturally be expected to do. Like monkeys or small children turned loose among treasured objects whose value is beyond their comprehension, they ran wild—either from perverse pleasure in destruction or simply because of their ignorance."
Commenting on this passage of the novel, Jabari Asim, author of "The N Word: Who Can Say It, Who Shouldn't, and Why", calls it "one of the more charitable passages in "Gone With the Wind"", noting that Margaret Mitchell "hesitated to blame black 'insolence' during Reconstruction solely on 'mean niggers'", of which, she said, there were few even in slavery days.
Critics say that Mitchell downplayed the violent role of the Ku Klux Klan and their abuse of freedmen. Author Pat Conroy, in his preface to a later edition of the novel, describes Mitchell's portrayal of the Ku Klux Klan as having "the same romanticized role it had in "The Birth of a Nation" and appears to be a benign combination of the Elks Club and a men's equestrian society".
Regarding the historical inaccuracies of the novel, historian Richard N. Current points out:
No doubt it is indeed unfortunate that "Gone with the Wind" perpetuates many myths about Reconstruction, particularly with respect to blacks. Margaret Mitchell did not originate them and a young novelist can scarcely be faulted for not knowing what the majority of mature, professional historians did not know until many years later.
In "Gone with the Wind", Mitchell explores some complexities in racial issues. Scarlett was asked by a Yankee woman for advice on whom to appoint as a nurse for her children; Scarlett suggested a "darky", much to the disgust of the Yankee woman who was seeking an Irish maid, a "Bridget". African Americans and Irish Americans are treated "in precisely the same way" in "Gone with the Wind", writes David O'Connell in his 1996 book, "The Irish Roots of Margaret Mitchell's Gone With the Wind". Ethnic slurs on the Irish and Irish stereotypes pervade the novel, O'Connell claims, and Scarlett is not an exception to the terminology. Irish scholar Geraldine Higgins notes that Jonas Wilkerson labels Scarlett: "you highflying, bogtrotting Irish". Higgins says that, as the Irish American O'Haras were slaveholders and African Americans were held in bondage, the two ethnic groups are not equivalent in the ethnic hierarchy of the novel.
The novel has been criticized for promoting plantation values and romanticizing the white supremacy of the antebellum south. Mitchell biographer Marianne Walker, author of "Margaret Mitchell and John Marsh: The Love Story Behind Gone with the Wind", believes that those who attack the book on these grounds have not read it. She said that the popular 1939 film "promotes a false notion of the Old South". Mitchell was not involved in the screenplay or film production.
James Loewen, author of "Lies My Teacher Told Me: Everything Your American History Textbook Got Wrong", says this novel is "profoundly racist and profoundly wrong." In 1984, an alderman in Waukegan, Illinois, challenged the book's inclusion on the reading list of the Waukegan School District on the grounds of "racism" and "unacceptable language." He objected to the frequent use of the racial slur "nigger." He also objected to several other books: "The Nigger of the 'Narcissus'", "Uncle Tom's Cabin", and "Adventures of Huckleberry Finn" for the same reason.
Mitchell's use of color in the novel is symbolic and open to interpretation. Red, green, and a variety of hues of each of these colors, are the predominant palette of colors related to Scarlett.
In 2020, the novel and its film adaptation came under intense criticism for racist and white supremacist themes, following the on-camera murder of George Floyd, a Black American man, and the ensuing protests and focus on systemic racism in the United States.
In 1937, Margaret Mitchell received the Pulitzer Prize for Fiction for "Gone with the Wind" and the second annual National Book Award from the American Booksellers Association. It is ranked as the second favorite book by American readers, just behind the Bible, according to a 2008 Harris poll. The poll found the novel has its strongest following among women, those aged 44 or more, both Southerners and Midwesterners, both whites and Hispanics, and those who have not attended college. In a 2014 Harris poll, Mitchell's novel again ranked second, after the Bible. The novel is among the best-selling books of all time. As of 2010, more than 30 million copies have been printed in the United States and abroad. More than 24 editions of "Gone with the Wind" have been issued in China. "TIME" magazine critics Lev Grossman and Richard Lacayo included the novel on their list of the 100 best English-language novels from 1923 to the present (2005). In 2003, the book was listed at number 21 on the BBC's The Big Read poll of the UK's "best-loved novel."
"Gone with the Wind" has been adapted several times for stage and screen:
"Gone with the Wind" has appeared in many places and forms in popular culture:
On June 30, 1986, the 50th anniversary of the day "Gone with the Wind" went on sale, the U.S. Post Office issued a 1-cent stamp showing an image of Margaret Mitchell. The stamp was designed by Ronald Adair and was part of the U.S. Postal Service's Great Americans series.
On September 10, 1998, the U.S. Post Office issued a 32-cent stamp as part of its Celebrate the Century series recalling various important events in the 20th century. The stamp, designed by Howard Paine, displays the book with its original dust jacket, a white magnolia blossom, and a sword hilt placed against a background of green velvet.
To commemorate the 75th anniversary (2011) of the publication of "Gone with the Wind" in 1936, Scribner published a paperback edition featuring the book's original jacket art.
The Windies are ardent "Gone with the Wind" fans who follow all the latest news and events surrounding the book and film. They gather periodically in costumes from the film or dressed as Margaret Mitchell. Atlanta, Georgia is their meeting place.
One story of the legacy of "Gone with the Wind" is that people worldwide incorrectly think it was the "true story" of the Old South and how it was changed by the American Civil War and Reconstruction. The film adaptation of the novel "amplified this effect." The plantation legend was "burned" into the mind of the public through Mitchell's vivid prose. Moreover, her fictional account of the war and its aftermath has influenced how the world has viewed the city of Atlanta for successive generations.
Some readers of the novel have seen the film first and read the novel afterward. One difference between the film and the novel is the staircase scene, in which Rhett carries Scarlett up the stairs. In the film, Scarlett weakly struggles and does not scream as Rhett starts up the stairs. In the novel, "he hurt her and she cried out, muffled, frightened."
Earlier in the novel, in an intended rape at Shantytown (Chapter 44), Scarlett is attacked by a black man who rips open her dress while a white man grabs hold of the horse's bridle. She is rescued by another black man, Big Sam. In the film, she is attacked by a white man, while a black man grabs the horse's bridle.
The Library of Congress began a multiyear "Celebration of the Book" in July 2012 with an exhibition on "Books That Shaped America" and an initial list of 88 books by American authors that have influenced American lives. "Gone with the Wind" was included in the Library's list. Librarian of Congress James H. Billington said:
This list is a starting point. It is not a register of the 'best' American books – although many of them fit that description. Rather, the list is intended to spark a national conversation on books written by Americans that have influenced our lives, whether they appear on this initial list or not.
Among books on the list considered to be the Great American Novel were "Moby-Dick", "Adventures of Huckleberry Finn", "The Great Gatsby", "The Grapes of Wrath", "The Catcher in the Rye", "Invisible Man", and "To Kill a Mockingbird".
Throughout the world, the novel appeals due to its universal themes: war, love, death, racial conflict, class, gender and generation, which speak especially to women. In North Korea, readers relate to the novel's theme of survival, finding it to be "the most compelling message of the novel". Margaret Mitchell's personal collection of nearly 70 foreign language translations of her novel was given to the Atlanta Public Library after her death.
On August 16, 2012, the Archdiocese of Atlanta announced that it had been bequeathed a 50% stake in the trademarks and literary rights to "Gone With the Wind" from the estate of Margaret Mitchell's deceased nephew, Joseph Mitchell. Margaret Mitchell had separated from the Catholic Church. However, one of Mitchell's biographers, Darden Asbury Pyron, stated that Margaret Mitchell had "an intense relationship" with her mother, who was a Roman Catholic.
Although some of Mitchell's papers and documents related to the writing of "Gone with the Wind" were burned after her death, many documents, including assorted draft chapters, were preserved. The last four chapters of the novel are held by the Pequot Library of Southport, Connecticut.
The first printing of 10,000 copies contains the original publication date: "Published May, 1936". After the book was chosen as the Book-of-the-Month Club selection for July, publication was delayed until June 30. The second printing of 25,000 copies (and subsequent printings) contains the release date: "Published June, 1936." The third printing of 15,000 copies was made in June 1936. Additionally, 50,000 copies were printed for the Book-of-the-Month Club July selection. "Gone with the Wind" was officially released to the American public on June 30, 1936.
Although Mitchell refused to write a sequel to "Gone with the Wind", Mitchell's estate authorized Alexandra Ripley to write a sequel, which was titled "Scarlett". The book was subsequently adapted into a television mini-series in 1994. A second sequel was authorized by Mitchell's estate titled "Rhett Butler's People", by Donald McCaig. The novel parallels "Gone with the Wind" from Rhett Butler's perspective. In 2010, Mitchell's estate authorized McCaig to write a prequel, which follows the life of the house servant Mammy, whom McCaig names "Ruth". The novel, "Ruth's Journey", was released in 2014.
The copyright holders of "Gone with the Wind" attempted to suppress publication of "The Wind Done Gone" by Alice Randall, which retold the story from the perspective of the slaves. A federal appeals court denied the plaintiffs an injunction ("Suntrust v. Houghton Mifflin") against publication on the basis that the book was parody and therefore protected by the First Amendment. The parties subsequently settled out of court and the book went on to become a "New York Times" Best Seller.
A book sequel unauthorized by the copyright holders, "The Winds of Tara" by Katherine Pinotti, was blocked from publication in the United States. The novel was republished in Australia, avoiding U.S. copyright restrictions.
Away from copyright lawsuits, Internet fan fiction has proved to be a fertile medium for sequels (some of them book-length), parodies, and rewritings of "Gone with the Wind."
Numerous unauthorized sequels to "Gone with the Wind" have been published in Russia, mostly under the pseudonym Yuliya Hilpatrik, a cover for a consortium of writers. "The New York Times" states that most of these have a "Slavic" flavor.
Several sequels were written in Hungarian under the pseudonym Audrey D. Milland or Audrey Dee Milland, by at least four different authors (who are named in the colophon as translators to make the book seem a translation from the English original, a procedure common in the 1990s but prohibited by law since then). The first one picks up where Ripley's "Scarlett" ended, the next one is about Scarlett's daughter Cat. Other books include a prequel trilogy about Scarlett's grandmother Solange and a three-part miniseries of a supposed illegitimate daughter of Carreen.
"Gone with the Wind" has been in the public domain in Australia since 1999 (50 years after Margaret Mitchell's death). On 1 January 2020, the book entered the public domain in the European Union (70 years after the author's death). Under an extension of copyright law, "Gone with the Wind" will not enter the public domain in the United States until 2031, however.
|
https://en.wikipedia.org/wiki?curid=12995
|
George Washington Carver
George Washington Carver (1860s – January 5, 1943) was an American agricultural scientist and inventor. He promoted alternative crops to cotton and methods to prevent soil depletion. He was the most prominent black scientist of the early 20th century.
While a professor at Tuskegee Institute, Carver developed techniques to improve soils depleted by repeated plantings of cotton. He wanted poor farmers to grow other crops, such as peanuts and sweet potatoes, as a source of their own food and to improve their quality of life. The most popular of his 44 practical bulletins for farmers contained 105 food recipes using peanuts. Although he spent years developing and promoting numerous products made from peanuts, none became commercially successful.
Apart from his work to improve the lives of farmers, Carver was also a leader in promoting environmentalism. He received numerous honors for his work, including the Spingarn Medal of the NAACP. In an era of high racial polarization, his fame reached beyond the black community. He was widely recognized and praised in the white community for his many achievements and talents. In 1941, "Time" magazine dubbed Carver a "Black Leonardo".
Carver was born into slavery, in Diamond Grove (now Diamond), Newton County, Missouri, near Crystal Place, sometime in the early or mid 1860s. The date of his birth is uncertain and was not known to Carver; but it was before slavery was abolished in Missouri, which occurred in January 1865, during the American Civil War. His master, Moses Carver, was a German American immigrant, who had purchased George's parents, Mary and Giles, from William P. McGinnis on October 9, 1855, for $700.
When George was a week old, he, a sister, and his mother were kidnapped by night raiders from Arkansas. George's brother, James, was rushed to safety from the kidnappers. The kidnappers sold the slaves in Kentucky. Moses Carver hired John Bentley to find them, but he found only the infant George. Moses negotiated with the raiders to gain the boy's return, and rewarded Bentley. After slavery was abolished, Moses Carver and his wife, Susan, raised George and his older brother, James, as their own children. They encouraged George to continue his intellectual pursuits, and "Aunt Susan" taught him the basics of reading and writing.
Black people were not allowed at the public school in Diamond Grove. George decided to go to a school for black children 10 miles (16 km) south, in Neosho. When he reached the town, he found the school closed for the night. He slept in a nearby barn. By his own account, the next morning he met a kind woman, Mariah Watkins, from whom he wished to rent a room. When he identified himself as "Carver's George", as he had done his whole life, she replied that from now on his name was "George Carver". George liked Mariah Watkins, and her words "You must learn all you can, then go back out into the world and give your learning back to the people" made a great impression on him.
At age 13, because he wanted to attend the academy there, he moved to the home of another foster family, in Fort Scott, Kansas. After witnessing the killing of a black man by a group of whites, Carver left the city. He attended a series of schools before earning his diploma at Minneapolis High School in Minneapolis, Kansas.
Carver applied to several colleges before being accepted at Highland University in Highland, Kansas. When he arrived, however, they refused to let him attend because of his race. In August 1886, Carver traveled by wagon with J. F. Beeler from Highland to Eden Township in Ness County, Kansas. He homesteaded a claim near Beeler, where he maintained a small conservatory of plants and flowers and a geological collection. He manually plowed part of the claim, planting rice, corn, Indian corn and garden produce, as well as various fruit trees, forest trees, and shrubbery. He also earned money by doing odd jobs in town and worked as a ranch hand.
In early 1888, Carver obtained a $300 loan at the Bank of Ness City for education. By June he left the area. In 1890, Carver started studying art and piano at Simpson College in Indianola, Iowa. His art teacher, Etta Budd, recognized Carver's talent for painting flowers and plants; she encouraged him to study botany at Iowa State Agricultural College (now Iowa State University) in Ames.
When he began there in 1891, he was the first black student at Iowa State. Carver's Bachelor's thesis for a degree in Agriculture was "Plants as Modified by Man", dated 1894. Iowa State University professors Joseph Budd and Louis Pammel convinced Carver to continue there for his master's degree. Carver did research at the Iowa Experiment Station under Pammel during the next two years. His work at the experiment station in plant pathology and mycology first gained him national recognition and respect as a botanist. Carver received his master of science degree in 1896. Carver taught as the first black faculty member at Iowa State.
Despite occasionally being addressed as "doctor," Carver never received an official doctorate, and in a personal communication with Louis H. Pammel, he noted that it was a "misnomer", given to him by others due to his abilities and their assumptions about his education. With that said, both Simpson College and Selma University awarded him honorary doctorates of science in his lifetime. Iowa State later awarded him a doctorate of humane letters posthumously in 1994.
In 1896, Booker T. Washington, the first principal and president of the Tuskegee Institute (now Tuskegee University), invited Carver to head its Agriculture Department. Carver taught there for 47 years, developing the department into a strong research center and working with two additional college presidents during his tenure. He taught methods of crop rotation, introduced several alternative cash crops for farmers that would also improve the soil of areas heavily cultivated in cotton, initiated research into crop products (chemurgy), and taught generations of black students farming techniques for self-sufficiency.
Carver designed a mobile classroom to take education out to farmers. He called it a "Jesup wagon" after the New York financier and philanthropist Morris Ketchum Jesup, who provided funding to support the program.
To recruit Carver to Tuskegee, Washington gave him an above average salary and two rooms for his personal use, although both concessions were resented by some other faculty. Because he had earned a master's in a scientific field from a "white" institution, some faculty perceived him as arrogant. Unmarried faculty members normally had to share rooms, with two to a room, in the spartan early days of the institute.
One of Carver's duties was to administer the Agricultural Experiment Station farms. He had to manage the production and sale of farm products to generate revenue for the Institute. He soon proved to be a poor administrator. In 1900, Carver complained that the physical work and the letter-writing required were too much. In 1904, an Institute committee reported that Carver's reports on yields from the poultry yard were exaggerated, and Washington confronted Carver about the issue. Carver replied in writing, "Now to be branded as a liar and party to such hellish deception it is more than I can bear, and if your committee feel that I have willfully lied or [was] party to such lies as were told my resignation is at your disposal." During Washington's last five years at Tuskegee, Carver submitted or threatened his resignation several times: when the administration reorganized the agriculture programs, when he disliked a teaching assignment, to manage an experiment station elsewhere, and when he did not get summer teaching assignments in 1913–14. In each case, Washington smoothed things over.
Carver started his academic career as a researcher and teacher. In 1911, Washington wrote a letter to him complaining that Carver had not followed orders to plant particular crops at the experiment station. This revealed Washington's micro-management of Carver's department, which he had headed for more than 10 years by then. Washington at the same time refused Carver's requests for a new laboratory, research supplies for his exclusive use, and respite from teaching classes. Washington praised Carver's abilities in teaching and original research but said about his administrative skills:
When it comes to the organization of classes, the ability required to secure a properly organized and large school or section of a school, you are wanting in ability. When it comes to the matter of practical farm managing which will secure definite, practical, financial results, you are wanting again in ability.
In 1911, Carver complained that his laboratory had not received the equipment which Washington had promised 11 months before. He also complained about Institute committee meetings. Washington praised Carver in his 1911 memoir, "My Larger Education: Being Chapters from My Experience". Washington called Carver "one of the most thoroughly scientific men of the Negro race with whom I am acquainted." After Washington died in 1915, his successor made fewer demands on Carver for administrative tasks.
While a professor at Tuskegee, Carver joined the Gamma Sigma chapter of Phi Beta Sigma fraternity. He spoke at the 1930 Conclave that was held at Tuskegee, Alabama, in which he delivered a powerful and emotional speech to the men in attendance.
From 1915 to 1923, Carver concentrated on researching and experimenting with new uses for peanuts, sweet potatoes, soybeans, pecans, and other crops, as well as having his assistants research and compile existing uses. This work, and especially his speaking to a national conference of the Peanut Growers Association in 1920 and in testimony before Congress in 1921 to support passage of a tariff on imported peanuts, brought him wide publicity and increasing renown. In these years, he became one of the most well-known African Americans of his time.
Carver developed techniques to improve soils depleted by repeated plantings of cotton. Together with other agricultural experts, he urged farmers to restore nitrogen to their soils by practicing systematic crop rotation: alternating cotton crops with plantings of sweet potatoes or legumes (such as peanuts, soybeans and cowpeas). These crops both restored nitrogen to the soil and were good for human consumption. Following the crop rotation practice resulted in improved cotton yields and gave farmers alternative cash crops. To train farmers to successfully rotate and cultivate the new crops, Carver developed an agricultural extension program for Alabama that was similar to the one at Iowa State. To encourage better nutrition in the South, he widely distributed recipes using the alternative crops.
Additionally, he founded an industrial research laboratory, where he and assistants worked to popularize the new crops by developing hundreds of applications for them. They did original research as well as promoting applications and recipes, which they collected from others. Carver distributed his information as agricultural bulletins.
Carver's work was known by officials in the national capital before he became a public figure. President Theodore Roosevelt publicly admired his work. Former professors of Carver's from Iowa State University were appointed to positions as Secretary of Agriculture: James Wilson, a former dean and professor of Carver's, served from 1897 to 1913. Henry Cantwell Wallace served from 1921 to 1924. He knew Carver personally because his son Henry A. Wallace and the researcher were friends. The younger Wallace served as U.S. Secretary of Agriculture from 1933 to 1940, and as Franklin Delano Roosevelt's vice president from 1941 to 1945.
The American industrialist, farmer, and inventor William C. Edenborn of Winn Parish, Louisiana, grew peanuts on his demonstration farm. He consulted with Carver.
In 1916, Carver was made a member of the Royal Society of Arts in England, one of only a handful of Americans at that time to receive this honor. Carver's promotion of peanuts gained him the most notice. In 1919, Carver wrote to a peanut company about the potential he saw for peanut milk. Both he and the peanut industry seemed unaware that in 1917 William Melhuish had secured a patent for a milk substitute made from peanuts and soybeans.
The United Peanut Associations of America invited Carver to speak at their 1920 convention. He discussed "The Possibilities of the Peanut" and exhibited 145 peanut products. By 1920, the U.S. peanut farmers were being undercut by low prices on imported peanuts from the Republic of China.
In 1921, peanut farmers and industry representatives planned to appear at Congressional hearings to ask for a tariff. Based on the quality of Carver's presentation at their convention, they asked the African-American professor to testify on the tariff issue before the Ways and Means Committee of the United States House of Representatives. Due to segregation, it was highly unusual for an African American to appear as an expert witness at Congress representing European-American industry and farmers. Southern congressmen, reportedly shocked at Carver's arriving to testify, were said to have mocked him. As he talked about the importance of the peanut and its uses for American agriculture, the committee members repeatedly extended the time for his testimony. The Fordney–McCumber Tariff of 1922 was passed, including a tariff on imported peanuts. Carver's testifying to Congress made him widely known as a public figure.
During the last two decades of his life, Carver seemed to enjoy his celebrity status. He was often on the road promoting Tuskegee University, peanuts, and racial harmony. Although he only published six agricultural bulletins after 1922, he published articles in peanut industry journals and wrote a syndicated newspaper column, "Professor Carver's Advice". Business leaders came to seek his help, and he often responded with free advice. Three American presidents—Theodore Roosevelt, Calvin Coolidge and Franklin Roosevelt—met with him, and the Crown Prince of Sweden studied with him for three weeks. From 1923 to 1933, Carver toured white Southern colleges for the Commission on Interracial Cooperation.
With his increasing notability, Carver became the subject of biographies and articles. Raleigh H. Merritt contacted him for his biography published in 1929. Merritt wrote:
At present not a great deal has been done to utilize Dr. Carver's discoveries commercially. He says that he is merely scratching the surface of scientific investigations of the possibilities of the peanut and other Southern products.
In 1932, the writer James Saxon Childers wrote that Carver and his peanut products were almost solely responsible for the rise in U.S. peanut production after the boll weevil devastated the American cotton crop beginning about 1892. His article, "A Boy Who Was Traded for a Horse" (1932), in "The American Magazine", and its 1937 reprint in "Reader's Digest", contributed to this myth about Carver's influence. Other popular media tended to exaggerate Carver's impact on the peanut industry.
From 1933 to 1935, Carver worked to develop peanut oil massages to treat infantile paralysis (polio). Ultimately, researchers found that the massages, not the peanut oil, provided the benefits of maintaining some mobility to paralyzed limbs.
From 1935 to 1937, Carver participated in the USDA Disease Survey. Carver had specialized in plant diseases and mycology for his master's degree.
In 1937, Carver attended two chemurgy conferences, an emerging field in the 1930s, during the Great Depression and the Dust Bowl, concerned with developing new products from crops. He was invited by Henry Ford to speak at the conference held in Dearborn, Michigan, and they developed a friendship. That year Carver's health declined, and Ford later installed an elevator at the Tuskegee dormitory where Carver lived, so that the elderly man would not have to climb stairs.
Carver had been frugal in his life, and in his seventies he established a legacy by creating a museum of his work, as well as the George Washington Carver Foundation at Tuskegee in 1938 to continue agricultural research. He donated nearly all of his savings to create the foundation.
Carver never married. At age 40, he began a courtship with Sarah L. Hunt, an elementary school teacher and the sister-in-law of Warren Logan, Treasurer of Tuskegee Institute. This lasted three years until she took a teaching job in California. In her 2015 biography, Christina Vella reviews his relationships and suggests that Carver was bisexual and constrained by mores of his historic period.
When he was 70, Carver established a friendship and research partnership with the scientist Austin W. Curtis, Jr. This young black man, a graduate of Cornell University, had some teaching experience before coming to Tuskegee. Carver bequeathed to Curtis his royalties from an authorized 1943 biography by Rackham Holt. After Carver died in 1943, Curtis was fired from Tuskegee Institute. He left Alabama and resettled in Detroit. There he manufactured and sold peanut-based personal care products.
Upon returning home one day, Carver took a bad fall down a flight of stairs; he was found unconscious by a maid who took him to a hospital. Carver died January 5, 1943, at the age of 78 from complications (anemia) resulting from this fall. He was buried next to Booker T. Washington at Tuskegee University. Due to his frugality, Carver's life savings totaled $60,000, all of which he donated in his last years and at his death to the Carver Museum and to the George Washington Carver Foundation.
On his grave was written, "He could have added fortune to fame, but caring for neither, he found happiness and honor in being helpful to the world."
Even as an adult Carver spoke with a high pitch. Historian Linda O. McMurry noted that he "was a frail and sickly child" who suffered "from a severe case of whooping cough and frequent bouts of what was called croup." McMurry contested the diagnosis of croup, holding rather that "His stunted growth and apparently impaired vocal cords suggest instead tubercular or pneumococcal infection. Frequent infections of that nature could have caused the growth of polyps on the larynx and may have resulted from a gamma globulin deficiency. ... until his death the high pitch of his voice startled all who met him, and he suffered from frequent chest congestion and loss of voice."
There are some rumors that Carver was castrated. Harley Flack and Edmund Pellegrino's book "African-American Perspectives on Biomedical Ethics" (1992) reports that Carver was castrated by a physician at age 11 at the request of his white master. According to Carver's biographer Peter Burchard, speaking to Iowa Public Radio in 2010, the autopsy doctors told a friend of Carver's that Carver had only scar tissue instead of testicles. If it is true that he was castrated before puberty, it would explain his high voice, but it would also suggest that he should not have been able to grow his beard.
Carver believed he could have faith in both God and science and integrated them into his life. He testified on many occasions that his faith in Jesus was the only mechanism by which he could effectively pursue and perform the art of science. Carver became a Christian when he was still a young boy, as he wrote in connection with his conversion in 1931.
He was not expected to live past his 21st birthday due to failing health. He lived well past the age of 21, and his belief deepened as a result. Throughout his career, he always found friendship with other Christians. He relied on them especially when criticized by the scientific community and media regarding his research methodology.
Carver viewed faith in Jesus Christ as a means of destroying both barriers of racial disharmony and social stratification. He was as concerned with his students' character development as he was with their intellectual development. He compiled a list of eight cardinal virtues for his students to strive toward.
Beginning in 1906 at Tuskegee, Carver led a Bible class on Sundays for several students at their request. He regularly portrayed stories by acting them out. He responded to critics with this: "When you do the common things in life in an uncommon way, you will command the attention of the world."
A movement to establish a U.S. national monument to Carver began before his death. Because of World War II, such non-war expenditures had been banned by presidential order. Missouri senator Harry S. Truman sponsored a bill in favor of a monument. In a committee hearing on the bill, one supporter said:
The bill is not simply a momentary pause on the part of busy men engaged in the conduct of the war, to do honor to one of the truly great Americans of this country, but it is in essence a blow against the Axis, it is in essence a war measure in the sense that it will further unleash and release the energies of roughly 15,000,000 Negro people in this country for full support of our war effort.
The bill passed unanimously in both houses.
On July 14, 1943, President Franklin D. Roosevelt dedicated $30,000 for the George Washington Carver National Monument west-southwest of Diamond, Missouri, the area where Carver had spent time in his childhood. This was the first national monument dedicated to an African American and the first to honor someone other than a president. The national monument complex includes a bust of Carver, a ¾-mile nature trail, a museum, the 1881 Moses Carver house, and the Carver cemetery. The national monument opened in July 1953.
In December 1947, a fire broke out in the Carver Museum, and much of the collection was damaged. "Time" magazine reported that all but three of the 48 Carver paintings at the museum were destroyed. His best-known painting, displayed at the World's Columbian Exposition of 1893 in Chicago, depicts a yucca and cactus. This canvas survived and has undergone conservation. It is displayed together with several of his other paintings.
Carver was featured on U.S. 1948 commemorative stamps. From 1951 to 1954, he was depicted on the commemorative Carver-Washington half dollar coin along with Booker T. Washington. A second stamp honoring Carver, of face value 32¢, was issued on 3 February 1998 as part of the Celebrate the Century stamp sheet series. Two ships, the Liberty ship SS "George Washington Carver" and the nuclear submarine USS "George Washington Carver" (SSBN-656), were named in his honor.
In 1977, Carver was elected to the Hall of Fame for Great Americans. In 1990, he was inducted into the National Inventors Hall of Fame. In 1994, Iowa State University awarded Carver a Doctor of Humane Letters. In 2000, Carver was a charter inductee in the USDA Hall of Heroes as the "Father of Chemurgy".
In 2002, scholar Molefi Kete Asante listed George Washington Carver as one of 100 Greatest African Americans.
In 2005, Carver's research at the Tuskegee Institute was designated a National Historic Chemical Landmark by the American Chemical Society. On February 15, 2005, an episode of "Modern Marvels" included scenes from within Iowa State University's Food Sciences Building and about Carver's work. In 2005, the Missouri Botanical Garden in St. Louis, Missouri, opened a George Washington Carver garden in his honor, which includes a life-size statue of him.
Many institutions continue to honor George Washington Carver. Dozens of elementary schools and high schools are named after him. National Basketball Association star David Robinson and his wife, Valerie, founded an academy named after Carver; it opened on September 17, 2001, in San Antonio, Texas. The Carver Community Cultural Center, a historic center located in San Antonio, is named for him.
Carver was given credit in popular folklore for many inventions that did not come out of his lab. Three patents (one for cosmetics and two for paints and stains) were issued to Carver between 1925 and 1927; however, they were not commercially successful. Aside from these patents and some recipes for food, Carver left no records of formulae or procedures for making his products. He did not keep a laboratory notebook.
Mackintosh notes that, "Carver did not explicitly claim that he had personally discovered all the peanut attributes and uses he cited, but he said nothing to prevent his audiences from drawing the inference."
Carver's research was intended to produce replacements from common crops for commercial products, which were generally beyond the budget of the small one-horse farmer. A misconception grew that his research on products for subsistence farmers was developed by others commercially to change Southern agriculture. Carver's work to provide small farmers with resources for more independence from the cash economy foreshadowed the "appropriate technology" work of E. F. Schumacher.
Dennis Keeney, director of the Leopold Center for Sustainable Agriculture at Iowa State University, wrote in the "Leopold Letter" (newsletter):
Carver worked on improving soils, growing crops with low inputs, and using species that fixed nitrogen (hence, the work on the cowpea and the peanut). Carver wrote in 'The Need of Scientific Agriculture in the South': "The virgin fertility of our soils and the vast amount of unskilled labor have been more of a curse than a blessing to agriculture. This exhaustive system for cultivation, the destruction of forest, the rapid and almost constant decomposition of organic matter, have made our agricultural problem one requiring more brains than of the North, East or West."
Carver worked for years to create a company to market his products. The most important was the Carver Penol Company, which sold a mixture of creosote and peanuts as a patent medicine for respiratory diseases such as tuberculosis. Sales were lackluster and the product was ineffective according to the Food and Drug Administration. Other ventures were The Carver Products Company and the Carvoline Company. Carvoline Antiseptic Hair Dressing was a mix of peanut oil and lanolin. Carvoline Rubbing Oil was a peanut oil for massages.
Carver is often mistakenly credited with the invention of peanut butter. By the time Carver published "How to Grow the Peanut and 105 Ways of Preparing it For Human Consumption" in 1916, many methods of preparation of peanut butter had been developed or patented by various pharmacists, doctors and food scientists working in the US and Canada. The Aztecs were known to have made peanut butter from ground peanuts as early as the 15th century. Canadian pharmacist Marcellus Gilmore Edson was awarded a patent for its manufacture in 1884, 12 years before Carver began his work at Tuskegee.
Carver is also associated with developing sweet potato products. In his 1922 sweet potato bulletin, Carver listed a few dozen recipes, "many of which I have copied verbatim from Bulletin No. 129, U. S. Department of Agriculture". Carver's records included the following sweet potato products: 73 dyes, 17 wood fillers, 14 candies, 5 library pastes, 5 breakfast foods, 4 starches, 4 flours, and 3 molasses. He also had listings for vinegars, dry coffee and instant coffee, candy, after-dinner mints, orange drops, and lemon drops.
During his more than four decades at Tuskegee, Carver's official published work consisted mainly of 44 practical bulletins for farmers. His first bulletin in 1898 was on feeding acorns to farm animals. His final bulletin in 1943 was about the peanut. He also published six bulletins on sweet potatoes, five on cotton, and four on cowpeas. Some other individual bulletins dealt with alfalfa, wild plum, tomato, ornamental plants, corn, poultry, dairying, hogs, preserving meats in hot weather, and nature study in schools.
His most popular bulletin, "How to Grow the Peanut and 105 Ways of Preparing it for Human Consumption," was first published in 1916 and was reprinted many times. It gave a short overview of peanut crop production and contained a list of recipes from other agricultural bulletins, cookbooks, magazines, and newspapers, such as the "Peerless Cookbook", "Good Housekeeping", and "Berry's Fruit Recipes". Carver's was far from the first American agricultural bulletin devoted to peanuts, but his bulletins did seem to be more popular and widespread than previous ones.
|
https://en.wikipedia.org/wiki?curid=12997
|
Grok
Grok is a neologism coined by American writer Robert A. Heinlein for his 1961 science fiction novel "Stranger in a Strange Land". While the "Oxford English Dictionary" summarizes the meaning of "grok" as "to understand intuitively or by empathy, to establish rapport with" and "to empathize or communicate sympathetically (with); also, to experience enjoyment", Heinlein's concept is far more nuanced, with critic Istvan Csicsery-Ronay Jr. observing that "the book's major theme can be seen as an extended definition of the term". The concept of "grok" garnered significant critical scrutiny in the years after the book's initial publication. The term and aspects of the underlying concept have become part of communities as diverse as polyamory (in particular the Church of All Worlds) and computer science.
Critic David E. Wright Sr. points out that in the 1991 "uncut" edition of "Stranger", the word "grok" "was used first "without any explicit definition" on page 22" and continued to be used without being explicitly defined until page 253 (emphasis in original). He notes that this first intensional definition is simply "to drink", but that this is only a metaphor "much as English 'I see' often means the same as 'I understand'". Critics have bridged this absence of explicit definition by citing passages from "Stranger" that illustrate the term.
Robert A. Heinlein originally coined the term "grok" in his 1961 novel "Stranger in a Strange Land" as a Martian word that could not be defined in Earthling terms, but could be associated with various literal meanings such as "water", "to drink", "life", or "to live", and that had a much more profound figurative meaning, one hard for terrestrial culture to understand because of its assumption of a singular reality.
According to the book, drinking water is a central focus on Mars, where it is scarce. Martians use the merging of their bodies with water as a simple example or symbol of how two entities can combine to create a new reality greater than the sum of its parts. The water becomes part of the drinker, and the drinker part of the water. Both "grok" each other. Things that once had separate realities become entangled in the same experiences, goals, history, and purpose. Within the book, the statement of divine immanence verbalized between the main characters, "Thou Art God", is logically derived from the concept inherent in the term "grok".
Heinlein describes Martian words as "guttural" and "jarring". Martian speech is described as sounding "like a bullfrog fighting a cat". Accordingly, "grok" is generally pronounced as a guttural "gr" terminated by a sharp "k", with very little or no vowel sound.
William Tenn suggests Heinlein in creating the word might have been influenced by Tenn's very similar concept of "griggo", earlier introduced in Tenn's story "Venus and the Seven Sexes" (published in 1949). In his later afterword to the story, Tenn says Heinlein considered such influence "very possible".
Uses of the word in the decades after the 1960s are more concentrated in computer culture, such as a 1984 appearance in "InfoWorld": "There isn't any software! Only different internal states of hardware. It's all hardware! It's a shame programmers don't grok that better."
The Jargon File, which describes itself as a "Hacker's Dictionary" and has been published under that name three times, puts "grok" in a programming context.
The entry existed in the very earliest forms of the Jargon File, dating from the early 1980s. A typical tech usage from the "Linux Bible, 2005" characterizes the Unix software development philosophy as "one that can make your life a lot simpler once you grok the idea".
The book "Perl Best Practices" defines "grok" as understanding a portion of computer code in a profound way. It goes on to suggest that to "re-grok" code is to reload the intricacies of that portion of code into one's memory after some time has passed and all the details of it are no longer remembered. In that sense, "to grok" means to load everything into memory for immediate use. It is analogous to the way a processor caches memory for short term use, but the only implication by this reference was that it was something that a human (or perhaps a Martian) would do.
The main web page for cURL, an open source tool and programming library, describes the function of cURL as "cURL groks URLs".
The book "Cyberia" covers its use in this subculture extensively:
The keystroke logging software used by the NSA for its remote intelligence gathering operations is named GROK.
One of the most powerful parsing filters used in ElasticSearch software's logstash component is named "grok".
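Logstash's grok filter works by matching unstructured log lines against named, reusable patterns and emitting the captured pieces as structured fields. The fragment below is a rough, stand-alone illustration of that idea using plain Python named regular-expression groups; it is not Logstash's actual pattern syntax, and the field names (client, method, path, status) are assumed purely for the example.

```python
import re

# Illustrative only: Logstash grok patterns such as %{IP:client} boil down to
# named captures over log text. The same idea is expressed here directly with
# Python's named regex groups; all field names are hypothetical.
LOG_PATTERN = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3})\s+"  # client IP address
    r"(?P<method>[A-Z]+)\s+"                   # HTTP method
    r"(?P<path>\S+)\s+"                        # request path
    r"(?P<status>\d{3})"                       # HTTP status code
)

def parse_line(line):
    """Return a dict of structured fields, or None if the line does not match."""
    match = LOG_PATTERN.search(line)
    return match.groupdict() if match else None

print(parse_line("192.168.0.7 GET /index.html 200"))
# {'client': '192.168.0.7', 'method': 'GET', 'path': '/index.html', 'status': '200'}
```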
Tom Wolfe, in his book "The Electric Kool-Aid Acid Test" (1968), describes a character's thoughts during an acid trip: "He looks down, two bare legs, a torso rising up at him and like he is just noticing them for the first time ... he has never seen any of this flesh before, this stranger. He groks over that ..."
In his counterculture Volkswagen repair manual, "How to Keep Your Volkswagen Alive: A Manual of Step-by-Step Procedures for the Compleat Idiot" (1969), dropout aerospace engineer John Muir instructs prospective used VW buyers to "grok the car" before buying.
The word was used numerous times by Robert Anton Wilson in his works "The Illuminatus! Trilogy" and "Schrödinger's Cat Trilogy".
The term inspired actress Mayim Bialik's women's lifestyle site, Grok Nation.
|
https://en.wikipedia.org/wiki?curid=12998
|
Geelong Football Club
The Geelong Football Club, nicknamed the Cats, is a professional Australian rules football club based in Geelong, Victoria. The club competes in the Australian Football League (AFL), the highest level of Australian rules football in Australia. The Cats have been the VFL/AFL premiers nine times, with three in the AFL era (since 1990). The Cats have also won ten McClelland Trophies.
The club was formed in 1859, making it the second oldest club in the AFL after Melbourne and one of the oldest football clubs in the world. Geelong participated in the first football competition in Australia and was a foundation club of both the Victorian Football Association (VFA) in 1877 and the Victorian Football League (VFL) in 1897.
The club first established itself in the VFA by winning seven premierships, making it the most successful VFA club leading up to the formation of the VFL in 1897. The club won a further six premierships by 1963, before enduring a 44-year waiting period until it won its next premiership—an AFL-record 119-point victory in the 2007 AFL Grand Final. Geelong have since won a further two premierships in 2009 and 2011.
The Cats play most of their home games at Kardinia Park (known for sponsorship reasons as GMHBA Stadium) and play the remainder at the Melbourne Cricket Ground. Geelong's traditional guernsey colours are navy blue and white hoops. The club's nickname, "The Cats", was first used in 1923 after a run of losses prompted a local cartoonist to suggest that the club needed a black cat to bring it good luck. The club's official team song and anthem is "We Are Geelong".
Geelong's traditional navy blue and white hooped guernsey has been worn since the club's inception in the mid-1800s. The design is said to represent the white seagulls and blue water of Corio Bay.
The team have worn various away guernseys since 1998, all featuring the club's logo and traditional colours.
"We Are Geelong" is the song sung after a game won by the Geelong Football Club. It is sung to the tune of "Toreador" from "Carmen". The lyrics were written by former premiership player John Watts. Only the first verse is used at matches and by the team after a victory. The song currently used by the club was recorded by the Fable Singers in April 1972.
Geelong's administrative headquarters is its home stadium, Kardinia Park. The club trains there during the season; it also trains at its alternate venue, Deakin University's Elite Sport Precinct. The latter features an MCG-sized oval and is often used by the club in the pre-season, when Kardinia Park is being used for other events.
The rivalry between Hawthorn and Geelong is defined by two Grand Finals: those of 1989 and 2008. In the 1989 Grand Final, Geelong played the man, resulting in major injuries to several Hawks players, with Mark Yeates knocking out Dermott Brereton at the opening bounce. Hawthorn controlled the game, leading by approximately 40 points for most of the match; in the last quarter, Geelong almost came from behind to win, but fell short by six points. In the 2008 Grand Final, Geelong was the heavily backed favourite, having lost only one match for the season, but Hawthorn upset Geelong by 26 points. Geelong then won its next eleven matches against Hawthorn over the following five years, a streak dubbed the "Kennett curse", attributed to disrespectful comments made by Hawthorn president Jeff Kennett following the 2008 Grand Final. It was later revealed that after the 2008 Grand Final, Paul Chapman initiated a pact among Geelong players never to lose to Hawthorn again. The curse was broken in a preliminary final in 2013, the week after Paul Chapman played his final match for Geelong. Hawthorn went on to win the next three premierships. In 2016 Geelong again defeated Hawthorn in the qualifying final. Of the 20 matches between the two sides between 2008 and 2017, 12 were decided by less than 10 points, with Geelong victorious in 11 of those 12 close games.
In 1925, Geelong won their first VFL flag, defeating Collingwood. In 1930, Collingwood defeated Geelong in the grand final, making it four flags in a row for the Pies. Geelong later denied Collingwood a third successive premiership in 1937, winning a famous grand final by 32 points.
The two sides played against each other in 6 finals between 1951 and 1955, including the 1952 Grand Final when Geelong easily beat Collingwood by 46 points. In 1953, Collingwood ended Geelong's record 23-game winning streak in the home and away season, and later defeated them by 12 points in the grand final, denying the Cats a third successive premiership.
Since 2007, the clubs have again both been at the top of the ladder and have met regularly in finals. Geelong won a memorable preliminary final by five points on their way to their first flag in 44 years. In 2008, Collingwood inflicted Geelong's only home-and-away loss, by a massive 86 points, but the teams did not meet in the finals. They would meet in preliminary finals in 2009 and 2010, each winning one "en route" to a premiership. They finally met in a Grand Final in 2011, which Geelong won by 38 points; Geelong inflicted Collingwood's only three losses for the 2011 season.
The Geelong reserves team began competing in the VFL Reserves competition with the league's other reserves teams from 1919. From 1919 to 1991 the VFL/AFL operated a reserves competition, and from 1992 to 1999 a "de facto" AFL reserves competition was run by the Victorian State Football League. The Geelong Football Club fielded a reserves team in both of these competitions, allowing players who were not selected for the senior team to play for Geelong in the lower grade. During that time, the Geelong reserves team won thirteen premierships (1923, 1924, 1930, 1937, 1938, 1948, 1960, 1963, 1964, 1975, 1980, 1981, 1982), the most of any club.
Since the demise of the AFL reserves competition, the Geelong reserves team has competed in the new Victorian Football League, having won three premierships in that time. Unlike all other Victorian AFL clubs, Geelong has never operated in a reserves affiliation with an existing VFL club, having instead operated its stand-alone reserves team continuously. The team is composed of both reserves players from the club's primary and rookie AFL lists, and a separately maintained list of players eligible only for VFL matches. Home games are played at GMHBA Stadium, with some played as curtain-raisers to senior AFL matches.
In 2017, following the inaugural AFL Women's (AFLW) season, Geelong was among eight clubs that applied for licenses to enter the competition from 2019 onwards. In September 2017, the club was announced as one of two clubs to receive a license to join the competition in 2019. The club has also fielded a team in the second-tier VFL Women's league since 2017.
|
https://en.wikipedia.org/wiki?curid=13007
|
Galileo (satellite navigation)
Galileo is a global navigation satellite system (GNSS) that went live in 2016, created by the European Union through the European GNSS Agency (GSA), headquartered in Prague, Czech Republic, with two ground operations centers in Fucino, Italy, and Oberpfaffenhofen, Germany. The €10 billion project is named after the Italian astronomer Galileo Galilei. One of the aims of Galileo is to provide an independent high-precision positioning system so that European nations do not have to rely on the U.S. GPS or the Russian GLONASS systems, which could be disabled or degraded by their operators at any time.
The use of basic (lower-precision) Galileo services is free and open to everyone. The higher-precision capabilities are available for paying commercial users. Galileo is intended to provide horizontal and vertical position measurements within 1-metre precision, and better positioning services at higher latitudes than other positioning systems.
Galileo is also to provide a new global search and rescue (SAR) function as part of the MEOSAR system.
The first Galileo test satellite, the GIOVE-A, was launched 28 December 2005, while the first satellite to be part of the operational system was launched on 21 October 2011. As of July 2018, 26 of the planned 30 active satellites are in orbit. Galileo started offering Early Operational Capability (EOC) on 15 December 2016, providing initial services with a weak signal, and is expected to reach Full Operational Capability (FOC) in 2019. The complete 30-satellite Galileo system (24 operational and 6 active spares) is expected by 2020. It is expected that the next generation of satellites will begin to become operational by 2025 to replace older equipment, which can then be used for backup capabilities.
By early 2020 there were 26 live satellites in the constellation: 22 were in usable condition (i.e. operational and contributing to service provision), two were undergoing testing, and two more were not available to users. Of the 22 active satellites, 3 were IOV (In-Orbit Validation) satellites and 19 were FOC (Full Operational Capability) satellites. Two test FOC satellites orbit the Earth in highly eccentric orbits whose orientation changes with respect to the other Galileo orbital planes; these satellites are nevertheless fully usable for precise positioning and geodesy, with limited usability in navigation.
In 1999, the different concepts of the three main contributors of ESA (Germany, France and Italy) for Galileo were compared and reduced to one by a joint team of engineers from all three countries. The first stage of the Galileo programme was agreed upon officially on 26 May 2003 by the European Union and the European Space Agency.
The system is intended primarily for civilian use, unlike the more military-oriented systems of the United States (GPS), Russia (GLONASS), and China (BeiDou-1/2). The European system will only be subject to shutdown for military purposes in extreme circumstances (like armed conflict). The countries that contribute most to the Galileo Project are Germany and Italy.
The European Commission had some difficulty funding the project's next stage, after several allegedly "per annum" sales projection graphs for the project were exposed in November 2001 as "cumulative" projections which for each year projected included all previous years of sales. The attention that was brought to this multibillion-euro growing error in sales forecasts resulted in a general awareness in the Commission and elsewhere that it was unlikely that the program would yield the return on investment that had previously been suggested to investors and decision-makers.
On 17 January 2002, a spokesman for the project stated that, as a result of US pressure and economic difficulties, "Galileo is almost dead."
A few months later, however, the situation changed dramatically. European Union member states decided it was important to have a satellite-based positioning and timing infrastructure that the US could not easily turn off in times of political conflict.
The European Union and the European Space Agency agreed in March 2002 to fund the project, pending a review in 2003 (which was completed on 26 May 2003). The starting cost for the period ending in 2005 was estimated at €1.1 billion. The required satellites (the planned number was 30) were to be launched between 2011 and 2014, with the system up and running and under civilian control from 2019. The final cost was estimated at €3 billion, including the infrastructure on Earth, constructed in 2006 and 2007. The plan was for private companies and investors to invest at least two-thirds of the cost of implementation, with the EU and ESA dividing the remaining cost. The base "Open Service" is to be available without charge to anyone with a Galileo-compatible receiver, with an encrypted higher-bandwidth improved-precision "Commercial Service" originally planned to be available at a cost; but in February 2018 the high accuracy service (HAS) (providing Precise Point Positioning data on the E6 frequency) was agreed to be made freely available, with the authentication service remaining commercial. By early 2011 costs for the project had run 50% over initial estimates.
Galileo is intended to be an EU civilian GNSS that allows all users access to it. Initially GPS reserved the highest quality signal for military use, and the signal available for civilian use was intentionally degraded (Selective Availability). This changed with President Bill Clinton signing a policy directive in 1996 to turn off Selective Availability. Since May 2000 the same precision signal has been provided to both civilians and the military.
Since Galileo was designed to provide the highest possible precision (greater than GPS) to anyone, the US was concerned that an enemy could use Galileo signals in military strikes against the US and its allies (some weapons like missiles use GNSSs for guidance). The frequency initially chosen for Galileo would have made it impossible for the US to block the Galileo signals without also interfering with its own GPS signals. The US did not want to lose their GNSS capability with GPS while denying enemies the use of GNSS. Some US officials became especially concerned when Chinese interest in Galileo was reported.
An anonymous EU official claimed that the US officials implied that they might consider shooting down Galileo satellites in the event of a major conflict in which Galileo was used in attacks against American forces. The EU's stance is that Galileo is a neutral technology, available to all countries and everyone. At first, EU officials did not want to change their original plans for Galileo, but they have since reached the compromise that Galileo is to use different frequencies. This allows the blocking or jamming of either GNSS without affecting the other.
One of the reasons given for developing Galileo as an independent system was that position information from GPS can be made significantly inaccurate by the deliberate application of universal Selective Availability (SA) by the US military. GPS is widely used worldwide for civilian applications; Galileo's proponents argued that civil infrastructure, including airplane navigation and landing, should not rely solely upon a system with this vulnerability.
On 2 May 2000, SA was disabled by the President of the United States, Bill Clinton; in late 2001 the entity managing the GPS confirmed that they did not intend to enable selective availability ever again. Though Selective Availability capability still exists, on 19 September 2007 the US Department of Defense announced that newer GPS satellites would not be capable of implementing Selective Availability; the wave of Block IIF satellites launched in 2009, and all subsequent GPS satellites, are stated not to support SA. As old satellites are replaced in the GPS Block IIIA program, SA will cease to be an option. The modernisation programme also contains standardised features that allow GPS III and Galileo systems to inter-operate, allowing receivers to be developed to utilise GPS and Galileo together to create an even more accurate GNSS.
In June 2004, in a signed agreement with the United States, the European Union agreed to switch to a modulation known as BOC(1,1) (Binary Offset Carrier 1.1) allowing the coexistence of both GPS and Galileo, and the future combined use of both systems.
The European Union also agreed to address the "mutual concerns related to the protection of allied and US national security capabilities."
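For context, BOC(1,1) modulation multiplies the 1.023 Mchip/s spreading code by a square-wave subcarrier of the same frequency, pushing signal power away from the band center occupied by the legacy GPS C/A code so the two systems can share the band. The snippet below is a minimal illustrative sketch of that operation in Python, not ESA reference code; the oversampling factor and the stand-in PRN chips are assumptions made only for the example.

```python
import numpy as np

CHIP_RATE = 1.023e6        # spreading-code rate in chips/s (the first "1" in BOC(1,1))
SUBCARRIER_HZ = 1.023e6    # square-wave subcarrier frequency (the second "1")
SAMPLES_PER_CHIP = 8       # arbitrary oversampling chosen for this sketch

def boc11(code_chips):
    """Return BOC(1,1)-modulated samples (+1/-1) for a +/-1 chip sequence."""
    fs = CHIP_RATE * SAMPLES_PER_CHIP                            # sampling rate in Hz
    n = len(code_chips) * SAMPLES_PER_CHIP
    t = np.arange(n) / fs
    chips = np.repeat(np.asarray(code_chips), SAMPLES_PER_CHIP)  # hold each chip value
    phase = (t * SUBCARRIER_HZ) % 1.0                            # fraction of a subcarrier period
    subcarrier = np.where(phase < 0.5, 1.0, -1.0)                # sine-phased square wave
    return chips * subcarrier

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prn_chips = rng.choice([-1, 1], size=16)       # stand-in for a real PRN code
    print(boc11(prn_chips)[:SAMPLES_PER_CHIP * 2]) # samples for the first two chips
```

With one subcarrier cycle per chip, each chip's samples flip sign halfway through, which is the defining feature of the BOC(1,1) waveform.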
The first experimental satellite, GIOVE-A, was launched in December 2005 and was followed by a second test satellite, GIOVE-B, launched in April 2008. After successful completion of the In-Orbit Validation (IOV) phase, additional satellites were launched. On 30 November 2007 the 27 EU transport ministers involved reached an agreement that Galileo should be operational by 2013, but later press releases suggest it was delayed to 2014.
In mid-2006 the public/private partnership fell apart, and the European Commission decided to nationalise the Galileo programme.
In early 2007 the EU had yet to decide how to pay for the system and the project was said to be "in deep crisis" due to lack of more public funds. German Transport Minister Wolfgang Tiefensee was particularly doubtful about the consortium's ability to end the infighting at a time when only one testbed satellite had been successfully launched.
Although a decision was yet to be reached, on 13 July 2007 EU countries discussed cutting €548m ($755m, £370m) from the union's competitiveness budget for the following year and shifting some of these funds to other parts of the financing pot, a move that could meet part of the cost of the union's Galileo satellite navigation system. European Union research and development projects could be scrapped to overcome a funding shortfall.
In November 2007, it was agreed to reallocate funds from the EU's agriculture and administration budgets and to soften the tendering process in order to invite more EU companies.
In April 2008, the EU transport ministers approved the Galileo Implementation Regulation. This allowed the €3.4bn to be released from the EU's agriculture and administration budgets to allow the issuing of contracts to start construction of the ground station and the satellites.
In June 2009, the European Court of Auditors published a report, pointing out governance issues, substantial delays and budget overruns that led to project stalling in 2007, leading to further delays and failures.
In October 2009, the European Commission cut the number of satellites definitively planned from 28 to 22, with plans to order the remaining six at a later time. It also announced that the first OS, PRS and SoL signal would be available in 2013, and the CS and SoL some time later. The €3.4 billion budget for the 2006–2013 period was considered insufficient. In 2010 the think-tank Open Europe estimated the total cost of Galileo from start to 20 years after completion at €22.2 billion, borne entirely by taxpayers. Under the original estimates made in 2000, this cost would have been €7.7 billion, with €2.6 billion borne by taxpayers and the rest by private investors.
In November 2009, a ground station for Galileo was inaugurated near Kourou (French Guiana).
The launch of the first four in-orbit validation (IOV) satellites was planned for the second half of 2011, and the launch of full operational capability (FOC) satellites was planned to start in late 2012.
In March 2010 it was verified that the budget for Galileo would only be available to provide the 4 IOV and 14 FOC satellites by 2014, with no funds then committed to bring the constellation above this 60% capacity. Paul Verhoef, the satellite navigation program manager at the European Commission, indicated that this limited funding would have serious consequences commenting at one point "To give you an idea, that would mean that for three weeks in the year you will not have satellite navigation" in reference to the proposed 18-vehicle constellation.
In July 2010, the European Commission estimated further delays and additional costs of up to €1.5–€1.7 billion, and moved the estimated date of completion to 2018. After completion, the system will need to be subsidised by governments at €750 million per year. An additional €1.9 billion was planned to be spent bringing the system up to the full complement of 30 satellites (27 operational + 3 active spares).
In December 2010, EU ministers in Brussels voted Prague, in the Czech Republic, as the headquarters of the Galileo project.
In January 2011, infrastructure costs up to 2020 were estimated at €5.3 billion. In that same month, Wikileaks revealed that Berry Smutny, the CEO of the German satellite company OHB-System, said that Galileo "is a stupid idea that primarily serves French interests". The BBC learned in 2011 that €500 million (£440M) would become available to make the extra purchase, taking Galileo within a few years from 18 operational satellites to 24.
The first two Galileo In-Orbit Validation satellites were launched by Soyuz ST-B flown from Guiana Space Centre on 21 October 2011, and the remaining two on 12 October 2012.
Twenty-two further satellites with Full Operational Capability (FOC) were on order. The first four pairs of satellites were launched on 22 August 2014, 27 March 2015, 11 September 2015 and 17 December 2015.
In January 2017, news agencies reported that six of the passive hydrogen masers (PHM) and three of the rubidium atomic clocks (RAFS) had failed. Four of the full operational satellites had each lost at least one clock, but no satellite had lost more than two. Operations were not affected, as each satellite is launched with four clocks (2 PHM and 2 RAFS). The possibility of a systemic flaw was considered. SpectraTime, the Swiss producer of both on-board clock types, declined to comment. According to ESA, they concluded with their industrial partners that some additional testing and operational measures were required for the rubidium atomic clocks, along with some refurbishment of the rubidium atomic clocks still to be launched. For the passive hydrogen masers, operational measures were studied to reduce the risk of failure. China and India use the same SpectraTime-built atomic clocks in their satellite navigation systems. ESA contacted the Indian Space Research Organisation (ISRO), which initially reported not having experienced similar failures. However, at the end of January 2017, Indian news outlets reported that all three clocks aboard the IRNSS-1A satellite (launched in July 2013 with a 10-year life expectancy) had failed and that a replacement satellite would be launched in the second half of 2017; these atomic clocks were said to have been supplied under a four-million-euro deal.
In July 2017, the European Commission reported that the main causes of the malfunctions had been identified and that measures had been put in place to reduce the possibility of further malfunctions in the satellites already in space. According to European sources, ESA took measures to correct both identified sets of problems, replacing a faulty component that could cause a short circuit in the rubidium clocks and improving the passive hydrogen maser clocks on satellites still to be launched.
From 11 to 18 July 2019, the whole constellation experienced an "unexplained" signal outage, with all active satellites showing a "NOT USABLE" status on the Galileo status page. The cause of the incident was an equipment malfunction in the Galileo ground infrastructure that affected the calculation of time and orbit predictions.
In September 2003, China joined the Galileo project. China was to invest €230 million (US$302 million, GBP 155 million, CNY 2.34 billion) in the project over the following years.
In July 2004, Israel signed an agreement with the EU to become a partner in the Galileo project.
On 3 June 2005 the EU and Ukraine signed an agreement for Ukraine to join the project, as noted in a press release.
As of November 2005, Morocco also joined the programme.
In mid-2006, the Public-Private Partnership fell apart and the European Commission decided to nationalise Galileo as an EU programme.
In November 2006, China opted instead to upgrade BeiDou, its then-regional satellite navigation system. The decision was due to security concerns and issues with Galileo financing.
On 30 November 2007, the 27 member states of the European Union unanimously agreed to move forward with the project, with plans for bases in Germany and Italy. Spain did not approve during the initial vote, but approved it later that day. This greatly improved the viability of the Galileo project: "The EU's executive had previously said that if agreement was not reached by January 2008, the long-troubled project would essentially be dead."
On 3 April 2009, Norway too joined the programme, pledging €68.9 million toward development costs and allowing its companies to bid for the construction contracts. Norway, while not a member of the EU, is a member of ESA.
On 18 December 2013, Switzerland signed a cooperation agreement to fully participate in the program, and retroactively contributed €80 million for the period 2008–2013. As a member of ESA, it already collaborated in the development of the Galileo satellites, contributing the state-of-the-art hydrogen-maser clocks. Switzerland's financial commitment for the period 2014–2020 will be calculated in accordance with the standard formula applied for the Swiss participation in the EU research Framework Programme.
In March 2018, the European Commission announced that the United Kingdom may be excluded from parts of the project (especially relating to the secured service PRS) following its exit from the European Union (EU). As a result, Airbus plans to relocate work on the Ground Control Segment (GCS) from its Portsmouth premises to an EU state. British officials have been reported to be seeking legal advice on whether they can reclaim the €1.4 billion invested by the United Kingdom, of the €10 billion spent to date. In a speech at the EU Institute for Security Studies conference, the EU Chief Negotiator in charge of the Brexit negotiations, Michel Barnier, stressed the EU position that the UK had decided to leave the EU and thus all EU programmes, including Galileo. In August 2018, it was reported the UK will look to create a competing satellite navigation system to Galileo post-Brexit. In December 2018, British Prime Minister Theresa May announced that the UK would no longer seek to reclaim the investment, and Science Minister Sam Gyimah resigned over the matter.
As of 2012, the system was scheduled to have 15 satellites operational in 2015 and to reach full operation in 2020.
The system's orbit and signal accuracy are controlled by a dedicated ground segment.
The system transmits three signals: E1 (1575.42 MHz), E5 (1191.795 MHz), consisting of E5a (1176.45 MHz) and E5b (1207.14 MHz), and E6 (1278.75 MHz).
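For a rough sense of scale, the carrier wavelengths implied by the band frequencies listed above follow directly from λ = c / f. The short Python sketch below simply applies that relation to the figures in the text; the calculation is generic physics, not part of the Galileo specification.

```python
# Carrier wavelengths for the Galileo bands listed above (frequencies from the text).
C = 299_792_458.0  # speed of light in vacuum, m/s

bands_mhz = {"E1": 1575.42, "E5a": 1176.45, "E5b": 1207.14, "E5": 1191.795, "E6": 1278.75}

for name, f_mhz in bands_mhz.items():
    wavelength_cm = C / (f_mhz * 1e6) * 100.0
    print(f"{name}: {wavelength_cm:.1f} cm")  # e.g. E1 works out to roughly 19 cm
```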
The Galileo system will have four main services: the Open Service, the Commercial Service, the Public Regulated Service, and the Search and Rescue Service.
The former Safety of Life service is being re-profiled, and it will probably be up to the receiver to assess the integrity of the signal, an approach known as Advanced Receiver Autonomous Integrity Monitoring (ARAIM).
All of the above services are expected to provide Navigation Message Authentication (NMA) to protect against GNSS spoofing attacks, using the Open Service Navigation Message Authentication (OSNMA) specification published in 2016. The service uses the TESLA cryptographic protocol and is expected to be available from 2020.
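The TESLA scheme mentioned above rests on delayed disclosure of keys from a one-way hash chain: each message is broadcast with a MAC computed under a key that is only revealed later, and receivers check both that the MAC verifies and that the disclosed key hashes back to a previously published chain commitment. The Python sketch below illustrates only that general idea; the key sizes, disclosure delay, and message format are invented for illustration and this is not the published OSNMA specification, which also anchors the chain in certified root keys and enforces strict timing rules.

```python
import hashlib, hmac

def hash_chain(seed: bytes, length: int) -> list:
    """Build a one-way key chain; keys are disclosed in reverse order of derivation."""
    chain = [seed]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain[::-1]  # chain[0] is the public root commitment

# Broadcaster side: commit to the chain root, then authenticate messages.
chain = hash_chain(b"illustrative-secret-seed", 5)
root_commitment = chain[0]              # published in advance
msg = b"navigation message #1"
key_i = chain[2]                        # key for this epoch, not yet disclosed
mac = hmac.new(key_i, msg, hashlib.sha256).digest()
# ... later, after the disclosure delay, key_i is broadcast ...

# Receiver side: check the disclosed key hashes back to the commitment, then check the MAC.
def verify(root: bytes, disclosed_key: bytes, steps: int, message: bytes, tag: bytes) -> bool:
    k = disclosed_key
    for _ in range(steps):
        k = hashlib.sha256(k).digest()
    expected = hmac.new(disclosed_key, message, hashlib.sha256).digest()
    return k == root and hmac.compare_digest(expected, tag)

print(verify(root_commitment, key_i, 2, msg, mac))  # True
```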
Each Galileo satellite carries two master passive hydrogen maser atomic clocks and two secondary rubidium atomic clocks, which are independent of one another. As precise and stable space-qualified atomic clocks are critical components of any satellite-navigation system, this quadruple redundancy keeps Galileo functioning when onboard atomic clocks fail in space. The onboard passive hydrogen maser clocks are about four times more precise than the onboard rubidium atomic clocks, with a stability estimated at 1 second per 3 million years (a timing error of one nanosecond, 10⁻⁹ s, translates into a positional error of roughly 30 cm on Earth's surface), and they provide an accurate timing signal that allows a receiver to calculate how long the signal takes to reach it. The Galileo satellites are configured to run one hydrogen maser clock in primary mode with a rubidium clock as hot backup. Under normal conditions, the operating hydrogen maser clock produces the reference frequency from which the navigation signal is generated; should the hydrogen maser encounter any problem, an instantaneous switchover to the rubidium clock is performed. In case of a failure of the primary hydrogen maser, the secondary hydrogen maser can be activated by the ground segment to take over within a period of days as part of the redundant system. A clock monitoring and control unit provides the interface between the four clocks and the navigation signal generator unit (NSU). It passes the signal from the active hydrogen maser clock to the NSU and also ensures that the frequencies produced by the master clock and the active spare are in phase, so that the spare can take over instantly should the master clock fail. The NSU information is used by receivers to calculate their position by trilaterating the differences in the signals received from multiple satellites.
The onboard passive hydrogen maser and rubidium clocks are very stable over a few hours. If they were left to run indefinitely, though, their timekeeping would drift, so they need to be synchronized regularly with a network of even more stable ground-based reference clocks. These include active hydrogen maser clocks and clocks based on the caesium frequency standard, which show far better medium- and long-term stability than rubidium or passive hydrogen maser clocks. These ground clocks are gathered together within the parallel-functioning Precise Timing Facilities at the Fucino and Oberpfaffenhofen Galileo Control Centres. The ground-based clocks also generate a worldwide time reference called Galileo System Time (GST), the time standard for the Galileo system, which is routinely compared to the local realizations of UTC, the UTC(k) of the European frequency and time laboratories.
For more information on the concepts behind global satellite navigation systems, see GNSS and GNSS positioning calculation.
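As a concrete illustration of the positioning calculation referred to above, a receiver that measures pseudoranges to at least four satellites can solve for its three position coordinates and its own clock bias. The sketch below is a generic textbook Gauss-Newton solver, not Galileo flight or receiver code; the satellite coordinates and receiver position are made-up numbers chosen only to exercise it.

```python
import numpy as np

def solve_position(sats, pseudoranges, x0=np.zeros(4), iters=10):
    """Gauss-Newton solve for (x, y, z, receiver clock bias in metres) from pseudoranges."""
    x = x0.astype(float)
    for _ in range(iters):
        ranges = np.linalg.norm(sats - x[:3], axis=1)
        predicted = ranges + x[3]                      # geometric range plus clock-bias term
        residuals = pseudoranges - predicted
        # Jacobian rows: minus the unit vectors to the satellites, plus 1 for the clock bias.
        H = np.hstack([-(sats - x[:3]) / ranges[:, None], np.ones((len(sats), 1))])
        dx, *_ = np.linalg.lstsq(H, residuals, rcond=None)
        x += dx
    return x

# Made-up example: four satellites at MEO-like distances and a receiver on the surface.
truth = np.array([3_900_000.0, 300_000.0, 5_000_000.0, 150.0])   # x, y, z (m), clock bias (m)
sats = np.array([
    [20_000_000.0,  2_000_000.0, 17_000_000.0],
    [10_000_000.0,  8_000_000.0, 23_000_000.0],
    [12_000_000.0, -6_000_000.0, 22_000_000.0],
    [22_000_000.0, -4_000_000.0, 14_000_000.0],
])
pr = np.linalg.norm(sats - truth[:3], axis=1) + truth[3]
print(solve_position(sats, pr))   # recovers approximately `truth`
```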
The European GNSS Service Centre (GSC), located in Madrid, is an integral part of Galileo and provides the single interface between the Galileo system and Galileo users. GSC publishes Galileo official documentation, promotes Galileo current and future services worldwide, supports standardization and distributes Galileo almanacs, ephemeris and metadata.
The GSC User Helpdesk is the point of contact for Galileo user assistance. The GSC answers queries and gathers incident notifications from Galileo users. The helpdesk is continuously available to Galileo users worldwide through the GSC web portal.
The GSC provides updated Galileo constellation status and informs users of planned and unplanned events through Notice Advisories to Galileo Users (NAGUs). It also publishes Galileo reference documentation, general information describing Galileo services and signals, and Galileo performance reports.
Galileo is to provide a new global search and rescue (SAR) function as part of the MEOSAR system. Satellites will be equipped with a transponder which will relay distress signals from emergency beacons to the Rescue coordination centre, which will then initiate a rescue operation. At the same time, the system is projected to provide a signal, the Return Link Message (RLM), "to" the emergency beacon, informing them that their situation has been detected and help is on the way. This latter feature is new and is considered a major upgrade compared to the existing Cospas-Sarsat system, which does not provide feedback to the user. Tests in February 2014 found that for Galileo's search and rescue function, operating as part of the existing International Cospas-Sarsat Programme, 77% of simulated distress locations can be pinpointed within 2 km, and 95% within 5 km.
The Galileo Return Link Service (RLS), which allows acknowledgement of distress messages received through the constellation, went live in January 2020.
In 2004 the Galileo System Test Bed Version 1 (GSTB-V1) project validated the on-ground algorithms for Orbit Determination and Time Synchronisation (OD&TS). This project, led by ESA and European Satellite Navigation Industries, has provided industry with fundamental knowledge to develop the mission segment of the Galileo positioning system.
A third satellite, GIOVE-A2, was originally planned to be built by SSTL for launch in the second half of 2008. Construction of GIOVE-A2 was terminated due to the successful launch and in-orbit operation of GIOVE-B.
The GIOVE Mission segment, operated by European Satellite Navigation Industries, used the GIOVE-A/B satellites to provide experimental results based on real data, for risk mitigation ahead of the IOV satellites that followed on from these testbeds. ESA organised a global network of ground stations to collect GIOVE-A/B measurements using GETR receivers for further systematic study. The GETR receivers, supplied by Septentrio, were also among the first Galileo navigation receivers used to test the functioning of the system at later stages of its deployment. Signal analysis of GIOVE-A/B data confirmed successful operation of all the Galileo signals, with the tracking performance as expected.
These testbed satellites were followed by four IOV Galileo satellites that are much closer to the final Galileo satellite design. The Search & Rescue feature is also installed. The first two satellites were launched on 21 October 2011 from Guiana Space Centre using a Soyuz launcher, the other two on 12 October 2012. This enables key validation tests, since earth-based receivers such as those in cars and phones need to "see" a minimum of four satellites in order to calculate their position in three dimensions. Those 4 IOV Galileo satellites were constructed by Astrium GmbH and Thales Alenia Space. On 12 March 2013, a first fix was performed using those four IOV satellites. Once this In-Orbit Validation (IOV) phase has been completed, the remaining satellites will be installed to reach the Full Operational Capability.
On 7 January 2010, it was announced that the contract to build the first 14 FOC satellites was awarded to OHB System and Surrey Satellite Technology Limited (SSTL).
Fourteen satellites will be built at a cost of €566M (£510M; $811M).
Arianespace will launch the satellites for a cost of €397M (£358M; $569M).
The European Commission also announced that the €85 million contract for system support covering industrial services required by ESA for integration and validation of the Galileo system had been awarded to Thales Alenia Space. Thales Alenia Space subcontract performances to Astrium GmbH and security to Thales Communications.
In February 2012, an additional order of eight satellites was awarded to OHB System for €250M ($327M), after it outbid EADS Astrium's tender offer, bringing the total to 22 FOC satellites.
On 7 May 2014, the first two FOC satellites arrived in French Guiana for their joint launch planned for that summer. Originally scheduled for launch during 2013, problems with tooling and with establishing the production line for assembly led to a delay of a year in the serial production of Galileo satellites. These two satellites (Galileo satellites GSAT-201 and GSAT-202) were launched on 22 August 2014. They are named Doresa and Milena, after European children who had previously won a drawing contest. On 23 August 2014, launch service provider Arianespace announced that flight VS09 had experienced an anomaly and that the satellites had been injected into an incorrect orbit. They ended up in elliptical orbits and thus could not be used for navigation; however, it was later possible to use them for a physics experiment, so they were not a complete loss.
Satellites GSAT-203 and GSAT-204 were launched successfully on 27 March 2015 from Guiana Space Centre using a Soyuz four stage launcher. Using the same Soyuz launcher and launchpad, satellites GSAT-205 (Alba) and GSAT-206 (Oriana) were launched successfully on 11 September 2015.
Satellites GSAT-208 (Liene) and GSAT-209 (Andriana) were successfully launched from Kourou, French Guiana, using the Soyuz launcher on 17 December 2015.
Satellites GSAT-210 (Daniele) and GSAT-211 (Alizée) were launched on 24 May 2016.
Starting in November 2016, deployment of the last twelve satellites will use a modified Ariane 5 launcher, named Ariane 5 ES, capable of placing four Galileo satellites into orbit per launch.
Satellites GSAT-207 (Antonianna), GSAT-212 (Lisa), GSAT-213 (Kimberley), GSAT-214 (Tijmen) were successfully launched from Kourou, French Guiana, on 17 November 2016 on an Ariane 5 ES.
On 15 December 2016, Galileo started offering Initial Operational Capability (IOC). The services currently offered are Open Service, Public Regulated Service and Search and Rescue Service.
Satellites GSAT-215 (Nicole), GSAT-216 (Zofia), GSAT-217 (Alexandre), GSAT-218 (Irina) were successfully launched from Kourou, French Guiana, on 12 December 2017 on an Ariane 5 ES.
Satellites GSAT-219 (Tara), GSAT-220 (Samuel), GSAT-221 (Anna), GSAT-222 (Ellen) were successfully launched from Kourou, French Guiana, on 25 July 2018 on an Ariane 5 ES.
As of 2014, ESA and its industry partners had begun studies on Galileo Second Generation satellites, to be presented to the EC for the late-2020s launch period. One idea is to employ electric propulsion, which would eliminate the need for an upper stage during launch and allow satellites from a single batch to be inserted into more than one orbital plane. The new-generation satellites are expected to be available by 2025 and to serve to augment the existing network.
In July 2006 an international consortium of universities and research institutions embarked on a study of potential scientific applications of the Galileo constellation. This project, named GEO6, is a broad study oriented to the general scientific community, aiming to define and implement new applications of Galileo.
Among the various GNSS users identified by the Galileo Joint Undertaking, the GEO6 project addresses the Scientific User Community (UC).
The GEO6 project aims at fostering possible novel applications within the scientific UC of GNSS signals, and particularly of Galileo.
The AGILE project is an EU-funded project devoted to the study of the technical and commercial aspects of location-based services (LBS). It includes technical analysis of the benefits brought by Galileo (and EGNOS) and studies the hybridisation of Galileo with other positioning technologies (network-based, WLAN, etc.). Within these projects, some pilot prototypes were implemented and demonstrated.
On the basis of the potential number of users, potential revenues for Galileo Operating Company or Concessionaire (GOC), international relevance, and level of innovation, a set of Priority Applications (PA) will be selected by the consortium and developed within the time-frame of the same project.
These applications will help to increase and optimise the use of the EGNOS services and the opportunities offered by the Galileo Signal Test-Bed (GSTB-V2) and the Galileo (IOV) phase.
All major GNSS receiver chips support Galileo and hundreds of end-user devices are compatible with Galileo. The first dual-frequency-GNSS-capable Android devices, which track more than one radio signal from each satellite (the E1 and E5a frequencies in the case of Galileo), were the Huawei Mate 20 line, Xiaomi Mi 8, Xiaomi Mi 9 and Xiaomi Mi MIX 3. There were more than 140 Galileo-enabled smartphones on the market, of which 9 were dual-frequency enabled. On 24 December 2018, the European Commission passed a mandate for all new smartphones to implement Galileo for E112 support.
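One reason dual-frequency tracking matters is that the first-order ionospheric delay on a pseudorange scales with 1/f², so measurements made on E1 and E5a can be combined into an "ionosphere-free" observable. The sketch below shows that standard textbook combination; it is not code from any handset or chipset, and the range and delay values are invented for illustration.

```python
# Ionosphere-free combination of dual-frequency pseudoranges (first-order correction only).
F_E1  = 1575.42e6   # Hz, from the Galileo signal plan above
F_E5A = 1176.45e6   # Hz

def iono_free(p_e1: float, p_e5a: float) -> float:
    """Combine E1/E5a pseudoranges so the 1/f^2 ionospheric term cancels."""
    g = (F_E1 / F_E5A) ** 2
    return (g * p_e1 - p_e5a) / (g - 1)

# Invented example: a 20,000 km geometric range with a frequency-dependent ionospheric delay.
true_range = 20_000_000.0
iono_at_e1 = 5.0                                   # metres of delay on E1 (made up)
iono_at_e5a = iono_at_e1 * (F_E1 / F_E5A) ** 2     # larger delay on the lower frequency
p1, p5 = true_range + iono_at_e1, true_range + iono_at_e5a
print(iono_free(p1, p5))   # ~20_000_000.0, ionospheric delay removed
```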
Effective 1 April 2018, all new vehicles sold in Europe must support eCall, an automatic emergency response system that dials 112 and transmits Galileo location data in the event of an accident.
Until late 2018, Galileo was not authorized for use in the United States and, as such, only worked intermittently on devices that could receive Galileo signals within United States territory. The Federal Communications Commission's position on the matter was (and remains) that non-GPS radio navigation satellite system (RNSS) receivers must be granted a license to receive such signals. A waiver of this requirement for Galileo was requested by the EU and submitted in 2015, and on 6 January 2017 public comment on the matter was requested. On 15 November 2018, the FCC granted the requested waiver, explicitly allowing non-federal consumer devices to access the Galileo E1 and E5 frequencies. However, most devices, including smartphones, still require operating system or similar updates to allow the use of Galileo signals within the United States.
The European Satellite Navigation project was selected as the main motif of a very high-value collectors' coin: the Austrian European Satellite Navigation commemorative coin, minted on 1 March 2006. The coin has a silver ring and a gold-brown niobium "pill". On the reverse, the niobium portion depicts navigation satellites orbiting the Earth, while the ring shows the different modes of transport for which satellite navigation was developed: an airplane, a car, a lorry, a train and a container ship.
|
https://en.wikipedia.org/wiki?curid=13009
|
Gavrilo Princip
Gavrilo Princip (25 July 1894 – 28 April 1918) was a Bosnian Serb member of Young Bosnia who sought an end to Austro-Hungarian rule in Bosnia and Herzegovina. At the age of 19, he assassinated Archduke Franz Ferdinand of Austria and the Archduke's wife, Sophie, Duchess of Hohenberg, in Sarajevo on 28 June 1914. Princip and his accomplices were arrested and implicated the Serbian nationalist secret society known as the Black Hand; the assassination initiated the July Crisis and led to the outbreak of World War I.
At his trial, Princip stated that: "I am a Yugoslav nationalist, aiming for the unification of all Yugoslavs, and I do not care what form of state, but it must be freed from Austria." Princip was sentenced to twenty years in prison, the maximum for his age, and was imprisoned at the Terezín fortress. He died on 28 April 1918 from tuberculosis exacerbated by poor prison conditions which had already caused the loss of his right arm.
Gavrilo Princip was born in the remote hamlet of Obljaj, near Bosansko Grahovo, on 25 July 1894. He was the second of his parents' nine children, six of whom died in infancy. Princip's mother Marija wanted to name him after her late brother Špiro, but he was named Gavrilo at the insistence of a local Eastern Orthodox priest, who claimed that naming the sickly infant after the Archangel Gabriel would help him survive.
A Serb family, the Princips had lived in northwestern Bosnia for many centuries and adhered to the Serbian Orthodox Christian faith. Princip's parents, Petar and Marija ("née" Mićić), were poor farmers who lived off the little land that they owned. They belonged to a class of Christian peasants known as "kmeti" (serfs), who were often oppressed by their Muslim landlords.
Petar, who insisted on "strict correctness," never drank or swore and was ridiculed by his neighbours as a result. In his youth, he fought in the Herzegovina Uprising against the Ottoman Empire. Following the revolt, he returned to being a farmer in the Grahovo valley, where he worked approximately of land and was forced to give a third of his income to his landlord. In order to supplement his income and feed his family, he resorted to transporting mail and passengers across the mountains between northwestern Bosnia and Dalmatia.
Despite Petar's opposition, Gavrilo Princip began attending primary school in 1903, aged nine. He overcame a difficult first year and became very successful in his studies, for which he was awarded a collection of Serbian epic poetry by his headmaster. At the age of 13, Princip moved to Sarajevo, where his elder brother Jovan intended to enroll him into an Austro-Hungarian military school. However, by the time Princip reached Sarajevo, Jovan had changed his mind after a friend advised him not to make Gavrilo "an executioner of his own people." Princip was enrolled into a merchant school instead. Jovan paid for his tuition with the money he earned performing manual labour, carrying logs from the forests surrounding Sarajevo to mills within the city. After three years of study, Gavrilo transferred to a local gymnasium. In 1910, he came to revere Bogdan Žerajić, a Bosnian Serb revolutionary who attempted to assassinate Marijan Varešanin, the Austro-Hungarian Governor of Bosnia and Herzegovina, before taking his own life. In 1911, Princip joined Young Bosnia (), a society that wanted to separate Bosnia from Austria-Hungary and unite it with the neighbouring Kingdom of Serbia. Because the local authorities had forbidden students to form organizations and clubs, Princip and other members of Young Bosnia met in secret. During their meetings, they discussed literature, ethics and politics.
In 1912, Princip was expelled from school for being involved in a demonstration against Austro-Hungarian authorities. A student who witnessed the incident claimed that "Princip went from class to class, threatening with his knuckle-duster all the boys who wavered in coming to the new demonstrations." Princip left Sarajevo shortly after being expelled and made the 280-kilometre (170 mi) journey to Belgrade on foot. According to one account, he fell to his knees and kissed the ground upon crossing the border into Serbia. In Belgrade, Princip volunteered to join the Serbian guerrilla bands fighting the Ottoman Turks, under the leadership of Major Vojislav Tankosić. Tankosić was a member of the Black Hand, the foremost conspiratorial society in Serbia at the time.
At first, Princip was rejected at a recruitment office in Belgrade because of his small stature. Enraged, he tracked down Tankosić himself, who also told him that he was too small and weak. Humiliated, Princip returned to Bosnia and lodged with his brother in Sarajevo. He spent the next several months moving back and forth between Sarajevo and Belgrade. In Belgrade he met Živojin Rafajlović, one of the founders of the Serbian Chetnik Organization, who sent him, along with 15 other Young Bosnia members, to the Chetnik training centre in Vranje. There they met the school manager, Mihajlo Stevanović-Cupara, and Princip lived in Cupara's house, which today stands on Gavrilo Princip Street in Vranje. Princip practiced shooting and the use of bombs and blades; after completing his training, he returned to Belgrade.
In 1913, while Princip was staying in Sarajevo, Austria-Hungary declared a state of emergency, implemented martial law, seized control of all schools, and prohibited all Serb cultural organizations.
On 28 June 1914, Princip assassinated Archduke Franz Ferdinand of Austria and his wife, Duchess Sophie Chotek. The royal couple arrived in Sarajevo by train shortly before 10 a.m. and rode in the third car of a six-car motorcade towards Town Hall. Their car's top was rolled back in order to allow the crowds a good view of its occupants.
Princip and the five other conspirators lined the route. They were spaced out along the Appel Quay, each one with instructions to assassinate the Archduke when the royal car reached their position. The first conspirator on the route to see the royal car was Muhamed Mehmedbašić. Standing by the Austro-Hungarian Bank, Mehmedbašić lost his nerve and allowed the car to pass without taking action. At 10:15 a.m., when the motorcade passed the central police station, nineteen-year-old student Nedeljko Čabrinović hurled a hand grenade at the Archduke's car. The driver accelerated when he saw the object flying towards him, and the bomb, which had a 10-second delay, exploded under the fourth car. Two of the occupants were seriously wounded. After Čabrinović's failed attempt, the motorcade sped away and Princip and the remaining conspirators failed to act due to the motorcade's high speed.
After the Archduke gave his scheduled speech at Town Hall, he decided to visit the victims of Čabrinović's grenade attack at the Sarajevo Hospital. In order to avoid the city centre, General Oskar Potiorek decided that the royal car should travel straight along the Appel Quay to the hospital. However, Potiorek forgot to inform the driver, Leopold Lojka, about this decision. On the way to the hospital, Lojka incorrectly turned onto a side street where Princip had positioned himself in front of a local delicatessen. After realizing his mistake, Lojka braked and began to reverse. As he did so the engine stalled and the gears locked. Princip stepped forward, drew his FN Model 1910, and at point-blank range fired twice into the car, first hitting the Archduke in the neck, and then hitting the Duchess in the abdomen. They both died shortly after.
Princip attempted to shoot himself, but the pistol was wrestled from his hand before he had a chance to fire another shot. At his trial, he said he regretted the killing of the Duchess and had meant to kill Potiorek, but was nonetheless proud of what he had done. He further stated: "I am a Yugoslav nationalist, aiming for the unification of all Yugoslavs, and I do not care what form of state, but it must be freed from Austria." The Black Hand was implicated in the assassination, leading Austria-Hungary to issue a démarche to Serbia known as the July Ultimatum, which led to the outbreak of World War I.
Princip was nineteen years old at the time and too young to receive the death penalty, as he was twenty-seven days shy of the twenty-year minimum age limit required by Habsburg Law. Instead, he received the maximum sentence of twenty years in prison.
Princip was chained to a wall in solitary confinement at the Terezín fortress, where he lived in harsh conditions and suffered from tuberculosis. The disease ate away his bones so badly that his right arm had to be amputated. In January 1916, Princip unsuccessfully attempted to hang himself with a towel. From February to June 1916, Princip met with Martin Pappenheim, a psychiatrist in the Austro-Hungarian army, four times. Pappenheim wrote that Princip believed the World War was bound to happen, independent of his actions, and that he "cannot feel himself responsible for the catastrophe."
Gavrilo Princip died on 28 April 1918, three years and ten months after the assassination. At the time of his death, weakened by malnutrition and disease, he weighed around .
Fearing his bones might become relics for Slavic nationalists, Princip's prison guards secretly took the body to an unmarked grave. A Czech soldier assigned to the burial remembered the location, however, and in 1920 Princip and the other "Heroes of Vidovdan" were exhumed and brought to Sarajevo, where they were buried together beneath the Vidovdan Heroes Chapel at the Holy Archangels Cemetery, "built to commemorate for eternity our Serb heroes". The chapel bears a citation from the Montenegrin poet Njegoš: "Blessed is he who lives forever. He had something to be born for."
Princip's legacy is still disputed. He is celebrated as a hero by Serbs but regarded as a terrorist by many Bosniaks.
In socialist Yugoslavia, Gavrilo Princip was venerated as a national hero and a freedom fighter who fought to liberate all the peoples of Yugoslavia from Austrian rule; today, however, many Croats and Bosniaks characterize Princip as a murderer. Asim Sarajlic, a senior MP of the Bosniak nationalist Party of Democratic Action, stated that Princip brought an end to "a golden era of history under Austrian rule" and that "we are strongly against the mythology of Princip as a fighter of freedom."
However, Serbs continue to venerate his memory, with Nenad Samardžija, governor of East Sarajevo, saying that the assassination was not a terrorist act but "a movement of young people who wanted to liberate themselves from colonial slavery." During the Yugoslavian era, Latin Bridge, the site of the assassination, was renamed "Princip's Bridge" in remembrance.
The house where Princip lived in Sarajevo was destroyed during World War I. After the war, it was rebuilt as a museum in the Kingdom of Yugoslavia. Yugoslavia was conquered by Germany in 1941 and Sarajevo became part of the Independent State of Croatia. The Croatian Ustaše destroyed the house again. After the establishment of Communist Yugoslavia in 1944, the house of Gavrilo Princip became a museum again and there was another museum dedicated to him within the city of Sarajevo. During the Yugoslav Wars of the 1990s, the house of Gavrilo Princip was destroyed and then rebuilt for the third time in 2015.
Princip's pistol was confiscated by the authorities and eventually given, along with the Archduke's bloody undershirt, to a Jesuit priest who was a close friend of the Archduke and had given the Archduke and his wife their last rites. The pistol and shirt remained in the possession of the Austrian Jesuits until they were offered on long-term loan to the Museum of Military History in Vienna in 2004; they are now part of the permanent exhibition there.
There have been many short-lived memorials to Gavrilo Princip. In 1917, a pillar was constructed at the corner of where the assassination took place. It was destroyed the following year. In the 1940s, a plaque commemorating Princip was removed when the German Army invaded, and after World War II, a new plaque went up which incorrectly claimed that "Gavrilo Princip threw off the German occupiers." During the Bosnian War, embossed footprints marking where Princip fired the fatal shots were torn out.
As the centenary of the assassination neared, a plaque at the corner where the assassination took place was put up, which states: "From this place on 28 June 1914, Gavrilo Princip assassinated the heir to the Austro-Hungarian throne Franz Ferdinand and his wife Sofia." On 21 April 2014, a bust of Princip was unveiled in Tovariševo, and on the centenary itself, a statue was erected in East Sarajevo.
A year later, a statue of Princip was unveiled in Belgrade by the President of Serbia Tomislav Nikolić and the President of Republika Srpska Milorad Dodik, as a gift from Republika Srpska to Serbia. At the unveiling Nikolić gave a speech, saying in part: "Princip was a hero, a symbol of liberation ideas, tyrant-murderer, idea-holder of liberation from slavery, which spanned through Europe."
|
https://en.wikipedia.org/wiki?curid=13010
|
Greenwich Village
Greenwich Village, often referred to by locals as simply "the Village", is a neighborhood on the west side of Manhattan in New York City, within Lower Manhattan. Broadly, Greenwich Village is bounded by 14th Street to the north, Broadway to the east, Houston Street to the south, and the Hudson River to the west. Greenwich Village also contains several subsections, including the West Village west of Seventh Avenue and the Meatpacking District in the northwest corner of Greenwich Village.
The neighborhood's name comes from "Groenwijck", one of the Dutch names for the village (meaning "Green District"), which was Anglicized to "Greenwich". In the 20th century, Greenwich Village was known as an artists' haven, the Bohemian capital, the cradle of the modern LGBT movement, and the East Coast birthplace of both the Beat and '60s counterculture movements. Greenwich Village contains Washington Square Park, as well as two of New York's private colleges, New York University (NYU) and the New School.
Greenwich Village is part of Manhattan Community District 2 and is patrolled by the 6th Precinct of the New York City Police Department. Greenwich Village has undergone extensive gentrification and commercialization; the four ZIP Codes that constitute the Village – 10011, 10012, 10003, and 10014 – were all ranked among the ten most expensive in the United States by median housing price in 2014, according to "Forbes", with residential property sale prices in the West Village neighborhood typically exceeding US in 2017.
The neighborhood is bordered by Broadway to the east, the North River (part of the Hudson River) to the west, Houston Street to the south, and 14th Street to the north. It is roughly centered on Washington Square Park and New York University. The neighborhoods surrounding it are the East Village and NoHo to the east, SoHo and Hudson Square to the south, and Chelsea and Union Square to the north. The East Village was formerly considered part of the Lower East Side and has never been considered a part of Greenwich Village. The western part of Greenwich Village is known as the West Village; the dividing line of its eastern border is debated but commonly cited as Seventh Avenue or Sixth Avenue. The Far West Village is another sub-neighborhood of Greenwich Village that is bordered on its west by the Hudson River and on its east by Hudson Street.
Into the early 20th century, Greenwich Village was distinguished from the upper-class neighborhood of Washington Square, centered on the major landmark of Washington Square Park, which had been known as the Empire Ward in the 19th century.
"Encyclopædia Britannica"'s 1956 article on "New York (City)" states (under the subheading "Greenwich Village") that the southern border of the Village is Spring Street, reflecting an earlier understanding. Today, Spring Street overlaps with the modern, newer SoHo neighborhood designation, while the modern "Encyclopædia Britannica" cites the southern border as Houston Street.
As Greenwich Village was once a rural, isolated hamlet to the north of the 17th-century European settlement on Manhattan Island, its street layout is more organic than the planned 19th-century grid (based on the Commissioners' Plan of 1811). Greenwich Village was allowed to keep the 18th-century street pattern of what is now called the West Village: the areas that were already built up when the plan was implemented, west of what is now Greenwich Avenue and Sixth Avenue, resulted in a neighborhood whose streets differ dramatically in layout from the ordered structure of the newer parts of Manhattan.
Many of the neighborhood's streets are narrow and some curve at odd angles. This is generally regarded as adding to both the historic character and charm of the neighborhood. In addition, as the meandering Greenwich Street used to be on the Hudson River shoreline, much of the neighborhood west of Greenwich Street is on landfill, but still follows the older street grid. When Sixth and Seventh Avenues were built in the early 20th century, they were built diagonally to the existing street plan, and many older, smaller streets had to be demolished.
Unlike the streets of most of Manhattan above Houston Street, streets in the Village are typically named rather than numbered. While some of the formerly named streets (including Factory, Herring and Amity Streets) are now numbered, they still do not always conform to the usual grid pattern when they enter the neighborhood. For example, West 4th Street runs east-west across most of Manhattan, but runs north-south in Greenwich Village, causing it to intersect with West 10th, 11th, and 12th Streets before ending at West 13th Street.
A large section of Greenwich Village, made up of more than 50 northern and western blocks in the area up to 14th Street, is part of a Historic District established by the New York City Landmarks Preservation Commission. The District's convoluted borders run no farther south than 4th Street or St. Luke's Place, and no farther east than Washington Square East or University Place. Redevelopment in that area is severely restricted, and developers must preserve the main façade and aesthetics of the buildings during renovation.
Most of the buildings of Greenwich Village are mid-rise apartments, 19th century row houses, and the occasional one-family walk-up, a sharp contrast to the high-rise landscape in Midtown and Downtown Manhattan.
Politically, Greenwich Village is in New York's 10th congressional district. It is also in the New York State Senate's 25th district, the New York State Assembly's 66th district, and the New York City Council's 3rd district.
In the 16th century, Native Americans referred to its farthest northwest corner, by the cove on the Hudson River at present-day Gansevoort Street, as Sapokanikan ("tobacco field"). The land was cleared and turned into pasture by Dutch and freed African settlers in the 1630s, who named their settlement Noortwyck ("North district", equivalent to Northwich/Northwick). In the 1630s, Governor Wouter van Twiller farmed tobacco here at his "Farm in the Woods". The English conquered the Dutch settlement of New Netherland in 1664, and Greenwich Village developed as a hamlet separate from the larger New York City to the south, on land that would eventually become the Financial District. In 1644, eleven African settlers in the Dutch colony were freed after the first black legal protest in America; all received parcels of land in what is now Greenwich Village.
The earliest known reference to the village's name as "Greenwich" dates back to 1696, in the will of Yellis Mandeville of Greenwich; however, the village was not mentioned in the city records until 1713. Sir Peter Warren began accumulating land in 1731 and built a frame house capacious enough to hold a sitting of the Assembly when smallpox rendered the city dangerous in 1739. His house, which survived until the Civil War era, overlooked the North River from a bluff; its site on the block bounded by Perry and Charles Streets, Bleecker and West 4th Streets, can still be recognized by its mid-19th century rowhouses inserted into a neighborhood still retaining many houses of the 1830–37 boom.
The oldest house remaining in Greenwich Village is the Isaacs-Hendricks House, at 77 Bedford Street (built 1799, much altered and enlarged 1836, third story 1928). When the Church of St. Luke in the Fields was founded in 1820 it stood in fields south of the road (now Christopher Street) that led from Greenwich Lane (now Greenwich Avenue) down to a landing on the North River. In 1822, a yellow fever epidemic in New York encouraged residents to flee to the healthier air of Greenwich Village, and afterwards many stayed. The future site of Washington Square was a potter's field from 1797 to 1823 when up to 20,000 of New York's poor were buried here, and still remain. The handsome Greek revival rowhouses on the north side of Washington Square were built about 1832, establishing the fashion of Washington Square and lower Fifth Avenue for decades to come. Well into the 19th century, the district of Washington Square was considered separate from Greenwich Village.
Greenwich Village historically was known as an important landmark on the map of American bohemian culture in the early and mid-20th century. The neighborhood was known for its colorful, artistic residents and the alternative culture they propagated. Due in part to the progressive attitudes of many of its residents, the Village was a focal point of new movements and ideas, whether political, artistic, or cultural. This tradition as an enclave of avant-garde and alternative culture was established during the 19th century and continued into the 20th century, when small presses, art galleries, and experimental theater thrived. In 1969, enraged members of the gay community, in search of equality, started the Stonewall riots; the Stonewall Inn was later recognized as a National Historic Landmark for its role in starting the gay rights movement.
The Tenth Street Studio Building was situated at 51 West 10th Street between Fifth and Sixth Avenues; it was commissioned by James Boorman Johnston and designed by Richard Morris Hunt. Its innovative design, featuring a domed central gallery from which interconnected rooms radiated, soon became a national architectural prototype. Hunt's studio within the building housed the first architectural school in the United States. Soon after its completion in 1857, the building helped to make Greenwich Village central to the arts in New York City, drawing artists from all over the country to work, exhibit, and sell their art. In its initial years Winslow Homer took a studio there, as did Edward Lamson Henry and many of the artists of the Hudson River School, including Frederic Church and Albert Bierstadt.
From the late 19th century until the present, the Hotel Albert has served as a cultural icon of Greenwich Village. Opened during the 1880s and originally located at 11th Street and University Place, it was first called the Hotel St. Stephan and then, after 1902, the Hotel Albert, while under the ownership of William Ryder; it served as a meeting place, restaurant and dwelling for several important artists and writers from the late 19th century well into the 20th century. After 1902, the owner's brother Albert Pinkham Ryder lived and painted there. Other noted guests who lived there include Augustus St. Gaudens, Robert Louis Stevenson, Mark Twain, Hart Crane, Walt Whitman, Anaïs Nin, Thomas Wolfe, Robert Lowell, Horton Foote, Salvador Dalí, Philip Guston, Jackson Pollock, and Andy Warhol. During the golden age of bohemianism, Greenwich Village became famous for such eccentrics as Joe Gould (profiled at length by Joseph Mitchell) and Maxwell Bodenheim, dancer Isadora Duncan, writer William Faulkner, and playwright Eugene O'Neill. Political rebellion also made its home here, whether serious (John Reed) or frivolous (Marcel Duchamp and friends set off balloons from atop Washington Square Arch on January 24, 1917, proclaiming the founding of "The Independent Republic of Greenwich Village").
In 1924, the Cherry Lane Theatre was established. Located at 38 Commerce Street, it is New York City's oldest continuously running Off-Broadway theater. A landmark in Greenwich Village's cultural landscape, it was built as a farm silo in 1817, and also served as a tobacco warehouse and box factory before Edna St. Vincent Millay and other members of the Provincetown Players converted the structure into a theatre they christened the Cherry Lane Playhouse, which opened on March 24, 1924, with the play "The Man Who Ate the Popomack". During the 1940s The Living Theatre, Theatre of the Absurd, and the Downtown Theater movement all took root there, and it developed a reputation as a showcase for aspiring playwrights and emerging voices.
In one of the many Manhattan properties that Gertrude Vanderbilt Whitney and her husband owned, Whitney established the "Whitney Studio Club" at 8 West 8th Street in 1914, as a facility where young artists could exhibit their works. By the 1930s it had evolved into her greatest legacy, the Whitney Museum of American Art, on the site of today's New York Studio School of Drawing, Painting and Sculpture. The Whitney was founded in 1931 as an answer to the Museum of Modern Art, founded in 1929, whose collection consisted mostly of European modernism and neglected American art. Gertrude Whitney decided to put the time and money into the museum after the Metropolitan Museum of Art turned down her offer to contribute her twenty-five-year collection of modern art works. In 1936, the renowned Abstract Expressionist artist and teacher Hans Hofmann moved his art school from East 57th Street to 52 West 9th Street, and in 1938 he moved again to a more permanent home at 52 West 8th Street. The school remained active until 1958, when Hofmann retired from teaching.
On January 8, 1947, stevedore Andy Hintz was fatally shot by hitmen John M. Dunn, Andrew Sheridan and Danny Gentile in front of his apartment. Before he died on January 29, he told his wife that "Johnny Dunn shot me." The three gunmen were immediately arrested. Sheridan and Dunn were executed.
The Village hosted the nation's first racially integrated nightclub when Café Society was opened in 1938 at 1 Sheridan Square by Barney Josephson. Café Society showcased African American talent and was intended to be an American version of the political cabarets that Josephson had seen in Europe before World War I. Notable performers there included Pearl Bailey, Count Basie, Nat King Cole, John Coltrane, Miles Davis, Ella Fitzgerald, Coleman Hawkins, Billie Holiday, Lena Horne, Burl Ives, Lead Belly, Anita O'Day, Charlie Parker, Les Paul and Mary Ford, Paul Robeson, Kay Starr, Art Tatum, Sarah Vaughan, Dinah Washington, Josh White, Teddy Wilson, Lester Young, and the Weavers, who also played at the Village Vanguard at Christmas 1949.
The annual Greenwich Village Halloween Parade, initiated in 1974 by Greenwich Village puppeteer and mask maker Ralph Lee, is the world's largest Halloween parade and America's only major nighttime parade, attracting more than 60,000 costumed participants, two million in-person spectators, and a worldwide television audience of over 100 million.
Greenwich Village again became important to the Bohemian scene during the 1950s, when the Beat Generation focused their energies there. Fleeing from what they saw as oppressive social conformity, a loose collection of writers, poets, artists, and students, later known as the Beats or Beatniks, moved to Greenwich Village and to North Beach in San Francisco, in many ways creating the U.S. East Coast and West Coast predecessors, respectively, to the East Village and Haight-Ashbury hippie scenes of the next decade. The Village (and surrounding New York City) would later play central roles in the writings of, among others, Maya Angelou, James Baldwin, William S. Burroughs, Truman Capote, Allen Ginsberg, Jack Kerouac, Rod McKuen, Marianne Moore, and Dylan Thomas, who collapsed at the Chelsea Hotel and died at St. Vincent's Hospital at 170 West 12th Street, in the Village, after drinking at the White Horse Tavern on November 5, 1953.
Off-Off-Broadway began in Greenwich Village in 1958 as a reaction to Off Broadway, and a "complete rejection of commercial theatre". Among the first venues for what would soon be called "Off-Off-Broadway" (a term supposedly coined by critic Jerry Tallmer of the "Village Voice") were coffeehouses in Greenwich Village, in particular, the Caffe Cino at 31 Cornelia Street, operated by the eccentric Joe Cino, who early on took a liking to actors and playwrights and agreed to let them stage plays there without bothering to read the plays first, or to even find out much about the content. Also integral to the rise of Off-Off-Broadway were Ellen Stewart at La MaMa, originally located at 321 E. 9th Street, and Al Carmines at the Judson Poets' Theater, located at Judson Memorial Church on the south side of Washington Square Park.
The Village had a cutting-edge cabaret and music scene. "The Village Gate", the "Village Vanguard", and the "Blue Note" (since 1981) regularly hosted some of the biggest names in jazz. Greenwich Village also played a major role in the development of the folk music scene of the 1960s. Music clubs included "Gerde's Folk City", "The Bitter End", "Cafe Au Go Go", "Cafe Wha?", "The Gaslight Cafe" and "The Bottom Line". Three of the four members of the Mamas & the Papas met there. Guitarist and folk singer Dave Van Ronk lived there for many years. By the mid-1960s, Village resident and cultural icon Bob Dylan had become one of the world's foremost popular songwriters, and developments in Greenwich Village would often influence the simultaneously occurring folk-rock movement in San Francisco and elsewhere, and vice versa. Dozens of other cultural and popular icons got their start in the Village's nightclub, theater, and coffeehouse scene during the 1950s, 1960s, and early 1970s, including Eric Andersen, Joan Baez, Jackson Browne, the Clancy Brothers and Tommy Makem, Richie Havens, Jimi Hendrix, Janis Ian, the Kingston Trio, the Lovin' Spoonful, Bette Midler, Liza Minnelli, Joni Mitchell, Maria Muldaur, Laura Nyro, Phil Ochs, Tom Paxton, Peter, Paul, and Mary, Carly Simon, Simon & Garfunkel, Nina Simone, Barbra Streisand, James Taylor, and the Velvet Underground. The Greenwich Village of the 1950s and 1960s was at the center of Jane Jacobs's book "The Death and Life of Great American Cities", which defended it and similar communities while criticizing common urban renewal policies of the time.
Founded by New York-based artist Mercedes Matter and her students, the New York Studio School of Drawing, Painting and Sculpture is an art school formed in the mid-1960s in the Village. Officially opened September 23, 1964, the school is still active, at 8 W. 8th Street, the site of the original Whitney Museum of American Art.
Greenwich Village was home to a safe house used by the radical anti-war movement known as the Weather Underground. On March 6, 1970, their safehouse was destroyed when an explosive device they were constructing was accidentally detonated, killing three of their members (Ted Gold, Terry Robbins, and Diana Oughton).
The Village has been a center for movements that challenged the wider American culture, for example, its role in the gay liberation movement. The Stonewall riots were a series of spontaneous, violent demonstrations by members of the gay community against a police raid that took place in the early morning hours of June 28, 1969, at the Stonewall Inn, 53 Christopher Street. Considered together, the demonstrations are widely considered to constitute the single most important event leading to the gay liberation movement and the modern fight for LGBT rights in the United States. On June 23, 2015, the Stonewall Inn was the first landmark in New York City to be recognized by the New York City Landmarks Preservation Commission on the basis of its status in LGBT history, and on June 24, 2016, the Stonewall National Monument was named the first U.S. National Monument dedicated to the LGBTQ-rights movement. Greenwich Village contains the world's oldest gay and lesbian bookstore, Oscar Wilde Bookshop, founded in 1967, while The Lesbian, Gay, Bisexual & Transgender Community Center – best known as simply "The Center" – has occupied the former Food & Maritime Trades High School at 208 West 13th Street since 1984. In 2006, the Village was the scene of an assault involving seven lesbians and a straight man that sparked appreciable media attention, with strong statements defending both sides of the case.
Since the end of the twentieth century, many artists and local historians have mourned the fact that the bohemian days of Greenwich Village are long gone, because of the extraordinarily high housing costs in the neighborhood. The artists fled to other New York City neighborhoods including SoHo, Tribeca, Dumbo, Williamsburg, and Long Island City. Nevertheless, residents of Greenwich Village still possess a strong community identity and are proud of their neighborhood's unique history and fame, and its well-known liberal live-and-let-live attitudes.
Historically, local residents and preservation groups have been concerned about development in the Village and have fought to preserve its architectural and historic integrity. In the 1960s, Margot Gayle led a group of citizens to preserve the Jefferson Market Courthouse (later reused as Jefferson Market Library), while other citizen groups fought to keep traffic out of Washington Square Park, and Jane Jacobs, using the Village as an example of a vibrant urban community, advocated to keep it that way.
Since then, preservation has been a part of the Village ethos. Shortly after the New York City Landmarks Preservation Commission (LPC) was established in 1965, it acted to protect parts of Greenwich Village, designating the small Charlton-King-Vandam Historic District in 1966, which contains the city's largest concentration of row houses in the Federal style, as well as a significant concentration of Greek Revival houses, and the even smaller MacDougal-Sullivan Gardens Historic District in 1967, a group of 22 houses sharing a common back garden, built in the Greek Revival style and later renovated with Colonial Revival façades. In 1969, the LPC designated the Greenwich Village Historic District – which remained the city's largest for four decades – despite preservationists' advocacy for the entire neighborhood to be designated an historic district. Advocates continued to pursue their goal of additional designation, spurred in particular by the increased pace of development in the 1990s.
The Greenwich Village Society for Historic Preservation (GVSHP), a nonprofit organization dedicated to the architectural and cultural character and heritage of the neighborhood, successfully proposed new districts and individual landmarks to the LPC. Those include:
The Landmarks Preservation Commission designated as landmarks several individual sites proposed by the Greenwich Village Society for Historic Preservation, including the former Bell Telephone Labs Complex (1861–1933), now Westbeth Artists' Housing, designated in 2011; the Silver Towers/University Village Complex (1967), designed by I.M. Pei and including the Picasso sculpture "Portrait of Sylvette," designated in 2008; and three early 19th-century federal houses at 127, 129 and 131 MacDougal Street.
Several contextual rezonings were enacted in Greenwich Village in recent years to limit the size and height of allowable new development in the neighborhood, and to encourage the preservation of existing buildings. The following were proposed by the GVSHP and passed by the City Planning Commission:
New York University and Greenwich Village preservationists have been embroiled in a conflict over campus expansion versus preservation of the scale and Bohemian character of the Village.
As one press critic put it in 2013, "For decades, New York University has waged architectural war on Greenwich Village." Recent examples of the university clashing with the community, often led by the Greenwich Village Society for Historic Preservation, include the destruction of the 85 West Third Street house where Edgar Allan Poe lived from 1844 to 1845, which NYU promised to rebuild using original materials but then claimed not to have enough bricks to do so; the construction of the 26-story Founders Hall dorm behind the façade of the demolished St. Ann's Church at 120 East Twelfth Street, which advocates protested as being out of scale for the low-rise area and over which they received assurances from NYU, which then built all 26 stories anyway; and the demolition in 2009 of the Provincetown Playhouse and Apartments, over protests.
In 2008, as part of a multi-stakeholder Community Task Force on NYU Development, the university agreed to a set of "Planning Principles." Yet advocates did not find NYU was following the principles in practice, culminating in a successful lawsuit against the university's "NYU 2031" plan for expansion.
For census purposes, the New York City government classifies Greenwich Village as part of the West Village neighborhood tabulation area. Based on data from the 2010 United States Census, the population of West Village was 66,880, a change of −1,603 (−2.4%) from the 68,483 counted in 2000. Covering an area of , the neighborhood had a population density of . The racial makeup of the neighborhood was 80.9% (54,100) White, 2% (1,353) African American, 0.1% (50) Native American, 8.2% (5,453) Asian, 0% (20) Pacific Islander, 0.4% (236) from other races, and 2.4% (1,614) from two or more races. Hispanic or Latino of any race were 6.1% (4,054) of the population.
The entirety of Community District 2, which comprises Greenwich Village and SoHo, had 91,638 inhabitants as of NYC Health's 2018 Community Health Profile, with an average life expectancy of 85.8 years. This is higher than the median life expectancy of 81.2 for all New York City neighborhoods. Most inhabitants are adults: a plurality (42%) are between the ages of 25 and 44, while 24% are between 45 and 64, and 15% are 65 or older. The percentages of youth and college-aged residents were lower, at 9% and 10% respectively.
As of 2017, the median household income in Community Districts 1 and 2 (including the Financial District and Tribeca) was $144,878, though the median income in Greenwich Village individually was $119,728. In 2018, an estimated 9% of Greenwich Village and SoHo residents lived in poverty, compared to 14% in all of Manhattan and 20% in all of New York City. One in twenty-five residents (4%) were unemployed, compared to 7% in Manhattan and 9% in New York City. Rent burden, or the percentage of residents who have difficulty paying their rent, is 38% in Greenwich Village and SoHo, compared to the boroughwide and citywide rates of 45% and 51% respectively. Based on this calculation, Greenwich Village and SoHo are considered high-income relative to the rest of the city and not gentrifying.
Greenwich Village includes several collegiate institutions. Since the 1830s, New York University (NYU) has had a campus there. In 1973, NYU moved from its campus in University Heights in the West Bronx (the current site of Bronx Community College) to Greenwich Village, with many buildings around Gould Plaza on West 4th Street. In 1976, Yeshiva University established the Benjamin N. Cardozo School of Law in the northern part of Greenwich Village. In the 1980s, Hebrew Union College was built in Greenwich Village. The New School, including its Parsons The New School for Design division and its graduate schools, expanded in the 2000s with the renovated, award-winning Sheila C. Johnson Design Center at 66 Fifth Avenue at 13th Street. The Cooper Union is located in Greenwich Village, at Astor Place, near St. Mark's Place on the border of the East Village. Pratt Institute established its latest Manhattan campus in an adaptively reused Brunner & Tryon-designed loft building on 14th Street, east of Seventh Avenue. The university campus building expansion was followed by a gentrification process in the 1980s.
The historic Washington Square Park is the center and heart of the neighborhood. Additionally, the Village has several other, smaller parks: Christopher, Father Fagan, Little Red Square, Minetta Triangle, Petrosino Square, and Time Landscape. There are also city playgrounds, including DeSalvio Playground, Minetta, Thompson Street, Bleecker Street, Downing Street, Mercer Street, Cpl. John A. Seravelli, and William Passannante Ballfield. One of the most famous courts is "The Cage", officially known as the West Fourth Street Courts. Sitting atop the West Fourth Street subway station at Sixth Avenue, the courts are used by basketball and American handball players from across the city. The Cage has become one of the most important tournament sites for the citywide "Streetball" amateur basketball tournament. Since 1975, New York University's art collection has been housed at the Grey Art Gallery bordering Washington Square Park, at 100 Washington Square East. The Grey Art Gallery is notable for its museum-quality exhibitions of contemporary art.
The Village has a bustling performing arts scene. It is home to many Off-Broadway and Off-Off-Broadway theaters; for instance, "Blue Man Group" has taken up residence in the Astor Place Theater. The "Village Vanguard" and the "Blue Note" still present some of the biggest names in jazz on a regular basis, as did "The Village Gate" until it closed in 1992. Other music clubs include "The Bitter End" and "Lion's Den". The Village has its own orchestra, aptly named the "Greenwich Village Orchestra". Comedy clubs dot the Village as well, including the "Comedy Cellar", where many American stand-up comedians got their start.
Several publications have offices in the Village, most notably the monthly magazines "American Heritage" and "Fortune", and formerly the citywide newsweekly the "Village Voice". The National Audubon Society, having relocated its national headquarters from a mansion in Carnegie Hill to a restored and very green former industrial building in NoHo, later moved to smaller but even greener LEED-certified offices at 225 Varick Street, on Houston Street near the Film Forum.
Greenwich Village is patrolled by the 6th Precinct of the NYPD, located at 233 West 10th Street. The 6th Precinct ranked 68th safest out of 69 patrol areas for per-capita crime in 2010. This is due to a high incidence of property crime. With a non-fatal assault rate of 10 per 100,000 people, Greenwich Village's rate of violent crimes per capita is less than that of the city as a whole. The incarceration rate of 100 per 100,000 people is lower than that of the city as a whole.
The 6th Precinct has a lower crime rate than in the 1990s, with crimes across all categories having decreased by 80.6% between 1990 and 2018. The precinct reported 1 murder, 20 rapes, 153 robberies, 121 felony assaults, 163 burglaries, 1,031 grand larcenies, and 28 grand larcenies auto in 2018.
Greenwich Village is served by two New York City Fire Department (FDNY) fire stations:
Preterm births are more common in Greenwich Village and SoHo than in other places citywide, though teenage births are less common. In Greenwich Village and SoHo, there were 91 preterm births per 1,000 live births (compared to 87 per 1,000 citywide), and 1 teenage birth per 1,000 live births (compared to 19.3 per 1,000 citywide), though the teenage birth rate is based on a small sample size. Greenwich Village and SoHo have a low rate of uninsured residents. In 2018, this population of uninsured residents was estimated to be 4%, less than the citywide rate of 12%, though this too was based on a small sample size.
The concentration of fine particulate matter, the deadliest type of air pollutant, in Greenwich Village and SoHo is , more than the city average. Sixteen percent of Greenwich Village and SoHo residents are smokers, more than the city average of 14%. In Greenwich Village and SoHo, 4% of residents are obese, 3% are diabetic, and 15% have high blood pressure, the lowest rates in the city, compared to the citywide averages of 24%, 11%, and 28% respectively. In addition, 5% of children are obese, the lowest rate in the city, compared to the citywide average of 20%.
Ninety-six percent of residents eat some fruits and vegetables every day, which is more than the city's average of 87%. In 2018, 91% of residents described their health as "good," "very good," or "excellent," more than the city's average of 78%. For every supermarket in Greenwich Village and SoHo, there are 7 bodegas.
The nearest major hospitals are Beth Israel Medical Center in Stuyvesant Town, as well as the Bellevue Hospital Center and NYU Langone Medical Center in Kips Bay, and NewYork-Presbyterian Lower Manhattan Hospital in the Civic Center area.
Greenwich Village is located within four primary ZIP Codes. The subsection of West Village, south of Greenwich Avenue and west of Sixth Avenue, is located in 10014, while the northwestern section of Greenwich Village north of Greenwich Avenue and Washington Square Park and west of Fifth Avenue is in 10011. The northeastern part of the Village, north of Washington Square Park and east of Fifth Avenue, is in 10003. The neighborhood's southern portion, the area south of Washington Square Park and east of Sixth Avenue, is in 10012. The United States Postal Service operates three post offices near Greenwich Village:
Greenwich Village and SoHo generally have a higher rate of college-educated residents than the rest of the city. The vast majority of residents age 25 and older (84%) have a college education or higher, while 4% have less than a high school education and 12% are high school graduates or have some college education. By contrast, 64% of Manhattan residents and 43% of city residents have a college education or higher. The percentage of Greenwich Village and SoHo students excelling in math rose from 61% in 2000 to 80% in 2011, and reading achievement increased from 66% to 68% during the same time period.
Greenwich Village and SoHo's rate of elementary school student absenteeism is lower than the rest of New York City. In Greenwich Village and SoHo, 7% of elementary school students missed twenty or more days per school year, less than the citywide average of 20%. Additionally, 91% of high school students in Greenwich Village and SoHo graduate on time, more than the citywide average of 75%.
Greenwich Village residents are zoned to two elementary schools: PS 3 (the Melser Charrette School) and PS 41 (the Greenwich Village School). Residents are zoned to Baruch Middle School 104 for middle school and apply to various New York City high schools. The private Greenwich Village High School was formerly located in the area, but later moved to SoHo.
Greenwich Village is home to New York University, which owns large sections of the area and most of the buildings around Washington Square Park. To the north is the campus of The New School, which is housed in several buildings that are considered historical landmarks because of their innovative architecture. New School's Sheila Johnson Design Center doubles as a public art gallery. Cooper Union has been located in the East Village since its founding in 1859.
The New York Public Library (NYPL) operates two branches in Greenwich Village. The Jefferson Market Library is located at 425 Avenue of the Americas (Sixth Avenue). The building was a courthouse in the 19th and 20th centuries before being converted into a library in 1967, and it is now a city-designated landmark. The Hudson Park branch is located at 66 Leroy Street. The branch is housed in a Carnegie library that was built in 1906 and expanded in 1920.
Greenwich Village is served by the IND Eighth Avenue Line, the IND Sixth Avenue Line, the BMT Canarsie Line, and the IRT Broadway–Seventh Avenue Line of the New York City Subway. The 14th Street/Sixth Avenue, 14th Street/Eighth Avenue, West Fourth Street–Washington Square, and Christopher Street–Sheridan Square stations are in the neighborhood. Local New York City Bus routes, operated by the Metropolitan Transportation Authority, include the M55, M7, M11, M14, and M20. On the PATH, the Christopher Street, Ninth Street, and 14th Street stations are in Greenwich Village.
Greenwich Village has long been a popular neighborhood for numerous artists and other notable people. Past and present notable residents include:
History of Macau
Macau is a Special Administrative Region (SAR) of the People's Republic of China. In 1557 it was leased to Portugal as a trading post in exchange for an annual rent of 500 taels of silver. Although Macau remained under Chinese sovereignty and authority until 1887, the Portuguese came to consider and administer it as a "de facto" colony. Following the signing of the Treaty of Nanking between China and Britain in 1842, and the signing of treaties between China and other foreign powers during the 1860s establishing "most favoured nation" status for them, the Portuguese attempted to conclude a similar treaty in 1862, but the Chinese refused, owing to a misunderstanding over the sovereignty of Macau. In 1887 the Portuguese finally managed to secure an agreement from China that Macau was Portuguese territory. In 1999 it was handed over to China. Macau was the last extant European territory in continental Asia.
The human history of Macau stretches back up to 6,000 years, and includes many different and diverse civilisations and periods of existence. Evidence of human activity and culture dating back 4,000 to 6,000 years has been discovered on the Macau Peninsula, and dating back 5,000 years on Coloane Island.
During the Qin Dynasty (221–206 BC), the region was under the jurisdiction of Panyu County, Nanhai Prefecture of the province of Guangdong. The region is first known to have been settled during the Han dynasty. It was administratively part of Dongguan Prefecture in the Jin dynasty (265–420 AD), and alternated under the control of Nanhai and Dongguan in later dynasties.
Since the 5th century, merchant ships travelling between Southeast Asia and Guangzhou used the region as a port for refuge, fresh water, and food. In 1152, during the Song dynasty (960–1279 AD), it was under the jurisdiction of the new Xiangshan County. In 1277, approximately 50,000 refugees fleeing the Mongol conquest of China settled in the coastal area.
Mong Há has long been the center of Chinese life in Macau and the site of what may be the region's oldest temple, a shrine devoted to the Buddhist Guanyin (Goddess of Mercy). Later in the Ming dynasty (1368–1644 AD), fishermen migrated to Macau from various parts of Guangdong and Fujian provinces and built the A-Ma Temple where they prayed for safety on the sea. The Hoklo Boat people were the first to show interest in Macau as a trading centre for the southern provinces. However, Macau did not develop as a major settlement until the Portuguese arrived in the 16th century.
During the age of discovery, Portuguese sailors explored the coasts of Africa and Asia. They established posts at Goa in 1510 and conquered Malacca in 1511, driving the Sultan to the southern tip of the Malay Peninsula, from where he kept making raids on the Portuguese. The Portuguese under Jorge Álvares landed at Lintin Island in the Pearl River Delta of China in 1513, with a hired junk sailing from Portuguese Malacca. They erected a stone marker at Lintin Island claiming it for the King of Portugal, Manuel I. In the same year, the Indian Viceroy Afonso de Albuquerque commissioned Rafael Perestrello, a cousin of Christopher Columbus, to sail to China in order to open up trade relations. Rafael traded with Chinese merchants in Guangzhou in that year and in 1516, but was not allowed to move further.
Portugal's king Manuel I commissioned a diplomatic and trade mission to Guangzhou in 1517, headed by Tomé Pires and Fernão Pires de Andrade. The embassy lasted until the death of the Zhengde Emperor in Nanjing, and was subsequently rejected by the Chinese Ming court, which had become less interested in new foreign contacts. The Ming court was also influenced by reports of misbehaviour by the Portuguese elsewhere in China, and by the deposed Sultan of Malacca seeking Chinese assistance to drive the Portuguese out of Malacca.
In 1521 and 1522 several more Portuguese ships reached the trading island Tamão off the coast near Guangzhou, but were driven away by the now hostile Ming authorities. Pires was imprisoned and died in Canton.
Good relations between the Portuguese and Chinese Ming dynasty resumed in the 1540s, when Portuguese aided China in eliminating coastal pirates. The two later began annual trade missions to the offshore Shangchuan Island in 1549. A few years later, Lampacau Island, closer to the Pearl River Delta, became the main base of the Portuguese trade in the region.
Diplomatic relations were further improved and salvaged by the Leonel de Sousa agreement with Cantonese authorities in 1554. In 1557, the Ming court finally gave consent for a permanent and official Portuguese trade base at Macau. In 1558, Leonel de Sousa became the second Portuguese Governor of Macau.
They later built some rudimentary stone houses around the area now called Nam Van. But not until 1557 did the Portuguese establish a permanent settlement in Macau, at an annual rent of 500 taels of silver. Later that year, the Portuguese established a walled village there. Ground rent payments began in 1573. China retained sovereignty and Chinese residents were subject to Chinese law, but the territory was under Portuguese administration. In 1582 a land lease was signed, and annual rent was paid to Xiangshan County. The Portuguese continued to pay an annual tribute up to 1863 in order to stay in Macau.
The Portuguese often married Tanka women since Han Chinese women would not have relations with them. Some of the Tanka's descendants became Macanese people. Some Tanka children were enslaved by Portuguese raiders. The Chinese poet Wu Li wrote a poem, which included a line about the Portuguese in Macau being supplied with fish by the Tanka.
After the Portuguese were allowed to permanently settle in Macau, both Chinese and Portuguese merchants flocked to Macau, although the Portuguese were never numerous (numbering just 900 in 1583 and 1200 out of 26,000 in 1640). It quickly became an important node in the development of Portugal's trade along three major routes: Macau–Malacca–Goa–Lisbon, Guangzhou–Macau–Nagasaki and Macau–Manila–Mexico. The Guangzhou–Macau–Nagasaki route was particularly profitable because the Portuguese acted as middlemen, shipping Chinese silks to Japan and Japanese silver to China, pocketing huge markups in the process. This already lucrative trade became even more so when Chinese officials handed Macau's Portuguese traders a monopoly by banning direct trade with Japan in 1547, due to piracy by Chinese and Japanese nationals.
Macau's golden age coincided with the union of the Spanish and Portuguese crowns, between 1580 and 1640. King Philip II of Spain was encouraged to not harm the status quo, to allow trade to continue between Portuguese Macau and Spanish Manila, and to not interfere with Portuguese trade with China. In 1587, Philip promoted Macau from "Settlement or Port of the Name of God" to "City of the Name of God" (Cidade do Nome de Deus de Macau).
The alliance of Portugal with Spain meant that Portuguese colonies became targets for the Netherlands, which was embroiled at the time in a lengthy struggle for its independence from Spain, the Eighty Years' War. After the Dutch East India Company was founded in 1602, the Dutch unsuccessfully attacked Macau several times, culminating in a full-scale invasion attempt in 1622, when 800 attackers were successfully repelled by 150 Macanese and Portuguese defenders and a large number of African slaves. One of the first actions of Macau's next governor, who arrived the following year, was to strengthen the city's defences, which included the construction of the Guia Fortress.
As well as being an important trading post, Macau was a center of activity for Catholic missionaries, as it was seen as a gateway for the conversion of the vast populations of China and Japan. Jesuits had first arrived in the 1560s and were followed by Dominicans in the 1580s. Both orders soon set about constructing churches and schools, the most notable of which were the Jesuit Cathedral of Saint Paul and the St. Dominic's Church built by the Dominicans. In 1576, Macau was established as an episcopal see by Pope Gregory XIII with Melchior Carneiro appointed as the first bishop.
In 1637, increasing suspicion of the intentions of Spanish and Portuguese Catholic missionaries in Japan finally led the "shōgun" to seal Japan off from foreign influence. Later named the sakoku period, this meant that no Japanese were allowed to leave the country (or return if they were living abroad), and no foreign ship was allowed to dock in a Japanese port. An exception was made for the Protestant Dutch, who were allowed to continue to trade with Japan from the confines of a small man-made island in Nagasaki, Deshima. Macau's most profitable trade route, that between Japan and China, had been severed. The crisis was compounded two years later by the loss of Malacca to the Dutch in 1641, damaging the link with Goa.
The news that the Portuguese House of Braganza had regained control of the Crown from the Spanish Habsburgs took two years to reach Macau, arriving in 1642. A ten-week celebration ensued, and despite its new-found poverty, Macau sent gifts to the new King João IV along with expressions of loyalty. In return, the King rewarded Macau with the addition of the words "There is none more Loyal" ("Não há outra mais Leal") to its existing title. Macau was now "City of the Name of God in China, There is none more Loyal".
In 1685, the privileged position of the Portuguese in trade with China ended, following a decision by the Kangxi Emperor of China to allow trade with all foreign countries. Over the next century, England, the Dutch Republic, France, Denmark, Sweden, the United States and Russia moved in, establishing factories and offices in Guangzhou and Macau. British trading dominance in the 1790s was unsuccessfully challenged by a combined French and Spanish naval squadron at the Macau Incident of 27 January 1799.
Until 20 April 1844, Macau was under the jurisdiction of Portugal's Indian colonies, the so-called "Estado português da India" (Portuguese State of India), but after this date it, along with East Timor, was accorded recognition by Lisbon (but not by Beijing) as an overseas province of Portugal. On 3 July 1844, the Treaty of Peace, Amity, and Commerce between China and the United States, known as the Treaty of Wangxia after the village of Mong Há, was signed at the Templo de Kun Iam, a temple in that village used by a Chinese judicial administrator who also oversaw matters concerning foreigners. This marked the official beginning of Sino-US relations.
After China ceded Hong Kong to the British in 1842, Macau's position as a major regional trading centre declined further still because larger ships were drawn to the deep water port of Victoria Harbour. In an attempt to reverse the decline, Portugal declared Macau a free port, expelled Chinese officials and soldiers, and thereafter levied taxes on Chinese residents. In 1848, there was a revolt of the boatmen that was put down.
Portugal continued to pay rent to China until 1849, when the Portuguese abolished the Chinese customs house and declared Macau's "independence", a year which also saw Chinese retaliation and finally the assassination of Gov. Ferreira do Amaral during the so-called Baishaling Incident. Portugal gained control of the island of Wanzai (called Lapa by the Portuguese and now known as Wanzaizhen), to the northwest of Macau and now under the jurisdiction of Zhuhai's Xiangzhou District, in 1849, but relinquished it in 1887. Control over Taipa and Coloane, two islands south of Macau, was obtained between 1851 and 1864. Macau and East Timor were again combined as an overseas province of Portugal under the control of Goa in 1883. The Protocol Respecting the Relations Between the Two Countries (signed in Lisbon on 26 March 1887) and the Beijing Treaty (signed in Beijing on 1 December 1887) confirmed "perpetual occupation and government" of Macau by Portugal, with Portugal's promise "never to alienate Macau and dependencies without agreement with China". Taipa and Coloane were also ceded to Portugal, but the border with the mainland was not delimited. Ilha Verde was incorporated into Macau's territory in 1890; once a kilometre offshore, by 1923 it had been absorbed into peninsular Macau through land reclamation.
In 1871, the Hospital Kiang Wu was founded as a traditional Chinese medical hospital. It was in 1892 that doctor Sun Yat-sen brought Western medicine services to the hospital.
In the 1930s, Macau's traditional income streams related to illegal opium sales dried up, as the Royal Navy's Eastern Fleet suppressed piracy and smuggling in support of Hong Kong's growing commercial status. Traditional local industries of fishing, firecrackers and incense, as well as tea and tobacco processing, were all small scale, while Macau Government income from 'Fan-Tan' gambling was only around US$5000 (about US$100,000 in modern money) per day. So the financially pressed Portuguese government urged the colony's administrators to develop greater economic self-sufficiency. One channel that bore fruit was as a transit point for the new trans-Pacific passenger and postal flights, for competing airlines from the US and Japan – which was at the time engaged in conflict with China. In 1935, Pan-Am secured sea-landing rights in Macau and immediately set about building related communications infrastructure in the enclave, allowing a service from San Francisco to begin in November that year.
Intertwined with this economic progress was an alleged and much discussed offer (never officially confirmed) in 1935 by Japan to buy Macau from Portugal, for US$100 million. Concerns were raised by the British, and others. In May, the Portuguese government twice denied that it would accept any such offer, and the matter was closed.
From 1848 to about the early 1870s, Macau was an infamous transit port for the coolie trade, in which slave labourers from southern China, most of them kidnapped in Guangdong province, were shipped off in packed vessels to Cuba, Peru, and other South American ports to work on plantations or in mines. Many died on the way due to malnutrition, disease, or other mistreatment. The "Dea del Mar", which had set sail from Macau to Callao in 1865 with 550 Chinese on board, arrived in Tahiti with only 162 of them still alive.
Macau became a refugee center during World War II, causing its population to climb from about 200,000 to about 700,000 people within a few years. Refugee operations were organized through the Santa Casa da Misericordia.
Unlike in the case of Portuguese Timor, which was occupied by the Japanese in 1942 along with Dutch Timor, the Japanese respected Portuguese neutrality in Macau, but only up to a point. As such, Macau enjoyed a brief period of economic prosperity as the only neutral port in South China, after the Japanese had occupied Guangzhou (Canton) and Hong Kong. In August 1943, Japanese troops seized the British steamer "Sian" in Macau and killed about 20 guards. The next month they demanded the installation of Japanese "advisors" under the alternative of military occupation. The result was that a virtual Japanese protectorate was created over Macau.
When it was discovered that neutral Macau was planning to sell aviation fuel to Japan, aircraft from the "USS Enterprise" bombed and strafed the hangar of the Naval Aviation Centre on 16 January 1945 to destroy the fuel. American air raids on targets in Macau were also made on 25 February and 11 June 1945. Following Portuguese government protest, in 1950 the United States paid US$20,255,952 to the government of Portugal.
Japanese domination ended in August 1945.
When the Chinese communists came to power in 1949, they declared the Protocol of Lisbon to be invalid as an "unequal treaty" imposed by foreigners on China. However, Beijing was not ready to settle the treaty question, leaving the maintenance of "the status quo" until a more appropriate time. Beijing took a similar position on treaties relating to the Hong Kong territories of the United Kingdom.
In 1951, the Salazar regime declared Macau, as well as other Portuguese colonies, an "Overseas Province" of Portugal.
During the 1950s and 1960s, Macau's border crossing to China, the Portas do Cerco, was also referred to as the "Far Eastern Checkpoint Charlie". A major border incident occurred in 1952, when Portuguese African troops exchanged fire with Chinese Communist border guards. According to reports, the exchange lasted for one and three-quarter hours, leaving one dead and several dozen injured on the Macau side, while more than 100 casualties were claimed on the Communist Chinese side.
In 1954, the Macau Grand Prix was established, first as a treasure hunt throughout the city, and in later years as a formal car racing event.
In 1962, the gambling industry of Macau saw a major breakthrough when the government granted the "Sociedade de Turismo e Diversões de Macau" (STDM), a syndicate jointly formed by Hong Kong and Macau businessmen, the monopoly rights to all forms of gambling. The STDM introduced western-style games and modernised the marine transport between Macau and Hong Kong, bringing millions of gamblers from Hong Kong every year.
Riots broke out in 1966 during the communist Cultural Revolution, when local Chinese and the Macau authorities clashed, the most serious episode being the so-called 12-3 incident. This was sparked by the overreaction of some Portuguese officials to what was a routine minor dispute concerning building permits. The riots caused 8 deaths and ended in a total climbdown by the Portuguese Government.
On January 29, 1967, the Portuguese Governor, José Manuel de Sousa e Faro Nobre de Carvalho, with the endorsement of Portuguese Prime Minister Salazar, signed a statement of apology at the Chinese Chamber of Commerce, under a portrait of Mao Zedong, with Ho Yin, the Chamber's President, presiding.
Two agreements were signed, one with Macau's Chinese community, and the other with mainland China. The latter committed the Government to compensate local Chinese community leaders with as much as 2 million Macau Patacas and to prohibit all Kuomintang activities in Macau. This move ended the conflict, and relations between the government and the leftist organisations remained largely peaceful.
This success in Macau encouraged leftists in Hong Kong to "do the same", leading to riots by leftists in Hong Kong in 1967.
A Portuguese proposal to return the province to China was declined by China.
Also in 1966, the Church of Our Lady of Sorrows on Coloane opened.
In 1968, the Taipa-Coloane Causeway linking Taipa island and Coloane island opened.
In 1974, following the anti-colonialist Carnation Revolution, Portugal relinquished all claims over Macau and proposed to return Macau back to Chinese sovereignty.
In 1990, the Academy of Public Security Forces was founded in Coloane.
In 1994, the Bridge of Friendship was completed, the second bridge connecting Macau and Taipa.
In November 1995, the Macau International Airport was inaugurated. Before then, the territory had only two temporary airports for small aeroplanes, in addition to several permanent heliports.
In 1997, the Macau Stadium was completed in Taipa.
Portugal and the People's Republic of China established diplomatic relations on 8 February 1979, and Beijing acknowledged Macau as "Chinese territory under Portuguese administration." A year later, Gen. Melo Egidio became the first governor of Macau to pay an official visit to Beijing.
The visit underscored both parties' interest in finding a mutually agreeable solution to Macau's status. A joint communique signed 20 May 1986 called for negotiations on the Macau question, and four rounds of talks followed between 30 June 1986 and 26 March 1987. The Joint Declaration on the Question of Macau was signed in Beijing on 13 April 1987, setting the stage for the return of Macau to full Chinese sovereignty as a Special Administrative Region on 20 December 1999.
After four rounds of talks, "the Joint Declaration of the Government of the People's Republic of China and the Government of the Republic of Portugal on the Question of Macau" was officially signed in April 1987. The two sides exchanged instruments of ratification on 15 January 1988 and the Joint Declaration entered into force. During the transitional period between the date of the entry into force of the Joint Declaration and 19 December 1999 the Portuguese government was responsible for the administration of Macau.
The Basic Law of the Macau Special Administrative Region of the People's Republic of China, was adopted by the National People's Congress (NPC) on 31 March 1993 as the constitutional law for Macau, taking effect on 20 December 1999.
The PRC has promised that, under its "one country, two systems" formula, China's socialist economic system will not be practiced in Macau and that Macau will enjoy a high degree of autonomy in all matters except foreign and defense affairs until at least 2049, fifty years after the handover.
Although offered control of Macau as early as the 1960s, the Chinese deemed the time "not yet ripe" and preferred to wait until December 1999—the very end of the millennium, two years after the Hong Kong handover—to close this chapter of history.
Upon the handover of Macau, European colonization of Asia came to an end.
In 2002, the Macau government ended the gambling monopoly system, and 3 (later 6) casino operating concessions (and subconcessions) were granted to Sociedade de Jogos de Macau (SJM, an 80%-owned subsidiary of STDM), Wynn Resorts, Las Vegas Sands, Galaxy Entertainment Group, the partnership of MGM Mirage and Pansy Ho Chiu-king, and the partnership of Melco and PBL, thus marking the beginning of Macau's rise as the new gambling hub of Asia.
As one of the measures to develop the gambling industry, the Cotai Strip was completed after the handover to China, with construction of its hotels and casinos starting in 2004. In 2007, the first of many resorts, The Venetian Macao, opened. Many other resorts followed, both in Cotai and on the Macau peninsula, providing a major tax revenue stream for the Macau government and a drop in overall unemployment over the years, down to a mere 2% in 2013.
In 2004, the Sai Van Bridge was completed, the third bridge between the Macau peninsula and Taipa island.
In 2005, the Macau East Asian Games Dome, the principal venue for the 4th East Asian Games, was inaugurated.
Also in 2005, the Macau government started a wave of social housing construction (lasting until at least 2013), building over 8,000 apartment units in the process.
As in other economies around the world, the financial crisis of 2007–08 hit Macau, leading to a stall in major construction works (such as Sands Cotai Central) and a spike in unemployment.
With residential and development space being scarce, the Macau government officially announced on 27 June 2009 that the University of Macau would build its new campus on Hengqin island, on a stretch directly facing the Cotai area, south of the current border post. Along with this development, several other residential and business development projects on Hengqin are in the planning stages.
From 2011 to 2013, further major construction commenced on several planned mega-resorts on the Cotai Strip.
2014 marked the first time that gambling revenues in Macau declined on a year-to-year basis. Starting in June 2014, gambling revenues declined for the second half of the year on a month-to-month basis (compared with 2013), causing the Macau Daily Times to announce that the "Decade of gambling expansion end[ed]". Some reasons for the slowdown are China's anti-corruption drive reaching Macau, China's economy slowing down, and a shift in Mainland Chinese tourists' preference toward other countries as travel destinations.
This led the Macau government to attempt to reconstruct the economy, to depend less on gambling revenues and focus on building world-class non-gambling tourism and leisure centers, as well as developing itself as a platform for economic and trade cooperation between China and Portuguese-speaking countries.
In 2015, the borders of Macau were redrawn by the state council, shifting the land border north to the Canal dos Patos and expanding the maritime border significantly. The changes increased the size of Macau's maritime territory by 85 square kilometers.
Typhoon Hato hit southern China in August 2017, causing damage to Macau on a scale never before experienced: major flooding and property damage, with citywide power and water outages lasting for at least 24 hours after the passage of the storm. Overall, 10 deaths and at least 200 injuries were reported. This caused widespread anger against the Macau government, which was accused of being unprepared for the typhoon and of delaying the raising of the no. 10 tropical cyclone signal; this led the head of the Macao Meteorological and Geophysical Bureau to resign. At the request of the Macau government, the Chinese People's Liberation Army Macau Garrison deployed around 1,000 troops (for the first time in Macau's history) to assist in disaster relief and clean-up.
On December 12, 2019, Macau officially opened its first rail transit system: the Macau Light Rapid Transit.