History of astronomy
Astronomy is the oldest of the natural sciences, dating back to antiquity, with its origins in the religious, mythological, cosmological, calendrical, and astrological beliefs and practices of prehistory: vestiges of these are still found in astrology, a discipline long interwoven with public and governmental astronomy. Astronomy was not completely separated from astrology in Europe (see astrology and astronomy) until the Copernican Revolution, beginning in 1543. In some cultures, astronomical data was used for astrological prognostication. The study of astronomy has received financial and social support from many institutions, especially the Church, which was its largest source of support from the 12th century to the Enlightenment.
Ancient astronomers were able to differentiate between stars and planets, as stars remain relatively fixed over the centuries while planets move an appreciable amount during a comparatively short time.
Early cultures identified celestial objects with gods and spirits. They related these objects (and their movements) to phenomena such as rain, drought, seasons, and tides. It is generally believed that the first astronomers were priests, and that they understood celestial objects and events to be manifestations of the divine, hence early astronomy's connection to what is now called astrology. A 32,500-year-old carved ivory mammoth tusk may contain the oldest known star chart (resembling the constellation Orion). It has also been suggested that drawings on the walls of the Lascaux caves in France, dating from 33,000 to 10,000 years ago, could be a graphical representation of the Pleiades, the Summer Triangle, and the Northern Crown. Ancient structures with possibly astronomical alignments (such as Stonehenge) probably fulfilled astronomical, religious, and social functions.
Calendars of the world have often been set by observations of the Sun and Moon (marking the day, month and year), and were important to agricultural societies, in which the harvest depended on planting at the correct time of year, and for which the nearly full moon was the only lighting for night-time travel into city markets.
The common modern calendar is based on the Roman calendar. Although originally a lunar calendar, it broke the traditional link of the month to the phases of the Moon and divided the year into twelve almost-equal months, mostly alternating between thirty and thirty-one days. Julius Caesar instigated calendar reform in 46 BCE and introduced what is now called the Julian calendar, based upon the 365¼-day year length originally proposed by the 4th century BCE Greek astronomer Callippus.
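The arithmetic behind the Julian reform can be sketched in a few lines; the rule below is the historical one, but the function name and the check are illustrative, not from any ancient source.

```python
# Julian leap-year rule: every fourth year gets an extra day, which makes
# the calendar's average year equal Callippus's 365 1/4 days.

def is_julian_leap_year(year: int) -> bool:
    """One leap year in every four (the Julian rule)."""
    return year % 4 == 0

# Average over a full four-year cycle: (3 * 365 + 366) / 4 = 365.25 days.
days_in_cycle = sum(366 if is_julian_leap_year(y) else 365 for y in range(1, 5))
average_year = days_in_cycle / 4
```

The true tropical year is about 365.2422 days, so the Julian year runs slow by roughly 11 minutes per year; correcting that slow drift is what later motivated the Gregorian reform.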
The origins of Western astronomy can be found in Mesopotamia, the "land between the rivers" Tigris and Euphrates, where the ancient kingdoms of Sumer, Assyria, and Babylonia were located. A form of writing known as cuneiform emerged among the Sumerians around 3500–3000 BC. Our knowledge of Sumerian astronomy is indirect, via the earliest Babylonian star catalogues dating from about 1200 BC. The fact that many star names appear in Sumerian suggests a continuity reaching into the Early Bronze Age. Astral theology, which gave planetary gods an important role in Mesopotamian mythology and religion, began with the Sumerians. They also used a sexagesimal (base 60) place-value number system, which simplified the task of recording very large and very small numbers. The modern practice of dividing a circle into 360 degrees, or an hour into 60 minutes, began with the Sumerians. For more information, see the articles on Babylonian numerals and mathematics.
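The sexagesimal place-value idea can be illustrated with a short sketch; the function names here are my own, for illustration.

```python
def to_sexagesimal(n: int) -> list[int]:
    """Decompose a non-negative integer into base-60 digits,
    most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % 60)
        n //= 60
    return digits[::-1]

def from_sexagesimal(digits: list[int]) -> int:
    """Recombine base-60 digits into an integer."""
    value = 0
    for d in digits:
        value = value * 60 + d
    return value

# 4000 = 1*3600 + 6*60 + 40, so its base-60 digits are [1, 6, 40].
```

The same base-60 structure survives today in how angles (degrees, minutes, seconds) and time (hours, minutes, seconds) are written.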
Classical sources frequently use the term Chaldeans for the astronomers of Mesopotamia, who were, in reality, priest-scribes specializing in astrology and other forms of divination.
The first evidence of recognition that astronomical phenomena are periodic and of the application of mathematics to their prediction is Babylonian. Tablets dating back to the Old Babylonian period document the application of mathematics to the variation in the length of daylight over a solar year. Centuries of Babylonian observations of celestial phenomena are recorded in the series of cuneiform tablets known as the "Enūma Anu Enlil". The oldest significant astronomical text that we possess is Tablet 63 of the "Enūma Anu Enlil", the Venus tablet of Ammi-saduqa, which lists the first and last visible risings of Venus over a period of about 21 years and is the earliest evidence that the phenomena of a planet were recognized as periodic. The MUL.APIN contains catalogues of stars and constellations as well as schemes for predicting heliacal risings and the settings of the planets, lengths of daylight measured by a water clock, gnomon, shadows, and intercalations. The Babylonian GU text arranges stars in 'strings' that lie along declination circles and thus measure right-ascensions or time-intervals, and also employs the stars of the zenith, which are also separated by given right-ascensional differences.
A significant increase in the quality and frequency of Babylonian observations appeared during the reign of Nabonassar (747–733 BC). The systematic records of ominous phenomena in Babylonian astronomical diaries that began at this time allowed for the discovery of a repeating 18-year cycle of lunar eclipses, for example. The Greek astronomer Ptolemy later used Nabonassar's reign to fix the beginning of an era, since he felt that the earliest usable observations began at this time.
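The repeating 18-year lunar-eclipse cycle mentioned above is now called the Saros, spanning about 6585.32 days. A minimal sketch of how such a cycle supports prediction; the starting date and function name are illustrative, not taken from the Babylonian diaries.

```python
from datetime import date, timedelta

SAROS_DAYS = 6585.32  # about 18 years, 11 days, and 8 hours

def next_in_saros_series(eclipse: date) -> date:
    """Approximate date of the next eclipse in the same Saros series."""
    return eclipse + timedelta(days=round(SAROS_DAYS))

# Starting from any observed lunar eclipse, eclipses of the same series
# recur roughly one Saros apart.
```

Because the extra 0.32 day shifts each successive eclipse about a third of the way around the globe, a cycle like this predicts when, though not always where, an eclipse will be visible.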
The last stages in the development of Babylonian astronomy took place during the time of the Seleucid Empire (323–60 BC). In the 3rd century BC, astronomers began to use "goal-year texts" to predict the motions of the planets. These texts compiled records of past observations to find repeating occurrences of ominous phenomena for each planet. About the same time, or shortly afterwards, astronomers created mathematical models that allowed them to predict these phenomena directly, without consulting past records. A notable Babylonian astronomer from this time was Seleucus of Seleucia, who was a supporter of the heliocentric model.
Babylonian astronomy was the basis for much of what was done in Greek and Hellenistic astronomy, in classical Indian astronomy, in Sassanian Iran, in Byzantium, in Syria, in Islamic astronomy, in Central Asia, and in Western Europe.
Astronomy in the Indian subcontinent dates back to the period of the Indus Valley Civilization during the 3rd millennium BCE, when it was used to create calendars. As the Indus Valley civilization did not leave behind written documents, the oldest extant Indian astronomical text is the Vedanga Jyotisha, dating from the Vedic period. The Vedanga Jyotisha describes rules for tracking the motions of the Sun and the Moon for the purposes of ritual. During the 6th century, astronomy was influenced by the Greek and Byzantine astronomical traditions.
Aryabhata (476–550), in his magnum opus "Aryabhatiya" (499), propounded a computational system based on a planetary model in which the Earth was taken to be spinning on its axis and the periods of the planets were given with respect to the Sun. He accurately calculated many astronomical constants, such as the periods of the planets, times of the solar and lunar eclipses, and the instantaneous motion of the Moon. Early followers of Aryabhata's model included Varahamihira, Brahmagupta, and Bhaskara II.
Astronomy was advanced during the Shunga Empire and many star catalogues were produced during this time. The Shunga period is known as the "Golden age of astronomy in India".
It saw the development of calculations for the motions and places of various planets, their rising and setting, conjunctions, and the calculation of eclipses.
Indian astronomers by the 6th century believed that comets were celestial bodies that re-appeared periodically. This was the view expressed in the 6th century by the astronomers Varahamihira and Bhadrabahu, and the 10th-century astronomer Bhattotpala listed the names and estimated periods of certain comets, but it is unfortunately not known how these figures were calculated or how accurate they were.
Bhāskara II (1114–1185) was the head of the astronomical observatory at Ujjain, continuing the mathematical tradition of Brahmagupta. He wrote the "Siddhantasiromani" which consists of two parts: "Goladhyaya" (sphere) and "Grahaganita" (mathematics of the planets). He also calculated the time taken for the Earth to orbit the Sun to 9 decimal places. The Buddhist University of Nalanda at the time offered formal courses in astronomical studies.
Other important astronomers from India include Madhava of Sangamagrama, Nilakantha Somayaji and Jyeshtadeva, who were members of the Kerala school of astronomy and mathematics from the 14th century to the 16th century. Nilakantha Somayaji, in his "Aryabhatiyabhasya", a commentary on Aryabhata's "Aryabhatiya", developed his own computational system for a partially heliocentric planetary model, in which Mercury, Venus, Mars, Jupiter and Saturn orbit the Sun, which in turn orbits the Earth, similar to the Tychonic system later proposed by Tycho Brahe in the late 16th century. Nilakantha's system, however, was mathematically more efficient than the Tychonic system, due to correctly taking into account the equation of the centre and latitudinal motion of Mercury and Venus. Most astronomers of the Kerala school of astronomy and mathematics who followed him accepted his planetary model.
The Ancient Greeks developed astronomy, which they treated as a branch of mathematics, to a highly sophisticated level. The first geometrical, three-dimensional models to explain the apparent motion of the planets were developed in the 4th century BC by Eudoxus of Cnidus and Callippus of Cyzicus. Their models were based on nested homocentric spheres centered upon the Earth. Their younger contemporary Heraclides Ponticus proposed that the Earth rotates around its axis.
A different approach to celestial phenomena was taken by natural philosophers such as Plato and Aristotle. They were less concerned with developing mathematical predictive models than with developing an explanation of the reasons for the motions of the Cosmos. In his "Timaeus", Plato described the universe as a spherical body divided into circles carrying the planets and governed according to harmonic intervals by a world soul. Aristotle, drawing on the mathematical model of Eudoxus, proposed that the universe was made of a complex system of concentric spheres, whose circular motions combined to carry the planets around the earth. This basic cosmological model prevailed, in various forms, until the 16th century.
In the 3rd century BC Aristarchus of Samos was the first to suggest a heliocentric system, although only fragmentary descriptions of his idea survive. Eratosthenes estimated the circumference of the Earth with great accuracy.
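Eratosthenes' estimate is simple enough to reproduce numerically. The figures below are the traditionally cited ones (a 7.2° noon shadow at Alexandria when the Sun stood overhead at Syene, about 5000 stadia away), not numbers taken from this text.

```python
shadow_angle_deg = 7.2                       # 1/50 of a full circle
distance_stadia = 5000                       # Alexandria to Syene
fraction_of_circle = 360 / shadow_angle_deg  # ~50

# If 5000 stadia subtend 1/50 of the circle, the whole circle is 50x that.
circumference_stadia = round(fraction_of_circle * distance_stadia)
# 250,000 stadia; with a stadion of roughly 157.5 m this is about
# 39,375 km, close to the modern value near 40,000 km.
```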
Greek geometrical astronomy developed away from the model of concentric spheres to employ more complex models in which an eccentric circle would carry around a smaller circle, called an epicycle which in turn carried around a planet. The first such model is attributed to Apollonius of Perga and further developments in it were carried out in the 2nd century BC by Hipparchus of Nicea. Hipparchus made a number of other contributions, including the first measurement of precession and the compilation of the first star catalog in which he proposed our modern system of apparent magnitudes.
The Antikythera mechanism, an ancient Greek geared device for calculating the movements of the Sun and Moon, and possibly the planets, dates from about 150–100 BC and was the first ancestor of an astronomical computer. It was discovered in an ancient shipwreck off the Greek island of Antikythera, between Kythera and Crete. The device became famous for its use of a differential gear, previously believed to have been invented in the 16th century, and for the miniaturization and complexity of its parts, comparable to a clock made in the 18th century. The original mechanism is displayed in the Bronze collection of the National Archaeological Museum of Athens, accompanied by a replica.
Depending on the historian's viewpoint, the acme or corruption of physical Greek astronomy is seen with Ptolemy of Alexandria, who wrote the classic comprehensive presentation of geocentric astronomy, the "Megale Syntaxis" (Great Synthesis), better known by its Arabic title "Almagest", which had a lasting effect on astronomy up to the Renaissance. In his "Planetary Hypotheses", Ptolemy ventured into the realm of cosmology, developing a physical model of his geometric system, in a universe many times smaller than the more realistic conception of Aristarchus of Samos four centuries earlier.
The precise orientation of the Egyptian pyramids affords a lasting demonstration of the high degree of technical skill in watching the heavens attained in the 3rd millennium BC. It has been shown that the Pyramids were aligned towards the pole star, which, because of the precession of the equinoxes, was at that time Thuban, a faint star in the constellation of Draco. Evaluation of the site of the temple of Amun-Re at Karnak, taking into account the change over time of the obliquity of the ecliptic, has shown that the Great Temple was aligned on the rising of the midwinter Sun. The length of the corridor down which sunlight would travel would have limited illumination at other times of the year. The Egyptians also tracked the position of Sirius (the Dog Star), which they believed was Anubis, their jackal-headed god, moving through the heavens. Sirius's position was critical to their civilisation: when it rose heliacally in the east before sunrise, it foretold the flooding of the Nile. This is also the origin of the phrase 'dog days of summer'.
Astronomy played a considerable part in religious matters for fixing the dates of festivals and determining the hours of the night. The titles of several temple books are preserved recording the movements and phases of the sun, moon and stars. The rising of Sirius (Egyptian: Sopdet, Greek: Sothis) at the beginning of the inundation was a particularly important point to fix in the yearly calendar.
Writing in the Roman era, Clement of Alexandria gives some idea of the importance of astronomical observations to the sacred rites:
And after the Singer advances the Astrologer (ὡροσκόπος), with a "horologium" (ὡρολόγιον) in his hand, and a "palm" (φοίνιξ), the symbols of astrology. He must know by heart the Hermetic astrological books, which are four in number. Of these, one is about the arrangement of the fixed stars that are visible; one on the positions of the Sun and Moon and five planets; one on the conjunctions and phases of the Sun and Moon; and one concerns their risings.
The Astrologer's instruments ("horologium" and "palm") are a plumb line and sighting instrument. They have been identified with two inscribed objects in the Berlin Museum; a short handle from which a plumb line was hung, and a palm branch with a sight-slit in the broader end. The latter was held close to the eye, the former in the other hand, perhaps at arm's length. The "Hermetic" books which Clement refers to are the Egyptian theological texts, which probably have nothing to do with Hellenistic Hermetism.
From the tables of stars on the ceiling of the tombs of Rameses VI and Rameses IX it seems that for fixing the hours of the night a man seated on the ground faced the Astrologer in such a position that the line of observation of the pole star passed over the middle of his head. On the different days of the year each hour was determined by a fixed star culminating or nearly culminating in it, and the position of these stars at the time is given in the tables as in the centre, on the left eye, on the right shoulder, etc. According to the texts, in founding or rebuilding temples the north axis was determined by the same apparatus, and we may conclude that it was the usual one for astronomical observations. In careful hands it might give results of a high degree of accuracy.
The astronomy of East Asia began in China. The system of solar terms was completed during the Warring States period. Chinese astronomical knowledge was later introduced throughout East Asia.
Astronomy in China has a long history. Detailed records of astronomical observations were kept from about the 6th century BC, until the introduction of Western astronomy and the telescope in the 17th century. Chinese astronomers were able to precisely predict eclipses.
Much of early Chinese astronomy was for the purpose of timekeeping. The Chinese used a lunisolar calendar, but because the cycles of the Sun and the Moon are different, astronomers often prepared new calendars and made observations for that purpose.
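The core difficulty of a lunisolar calendar is arithmetic: the solar year is not a whole number of lunar months. A short check of the classic 19-year reconciliation (known in both China and Babylon, and in Greece as the Metonic cycle), using modern values for the year and month lengths rather than figures from this text:

```python
solar_year = 365.2422     # mean tropical year, in days
synodic_month = 29.5306   # mean lunar (synodic) month, in days

# 19 solar years very nearly equal 235 lunar months (12*19 + 7 leap
# months), so inserting 7 leap months every 19 years keeps a lunisolar
# calendar aligned with the seasons.
years_19 = 19 * solar_year
months_235 = 235 * synodic_month
mismatch_days = months_235 - years_19  # under a tenth of a day per cycle
```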
Astrological divination was also an important part of astronomy. Astronomers took careful note of "guest stars" which suddenly appeared among the fixed stars. They were the first to record a supernova, in the Astrological Annals of the Houhanshu in 185 AD. Also, the supernova that created the Crab Nebula in 1054 is an example of a "guest star" observed by Chinese astronomers, although it was not recorded by their European contemporaries. Ancient astronomical records of phenomena like supernovae and comets are sometimes used in modern astronomical studies.
The world's first star catalogue was made by Gan De, a Chinese astronomer, in the 4th century BC.
Maya astronomical codices include detailed tables for calculating phases of the Moon, the recurrence of eclipses, and the appearance and disappearance of Venus as morning and evening star. The Maya based their calendrics on the carefully calculated cycles of the Pleiades, the Sun, the Moon, Venus, Jupiter, Saturn, and Mars; they also had a precise description of eclipses, as depicted in the Dresden Codex, as well as of the ecliptic or zodiac, and the Milky Way was crucial in their cosmology. A number of important Maya structures are believed to have been oriented toward the extreme risings and settings of Venus. To the ancient Maya, Venus was the patron of war, and many recorded battles are believed to have been timed to the motions of this planet. Mars is also mentioned in preserved astronomical codices and early mythology.
Although the Maya calendar was not tied to the Sun, John Teeple has proposed that the Maya calculated the solar year to somewhat greater accuracy than the Gregorian calendar. Both astronomy and an intricate numerological scheme for the measurement of time were vitally important components of Maya religion.
Since 1990 our understanding of prehistoric Europeans has been radically changed by discoveries of ancient astronomical artifacts throughout Europe. The artifacts demonstrate that Neolithic and Bronze Age Europeans had a sophisticated knowledge of mathematics and astronomy.
The Arabic and the Persian world under Islam had become highly cultured, and many important works of knowledge from Greek astronomy and Indian astronomy and Persian astronomy were translated into Arabic, used and stored in libraries throughout the area. An important contribution by Islamic astronomers was their emphasis on observational astronomy. This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century. Zij star catalogues were produced at these observatories.
In the 10th century, Abd al-Rahman al-Sufi (Azophi) carried out observations on the stars and described their positions, magnitudes, brightness, and colours, with drawings for each constellation, in his "Book of Fixed Stars". He also gave the first descriptions and pictures of "A Little Cloud", now known as the Andromeda Galaxy. He mentions it as lying before the mouth of a Big Fish, an Arabic constellation. This "cloud" was apparently commonly known to the Isfahan astronomers, very probably before 905 AD. The first recorded mention of the Large Magellanic Cloud was also given by al-Sufi. In 1006, Ali ibn Ridwan observed SN 1006, the brightest supernova in recorded history, and left a detailed description of the temporary star.
In the late 10th century, a huge observatory was built near Tehran, Iran, by the astronomer Abu-Mahmud al-Khujandi who observed a series of meridian transits of the Sun, which allowed him to calculate the tilt of the Earth's axis relative to the Sun. He noted that measurements by earlier (Indian, then Greek) astronomers had found higher values for this angle, possible evidence that the axial tilt is not constant but was in fact decreasing. In 11th-century Persia, Omar Khayyám compiled many tables and performed a reformation of the calendar that was more accurate than the Julian and came close to the Gregorian.
Other Muslim advances in astronomy included the collection and correction of previous astronomical data, resolving significant problems in the Ptolemaic model, the development of the universal latitude-independent astrolabe by Arzachel, the invention of numerous other astronomical instruments, Ja'far Muhammad ibn Mūsā ibn Shākir's belief that the heavenly bodies and celestial spheres were subject to the same physical laws as Earth, the first elaborate experiments related to astronomical phenomena, the introduction of exacting empirical observations and experimental techniques, and the introduction of empirical testing by Ibn al-Shatir, who produced the first model of lunar motion which matched physical observations.
Natural philosophy (particularly Aristotelian physics) was separated from astronomy by Ibn al-Haytham (Alhazen) in the 11th century, by Ibn al-Shatir in the 14th century, and by Qushji in the 15th century, leading to the development of an astronomical physics.
After the significant contributions of Greek scholars to the development of astronomy, it entered a relatively static era in Western Europe from the Roman era through the 12th century. This lack of progress has led some astronomers to assert that nothing happened in Western European astronomy during the Middle Ages. Recent investigations, however, have revealed a more complex picture of the study and teaching of astronomy in the period from the 4th to the 16th centuries.
Western Europe entered the Middle Ages with great difficulties that affected the continent's intellectual production. The advanced astronomical treatises of classical antiquity were written in Greek, and with the decline of knowledge of that language, only simplified summaries and practical texts were available for study. The most influential writers to pass on this ancient tradition in Latin were Macrobius, Pliny, Martianus Capella, and Calcidius. In the 6th century Bishop Gregory of Tours noted that he had learned his astronomy from reading Martianus Capella, and went on to employ this rudimentary astronomy to describe a method by which monks could determine the time of prayer at night by watching the stars.
In the 7th century the English monk Bede of Jarrow published an influential text, "On the Reckoning of Time", providing churchmen with the practical astronomical knowledge needed to compute the proper date of Easter using a procedure called the "computus". This text remained an important element of the education of clergy from the 7th century until well after the rise of the Universities in the 12th century.
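Bede's computus was worked out for the Julian calendar, but its modern descendant for the Gregorian calendar, the anonymous "Meeus/Jones/Butcher" algorithm, shows the flavor of the calculation. This is a much later formulation given here for illustration, not Bede's own procedure.

```python
def gregorian_easter(year: int) -> tuple[int, int]:
    """Anonymous Gregorian computus; returns (month, day) of Easter Sunday."""
    a = year % 19                         # position in the 19-year lunar cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30    # epact-like term for the Paschal moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # correction to reach a Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1
```

For example, `gregorian_easter(2024)` gives March 31, the date Easter actually fell that year.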
The range of surviving ancient Roman writings on astronomy and the teachings of Bede and his followers began to be studied in earnest during the revival of learning sponsored by the emperor Charlemagne. By the 9th century rudimentary techniques for calculating the position of the planets were circulating in Western Europe; medieval scholars recognized their flaws, but texts describing these techniques continued to be copied, reflecting an interest in the motions of the planets and in their astrological significance.
Building on this astronomical background, in the 10th century European scholars such as Gerbert of Aurillac began to travel to Spain and Sicily to seek out learning which they had heard existed in the Arabic-speaking world. There they first encountered various practical astronomical techniques concerning the calendar and timekeeping, most notably those dealing with the astrolabe. Soon scholars such as Hermann of Reichenau were writing texts in Latin on the uses and construction of the astrolabe and others, such as Walcher of Malvern, were using the astrolabe to observe the time of eclipses in order to test the validity of computistical tables.
By the 12th century, scholars were traveling to Spain and Sicily to seek out more advanced astronomical and astrological texts, which they translated into Latin from Arabic and Greek to further enrich the astronomical knowledge of Western Europe. The arrival of these new texts coincided with the rise of the universities in medieval Europe, in which they soon found a home. Reflecting the introduction of astronomy into the universities, John of Sacrobosco wrote a series of influential introductory astronomy textbooks: the Sphere, a Computus, a text on the Quadrant, and another on Calculation.
In the 14th century, Nicole Oresme, later bishop of Lisieux, showed that neither the scriptural texts nor the physical arguments advanced against the movement of the Earth were demonstrative and adduced the argument of simplicity for the theory that the Earth moves, and "not" the heavens. However, he concluded "everyone maintains, and I think myself, that the heavens do move and not the earth: For God hath established the world which shall not be moved." In the 15th century, Cardinal Nicholas of Cusa suggested in some of his scientific writings that the Earth revolved around the Sun, and that each star is itself a distant sun.
During the Renaissance, astronomy began to undergo a revolution in thought known as the Copernican Revolution, which takes its name from the astronomer Nicolaus Copernicus, who proposed a heliocentric system in which the planets revolved around the Sun rather than the Earth. His "De Revolutionibus Orbium Coelestium" was published in 1543. Although the claim proved highly controversial in the long term, at first it provoked only minor controversy. The theory became the dominant view because many figures, most notably Galileo Galilei, Johannes Kepler, and Isaac Newton, championed and improved upon the work. Other figures, such as Tycho Brahe with his well-known observations, also aided the new model despite not accepting the overall theory.
Brahe, a Danish noble, was an essential astronomer in this period. He came on the astronomical scene with the publication of "De Nova Stella", in which he disproved conventional wisdom on the supernova SN 1572. He also created the Tychonic system, in which he blended the mathematical benefits of the Copernican system with the "physical benefits" of the Ptolemaic system. This was one of the systems people adopted when they did not accept heliocentrism but could no longer accept the Ptolemaic system. He is best known for his highly accurate observations of the stars and the Solar System. Later he moved to Prague and continued his work there on the Rudolphine Tables, which were not finished until after his death. The Rudolphine Tables were a star catalogue designed to be more accurate than either the Alphonsine Tables, made in the 1300s, or the inaccurate Prutenic Tables. He was assisted at this time by Johannes Kepler, who would later use Brahe's observations to finish his works and to develop his own theories as well.
After the death of Brahe, Kepler was deemed his successor and was given the job of completing Brahe's unfinished works, like the Rudolphine Tables. He completed the Rudolphine Tables in 1624, although they were not published for several years. Like many other figures of this era, he was subject to religious and political troubles, like the Thirty Years' War, which led to chaos that almost destroyed some of his works. Kepler was, however, the first to attempt to derive mathematical predictions of celestial motions from assumed physical causes. He discovered the three laws of planetary motion that now carry his name, those laws being as follows:

1. The orbit of each planet is an ellipse with the Sun at one of its two foci.
2. A line joining a planet and the Sun sweeps out equal areas during equal intervals of time.
3. The square of a planet's orbital period is proportional to the cube of the semi-major axis of its orbit.
With these laws, he managed to improve upon the existing heliocentric model. The first two were published in 1609. Kepler's contributions improved upon the overall system, giving it more credibility because it adequately explained events and yielded more reliable predictions. Before this, the Copernican model was no more reliable than the Ptolemaic model. The improvement came because Kepler realized the orbits were not perfect circles, but ellipses.

Galileo Galilei was among the first to use a telescope to observe the sky; after constructing a 20x refracting telescope, he discovered the four largest moons of Jupiter in 1610, now collectively known as the Galilean moons in his honor. This discovery was the first known observation of satellites orbiting another planet. He also found that our Moon had craters, observed (and correctly explained) sunspots, and saw that Venus exhibited a full set of phases resembling lunar phases. Galileo argued that these observations were incompatible with the Ptolemaic model, which could not explain them and was even contradicted by them. The moons of Jupiter showed that not everything need orbit the Earth: other parts of the Solar System could orbit another object, just as the Earth could orbit the Sun. In the Ptolemaic system the celestial bodies were supposed to be perfect, so such objects should not have craters or sunspots. And the full set of phases of Venus could only occur if Venus orbited the Sun, which could not happen if the Earth were the center. Galileo, as the most famous example, had to face challenges from church officials, specifically the Roman Inquisition, who accused him of heresy because these beliefs went against the teachings of the Roman Catholic Church and challenged the Catholic Church's authority when it was at its weakest.
While he was able to avoid punishment for a little while, he was eventually tried, and pled guilty to heresy, in 1633. This came at some expense: his book was banned, and he was put under house arrest until he died in 1642.

Sir Isaac Newton developed further ties between physics and astronomy through his law of universal gravitation. Realizing that the same force that attracts objects to the surface of the Earth held the Moon in orbit around the Earth, Newton was able to explain – in one theoretical framework – all known gravitational phenomena. In his "Philosophiae Naturalis Principia Mathematica", he derived Kepler's laws from first principles. Those first principles are his three laws of motion together with the law of universal gravitation: every body attracts every other body with a force proportional to the product of their masses and inversely proportional to the square of the distance between them.
Thus while Kepler explained how the planets moved, Newton accurately managed to explain why the planets moved the way they do. Newton's theoretical developments laid many of the foundations of modern physics.
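Newton's derivation can be checked numerically: for a near-circular orbit his gravitation gives T² = 4π²a³/(GM), which is Kepler's third law with the constant of proportionality made explicit. A sketch using modern measured constants, not figures from the historical texts:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # mass of the Sun, kg
a = 1.496e11       # semi-major axis of Earth's orbit, m

# Kepler's third law as derived from Newton's law of gravitation:
T_seconds = 2 * math.pi * math.sqrt(a**3 / (G * M_sun))
T_years = T_seconds / (365.25 * 24 * 3600)
# T_years comes out very close to 1, as it should for the Earth.
```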
Outside of England, Newton's theory took some time to become established. Descartes' theory of vortices held sway in France, and Huygens, Leibniz and Cassini accepted only parts of Newton's system, preferring their own philosophies. Voltaire published a popular account in 1738. In 1748, the French Academy of Sciences offered a reward for solving the perturbations of Jupiter and Saturn which was eventually solved by Euler and Lagrange. Laplace completed the theory of the planets, publishing from 1798 to 1825.
Edmond Halley succeeded Flamsteed as Astronomer Royal in England and successfully predicted the return in 1758 of the comet that bears his name. Sir William Herschel found Uranus, the first new planet discovered in modern times, in 1781. The gap between the planets Mars and Jupiter disclosed by the Titius–Bode law was filled by the discovery of the asteroids Ceres and Pallas in 1801 and 1802, with many more following.
At first, astronomical thought in America was based on Aristotelian philosophy, but interest in the new astronomy began to appear in Almanacs as early as 1659.
In the 19th century, Joseph von Fraunhofer discovered that when sunlight was dispersed, a multitude of spectral lines were observed (regions where there was less or no light). Experiments with hot gases showed that the same lines could be observed in the spectra of gases, with specific lines corresponding to unique elements. It was proved that the chemical elements found in the Sun (chiefly hydrogen and helium) were also found on Earth.
During the 20th century spectroscopy (the study of these lines) advanced, especially because of the advent of quantum physics, which was necessary to understand the observations.
Although in previous centuries noted astronomers were exclusively male, at the turn of the 20th century women began to play a role in the great discoveries. In this period prior to modern computers, women at the United States Naval Observatory (USNO), Harvard University, and other astronomy research institutions began to be hired as human "computers", who performed the tedious calculations while scientists performed research requiring more background knowledge. A number of discoveries in this period were originally noted by the women "computers" and reported to their supervisors. For example, at the Harvard Observatory Henrietta Swan Leavitt discovered the cepheid variable star period-luminosity relation which she further developed into a method of measuring distance outside of the Solar System.
Annie Jump Cannon, also at Harvard, organized the stellar spectral types according to stellar temperature. In 1847, Maria Mitchell discovered a comet using a telescope. According to Lewis D. Eigen, Cannon alone, "in only 4 years discovered and catalogued more stars than all the men in history put together."
Most of these women received little or no recognition during their lives due to their lower professional standing in the field of astronomy. Although their discoveries and methods are taught in classrooms around the world, few students of astronomy can attribute the works to their authors or have any idea that there were active female astronomers at the end of the 19th century.
Most of our current knowledge was gained during the 20th century. With the help of photography, fainter objects were observed. The Sun was found to be part of a galaxy made up of more than 10^10 stars (10 billion stars). The existence of other galaxies, one of the matters of "the great debate", was settled by Edwin Hubble, who identified the Andromeda nebula as a different galaxy, along with many others at large distances, receding from our galaxy.
Physical cosmology, a discipline that has a large intersection with astronomy, made huge advances during the 20th century, with the model of the hot Big Bang heavily supported by the evidence provided by astronomy and physics, such as the redshifts of very distant galaxies and radio sources, the cosmic microwave background radiation, Hubble's law and cosmological abundances of elements.
In the 19th century, scientists began discovering forms of light which were invisible to the naked eye: X-rays, gamma rays, radio waves, microwaves, ultraviolet radiation, and infrared radiation. This had a major impact on astronomy, spawning the fields of infrared astronomy, radio astronomy, X-ray astronomy and finally gamma-ray astronomy. With the advent of spectroscopy it was proven that other stars were similar to the Sun, but with a range of temperatures, masses and sizes. The existence of our galaxy, the Milky Way, as a separate group of stars was only proven in the 20th century, along with the existence of "external" galaxies, and soon after, the expansion of the universe, seen in the recession of most galaxies from us.
Haber process
The Haber process, also called the Haber–Bosch process, is an artificial nitrogen fixation process and is the main industrial procedure for the production of ammonia today. It is named after its inventors, the German chemists Fritz Haber and Carl Bosch, who developed it in the first decade of the 20th century. The process converts atmospheric nitrogen (N2) to ammonia (NH3) by a reaction with hydrogen (H2) using a metal catalyst under high temperatures and pressures:
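The overall reaction can be written as follows (the enthalpy shown is the commonly cited value, approximately −92 kJ per mole of reaction as written):

```latex
\mathrm{N_2} + 3\,\mathrm{H_2} \rightleftharpoons 2\,\mathrm{NH_3} \qquad \Delta H^{\ominus} \approx -92\ \mathrm{kJ\,mol^{-1}}
```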
Before the development of the Haber process, ammonia had been difficult to produce on an industrial scale, with early methods such as the Birkeland–Eyde process and Frank–Caro process all being highly inefficient.
Although the Haber process is mainly used to produce fertilizer today, during World War I it provided Germany with a source of ammonia for the production of explosives, compensating for the Allied Powers' trade blockade on Chilean saltpeter.
Throughout the 19th century the demand for nitrates and ammonia for use as fertilizers and industrial feedstocks had been steadily increasing. The main source was mining niter deposits. At the beginning of the 20th century it was being predicted that these reserves could not satisfy future demands, and research into new potential sources of ammonia became more important. Although atmospheric nitrogen (N2) is abundant, comprising nearly 80% of the air, it is exceptionally stable and does not readily react with other chemicals. Converting N2 into ammonia posed a challenge for chemists globally.
Haber, with his assistant Robert Le Rossignol, developed the high-pressure devices and catalysts needed to demonstrate the Haber process at laboratory scale. They demonstrated their process in the summer of 1909 by producing ammonia from air, drop by drop, at the rate of about per hour. The process was purchased by the German chemical company BASF, which assigned Carl Bosch the task of scaling up Haber's tabletop machine to industrial-level production. He succeeded in 1910. Haber and Bosch were later awarded Nobel prizes, in 1918 and 1931 respectively, for their work in overcoming the chemical and engineering problems of large-scale, continuous-flow, high-pressure technology.
Ammonia was first manufactured using the Haber process on an industrial scale in 1913 in BASF's Oppau plant in Germany, reaching 20 tonnes per day the following year. During World War I, the production of munitions required large amounts of nitrate. The Allies had access to large sodium nitrate deposits in Chile (Chile saltpetre) controlled by British companies. Germany had no such resources, so the Haber process proved essential to the German war effort. Synthetic ammonia from the Haber process was used for the production of nitric acid, a precursor to the nitrates used in explosives.
Today, the most popular catalysts are based on iron promoted with K2O, CaO, SiO2, and Al2O3. The original Haber–Bosch reaction chambers used osmium as the catalyst, but it was available in extremely small quantities. Haber noted that uranium was almost as effective as osmium and easier to obtain. Under Bosch's direction in 1909, the BASF researcher Alwin Mittasch discovered a much less expensive iron-based catalyst, which is still used today.
During the interwar years, alternative processes were developed, the most notably different being the Casale process and Claude process. Luigi Casale and Georges Claude proposed to increase the pressure of the synthesis loop to , thereby increasing the single-pass ammonia conversion and making nearly complete liquefaction at ambient temperature feasible. Georges Claude even proposed to have three or four converters with liquefaction steps in series, thereby omitting the need for a recycle. Nowadays, most plants resemble the original Haber process ( and ), albeit with improved single-pass conversion and lower energy consumption due to process and catalyst optimization.
A major contributor to the elucidation of this mechanism was Gerhard Ertl.
This conversion is typically conducted at and between , as the gases (nitrogen and hydrogen) are passed over four beds of catalyst, with cooling between each pass for maintaining a reasonable equilibrium constant. On each pass only about 15% conversion occurs, but any unreacted gases are recycled, and eventually an overall conversion of 97% is achieved.
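The quoted figures are consistent with simple recycle arithmetic: at roughly 15% conversion per pass with unreacted gas recycled, the unconverted fraction shrinks geometrically. A minimal sketch (illustrative numbers taken from the text, not plant data):

```python
import math

per_pass = 0.15          # approximate single-pass conversion
target_overall = 0.97    # overall conversion quoted for the loop

# Fraction of feed still unreacted after n recycle passes: (1 - per_pass)**n
n = math.ceil(math.log(1 - target_overall) / math.log(1 - per_pass))
overall = 1 - (1 - per_pass) ** n

print(n, round(overall, 3))  # about 22 passes reach >= 97% overall conversion
```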
The steam reforming, shift conversion, carbon dioxide removal, and methanation steps each operate at pressures of about , and the ammonia synthesis loop operates at pressures ranging from , depending upon which proprietary process is used.
The major source of hydrogen is methane from natural gas. The conversion, steam reforming, is conducted with steam in a high-temperature and -pressure tube inside a reformer with a nickel catalyst, separating the carbon and hydrogen atoms in the natural gas. Other fossil fuel sources include coal, heavy fuel oil and naphtha, while hydrogen is also produced from biomass and from electrolysis of water.
Nitrogen gas (N2) is very unreactive because the atoms are held together by strong triple bonds. The Haber process relies on catalysts that accelerate the scission of this triple bond.
Two opposing considerations are relevant to this synthesis: the position of the equilibrium and the rate of reaction. At room temperature, the equilibrium is strongly in favor of ammonia, but the reaction does not proceed at a detectable rate due to its high activation energy. Because the reaction is exothermic, the equilibrium constant (using bar or atm units) becomes 1 around (see Le Châtelier's principle).
Above this temperature, the equilibrium quickly becomes quite unfavorable for the reaction product at atmospheric pressure, according to the van 't Hoff equation. Lowering the temperature is also unhelpful because the catalyst requires a temperature of at least 400 °C to be efficient.
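The temperature dependence follows from the van 't Hoff equation; since $\Delta H^{\ominus} < 0$ for an exothermic reaction, the equilibrium constant falls as the temperature rises:

```latex
\frac{\mathrm{d} \ln K}{\mathrm{d} T} = \frac{\Delta H^{\ominus}}{R T^{2}}
```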
Increased pressure does favor the forward reaction because there are 4 moles of reactant for every 2 moles of product, and the pressure used () alters the equilibrium concentrations to give a substantial ammonia yield. The reason for this is evident in the equilibrium relationship, which is
$$K = \frac{\left(\hat{\phi}_{\mathrm{NH_3}}\, y_{\mathrm{NH_3}}\right)^{2}}{\hat{\phi}_{\mathrm{N_2}}\, y_{\mathrm{N_2}} \left(\hat{\phi}_{\mathrm{H_2}}\, y_{\mathrm{H_2}}\right)^{3}} \left(\frac{P^{\circ}}{P}\right)^{2}$$
where $\hat{\phi}_i$ is the fugacity coefficient of species $i$, $y_i$ is the mole fraction of the same species, $P$ is the pressure in the reactor, and $P^{\circ}$ is standard pressure, typically .
Economically, pressurization of the reactor is expensive: pipes, valves, and reaction vessels need to be strengthened, and there are safety considerations when working at 20 MPa. In addition, running compressors takes considerable energy, as work must be done on the (very compressible) gas. Thus, the compromise used gives a single-pass yield of around 15%.
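The effect of pressure on yield can be illustrated by solving the equilibrium relationship numerically under the ideal-gas simplification (all fugacity coefficients set to 1) for a stoichiometric 1:3 feed. The equilibrium-constant value used below is an assumed, illustrative magnitude for roughly 450 °C, not a sourced constant:

```python
def ammonia_mole_fraction(K, P, P0=1.0):
    """Equilibrium NH3 mole fraction for a stoichiometric 1:3 N2:H2 feed,
    ideal-gas approximation (all fugacity coefficients = 1).

    K is the dimensionless equilibrium constant; P and P0 share units.
    The extent x (per mole N2 fed) satisfies:
        y_NH3^2 / (y_N2 * y_H2^3) * (P0/P)^2 = K
    """
    def residual(x):
        total = 4.0 - 2.0 * x          # moles shrink as reaction proceeds
        y_nh3 = 2.0 * x / total
        y_n2 = (1.0 - x) / total
        y_h2 = 3.0 * (1.0 - x) / total
        return y_nh3**2 / (y_n2 * y_h2**3) * (P0 / P) ** 2 - K

    lo, hi = 1e-9, 1.0 - 1e-9          # bisection: residual is monotonic in x
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    x = 0.5 * (lo + hi)
    return 2.0 * x / (4.0 - 2.0 * x)   # NH3 mole fraction at equilibrium

# Illustrative only: K ~ 4.5e-5 is an assumed magnitude near 450 °C (bar units)
print(round(ammonia_mole_fraction(K=4.5e-5, P=200.0), 3))  # roughly 0.25
print(round(ammonia_mole_fraction(K=4.5e-5, P=1.0), 4))    # far smaller at 1 bar
```

The same constant gives a substantial ammonia fraction at high pressure but almost none at atmospheric pressure, which is the economic trade-off described above.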
While removing the product (i.e., ammonia gas) from the system would increase the reaction yield, this step cannot be done within the reactor itself, since the temperature is too high; instead, ammonia is removed from the equilibrium mixture of gases leaving the reaction vessel. The hot gases are cooled enough, whilst maintaining a high pressure, for the ammonia to condense and be removed as liquid. Unreacted hydrogen and nitrogen gases are then returned to the reaction vessel to undergo further reaction. While most ammonia is removed (typically down to 2–5 mol.%), some ammonia remains in the recycle stream to the converter. In the academic literature, more complete separation of ammonia has been proposed by absorption in metal halides and by adsorption on zeolites. Such a process is called an "absorbent-enhanced Haber process" or "adsorbent-enhanced Haber process".
The Haber–Bosch process relies on catalysts to accelerate the hydrogenation of N2. The catalysts are "heterogeneous", meaning that they are solids that interact with gaseous reagents.
The catalyst typically consists of finely divided iron bound to an iron oxide carrier containing promoters possibly including aluminium oxide, potassium oxide, calcium oxide, and magnesium oxide.
In industrial practice, the iron catalyst is obtained from finely ground iron powder, which is usually obtained by reduction of high-purity magnetite (Fe3O4). The pulverized iron metal is burnt (oxidized) to give magnetite or wüstite (FeO, ferrous oxide) of a defined particle size. The magnetite (or wüstite) particles are then partially reduced, removing some of the oxygen in the process. The resulting catalyst particles consist of a core of magnetite, encased in a shell of wüstite, which in turn is surrounded by an outer shell of iron metal. The catalyst maintains most of its bulk volume during the reduction, resulting in a highly porous high-surface-area material, which enhances its effectiveness as a catalyst. Other minor components of the catalyst include calcium and aluminium oxides, which support the iron catalyst and help it maintain its surface area. These oxides of Ca, Al, K, and Si are unreactive to reduction by the hydrogen.
The production of the required magnetite catalyst requires a particular melting process in which the raw materials used must be free of catalyst poisons and the promoter aggregates must be evenly distributed in the magnetite melt. Rapid cooling of the magnetite melt, which has an initial temperature of about 3500 °C, produces the desired precursor of the highly active catalyst. Unfortunately, the rapid cooling ultimately forms a catalyst of reduced abrasion resistance. Despite this disadvantage, the method of rapid cooling is often preferred in practice.
The reduction of the catalyst precursor magnetite to α-iron is carried out directly in the production plant with synthesis gas. The reduction of the magnetite proceeds via the formation of Wüstite (FeO), so that particles with a core of magnetite surrounded by a shell of Wüstite are formed. The further reduction of magnetite and wüstite leads to the formation of α-iron, which forms together with the promoters the outer shell. The involved processes are complex and depend on the reduction temperature: At lower temperatures, wüstite disproportionates into an iron phase and a magnetite phase; at higher temperatures, the reduction of the wüstite and magnetite to iron dominates.
The α-iron forms primary crystallites with a diameter of about 30 nanometers. These crystallites form a bimodal pore system with pore diameters of about 10 nanometers (produced by the reduction of the magnetite phase) and of 25 to 50 nanometers (produced by the reduction of the wüstite phase). With the exception of cobalt oxide, the promoters are not reduced.
During the reduction of the iron oxide with synthesis gas, water vapour is formed. This water vapour must be kept in check for high catalyst quality, as contact with the finely divided iron would lead to premature aging of the catalyst through recrystallization, especially in conjunction with high temperatures. The vapour pressure of the water in the gas mixture produced during catalyst formation is thus kept as low as possible; target values are below 3 g/m³. For this reason, the reduction is carried out at high gas exchange, low pressure and low temperatures. The exothermic nature of the ammonia formation ensures a gradual increase in temperature.
The reduction of fresh, fully oxidized catalyst or precursor to full production capacity takes four to ten days. The wüstite phase is reduced faster and at lower temperatures than the magnetite phase (Fe3O4). After detailed kinetic, microscopic and X-ray spectroscopic investigations, it was shown that wüstite reacts first to metallic iron. This leads to a gradient of iron(II) ions, which diffuse from the magnetite through the wüstite to the particle surface and precipitate there as iron nuclei.
In industrial practice, pre-reduced, stabilised catalysts have gained a significant market share. They are delivered showing the fully developed pore structure, but have been oxidized again on the surface after manufacture and are therefore no longer pyrophoric. The reactivation of such pre-reduced catalysts requires only 30 to 40 hours instead of the usual time periods of several days. In addition to the short start-up time, they also have other advantages such as higher water resistance and lower weight.
Since the industrial launch of the Haber–Bosch process, many efforts have been made to improve it. Many metals were intensively tested in the search for suitable catalysts: the requirement for suitability is the dissociative adsorption of nitrogen (i.e., the nitrogen molecule must be split upon adsorption into nitrogen atoms). At the same time, the binding of the nitrogen atoms must not be too strong, otherwise the catalyst would be blocked and its catalytic ability reduced (i.e., self-poisoning). The elements in the periodic table to the left of the iron group bind nitrogen too strongly: the formation of surface nitrides makes, for example, chromium catalysts ineffective. Metals to the right of the iron group, in contrast, adsorb nitrogen too weakly to be able to activate it sufficiently for ammonia synthesis. Haber himself initially used catalysts based on osmium and uranium. Uranium, however, reacts to its nitride during catalysis, and osmium is rare.
Due to its comparatively low price, high availability, easy processing, lifespan and activity, iron was ultimately chosen as the catalyst. Producing, for example, 1800 tons of ammonia per day requires a gas pressure of at least 130 bar, temperatures of 400 to 500 °C and a reactor volume of at least 100 m³. According to theoretical and practical studies, further improvements of the pure iron catalyst are limited. It was only in 1984 that the activity of iron catalysts was noticeably increased by the inclusion of cobalt.
Ruthenium forms highly active catalysts. Allowing milder operating pressures and temperatures, Ru-based materials are referred to as second-generation catalysts. Such catalysts are prepared by decomposition of triruthenium dodecacarbonyl on graphite. A drawback of activated-carbon-supported ruthenium-based catalysts is the methanation of the support in the presence of hydrogen. Their activity is strongly dependent on the catalyst carrier and the promoters. A wide range of substances can be used as carriers, including carbon, magnesium oxide, aluminum oxide, zeolites, spinels, and boron nitride.
Ruthenium-activated carbon-based catalysts have been used industrially in the KBR Advanced Ammonia Process (KAAP) since 1992. The carbon carrier is partially degraded to methane; however, this can be mitigated by a special treatment of the carbon at 1500 °C, thus prolonging the catalyst's lifetime. In addition, the finely dispersed carbon poses a risk of explosion. For these reasons and due to its low acidity, magnesium oxide has proven to be a good alternative. Carriers with acidic properties extract electrons from ruthenium, making it less reactive, and undesirably bind ammonia to the surface.
Catalyst poisons lower the activity of the catalyst. Catalyst poisons are usually impurities in the synthesis gas (a raw material). Sulfur compounds, phosphorus compounds, arsenic compounds, and chlorine compounds are permanent catalyst poisons. Water, carbon monoxide, carbon dioxide and oxygen are temporary catalyst poisons.
Although chemically inert components of the synthesis gas mixture such as noble gases or methane are not catalyst poisons in the strict sense, they accumulate through the recycling of the process gases and thus lower the partial pressure of the reactants, which in turn has a negative effect on the conversion.
The formation of ammonia occurs from nitrogen and hydrogen according to the following equation:
The reaction is an exothermic equilibrium reaction in which the gas volume is reduced. The equilibrium constant Keq of the reaction (see table) is obtained from the following equation:
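For the reaction N2 + 3H2 ⇌ 2NH3, the equilibrium constant in terms of partial pressures takes the standard textbook form:

```latex
K_{\mathrm{eq}} = \frac{p_{\mathrm{NH_3}}^{2}}{p_{\mathrm{N_2}}\, p_{\mathrm{H_2}}^{3}}
```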
Since the reaction is exothermic, at lower temperatures the equilibrium shifts to the side of ammonia. Furthermore, four volumetric parts of the raw materials produce two volumetric parts of ammonia. According to Le Châtelier's principle, high pressure therefore also favours the formation of ammonia. In addition, high pressure is necessary to ensure sufficient surface coverage of the catalyst with nitrogen. For this reason, a nitrogen-to-hydrogen ratio of 1:3, a pressure of 250 to 350 bar, a temperature of 450 to 550 °C, and α-iron as the catalyst are used.
The catalyst ferrite (α-Fe) is produced in the reactor by the reduction of magnetite with hydrogen. The catalyst has its highest efficiency at temperatures of about 400 to 500 °C. Even though the catalyst greatly lowers the activation energy for the cleavage of the triple bond of the nitrogen molecule, high temperatures are still required for an appropriate reaction rate. At the industrially utilized reaction temperature of 450 to 550 °C, an optimum between the decomposition of ammonia into the starting materials and the effectiveness of the catalyst is achieved. The formed ammonia is continuously removed from the system. The volume fraction of ammonia in the gas mixture is about 20%.
The inert components, especially the noble gases such as argon, should not exceed a certain content in order not to reduce the partial pressure of the reactants too much. To remove the inert gas components, part of the gas is removed and the argon is separated in a gas separation plant. The extraction of pure argon from the circulating gas is carried out using the Linde process.
Modern ammonia plants produce more than 3000 tons per day in one production line. The following diagram shows the set-up of a Haber–Bosch plant:
Depending on its origin, the synthesis gas must first be freed from impurities such as hydrogen sulphide or organic sulphur compounds, which act as a catalyst poison. High concentrations of hydrogen sulphide, which occur in synthesis gas from carbonization coke, are removed in a wet cleaning stage such as the Sulfosolvan process, while low concentrations are removed by adsorption on activated carbon. Organosulfur compounds are separated by pressure swing adsorption together with carbon dioxide after CO conversion.
To produce hydrogen by steam reforming, methane reacts with water vapor using a nickel oxide-alumina catalyst in the primary reformer to form carbon monoxide and hydrogen. The energy required for this, the enthalpy ΔH, is 206 kJ/mol.
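The steam-reforming step can be written as follows, with the enthalpy of 206 kJ/mol given in the text:

```latex
\mathrm{CH_4} + \mathrm{H_2O} \rightleftharpoons \mathrm{CO} + 3\,\mathrm{H_2} \qquad \Delta H = +206\ \mathrm{kJ\,mol^{-1}}
```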
The methane reacts in the primary reformer only partially. In order to increase the hydrogen yield and keep the content of inert components (i.e., methane) as low as possible, the remaining methane is converted in a second step with oxygen to hydrogen and carbon monoxide in the secondary reformer. The secondary reformer is supplied with air as the oxygen source; this also adds the nitrogen required for the subsequent ammonia synthesis to the gas mixture.
In a third step, the carbon monoxide is oxidized to carbon dioxide, which is called CO conversion or water-gas shift reaction.
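The water-gas shift reaction can be written as follows (the enthalpy shown is the commonly cited value for this mildly exothermic step):

```latex
\mathrm{CO} + \mathrm{H_2O} \rightleftharpoons \mathrm{CO_2} + \mathrm{H_2} \qquad \Delta H \approx -41\ \mathrm{kJ\,mol^{-1}}
```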
Carbon monoxide and carbon dioxide would form carbamates with ammonia, which would clog (as solids) pipelines and apparatus within a short time. In the following process step, the carbon dioxide must therefore be removed from the gas mixture. In contrast to carbon monoxide, carbon dioxide can easily be removed from the gas mixture by gas scrubbing with triethanolamine. The gas mixture then still contains methane and noble gases such as argon, which, however, behave inertly.
The gas mixture is then compressed to operating pressure by turbo compressors. The resulting compression heat is dissipated by heat exchangers; it is used to preheat raw gases.
The actual production of ammonia takes place in the ammonia reactor. The first reactors burst under the high pressure because the atomic hydrogen in the carbon steel partially recombined into methane and produced cracks in the steel. Bosch therefore developed tube reactors consisting of a pressure-bearing steel tube in which a low-carbon iron lining tube filled with the catalyst was inserted. Hydrogen that diffused through the inner steel pipe escaped to the outside via thin holes in the outer steel jacket, the so-called Bosch holes. A disadvantage of the tubular reactors was the relatively high pressure loss, which had to be applied again by compression. The development of hydrogen-resistant chromium–molybdenum steels made it possible to construct single-walled pipes.
Modern ammonia reactors are designed as multi-storey reactors with low pressure drop, in which the catalysts are distributed as fills over about ten storeys one above the other. The gas mixture flows through them one after the other from top to bottom. Cold gas is injected from the side for cooling. A disadvantage of this reactor type is the incomplete conversion of the cold gas mixture in the last catalyst bed.
Alternatively, the reaction mixture between the catalyst layers is cooled using heat exchangers, whereby the hydrogen-nitrogen mixture is preheated to reaction temperature. Reactors of this type have three catalyst beds. In addition to good temperature control, this reactor type has the advantage of better conversion of the raw material gases compared to reactors with cold gas injection.
The reaction product is continuously removed for maximum yield. The gas mixture is cooled to 450 °C in a heat exchanger using water, freshly supplied gases and other process streams. The ammonia also condenses and is separated in a pressure separator. Unreacted nitrogen and hydrogen are then returned to the process by a circulating gas compressor, supplemented with fresh gas, and fed to the reactor. In a subsequent distillation, the product ammonia is purified.
The mechanism of ammonia synthesis contains the following seven elementary steps:
1. transport of the reactants from the gas phase through the boundary layer to the surface of the catalyst
2. pore diffusion to the reaction centre
3. adsorption of the reactants
4. reaction
5. desorption of the product
6. transport of the product back through the pore system
7. transport of the product into the gas phase
Transport and diffusion (the first and last two steps) are fast compared to adsorption, reaction and desorption because of the shell structure of the catalyst. It is known from various investigations that the rate-determining step of the ammonia synthesis is the dissociation of nitrogen. In contrast, exchange reactions between hydrogen and deuterium on the Haber–Bosch catalysts still take place at temperatures of at a measurable rate; the exchange between deuterium and hydrogen on the ammonia molecule also takes place at room temperature. Since the adsorption of both molecules is rapid, it cannot determine the speed of ammonia synthesis.
In addition to the reaction conditions, the adsorption of nitrogen on the catalyst surface depends on the microscopic structure of the catalyst surface. Iron has different crystal surfaces, whose reactivity is very different. The Fe(111) and Fe(211) surfaces have by far the highest activity. The explanation for this is that only these surfaces have so-called C7 sites - these are iron atoms with seven closest neighbours.
The dissociative adsorption of nitrogen on the surface follows the following scheme, where S* symbolizes an iron atom on the surface of the catalyst:
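Consistent with the γ, α, and β states discussed in the following paragraphs, the scheme can be written as (a reconstruction; S* denotes a surface iron atom, as above):

```latex
\mathrm{N_2} + \mathrm{S}^{*} \longrightarrow \mathrm{S}^{*}\!-\!\mathrm{N_2}\;(\gamma) \longrightarrow \mathrm{S}^{*}\!-\!\mathrm{N_2}\!-\!\mathrm{S}^{*}\;(\alpha) \longrightarrow 2\,\mathrm{S}^{*}\!-\!\mathrm{N}\;(\beta)
```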
The adsorption of nitrogen is similar to the chemisorption of carbon monoxide. On a Fe(111) surface, the adsorption of nitrogen first leads to an adsorbed γ-species with an adsorption energy of 24 kJ·mol−1 and an N–N stretch vibration of 2100 cm−1. Since nitrogen is isoelectronic to carbon monoxide, it adsorbs in an on-end configuration in which the molecule is bound perpendicular to the metal surface at one nitrogen atom. This has been confirmed by photoelectron spectroscopy.
Ab initio MO calculations have shown that, in addition to the σ bond of the free electron pair of nitrogen to the metal, there is a π bond from the d orbitals of the metal to the π* orbitals of nitrogen, which strengthens the iron–nitrogen bond. The nitrogen in the α state is more strongly bound, with 31 kJ·mol−1. The resulting N–N bond weakening could be experimentally confirmed by a reduction of the wavenumbers of the N–N stretching oscillation to 1490 cm−1.
Further heating of the Fe(111) area covered by α-N2 leads to both desorption and emergence of a new band at 450 cm−1. This represents a metal-nitrogen oscillation, the β state. A comparison with vibration spectra of complex compounds allows the conclusion that the N2 molecule is bound "side-on", with an N atom in contact with a C7 site. This structure is called "surface nitride". The surface nitride is very strongly bound to the surface. Hydrogen atoms (Hads), which are very mobile on the catalyst surface, quickly combine with it.
Infrared spectroscopically detected surface imides (NHad), surface amides (NH2,ad) and surface ammoniacates (NH3,ad) are formed, the latter decay under NH3 release (desorption). The individual molecules were identified or assigned by X-ray photoelectron spectroscopy (XPS), high-resolution electron energy loss spectroscopy (HREELS) and IR spectroscopy.
On the basis of these experimental findings, the reaction mechanism is believed to involve the following steps (see also figure):
1. N2 (g) → N2 (adsorbed)
2. N2 (adsorbed) → 2 N (adsorbed)
3. H2 (g) → H2 (adsorbed)
4. H2 (adsorbed) → 2 H (adsorbed)
5. N (adsorbed) + 3 H (adsorbed) → NH3 (adsorbed)
6. NH3 (adsorbed) → NH3 (g)
Reaction 5 occurs in three steps, forming NH, NH2, and then NH3. Experimental evidence points to reaction 2 as being the slow, rate-determining step. This is not unexpected, since the bond broken, the nitrogen triple bond, is the strongest of the bonds that must be broken.
As with all Haber–Bosch catalysts, nitrogen dissociation is the rate determining step for ruthenium activated carbon catalysts. The active center for ruthenium is a so-called B5 site, a 5-fold coordinated position on the Ru(0001) surface where two ruthenium atoms form a step edge with three ruthenium atoms on the Ru(0001) surface. The number of B5 sites depends on the size and shape of the ruthenium particles, the ruthenium precursor and the amount of ruthenium used. The reinforcing effect of the basic carrier used in the ruthenium catalyst is similar to the promoter effect of alkali metals used in the iron catalyst.
An energy diagram can be created based on the enthalpy of reaction of the individual steps. The energy diagram can be used to compare homogeneous and heterogeneous reactions: Due to the high activation energy of the dissociation of nitrogen, the homogeneous gas phase reaction is not realizable. The catalyst avoids this problem as the energy gain resulting from the binding of nitrogen atoms to the catalyst surface overcompensates for the necessary dissociation energy so that the reaction is finally exothermic. Nevertheless, the dissociative adsorption of nitrogen remains the rate determining step: not because of the activation energy, but mainly because of the unfavorable pre-exponential factor of the rate constant. Although hydrogenation is endothermic, this energy can easily be applied by the reaction temperature (about 700 K).
When first invented, the Haber process competed against another industrial process, the cyanamide process. However, the cyanamide process consumed large amounts of electrical power and was more labor-intensive than the Haber process.
As of 2018, the Haber process produces 230 million tonnes of anhydrous ammonia per year. The ammonia is used mainly as a nitrogen fertilizer: as ammonia itself, in the form of ammonium nitrate, and as urea. The Haber process consumes 3–5% of the world's natural-gas production (around 1–2% of the world's energy supply). In combination with advances in breeding, herbicides and pesticides, these fertilizers have helped to increase the productivity of agricultural land:
The energy intensity of the process contributes to climate change and other environmental problems:
Since nitrogen use efficiency is typically less than 50%, farm runoff from heavy use of fixed industrial nitrogen disrupts biological habitats.
Nearly 50% of the nitrogen found in human tissues originated from the Haber–Bosch process. Thus, the Haber process serves as the "detonator of the population explosion", enabling the global population to increase from 1.6 billion in 1900 to 7.7 billion by November 2018.
Hot or Not
Hot or Not is a rating site that allows users to rate the attractiveness of photos submitted voluntarily by others. The site offers a matchmaking engine called 'Meet Me' and an extended profile feature called "Hotlists". The domain hotornot.com is currently owned by Hot Or Not Limited, and was previously owned by Avid Life Media. 'Hot or Not' was a significant influence on the people who went on to create the social media sites Facebook and YouTube.
Users would submit photographs of themselves to the site so that other users could rate their attractiveness on a scale of 1 to 10, with the cumulative average serving as the overall score for a given photograph.
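The cumulative-average scoring described above amounts to keeping a running total and count per photograph; a minimal sketch:

```python
def add_vote(state, vote):
    """Fold one 1-10 vote into a running (total, count) pair."""
    if not 1 <= vote <= 10:
        raise ValueError("votes must be between 1 and 10")
    total, count = state
    return total + vote, count + 1

def score(state):
    """Cumulative average, or None if the photo has no votes yet."""
    total, count = state
    return total / count if count else None

state = (0, 0)
for vote in [8, 9, 7, 10, 6]:
    state = add_vote(state, vote)
print(score(state))  # 8.0
```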
The site was founded in October 2000 by James Hong and Jim Young, two friends and Silicon Valley-based engineers. Both graduated from the University of California, Berkeley, in electrical engineering, with Young pursuing a Ph.D. at the time. The site was inspired by ideas from other developers.
The site was a technical solution to a disagreement the founders had one day over a passing woman's attractiveness. It was originally called "Am I Hot or Not". Within a week of launching, it had reached almost two million page views per day. Within a few months, the site was immediately behind CNET and NBCi on Nielsen NetRatings' list of the top 25 advertising domains. To keep up with rising costs, Hong and Young added a matchmaking component to the website called "Meet Me at Hot or Not", in effect a system of range voting. The matchmaking service proved especially successful, and the site continues to generate most of its revenue through subscriptions. In the December 2006 issue of "Time" magazine, the founders of YouTube stated that they originally set out to make a video version of Hot or Not before developing their more inclusive site. Mark Zuckerberg of Facebook similarly got his start by creating a Hot or Not-style site called FaceMash, where he posted photos from Harvard's Facebook for the university's community to rate.
Hot or Not was sold for a rumored $20 million on February 8, 2008, to Avid Life Media, owners of Ashley Madison. Annual revenue had reached $7.5 million, with net profits of $5.5 million. The founders had initially started out $60,000 in debt because of tuition fees Hong had paid for his MBA. On July 31, 2008, Hot or Not launched Hot or Not Gossip and a Baresi rate box (a "hot meter"), a subdivision intended to expand their market, run by former radio DJ turned celebrity blogger Zack Taylor.
Hot or Not was preceded by similar rating sites, such as RateMyFace, which was registered a year earlier in the summer of 1999, and AmIHot.com, which was registered in January 2000 by MIT freshman Daniel Roy. Despite its predecessors' head starts, Hot or Not quickly became the most popular. Since AmIHotOrNot.com's launch, the concept has spawned many imitators. The concept always remained the same, but the subject matter varied greatly. It has also been integrated with a wide variety of dating and matchmaking systems. In 2007, BecauseImHot.com launched and deleted anyone with a rating below 7 after a voting audit or the first 50 votes (whichever came first).
In 2005, as an example of using image-morphing methods to study the effects of averageness, imaging researcher Pierre Tourigny created a composite of about 30 faces to find out the current standard of good looks on the Internet. On the Hot or Not web site, people rate others' attractiveness on a scale of 1 to 10; an average score based on hundreds or even thousands of individual ratings takes only a few days to emerge. To make this hot-or-not palette of morphed images, photos from the site were sorted by rank and fed to SquirlzMorph to create multi-morph composites. Unlike projects such as Face of Tomorrow, where the subjects pose for the purpose, the portraits are blurry because the source images are of low resolution, with differences in variables such as posture, hair style and glasses, so only 36 control points could be used for the morphs. A similar study was done with Miss Universe contestants, as shown in the averageness article, as well as one for age, as shown in the youthfulness article.
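Real multi-morph composites warp images using control points (36 in this case) before averaging; as a rough stand-in, a plain per-pixel mean of pre-aligned, equally sized grayscale images captures the averaging step. This is a simplification for illustration, not SquirlzMorph's method:

```python
import numpy as np

def composite(images):
    """Per-pixel mean of equally sized, pre-aligned grayscale images.
    (A crude stand-in for control-point morphing.)"""
    stack = np.stack([img.astype(float) for img in images])
    return stack.mean(axis=0)

# Three toy 2x2 "faces" with uniform brightness levels
faces = [np.full((2, 2), v) for v in (60, 120, 180)]
avg = composite(faces)
print(avg[0, 0])  # 120.0
```

Without alignment, averaging blurs features, which is exactly why low-resolution sources with few control points yield blurry composites.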
A 2006 "hot or not"-style study of 264 women and 18 men at the Washington University School of Medicine, published online in the journal "Brain Research", indicates that a person's brain determines whether an image is erotically appealing long before the viewer is even aware they are seeing the picture. Moreover, according to these researchers, one of the basic functions of the brain is to classify images into a hot-or-not type categorization. The study's researchers also discovered that sexy shots induce a uniquely powerful reaction in the brain, equal in effect for both men and women, and that erotic images produced a strong reaction in the hypothalamus.
H.263
H.263 is a video compression standard originally designed as a low-bit-rate compressed format for videoconferencing. It was standardized by the ITU-T Video Coding Experts Group (VCEG) in a project ending in 1995/1996. It is a member of the H.26x family of video coding standards in the domain of the ITU-T.
Like previous H.26x standards, H.263 is based on discrete cosine transform (DCT) video compression. H.263 was later extended to add various additional enhanced features in 1998 and 2000. Smaller additions were also made in 1997 and 2001, and a unified edition was produced in 2005.
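The DCT underlying H.263 concentrates a block's energy into a few low-frequency coefficients. Below is a minimal sketch of the 2-D DCT-II on an 8x8 block, written directly from the textbook formula rather than taken from any codec's optimized implementation:

```python
import numpy as np

def dct2(block):
    """Naive orthonormal 2-D DCT-II (the transform at the heart of H.26x coding)."""
    n = block.shape[0]
    k = np.arange(n)
    # 1-D DCT-II basis matrix: C[u, x] = sqrt(2/n) * cos(pi*(2x+1)*u / (2n))
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)  # DC row uses sqrt(1/n) scaling
    return C @ block @ C.T

block = np.full((8, 8), 100.0)  # a flat 8x8 sample block
coeffs = dct2(block)
# A flat block compacts into a single DC coefficient; all AC terms vanish.
print(round(coeffs[0, 0]))  # 800
```

Energy compaction like this is what makes subsequent quantization and entropy coding effective at low bit rates.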
The H.263 standard was first designed to be utilized in H.324 based systems (PSTN and other circuit-switched network videoconferencing and videotelephony), but it also found use in H.323 (RTP/IP-based videoconferencing), H.320 (ISDN-based videoconferencing, where it was the most widely used video compression standard), RTSP (streaming media) and SIP (IP-based videoconferencing) solutions.
H.263 is a required video coding format in ETSI 3GPP technical specifications for IP Multimedia Subsystem (IMS), Multimedia Messaging Service (MMS) and Transparent end-to-end Packet-switched Streaming Service (PSS). In 3GPP specifications, H.263 video is usually carried in the 3GP container format.
H.263 also found many applications on the internet: much Flash Video content (as used on sites such as YouTube, Google Video, and MySpace) used to be encoded in Sorenson Spark format (an incomplete implementation of H.263). The original version of the RealVideo codec was based on H.263 until the release of RealVideo 8.
H.263 was developed as an evolutionary improvement based on experience from H.261 and H.262 (aka MPEG-2 Video), the previous ITU-T standards for video compression, and the MPEG-1 standard developed in ISO/IEC. Its first version was completed in 1995 and provided a suitable replacement for H.261 at all bit rates. It was further enhanced in projects known as H.263v2 (also known as H.263+ or H.263 1998) and H.263v3 (also known as H.263++ or H.263 2000). It was also used as the basis for the development of MPEG-4 Part 2. MPEG-4 Part 2 is H.263 compatible in the sense that basic "baseline" H.263 bitstreams are correctly decoded by an MPEG-4 Video decoder.
The next enhanced format developed by ITU-T VCEG (in partnership with MPEG) after H.263 was the H.264 standard, also known as AVC and MPEG-4 Part 10. As H.264 provides a significant improvement in capability beyond H.263, the H.263 standard is now considered a legacy design. Most new videoconferencing products now include H.264 as well as H.263 and H.261 capabilities. An even newer standard, HEVC, has also been developed by VCEG and MPEG, and has begun to emerge in some applications.
Since the original ratification of H.263 in March 1996 (approving a document that was produced in November 1995), there have been two subsequent editions, which improved on the original standard by adding optional extensions (for example, the H.263v2 project added a deblocking filter in its Annex J).
The original version of H.263 specified the following annexes:
The first version of H.263 supported a limited set of picture sizes:
In March 1997, an informative Appendix I, describing Error Tracking (an encoding technique for providing improved robustness to data losses and errors), was approved to provide information for the aid of implementers with an interest in such techniques.
H.263v2 (also known as "H.263+", or as "the 1998 version of H.263") is the informal name of the second edition of the ITU-T H.263 international video coding standard. It retained the entire technical content of the original version of the standard, but enhanced H.263 capabilities by adding several annexes which can substantially improve encoding efficiency and provide other capabilities (such as enhanced robustness against data loss in the transmission channel). The H.263+ project was ratified by the ITU in February 1998. It added the following Annexes:
H.263v2 also added support for flexible customized picture formats and custom picture clock frequencies. As noted above, the only picture formats previously supported in H.263 had been Sub-QCIF, QCIF, CIF, 4CIF, and 16CIF, and the only picture clock frequency had been 30000/1001 (approximately 29.97) clock ticks per second.
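The five fixed formats and the original picture clock frequency mentioned above can be tabulated directly; the pixel dimensions below follow the H.263 specification, each format scaling CIF by powers of two:

```python
# Picture formats supported by the original (1995/96) version of H.263.
PICTURE_FORMATS = {
    "Sub-QCIF": (128, 96),
    "QCIF":     (176, 144),
    "CIF":      (352, 288),
    "4CIF":     (704, 576),
    "16CIF":    (1408, 1152),
}

# The only picture clock frequency in the original standard:
# 30000/1001, approximately 29.97 ticks per second (NTSC-derived).
clock_hz = 30000 / 1001
print(f"{clock_hz:.2f}")  # 29.97

# 4CIF doubles CIF in each dimension; 16CIF doubles 4CIF.
w, h = PICTURE_FORMATS["4CIF"]
assert (w, h) == (2 * 352, 2 * 288)
```

H.263v2's custom picture formats and clock frequencies removed exactly these two fixed tables.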
H.263v2 specified a set of recommended modes in an informative appendix (Appendix II, since deprecated):
The definition of H.263v3 (also known as H.263++ or as the 2000 version of H.263) added three annexes. These annexes and an additional annex that specified profiles (approved the following year) were originally published as separate documents from the main body of the standard itself. The additional annexes specified are:
The prior informative Appendix II (recommended optional enhancement) was obsoleted by the creation of the normative Annex X.
In June 2001, another informative appendix (Appendix III, Examples for H.263 encoder/decoder implementations) was approved. It describes techniques for encoding and for error/loss concealment by decoders.
In January 2005, a unified H.263 specification document was produced (with the exception of Appendix III, which remains a separately published document).
In August 2005, an implementors guide was approved to correct a small error in the seldom-used Annex Q reduced-resolution update mode.
In countries without software patents, H.263 video can be legally encoded and decoded with the free LGPL-licensed libavcodec library (part of the FFmpeg project) which is used by programs such as ffdshow, VLC media player and MPlayer.
Histone
In biology, histones are highly basic proteins, rich in lysine and arginine, found in eukaryotic cell nuclei, where they pack and order the DNA into structural units called nucleosomes. Histones are the chief protein components of chromatin, acting as spools around which DNA winds and playing a role in gene regulation. Without histones, the unwound DNA in chromosomes would be very long (a length-to-width ratio of more than 10 million to 1 in human DNA). For example, each human diploid cell (containing 23 pairs of chromosomes) has about 1.8 meters of DNA; wound on histones, the diploid cell has about 90 micrometers (0.09 mm) of chromatin. When the diploid cells are duplicated and condensed during mitosis, the result is about 120 micrometers of chromosomes.
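The compaction figures above are easy to sanity-check with a couple of lines of arithmetic. The ~2 nm width of the DNA double helix used for the length-to-width ratio is an added assumption, not stated in the text:

```python
dna_length_m = 1.8          # DNA per human diploid cell, metres
chromatin_length_m = 90e-6  # the same DNA wound on histones (90 micrometres)

# Fold-compaction achieved by winding DNA onto histones
fold_compaction = dna_length_m / chromatin_length_m
print(round(fold_compaction))  # 20000

# Length-to-width ratio of naked DNA, assuming a ~2 nm helix diameter
ratio = dna_length_m / 2e-9
print(f"{ratio:.0e}")  # 9e+08, well over 10 million to 1
```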
Five major families of histones exist: H1/H5, H2A, H2B, H3, and H4. Histones H2A, H2B, H3 and H4 are known as the core histones, while histones H1/H5 are known as the linker histones.
The core histones all exist as dimers, which are similar in that they all possess the histone fold domain: three alpha helices linked by two loops. It is this helical structure that allows for interaction between distinct dimers, particularly in a head-tail fashion (also called the handshake motif). The resulting four distinct dimers then come together to form one octameric nucleosome core, approximately 63 Angstroms in diameter (a solenoid-like particle). Around 146 base pairs (bp) of DNA wrap around this core particle 1.65 times in a left-handed super-helical turn to give a particle of around 100 Angstroms across. The linker histone H1 binds the nucleosome at the entry and exit sites of the DNA, thus locking the DNA into place and allowing the formation of higher-order structure. The most basic such formation is the 10 nm fiber, or "beads on a string" conformation. This involves the wrapping of DNA around nucleosomes, with approximately 50 base pairs of DNA (referred to as linker DNA) separating each pair of nucleosomes. Higher-order structures include the 30 nm fiber (forming an irregular zigzag) and the 100 nm fiber, these being the structures found in normal cells. During mitosis and meiosis, the condensed chromosomes are assembled through interactions between nucleosomes and other regulatory proteins.
Histones are subdivided into canonical replication-dependent histones that are expressed during the S-phase of the cell cycle and replication-independent histone variants, expressed during the whole cell cycle. In animals, genes encoding canonical histones are typically clustered along the chromosome, lack introns and use a stem-loop structure at the 3' end instead of a polyA tail. Genes encoding histone variants are usually not clustered, have introns and their mRNAs are regulated with polyA tails. Complex multicellular organisms typically have a higher number of histone variants providing a variety of different functions. Recent data are accumulating about the roles of diverse histone variants, highlighting the functional links between variants and the delicate regulation of organism development. Histone variants from different organisms, their classification and variant-specific features can be found in the "HistoneDB 2.0 - Variants" database.
The following is a list of human histone proteins:
The nucleosome core is formed of two H2A-H2B dimers and a H3-H4 tetramer, forming two nearly symmetrical halves by tertiary structure (C2 symmetry; one macromolecule is the mirror image of the other). The H2A-H2B dimers and H3-H4 tetramer also show pseudodyad symmetry. The four "core" histones (H2A, H2B, H3 and H4) are relatively similar in structure and are highly conserved through evolution, all featuring a helix-turn-helix-turn-helix motif (a DNA-binding protein motif that recognizes specific DNA sequences). They also share the feature of long "tails" on one end of the amino acid structure, this being the location of post-translational modification (see below).
Archaeal histones contain only a H3-H4-like dimeric structure made out of the same protein. Such dimeric structures can stack into a tall superhelix ("supernucleosome") onto which DNA coils in a manner similar to nucleosome spools. Only some archaeal histones have tails.
It has been proposed that histone proteins are evolutionarily related to the helical part of the extended AAA+ ATPase domain, the C-domain, and to the N-terminal substrate recognition domain of Clp/Hsp100 proteins. Despite the differences in their topology, these three folds share a homologous helix-strand-helix (HSH) motif.
Using an electron paramagnetic resonance spin-labeling technique, British researchers measured the distances between the spools around which eukaryotic cells wind their DNA. They determined the spacings range from 59 to 70 Å.
In all, histones make five types of interactions with DNA:
The highly basic nature of histones, aside from facilitating DNA-histone interactions, contributes to their water solubility.
Histones are subject to post-translational modification by enzymes primarily on their N-terminal tails, but also in their globular domains. Such modifications include methylation, citrullination, acetylation, phosphorylation, SUMOylation, ubiquitination, and ADP-ribosylation. These modifications affect their role in gene regulation.
In general, genes that are active have less bound histone, while inactive genes are highly associated with histones during interphase. It also appears that the structure of histones has been evolutionarily conserved, as any deleterious mutations would be severely maladaptive. All histones have a highly positively charged N-terminus with many lysine and arginine residues.
Histones were discovered in 1884 by Albrecht Kossel. The word "histone" dates from the late 19th century and is derived from the German word "Histon", a word itself of uncertain origin, perhaps from the Greek "histanai" or "histos".
In the early 1960s, before the types of histones were known and before histones were known to be highly conserved across taxonomically diverse organisms, James F. Bonner and his collaborators began a study of these proteins that were known to be tightly associated with the DNA in the nucleus of higher organisms. Bonner and his postdoctoral fellow Ru Chih C. Huang showed that isolated chromatin would not support RNA transcription in the test tube, but if the histones were extracted from the chromatin, RNA could be transcribed from the remaining DNA. Their paper became a citation classic. Paul Ts'o and James Bonner had called together a World Congress on Histone Chemistry and Biology in 1964, at which it became clear that there was no consensus on the number of kinds of histone and that no one knew how they would compare when isolated from different organisms. Bonner and his collaborators then developed methods to separate each type of histone, purified individual histones, compared amino acid compositions in the same histone from different organisms, and compared amino acid sequences of the same histone from different organisms in collaboration with Emil Smith from UCLA. For example, they found the Histone IV sequence to be highly conserved between peas and calf thymus. However, their work on the biochemical characteristics of individual histones did not reveal how the histones interacted with each other or with the DNA to which they were tightly bound.
Also in the 1960s, Vincent Allfrey and Alfred Mirsky had suggested, based on their analyses of histones, that acetylation and methylation of histones could provide a transcriptional control mechanism, but did not have available the kind of detailed analysis that later investigators were able to conduct to show how such regulation could be gene-specific. Until the early 1990s, histones were dismissed by most as inert packing material for eukaryotic nuclear DNA, a view based in part on the models of Mark Ptashne and others, who believed that transcription was activated by protein-DNA and protein-protein interactions on largely naked DNA templates, as is the case in bacteria.
During the 1980s, Yahli Lorch and Roger Kornberg showed that a nucleosome on a core promoter prevents the initiation of transcription in vitro, and Michael Grunstein demonstrated that histones repress transcription in vivo, leading to the idea of the nucleosome as a general gene repressor. Relief from repression is believed to involve both histone modification and the action of chromatin-remodeling complexes. Vincent Allfrey and Alfred Mirsky had earlier proposed a role of histone modification in transcriptional activation, regarded as a molecular manifestation of epigenetics. Michael Grunstein and David Allis found support for this proposal, in the importance of histone acetylation for transcription in yeast and the activity of the transcriptional activator Gcn5 as a histone acetyltransferase.
The discovery of the H5 histone appears to date back to the 1970s, and it is now considered an isoform of Histone H1.
Histones are found in the nuclei of eukaryotic cells, and in certain Archaea, namely Proteoarchaea and Euryarchaea, but not in bacteria. The unicellular algae known as dinoflagellates were previously thought to be the only eukaryotes that completely lack histones; however, later studies showed that their DNA still encodes histone genes. Unlike the core histones, lysine-rich linker histone (H1) proteins are found in bacteria, where they are otherwise known as nucleoprotein HC1/HC2.
Archaeal histones may well resemble the evolutionary precursors to eukaryotic histones. Histone proteins are among the most highly conserved proteins in eukaryotes, emphasizing their important role in the biology of the nucleus. In contrast, mature sperm cells largely use protamines to package their genomic DNA, most likely because this allows them to achieve an even higher packaging ratio.
There are some "variant" forms in some of the major classes. They share amino acid sequence homology and core structural similarity with a specific class of major histones but also have features of their own that are distinct from the major histones. These "minor histones" usually carry out specific functions of chromatin metabolism. For example, the histone H3-like CENPA is associated only with the centromere region of the chromosome. The histone H2A variant H2A.Z is associated with the promoters of actively transcribed genes and is also involved in preventing the spread of silent heterochromatin. Furthermore, H2A.Z plays roles in chromatin that support genome stability. Another H2A variant, H2A.X, is phosphorylated at S139 in regions around double-strand breaks and marks the region undergoing DNA repair. Histone H3.3 is associated with the body of actively transcribed genes.
Nucleosome histones may have evolved from ribosomal proteins (RPS6/RPS15) with which they share much in common, both being short and basic proteins.
Histones act as spools around which DNA winds. This enables the compaction necessary to fit the large genomes of eukaryotes inside cell nuclei: the compacted molecule is 40,000 times shorter than an unpacked molecule.
Histones undergo posttranslational modifications that alter their interaction with DNA and nuclear proteins. The H3 and H4 histones have long tails protruding from the nucleosome, which can be covalently modified at several places. Modifications of the tail include methylation, acetylation, phosphorylation, ubiquitination, SUMOylation, citrullination, and ADP-ribosylation. The core of the histones H2A and H2B can also be modified. Combinations of modifications are thought to constitute a code, the so-called "histone code". Histone modifications act in diverse biological processes such as gene regulation, DNA repair, chromosome condensation (mitosis) and spermatogenesis (meiosis).
The common nomenclature of histone modifications is:
So H3K4me1 denotes the monomethylation of the 4th residue (a lysine) from the start (i.e., the N-terminal) of the H3 protein.
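This nomenclature is regular enough to parse mechanically. A minimal sketch handling the common histone + residue + modification form (the residue and modification lookup tables are deliberately abbreviated):

```python
import re

# Matches e.g. "H3K4me3": histone H3, lysine (K) at position 4, tri-methylation.
MARK = re.compile(r"^(H[1-4]|H2A|H2B)([A-Z])(\d+)([a-z]+)(\d*)$")

AMINO = {"K": "lysine", "R": "arginine", "S": "serine", "T": "threonine"}
MOD = {"me": "methylation", "ac": "acetylation", "ph": "phosphorylation",
       "ub": "ubiquitination"}

def parse_mark(mark):
    """Split a histone-mark string into histone, residue and modification."""
    m = MARK.match(mark)
    if not m:
        raise ValueError(f"unrecognised mark: {mark}")
    histone, aa, pos, mod, n = m.groups()
    return {
        "histone": histone,
        "residue": f"{AMINO[aa]} {pos}",
        "modification": MOD[mod] + (f" (x{n})" if n else ""),
    }

print(parse_mark("H3K4me1"))
# {'histone': 'H3', 'residue': 'lysine 4', 'modification': 'methylation (x1)'}
```

Variant-specific marks such as those on H2A.X would need a richer grammar than this sketch covers.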
A huge catalogue of histone modifications have been described, but a functional understanding of most is still lacking. Collectively, it is thought that histone modifications may underlie a histone code, whereby combinations of histone modifications have specific meanings. However, most functional data concerns individual prominent histone modifications that are biochemically amenable to detailed study.
The addition of one, two, or many methyl groups to lysine has little effect on the chemistry of the histone; methylation leaves the charge of the lysine intact and adds a minimal number of atoms so steric interactions are mostly unaffected. However, proteins containing Tudor, chromo or PHD domains, amongst others, can recognise lysine methylation with exquisite sensitivity and differentiate mono, di and tri-methyl lysine, to the extent that, for some lysines (e.g.: H4K20) mono, di and tri-methylation appear to have different meanings. Because of this, lysine methylation tends to be a very informative mark and dominates the known histone modification functions.
Recently it has been shown that the addition of a serotonin group to the position 5 glutamine of H3 happens in serotonergic cells such as neurons. This is part of the differentiation of the serotonergic cells. This post-translational modification happens in conjunction with the H3K4me3 modification. The serotonylation potentiates the binding of the general transcription factor TFIID to the TATA box.
What was said above of the chemistry of lysine methylation also applies to arginine methylation, and some protein domains—e.g., Tudor domains—can be specific for methyl arginine instead of methyl lysine. Arginine is known to be mono- or di-methylated, and methylation can be symmetric or asymmetric, potentially with different meanings.
Enzymes called peptidylarginine deiminases (PADs) hydrolyze the imine group of arginines and attach a keto group, so that there is one less positive charge on the amino acid residue. This process has been implicated in the activation of gene expression by making the modified histones less tightly bound to DNA and thus making the chromatin more accessible. PADs can also produce the opposite effect by removing or inhibiting mono-methylation of arginine residues on histones, thus antagonizing the positive effect arginine methylation has on transcriptional activity.
Addition of an acetyl group has a major chemical effect on lysine as it neutralises the positive charge. This reduces electrostatic attraction between the histone and the negatively charged DNA backbone, loosening the chromatin structure; highly acetylated histones form more accessible chromatin and tend to be associated with active transcription. Lysine acetylation appears to be less precise in meaning than methylation, in that histone acetyltransferases tend to act on more than one lysine; presumably this reflects the need to alter multiple lysines to have a significant effect on chromatin structure. The modification includes H3K27ac.
Addition of a negatively charged phosphate group can lead to major changes in protein structure, leading to the well-characterised role of phosphorylation in controlling protein function. It is not clear what structural implications histone phosphorylation has, but histone phosphorylation has clear functions as a post-translational modification, and binding domains such as BRCT have been characterised.
Most well-studied histone modifications are involved in control of transcription.
Two histone modifications are particularly associated with active transcription:
Three histone modifications are particularly associated with repressed genes:
Analysis of histone modifications in embryonic stem cells (and other stem cells) revealed many gene promoters carrying both H3K4Me3 and H3K27Me3, in other words these promoters display both activating and repressing marks simultaneously. This peculiar combination of modifications marks genes that are poised for transcription; they are not required in stem cells, but are rapidly required after differentiation into some lineages. Once the cell starts to differentiate, these bivalent promoters are resolved to either active or repressive states depending on the chosen lineage.
Marking sites of DNA damage is an important function for histone modifications. Histone modification also helps protect DNA from destruction by ultraviolet radiation from the Sun.
H3K36me3 has the ability to recruit the MSH2-MSH6 (hMutSα) complex of the DNA mismatch repair pathway. Consistently, regions of the human genome with high levels of H3K36me3 accumulate fewer somatic mutations due to mismatch repair activity.
Epigenetic modifications of histone tails in specific regions of the brain are of central importance in addictions. Once particular epigenetic alterations occur, they appear to be long lasting "molecular scars" that may account for the persistence of addictions.
Cigarette smokers (about 15% of the US population) are usually addicted to nicotine. After 7 days of nicotine treatment of mice, acetylation of both histone H3 and histone H4 was increased at the FosB promoter in the nucleus accumbens of the brain, causing a 61% increase in FosB expression. This would also increase expression of the splice variant Delta FosB. In the nucleus accumbens of the brain, Delta FosB functions as a "sustained molecular switch" and "master control protein" in the development of an addiction.
About 7% of the US population is addicted to alcohol. In rats exposed to alcohol for up to 5 days, there was an increase in histone 3 lysine 9 acetylation in the pronociceptin promoter in the brain amygdala complex. This acetylation is an activating mark for pronociceptin. The nociceptin/nociceptin opioid receptor system is involved in the reinforcing or conditioning effects of alcohol.
Methamphetamine addiction occurs in about 0.2% of the US population. Chronic methamphetamine use causes methylation of the lysine in position 4 of histone 3 located at the promoters of the "c-fos" and the "C-C chemokine receptor 2 (ccr2)" genes, activating those genes in the nucleus accumbens (NAc). c-fos is well known to be important in addiction. The "ccr2" gene is also important in addiction, since mutational inactivation of this gene impairs addiction.
The first step of chromatin structure duplication is the synthesis of histone proteins: H1, H2A, H2B, H3, H4. These proteins are synthesized during S phase of the cell cycle. There are different mechanisms which contribute to the increase of histone synthesis.
Yeast carry one or two copies of each histone gene, which are not clustered but rather scattered throughout the chromosomes. Histone gene transcription is controlled by multiple gene regulatory proteins, such as transcription factors that bind to histone promoter regions. In budding yeast, the candidate gene for activation of histone gene expression is SBF. SBF is a transcription factor that is activated in late G1 phase, when it dissociates from its repressor Whi5. This occurs when Whi5 is phosphorylated by Cdc28, a G1/S Cdk. Suppression of histone gene expression outside of S phase depends on Hir proteins, which form inactive chromatin structure at the locus of histone genes, causing transcriptional activators to be blocked.
In metazoans the increase in the rate of histone synthesis is due to the increase in processing of pre-mRNA to its mature form as well as decrease in mRNA degradation; this results in an increase of active mRNA for translation of histone proteins. The mechanism for mRNA activation has been found to be the removal of a segment of the 3' end of the mRNA strand, and is dependent on association with stem-loop binding protein (SLBP). SLBP also stabilizes histone mRNAs during S phase by blocking degradation by the 3'hExo nuclease. SLBP levels are controlled by cell-cycle proteins, causing SLBP to accumulate as cells enter S phase and degrade as cells leave S phase. SLBP are marked for degradation by phosphorylation at two threonine residues by cyclin dependent kinases, possibly cyclin A/ cdk2, at the end of S phase. Metazoans also have multiple copies of histone genes clustered on chromosomes which are localized in structures called Cajal bodies as determined by genome-wide chromosome conformation capture analysis (4C-Seq).
Nuclear protein Ataxia-Telangiectasia (NPAT), also known as nuclear protein coactivator of histone transcription, is a transcription factor which activates histone gene transcription on chromosomes 1 and 6 of human cells. NPAT is also a substrate of cyclin E-Cdk2, which is required for the transition between G1 phase and S phase. NPAT activates histone gene expression only after it has been phosphorylated by the G1/S-Cdk cyclin E-Cdk2 in early S phase. This shows an important regulatory link between cell-cycle control and histone synthesis.
Hierarchical organization
A hierarchical organization is an organizational structure where every entity in the organization, except one, is subordinate to a single other entity. This arrangement is a form of hierarchy. In an organization, the hierarchy usually consists of a single person or group with power at the top, with subsequent levels of power beneath them. This is the dominant mode of organization among large organizations; most corporations, governments, criminal enterprises, and organized religions are hierarchical organizations with different levels of management, power or authority. For example, the broad, top-level overview of the general organization of the Catholic Church consists of the Pope, then the Cardinals, then the Archbishops, and so on.
Members of hierarchical organizational structures chiefly communicate with their immediate superior and with their immediate subordinates. Structuring organizations in this way is useful partly because it can reduce the communication overhead by limiting information flow.
A hierarchy is typically visualized as a pyramid, where the height of the ranking or person depicts their power status and the width of that level represents how many people or business divisions are at that level relative to the whole—the highest-ranking people are at the apex, and there are very few of them; the base may include thousands of people who have no subordinates. These hierarchies are typically depicted with a tree or triangle diagram, creating an organizational chart or organigram. Those nearest the top have more power than those nearest the bottom, and there are fewer people at the top than at the bottom. As a result, superiors in a hierarchy generally have higher status and command greater rewards than their subordinates.
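Such an organizational chart maps naturally onto a tree data structure. The sketch below (all job titles are invented for illustration) prints an indented organigram and counts members by walking the tree:

```python
# Illustrative sketch with hypothetical titles: an organizational chart as
# a tree, where each node knows its subordinates.  Printing the tree with
# indentation reproduces the organigram shape described above: one entity
# at the apex, widening levels beneath it.

class Node:
    def __init__(self, title):
        self.title = title
        self.subordinates = []

    def add(self, *subs):
        self.subordinates.extend(subs)
        return self  # allow chained construction

    def show(self, depth=0):
        print("  " * depth + self.title)
        for s in self.subordinates:
            s.show(depth + 1)

    def headcount(self):
        return 1 + sum(s.headcount() for s in self.subordinates)

ceo = Node("CEO")
ceo.add(
    Node("VP Sales").add(Node("Account Manager"), Node("Account Manager")),
    Node("VP Engineering").add(Node("Engineer"), Node("Engineer"), Node("Engineer")),
)
ceo.show()
print("total members:", ceo.headcount())  # prints "total members: 8"
```

Every node except the root (`CEO`) has exactly one superior, matching the definition of a hierarchical organization given above.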
All governments and most companies have similar structures. Traditionally, the monarch was the pinnacle of the state. In many countries, feudalism and manorialism provided a formal social structure that established hierarchical links at every level of society, with the monarch at the top.
In modern post-feudal states the nominal top of the hierarchy still remains the head of state, which may be a president or a constitutional monarch, although in many modern states the powers of the head of state are delegated among different bodies. Below the head, there is commonly a senate, parliament or congress, which in turn often delegate the day-to-day running of the country to a prime minister. In many democracies, the people are considered to be the notional top of the hierarchy, over the head of state; in reality, the people's power is restricted to voting in elections.
In business, the business owner traditionally occupied the pinnacle of the organization. In most modern large companies, there is now no longer a single dominant shareholder, and the collective power of the business owners is for most purposes delegated to a board of directors, which in turn delegates the day-to-day running of the company to a managing director or CEO. Again, although the shareholders of the company are the nominal top of the hierarchy, in reality many companies are run at least in part as personal fiefdoms by their management; corporate governance rules are an attempt to mitigate this tendency.
The organizational development theorist Elliott Jacques identified a special role for hierarchy in his concept of requisite organization.
The iron law of oligarchy, introduced by Robert Michels, describes the inevitable tendency of hierarchical organizations to become oligarchic in their decision making.
The Peter Principle is a term coined by Laurence J. Peter in which the selection of a candidate for a position in a hierarchical organization is based on the candidate's performance in their current role, rather than on abilities relevant to the intended role. Thus, employees only stop being promoted once they can no longer perform effectively, and managers in a hierarchical organization "rise to the level of their incompetence."
Hierarchiology is another term coined by Laurence J. Peter, described in his humorous book of the same name, to refer to the study of hierarchical organizations and the behavior of their members.
The IRG Solution – hierarchical incompetence and how to overcome it argued that hierarchies were inherently incompetent, and were only able to function due to large amounts of informal lateral communication fostered by private informal networks.
In the work of diverse theorists such as William James (1842–1910), Michel Foucault (1926–1984) and Hayden White, important critiques of hierarchical epistemology are advanced. James famously asserts in his work "Radical Empiricism" that clear distinctions of type and category are a constant but unwritten goal of scientific reasoning, so that when they are discovered, success is declared. But if aspects of the world are organized differently, involving inherent and intractable ambiguities, then scientific questions are often considered unresolved. A hesitation to declare success upon the discovery of ambiguities leaves heterarchy at an artificial and subjective disadvantage in the scope of human knowledge. This bias is an artifact of an aesthetic or pedagogical preference for hierarchy, and not necessarily an expression of objective observation.
Hierarchies and hierarchical thinking have been criticized by many people, including Susan McClary, and by one political philosophy that is vehemently opposed to hierarchical organization: anarchism. Heterarchy is the most commonly proposed alternative to hierarchy, and it has been combined with responsible autonomy by Gerard Fairtlough in his work on triarchy theory. The most beneficial aspect of a hierarchical organization is the clear chain of command it establishes. However, a hierarchy may become dismantled by abuse of power.
Amidst constant innovation in information and communication technologies, hierarchical authority structures are giving way to greater decision-making latitude for individuals and more flexible definitions of job activities. This new style of work presents a challenge to existing organizational forms; some research studies contrast traditional organizational forms against groups that operate as online communities, characterized by personal motivation and the satisfaction of making one's own decisions. With all levels of an organization having access to information and communication via digital means, power structures align more as a wirearchy, in which power and authority flow not from hierarchical levels but from information, trust, credibility, and a focus on results. | https://en.wikipedia.org/wiki?curid=14031
Harry Secombe
Sir Harry Donald Secombe (8 September 1921 – 11 April 2001) was a Welsh comedian, actor and singer. Secombe was a member of the British radio comedy programme "The Goon Show" (1951–1960), playing many characters, but most notably, Neddie Seagoon. An accomplished tenor, he also appeared in musicals and films – notably as Mr Bumble in "Oliver!" (1968) – and, in his later years, was a presenter of television shows incorporating hymns and other devotional songs.
Secombe was born in St Thomas, Swansea, the third of four children of Nellie Jane Gladys (née Davies), a shop manageress, and Frederick Ernest Secombe, a grocer. From the age of 11 he attended Dynevor School, a state grammar school in central Swansea.
His family were regular churchgoers, belonging to the congregation of St Thomas Church. A member of the choir, from the age of 12 Secombe would perform a sketch entitled "The Welsh Courtship" at church socials, acting as "feed" to his sister Carol. His elder brother, Fred Secombe, was the author of several books about his experiences as an Anglican priest and rector.
After leaving school in 1937, Secombe became a pay clerk at Baldwin's store. With war looming, he decided in 1938 that he would join the Territorial Army. Very short sighted, he got a friend to tell him the sight test, and then learnt it by heart. He served as a Lance Bombardier in No.132 Field Regiment of the Royal Artillery. He would refer to the unit in which he served during the Second World War in the North African Campaign, Sicily, and Italy, as "The Five-Mile Snipers". While in North Africa Secombe met Spike Milligan for the first time. In Sicily he joined a concert party and developed his own comedy routines to entertain the troops.
When Secombe visited the Falkland Islands to entertain the troops after the 1982 Falklands War, his old regiment promoted him to the rank of sergeant – 37 years after he had been demobbed.
He made his first radio broadcast in May 1944 on a variety show aimed at the services. Following the end of fighting in the war but prior to demobilisation Secombe joined a pool of entertainers in Naples and formed a comedy duo with Spike Milligan.
Secombe joined the cast of the Windmill Theatre in 1946, using a routine he had developed in Italy about how people shaved. Secombe always claimed that his ability to sing could always be counted on to save him when he bombed.
After a regional touring career, his first break came in radio when he was chosen as resident comedian for the Welsh series "Welsh Rarebit," followed by appearances on "Variety Bandbox" and a regular role in "Educating Archie".
Secombe met Michael Bentine at the Windmill Theatre, and was introduced to Peter Sellers by his agent Jimmy Grafton. Both Milligan and Sellers credited him with keeping the act on the bill when club owners had wanted to sack them. Together with Spike Milligan, the four wrote a comedy radio script, and "Those Crazy People" was commissioned and first broadcast on 28 May 1951. Produced by Dennis Main Wilson, this would soon become "The Goon Show" and the show remained on the air until 1960. Secombe mainly played Neddie Seagoon, around whom the show's absurd plots developed. In 1955, whilst appearing on "The Goon Show", Secombe was approached by the BBC to step in at short notice to take the lead in the radio comedy "Hancock's Half Hour". The star of the show, Tony Hancock, had decided to take an unannounced break abroad the day before the live airing of the second season. Secombe appeared in the lead for the first three episodes and had a guest role in the fourth after Hancock's return. All four episodes are lost, but following the discovery of the original scripts the episodes were rerecorded in 2017, with Andrew Secombe performing the role held by his then late father.
With the success of "The Goon Show", Secombe developed a dual career as both a comedy actor and a singer. At the beginning of his career as an entertainer his act would end with a joke version of the duet "Sweethearts," in which he sang both the baritone and falsetto parts. Trained under Italian maestro Manlio di Veroli, he emerged as a "bel canto" tenor (characteristically, he insisted that in his case this meant "can belto") and had a long list of best-selling record albums to his credit.
In 1958 he appeared in the film "Jet Storm," which starred Dame Sybil Thorndike and Richard Attenborough, and in the same year he starred in the title role in "Davy", one of Ealing Studios' last films.
The power of his voice allowed Secombe to appear in many stage musicals. This included 1963's "Pickwick," based on Charles Dickens' "The Pickwick Papers", which gave him the number 18 hit single "If I Ruled the World" – his later signature tune. In 1965 the show was produced on tour in the United States, where on Broadway he garnered a nomination for a Tony Award for Best Actor in a Musical. Secombe scored his biggest hit single in 1967 with his version of "This Is My Song", which peaked at no. 2 on the charts in April 1967 while a recording by Petula Clark, which had hit no. 1 in February, was still in the top ten. He also appeared in the musical "The Four Musketeers" (1967) at Drury Lane, as Mr. Bumble in Carol Reed's film of "Oliver!" (1968), and in the Envy segment of "The Magnificent Seven Deadly Sins" (1971).
He would go on to star in his own television show, "The Harry Secombe Show", which debuted on Christmas Day 1968 on BBC 1 and ran for thirty-one episodes until 1973. A sketch comedy show featuring Julian Orchard as Secombe's regular sidekick, the series also featured guest appearances by fellow Goon Spike Milligan as well as leading performers such as Ronnie Barker and Arthur Lowe. Secombe later starred in similar vehicles such as "Sing a Song of Secombe" and ITV's "Secombe with Music" during the 1970s.
Later in life, Secombe (whose brother Fred Secombe was a priest in the Church in Wales, part of the Anglican Communion) attracted new audiences as a presenter of religious programmes, such as the BBC's "Songs of Praise" and ITV's "Stars on Sunday" and "Highway". He was also a special programming consultant to Harlech Television and hosted a Thames Television programme in 1979 entitled "Cross on the Donkey's Back". In the latter half of the 1980s, Secombe personally sponsored a football team for boys aged 9–11 in the local West Sutton Little League, 'Secombes Knights'.
In 1990, he was one of a few to be honoured by a second appearance on "This Is Your Life", when he was surprised by Michael Aspel at a book signing in a London branch of WH Smith. Secombe had been a subject of the show previously in March 1958 when Eamonn Andrews surprised him at the BBC Television Theatre.
In 1963 he was appointed a Commander of the Order of the British Empire (CBE).
He was knighted in 1981, and jokingly referred to himself as Sir Cumference (in recognition of his rotund figure). The motto he chose for his coat of arms was "GO ON", a reference to goon.
Secombe suffered from peritonitis in 1980. Within two years, taking advice from doctors, he had lost five stone in weight. He had a stroke in 1997, from which he made a slow recovery. He was then diagnosed with prostate cancer in September 1998. After suffering a second stroke in 1999, he was forced to abandon his television career, but made a documentary about his condition in the hope of giving encouragement to other sufferers. Secombe had diabetes in the latter part of his life.
Secombe died on 11 April 2001 at the age of 79, from prostate cancer, in hospital in Guildford, Surrey. His ashes are interred at the parish church of Shamley Green, and a later memorial service to celebrate his life was held at Westminster Abbey on 26 October 2001. As well as family members and friends, the service was also attended by Charles, Prince of Wales and representatives of Prince Philip, Duke of Edinburgh, Anne, Princess Royal, Princess Margaret, Countess of Snowdon and Prince Edward, Duke of Kent. On his tombstone is the inscription: "To know him was to love him."
Upon hearing of his old friend's death, Spike Milligan quipped, "I'm glad he died before me, because I didn't want him to sing at my funeral." But Secombe would have the last laugh: upon Milligan's own death the following year, a recording of Secombe singing was played at Spike's memorial service.
The Secombe Theatre at Sutton, London, bears his name in memory of this former local personality. He is also fondly remembered at the London Welsh Centre, where he opened the bar on St Patrick's Day (17 March) 1971.
Secombe met Myra Joan Atherton at the Mumbles Dance hall in 1946. The couple were married from 1948 until his death, and had four children: | https://en.wikipedia.org/wiki?curid=14033 |
Heroin
Heroin, also known as diamorphine among other names, is an opioid used as a recreational drug for its euphoric effects. It is used medically in several countries to relieve pain or in opioid replacement therapy. It is typically injected, usually into a vein, but it can also be smoked, snorted, or inhaled. The onset of effects is usually rapid and lasts for a few hours.
Common side effects include respiratory depression (decreased breathing), dry mouth, drowsiness, impaired mental function, constipation, and addiction. Side effects of use by injection can include abscesses, infected heart valves, blood-borne infections, and pneumonia. After a history of long-term use, opioid withdrawal symptoms can begin within hours of the last use. When given by injection into a vein, heroin has two to three times the effect of a similar dose of morphine. It typically comes as a white or brown powder.
Treatment of heroin addiction often includes behavioral therapy and medications. Medications can include buprenorphine, methadone, or naltrexone. A heroin overdose may be treated with naloxone. An estimated 17 million people use opiates, of which heroin is the most common, and opioid use resulted in 122,000 deaths. The total number of heroin users worldwide as of 2015 is believed to have increased in Africa, the Americas, and Asia since 2000. In the United States, approximately 1.6 percent of people have used heroin at some point, with 950,000 using it in the last year. When people die from overdosing on a drug, the drug is usually an opioid and often heroin.
Heroin was first made by C. R. Alder Wright in 1874 from morphine, a natural product of the opium poppy. Internationally, heroin is controlled under Schedules I and IV of the Single Convention on Narcotic Drugs, and it is generally illegal to make, possess, or sell without a license. About 448 tons of heroin were made in 2016. In 2015, Afghanistan produced about 66% of the world's opium. Illegal heroin is often mixed with other substances such as sugar, starch, caffeine, quinine, or other opioids like fentanyl.
"Heroin", Bayer's original trade name (see the 'History' section), is the name typically used in non-medical settings. It is used as a recreational drug for the euphoria it induces. Anthropologist Michael Agar once described heroin as "the perfect whatever drug." Tolerance develops quickly, and increased doses are needed in order to achieve the same effects. Its popularity with recreational drug users, compared to morphine, reportedly stems from its perceived different effects.
Short-term addiction studies by the same researchers demonstrated that tolerance developed at a similar rate to both heroin and morphine. When compared to the opioids hydromorphone, fentanyl, oxycodone, and pethidine (meperidine), former addicts showed a strong preference for heroin and morphine, suggesting that heroin and morphine are particularly susceptible to abuse and addiction. Morphine and heroin were also much more likely to produce euphoria and other "positive" subjective effects when compared to these other opioids.
In the United States, heroin is not accepted as medically useful.
Under the generic name diamorphine, heroin is prescribed as a strong pain medication in the United Kingdom, where it is administered via oral, subcutaneous, intramuscular, intrathecal, intranasal or intravenous routes. It may be prescribed for the treatment of acute pain, such as in severe physical trauma, myocardial infarction, post-surgical pain and chronic pain, including end-stage terminal illnesses. In other countries it is more common to use morphine or other strong opioids in these situations. In 2004 the National Institute for Health and Clinical Excellence produced guidance on the management of caesarean section, which recommended the use of intrathecal or epidural diamorphine for post-operative pain relief.
Diamorphine continues to be widely used in palliative care in the UK, where it is commonly given by the subcutaneous route, often via a syringe driver, if patients cannot easily swallow morphine solution. The advantage of diamorphine over morphine is that diamorphine is more fat soluble and therefore more potent by injection, so smaller doses of it are needed for the same effect on pain. Both of these factors are advantageous if giving high doses of opioids via the subcutaneous route, which is often necessary in palliative care.
It is also used in the palliative management of bone fractures and other trauma, especially in children. In the trauma context, it is primarily given by nose in hospital, although a prepared nasal spray is available. It has traditionally been made by the attending physician, generally from the same "dry" ampoules as used for injection.
A number of European countries prescribe heroin for treatment of heroin addiction. Diamorphine may be used as a maintenance drug to assist the treatment of opiate addiction, normally in long-term chronic intravenous (IV) heroin users. It is only prescribed following exhaustive efforts at treatment via other means. It is sometimes thought that heroin users can walk into a clinic and walk out with a prescription, but the process takes many weeks before a prescription for diamorphine is issued. Though this is somewhat controversial among proponents of a zero-tolerance drug policy, it has proven superior to methadone in improving the social and health situations of addicts.
The UK Department of Health's Rolleston Committee Report in 1926 established the British approach to diamorphine prescription to users, which was maintained for the next 40 years: dealers were prosecuted, but doctors could prescribe diamorphine to users when withdrawing. In 1964 the Brain Committee recommended that only selected approved doctors working at approved specialised centres be allowed to prescribe diamorphine and benzoylmethylecgonine (cocaine) to users. The law was made more restrictive in 1968. Beginning in the 1970s, the emphasis shifted to abstinence and the use of methadone; currently only a small number of users in the UK are prescribed diamorphine.
In 1994, Switzerland began a trial diamorphine maintenance program for users that had failed multiple withdrawal programs. The aim of this program was to maintain the health of the user by avoiding medical problems stemming from the illicit use of diamorphine. The first trial in 1994 involved 340 users, although enrollment was later expanded to 1000, based on the apparent success of the program. The trials proved diamorphine maintenance to be superior to other forms of treatment in improving the social and health situation for this group of patients. It has also been shown to save money, despite high treatment expenses, as it significantly reduces costs incurred by trials, incarceration, health interventions and delinquency. Patients appear twice daily at a treatment center, where they inject their dose of diamorphine under the supervision of medical staff. They are required to contribute about 450 Swiss francs per month to the treatment costs. A national referendum in November 2008 showed 68% of voters supported the plan, introducing diamorphine prescription into federal law. The previous trials were based on time-limited executive ordinances. The success of the Swiss trials led German, Dutch, and Canadian cities to try out their own diamorphine prescription programs. Some Australian cities (such as Sydney) have instituted legal diamorphine supervised injecting centers, in line with other wider harm minimization programs.
Since January 2009, Denmark has prescribed diamorphine to a few addicts who have tried methadone and Subutex without success. Beginning in February 2010, addicts in Copenhagen and Odense became eligible to receive free diamorphine. Later in 2010 other cities including Århus and Esbjerg joined the scheme. It was supposed that around 230 addicts would be able to receive free diamorphine.
However, Danish addicts would only be able to inject heroin according to the policy set by Danish National Board of Health. Of the estimated 1500 drug users who did not benefit from the then-current oral substitution treatment, approximately 900 would not be in the target group for treatment with injectable diamorphine, either because of "massive multiple drug abuse of non-opioids" or "not wanting treatment with injectable diamorphine".
In July 2009, the German Bundestag passed a law allowing diamorphine prescription as a standard treatment for addicts; a large-scale trial of diamorphine prescription had been authorized in the country in 2002.
On August 26, 2016 Health Canada issued regulations amending prior regulations it had issued under the Controlled Drugs and Substances Act; the "New Classes of Practitioners Regulations", the "Narcotic Control Regulations", and the "Food and Drug Regulations", to allow doctors to prescribe diamorphine to people who have a severe opioid addiction who have not responded to other treatments. The prescription heroin can be accessed by doctors through Health Canada's Special Access Programme (SAP) for "emergency access to drugs for patients with serious or life-threatening conditions when conventional treatments have failed, are unsuitable, or are unavailable."
The onset of heroin's effects depends upon the route of administration. Studies have shown that the subjective pleasure of drug use (the reinforcing component of addiction) is proportional to the rate at which the blood level of the drug increases. Smoking is the fastest route of drug administration, although intravenous injection results in a quicker rise in blood concentration. These are followed by suppository (anal or vaginal insertion), insufflation (snorting), and ingestion (swallowing).
Ingestion does not produce a rush as forerunner to the high experienced with the use of heroin, which is most pronounced with intravenous use. While the onset of the rush induced by injection can occur in as little as a few seconds, the oral route of administration requires approximately half an hour before the high sets in. Thus, the higher the dosage of heroin and the faster the route of administration, the greater the potential risk of psychological addiction.
Large doses of heroin can cause fatal respiratory depression, and the drug has been used for suicide or as a murder weapon. The serial killer Harold Shipman used diamorphine on his victims, and the subsequent Shipman Inquiry led to a tightening of the regulations surrounding the storage, prescribing and destruction of controlled drugs in the UK.
Because significant tolerance to respiratory depression develops quickly with continued use and is lost just as quickly during withdrawal, it is often difficult to determine whether a heroin lethal overdose was accidental, suicide or homicide. Examples include the overdose deaths of Sid Vicious, Janis Joplin, Tim Buckley, Hillel Slovak, Layne Staley, Bradley Nowell, Ted Binion, and River Phoenix.
Chronic use of heroin and other opioids has been shown to be a potential cause of hyponatremia, resulting from excess vasopressin secretion.
Use of heroin by mouth is less common than other methods of administration, mainly because there is little to no "rush", and the effects are less potent. When ingested, heroin is entirely converted to morphine by deacetylation during first-pass metabolism. Heroin's oral bioavailability is both dose-dependent (as is morphine's) and significantly higher than oral use of morphine itself, reaching up to 64.2% for high doses and 45.6% for low doses; opiate-naive users showed far less absorption of the drug at low doses, having bioavailabilities of only up to 22.9%. The maximum plasma concentration of morphine following oral administration of heroin was around twice as much as that of oral morphine.
Injection, also known as "slamming", "banging", "shooting up", "digging" or "mainlining", is a popular method which carries relatively greater risks than other methods of administration. Heroin base (commonly found in Europe), when prepared for injection, will only dissolve in water when mixed with an acid (most commonly citric acid powder or lemon juice) and heated. Heroin in the east-coast United States is most commonly found in the hydrochloride salt form, requiring just water (and no heat) to dissolve. Users tend to initially inject in the easily accessible arm veins, but as these veins collapse over time, users resort to more dangerous areas of the body, such as the femoral vein in the groin. Users who have used this route of administration often develop a deep vein thrombosis. Intravenous users inject a single dose with a hypodermic needle; dose ranges vary widely. The dose of heroin used for recreational purposes is dependent on the frequency and level of use: thus a first-time user may use between 5 and 20 mg, while an established addict may require several hundred mg per day. As with the injection of any drug, if a group of users share a common needle without sterilization procedures, blood-borne diseases, such as HIV/AIDS or hepatitis, can be transmitted.
The use of a common dispenser for water for the use in the preparation of the injection, as well as the sharing of spoons and filters can also cause the spread of blood-borne diseases. Many countries now supply small sterile spoons and filters for single use in order to prevent the spread of disease.
Smoking heroin refers to vaporizing it to inhale the resulting fumes, rather than burning and inhaling the smoke. It is commonly smoked in glass pipes made from glassblown Pyrex tubes and light bulbs. Heroin may be smoked from aluminium foil, which is heated by an underneath flame, with the resulting smoke inhaled through a tube of rolled up foil, a method also known as "chasing the dragon".
Another popular route of intake is insufflation (snorting), where a user crushes the heroin into a fine powder and then gently inhales it (sometimes with a straw or a rolled-up banknote, as with cocaine) into the nose, where heroin is absorbed through the soft tissue in the mucous membrane of the sinus cavity and straight into the bloodstream. This method of administration bypasses first-pass metabolism, with a quicker onset and higher bioavailability than oral administration, though the duration of action is shortened. This method is sometimes preferred by users who do not want to prepare and administer heroin for injection or smoking, but still want to experience a fast onset. Snorting often becomes an unwanted route once a user begins to inject the drug. The user may still get high on the drug from snorting, and experience a nod, but will not get a rush. A "rush" is caused by a large amount of heroin entering the body at once. When the drug is taken in through the nose, the user does not get the rush because the drug is absorbed slowly rather than instantly.
Heroin for pain has been mixed with sterile water on site by the attending physician, and administered using a syringe with a nebuliser tip. Heroin may be used for fractures, burns, finger-tip injuries, suturing, and wound re-dressing, but is inappropriate in head injuries.
Little research has been focused on the suppository (anal insertion) or pessary (vaginal insertion) methods of administration, also known as "plugging". These methods of administration are commonly carried out using an oral syringe. Heroin can be dissolved and withdrawn into an oral syringe which may then be lubricated and inserted into the anus or vagina before the plunger is pushed. The rectum or the vaginal canal is where the majority of the drug would likely be taken up, through the membranes lining their walls.
Heroin is classified as a hard drug in terms of drug harmfulness. Like most opioids, even unadulterated heroin may lead to adverse effects. The purity of street heroin varies greatly, leading to overdoses when the purity is higher than users expect.
Users report an intense rush, an acute transcendent state of euphoria, which occurs while diamorphine is being metabolized into 6-monoacetylmorphine (6-MAM) and morphine in the brain. Some believe that heroin produces more euphoria than other opioids; one possible explanation is the presence of 6-monoacetylmorphine, a metabolite unique to heroin – although a more likely explanation is the rapidity of onset. While other recreationally used opioids are metabolized only to morphine, heroin also yields 6-MAM, itself a psychoactive metabolite. However, this perception is not supported by the results of clinical studies comparing the physiological and subjective effects of injected heroin and morphine in individuals formerly addicted to opioids; these subjects showed no preference for one drug over the other. Equipotent injected doses had comparable action courses, with no difference in subjects' self-rated feelings of euphoria, ambition, nervousness, relaxation, drowsiness, or sleepiness. The rush is usually accompanied by a warm flushing of the skin, dry mouth, and a heavy feeling in the extremities. Nausea, vomiting, and severe itching may also occur. After the initial effects, users usually will be drowsy for several hours; mental function is clouded; heart function slows; and breathing is also severely slowed, sometimes enough to be life-threatening. Slowed breathing can also lead to coma and permanent brain damage.
Repeated heroin use changes the physical structure and physiology of the brain, creating long-term imbalances in neuronal and hormonal systems that are not easily reversed. Studies have shown some deterioration of the brain's white matter due to heroin use, which may affect decision-making abilities, the ability to regulate behavior, and responses to stressful situations. Heroin also produces profound degrees of tolerance and physical dependence. Tolerance occurs when more and more of the drug is required to achieve the same effects. With physical dependence, the body adapts to the presence of the drug, and withdrawal symptoms occur if use is reduced abruptly.
Intravenous use of heroin (and any other substance) with needles and syringes or other related equipment may lead to:
The withdrawal syndrome from heroin may begin within as little as two hours of discontinuation of the drug; however, this time frame can fluctuate with the degree of tolerance as well as the amount of the last consumed dose, and more typically begins within 6–24 hours after cessation. Symptoms may include sweating, malaise, anxiety, depression, akathisia, priapism, extra sensitivity of the genitals in females, general feeling of heaviness, excessive yawning or sneezing, rhinorrhea, insomnia, cold sweats, chills, severe muscle and bone aches, nausea, vomiting, diarrhea, cramps, watery eyes, fever, cramp-like pains, and involuntary spasms in the limbs (thought to be an origin of the term "kicking the habit").
Heroin overdose is usually treated with the opioid antagonist naloxone. This reverses the effects of heroin and causes an immediate return of consciousness but may result in withdrawal symptoms. The half-life of naloxone is shorter than that of some opioids, such that it may need to be given multiple times until the opioid has been metabolized by the body.
Between 2012 and 2015, heroin was the leading cause of drug-related deaths in the United States. Since then, fentanyl has become a more common cause of drug-related deaths.
Depending on drug interactions and numerous other factors, death from overdose can take anywhere from several minutes to several hours. Death usually occurs due to lack of oxygen resulting from the lack of breathing caused by the opioid. Heroin overdoses can occur because of an unexpected increase in the dose or purity or because of diminished opioid tolerance. However, many fatalities reported as overdoses are probably caused by interactions with other depressant drugs such as alcohol or benzodiazepines. Since heroin can cause nausea and vomiting, a significant number of deaths attributed to heroin overdose are caused by aspiration of vomit by an unconscious person. Some sources quote the median lethal dose (for an average 75 kg opiate-naive individual) as being between 75 and 600 mg. Illicit heroin is of widely varying and unpredictable purity. This means that the user may prepare what they consider to be a moderate dose while actually taking far more than intended. Also, tolerance typically decreases after a period of abstinence. If this occurs and the user takes a dose comparable to their previous use, the user may experience drug effects that are much greater than expected, potentially resulting in an overdose. It has been speculated that an unknown portion of heroin-related deaths are the result of an overdose or allergic reaction to quinine, which may sometimes be used as a cutting agent.
When taken orally, heroin undergoes extensive first-pass metabolism via deacetylation, making it a prodrug for the systemic delivery of morphine. When the drug is injected, however, it avoids this first-pass effect, very rapidly crossing the blood–brain barrier because of the presence of the acetyl groups, which render it much more fat-soluble than morphine itself. Once in the brain, it is deacetylated variously into the inactive 3-monoacetylmorphine and the active 6-monoacetylmorphine (6-MAM), and then to morphine, both of which bind to μ-opioid receptors, resulting in the drug's euphoric, analgesic (pain-relieving), and anxiolytic (anti-anxiety) effects; heroin itself exhibits relatively low affinity for the μ receptor. Analgesia follows from activation of the μ-opioid receptor, a G-protein-coupled receptor, which indirectly hyperpolarizes the neuron and reduces the release of nociceptive neurotransmitters, causing analgesia and increased pain tolerance.
When administered intravenously, heroin, unlike hydromorphone and oxymorphone, triggers a larger histamine release, similar to morphine, which some users experience as a greater subjective "body high", but which can also cause pruritus (itching) when they first start using.
Normally, GABA released from inhibitory neurones inhibits the release of dopamine. Opiates such as heroin and morphine decrease the inhibitory activity of these neurones, causing increased release of dopamine in the brain, which accounts for the euphoric and rewarding effects of heroin.
Both morphine and 6-MAM are μ-opioid agonists that bind to receptors present throughout the brain, spinal cord, and gut of all mammals. The μ-opioid receptor also binds endogenous opioid peptides such as β-endorphin, Leu-enkephalin, and Met-enkephalin. Repeated use of heroin results in a number of physiological changes, including an increase in the production of μ-opioid receptors (upregulation). These physiological alterations lead to tolerance and dependence, so that stopping heroin use results in uncomfortable symptoms including pain, anxiety, muscle spasms, and insomnia, collectively called the opioid withdrawal syndrome. Depending on usage, its onset occurs 4–24 hours after the last dose of heroin. Morphine also binds to δ- and κ-opioid receptors.
There is also evidence that 6-MAM binds to a subtype of μ-opioid receptors that are also activated by the morphine metabolite morphine-6β-glucuronide but not morphine itself. This third opioid receptor subtype is the mu-3 receptor, which may be a commonality among other six-position monoesters of morphine. The contribution of these receptors to the overall pharmacology of heroin remains unknown.
A subclass of morphine derivatives, namely the 3,6 esters of morphine, with similar effects and uses, includes the clinically used strong analgesics nicomorphine (Vilan) and dipropanoylmorphine; there is also the latter's dihydromorphine analogue, diacetyldihydromorphine (Paralaudin). Two other 3,6 diesters of morphine invented in 1874–75 along with diamorphine, dibenzoylmorphine and acetylpropionylmorphine, were made as substitutes after diamorphine was outlawed in 1925 and were therefore sold as the first "designer drugs" until they too were outlawed by the League of Nations in 1930.
Diamorphine is produced from acetylation of morphine derived from natural opium sources, generally using acetic anhydride.
The major metabolites of diamorphine, 6-MAM, morphine, morphine-3-glucuronide and morphine-6-glucuronide, may be quantitated in blood, plasma or urine to monitor for abuse, confirm a diagnosis of poisoning or assist in a medicolegal death investigation. Most commercial opiate screening tests cross-react appreciably with these metabolites, as well as with other biotransformation products likely to be present following usage of street-grade diamorphine, such as 6-acetylcodeine and codeine. However, chromatographic techniques can easily distinguish and measure each of these substances. When interpreting the results of a test, it is important to consider the diamorphine usage history of the individual, since a chronic user can develop tolerance to doses that would incapacitate an opiate-naive individual, and the chronic user often has high baseline values of these metabolites in their system. Furthermore, some testing procedures employ a hydrolysis step before quantitation that converts many of the metabolic products to morphine, yielding a result that may be twice as large as that of a method that examines each product individually.
The opium poppy was cultivated in lower Mesopotamia as long ago as 3400 BC. The chemical analysis of opium in the 19th century revealed that most of its activity could be ascribed to the alkaloids codeine and morphine.
Diamorphine was first synthesized in 1874 by C. R. Alder Wright, an English chemist working at St. Mary's Hospital Medical School in London who had been experimenting combining morphine with various acids. He boiled anhydrous morphine alkaloid with acetic anhydride for several hours and produced a more potent, acetylated form of morphine which is now called "diacetylmorphine" or "morphine diacetate". He sent the compound to F. M. Pierce of Owens College in Manchester for analysis. Pierce told Wright:
Wright's invention did not lead to any further developments, and diamorphine became popular only after it was independently re-synthesized 23 years later by chemist Felix Hoffmann. Hoffmann was working at the Bayer pharmaceutical company in Elberfeld, Germany, and his supervisor Heinrich Dreser instructed him to acetylate morphine with the objective of producing codeine, a constituent of the opium poppy that is pharmacologically similar to morphine but less potent and less addictive. Instead, the experiment produced an acetylated form of morphine one and a half to two times more potent than morphine itself. The head of Bayer's research department reputedly coined the drug's new name of "heroin," based on the German "heroisch," which means "heroic, strong" (from the ancient Greek word "heros, ήρως"). Bayer scientists were not the first to synthesize heroin, but the company led its commercialization.
In 1895, Bayer marketed diacetylmorphine as an over-the-counter drug under the trademark name Heroin. It was developed chiefly as a substitute for morphine in cough suppressants, intended to lack morphine's addictive side effects. Morphine at the time was a popular recreational drug, and Bayer wished to find a similar but non-addictive substitute to market. However, contrary to Bayer's advertising as a "non-addictive morphine substitute," heroin would soon have one of the highest rates of addiction among its users.
From 1898 through to 1910, diamorphine was marketed under the trademark name Heroin as a non-addictive morphine substitute and cough suppressant. In the 11th edition of "Encyclopædia Britannica" (1910), the article on morphine states: "In the cough of phthisis minute doses [of morphine] are of service, but in this particular disease morphine is frequently better replaced by codeine or by heroin, which checks irritable coughs without the narcotism following upon the administration of morphine."
In the U.S., the Harrison Narcotics Tax Act was passed in 1914 to control the sale and distribution of diacetylmorphine and other opioids, which allowed the drug to be prescribed and sold for medical purposes. In 1924, the United States Congress banned its sale, importation, or manufacture. It is now a Schedule I substance, which makes it illegal for non-medical use in signatory nations of the Single Convention on Narcotic Drugs treaty, including the United States.
The Health Committee of the League of Nations banned diacetylmorphine in 1925, although it took more than three years for this to be implemented. In the meantime, the first designer drugs, viz. 3,6 diesters and 6 monoesters of morphine and acetylated analogues of closely related drugs like hydromorphone and dihydromorphine, were produced in massive quantities to fill the worldwide demand for diacetylmorphine—this continued until 1930 when the Committee banned diacetylmorphine analogues with no therapeutic advantage over drugs already in use, the first major legislation of this type.
Bayer lost some of its trademark rights to heroin (as well as aspirin) under the 1919 Treaty of Versailles following the German defeat in World War I.
Use of heroin by jazz musicians in particular was prevalent in the mid-twentieth century, including Billie Holiday, saxophonists Charlie Parker and Art Pepper, guitarist Joe Pass and piano player/singer Ray Charles; a "staggering number of jazz musicians were addicts". It was also a problem with many rock musicians, particularly from the late 1960s through the 1990s. Pete Doherty is also a self-confessed user of heroin. Nirvana lead singer Kurt Cobain's heroin addiction was well documented. Pantera frontman, Phil Anselmo, turned to heroin while touring during the 1990s to cope with his back pain. James Taylor, Eric Clapton, Johnny Winter, Keith Richards and Janis Joplin also used heroin. Many musicians have made songs referencing their heroin usage.
"Diamorphine" is the Recommended International Nonproprietary Name and British Approved Name. Other synonyms for heroin include: diacetylmorphine, and morphine diacetate. Heroin is also known by many street names including dope, H, smack, junk, horse, and brown, among others.
In Hong Kong, diamorphine is regulated under Schedule 1 of Hong Kong's Chapter 134 "Dangerous Drugs Ordinance". It is available by prescription. Anyone supplying diamorphine without a valid prescription can be fined $5,000,000 (HKD) and imprisoned for life. The penalty for trafficking or manufacturing diamorphine is a $5,000,000 (HKD) fine and life imprisonment. Possession of diamorphine without a license from the Department of Health is punishable by a $1,000,000 (HKD) fine and seven years' imprisonment.
In the Netherlands, diamorphine is a List I drug of the Opium Law. It is available for prescription under tight regulation exclusively to long-term addicts for whom methadone maintenance treatment has failed. It cannot be used to treat severe pain or other illnesses.
In the United Kingdom, diamorphine is available by prescription, though it is a restricted Class A drug. According to the 50th edition of the British National Formulary (BNF), diamorphine hydrochloride may be used in the treatment of acute pain, myocardial infarction, acute pulmonary oedema, and chronic pain. The treatment of chronic non-malignant pain must be supervised by a specialist. The BNF notes that all opioid analgesics cause dependence and tolerance but that this is "no deterrent in the control of pain in terminal illness". When used in the palliative care of cancer patients, diamorphine is often injected using a syringe driver.
In Switzerland, heroin is produced in injectable or tablet form under the name Diaphin by a private company under contract to the Swiss government. Swiss-produced heroin has been imported into Canada with government approval.
In Australia diamorphine is listed as a schedule 9 prohibited substance under the Poisons Standard (October 2015). A schedule 9 drug is outlined in the Poisons Act 1964 as "Substances which may be abused or misused, the manufacture, possession, sale or use of which should be prohibited by law except when required for medical or scientific research, or for analytical, teaching or training purposes with approval of the CEO."
In Canada, diamorphine is a controlled substance under Schedule I of the Controlled Drugs and Substances Act (CDSA). Any person seeking or obtaining diamorphine from a practitioner without disclosing prescriptions obtained from other practitioners within the preceding 30 days is guilty of an indictable offense and subject to imprisonment for a term not exceeding seven years. Possession of diamorphine for the purpose of trafficking is an indictable offense and subject to imprisonment for life.
In the United States, diamorphine is a Schedule I drug according to the Controlled Substances Act of 1970, making it illegal to possess without a DEA license. Possession of more than 100 grams of diamorphine or a mixture containing diamorphine is punishable with a minimum mandatory sentence of 5 years of imprisonment in a federal prison.
Turkey maintains strict laws against the use, possession or trafficking of illegal drugs. If convicted under these offences, one could receive a heavy fine or a prison sentence of 4 to 24 years.
Abuse of prescription opioids can lead to heroin addiction. The number of deaths from illegal opioid overdoses has tracked the rising number of deaths caused by prescription opioid overdoses. Prescription opioids are relatively easy to obtain, but because heroin is cheaper than prescribed pills, misuse may ultimately lead to heroin injection.
Diamorphine is produced from acetylation of morphine derived from natural opium sources. One such method of heroin production involves isolation of the water-soluble components of raw opium, including morphine, in a strongly basic aqueous solution, followed by recrystallization of the morphine base by addition of ammonium chloride. The solid morphine base is then filtered out. The morphine base is then reacted with acetic anhydride, which forms heroin. This highly impure brown heroin base may then undergo further purification steps, which produce a white-colored product; the final products have a different appearance depending on purity and have different names. Heroin purity has been classified into four grades. No. 4 is the purest form, a white powder (salt) that dissolves easily for injection. No. 3 is "brown sugar" for smoking (base). No. 1 and No. 2 are unprocessed raw heroin (salt or base).
Traffic is heavy worldwide, with the biggest producer being Afghanistan. According to a U.N. sponsored survey, in 2004, Afghanistan accounted for production of 87 percent of the world's diamorphine. Afghan opium kills around 100,000 people annually.
In 2003 "The Independent" reported:
Opium production in that country has increased rapidly since then, reaching an all-time high in 2006. The war in Afghanistan once again served as a facilitator of the trade. Some 3.3 million Afghans are involved in producing opium.
At present, opium poppies are mostly grown in Afghanistan (), and in Southeast Asia, especially in the region known as the Golden Triangle straddling Burma (), Thailand, Vietnam, Laos () and Yunnan province in China. There is also cultivation of opium poppies in Pakistan (), Mexico () and in Colombia (). According to the DEA, the majority of the heroin consumed in the United States comes from Mexico (50%) and Colombia (43-45%) via Mexican criminal cartels such as Sinaloa Cartel. However, these statistics may be significantly unreliable, the DEA's 50/50 split between Colombia and Mexico is contradicted by the amount of hectares cultivated in each country and in 2014, the DEA claimed most of the heroin in the US came from Colombia.
The Sinaloa Cartel is currently the most active drug cartel involved in smuggling illicit drugs such as heroin into the United States and trafficking them throughout the country. According to the Royal Canadian Mounted Police, 90% of the heroin seized in Canada (where the origin was known) came from Afghanistan. Pakistan is the destination and transit point for 40 percent of the opiates produced in Afghanistan; other destinations of Afghan opiates include Russia, Europe and Iran.
Conviction for trafficking heroin carries the death penalty in most Southeast Asian, some East Asian and Middle Eastern countries (see Use of death penalty worldwide for details), among which Malaysia, Singapore and Thailand are the most strict. The penalty applies even to citizens of countries where the penalty is not in place, sometimes causing controversy when foreign visitors are arrested for trafficking, for example the arrest of nine Australians in Bali, the death sentence given to Nola Blake in Thailand in 1987, or the hanging of an Australian citizen Van Tuong Nguyen in Singapore.
The origins of the present international illegal heroin trade can be traced back to laws passed in many countries in the early 1900s that closely regulated the production and sale of opium and its derivatives including heroin. At first, heroin flowed from countries where it was still legal into countries where it was no longer legal. By the mid-1920s, heroin production had been made illegal in many parts of the world. An illegal trade developed at that time between heroin labs in China (mostly in Shanghai and Tianjin) and other nations. The weakness of government in China and conditions of civil war enabled heroin production to take root there. Chinese triad gangs eventually came to play a major role in the illicit heroin trade. The French Connection route started in the 1930s.
Heroin trafficking was virtually eliminated in the U.S. during World War II because of temporary trade disruptions caused by the war. Japan's war with China had cut the normal distribution routes for heroin and the war had generally disrupted the movement of opium. After World War II, the Mafia took advantage of the weakness of the postwar Italian government and set up heroin labs in Sicily. The Mafia took advantage of Sicily's location along the historic route opium took westward into Europe and the United States. Large-scale international heroin production effectively ended in China with the victory of the communists in the civil war in the late 1940s. The elimination of Chinese production happened at the same time that Sicily's role in the trade developed.
Although it remained legal in some countries until after World War II, health risks, addiction, and widespread recreational use led most western countries to declare heroin a controlled substance by the latter half of the 20th century. In the late 1960s and early 1970s, the CIA supported anti-Communist Chinese Nationalists settled near the Sino-Burmese border and Hmong tribesmen in Laos. This helped the development of the Golden Triangle opium production region, which supplied about one-third of the heroin consumed in the US after the 1973 American withdrawal from Vietnam. In 1999, Burma, the heartland of the Golden Triangle, was the second largest producer of heroin, after Afghanistan.
The Soviet-Afghan war led to increased production in the Pakistani-Afghan border regions, as U.S.-backed mujaheddin militants raised money for arms from selling opium, contributing heavily to the creation of the modern Golden Crescent. By 1980, 60 percent of heroin sold in the U.S. originated in Afghanistan. This increased international heroin production and lowered prices in the 1980s. The trade shifted away from Sicily in the late 1970s as various criminal organizations violently fought with each other over the trade. The fighting also led to a stepped-up government law enforcement presence in Sicily.
Following the discovery at a Jordanian airport of a toner cartridge that had been modified into an improvised explosive device, the resultant increased level of airfreight scrutiny led to a major shortage (drought) of heroin from October 2010 until April 2011. This was reported in most of mainland Europe and the UK, and it led to a price increase of approximately 30 percent in the cost of street heroin and an increased demand for diverted methadone. The number of addicts seeking treatment also increased significantly during this period. Other heroin droughts (shortages) have been attributed to cartels restricting supply in order to force a price increase, and also to a fungus that attacked the opium crop of 2009. Many people believed that the American government had introduced pathogens into Afghanistan's environment in order to destroy the opium crop and thus starve insurgents of income.
On 13 March 2012, Haji Bagcho, with ties to the Taliban, was convicted by a U.S. District Court of conspiracy, distribution of heroin for importation into the United States and narco-terrorism. Based on heroin production statistics compiled by the United Nations Office on Drugs and Crime, in 2006, Bagcho's activities accounted for approximately 20 percent of the world's total production for that year.
The European Monitoring Centre for Drugs and Drug Addiction reports that the retail price of brown heroin varies from €14.5 per gram in Turkey to €110 per gram in Sweden, with most European countries reporting typical prices of €35–40 per gram. The price of white heroin is reported only by a few European countries and ranged between €27 and €110 per gram.
The United Nations Office on Drugs and Crime claims in its 2008 World Drug Report that typical US retail prices are US$172 per gram.
Harm reduction is a public health philosophy that seeks to reduce the harms associated with the use of illicit drugs. One aspect of harm reduction initiatives focuses on the behaviour of individual users. In the case of diamorphine, this includes promoting safer means of taking the drug, such as smoking, nasal use, or oral or rectal insertion. This attempts to avoid the higher risks of overdose, infections and blood-borne viruses associated with injecting the drug. Other measures include using a small amount of the drug first to gauge its strength and minimize the risk of overdose. For the same reason, poly drug use (the use of two or more drugs at the same time) is discouraged. Injecting diamorphine users are encouraged to use new needles, syringes, spoons/steri-cups and filters every time they inject and not to share these with other users. Users are also encouraged not to use the drug alone, as others can assist in the event of an overdose.
Governments that support a harm reduction approach usually fund needle and syringe exchange programs, which supply new needles and syringes on a confidential basis, along with education on proper filtering before injection, safer injection techniques, and safe disposal of used injecting gear. Other equipment used when preparing diamorphine for injection may also be supplied, including citric acid or vitamin C sachets, steri-cups, filters, alcohol pre-injection swabs, sterile water ampoules and tourniquets (to discourage the use of shoelaces or belts).
Another harm reduction measure, employed for example in Europe, Canada and Australia, is the safe injection site, where users can inject diamorphine and cocaine under the supervision of medically trained staff. Safe injection sites are low threshold and allow social services to approach problem users who would otherwise be hard to reach.
In the UK, the Criminal Justice System has a protocol in place requiring that any individual arrested on suspicion of a substance misuse problem be offered the chance to enter a treatment program. This has drastically reduced crime rates in some areas: individuals arrested for theft committed to fund their drug use no longer need to steal to purchase heroin, because they have been placed on a methadone program, often more quickly than would have been possible had they not been arrested. This aspect of harm reduction is seen as benefiting both the individual and the wider community, which is then protected from the possible theft of its goods.
During the late 1980s and early 1990s, Swiss authorities ran the ZIPP-AIDS (Zurich Intervention Pilot Project), handing out free syringes in the officially tolerated drug scene in Platzspitz park. In 1994, Zurich started a pilot project using prescription heroin in heroin-assisted treatment (HAT), which allowed users to obtain heroin and inject it under medical supervision. The HAT program proved cost-beneficial to society, improved patients' overall health and social stability, and has since been introduced in multiple European countries.
Researchers are attempting to reproduce the biosynthetic pathway that produces morphine in genetically engineered yeast. As of June 2015, (S)-reticuline could be produced from sugar and (R)-reticuline could be converted to morphine, but the intermediate conversion of (S)-reticuline to (R)-reticuline could not yet be performed.
Hinayana
"Hīnayāna" is a Sanskrit term literally meaning the "small/deficient vehicle". Classical Chinese and Tibetan teachers translate it as "smaller vehicle". The term was applied to the "Śrāvakayāna", the Buddhist path followed by a śrāvaka who wished to become an arhat. This term appeared around the first or second century. Hīnayāna was often contrasted with "Mahāyāna", which means the "great vehicle".
In 1950 the World Fellowship of Buddhists declared that the term Hīnayana should not be used when referring to any form of Buddhism existing today.
In the past, the term was widely used by Western scholars to cover "the earliest system of Buddhist doctrine", as the "Monier-Williams Sanskrit-English Dictionary" put it. Modern Buddhist scholarship has deprecated the pejorative term, and uses instead the term "Nikaya Buddhism" to refer to early Buddhist schools.
"Hinayana" has also been used as a synonym for Theravada, which is the main tradition of Buddhism in Sri Lanka and Southeast Asia; this is considered inaccurate and derogatory. Robert Thurman writes, "'Nikaya Buddhism' is a coinage of Professor Masatoshi Nagatomi of Harvard University, who suggested it to me as a usage for the eighteen schools of Indian Buddhism to avoid the term 'Hinayana Buddhism,' which is found offensive by some members of the Theravada tradition."
Within Mahayana Buddhism, there were a variety of interpretations as to whom or to what the term "Hinayana" referred. Kalu Rinpoche stated the "lesser" or "greater" designation "did not refer to economic or social status, but concerned the spiritual capacities of the practitioner".
The word "hīnayāna" is formed of "hīna": "little", "poor", "inferior", "abandoned", "deficient", "defective"; and "yāna" (यान): "vehicle", where "vehicle" means "a way of going to enlightenment". The Pali Text Society's "Pali-English Dictionary" (1921–25) defines "hīna" in even stronger terms, with a semantic field that includes "poor, miserable; vile, base, abject, contemptible", and "despicable".
The term was translated by Kumārajīva and others into Classical Chinese as "small vehicle" (小 meaning "small", 乘 meaning "vehicle"), although earlier and more accurate translations of the term also exist. In Mongolian ("Baga Holgon") the term for hinayana also means "small" or "lesser" vehicle, while in Tibetan there are at least two words to designate the term, "theg chung" meaning "small vehicle" and "theg dman" meaning "inferior vehicle" or "inferior spiritual approach".
Thrangu Rinpoche has emphasized that "hinayana" is in no way implying "inferior". In his translation and commentary of Asanga's "Distinguishing Dharma from Dharmata", he writes, "all three traditions of hinayana, mahayana, and vajrayana were practiced in Tibet and that the hinayana which literally means "lesser vehicle" is in no way inferior to the mahayana."
According to Jan Nattier, it is most likely that the term Hīnayāna postdates the term Mahāyāna and was only added at a later date due to antagonism and conflict between the bodhisattva and śrāvaka ideals. The sequence of terms then began with the term "Bodhisattvayāna" "bodhisattva-vehicle", which was given the epithet Mahāyāna "Great Vehicle". It was only later, after attitudes toward the bodhisattva teachings had become more critical, that the term Hīnayāna was created as a back-formation, contrasting with the already established term Mahāyāna. The earliest Mahāyāna texts often use the term Mahāyāna as an epithet and synonym for Bodhisattvayāna, but the term Hīnayāna is comparatively rare in early texts, and is usually not found at all in the earliest translations. Therefore, the often-perceived symmetry between Mahāyāna and Hīnayāna can be deceptive, as the terms were not actually coined in relation to one another in the same era.
According to Paul Williams, "the deep-rooted misconception concerning an unfailing, ubiquitous fierce criticism of the Lesser Vehicle by the [Mahāyāna] is not supported by our texts." Williams states that while evidence of conflict is present in some cases, there is also substantial evidence demonstrating peaceful coexistence between the two traditions.
Although the 18–20 early Buddhist schools are sometimes loosely classified as Hīnayāna in modern times, this is not necessarily accurate. There is no evidence that Mahāyāna ever referred to a separate formal school of Buddhism; rather, it denoted a certain set of ideals and, later, doctrines. Paul Williams has also noted that the Mahāyāna never had nor ever attempted to have a separate vinaya or ordination lineage from the early Buddhist schools, and therefore bhikṣus and bhikṣuṇīs adhering to the Mahāyāna formally adhere to the vinaya of an early school. This continues today with the Dharmaguptaka ordination lineage in East Asia and the Mūlasarvāstivāda ordination lineage in Tibetan Buddhism. Mahāyāna was never a separate sect of the early schools. From Chinese monks visiting India, we know that both Mahāyāna and non-Mahāyāna monks in India often lived side by side in the same monasteries.
The seventh-century Chinese Buddhist monk and pilgrim Yijing wrote about the relationship between the various "vehicles" and the early Buddhist schools in India. He wrote, "There exist in the West numerous subdivisions of the schools which have different origins, but there are only four principal schools of continuous tradition." These schools are the Mahāsāṃghika Nikāya, Sthavira nikāya, Mūlasarvāstivāda Nikāya, and Saṃmitīya Nikāya. Explaining their doctrinal affiliations, he then writes, "Which of the four schools should be grouped with the Mahāyāna or with the Hīnayāna is not determined." That is to say, there was no simple correspondence between a Buddhist school and whether its members learn "Hīnayāna" or "Mahāyāna" teachings.
To identify entire schools as "Hīnayāna" would have been to attack the schools of fellow Mahāyānists as well as one's own, since those schools contained not only śrāvakas and pratyekabuddhas but also Mahāyāna bodhisattvas. Instead, the definition of "Hīnayāna" given by Yijing demonstrates that the term referred to individuals on the basis of doctrinal differences.
Scholar Isabelle Onians asserts that although "the Mahāyāna ... very occasionally referred to earlier Buddhism as the Hinayāna, the Inferior Way, [...] the preponderance of this name in the secondary literature is far out of proportion to occurrences in the Indian texts." She notes that the term Śrāvakayāna was "the more politically correct and much more usual" term used by Mahāyānists. Jonathan Silk has argued that the term "Hinayana" was used to refer to whomever one wanted to criticize on any given occasion, and did not refer to any definite grouping of Buddhists.
The Chinese monk Yijing, who visited India in the 7th century, distinguished Mahāyāna from Hīnayāna as follows:
In the 7th century, the Chinese Buddhist monk Xuanzang describes the concurrent existence of the Mahāvihara and the Abhayagiri vihāra in Sri Lanka. He refers to the monks of the Mahāvihara as the "Hīnayāna Sthaviras" and the monks of Abhayagiri vihāra as the "Mahāyāna Sthaviras". Xuanzang further writes, "The Mahāvihāravāsins reject the Mahāyāna and practice the Hīnayāna, while the Abhayagirivihāravāsins study both Hīnayāna and Mahāyāna teachings and propagate the "Tripiṭaka"."
Mahayanists were primarily in philosophical dialectic with the Vaibhāṣika school of Sarvāstivāda, which had by far the most "comprehensive edifice of doctrinal systematics" of the nikāya schools. With this in mind, it is sometimes argued that the Theravada would not have been considered a "Hinayana" school by Mahayanists because, unlike the now-extinct Sarvastivada school (the primary object of Mahayana criticism), the Theravada school does not claim the existence of independent dharmas; in this it maintains the attitude of early Buddhism. Additionally, the concept of the bodhisattva as one who puts off enlightenment rather than reaching awakening as soon as possible has no roots in Theravada textual or cultural contexts, current or historical. Aside from the Theravada schools being geographically distant from the Mahayana, the Hinayana distinction is used in reference to certain views and practices that arose within the Mahayana tradition itself. Theravada and Mahayana schools alike stress the urgency of one's own awakening in order to end suffering. Some contemporary Theravadin figures have thus indicated a sympathetic stance toward the Mahayana philosophy found in the "Heart Sutra" and the "Mūlamadhyamakakārikā".
David Kalupahana holds that the Mahayanists were bothered by the substantialist thought of the Sarvāstivādins and Sautrāntikas and, in emphasizing the doctrine of śūnyatā, endeavored to preserve the early teaching. The Theravadins likewise refuted the Sarvāstivādins and Sautrāntikas (and followers of other schools) on the grounds that their theories conflicted with the non-substantialism of the canon. The Theravada arguments are preserved in the "Kathavatthu".
Some western scholars still regard the Theravada school to be one of the Hinayana schools referred to in Mahayana literature, or regard Hinayana as a synonym for Theravada, although there is strong evidence that the Theravada schools existed in their present form long before Mahayana doctrine was created, and certainly many centuries before the derogatory term Hinayana was coined. These scholars understand the term to refer to schools of Buddhism that did not accept the teachings of the Mahāyāna sūtras as authentic teachings of the Buddha. At the same time, scholars have objected to the pejorative connotation of the term Hinayana, and some do not use it for any school. | https://en.wikipedia.org/wiki?curid=14036 |
Humphrey Bogart
Humphrey DeForest Bogart (; December 25, 1899January 14, 1957) was an American film and stage actor. His performances in Classical Hollywood cinema films made him an American cultural icon. In 1999, the American Film Institute selected Bogart as the greatest male star of classic American cinema.
Bogart began his career acting in Broadway shows and entered motion pictures with "Up the River" (1930) for Fox. He appeared in supporting roles for the next decade, sometimes portraying gangsters. Bogart was praised for his work as Duke Mantee in "The Petrified Forest" (1936), but remained secondary to other actors Warner Bros. cast in lead roles.
His breakthrough from supporting roles to stardom came with "High Sierra" (1941, his last gangster role) and "The Maltese Falcon" (1941), considered one of the first great "noir" films. Bogart's private detectives, Sam Spade (in "The Maltese Falcon") and Philip Marlowe (in 1946's "The Big Sleep"), became the models for detectives in other "noir" films. His most significant romantic lead role was with Ingrid Bergman in "Casablanca" (1942), which earned him his first nomination for the Academy Award for Best Actor. Bogart and 19-year-old Lauren Bacall fell in love when they filmed "To Have and Have Not" (1944); soon after the main filming for "The Big Sleep" (1946, their second film together), he filed for divorce from his third wife and married Bacall. After their marriage, she played his love interest in "Dark Passage" (1947) and "Key Largo" (1948).
Bogart's performances in "The Treasure of the Sierra Madre" (1948) and "In a Lonely Place" (1950) are now considered among his best, although they were not recognized as such when the films were released. He reprised those unsettled, unstable characters as a World War II naval-vessel commander in "The Caine Mutiny" (1954), which was a critical and commercial hit and earned him another Best Actor nomination. As a cantankerous river steam launch skipper with Katharine Hepburn's missionary in the World War I adventure "The African Queen" (1951), Bogart received the Academy Award for Best Actor. In his later years, significant roles included "The Barefoot Contessa" with Ava Gardner and his on-screen competition with William Holden for Audrey Hepburn in "Sabrina" (1954). A heavy smoker and drinker, Bogart died from esophageal cancer in January 1957.
Humphrey DeForest Bogart was born on Christmas Day 1899 in New York City, the eldest child of Belmont DeForest Bogart (1867–1934) and Maud Humphrey (1868–1940). Belmont was the only child of the unhappy marriage of Adam Welty Bogart (a Canandaigua, New York, innkeeper) and Julia Augusta Stiles, a wealthy heiress. The name "Bogart" derives from the Dutch surname, "Bogaert". Belmont and Maud married in June 1898. He was a Presbyterian, of English and Dutch descent, and a descendant of Sarah Rapelje (the first European child born in New Netherland). Maud was an Episcopalian of English heritage, and a descendant of "Mayflower" passenger John Howland. Humphrey was raised Episcopalian, but was non-practicing for most of his adult life.
The date of Bogart's birth has been disputed. Clifford McCarty wrote that Warner Bros. publicity department had altered it from January 23, 1900 "to foster the view that a man born on Christmas Day couldn't really be as villainous as he appeared to be on screen". The "corrected" January birthdate subsequently appeared—and in some cases, remains—in many otherwise-authoritative sources. According to biographers Ann M. Sperber and Eric Lax, Bogart always celebrated his birthday on December 25 and listed it on official records (including his marriage license).
Lauren Bacall wrote in her autobiography that Bogart's birthday was always celebrated on Christmas Day, saying that he joked about being cheated out of a present every year. Sperber and Lax noted that a birth announcement in the "Ontario County Times" of January 10, 1900 rules out the possibility of a January 23 birthdate; state and federal census records from 1900 also report a Christmas 1899 birthdate.
Belmont, Bogart's father, was a cardiopulmonary surgeon. Maud was a commercial illustrator who received her art training in New York and France, including study with James Abbott McNeill Whistler. She later became art director of the fashion magazine "The Delineator" and a militant suffragette. Maud used a drawing of baby Humphrey in an advertising campaign for Mellins Baby Food. She earned over $50,000 a year at the peak of her career, considerably more than her husband's $20,000. The Bogarts lived in an Upper West Side apartment, and had a cottage on a 55-acre estate on Canandaigua Lake in upstate New York. When he was young, Bogart's group of friends at the lake would put on plays.
He had two younger sisters: Frances ("Pat") and Catherine Elizabeth ("Kay"). Bogart's parents were busy in their careers, and frequently fought. Very formal, they showed little emotion towards their children. Maud told her offspring to call her "Maud" instead of "Mother", and showed little (if any) physical affection for them. When she was pleased, she "[c]lapped you on the shoulder, almost the way a man does", Bogart recalled. "I was brought up very unsentimentally but very straightforwardly. A kiss, in our family, was an event. Our mother and father didn't glug over my two sisters and me."
Bogart was teased as a boy for his curls, tidiness, the "cute" pictures his mother had him pose for, the Little Lord Fauntleroy clothes in which she dressed him, and for his first name. He inherited a tendency to needle, a fondness for fishing, a lifelong love of boating, and an attraction to strong-willed women from his father.
Bogart attended the private Delancey School until the fifth grade, and then attended the prestigious Trinity School. He was an indifferent, sullen student who showed no interest in after-school activities. Bogart later attended Phillips Academy, a boarding school to which he was admitted based on family connections. Although his parents hoped that he would go on to Yale University, in 1918 Bogart left Phillips. Several reasons have been given; according to one, he was expelled for throwing the headmaster (or a groundskeeper) into Rabbit Pond on campus. Another cited smoking, drinking, poor academic performance, and (possibly) inappropriate comments made to the staff. In a third scenario, Bogart was withdrawn by his father for failing to improve his grades. His parents were deeply disappointed in their failed plans for his future.
With no viable career options, Bogart followed his passion for the sea and enlisted in the United States Navy in the spring of 1918 (during World War I). He recalled later, "At eighteen, war was great stuff. Paris! Sexy French girls! Hot damn!" Bogart was recorded as a model sailor, who spent most of his sea time after the armistice ferrying troops back from Europe.
He may have received his trademark scar and developed his characteristic lisp during his naval stint. There are several conflicting stories. In one, his lip was cut by shrapnel when his ship was shelled. The ship was never shelled, however, and it is believed that Bogart was not at sea before the armistice. Another story, held by longtime friend Nathaniel Benchley, was that Bogart was injured while taking a prisoner to Portsmouth Naval Prison in Kittery, Maine. While changing trains in Boston, the handcuffed prisoner reportedly asked Bogart for a cigarette. When Bogart looked for a match, the prisoner smashed him across the mouth with the cuffs (cutting Bogart's lip) and fled before he was recaptured and imprisoned. In an alternative version, Bogart was struck in the mouth by a handcuff loosened while freeing his charge; the other handcuff was still around the prisoner's wrist. By the time Bogart was treated by a doctor, a scar had formed. David Niven said that when he first asked Bogart about his scar, however, he said that it was caused by a childhood accident. "Goddamn doctor", Bogart later told Niven. "Instead of stitching it up, he screwed it up." According to Niven, the stories that Bogart got the scar during wartime were made up by the studios. His post-service physical did not mention the lip scar, although it noted many smaller scars. When actress Louise Brooks met Bogart in 1924, he had scar tissue on his upper lip which Brooks said Bogart may have had partially repaired before entering the film industry in 1930. Brooks said that his "lip wound gave him no speech impediment, either before or after it was mended."
Bogart returned home to find his father in poor health, his medical practice faltering, and much of the family's wealth lost in bad timber investments. His character and values developed separate from his family during his navy days, and he began to rebel. Bogart became a liberal who disliked pretension, phonies and snobs, sometimes defying conventional behavior and authority; he was also well-mannered, articulate, punctual, self-effacing and standoffish. After his naval service he worked as a shipper and a bond salesman, joining the Coast Guard Reserve.
Bogart resumed his friendship with Bill Brady Jr. (whose father had show-business connections), and obtained an office job with William A. Brady's new World Films company. Although he wanted to try his hand at screenwriting, directing, and production, he excelled at none. Bogart was stage manager for Brady's daughter Alice's play "A Ruined Lady". He made his stage debut a few months later as a Japanese butler in Alice's 1921 play "Drifting" (nervously delivering one line of dialogue), and appeared in several of her subsequent plays.
Although Bogart had been raised to believe that acting was a lowly profession, he liked the late hours actors kept and the attention they received: "I was born to be indolent and this was the softest of rackets." He spent much of his free time in speakeasies, drinking heavily. A barroom brawl at this time was also a purported cause of Bogart's lip damage, dovetailing with Louise Brooks' account.
Preferring to learn by doing, he never took acting lessons. Bogart was persistent and worked steadily at his craft, appearing in at least 17 Broadway productions between 1922 and 1935. He played juveniles or romantic supporting roles in drawing-room comedies and is reportedly the first actor to say, "Tennis, anyone?" on stage. According to Alexander Woollcott, Bogart "is what is usually and mercifully described as inadequate."
Other critics were kinder. Heywood Broun, reviewing "Nerves", wrote: "Humphrey Bogart gives the most effective performance ... both dry and fresh, if that be possible". He played a juvenile lead (reporter Gregory Brown) in Lynn Starling's comedy "Meet the Wife", which had a successful 232-performance run at the Klaw Theatre from November 1923 through July 1924. Bogart disliked his trivial, effeminate early-career parts, calling them "White Pants Willie" roles.
While playing a double role in "Drifting" at the Playhouse Theatre in 1922, he met actress Helen Menken; they were married on May 20, 1926, at the Gramercy Park Hotel in New York City. Divorced on November 18, 1927, they remained friends. Menken said in her divorce filing that Bogart valued his career more than marriage, citing neglect and abuse. He married Mary Philips, with whom he had worked in the play "Nerves" during its brief run at the Comedy Theatre in September 1924, on April 3, 1928 at her mother's apartment in Hartford, Connecticut.
Theatrical production dropped off sharply after the Wall Street Crash of 1929, and many of the more-photogenic actors headed for Hollywood. Bogart debuted on film with Helen Hayes in the 1928 two-reeler, "The Dancing Town", a complete copy of which has not been found. He also appeared with Joan Blondell and Ruth Etting in a Vitaphone short, "Broadway's Like That" (1930), which was rediscovered in 1963.
Bogart signed a contract with the Fox Film Corporation for $750 a week. There he met Spencer Tracy, a Broadway actor whom Bogart liked and admired, and they became close friends and drinking companions. In 1930, Tracy first called him "Bogie". Tracy made his feature-film debut, in his only film with Bogart, in John Ford's early sound film "Up the River" (1930), in which they had major roles as inmates. Tracy received top billing, but Bogart appeared on the film's posters; he was billed fourth behind Tracy, Claire Luce and Warren Hymer.
Bogart then had a supporting role in "Bad Sister" (1931) with Bette Davis. Decades later, Tracy and Bogart planned to make "The Desperate Hours" together. Both wanted top billing, however; Tracy dropped out, and was replaced by Fredric March. Bogart shuttled back and forth between Hollywood and the New York stage from 1930 to 1935, out of work for long periods. His parents had separated; his father died in 1934 in debt, which Bogart eventually paid off. He inherited his father's gold ring, which he wore in many of his films. At his father's deathbed, Bogart finally told him how much he loved him. Bogart's second marriage was rocky; dissatisfied with his acting career, depressed and irritable, he drank heavily.
In 1934, Bogart starred in the Broadway play "Invitation to a Murder" at the Theatre Masque (renamed the John Golden Theatre in 1937). Its producer, Arthur Hopkins, heard the play from offstage; he sent for Bogart and offered him the role of escaped murderer Duke Mantee in Robert E. Sherwood's forthcoming play, "The Petrified Forest". Hopkins later recalled:
The play had 197 performances at the Broadhurst Theatre in New York in 1935. Although Leslie Howard was the star, "The New York Times" critic Brooks Atkinson said that the play was "a peach ... a roaring Western melodrama ... Humphrey Bogart does the best work of his career as an actor." Bogart said that the play "marked my deliverance from the ranks of the sleek, sybaritic, stiff-shirted, swallow-tailed 'smoothies' to which I seemed condemned to life." However, he still felt insecure. Warner Bros. bought the screen rights to "The Petrified Forest" in 1935. The play seemed ideal for the studio, which was known for its socially-realistic pictures for a public entranced by real-life criminals such as John Dillinger and Dutch Schultz. Bette Davis and Leslie Howard were cast. Howard, who held the production rights, made it clear that he wanted Bogart to star with him.
The studio tested several Hollywood veterans for the Duke Mantee role and chose Edward G. Robinson, who had star appeal and was due to make a film to fulfill his contract. Bogart cabled news of this development to Howard in Scotland, who replied: "Att: Jack Warner Insist Bogart Play Mantee No Bogart No Deal L.H.". When Warner Bros. saw that Howard would not budge, they gave in and cast Bogart. Jack Warner wanted Bogart to use a stage name, but Bogart declined, having built a reputation with his name in Broadway theater. The film version of "The Petrified Forest" was released in 1936. According to "Variety", "Bogart's menace leaves nothing wanting". Frank S. Nugent wrote for "The New York Times" that the actor "can be a psychopathic gangster more like Dillinger than the outlaw himself." The film was successful at the box office, earning $500,000 in rentals, and made Bogart a star. He never forgot Howard's favor and named his only daughter, Leslie Howard Bogart, after him in 1952.
Despite his success in "The Petrified Forest" (an "A movie"), Bogart signed a tepid 26-week contract at $550 per week and was typecast as a gangster in a series of B movie crime dramas. Although he was proud of his success, the fact that it derived from gangster roles weighed on him: "I can't get in a mild discussion without turning it into an argument. There must be something in my tone of voice, or this arrogant face—something that antagonizes everybody. Nobody likes me on sight. I suppose that's why I'm cast as the heavy."
In spite of his success, Warner Bros. had no interest in raising Bogart's profile. His roles were repetitive and physically demanding; studios were not yet air-conditioned, and his tightly-scheduled job at Warners was anything but the indolent and "peachy" actor's life he hoped for. Although Bogart disliked the roles chosen for him, he worked steadily. "In the first 34 pictures" for Warner's, he told George Frazier, "I was shot in 12, electrocuted or hanged in 8, and was a jailbird in 9". He averaged a film every two months between 1936 and 1940, sometimes working on two films at the same time. Bogart used these years to begin developing his film persona: a wounded, stoical, cynical, charming, vulnerable, self-mocking loner with a code of honor.
Amenities at Warners were few, compared to the prestigious Metro-Goldwyn-Mayer. Bogart thought that the Warners wardrobe department was cheap, and often wore his own suits in his films; he used his dog, Zero, to play Pard (his character's dog) in "High Sierra". His disputes with Warner Bros. over roles and money were similar to those waged by the studio with other, less-malleable stars such as Bette Davis and James Cagney.
Leading men at Warner Bros. included James Cagney and Edward G. Robinson. Most of the studio's better scripts went to them (or others), leaving Bogart with what was left: films like "San Quentin" (1937), "Racket Busters" (1938), and "You Can't Get Away with Murder" (1939). His only substantial role during this period was in "Dead End" (1937, on loan to Samuel Goldwyn), as a gangster modeled after Baby Face Nelson.
Bogart played violent roles so often that in Nevil Shute's 1939 novel, "What Happened to the Corbetts", the protagonist replies "I've seen Humphrey Bogart with one often enough" when asked if he knows how to operate an automatic weapon. Although he played a variety of supporting roles in films such as "Angels with Dirty Faces" (1938), Bogart's roles were either rivals of characters played by Cagney and Robinson or a secondary member of their gang. In "Black Legion" (1937), a movie Graham Greene described as "intelligent and exciting, if rather earnest", he played a good man who was caught up with (and destroyed by) a racist organization.
The studio cast Bogart as a wrestling promoter in "Swing Your Lady" (1938), a "hillbilly musical" which he reportedly considered his worst film performance. He played a rejuvenated, formerly-dead scientist in "The Return of Doctor X" (1939), his only horror film: "If it'd been Jack Warner's blood ... I wouldn't have minded so much. The trouble was they were drinking mine and I was making this stinking movie." His wife, Mary, had a stage hit in "A Touch of Brimstone" and refused to abandon her Broadway career for Hollywood. After the play closed, Mary relented; she insisted on continuing her career, however, and they divorced in 1937.
Bogart entered a turbulent third marriage to actress Mayo Methot, a lively, friendly woman when sober but paranoid and aggressive when drunk, on August 21, 1938. She became convinced that Bogart was unfaithful to her (which he eventually was, with Lauren Bacall, while filming "To Have and Have Not" in 1944). They drifted apart; Methot's drinking increased, and she threw plants, crockery and other objects at Bogart. She set their house afire, stabbed him with a knife, and slashed her wrists several times. Bogart needled her; apparently enjoying confrontation, he was sometimes violent as well. The press called them "the Battling Bogarts".
According to their friend, Julius Epstein, "The Bogart-Methot marriage was the sequel to the Civil War". Bogart bought a motor launch which he named "Sluggy," his nickname for Methot: "I like a jealous wife ... We get on so well together (because) we don't have illusions about each other ... I wouldn't give you two cents for a dame without a temper." Louise Brooks said that "except for Leslie Howard, no one contributed as much to Humphrey's success as his third wife, Mayo Methot." Methot's influence was increasingly destructive, however, and Bogart also continued to drink.
He had a lifelong disdain for pretension and phoniness, and was again irritated by his inferior films. Bogart rarely watched his own films and avoided premieres, issuing fake press releases about his private life to satisfy journalistic and public curiosity. When he thought an actor, director or studio had done something shoddy, he spoke up publicly about it. Bogart advised Robert Mitchum that the only way to stay alive in Hollywood was to be an "againster". He was not the most popular of actors, and some in the Hollywood community shunned him privately to avoid trouble with the studios. Bogart once said,
The Hollywood press, unaccustomed to such candor, was delighted.
"High Sierra" (1941, directed by Raoul Walsh) was written by John Huston, Bogart's friend and drinking partner. The film was adapted from a novel by W. R. Burnett, author of the novel on which "Little Caesar" was based. Paul Muni, George Raft, Cagney and Robinson turned down the lead role, giving Bogart the opportunity to play a character with some depth. Walsh initially opposed Bogart's casting, preferring Raft for the part. It was Bogart's last major film as a gangster; a supporting role followed in "The Big Shot", released in 1942. He worked well with Ida Lupino, sparking jealousy from Mayo Methot.
The film cemented a strong personal and professional connection between Bogart and Huston. Bogart admired (and somewhat envied) Huston for his skill as a writer; a poor student, Bogart was a lifelong reader. He could quote Plato, Pope, Ralph Waldo Emerson and over a thousand lines of Shakespeare, and subscribed to the "Harvard Law Review". Bogart admired writers; some of his best friends were screenwriters, including Louis Bromfield, Nathaniel Benchley, and Nunnally Johnson. He enjoyed intense, provocative conversation (accompanied by stiff drinks), as did Huston. Both were rebellious and enjoyed playing childish pranks. Huston was reportedly easily bored during production, and admired Bogart (also bored easily off-camera) for his acting talent and his intense concentration on-set.
Now regarded as a classic film noir, "The Maltese Falcon" (1941) was John Huston's directorial debut. Based on the Dashiell Hammett novel, it was first serialized in the pulp magazine "Black Mask" in 1929 and was the basis of two earlier film versions; the second was "Satan Met a Lady" (1936), starring Bette Davis. Producer Hal B. Wallis initially offered to cast George Raft as the leading man, but Raft (more established than Bogart) had a contract stipulating he was not required to appear in remakes. Fearing that it would be nothing more than a sanitized version of the pre-Production Code "The Maltese Falcon" (1931), Raft turned down the role to make "Manpower" with director Raoul Walsh. Huston then eagerly accepted Bogart as his Sam Spade.
Complementing Bogart were co-stars Sydney Greenstreet, Peter Lorre, Elisha Cook Jr., and Mary Astor as the treacherous female foil. Bogart's sharp timing and facial expressions were praised by the cast and director as vital to the film's quick action and rapid-fire dialogue. It was a commercial hit, and a major triumph for Huston. Bogart was unusually happy with the film: "It is practically a masterpiece. I don't have many things I'm proud of ... but that's one".
Bogart played his first romantic lead in "Casablanca" (1942): Rick Blaine, an expatriate nightclub owner hiding from a suspicious past and negotiating a fine line among Nazis, the French underground, the Vichy prefect and unresolved feelings for his ex-girlfriend. Bosley Crowther wrote in his November 1942 "New York Times" review that Bogart's character was used "to inject a cold point of tough resistance to evil forces afoot in Europe today". The film, directed by Michael Curtiz and produced by Hal Wallis, featured Ingrid Bergman, Claude Rains, Sydney Greenstreet, Paul Henreid, Conrad Veidt, Peter Lorre and Dooley Wilson.
Bogart and Bergman's on-screen relationship was based on professionalism rather than actual rapport, although Mayo Methot assumed otherwise. Off the set, the co-stars hardly spoke. Bergman (who had a reputation for affairs with her leading men) later said about Bogart, "I kissed him but I never knew him." Because she was taller, Bogart had blocks attached to his shoes in some scenes.
Bogart is reported to have been responsible for the notion that Rick Blaine should be portrayed as a chess player, a metaphor for the relationships he maintained with friends, enemies, and allies. He played tournament-level chess (one division below master) in real life, often enjoying games with crew members and cast but finding his better in Paul Henreid.
"Casablanca" won the Academy Award for Best Picture at the 16th Academy Awards for 1943. Bogart was nominated for Best Actor in a Leading Role, but lost to Paul Lukas for his performance in "Watch on the Rhine". The film vaulted Bogart from fourth place to first in the studio's roster, however, finally overtaking James Cagney. He more than doubled his annual salary to over $460,000 by 1946, making him the world's highest-paid actor.
Bogart went on United Service Organizations and War Bond tours with Methot in 1943 and 1944, making arduous trips to Italy and North Africa (including Casablanca). He was still required to perform in films with weak scripts, leading to conflicts with the front office. He starred in "Conflict" (1945, again with Greenstreet), but turned down "God is My Co-Pilot" that year.
Bogart met Lauren Bacall (1924–2014) while filming "To Have and Have Not" (1944), a loose adaptation of the Ernest Hemingway novel. It has several similarities to "Casablanca": the same enemies, the same kind of hero, and a piano player (played by Hoagy Carmichael). When they met, Bacall was 19 and Bogart 44; he nicknamed her "Baby." A model since age 16, she had appeared in two failed plays. Bogart was attracted by Bacall's high cheekbones, green eyes, tawny blond hair, lean body, maturity, poise and earthy, outspoken honesty; he reportedly said, "I just saw your test. We'll have a lot of fun together".
Their emotional bond was strong from the start, their age and acting-experience differences encouraging a mentor-student dynamic. In contrast to the Hollywood norm, their affair was Bogart's first with a leading lady. His early meetings with Bacall were discreet and brief, their separations bridged by love letters. The relationship made it easier for Bacall to make her first film, and Bogart did his best to put her at ease with jokes and quiet coaching. He encouraged her to steal scenes; Howard Hawks also did his best to highlight her role, and found Bogart easy to direct.
However, Hawks began to disapprove of the relationship. He considered himself Bacall's protector and mentor, and Bogart was usurping that role. Not usually drawn to his starlets, the married director also fell for Bacall; he told her that she meant nothing to Bogart and threatened to send her to the poverty-row Monogram Pictures. Bogart calmed her down, and then went after Hawks; Jack Warner settled the dispute, and filming resumed. Hawks said about Bacall, "Bogie fell in love with the character she played, so she had to keep playing it the rest of her life."
Months after wrapping "To Have and Have Not", Bogart and Bacall were reunited for an encore: the film noir "The Big Sleep" (1946), based on the novel by Raymond Chandler with script help from William Faulkner. Chandler admired the actor's performance: "Bogart can be tough without a gun. Also, he has a sense of humor that contains that grating undertone of contempt." Although the film was completed and scheduled for release in 1945, it was withdrawn and re-edited to add scenes exploiting Bogart and Bacall's box-office chemistry in "To Have and Have Not" and the publicity surrounding their offscreen relationship. At director Howard Hawks' urging, production partner Charles K. Feldman agreed to a rewrite of Bacall's scenes to heighten the "insolent" quality which had intrigued critics such as James Agee and audiences of the earlier film, and a memo was sent to studio head Jack Warner.
The dialogue, especially in the added scenes supplied by Hawks, was full of sexual innuendo, and Bogart is convincing as private detective Philip Marlowe. The film was successful, although some critics found its plot confusing and overly complicated. According to Chandler, Hawks and Bogart argued about who killed the chauffeur; when Chandler received an inquiry by telegram, he could not provide an answer.
Bogart filed for divorce from Methot in February 1945. He and Bacall married in a small ceremony at the country home of Bogart's close friend, Pulitzer Prize-winning author Louis Bromfield, at Malabar Farm (near Lucas, Ohio) on May 21, 1945.
They moved into a $160,000 white brick mansion in an exclusive neighborhood of Los Angeles's Holmby Hills. The marriage was a happy one, though with tensions due to their differences. Bogart's drinking was sometimes problematic. He was a homebody and Bacall liked the nightlife; he loved the sea, which made her seasick.
Bogart bought the "Santana", a sailing yacht, from actor Dick Powell in 1945. He found the sea a sanctuary and spent about thirty weekends a year on the water, with a particular fondness for sailing around Catalina Island: "An actor needs something to stabilize his personality, something to nail down what he really is, not what he is currently pretending to be." Bogart joined the Coast Guard Temporary Reserve, offering the Coast Guard use of the "Santana". He reportedly attempted to enlist, but was turned down due to his age.
The suspenseful "Dark Passage" (1947) was Bogart and Bacall's next collaboration. Vincent Parry (Bogart) is intent on finding the real murderer for a crime of which he was convicted and sentenced to prison. According to Bogart's biographer, Stefan Kanfer, it was "a production line film noir with no particular distinction".
Bogart and Bacall's last pairing in a film was in "Key Largo" (1948). Directed by John Huston, it featured Edward G. Robinson billed second (behind Bogart) as gangster Johnny Rocco: a seething, older synthesis of many of his early bad-guy roles. The characters are trapped during a hurricane in a hotel owned by the father-in-law of Bacall's character, played by Lionel Barrymore. Claire Trevor won the Academy Award for Best Supporting Actress for her performance as Rocco's physically-abused, alcoholic girlfriend.
Riding high in 1947 with a new contract which provided limited script refusal and the right to form his production company, Bogart rejoined with John Huston for "The Treasure of the Sierra Madre": a stark tale of greed among three gold prospectors in Mexico. Lacking a love interest or a happy ending, it was considered a risky project. Bogart later said about co-star (and John Huston's father) Walter Huston, "He's probably the only performer in Hollywood to whom I'd gladly lose a scene."
The film was shot in the heat of summer for greater realism and atmosphere, and was grueling to make. James Agee wrote, "Bogart does a wonderful job with this character ... miles ahead of the very good work he has done before." Although John Huston won the Academy Award for Best Director and screenplay and his father won the Best Supporting Actor award, the film had mediocre box-office results. Bogart complained, "An intelligent script, beautifully directed—something different—and the public turned a cold shoulder on it."
Bogart, a liberal Democrat, organized the Committee for the First Amendment (a delegation to Washington, D.C.) opposing what he saw as the House Un-American Activities Committee's harassment of Hollywood screenwriters and actors. He wrote an article, "I'm No Communist", for the March 1948 issue of "Photoplay" magazine distancing himself from the Hollywood Ten to counter negative publicity resulting from his appearance. Bogart wrote, "The ten men cited for contempt by the House Un-American Activities Committee were not defended by us."
Bogart created his film company, Santana Productions (named after his yacht and the cabin cruiser in "Key Largo"), in 1948. The right to create his own company had left Jack Warner furious, fearful that other stars would do the same and further erode the major studios' power. In addition to pressure from freelancing actors such as Bogart, James Stewart and Henry Fonda, they were beginning to buckle from the impact of television and the enforcement of antitrust laws which broke up theater chains. Bogart appeared in his final films for Warners, "Chain Lightning" (1950) and "The Enforcer" (1951).
Except for "Beat the Devil" (1953), originally distributed in the United States by United Artists, the company released its films through Columbia Pictures; Columbia re-released "Beat the Devil" a decade later. In quick succession, Bogart starred in "Knock on Any Door" (1949), "Tokyo Joe" (1949), "In a Lonely Place" (1950), and "Sirocco" (1951). Santana also made two films without him: "And Baby Makes Three" (1949) and "The Family Secret" (1951).
Although most lost money at the box office (ultimately forcing Santana's sale), at least two retain a reputation; "In a Lonely Place" is considered a film-noir high point. Bogart plays Dixon Steele, an embittered writer with a violent reputation who is the primary suspect in the murder of a young woman and falls in love with failed actress Laurel Gray (Gloria Grahame). Several Bogart biographers, and actress-writer Louise Brooks, have felt that this role is closest to the real Bogart. According to Brooks, the film "gave him a role that he could play with complexity, because the film character's pride in his art, his selfishness, drunkenness, lack of energy stabbed with lightning strokes of violence were shared by the real Bogart". The character mimics some of Bogart's personal habits, twice ordering the actor's favorite meal (ham and eggs).
A parody of sorts of "The Maltese Falcon", "Beat the Devil" was the final film for Bogart and John Huston. Co-written by Truman Capote, the eccentrically-filmed story follows an amoral group of rogues chasing an unattainable treasure. Bogart sold his interest in Santana to Columbia for over $1 million in 1955.
Outside Santana Productions, Bogart starred with Katharine Hepburn in the John Huston-directed "The African Queen" in 1951. The C. S. Forester novel on which it was based was overlooked and left undeveloped for 15 years, until producer Sam Spiegel and Huston bought the rights. Spiegel sent Katharine Hepburn the book; she suggested Bogart for the male lead, believing that "he was the only man who could have played that part". Huston's love of adventure, his deep, longstanding friendship (and success) with Bogart, and the chance to work with Hepburn convinced the actor to leave Hollywood for a difficult shoot on location in the Belgian Congo. Bogart was to get 30 percent of the profits and Hepburn 10 percent, plus a relatively-small salary for both. The stars met in London, and announced that they would work together.
Bacall came for the over-four-month duration, leaving their young son in Los Angeles. The Bogarts began the trip with a junket through Europe, including a visit with Pope Pius XII. Bacall later made herself useful as a cook, nurse and clothes washer; her husband said: "I don't know what we'd have done without her. She Luxed my undies in darkest Africa." Nearly everyone in the cast developed dysentery except Bogart and Huston, who subsisted on canned food and alcohol; Bogart said, "All I ate was baked beans, canned asparagus and Scotch whisky. Whenever a fly bit Huston or me, it dropped dead." Hepburn (a teetotaler) fared worse in the difficult conditions, losing weight and at one point becoming very ill. Bogart resisted Huston's insistence on using real leeches in a key scene where Charlie has to drag his steam launch through an infested marsh, and reasonable fakes were employed. The crew overcame illness, army-ant infestations, leaky boats, poor food, attacking hippos, poor water filters, extreme heat, isolation, and a boat fire to complete the film. Despite the discomfort of jumping from the boat into swamps, rivers and marshes, "The African Queen" apparently rekindled Bogart's early love of boats; when he returned to California, he bought a classic mahogany Hacker-Craft runabout which he kept until his death.
His performance as cantankerous skipper Charlie Allnutt earned Bogart an Academy Award for Best Actor in 1951 (his only award of three nominations), and he considered it the best of his film career. Promising friends that if he won his speech would break the convention of thanking everyone in sight, Bogart advised Claire Trevor when she was nominated for "Key Largo" to "just say you did it all yourself and don't thank anyone". When Bogart won, however, he said: "It's a long way from the Belgian Congo to the stage of this theatre. It's nicer to be here. Thank you very much ... No one does it alone. As in tennis, you need a good opponent or partner to bring out the best in you. John and Katie helped me to be where I am now." Despite the award and its accompanying recognition, Bogart later said: "The way to survive an Oscar is never to try to win another one ... too many stars ... win it and then figure they have to top themselves ... they become afraid to take chances. The result: A lot of dull performances in dull pictures." "The African Queen" was Bogart's first starring Technicolor role.
Bogart dropped his asking price to obtain the role of Captain Queeg in Edward Dmytryk's drama, "The Caine Mutiny" (1954). Though he retained some of his old bitterness about having to do so, he delivered a strong performance in the lead; he received his final Oscar nomination and was the subject of a June 7, 1954 "Time" magazine cover story.
Despite his success, Bogart was still melancholy; he grumbled to (and feuded with) the studio, while his health began to deteriorate. The character of Queeg was similar to his roles in "The Maltese Falcon", "Casablanca" and "The Big Sleep" (the wary loner who trusts no one), but without their warmth and humor. Like his portrayal of Fred C. Dobbs in "The Treasure of the Sierra Madre", Bogart's Queeg is a paranoid, self-pitying character whose small-mindedness eventually destroys him. Henry Fonda played a different role in the Broadway version of "The Caine Mutiny", generating publicity for the film.
For "Sabrina" (1954), Billy Wilder wanted Cary Grant for the older male lead and chose Bogart to play the conservative brother who competes with his younger, playboy sibling (William Holden) for the affection of the Cinderella-like Sabrina (Audrey Hepburn). Although Bogart was lukewarm about the part, he agreed to it on a handshake with Wilder without a finished script but with the director's assurance that he would take good care of Bogart during filming. The actor, however, got along poorly with his director and co-stars; he complained about the script's last-minute drafting and delivery, and accused Wilder of favoring Hepburn and Holden on and off the set. Wilder was the opposite of Bogart's ideal director (John Huston) in style and personality; Bogart complained to the press that Wilder was "overbearing" and "is [a] kind of Prussian German with a riding crop. He is the type of director I don't like to work with ... the picture is a crock of crap. I got sick and tired of who gets Sabrina." Wilder later said, "We parted as enemies but finally made up." Despite the acrimony, the film was successful; according to a review in "The New York Times", Bogart was "incredibly adroit ... the skill with which this old rock-ribbed actor blends the gags and such duplicities with a manly manner of melting is one of the incalculable joys of the show".
Joseph L. Mankiewicz's "The Barefoot Contessa" (1954) was filmed in Rome. In this Hollywood backstory Bogart is a broken-down man, a cynical director-narrator who saves his career by making a star of a flamenco dancer modeled on Rita Hayworth. He was uneasy with Ava Gardner in the female lead; she had just broken up with his Rat Pack buddy Frank Sinatra, and Bogart was annoyed by her inexperienced performance. The actor was generally praised as the film's strongest part. During filming and while Bacall was home, Bogart resumed his discreet affair with Verita Bouvaire-Thompson (his long-time studio assistant, whom he drank with and took sailing). When Bacall found them together, she extracted an expensive shopping spree from her husband; the three traveled together after the shooting.
Bogart could be generous with actors, particularly those who were blacklisted, down on their luck or having personal problems. During the filming of the Edward Dmytryk-directed "The Left Hand of God" (1955), he noticed his co-star Gene Tierney having a hard time remembering her lines and behaving oddly; he coached Tierney, feeding her lines. Familiar with mental illness because of his sister's bouts of depression, Bogart encouraged Tierney to seek treatment. He also stood behind Joan Bennett, insisting on her as his co-star in Michael Curtiz's "We're No Angels" (1955) when a scandal made her "persona non grata" with Jack Warner.
Bogart rarely performed on television, but he and Bacall appeared on Edward R. Murrow's "Person to Person" and disagreed on the answer to every question. He also appeared on "The Jack Benny Show", where a surviving kinescope of the live telecast captures him in his only TV sketch-comedy performance. Bogart and Bacall worked on an early color telecast in 1955, an NBC adaptation of "The Petrified Forest" for "Producers' Showcase". Bogart received top billing, and Henry Fonda played Leslie Howard's role; a black and white kinescope of the live telecast has survived. Bogart performed radio adaptations of some of his best-known films, such as "Casablanca" and "The Maltese Falcon", and recorded a radio series entitled "Bold Venture" with Bacall.
Bogart became a father at age 49, when Bacall gave birth to Stephen Humphrey Bogart on January 6, 1949 during the filming of "Tokyo Joe". The name was taken from Steve, Bogart's character's nickname in "To Have and Have Not". Stephen became an author and biographer, and hosted a television special about his father on Turner Classic Movies. The couple's daughter, Leslie Howard Bogart, was born on August 23, 1952. Her first and middle names honor Leslie Howard, Bogart's friend and co-star in "The Petrified Forest".
Bogart was a founding member and the original leader of the Hollywood Rat Pack. In the spring of 1955, after a long party in Las Vegas attended by Frank Sinatra, Judy Garland, her husband Sidney Luft, Michael Romanoff and his wife Gloria, David Niven, Angie Dickinson and others, Bacall surveyed the wreckage and said: "You look like a goddamn rat pack."
The name stuck, and was made official at Romanoff's in Beverly Hills. Sinatra was dubbed Pack Leader; Bacall Den Mother; Bogart Director of Public Relations, and Sid Luft Acting Cage Manager. Asked by columnist Earl Wilson what the group's purpose was, Bacall replied: "To drink a lot of bourbon and stay up late."
After signing a long-term deal with Warner Bros., Bogart predicted with glee that his teeth and hair would fall out before the contract ended. In 1955, however, his health was failing. In the wake of Santana, Bogart had formed a new company and had plans for a film ("Melville Goodwin, U.S.A.") in which he would play a general and Bacall a press magnate. His persistent cough and difficulty eating became too serious to ignore, though, and he dropped the project.
A heavy smoker and drinker, Bogart had developed esophageal cancer. He did not talk about his health, and visited a doctor in January 1956 only after considerable persuasion from Bacall. The disease worsened several weeks later, and on March 1 Bogart had surgery to remove his esophagus, two lymph nodes and a rib. The surgery was unsuccessful, and chemotherapy followed. He had additional surgery in November 1956, after the cancer had spread. Although Bogart became too weak to walk up and down stairs, he joked despite the pain: "Put me in the dumbwaiter and I'll ride down to the first floor in style." The dumbwaiter was then altered to accommodate his wheelchair. Sinatra, Katharine Hepburn, and Spencer Tracy visited Bogart on January 13, 1957. In an interview, Hepburn said:
Bogart lapsed into a coma and died the following day, 20 days after his 57th birthday; at the time of his death he weighed only . A simple funeral was held at All Saints Episcopal Church, with music by Bogart's favorite composers: Johann Sebastian Bach and Claude Debussy. In attendance were some of Hollywood's biggest stars, including Hepburn, Tracy, Judy Garland, David Niven, Ronald Reagan, James Mason, Bette Davis, Danny Kaye, Joan Fontaine, Marlene Dietrich, James Cagney, Errol Flynn, Edward G. Robinson, Gregory Peck, Gary Cooper, Billy Wilder and studio head Jack L. Warner. Bacall asked Tracy to give the eulogy; he was too upset, however, and John Huston spoke instead:
Bogart was cremated, and his ashes were interred in Forest Lawn Memorial Park's Columbarium of Eternal Light in its Garden of Memory in Glendale, California. He was buried with a small, gold whistle which had been part of a charm bracelet he had given to Bacall before they married. On it was inscribed, "If you want anything, just whistle." This alluded to a scene in "To Have and Have Not" when Bacall's character says to Bogart shortly after their first meeting, "You know how to whistle, don't you, Steve? You just put your lips together and blow."
Bogart's estate had a gross value of $910,146 and a net value of $737,668.
On August 21, 1946, he recorded his hand- and footprints in cement in a ceremony at Grauman's Chinese Theatre. On February 8, 1960, Bogart was posthumously inducted into the Hollywood Walk of Fame with a motion-picture star at 6322 Hollywood Boulevard.
After his death, a "Bogie cult" formed at the Brattle Theatre in Cambridge, Massachusetts, in Greenwich Village, and in France; this contributed to his increased popularity during the late 1950s and 1960s. In 1997, "Entertainment Weekly" magazine ranked Bogart the number-one movie legend of all time; two years later, the American Film Institute rated him the greatest male screen legend.
Jean-Luc Godard's "Breathless" (1960) was the first film to pay tribute to Bogart. Over a decade later, in Woody Allen's comic paean "Play It Again, Sam" (1972), Bogart's ghost aids Allen's character: a film critic having difficulties with women who says that his "sex life has turned into the 'Petrified Forest'".
The United States Postal Service honored Bogart with a stamp in its "Legends of Hollywood" series in 1997, the third figure recognized. At a ceremony attended by Lauren Bacall and the Bogart children, Stephen and Leslie, USPS governing-board chair Tirso del Junco delivered a tribute:
"Today, we mark another chapter in the Bogart legacy. With an image that is small and yet as powerful as the ones he left in celluloid, we will begin today to bring his artistry, his power, his unique star quality, to the messages that travel the world."
On June 24, 2006, 103rd Street between Broadway and West End Avenue in New York City was renamed Humphrey Bogart Place. Lauren Bacall and her son, Stephen Bogart, attended the ceremony. "Bogie would never have believed it", she said to the assembled city officials and onlookers.
Bogart has inspired a number of artists. Two Bugs Bunny cartoons featured the actor: "Slick Hare" (1947) and "8 Ball Bunny" (1950, based on "The Treasure of the Sierra Madre"). "The Man with Bogart's Face" (1981, starring Bogart lookalike Robert Sacchi) was an homage to the actor. The lyrics of Bertie Higgins' 1981 song, "Key Largo", refer to "Key Largo" and "Casablanca".
History painting
History painting is a genre in painting defined by its subject matter rather than artistic style. History paintings usually depict a moment in a narrative story, rather than a specific and static subject, as in a portrait. The term is derived from the wider senses of the word "historia" in Latin and Italian, meaning "story" or "narrative", and essentially means "story painting". Most history paintings are not of scenes from history, especially paintings from before about 1850.
In modern English, historical painting is sometimes used to describe the painting of scenes from history in its narrower sense, especially for 19th-century art, excluding religious, mythological and allegorical subjects, which are included in the broader term history painting, and before the 19th century were the most common subjects for history paintings.
History paintings almost always contain a number of figures, often a large number, and normally show some type of action that is a moment in a narrative. The genre includes depictions of moments in religious narratives, above all the "Life of Christ", as well as narrative scenes from mythology, and also allegorical scenes. These groups were for long the most frequently painted; works such as Michelangelo's Sistine Chapel ceiling are therefore history paintings, as are most very large paintings before the 19th century. The term covers large paintings in oil on canvas or fresco produced between the Renaissance and the late 19th century, after which the term is generally not used even for the many works that still meet the basic definition.
History painting may be used interchangeably with historical painting, and was especially so used before the 20th century. Where a distinction is made, "historical painting" is the painting of scenes from secular history, whether specific episodes or generalized scenes. In the 19th century, historical painting in this sense became a distinct genre. In phrases such as "historical painting materials", "historical" means in use before about 1900, or some earlier date.
History paintings were traditionally regarded as the highest form of Western painting, occupying the most prestigious place in the hierarchy of genres, and considered the equivalent to the epic in literature. In his "De Pictura" of 1436, Leon Battista Alberti had argued that multi-figure history painting was the noblest form of art, as being the most difficult, which required mastery of all the others, because it was a visual form of history, and because it had the greatest potential to move the viewer. He placed emphasis on the ability to depict the interactions between the figures by gesture and expression.
This view remained general until the 19th century, when artistic movements began to struggle against the establishment institutions of academic art, which continued to adhere to it. At the same time, there was from the latter part of the 18th century an increased interest in depicting in the form of history painting moments of drama from recent or contemporary history, which had long largely been confined to battle-scenes and scenes of formal surrenders and the like. Scenes from ancient history had been popular in the early Renaissance, and once again became common in the Baroque and Rococo periods, and still more so with the rise of Neoclassicism. In some 19th or 20th century contexts, the term may refer specifically to paintings of scenes from secular history, rather than those from religious narratives, literature or mythology.
The term is generally not used in art history in speaking of medieval painting, although the Western tradition was developing in large altarpieces, fresco cycles, and other works, as well as miniatures in illuminated manuscripts. It comes to the fore in Italian Renaissance painting, where a series of increasingly ambitious works were produced, many still religious, but several, especially in Florence, did actually feature near-contemporary historical scenes, such as the set of three huge canvases of "The Battle of San Romano" by Paolo Uccello, as well as the abortive "Battle of Cascina" by Michelangelo and "Battle of Anghiari" by Leonardo da Vinci, neither of which was completed. Scenes from ancient history and mythology were also popular. Writers such as Alberti and, in the following century, Giorgio Vasari in his "Lives of the Artists" followed public and artistic opinion in judging the best painters above all on their production of large works of history painting (though in fact the only modern, post-classical work described in "De Pictura" is Giotto's huge "Navicella" in mosaic). Artists continued for centuries to strive to make their reputation by producing such works, often neglecting genres to which their talents were better suited.
There was some objection to the term, as many writers preferred terms such as "poetic painting" ("poesia"), or wanted to make a distinction between the "true" "istoria", covering history including biblical and religious scenes, and the "fabula", covering pagan myth, allegory, and scenes from fiction, which could not be regarded as true. The large works of Raphael were long considered, with those of Michelangelo, as the finest models for the genre.
In the Raphael Rooms in the Vatican Palace, allegories and historical scenes are mixed together, and the Raphael Cartoons show scenes from the Gospels, all in the Grand Manner that from the High Renaissance became associated with, and often expected in, history painting. In the Late Renaissance and Baroque, the painting of actual history tended to degenerate into panoramic battle-scenes with the victorious monarch or general perched on a horse accompanied by his retinue, or formal scenes of ceremonies, although some artists managed to make a masterpiece from such unpromising material, as Velázquez did with his "The Surrender of Breda".
An influential formulation of the hierarchy of genres, confirming history painting at the top, was made in 1667 by André Félibien, a historiographer, architect and theoretician of French classicism; it became the classic statement of the theory for the 18th century:Celui qui fait parfaitement des païsages est au-dessus d'un autre qui ne fait que des fruits, des fleurs ou des coquilles. Celui qui peint des animaux vivants est plus estimable que ceux qui ne représentent que des choses mortes & sans mouvement; & comme la figure de l'homme est le plus parfait ouvrage de Dieu sur la Terre, il est certain aussi que celui qui se rend l'imitateur de Dieu en peignant des figures humaines, est beaucoup plus excellent que tous les autres ... un Peintre qui ne fait que des portraits, n'a pas encore cette haute perfection de l'Art, & ne peut prétendre à l'honneur que reçoivent les plus sçavans. Il faut pour cela passer d'une seule figure à la représentation de plusieurs ensemble; il faut traiter l'histoire & la fable; il faut représenter de grandes actions comme les historiens, ou des sujets agréables comme les Poëtes; & montant encore plus haut, il faut par des compositions allégoriques, sçavoir couvrir sous le voile de la fable les vertus des grands hommes, & les mystères les plus relevez.
He who produces perfect landscapes is above another who only produces fruit, flowers or seashells. He who paints living animals is more estimable than those who only represent dead things without movement, and as man is the most perfect work of God on the earth, it is also certain that he who becomes an imitator of God in representing human figures, is much more excellent than all the others ... a painter who only does portraits still does not have the highest perfection of his art, and cannot expect the honour due to the most skilled. For that he must pass from representing a single figure to several together; history and myth must be depicted; great events must be represented as by historians, or, like the poets, subjects that will please; and climbing still higher, he must have the skill to cover under the veil of myth the virtues of great men, and the most elevated mysteries, in allegories.
By the late 18th century, with both religious and mythological painting in decline, there was an increased demand for paintings of scenes from history, including contemporary history. This was in part driven by the changing audience for ambitious paintings, which now increasingly made their reputation in public exhibitions rather than by impressing the owners of and visitors to palaces and public buildings. Classical history remained popular, but scenes from national histories were often the best-received. From 1760 onwards, the Society of Artists of Great Britain, the first body to organize regular exhibitions in London, awarded two generous prizes each year to paintings of subjects from British history.
The unheroic nature of modern dress was regarded as a serious difficulty. When, in 1770, Benjamin West proposed to paint "The Death of General Wolfe" in contemporary dress, many firmly instructed him to use classical costume. He ignored these objections and showed the scene in modern dress. Although George III refused to purchase the work, West succeeded both in overcoming his critics' objections and in inaugurating a more historically accurate style in such paintings. Other artists depicted scenes, regardless of when they occurred, in classical dress, and for a long time, especially during the French Revolution, history painting often focused on depictions of the heroic male nude.
The large production, using the finest French artists, of propaganda paintings glorifying the exploits of Napoleon was matched by works, showing both victories and losses, from the anti-Napoleonic alliance by artists such as Goya and J.M.W. Turner. Théodore Géricault's "The Raft of the Medusa" (1818–1819) was a sensation, appearing to update the history painting for the 19th century, and showing anonymous figures famous only for being victims of what was then a famous and controversial disaster at sea. Conveniently, their clothes had been worn away to classical-seeming rags by the point the painting depicts. At the same time, the demand for traditional large religious history paintings very largely fell away.
In the mid-nineteenth century there arose a style known as historicism, which marked a formal imitation of historical styles and/or artists. Another development in the nineteenth century was the treatment of historical subjects, often on a large scale, with the values of genre painting, the depiction of scenes of everyday life, and anecdote. Grand depictions of events of great public importance were supplemented with scenes depicting more personal incidents in the lives of the great, or of scenes centred on unnamed figures involved in historical events, as in the Troubadour style. At the same time scenes of ordinary life with moral, political or satirical content became often the main vehicle for expressive interplay between figures in painting, whether given a modern or historical setting.
By the later 19th century, history painting was often explicitly rejected by avant-garde movements such as the Impressionists (except for Édouard Manet) and the Symbolists, and according to one recent writer "Modernism was to a considerable extent built upon the rejection of History Painting... All other genres are deemed capable of entering, in one form or another, the 'pantheon' of modernity considered, but History Painting is excluded".
Initially, "history painting" and "historical painting" were used interchangeably in English, as when Sir Joshua Reynolds in his fourth "Discourse" uses both indiscriminately to cover "history painting", while saying "...it ought to be called poetical, as in reality it is", reflecting the French term "peinture historique", one equivalent of "history painting". The terms began to separate in the 19th century, with "historical painting" becoming a sub-group of "history painting" restricted to subjects taken from history in its normal sense. In 1853 John Ruskin asked his audience: "What do you at present "mean" by historical painting? Now-a-days it means the endeavour, by the power of imagination, to portray some historical event of past days." So for example Harold Wethey's three-volume catalogue of the paintings of Titian (Phaidon, 1969–75) is divided between "Religious Paintings", "Portraits", and "Mythological and Historical Paintings", though both volumes I and III cover what is included in the term "History Paintings". This distinction is useful but is by no means generally observed, and the terms are still often used in a confusing manner. Because of the potential for confusion modern academic writing tends to avoid the phrase "historical painting", talking instead of "historical subject matter" in history painting, but where the phrase is still used in contemporary scholarship it will normally mean the painting of subjects from history, very often in the 19th century. "Historical painting" may also be used, especially in discussion of painting techniques in conservation studies, to mean "old", as opposed to modern or recent painting.
In 19th-century British writing on art the terms "subject painting" or "anecdotic" painting were often used for works in a line of development going back to William Hogarth of monoscenic depictions of crucial moments in an implied narrative with unidentified characters, such as William Holman Hunt's 1853 painting "The Awakening Conscience" or Augustus Egg's "Past and Present", a set of three paintings, updating sets by Hogarth such as "Marriage à-la-mode".
History painting was the dominant form of academic painting in the various national academies in the 18th century, and for most of the 19th, and increasingly historical subjects dominated. During the Revolutionary and Napoleonic periods the heroic treatment of contemporary history in a frankly propagandistic fashion by Antoine-Jean, Baron Gros, Jacques-Louis David, Carle Vernet and others was supported by the French state, but after the fall of Napoleon in 1815 the succeeding French governments were not regarded as suitable subjects for heroic treatment and many artists retreated further into the past to find subjects, though in Britain the victories of the Napoleonic Wars were mostly depicted only after they were over. Another path was to choose contemporary subjects that were oppositional to government either at home or abroad, and many of what were arguably the last great generation of history paintings were protests at contemporary episodes of repression or outrages at home or abroad: Goya's "The Third of May 1808" (1814), Théodore Géricault's "The Raft of the Medusa" (1818–19), Eugène Delacroix's "The Massacre at Chios" (1824) and "Liberty Leading the People" (1830). These were heroic, but showed heroic suffering by ordinary civilians.
Romantic artists such as Géricault and Delacroix, and those from other movements such as the English Pre-Raphaelite Brotherhood continued to regard history painting as the ideal for their most ambitious works. Others such as Jan Matejko in Poland, Vasily Surikov in Russia, José Moreno Carbonero in Spain and Paul Delaroche in France became specialized painters of large historical subjects. The "style troubadour" ("troubadour style") was a somewhat derisive French term for earlier paintings of medieval and Renaissance scenes, which were often small and depicting moments of anecdote rather than drama; Ingres, Richard Parkes Bonington and Henri Fradelle painted such works. Sir Roy Strong calls this type of work the "Intimate Romantic", and in French it was known as the "peinture de genre historique" or "peinture anecdotique" ("historical genre painting" or "anecdotal painting").
Church commissions for large group scenes from the Bible had greatly declined, and historical painting became very significant. Especially in the early 19th century, much historical painting depicted specific moments from historical literature, with the novels of Sir Walter Scott a particular favourite, in France and other European countries as much as in Great Britain. By the middle of the century medieval scenes were expected to be very carefully researched, using the work of historians of costume, architecture and all the elements of decor that were becoming available. An example of this is the extensive research into Byzantine architecture, clothing and decoration made in Parisian museums and libraries by Moreno Carbonero for his masterwork "The Entry of Roger de Flor in Constantinople". The provision of examples and expertise for artists, as well as for revivalist industrial designers, was one of the motivations for the establishment of museums like the Victoria and Albert Museum in London.
New techniques of printmaking such as the chromolithograph made good quality colour print reproductions both relatively cheap and very widely accessible, and also hugely profitable for artist and publisher, as the sales were so large. Historical painting often had a close relationship with Nationalism, and painters like Matejko in Poland could play an important role in fixing the prevailing historical narrative of national history in the popular mind. In France, "L'art Pompier" ("Fireman art") was a derisory term for official academic historical painting, and in a final phase, "History painting of a debased sort, scenes of brutality and terror, purporting to illustrate episodes from Roman and Moorish history, were Salon sensations. On the overcrowded walls of the exhibition galleries, the paintings that shouted loudest got the attention". Orientalist painting was an alternative genre that offered similar exotic costumes and decor, and at least as much opportunity to depict sex and violence.
Hyperbola
In mathematics, a hyperbola (plural "hyperbolas" or "hyperbolae") is a type of smooth curve lying in a plane, defined by its geometric properties or by equations for which it is the solution set. A hyperbola has two pieces, called connected components or branches, that are mirror images of each other and resemble two infinite bows. The hyperbola is one of the three kinds of conic section, formed by the intersection of a plane and a double cone. (The other conic sections are the parabola and the ellipse. A circle is a special case of an ellipse.) If the plane intersects both halves of the double cone but does not pass through the apex of the cones, then the conic is a hyperbola.
Hyperbolas arise in many ways: as the graph of a reciprocal function, as the path traced by the shadow of a sundial's tip, as the shape of an open orbit (as distinct from a closed elliptical orbit), as the scattering trajectory of a particle under a repulsive force, and so on.
Each branch of the hyperbola has two arms which become straighter (lower curvature) further out from the center of the hyperbola. Diagonally opposite arms, one from each branch, tend in the limit to a common line, called the asymptote of those two arms. So there are two asymptotes, whose intersection is at the center of symmetry of the hyperbola, which can be thought of as the mirror point about which each branch reflects to form the other branch. In the case of the curve formula_1 the asymptotes are the two coordinate axes.
Hyperbolas share many of the ellipses' analytical properties such as eccentricity, focus, and directrix. Typically the correspondence can be made with nothing more than a change of sign in some term. Many other mathematical objects have their origin in the hyperbola, such as hyperbolic paraboloids (saddle surfaces), hyperboloids ("wastebaskets"), hyperbolic geometry (Lobachevsky's celebrated non-Euclidean geometry), hyperbolic functions (sinh, cosh, tanh, etc.), and gyrovector spaces (a geometry proposed for use in both relativity and quantum mechanics which is not Euclidean).
The word "hyperbola" derives from the Greek ὑπερβολή, meaning "over-thrown" or "excessive", from which the English term hyperbole also derives. Hyperbolae were discovered by Menaechmus in his investigations of the problem of doubling the cube, but were then called sections of obtuse cones. The term hyperbola is believed to have been coined by Apollonius of Perga (c. 262–c. 190 BC) in his definitive work on the conic sections, the "Conics".
The names of the other two general conic sections, the ellipse and the parabola, derive from the corresponding Greek words for "deficient" and "applied"; all three names are borrowed from earlier Pythagorean terminology which referred to a comparison of the side of rectangles of fixed area with a given line segment. The rectangle could be "applied" to the segment (meaning, have an equal length), be shorter than the segment or exceed the segment.
A hyperbola can be defined geometrically as a set of points (locus of points) in the Euclidean plane:
The midpoint formula_8 of the line segment joining the foci is called the "center" of the hyperbola. The line through the foci is called the "major axis". It contains the "vertices" formula_9, which have distance formula_10 to the center. The distance formula_11 of the foci to the center is called the "focal distance" or "linear eccentricity". The quotient formula_12 is the "eccentricity" formula_13.
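As a numerical illustration of these quantities (the function name is ours, and the relation c² = a² + b², made explicit in the canonical-form derivation below, is assumed):

```python
import math

# Illustrative sketch: linear eccentricity c and eccentricity e = c/a of an
# east-west-opening hyperbola with semi-axes a, b, assuming c^2 = a^2 + b^2.
def hyperbola_parameters(a, b):
    c = math.sqrt(a * a + b * b)  # linear eccentricity: focus-to-center distance
    e = c / a                     # eccentricity, always greater than 1
    return c, e

c, e = hyperbola_parameters(3.0, 4.0)  # c = 5.0, e = 5/3
```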
The equation formula_14 can be viewed in a different way (see diagram):
If formula_15 is the circle with midpoint formula_16 and radius formula_17, then the distance of a point formula_3 of the right branch to the circle formula_15 equals the distance to the focus formula_20:
formula_15 is called the "circular directrix" (related to focus formula_16) of the hyperbola. In order to get the left branch of the hyperbola, one has to use the circular directrix related to formula_20. This property should not be confused with the definition of a hyperbola with the help of a directrix (line) below.
If Cartesian coordinates are introduced such that the origin is the center of the hyperbola and the "x"-axis is the major axis, then the hyperbola is called "east-west-opening" and
For an arbitrary point formula_27 the distance to the focus formula_28 is
formula_29 and to the second focus formula_30. Hence the point formula_27 is on the hyperbola if the following condition is fulfilled
Remove the square roots by suitable squarings and use the relation formula_33 to obtain the equation of the hyperbola:
This equation is called the canonical form of a hyperbola, because any hyperbola, regardless of its orientation relative to the Cartesian axes and regardless of the location of its center, can be transformed to this form by a change of variables, giving a hyperbola that is congruent to the original (see below).
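A minimal membership test against the canonical form (the helper name and tolerance are ours, for illustration only):

```python
# Illustrative sketch: test whether (x, y) satisfies the canonical equation
# x^2/a^2 - y^2/b^2 = 1 up to a numerical tolerance.
def on_hyperbola(x, y, a, b, tol=1e-9):
    return abs((x / a) ** 2 - (y / b) ** 2 - 1.0) < tol

# The vertices (±a, 0) lie on the curve; the points (0, ±b) do not.
print(on_hyperbola(2.0, 0.0, 2.0, 1.0))  # True
print(on_hyperbola(0.0, 1.0, 2.0, 1.0))  # False
```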
The axes of symmetry or "principal axes" are the "transverse axis" (containing the segment of length 2"a" with endpoints at the vertices) and the "conjugate axis" (containing the segment of length 2"b" perpendicular to the transverse axis and with midpoint at the hyperbola's center). As opposed to an ellipse, a hyperbola has only two vertices: formula_35. The two points formula_36 on the conjugate axis are "not" on the hyperbola.
It follows from the equation that the hyperbola is "symmetric" with respect to both of the coordinate axes and hence symmetric with respect to the origin.
For a hyperbola in the above canonical form, the eccentricity is given by
Two hyperbolas are geometrically similar to each other – meaning that they have the same shape, so that one can be transformed into the other by rigid left and right movements, rotation, taking a mirror image, and scaling (magnification) – if and only if they have the same eccentricity.
Solving the equation (above) of the hyperbola for formula_38 yields
It follows from this that the hyperbola approaches the two lines
for large values of formula_41. These two lines intersect at the center (origin) and are called "asymptotes" of the hyperbola formula_42
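The approach of an arm to its asymptote, here the standard line y = (b/a)x of the canonical hyperbola, can be observed numerically (variable names and sample values are ours):

```python
import math

a, b = 2.0, 1.0  # illustrative semi-axes

def upper_branch(x):
    # upper-right arm of x^2/a^2 - y^2/b^2 = 1, valid for x >= a
    return b * math.sqrt(x * x / (a * a) - 1.0)

def asymptote(x):
    # the standard asymptote y = (b/a) x of this hyperbola
    return (b / a) * x

# the vertical gap between arm and asymptote shrinks toward 0
gaps = [asymptote(x) - upper_branch(x) for x in (10.0, 100.0, 1000.0)]
```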
With the help of the second figure one can see that
From the Hesse normal form formula_45 of the asymptotes and the equation of the hyperbola one gets:
From the equation formula_49 of the hyperbola (above) one can derive:
In addition, from (2) above it can be shown that
The length of the chord through one of the foci, perpendicular to the major axis of the hyperbola, is called the "latus rectum". One half of it is the "semi-latus rectum" formula_54. A calculation shows
The semi-latus rectum formula_54 may also be viewed as the "radius of curvature" at the vertices.
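The closed form b²/a of the semi-latus rectum in the canonical setup can be confirmed by evaluating the curve directly above a focus (a sketch with illustrative numbers):

```python
import math

a, b = 2.0, 3.0
c = math.sqrt(a * a + b * b)            # focus at (c, 0)

# half-length of the focal chord perpendicular to the major axis:
y_at_focus = b * math.sqrt(c * c / (a * a) - 1.0)
semi_latus_rectum = b * b / a           # the standard closed form
```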
The simplest way to determine the equation of the tangent at a point formula_57 is to implicitly differentiate the equation formula_58 of the hyperbola. Denoting "dy/dx" as "y′", this produces
With respect to formula_60, the equation of the tangent at point formula_57 is
A particular tangent line distinguishes the hyperbola from the other conic sections. Let "f" be the distance from the vertex "V" (on both the hyperbola and its axis through the two foci) to the nearer focus. Then the distance, along a line perpendicular to that axis, from that focus to a point P on the hyperbola is greater than 2"f". The tangent to the hyperbola at P intersects that axis at point Q at an angle ∠PQV of greater than 45°.
In the case formula_63 the hyperbola is called "rectangular" (or "equilateral"), because its asymptotes intersect rectangularly (that is, are perpendicular). For this case, the linear eccentricity is formula_64, the eccentricity formula_65 and the semi-latus rectum formula_66.
Using the hyperbolic sine and cosine functions formula_67, a parametric representation of the hyperbola formula_58 can be obtained, which is similar to the parametric representation of an ellipse:
which satisfies the Cartesian equation because formula_70
Further parametric representations are given in the section Parametric equations below.
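The hyperbolic-function parametrization of the right branch can be spot-checked as follows (a sketch; the sample values are ours):

```python
import math

a, b = 2.0, 1.0

def point(t):
    # right branch parametrized as (a cosh t, b sinh t)
    return a * math.cosh(t), b * math.sinh(t)

x, y = point(1.3)
# cosh^2 t - sinh^2 t = 1, so the Cartesian equation is satisfied:
lhs = (x / a) ** 2 - (y / b) ** 2
```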
Exchange formula_71 and formula_38 to obtain the equation of the conjugate hyperbola (see diagram):
Just as the trigonometric functions are defined in terms of the unit circle, so also the hyperbolic functions are defined in terms of the unit hyperbola, as shown in this diagram. In a unit circle, the angle (in radians) is equal to twice the area of the circular sector which that angle subtends. The analogous hyperbolic angle is likewise defined as twice the area of a hyperbolic sector.
Let formula_10 be twice the area between the formula_71 axis and a ray through the origin intersecting the unit hyperbola, and define formula_77 as the coordinates of the intersection point.
Then the area of the hyperbolic sector is the area of the triangle minus the curved region past the vertex at formula_78:
which simplifies to the area hyperbolic cosine
Solving for formula_71 yields the exponential form of the hyperbolic cosine:
From formula_83 one gets
and its inverse the area hyperbolic sine:
Other hyperbolic functions are defined according to the hyperbolic cosine and hyperbolic sine, so for example
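The logarithmic (exponential) forms of the area functions derived above can be checked against the standard library implementations (`math.acosh` and `math.asinh` are Python's built-ins):

```python
import math

def arcosh(x):
    # area hyperbolic cosine in its logarithmic form, x >= 1
    return math.log(x + math.sqrt(x * x - 1.0))

def arsinh(x):
    # area hyperbolic sine in its logarithmic form
    return math.log(x + math.sqrt(x * x + 1.0))
```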
If the "xy"-coordinate system is rotated about the origin by the angle formula_87 and new coordinates formula_88 are assigned, then formula_89.
The rectangular hyperbola formula_90 (whose semi-axes are equal) has the new equation formula_91.
Solving for formula_92 yields formula_93
Thus, in an "xy"-coordinate system the graph of a function formula_94 with equation
A rotation of the original hyperbola by formula_105 results in a rectangular hyperbola entirely in the second and fourth quadrants, with the same asymptotes, center, semi-latus rectum, radius of curvature at the vertices, linear eccentricity, and eccentricity as for the case of formula_87 rotation, with equation
Shifting the hyperbola with equation formula_111 so that the new center is formula_112, yields the new equation
and the new asymptotes are formula_114 and formula_115.
The shape parameters formula_116 remain unchanged.
The two lines at distance formula_117 and parallel to the minor axis are called directrices of the hyperbola (see diagram).
For an arbitrary point formula_3 of the hyperbola the quotient of the distance to one focus and to the corresponding directrix (see diagram) is equal to the eccentricity:
The proof for the pair formula_120 follows from the fact that formula_121 and formula_122 satisfy the equation
The second case is proven analogously.
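Numerically, the focus-directrix ratio can be verified for the canonical hyperbola, assuming (as in the usual setup) the directrices are the lines x = ±a²/c paired with the foci (±c, 0):

```python
import math

a, b = 2.0, 1.5
c = math.sqrt(a * a + b * b)
e = c / a
d = a * a / c                 # right directrix x = a^2/c (assumed standard form)

x = 3.7                       # any x >= a gives a point on the right branch
y = b * math.sqrt(x * x / (a * a) - 1.0)

dist_focus = math.hypot(x - c, y)       # distance to the right focus (c, 0)
dist_directrix = abs(x - d)             # distance to the right directrix
ratio = dist_focus / dist_directrix     # equals the eccentricity e
```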
The "inverse statement" is also true and can be used to define a hyperbola (in a manner similar to the definition of a parabola):
For any point formula_124 (focus), any line formula_125 (directrix) not through formula_124 and any real number formula_13 with formula_128 the set of points (locus of points), for which the quotient of the distances to the point and to the line is formula_13
Let formula_133 and assume formula_97 is a point on the curve.
The directrix formula_125 has equation formula_136. With formula_137, the relation formula_138 produces the equations
The substitution formula_141 yields
This is the equation of an "ellipse" (formula_143) or a "parabola" (formula_144) or a "hyperbola" (formula_145). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram).
If formula_145, introduce new parameters formula_147 so that
formula_148, and then the equation above becomes
which is the equation of a hyperbola with center formula_150, the "x"-axis as major axis and
the semi-major and semi-minor axes formula_147.
The intersection of an upright double cone by a plane not through the vertex with slope greater than the slope of the lines on the cone is a hyperbola (see diagram: red curve). In order to prove the defining property of a hyperbola (see above) one uses two Dandelin spheres formula_152, which are spheres that touch the cone along circles formula_153 , formula_154 and the intersecting (hyperbola) plane at points formula_20 and formula_16. It turns out: formula_157 are the "foci" of the hyperbola.
The definition of a hyperbola by its foci and its circular directrices (see above) can be used for drawing an arc of it with the help of pins, a string and a ruler:
(0) Choose the "foci" formula_5, the vertices formula_9 and one of the "circular directrices", for example formula_15 (circle with radius formula_17).
(1) A "ruler" is fixed at point formula_16, free to rotate around formula_16. Point formula_163 is marked at distance formula_17.
(2) A "string" with length formula_187 is prepared.
(3) One end of the string is pinned at point formula_161 on the ruler, the other end is pinned to point formula_20.
(4) Take a "pen" and hold the string tight to the edge of the ruler.
(5) "Rotating" the ruler around formula_16 prompts the pen to draw an arc of the right branch of the hyperbola, because of formula_191 (see the definition of a hyperbola by "circular directrices").
The tangent at a point formula_3 bisects the angle between the lines formula_193.
Let formula_194 be the point on the line formula_167 with the distance formula_17 to the focus formula_16 (see diagram, formula_10 is the semi-major axis of the hyperbola). Line formula_199 is the bisector of the angle between the lines formula_193. In order to prove that formula_199 is the tangent line at point formula_3, one checks that any point formula_203 on line formula_199 which is different from formula_3 cannot be on the hyperbola. Hence formula_199 has only point formula_3 in common with the hyperbola and is, therefore, the tangent at point formula_3.
From the diagram and the triangle inequality one recognizes that formula_209 holds, which means: formula_210. But if formula_203 is a point of the hyperbola, the difference should be formula_17.
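The bisection property can also be checked numerically: at a point P of the canonical hyperbola, the tangent direction makes equal angles with the lines from P to the two foci (an illustrative sketch; names and sample values are ours):

```python
import math

a, b = 2.0, 1.0
c = math.sqrt(a * a + b * b)            # foci at (±c, 0)

x0 = 3.0                                # point P on the right branch
y0 = b * math.sqrt(x0 * x0 / (a * a) - 1.0)

# tangent direction from implicit differentiation: dy/dx = b^2 x0 / (a^2 y0)
t = (1.0, b * b * x0 / (a * a * y0))

def angle_between(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    return math.acos(dot / (math.hypot(*u) * math.hypot(*v)))

to_f1 = (-c - x0, -y0)                  # direction P -> left focus
to_f2 = (c - x0, -y0)                   # direction P -> right focus

angle1 = angle_between(t, to_f1)
angle2 = angle_between(t, to_f2)        # equal: the tangent bisects the angle
```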
The midpoints of parallel chords of a hyperbola lie on a line through the center (see diagram).
The points of any chord may lie on different branches of the hyperbola.
The proof of the property on midpoints is best done for the hyperbola formula_213. Because any hyperbola is an affine image of the hyperbola formula_213 (see section below) and an affine transformation preserves parallelism and midpoints of line segments, the property is true for all hyperbolas:
For two points formula_215 of the hyperbola formula_213
For parallel chords the slope is constant and the midpoints of the parallel chords lie on the line formula_219
Consequence: for any pair of points formula_220 of a chord there exists a "skew reflection" with an axis (set of fixed points) passing through the center of the hyperbola, which exchanges the points formula_220 and leaves the hyperbola (as a whole) fixed. A skew reflection is a generalization of an ordinary reflection across a line formula_222, where all point-image pairs are on a line perpendicular to formula_222.
Because a skew reflection leaves the hyperbola fixed, the pair of asymptotes is fixed, too. Hence the midpoint formula_8 of a chord formula_225 divides the related line segment formula_226 between the asymptotes into halves, too. This means that formula_227. This property can be used for the construction of further points formula_203 of the hyperbola if a point formula_3 and the asymptotes are given.
If the chord degenerates into a "tangent", then the touching point divides the line segment between the asymptotes in two halves.
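For the rectangular hyperbola y = 1/x, whose asymptotes are the coordinate axes, this bisection of the tangent segment is easy to verify (a sketch; the point of tangency is arbitrary):

```python
x0 = 1.7
y0 = 1.0 / x0                 # point of tangency on y = 1/x

# tangent at (x0, 1/x0): y = -x/x0**2 + 2/x0
x_intercept = 2.0 * x0        # where the tangent meets the asymptote y = 0
y_intercept = 2.0 / x0        # where the tangent meets the asymptote x = 0

# midpoint of the segment cut off by the asymptotes is the point of tangency
midpoint = (x_intercept / 2.0, y_intercept / 2.0)
```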
The following method to construct single points of a hyperbola relies on the Steiner generation of a non-degenerate conic section:
For the generation of points of the hyperbola formula_237 one uses the pencils at the vertices formula_9. Let formula_239 be a point of the hyperbola and formula_240. The line segment formula_241 is divided into n equally-spaced segments and this division is projected parallel with the diagonal formula_176 as direction onto the line segment formula_243 (see diagram). The parallel projection is part of the projective mapping needed between the pencils at formula_244 and formula_245. The intersection points of any two related lines formula_246 and formula_247 are points of the uniquely defined hyperbola.
"Remark:" The subdivision could be extended beyond the points formula_161 and formula_163 in order to get more points, but the determination of the intersection points would become more inaccurate. A better idea is extending the points already constructed by symmetry (see animation).
"Remark:"
A hyperbola with equation formula_250 is uniquely determined by three points formula_251 with different "x"- and "y"-coordinates. A simple way to determine the shape parameters formula_252 uses the "inscribed angle theorem" for hyperbolas:
Analogous to the inscribed angle theorem for circles one gets the
Inscribed angle theorem for hyperbolas:
A consequence of the inscribed angle theorem for hyperbolas is the
3-point-form of a hyperbola's equation:
For a hyperbola formula_264 the intersection points of "orthogonal" tangents lie on the circle formula_265.
This circle is called the "orthoptic" of the given hyperbola.
The tangents may belong to points on different branches of the hyperbola.
In the case of formula_266 there are no pairs of orthogonal tangents.
Any hyperbola can be described in a suitable coordinate system by an equation formula_58. The equation of the tangent at a point formula_268 of the hyperbola is formula_269 If one allows point formula_268 to be an arbitrary point different from the origin, then
This relation between points and lines is a bijection.
The inverse function maps
Such a relation between points and lines generated by a conic is called pole-polar relation or just "polarity". The pole is the point, the polar the line. See Pole and polar.
By calculation one checks the following properties of the pole-polar relation of the hyperbola:
"Remarks:"
Pole-polar relations exist for ellipses and parabolas, too.
Another definition of a hyperbola uses affine transformations:
An affine transformation of the Euclidean plane has the form formula_289, where formula_161 is a regular matrix (its determinant is not 0) and formula_291 is an arbitrary vector. If formula_292 are the column vectors of the matrix formula_161, the unit hyperbola formula_294 is mapped onto the hyperbola
formula_291 is the center, formula_297 a point of the hyperbola and formula_298 a tangent vector at this point.
In general the vectors formula_292 are not perpendicular. That means, in general formula_300 are "not" the vertices of the hyperbola. But formula_301 point into the directions of the asymptotes. The tangent vector at point formula_302 is
Because at a vertex the tangent is perpendicular to the major axis of the hyperbola one gets the parameter formula_304 of a vertex from the equation
and hence from
which yields
The two "vertices" of the hyperbola are formula_309
Solving the parametric representation for formula_310 by Cramer's rule and using formula_311, one gets the implicit representation
The definition of a hyperbola in this section gives a parametric representation of an arbitrary hyperbola, even in space, if one allows formula_313 to be vectors in space.
Because the unit hyperbola formula_83 is affinely equivalent to the hyperbola formula_213, an arbitrary hyperbola can be considered as the affine image (see previous section) of the hyperbola formula_316
formula_318 is the center of the hyperbola, the vectors formula_319 have the directions of the asymptotes and formula_320 is a point of the hyperbola. The tangent vector is
At a vertex the tangent is perpendicular to the major axis. Hence
and the parameter of a vertex is
formula_324 is equivalent to formula_325 and formula_326 are the vertices of the hyperbola.
The following properties of a hyperbola are easily proven using the representation of a hyperbola introduced in this section.
The tangent vector can be rewritten by factorization:
This means that
This property provides a way to construct the tangent at a point on the hyperbola.
This property of a hyperbola is an affine version of the 3-point-degeneration of Pascal's theorem.
The area of the grey parallelogram formula_331 in the above diagram is
and hence independent of point formula_3. The last equation follows from a calculation for the case where formula_3 is a vertex and the hyperbola is in its canonical form formula_335.
For a hyperbola with parametric representation formula_336 (for simplicity the center is the origin) the following is true:
The simple proof is a consequence of the equation formula_339.
This property provides a possibility to construct points of a hyperbola if the asymptotes and one point are given.
This property of a hyperbola is an affine version of the 4-point-degeneration of Pascal's theorem.
For simplicity the center of the hyperbola may be the origin and the vectors formula_340 have equal length. If the last assumption is not fulfilled one can first apply a parameter transformation (see above) in order to make the assumption true. Hence formula_341 are the vertices, formula_342 span the minor axis and one gets formula_343 and formula_344.
For the intersection points of the tangent at point formula_345 with the asymptotes one gets the points
The "area" of the triangle formula_347 can be calculated by a 2x2-determinant:
(see rules for determinants).
formula_349 is the area of the rhombus generated by formula_340. The area of a rhombus is equal to one half of the product of its diagonals. The diagonals are the semi-axes formula_147 of the hyperbola. Hence:
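Numerically, with the tangent point written as (a cosh t, b sinh t), the tangent meets the asymptotes at (a·exp(t), b·exp(t)) and (a·exp(-t), -b·exp(-t)) (a standard computation), and the triangle's area comes out as a·b for every t:

```python
import math

a, b = 2.0, 1.5
t = 0.8                                   # any parameter of the tangent point

# intersections of the tangent at (a cosh t, b sinh t) with the asymptotes:
p1 = (a * math.exp(t), b * math.exp(t))
p2 = (a * math.exp(-t), -b * math.exp(-t))

# area of the triangle with vertices at the center, p1 and p2 (2x2 determinant)
area = 0.5 * abs(p1[0] * p2[1] - p2[0] * p1[1])   # equals a*b, independent of t
```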
For pole = focus:
The polar coordinates used most commonly for the hyperbola are defined relative to the Cartesian coordinate system that has its "origin in a focus" and its x-axis pointing towards the origin of the "canonical coordinate system" as illustrated in the first diagram.
In this case the angle formula_354 is called true anomaly.
Relative to this coordinate system one has that
and
For pole = center:
With polar coordinates relative to the "canonical coordinate system" (see second diagram)
one has that
For the right branch of the hyperbola the range of formula_358 is
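As a sketch of the focus-centered polar form, using the common "true anomaly" convention r = ℓ/(1 + e·cos φ) with semi-latus rectum ℓ = b²/a (the article's own formula images are not reproduced here, so this convention is an assumption), points generated this way land on the canonical curve:

```python
import math

a, b = 2.0, 1.5
c = math.sqrt(a * a + b * b)
e = c / a
ell = b * b / a                     # semi-latus rectum

def branch_point(phi):
    # polar form with the pole at the right focus, angle measured from the
    # direction of the nearest vertex (assumed convention)
    r = ell / (1.0 + e * math.cos(phi))
    # convert back to canonical Cartesian coordinates
    return c - r * math.cos(phi), r * math.sin(phi)

x, y = branch_point(0.7)
check = (x / a) ** 2 - (y / b) ** 2   # equals 1
```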
A hyperbola with equation formula_360 can be described by several parametric equations:
The reciprocation of a circle "B" in a circle "C" always yields a conic section such as a hyperbola. The process of "reciprocation in a circle "C"" consists of replacing every line and point in a geometrical figure with their corresponding pole and polar, respectively. The "pole" of a line is the inversion of its closest point to the circle "C", whereas the polar of a point is the converse, namely, a line whose closest point to "C" is the inversion of the point.
The eccentricity of the conic section obtained by reciprocation is the ratio of the distance between the two circles' centers to the radius "r" of the reciprocation circle "C". If B and C represent the points at the centers of the corresponding circles, then
Since the eccentricity of a hyperbola is always greater than one, the center B must lie outside of the reciprocating circle "C".
This definition implies that the hyperbola is both the locus of the poles of the tangent lines to the circle "B", as well as the envelope of the polar lines of the points on "B". Conversely, the circle "B" is the envelope of polars of points on the hyperbola, and the locus of poles of tangent lines to the hyperbola. Two tangent lines to "B" have no (finite) poles because they pass through the center C of the reciprocation circle "C"; the polars of the corresponding tangent points on "B" are the asymptotes of the hyperbola. The two branches of the hyperbola correspond to the two parts of the circle "B" that are separated by these tangent points.
A hyperbola can also be defined as a second-degree equation in the Cartesian coordinates ("x", "y") in the plane,
provided that the constants "A""xx", "A""xy", "A""yy", "B""x", "B""y", and "C" satisfy the determinant condition
This determinant is conventionally called the discriminant of the conic section.
A special case of a hyperbola—the "degenerate hyperbola" consisting of two intersecting lines—occurs when another determinant is zero:
This determinant Δ is sometimes called the discriminant of the conic section.
Given the above general parametrization of the hyperbola in Cartesian coordinates, the eccentricity can be found using the formula in Conic section#Eccentricity in terms of parameters of the quadratic form.
The center ("x""c", "y""c") of the hyperbola may be determined from the formulae
In terms of new coordinates, and , the defining equation of the hyperbola can be written
The principal axes of the hyperbola make an angle "φ" with the positive "x"-axis that is given by
Rotating the coordinate axes so that the "x"-axis is aligned with the transverse axis brings the equation into its canonical form
The major and minor semiaxes "a" and "b" are defined by the equations
where λ1 and λ2 are the roots of the quadratic equation
For comparison, the corresponding equation for a degenerate hyperbola (consisting of two intersecting lines) is
The tangent line to a given point ("x"0, "y"0) on the hyperbola is defined by the equation
where "E", "F" and "G" are defined by
The normal line to the hyperbola at the same point is given by the equation
The normal line is perpendicular to the tangent line, and both pass through the same point ("x"0, "y"0).
From the equation
the left focus is formula_392 and the right focus is formula_393 where is the eccentricity. Denote the distances from a point ("x, y") to the left and right foci as formula_394 and formula_395 For a point on the right branch,
and for a point on the left branch,
This can be proved as follows:
If ("x","y") is a point on the hyperbola the distance to the left focal point is
To the right focal point the distance is
If ("x,y") is a point on the right branch of the hyperbola then formula_400 and
Subtracting these equations one gets
If ("x,y") is a point on the left branch of the hyperbola then formula_404 and
Subtracting these equations one gets
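The standard closed forms for the focal distances on the right branch (r = e·x + a to the left focus, r = e·x - a to the right focus) and the constant difference 2a can be confirmed numerically (illustrative values):

```python
import math

a, b = 3.0, 2.0
c = math.sqrt(a * a + b * b)
e = c / a                                 # eccentricity

x = 4.2                                   # point on the right branch
y = b * math.sqrt(x * x / (a * a) - 1.0)

r_left = math.hypot(x + c, y)             # distance to the left focus (-c, 0)
r_right = math.hypot(x - c, y)            # distance to the right focus (c, 0)
# closed forms on the right branch: r_left = e*x + a, r_right = e*x - a
difference = r_left - r_right             # equals 2a
```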
Besides providing a uniform description of circles, ellipses, parabolas, and hyperbolas, conic sections can also be understood as a natural model of the geometry of perspective in the case where the scene being viewed consists of circles, or more generally ellipses. The viewer is typically a camera or the human eye, and the image of the scene is a central projection onto an image plane; that is, all projection rays pass through a fixed point "O", the center. The lens plane is the plane through the lens "O" parallel to the image plane.
The image of a circle c is
These results can be understood if one recognizes that the projection process can be seen in two steps: 1) circle c and point "O" generate a cone which is 2) cut by the image plane, in order to generate the image.
One sees a hyperbola whenever one catches sight of a portion of a circle that is cut by one's lens plane. The inability to see very much of the arms of the visible branch, combined with the complete absence of the second branch, makes it virtually impossible for the human visual system to recognize the connection with hyperbolas.
The arc length of a hyperbola does not have a closed-form expression. The upper half of a hyperbola can be parameterized as
Then the integral giving the arc length formula_409 from formula_410 to formula_411 can be computed numerically:
After using the substitution formula_413, this can also be represented using the elliptic integral of the second kind with parameter formula_414:
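Since no elementary closed form exists, the arc length must be evaluated numerically. A sketch using Simpson's rule over the parameterization x = a cosh t, y = b sinh t (a convenience chosen here; the article's integral is stated in terms of x):

```python
import math

def hyperbola_arc_length(a, b, t0, t1, n=1000):
    """Arc length along x = a cosh t, y = b sinh t from t0 to t1, by
    Simpson's rule applied to ds/dt = sqrt(a^2 sinh^2 t + b^2 cosh^2 t)."""
    if n % 2:
        n += 1                        # Simpson's rule needs an even interval count
    h = (t1 - t0) / n
    speed = lambda t: math.hypot(a * math.sinh(t), b * math.cosh(t))
    total = speed(t0) + speed(t1)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * speed(t0 + i * h)
    return total * h / 3
```

The result is additive over subintervals and always exceeds the straight-line chord, which makes for simple sanity checks.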
Several other curves can be derived from the hyperbola by inversion, the so-called inverse curves of the hyperbola. If the center of inversion is chosen as the hyperbola's own center, the inverse curve is the lemniscate of Bernoulli; the lemniscate is also the envelope of circles centered on a rectangular hyperbola and passing through the origin. If the center of inversion is chosen at a focus or a vertex of the hyperbola, the resulting inverse curves are a limaçon or a strophoid, respectively.
A family of confocal hyperbolas is the basis of the system of elliptic coordinates in two dimensions. These hyperbolas are described by the equation
where the foci are located at a distance "c" from the origin on the "x"-axis, and where θ is the angle of the asymptotes with the "x"-axis. Every hyperbola in this family is orthogonal to every ellipse that shares the same foci. This orthogonality may be shown by a conformal map of the Cartesian coordinate system "w" = "z" + 1/"z", where "z" = "x" + "iy" are the original Cartesian coordinates, and "w" = "u" + "iv" are those after the transformation.
Other orthogonal two-dimensional coordinate systems involving hyperbolas may be obtained by other conformal mappings. For example, the mapping "w" = "z"2 transforms the Cartesian coordinate system into two families of orthogonal hyperbolas.
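The orthogonality produced by "w" = "z"² can be checked directly: writing "z" = "x" + "iy" gives u = x² − y² and v = 2xy, so the level curves of u and v are the two hyperbola families, and their gradients (which are normal to the level curves) are perpendicular at every point. A minimal sketch:

```python
def grad_u(x, y):
    """Gradient of u = Re(z^2) = x^2 - y^2; normal to the hyperbolas u = const."""
    return (2 * x, -2 * y)

def grad_v(x, y):
    """Gradient of v = Im(z^2) = 2 x y; normal to the hyperbolas v = const."""
    return (2 * y, 2 * x)

def dot(p, q):
    return p[0] * q[0] + p[1] * q[1]

# grad_u . grad_v = (2x)(2y) + (-2y)(2x) = 0 identically, so the two
# families of hyperbolas cross at right angles wherever z != 0.
```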
Hyperbolas may be seen in many sundials. On any given day, the sun revolves in a circle on the celestial sphere, and its rays striking the point of a sundial trace out a cone of light. The intersection of this cone with the horizontal plane of the ground forms a conic section. At most populated latitudes and at most times of the year, this conic section is a hyperbola. In practical terms, the shadow of the tip of a pole traces out a hyperbola on the ground over the course of a day (this path is called the "declination line"). The shape of this hyperbola varies with the geographical latitude and with the time of the year, since those factors affect the cone of the sun's rays relative to the horizon. The collection of such hyperbolas for a whole year at a given location was called a "pelekinon" by the Greeks, since it resembles a double-bladed axe.
A hyperbola is the basis for solving multilateration problems, the task of locating a point from the differences in its distances to given points, or, equivalently, the differences in arrival times of synchronized signals between the point and the given points. Such problems are important in navigation, particularly on water; a ship can locate its position from the difference in arrival times of signals from LORAN or GPS transmitters. Conversely, a homing beacon or any transmitter can be located by comparing the arrival times of its signals at two separate receiving stations; such techniques may be used to track objects and people. In particular, the set of possible positions of a point that has a distance difference of 2"a" from two given points is a hyperbola of vertex separation 2"a" whose foci are the two given points.
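As a sketch of that last statement (stations placed symmetrically on the x-axis for simplicity, an assumption made here for illustration): every point of the appropriate hyperbola branch reproduces the given range difference.

```python
import math

def distance_difference(p, s1, s2):
    """Range difference |p - s1| - |p - s2|; with synchronized signals this
    equals the arrival-time difference times the propagation speed."""
    return math.dist(p, s1) - math.dist(p, s2)

def tdoa_locus_point(c, diff, t):
    """A point of the locus {p : |p - s1| - |p - s2| = diff} for stations
    s1 = (-c, 0) and s2 = (c, 0). The locus is one hyperbola branch with
    vertex separation 2a = |diff| (requires |diff| < 2c)."""
    a = abs(diff) / 2
    b = math.sqrt(c * c - a * a)
    sign = 1 if diff > 0 else -1     # diff > 0 means p is closer to s2
    return (sign * a * math.cosh(t), b * math.sinh(t))
```

A real multilateration fix intersects two such branches from two station pairs; the sketch above only exhibits the single-pair locus.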
The path followed by any particle in the classical Kepler problem is a conic section. In particular, if the total energy "E" of the particle is greater than zero (that is, if the particle is unbound), the path of such a particle is a hyperbola. This property is useful in studying atomic and sub-atomic forces by scattering high-energy particles; for example, the Rutherford experiment demonstrated the existence of an atomic nucleus by examining the scattering of alpha particles from gold atoms. If the short-range nuclear interactions are ignored, the atomic nucleus and the alpha particle interact only by a repulsive Coulomb force, which satisfies the inverse square law requirement for a Kepler problem.
The hyperbolic trig function formula_417 appears as one solution to the Korteweg–de Vries equation which describes the motion of a soliton wave in a canal.
As shown first by Apollonius of Perga, a hyperbola can be used to trisect any angle, a well-studied problem of geometry. Given an angle, first draw a circle centered at its vertex O, which intersects the sides of the angle at points A and B. Next draw the line segment with endpoints A and B and its perpendicular bisector formula_418. Construct a hyperbola of eccentricity "e" = 2 with formula_418 as directrix and B as a focus. Let P be the upper intersection of the hyperbola with the circle. Angle POB trisects angle AOB.
To prove this, reflect the line segment OP about the line formula_418 obtaining the point P' as the image of P. Segment AP' has the same length as segment BP due to the reflection, while segment PP' has the same length as segment BP due to the eccentricity of the hyperbola. As OA, OP', OP and OB are all radii of the same circle (and so, have the same length), the triangles OAP', OPP' and OPB are all congruent. Therefore, the angle has been trisected, since 3×POB = AOB.
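The construction can be verified numerically: instead of drawing the hyperbola, search the circle for the point P satisfying the focus-directrix condition dist(P, B) = 2 dist(P, bisector of AB), which characterizes the eccentricity-2 hyperbola used above. A sketch (unit circle, root found by bisection; this is a numerical check, not the classical construction itself):

```python
import math

def trisect(angle):
    """Trisect an angle via the e = 2 focus-directrix property.

    On the unit circle centered at O, put A = (cos angle, sin angle) and
    B = (1, 0). The point P of the circle with
        dist(P, B) = 2 * dist(P, perpendicular bisector of AB)
    lies on the hyperbola of the construction, and angle POB = angle / 3."""
    A = (math.cos(angle), math.sin(angle))
    B = (1.0, 0.0)
    M = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)   # midpoint of AB
    d = (B[0] - A[0], B[1] - A[1])               # normal vector of the bisector
    nd = math.hypot(*d)

    def f(theta):
        P = (math.cos(theta), math.sin(theta))
        to_focus = math.dist(P, B)
        to_directrix = abs((P[0] - M[0]) * d[0] + (P[1] - M[1]) * d[1]) / nd
        return to_focus - 2 * to_directrix

    lo, hi = 1e-9, 0.9 * angle    # excludes the trivial solution P = A
    for _ in range(100):          # f < 0 below the root, > 0 above it
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)        # the trisected angle POB
```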
In portfolio theory, the locus of mean-variance efficient portfolios (called the efficient frontier) is the upper half of the east-opening branch of a hyperbola drawn with the portfolio return's standard deviation plotted horizontally and its expected value plotted vertically; according to this theory, all rational investors would choose a portfolio characterized by some point on this locus.
In biochemistry and pharmacology, the Hill equation and Hill-Langmuir equation respectively describe biological responses and the formation of protein–ligand complexes as functions of ligand concentration. They are both rectangular hyperbolae.
Hyperbolas appear as plane sections of the following quadrics:
Humayun
Nasir-ud-Din Muḥammad (6 March 1508 – 27 January 1556), better known by his regnal name Humayun, was the second emperor of the Mughal Empire, who ruled over territory in what is now Afghanistan, Pakistan, Northern India, and Bangladesh from 1530 to 1540 and again from 1555 to 1556. Like his father, Babur, he lost his kingdom early but regained it with the aid of the Safavid dynasty of Persia, with additional territory. At the time of his death in 1556, the Mughal Empire spanned almost one million square kilometres.
In December 1530, Humayun succeeded his father to the throne of Delhi as ruler of the Mughal territories in the Indian subcontinent. Humayun was an inexperienced ruler when he came to power, at the age of 22. His half-brother Kamran Mirza inherited Kabul and Kandahar, the northernmost parts of their father's empire. Kamran was to become a bitter rival of Humayun.
Humayun lost Mughal territories to Sher Shah Suri, but regained them 15 years later with Safavid aid. Humayun's return from Persia was accompanied by a large retinue of Persian noblemen and signalled an important change in Mughal court culture. The Central Asian origins of the dynasty were largely overshadowed by the influences of Persian art, architecture, language, and literature. There are many stone carvings and thousands of Persian manuscripts in India dating from the time of Humayun.
Subsequently, Humayun further expanded the Empire in a very short time, leaving a substantial legacy for his son, Akbar.
The decision of Babur to divide the territories of his empire between two of his sons was unusual in India, although it had been a common Central Asian practice since the time of Genghis Khan. Unlike most monarchies, which practised primogeniture, the Timurids followed the example of Genghis and did not leave an entire kingdom to the eldest son. Although under that system only a Chinggisid could claim sovereignty and khanal authority, any male Chinggisid within a given sub-branch had an equal right to the throne (though the Timurids were not Chinggisid in their paternal ancestry). While Genghis Khan's Empire had been peacefully divided between his sons upon his death, almost every Chinggisid succession since had resulted in fratricide.
Timur himself had divided his territories among Pir Muhammad, Miran Shah, Khalil Sultan and Shah Rukh, which resulted in inter-family warfare. Upon Babur's death, Humayun's territories were the least secure. He had ruled only four years, and not all "umarah" (nobles) viewed Humayun as the rightful ruler. Indeed, earlier, when Babur had become ill, some of the nobles had tried to install his brother-in-law, Mahdi Khwaja, as ruler. Although this attempt failed, it was a sign of problems to come.
When Humayun came to the throne of the Mughal Empire, several of his brothers revolted against him. Another brother, Khalil Mirza (1509–1530), supported Humayun but was assassinated. The Emperor commenced construction of a tomb for his brother in 1538, but this was not yet finished when Humayun was forced to flee to Persia. Sher Shah destroyed the structure and no further work was done on it after Humayun's restoration.
Humayun had two major rivals for his lands: Sultan Bahadur of Gujarat to the southwest and Sher Shah Suri (Sher Khan), settled along the river Ganges in Bihar to the east. Humayun's first campaign was to confront Sher Shah Suri. Halfway through this offensive Humayun had to abandon it and concentrate on Gujarat, where a threat from Ahmed Shah had to be met. Humayun was victorious, annexing Gujarat, Malwa, Champaner and the great fort of Mandu.
During the first five years of Humayun's reign, Bahadur and Sher Khan extended their rule, although Sultan Bahadur faced pressure in the east from sporadic conflicts with the Portuguese. While the Mughals had obtained firearms via the Ottoman Empire, Bahadur's Gujarat had acquired them through a series of contracts drawn up with the Portuguese, allowing the Portuguese to establish a strategic foothold in north western India.
In 1535 Humayun was made aware that the Sultan of Gujarat was planning an assault on the Mughal territories with Portuguese aid. Humayun gathered an army and marched on Bahadur. Within a month he had captured the forts of Mandu and Champaner. However, instead of pressing his attack, Humayun ceased the campaign and consolidated his newly conquered territory. Sultan Bahadur, meanwhile, escaped and took up refuge with the Portuguese.
Shortly after Humayun had marched on Gujarat, Sher Shah Suri saw an opportunity to wrest control of Agra from the Mughals. He began to gather his army together hoping for a rapid and decisive siege of the Mughal capital. Upon hearing this alarming news, Humayun quickly marched his troops back to Agra allowing Bahadur to easily regain control of the territories Humayun had recently taken. In February 1537, however, Bahadur was killed when a botched plan to kidnap the Portuguese viceroy ended in a fire-fight that the Sultan lost.
Whilst Humayun succeeded in protecting Agra from Sher Shah, the second city of the Empire, Gaur, the capital of the "vilayat" of Bengal, was sacked. Humayun's troops had been delayed while trying to take Chunar, a fort occupied by Sher Shah's son, in order to protect his troops from an attack from the rear. The stores of grain at Gaur, the largest in the empire, were emptied, and Humayun arrived to see corpses littering the roads. The vast wealth of Bengal was depleted and carried east, giving Sher Shah a substantial war chest.
Sher Shah withdrew to the east, but Humayun did not follow: instead he "shut himself up for a considerable time in his Harem, and indulged himself in every kind of luxury". Hindal, Humayun's 19-year-old brother, had agreed to aid him in this battle and protect the rear from attack, but he abandoned his position and withdrew to Agra, where he decreed himself acting emperor. When Humayun sent the grand "Mufti", Sheikh Buhlul, to reason with him, the Sheikh was killed. Further provoking the rebellion, Hindal ordered that the "Khutba", or sermon, in the main mosque be read in his own name.
Humayun's other brother, Kamran Mirza, marched from his territories in the Punjab, ostensibly to aid Humayun. However, his return home had treacherous motives as he intended to stake a claim for Humayun's apparently collapsing empire. He brokered a deal with Hindal providing that his brother would cease all acts of disloyalty in return for a share in the new empire, which Kamran would create once Humayun was deposed.
In June 1539 Sher Shah met Humayun in the Battle of Chausa on the banks of the Ganges, near Buxar. This was to become an entrenched battle in which both sides spent a lot of time digging themselves into positions. The major part of the Mughal army, the artillery, was now immobile, and Humayun decided to engage in some diplomacy using Muhammad Aziz as ambassador. Humayun agreed to allow Sher Shah to rule over Bengal and Bihar, but only as provinces granted to him by his Emperor, Humayun, falling short of outright sovereignty. The two rulers also struck a bargain in order to save face: Humayun's troops would charge those of Sher Shah, whose forces would then retreat in feigned fear. Thus honour would, supposedly, be satisfied.
Once the Army of Humayun had made its charge and Sher Shah's troops made their agreed-upon retreat, the Mughal troops relaxed their defensive preparations and returned to their entrenchments without posting a proper guard. Observing the Mughals' vulnerability, Sher Shah reneged on his earlier agreement. That very night, his army approached the Mughal camp and finding the Mughal troops unprepared with a majority asleep, they advanced and killed most of them. The Emperor survived by swimming across the Ganges using an air-filled "water skin", and quietly returned to Agra. Humayun was assisted across the Ganges by Shams al-Din Muhammad.
When Humayun returned to Agra, he found that all three of his brothers were present. Humayun once again not only pardoned his brothers for plotting against him, but even forgave Hindal for his outright betrayal. With his armies travelling at a leisurely pace, Sher Shah was gradually drawing closer and closer to Agra. This was a serious threat to the entire family, but Humayun and Kamran squabbled over how to proceed. Kamran withdrew after Humayun refused to make a quick attack on the approaching enemy, instead opting to build a larger army under his own name.
When Kamran returned to Lahore, Humayun, with his other brothers Askari and Hindal, marched to meet Sher Shah east of Agra at the battle of Kannauj on 17 May 1540. Humayun was soundly defeated. He retreated to Agra, pursued by Sher Shah, and thence through Delhi to Lahore. Sher Shah's founding of the short-lived Sur Empire, with its capital at Delhi, resulted in Humayun's exile for 15 years in the court of Shah Tahmasp I.
The four brothers were united in Lahore, but every day they were informed that Sher Shah was getting closer and closer. When he reached Sirhind, Humayun sent an ambassador carrying the message "I have left you the whole of Hindustan [i.e. the lands to the East of Punjab, comprising most of the Ganges Valley]. Leave Lahore alone, and let Sirhind be a boundary between you and me." Sher Shah, however, replied "I have left you Kabul. You should go there." Kabul was the capital of the empire of Humayun's brother Kamran, who was far from willing to hand over any of his territories to his brother. Instead, Kamran approached Sher Shah and proposed that he actually revolt against his brother and side with Sher Shah in return for most of the Punjab. Sher Shah dismissed his help, believing it not to be required, though word soon spread to Lahore about the treacherous proposal, and Humayun was urged to make an example of Kamran and kill him. Humayun refused, citing the last words of his father, Babur, "Do nothing against your brothers, even though they may deserve it."
Humayun decided it would be wise to withdraw still further. He and his army rode out through and across the Thar Desert after the Hindu ruler Rao Maldeo Rathore allied with Sher Shah Suri against the Mughal Empire. In many accounts Humayun mentions how he and his pregnant wife had to retrace their steps through the desert at the hottest time of year. Their rations were low, and they had little to eat; even drinking water was a major problem in the desert. When Hamida Bano's horse died, no one would lend the Queen (who was now eight months pregnant) a horse, so Humayun lent her his own, riding a camel for six kilometres (four miles) until Khaled Beg offered him his mount. Humayun was later to describe this incident as the lowest point in his life. Humayun asked that his brothers join him as he fell back into Sindh. The previously rebellious Hindal Mirza remained loyal and was ordered to join his brothers in Kandahar, but Kamran Mirza and Askari Mirza instead decided to head to the relative peace of Kabul. This was to be a definitive schism in the family. Humayun headed for Sindh because he expected aid from the Emir of Sindh, Hussein Umrani, whom he had appointed and who owed him his allegiance. Also, his wife Hamida hailed from Sindh; she was the daughter of a prestigious "pir" family (a "pir" is an Islamic religious guide) of Persian heritage long settled in Sindh. En route to the Emir's court, Humayun had to break journey because his pregnant wife Hamida was unable to travel further. Humayun sought refuge with the Hindu ruler of the oasis town of Amarkot (now part of Sindh province).
Rana Prasad Rao of Amarkot duly welcomed Humayun into his home and sheltered the refugees for several months. Here, in the household of a Hindu Rajput nobleman, Humayun's wife Hamida Bano, daughter of a Sindhi family, gave birth to the future Emperor Akbar on 15 October 1542. The date of birth is well established because Humayun consulted his astronomer to utilise the astrolabe and check the location of the planets. The infant was the long-awaited heir-apparent to the 34-year-old Humayun and the answer of many prayers. Shortly after the birth, Humayun and his party left Amarkot for Sindh, leaving Akbar behind, who was not ready for the grueling journey ahead in his infancy. He was later adopted by Askari Mirza.
For a change, Humayun was not deceived in the character of the man on whom he had pinned his hopes. Emir Hussein Umrani, ruler of Sindh, welcomed Humayun's presence and was loyal to him, just as he had been loyal to Babur against the renegade Arghuns. While in Sindh, Humayun, alongside Emir Hussein Umrani, gathered horses and weapons and formed new alliances that helped regain lost territories. Eventually Humayun gathered hundreds of Sindhi and Baloch tribesmen alongside his Mughals and marched towards Kandahar and later Kabul; thousands more gathered by his side as Humayun continually declared himself the rightful Timurid heir of the first Mughal Emperor, Babur.
Humayun set out from Sindh with 300 camels (mostly wild) and 2,000 loads of grain, crossing the Indus River on 11 July 1543 to join his brothers in Kandahar, with the ambition of regaining the Mughal Empire and overthrowing the Suri dynasty. Among the tribes that had sworn allegiance to Humayun were the Magsi, Rind and many others.
In Kamran Mirza's territory, Hindal Mirza had been placed under house arrest in Kabul after refusing to have the "Khutba" recited in Kamran Mirza's name. His other brother, Askari Mirza, was now ordered to gather an army and march on Humayun. When Humayun received word of the approaching hostile army he decided against facing them, and instead sought refuge elsewhere. Akbar was left behind in camp close to Kandahar, as it was December, too cold and dangerous to include the 14-month-old toddler in the march through the mountains of the Hindu Kush. Askari Mirza took Akbar in, leaving the wives of Kamran and Askari Mirza to raise him. The Akbarnama specifies Kamran Mirza's wife, Sultan Begam.
Once again Humayun turned toward Kandahar where his brother Kamran Mirza was in power, but he received no help and had to seek refuge with the Shah of Persia.
Humayun fled to the refuge of the Safavid Empire in Persia, marching with 40 men, his wife Bega Begum, and her companion through mountains and valleys. Among other trials, the Imperial party were forced to live on horse meat boiled in the soldiers' helmets. These indignities continued during the month it took them to reach Herat; after their arrival, however, they were reintroduced to the finer things in life. Upon entering the city his army was greeted with an armed escort, and they were treated to lavish food and clothing. They were given fine accommodations and the roads were cleared and cleaned before them. Shah Tahmasp, unlike Humayun's own family, actually welcomed the Mughal, and treated him as a royal visitor. Here Humayun went sightseeing and was amazed at the Persian artwork and architecture he saw: much of this was the work of the Timurid Sultan Husayn Bayqarah and his ancestor, princess Gauhar Shad, thus he was able to admire the work of his relatives and ancestors at first hand.
He was introduced to the work of the Persian miniaturists, and Kamaleddin Behzad had two of his pupils join Humayun in his court. Humayun was amazed at their work and asked if they would work for him if he were to regain the sovereignty of Hindustan: they agreed. With so much going on Humayun did not even meet the Shah until July, some six months after his arrival in Persia. After a lengthy journey from Herat the two met in Qazvin where a large feast and parties were held for the event. The meeting of the two monarchs is depicted in a famous wall-painting in the Chehel Sotoun (Forty Columns) palace in Esfahan.
The Shah urged that Humayun convert from Sunni to Shia Islam, and Humayun eventually accepted, in order to keep himself and several hundred followers alive. Although the Mughals initially objected to their conversion, they knew that with this outward acceptance of Shi'ism, Shah Tahmasp was eventually prepared to offer Humayun more substantial support. When Humayun's brother, Kamran Mirza, offered to cede Kandahar to the Persians in exchange for Humayun, dead or alive, Shah Tahmasp refused. Instead the Shah staged a celebration for Humayun, with 300 tents, an imperial Persian carpet, 12 musical bands and "meat of all kinds". Here the Shah announced that all this, and 12,000 elite cavalry, were his to lead an attack on his brother Kamran. All that Shah Tahmasp asked for was that, if Humayun's forces were victorious, Kandahar would be his.
With this Persian Safavid aid Humayun took Kandahar from Askari Mirza after a two-week siege. He noted how the nobles who had served Askari Mirza quickly flocked to serve him, "in very truth the greater part of the inhabitants of the world are like a flock of sheep, wherever one goes the others immediately follow". Kandahar was, as agreed, given to the Shah of Persia who sent his infant son, Murad, as the Viceroy. However, the baby soon died and Humayun thought himself strong enough to assume power.
Humayun now prepared to take Kabul, ruled by his brother Kamran Mirza. In the end, there was no actual siege. Kamran Mirza was detested as a leader and as Humayun's Persian army approached the city hundreds of Kamran Mirza's troops changed sides, flocking to join Humayun and swelling his ranks. Kamran Mirza absconded and began building an army outside the city. In November 1545, Hamida and Humayun were reunited with their son Akbar, and held a huge feast. They also held another, larger, feast in the child's honour when he was circumcised.
However, while Humayun had a larger army than his brother and had the upper hand, on two occasions his poor military judgement allowed Kamran Mirza to retake Kabul and Kandahar, forcing Humayun to mount further campaigns for their recapture. He may have been aided in this by his reputation for leniency towards the troops who had defended the cities against him, as opposed to Kamran Mirza, whose brief periods of possession were marked by atrocities against the inhabitants who, he supposed, had helped his brother.
His youngest brother, Hindal Mirza, formerly the most disloyal of his siblings, died fighting on his behalf. His brother Askari Mirza was shackled in chains at the behest of his nobles and aides. He was allowed to go on Hajj, and died en route in the desert outside Damascus.
Humayun's other brother, Kamran Mirza, had repeatedly sought to have Humayun killed. In 1552 Kamran Mirza attempted to make a pact with Islam Shah, Sher Shah's successor, but was apprehended by a Gakhar. The Gakhars were one of the minority of tribal groups who had consistently remained loyal to their oath to the Mughals. Sultan Adam of the Gakhars handed Kamran Mirza over to Humayun. Humayun was inclined to forgive his brother. However he was warned that allowing Kamran Mirza's repeated acts of treachery to go unpunished could foment rebellion amongst his own supporters. So, instead of killing his brother, Humayun had Kamran Mirza blinded which would end any claim by the latter to the throne. Humayun sent Kamran Mirza on Hajj, as he hoped to see his brother thereby absolved of his offences. However Kamran Mirza died close to Mecca in the Arabian Peninsula in 1557.
Sher Shah Suri had died in 1545; his son and successor Islam Shah died in 1554. These two deaths left the dynasty reeling and disintegrating. Three rivals for the throne all marched on Delhi, while in many cities leaders tried to stake a claim for independence. This was a perfect opportunity for the Mughals to march back to India.
The Mughal Emperor Humayun gathered a vast army and attempted the challenging task of retaking the throne in Delhi. Humayun placed the army under the leadership of Bairam Khan, a wise move given Humayun's own record of military ineptitude, and it turned out to be prescient as Bairam proved himself a great tactician. At the Battle of Sirhind on 22 June 1555, the armies of Sikandar Shah Suri were decisively defeated and the Mughal Empire was re-established in India.
The "Gazetteer of Ulwur" states:
Bairam Khan led the army through the Punjab virtually unopposed. The fort of Rohtas, which was built in 1541–1543 by Sher Shah Suri to crush the Gakhars who were loyal to Humayun, was surrendered without a shot by a treacherous commander. The walls of the Rohtas Fort measure up to 12.5 meters in thickness and up to 18.28 meters in height. They extend for 4 km and feature 68 semi-circular bastions. Its sandstone gates, both massive and ornate, are thought to have exerted a profound influence on Mughal military architecture.
The only major battle faced by Humayun's armies was against Sikander Suri in Sirhind, where Bairam Khan employed a tactic whereby he engaged his enemy in open battle, and then retreated quickly in apparent fear. When the enemy followed, they were surprised by entrenched defensive positions and were easily annihilated.
After Sirhind, most towns and villages chose to welcome the invading army as it made its way to the capital. On 23 July 1555, Humayun once again sat on Babur's throne in Delhi.
With all of Humayun's brothers now dead, there was no fear of another usurping his throne during his military campaigns. He was also now an established leader and could trust his generals. With this new-found strength Humayun embarked on a series of military campaigns aimed at extending his reign over areas in the east and west of the subcontinent. His sojourn in exile seems to have reduced his reliance on astrology, and his military leadership came to imitate the more effective methods that he had observed in Persia.
Edward S. Holden writes: "He was uniformly kind and considerate to his dependents, devotedly attached to his son Akbar, to his friends, and to his turbulent brothers. The misfortunes of his reign arose, in great measure, from his failure to treat them with rigor." He further writes: "The very defects of his character, which render him less admirable as a successful ruler of nations, make us more fond of him as a man. His renown has suffered in that his reign came between the brilliant conquests of Babur and the beneficent statesmanship of Akbar; but he was not unworthy to be the son of the one and the father of the other." Stanley Lane-Poole writes in his book "Medieval India": "His name meant the winner (Lucky/Conqueror); there is no king in history so wrongly named as Humayun", for he was of a forgiving nature. He further writes, "He was in fact unfortunate ... Scarcely had he enjoyed his throne for six months in Delhi when he slipped down from the polished steps of his palace and died in his forty-ninth year (Jan. 24, 1556). If there was a possibility of falling, Humayun was not the man to miss it. He tumbled through his life and tumbled out of it."
Humayun ordered the crushing by elephant of an imam he mistakenly believed to be critical of his reign.
On 24 January 1556, Humayun, with his arms full of books, was descending the staircase from his library when the muezzin announced the Azaan (the call to prayer). It was his habit, wherever and whenever he heard the summons, to bow his knee in holy reverence. Trying to kneel, he caught his foot in his robe, tumbled down several steps and hit his temple on a rugged stone edge. He died three days later. His body was laid to rest in Purana Quila initially, but, because of an attack by Hemu on Delhi and the capture of Purana Qila, Humayun's body was exhumed by the fleeing army and transferred to Kalanaur in Punjab where Akbar was crowned. Humayun's Tomb in Delhi is the first very grand garden tomb in Mughal architecture, setting the precedent later followed by the Taj Mahal and many other Indian monuments. It was commissioned by his favourite and devoted chief wife, Bega Begum.
Akbar later asked his aunt, Gulbadan Begum, to write a biography of her brother, the "Humayun nameh" (or "Humayun-nama" etc), and what she remembered of Babur. The work begins:
There had been an order issued, ‘Write down whatever you know of the doings of Firdous-Makani (Babur) and Jannat-Ashyani (Humayun)’. At this time when his Majesty Firdaus-Makani passed from this perishable world to the everlasting home, I, this lowly one, was eight years old, so it may well be that I do not remember much. However in obedience to the royal command, I set down whatever there is that I have heard and remember.
The full title is "Ahwal Humayun Padshah Jamah Kardom Gulbadan Begum bint Babur Padshah amma Akbar Padshah". She was only eight when Babur died, and was married at 17, but her work, in a simple Persian style, has been found very interesting by its relatively few readers.
Unlike other Mughal royal biographies (the "Zafarnama" of Timur, "Baburnama", and his own "Akbarnama") no richly illustrated copy has survived, and the work is only known from a single battered and slightly incomplete manuscript, now in the British Library, that emerged in the 1860s. Annette Beveridge published an English translation in 1901, and editions in English and Bengali have been published since 2000.
His full title as Emperor of the Mughal Empire was "Al-Sultan al-'Azam wal Khaqan al-Mukarram, Jam-i-Sultanat-i-haqiqi wa Majazi, Sayyid al-Salatin, Abu'l Muzaffar Nasir ud-din Muhammad Humayun Padshah Ghazi, Zillu'llah".
Prince-elector
The prince-electors (German "Kurfürst", plural "Kurfürsten"), or electors for short, were the members of the electoral college that elected the emperor of the Holy Roman Empire.
From the 13th century onwards, the prince-electors had the privilege of electing the monarch who would be crowned by the pope. After 1508, there were no imperial coronations and the election was sufficient. Charles V (elected in 1519) was the last emperor to be crowned (1530); his successors were elected emperors by the electoral college, each being titled "Elected Emperor of the Romans" (; ).
The dignity of elector carried great prestige and was considered to be second only to that of king or emperor. The electors held exclusive privileges that were not shared with other princes of the Empire, and they continued to hold their original titles alongside that of elector.
The heir apparent to a secular prince-elector was known as an electoral prince ("Kurprinz").
The German element "Kur-" is based on the Middle High German irregular verb "kiesen" and is related etymologically to the English word "choose" (cf. Old English "ceosan", participle "coren" 'having been chosen', and Gothic "kiusan"). In English, the "s"/"r" alternation in the Germanic verb conjugation has been regularized to "s" throughout, while German retains the "r" in "Kur-". There is also a modern German verb "küren", which means 'to choose' in a ceremonial sense.
"Fürst" is German for 'prince', but while the German language distinguishes between the head of a principality ("der Fürst") and the son of a monarch ("der Prinz"), English uses "prince" for both concepts. "Fürst" itself is related to English "first" and is thus the 'foremost' person in his realm. Note that 'prince' derives from Latin "princeps", which carried the same meaning.
Electors were "reichsstände" (Imperial Estates), enjoying precedence over the other princes. They were, until the 18th century, exclusively entitled to be addressed with the title "Durchlaucht" (Serene Highness). In 1742, the electors became entitled to the superlative "Durchläuchtigste" (Most Serene Highness), while other princes were promoted to "Durchlaucht".
As Imperial Estates, the electors enjoyed all the privileges of the other princes holding that status, including the right to enter into alliances, autonomy in dynastic affairs, and precedence over other subjects. The Golden Bull had granted them the Privilegium de non appellando, which prevented their subjects from lodging an appeal to a higher Imperial court. However, while this privilege and some others were automatically granted to Electors, they were not exclusive to them, and many of the larger Imperial Estates were individually granted some or all of those rights and privileges.
The electors, like the other princes ruling States of the Empire, were members of the Imperial Diet, which was divided into three "collegia": the Council of Electors, the Council of Princes, and the Council of Cities. In addition to sitting in the Council of Electors, several lay electors were also members of the Council of Princes by virtue of other territories they possessed. In many cases, the lay electors ruled numerous States of the Empire and therefore held several votes in the Council of Princes. In 1792, the King of Bohemia held three votes, the Elector of Bavaria six, the Elector of Brandenburg eight, and the Elector of Hanover six.
Thus, of the hundred votes in the Council of Princes in 1792, twenty-three belonged to electors. The lay electors therefore exercised considerable influence, being members of the small Council of Electors and holding a significant number of votes in the Council of Princes. The assent of both bodies was required for important decisions affecting the structure of the Empire, such as the creation of new electorates or States of the Empire.
In addition to voting by colleges or councils, the Imperial Diet also voted in religious coalitions, as provided for in the Peace of Westphalia. The Archbishop of Mainz presided over the Catholic body, or "corpus catholicorum", while the Elector of Saxony presided over the Protestant body, or "corpus evangelicorum". The division into religious bodies was on the basis of the official religion of the state, and not of its rulers. Thus, even when the Electors of Saxony were Catholics during the eighteenth century, they continued to preside over the "corpus evangelicorum", since the state of Saxony was officially Protestant.
The electors were originally summoned by the Archbishop of Mainz within one month of an Emperor's death, and met within three months of being summoned. During the "interregnum", imperial power was exercised by two imperial vicars. Each vicar, in the words of the Golden Bull, was "the administrator of the empire itself, with the power of passing judgments, of presenting to ecclesiastical benefices, of collecting returns and revenues and investing with fiefs, of receiving oaths of fealty for and in the name of the holy empire". The Elector of Saxony was vicar in areas operating under Saxon law (Saxony, Westphalia, Hanover, and northern Germany), while the Elector Palatine was vicar in the remainder of the Empire (Franconia, Swabia, the Rhine, and southern Germany). The Elector of Bavaria replaced the Elector Palatine in 1623, but when the latter was granted a new electorate in 1648, there was a dispute between the two as to which was vicar. In 1659, both purported to act as vicar, but the other electors recognised the Elector of Bavaria. Later, the two electors made a pact to act as joint vicars, but the Imperial Diet rejected the agreement. In 1711, while the Elector of Bavaria was under the ban of the Empire, the Elector Palatine again acted as vicar, but his cousin was restored to his position upon his restoration three years later.
Finally, in 1745, the two agreed to alternate as vicars, with Bavaria starting first. This arrangement was upheld by the Imperial Diet in 1752. In 1777 the question was settled when the Elector Palatine inherited Bavaria. On many occasions, however, there was no interregnum, as a new king had been elected during the lifetime of the previous Emperor.
Frankfurt regularly served as the site of the election from the fifteenth century on, but elections were also held at Cologne (1531), Regensburg (1575 and 1636), and Augsburg (1653 and 1690). An elector could appear in person or could appoint another elector as his proxy. More often, an electoral suite or embassy was sent to cast the vote; the credentials of such representatives were verified by the Archbishop of Mainz, who presided over the ceremony. The deliberations were held at the city hall, but voting occurred in the cathedral. In Frankfurt, a special electoral chapel, or "Wahlkapelle", was used for elections. Under the Golden Bull, a majority of electors sufficed to elect a king, and each elector could cast only one vote. Electors were free to vote for whomsoever they pleased (including themselves), but dynastic considerations played a great part in the choice.
Electors drafted a "Wahlkapitulation", or electoral capitulation, which was presented to the king-elect. The capitulation may be described as a contract between the princes and the king, the latter conceding rights and powers to the electors and other princes. Once an individual swore to abide by the electoral capitulation, he assumed the office of King of the Romans.
In the 10th and 11th centuries, princes often acted merely to confirm hereditary succession in the Saxon Ottonian dynasty and Franconian Salian dynasty. But with the actual formation of the prince-elector class, elections became more open, starting with the election of Lothair II in 1125. The Staufen dynasty managed to get its sons formally elected in their fathers' lifetimes almost as a formality. After these lines ended in extinction, the electors began to elect kings from different families so that the throne would not once again settle within a single dynasty.
For some two centuries, the monarchy was elective both in theory and in practice; the arrangement, however, did not last, since the powerful House of Habsburg managed to secure succession within their dynasty during the fifteenth century. All kings elected from 1438 onwards were from among the Habsburg Archdukes of Austria (and later Kings of Hungary and Bohemia) until 1740, when the archduchy was inherited by a woman, Maria Theresa, sparking the War of the Austrian Succession.
A representative of the House of Wittelsbach was elected for a short period of time, but in 1745 Maria Theresa's husband, Francis I of the Habsburg-Lorraine dynasty, became King. All of his successors were also from the same family. Hence, for the greater part of the Empire's history, the role of the electors was largely ceremonial.
Each elector held a "High Office of the Empire" ("Reichserzämter") and was a member of the (ceremonial) Imperial Household. The three spiritual electors were all Arch-Chancellors: the Archbishop of Mainz was Arch-Chancellor of Germany, the Archbishop of Cologne was Arch-Chancellor of Italy, and the Archbishop of Trier was Arch-Chancellor of Burgundy. That left between four and six secular electors: originally the King of Bohemia as Arch-Cupbearer, the Count Palatine of the Rhine as Arch-Steward, the Duke of Saxony as Arch-Marshal, and the Margrave of Brandenburg as Arch-Chamberlain.
When the Duke of Bavaria replaced the Elector Palatine in 1623, he assumed the latter's office of Arch-Steward. When the Count Palatine was granted a new electorate, he assumed the position of Arch-Treasurer of the Empire. When the Duke of Bavaria was banned in 1706, the Elector Palatine returned to the office of Arch-Steward, and in 1710 the Elector of Hanover was promoted to the post of Arch-Treasurer. Matters were complicated by the Duke of Bavaria's restoration in 1714: the Elector of Bavaria resumed the office of Arch-Steward, the Elector Palatine returned to the post of Arch-Treasurer, and the Elector of Hanover was given the new office of Arch-Bannerbearer. The Electors of Hanover, however, continued to be styled Arch-Treasurers, though the Elector Palatine was the one who actually exercised the office until 1777, when he inherited Bavaria and the Arch-Stewardship. After 1777, no further changes were made to the Imperial Household; new offices were planned for the Electors admitted in 1803, but the Empire was abolished before they could be created. The Duke of Württemberg, however, began to adopt the trappings of the Arch-Bannerbearer.
Many High Officers were entitled to use augmentations on their coats of arms; these augmentations, which were special marks of honour, appeared in the centre of the electors' shields (as shown in the image above) atop the other charges (in heraldic terms, the augmentations appeared in the form of inescutcheons). The Arch-Steward used "gules an orb Or" (a gold orb on a red field). The Arch-Marshal utilised the more complicated "per fess sable and argent, two swords in saltire gules" (two red swords arranged in the form of a saltire, on a black and white field). The Arch-Chamberlain's augmentation was "azure a sceptre palewise Or" (a gold sceptre on a blue field), while the Arch-Treasurer's was "gules the crown of Charlemagne Or" (a gold crown on a red field). As noted above, the Elector Palatine and the Elector of Hanover styled themselves Arch-Treasurer from 1714 until 1777; during this time, both electors used the corresponding augmentations. The three Arch-Chancellors and the Arch-Cupbearer did not use any augmentations.
The electors discharged the ceremonial duties associated with their offices only during coronations, where they bore the crown and regalia of the Empire. Otherwise, they were represented by holders of corresponding "Hereditary Offices of the Household". The Arch-Butler was represented by the Butler (Cupbearer) (the Count of Althann), the Arch-Seneschal by the Steward (the Count of Waldburg), the Arch-Chamberlain by the Chamberlain (the Count of Hohenzollern), the Arch-Marshal by the Marshal (the Count of Pappenheim), and the Arch-Treasurer by the Treasurer (the Count of Sinzendorf). The Duke of Württemberg assigned the count of Zeppelin-Aschhausen as hereditary Bannerbearer.
The State arms of each Imperial Elector, bearing the emblems of the Imperial High Offices where appropriate, fell into three groups: the three spiritual electors (the archbishops), the four secular electors, and the electors added in the 17th century.
As Napoleon waged war on Europe between 1803 and 1806, a series of changes to the Constitution of the Holy Roman Empire were attempted before the Empire's final collapse.
The German practice of electing monarchs began when ancient Germanic tribes formed "ad hoc" coalitions and elected the leaders thereof. Elections were irregularly held by the Franks, whose successor states include France and the Holy Roman Empire. The French monarchy eventually became hereditary, but the Holy Roman Emperors remained elective, at least in theory, although the Habsburgs provided most of the later monarchs. While all free men originally exercised the right to vote in such elections, suffrage eventually came to be limited to the leading men of the realm. In the election of Lothar II in 1125, a small number of eminent nobles chose the monarch and then submitted him to the remaining magnates for their approbation.
Soon, the right to choose the monarch was settled on an exclusive group of princes, and the procedure of seeking the approval of the remaining nobles was abandoned. The college of electors was mentioned in 1152 and again in 1198. The composition of electors at that time is unclear, but appears to have included representatives of the church and the dukes of the four nations of Germany: the Franks (Duchy of Franconia), Swabians (Duchy of Swabia), Saxons (Duchy of Saxony) and Bavarians (Duchy of Bavaria).
The electoral college is known to have existed by 1152, but its composition is unknown. A letter written by Pope Urban IV in 1265 suggests that by "immemorial custom", seven princes had the right to elect the King and future Emperor. The pope wrote that the seven electors were those who had just voted in the election of 1257, which resulted in the election of two kings.
The three Archbishops oversaw the most venerable and powerful sees in Germany, while the other four were supposed to represent the dukes of the four nations. The Count Palatine of the Rhine held most of the former Duchy of Franconia after the last Duke died in 1039. The Margrave of Brandenburg became an Elector when the Duchy of Swabia was dissolved after the last Duke of Swabia was beheaded in 1268. Saxony, even with diminished territory, retained its eminent position.
The Palatinate and Bavaria were originally (since 1214) held by the same individual, but in 1253 they were divided between two members of the House of Wittelsbach. The other electors refused to allow two princes from the same dynasty to have electoral rights, so a heated rivalry arose between the Count Palatine and the Duke of Bavaria over who should hold the Wittelsbach seat.
Meanwhile, the King of Bohemia, who held the ancient imperial office of Arch-Cupbearer, asserted his right to participate in elections. Sometimes he was challenged on the grounds that his kingdom was not German, though usually he was recognized, instead of Bavaria which after all was just a younger line of Wittelsbachs.
The Declaration of Rhense issued in 1338 had the effect that election by the majority of the electors automatically conferred the royal title and rule over the empire, without papal confirmation. The Golden Bull of 1356 finally resolved the disputes among the electors. Under it, the Archbishops of Mainz, Trier, and Cologne, as well as the King of Bohemia, the Count Palatine of the Rhine, the Duke of Saxony, and the Margrave of Brandenburg held the right to elect the King.
The college's composition remained unchanged until the 17th century, although the Electorate of Saxony was transferred from the senior to the junior branch of the Wettin family in 1547, in the aftermath of the Schmalkaldic War.
In 1621, the Elector Palatine, Frederick V, came under the imperial ban after participating in the Bohemian Revolt (a part of the Thirty Years' War). The Elector Palatine's seat was conferred on the Duke of Bavaria, the head of a junior branch of his family. Originally, the Duke held the electorate personally, but it was later made hereditary along with the duchy. When the Thirty Years' War concluded with the Peace of Westphalia in 1648, a new electorate was created for the Count Palatine of the Rhine. Since the Elector of Bavaria retained his seat, the number of electors increased to eight; the two Wittelsbach lines were now sufficiently estranged so as not to pose a combined potential threat.
In 1685, the religious composition of the College of Electors was disrupted when a Catholic branch of the Wittelsbach family inherited the Palatinate. A new Protestant electorate was created in 1692 for the Duke of Brunswick-Lüneburg, who became known as the Elector of Hanover (the Imperial Diet officially confirmed the creation in 1708). The Elector of Saxony converted to Catholicism in 1697 so that he could become King of Poland, but no additional Protestant electors were created. Although the Elector of Saxony was personally Catholic, the Electorate itself remained officially Protestant, and the Elector even remained the leader of the Protestant body in the Reichstag.
In 1706, the Elector of Bavaria and Archbishop of Cologne were banned during the War of the Spanish Succession, but both were restored in 1714 after the Peace of Baden. In 1777, the number of electors was reduced to eight when the Elector Palatine inherited Bavaria.
Many changes to the composition of the college were necessitated by Napoleon's aggression during the early 19th century. The Treaty of Lunéville (1801), which ceded territory on the Rhine's left bank to France, led to the abolition of the archbishoprics of Trier and Cologne, and the transfer of the remaining spiritual Elector from Mainz to Regensburg. In 1803, electorates were created for the Duke of Württemberg, the Margrave of Baden, the Landgrave of Hesse-Kassel, and the Duke of Salzburg, bringing the total number of electors to ten. When Austria annexed Salzburg under the Treaty of Pressburg (1805), the Duke of Salzburg moved to the Grand Duchy of Würzburg and retained his electorate. None of the new electors, however, had an opportunity to cast votes, as the Holy Roman Empire was abolished in 1806, and the new electorates were never confirmed by the Emperor.
After the abolition of the Holy Roman Empire in August 1806, the Electors continued to reign over their territories, many of them taking higher titles. The Electors of Bavaria, Württemberg, and Saxony styled themselves Kings, while the Electors of Baden, Hesse-Darmstadt, Regensburg, and Würzburg became Grand Dukes. The Elector of Hesse-Kassel, however, retained the meaningless title "Elector of Hesse", thus distinguishing himself from other Hessian princes (the Grand Duke of Hesse-Darmstadt and the Landgrave of Hesse-Homburg). Napoleon soon exiled him and Kassel was annexed to the Kingdom of Westphalia, a new creation. The King of Great Britain remained at war with Napoleon and continued to style himself Elector of Hanover, while the Hanoverian government continued to operate in London.
The Congress of Vienna accepted the Electors of Bavaria, Württemberg, and Saxony as Kings, along with the newly created Grand Dukes. The Elector of Hanover finally joined his fellow Electors by declaring himself the King of Hanover. The restored Elector of Hesse, a Napoleonic creation, tried to be recognized as the King of the Chatti. However, the European powers refused to acknowledge this title at the Congress of Aix-la-Chapelle (1818) and instead listed him with the grand dukes as a "Royal Highness". Believing the title of Prince-Elector to be superior in dignity to that of Grand Duke, the Elector of Hesse-Kassel chose to remain an Elector, even though there was no longer a Holy Roman Emperor to elect. Hesse-Kassel remained the only Electorate in Germany until 1866, when the country backed the losing side in the Austro-Prussian War and was absorbed into Prussia.
Religion was a factor in the election of the Holy Roman Emperor, as some Protestant electors would refuse to vote for a Roman Catholic and vice versa. Most of the time, religion played a minor role and was overshadowed by other factors, including dynastic, territorial and other political interests. For example, the Protestant Elector of Saxony voted for Ferdinand II, Archduke of Austria, putting his political interests first even though Ferdinand was a staunch Roman Catholic who would eventually lead the Empire into the Thirty Years' War.
At the height of the Protestant Reformation, there were times when the electoral college had a Protestant majority. However, all of the Holy Roman Emperors were Catholic. Some historians maintain that Ferdinand I had been touched by Reformed theology and was probably the closest the Holy Roman Empire ever came to a Protestant emperor; he remained nominally Catholic throughout his life, although reportedly he refused last rites on his deathbed. Other historians maintain he was as Catholic as his brother, but tended to see religion as outside the political sphere.
Howard Hughes
Howard Robard Hughes Jr. (December 24, 1905 – April 5, 1976) was an American business magnate, investor, record-setting pilot, engineer, film director, and philanthropist, known during his lifetime as one of the most financially successful individuals in the world. He first became prominent as a film producer, and then as an influential figure in the aviation industry. Later in life, he became known for his eccentric behavior and reclusive lifestyle – oddities that were caused in part by a worsening obsessive-compulsive disorder (OCD), chronic pain from a near-fatal plane crash and increasing deafness.
As a film tycoon, Hughes gained fame in Hollywood beginning in the late 1920s, when he produced big-budget and often controversial films such as "The Racket" (1928), "Hell's Angels" (1930), and "Scarface" (1932). Later he controlled the RKO film studio.
Hughes formed the Hughes Aircraft Company in 1932, hiring numerous engineers and designers. He spent the rest of the 1930s and much of the 1940s setting multiple world air speed records and building the Hughes H-1 Racer and H-4 Hercules (the "Spruce Goose"). He acquired and expanded Trans World Airlines and later acquired Air West, renaming it Hughes Airwest. Hughes was included in "Flying" Magazine's list of the 51 Heroes of Aviation, ranked at 25. Today, his legacy is maintained through the Howard Hughes Medical Institute and the Howard Hughes Corporation.
Records locate the birthplace of Howard Hughes as either Humble or Houston, Texas. The date of his birth remains uncertain because sources conflict. He repeatedly claimed Christmas Eve as his birthday. A 1941 affidavit birth certificate, signed by his aunt Annette Gano Lummis and by Estelle Boughton Sharp, states that he was born on December 24, 1905, in Harris County, Texas. However, his certificate of baptism, recorded on October 7, 1906, in the parish register of St. John's Episcopal Church in Keokuk, Iowa, lists his date of birth as September 24, 1905, without any reference to the place of birth.
Howard Robard Hughes Jr. was the son of Allene Stone Gano (1883–1922) and of Howard R. Hughes Sr. (1869–1924), a successful inventor and businessman from Missouri. He had English, Welsh and some French Huguenot ancestry, and was a descendant of John Gano (1727–1804), the minister who allegedly baptized George Washington. His father patented (1909) the two-cone roller bit, which allowed rotary drilling for petroleum in previously inaccessible places. The senior Hughes made the shrewd and lucrative decision to commercialize the invention by leasing the bits instead of selling them, obtained several early patents, and founded the Hughes Tool Company in 1909. Hughes' uncle was the famed novelist, screenwriter, and film-director Rupert Hughes.
At a young age, Hughes showed interest in science and technology. In particular, he had great engineering aptitude and built Houston's first "wireless" radio transmitter at age 11. He went on to be one of the first licensed ham-radio operators in Houston, having the assigned callsign W5CY (originally 5CY). At 12, Hughes was photographed in the local newspaper, identified as the first boy in Houston to have a "motorized" bicycle, which he had built from parts from his father's steam engine. He was an indifferent student, with a liking for mathematics, flying, and mechanics. He took his first flying lesson at 14, and attended Fessenden School in Massachusetts in 1921.
After a brief stint at The Thacher School, Hughes attended math and aeronautical engineering courses at Caltech. The red-brick house where Hughes lived as a teenager at 3921 Yoakum Blvd., Houston, still stands, now on the grounds of the University of St. Thomas.
His mother Allene died in March 1922 from complications of an ectopic pregnancy. Howard Hughes Sr. died of a heart attack in 1924. Their deaths apparently inspired Hughes to include the establishment of a medical research laboratory in the will that he signed in 1925 at age 19. Howard Sr.'s will had not been updated since Allene's death, and Hughes inherited 75% of the family fortune. On his 19th birthday, Hughes was declared an emancipated minor, enabling him to take full control of his life.
From a young age Hughes became a proficient and enthusiastic golfer. He often scored near-par figures, played the game to a two-three handicap during his 20s, and for a time aimed for a professional golf career. He golfed frequently with top players, including Gene Sarazen. Hughes rarely played competitively and gradually gave up his passion for the sport to pursue other interests. Hughes used to play golf every afternoon at LA courses including the Lakeside Golf Club, Wilshire Country Club, or the Bel-Air Country Club. Partners included George Von Elm or Ozzie Carlton. After Hughes hurt himself in the late 1920s, his golfing tapered off, and after his F-11 crash, Hughes was unable to play at all.
Hughes withdrew from Rice University shortly after his father's death. On June 1, 1925, he married Ella Botts Rice, daughter of David Rice and Martha Lawson Botts of Houston, and great-niece of William Marsh Rice, for whom Rice University was named. They moved to Los Angeles, where he hoped to make a name for himself as a filmmaker.
They moved into the Ambassador Hotel, and Hughes proceeded to learn to fly a Waco, while simultaneously producing his first motion picture, "Swell Hogan".
Hughes enjoyed a highly successful business career beyond engineering, aviation, and filmmaking, with many of his endeavors involving varied entrepreneurial roles. The Summa Corporation was the name adopted for the business interests of Howard Hughes after he sold the tool division of Hughes Tool Company in 1972. The company served as the principal holding company for Hughes' business ventures and investments. It was primarily involved in the aerospace and defense, electronics, mass media, manufacturing, and hospitality industries, but maintained a strong presence in a wide variety of others, including real estate, petroleum drilling and oilfield services, consulting, entertainment, and engineering. Much of his fortune was later used for philanthropic causes, notably health care and medical research.
His first film, "Swell Hogan," directed by Ralph Graves, was a disaster. His next two films, "Everybody's Acting" (1926) and "Two Arabian Knights" (1927), were financial successes, the latter winning the first Academy Award for Best Director of a comedy picture. "The Racket" (1928) and "The Front Page" (1931) were also nominated for Academy Awards.
Hughes spent $3.5 million to make the flying film "Hell's Angels" (1930). "Hell's Angels" received one Academy Award nomination for Best Cinematography.
He produced another hit, "Scarface" (1932), a production delayed by censors' concern over its violence.
"The Outlaw" premiered in 1943, but was not released nationally until 1946. The film featured Jane Russell, who received considerable attention from industry censors, this time owing to Russell's revealing costumes.
From the 1940s to the late 1950s, the Hughes Tool Company ventured into the film industry when it obtained partial ownership of the RKO companies, which included RKO Pictures, RKO Studios, a chain of movie theaters known as RKO Theatres, and a network of radio stations known as the RKO Radio Network.
In 1948, Hughes gained control of RKO, a struggling major Hollywood studio, by acquiring the 929,000 shares owned by Floyd Odlum's Atlas Corporation for $8,825,000. Within weeks of acquiring the studio, Hughes dismissed 700 employees. Production dwindled to nine pictures during Hughes' first year in control; previously, RKO had averaged 30 per year.
Production was shut down for six months, during which time the political leanings of every remaining RKO employee were investigated. Only after ensuring that the stars under contract to RKO had no suspect affiliations would Hughes approve completed pictures to be sent back for re-shooting. This was especially true of the women under contract to RKO at that time. If Hughes felt that his stars did not properly represent political views to his liking, or if a film's anti-communist politics were not sufficiently clear, he pulled the plug. In 1952, an abortive sale to a mafia-connected, Chicago-based group with no experience in the industry disrupted studio operations at RKO even further.
In 1953, Hughes became involved in a high-profile lawsuit as part of the settlement of "United States v. Paramount Pictures, Inc.", an antitrust case. As a result of the hearings, the shaky status of RKO became increasingly apparent. A steady stream of lawsuits from RKO's minority shareholders, accusing Hughes of financial misconduct and corporate mismanagement, had grown extremely annoying to him. Since Hughes wanted to focus primarily on his aircraft manufacturing and TWA holdings during the Korean War years, he offered to buy out all other stockholders in order to dispense with their distractions.
By the end of 1954, Hughes had gained near-total control of RKO at a cost of nearly $24 million, becoming the first sole owner of a major Hollywood studio since the silent film era. Six months later, Hughes sold the studio to the General Tire and Rubber Company for $25 million. Hughes retained the rights to pictures that he had personally produced, including those made at RKO. He also retained Jane Russell's contract. For Howard Hughes, this was the virtual end of his 25-year involvement in the motion picture industry. However, his reputation as a financial wizard emerged unscathed. During that time period, RKO became known as the home of film noir classic productions thanks in part to the limited budgets required to make such films during Hughes' tenure. Hughes reportedly walked away from RKO having made $6.5 million in personal profit. According to Noah Dietrich, Hughes made a $10,000,000 profit from the sale of the theaters, and made a profit of $1,000,000 from his 7-year ownership of RKO.
General Tire was interested mainly in exploiting the value of the RKO library for television programming even though it made some attempts to continue producing films. After a year and a half of mixed success, General Tire shut down film production entirely at RKO at the end of January 1957. The studio lots in Hollywood and Culver City were sold to Desilu Productions later that year for $6.15 million.
According to Noah Dietrich, "Land became a principal asset for the Hughes empire." This investment sheltered the profits his companies made. Hughes acquired 1,200 acres in Culver City for Hughes Aircraft, bought seven sections (4,480 acres) in Tucson for his Falcon missile plant, and purchased 25,000 acres near Las Vegas.
Beyond his ventures in the manufacturing, aviation, entertainment, and hospitality industries, Hughes was a successful real estate investor. He amassed vast holdings of undeveloped land in Las Vegas and the desert surrounding the city, much of which went unused during his lifetime. In 1968, the Hughes Tool Company purchased the North Las Vegas Air Terminal.
The Howard Hughes Corporation, originally known as Summa Corporation, was formed in 1972 when the oil tools business of Hughes Tool Company, then owned by Howard Hughes Jr., was floated on the New York Stock Exchange under the Hughes Tool name. This forced the remaining businesses of the "original" Hughes Tool to adopt a new corporate name, Summa. The name "Summa" (Latin for "highest") was adopted without the approval of Hughes himself, who preferred to keep his own name on the business and suggested HRH Properties (for Hughes Resorts and Hotels, and also his own initials). In 1988, Summa announced plans for Summerlin, a master-planned community named for Howard Hughes' paternal grandmother, Jean Amelia Summerlin.
Initially staying in the Desert Inn, Hughes refused to vacate his room and instead decided to purchase the entire hotel. He extended his financial empire to include Las Vegas real estate, hotels, and media outlets, spending an estimated $300 million and using his considerable resources to take over many of the well-known hotels, especially venues connected to organized crime. He quickly became one of the most powerful men in Las Vegas and was instrumental in changing the city's image from its Wild West roots into that of a more refined, cosmopolitan city. In addition to the Desert Inn, Hughes would eventually own the Sands, Frontier, Silver Slipper, Castaways, and Landmark in Las Vegas, as well as Harold's Club in Reno. He would eventually become the largest employer in Nevada.
Another portion of Hughes' business interests lay in aviation, airlines, and the aerospace and defense industries. A lifelong aircraft enthusiast and pilot, Hughes survived four airplane accidents: one in a Thomas-Morse Scout while filming "Hell's Angels", one while setting the airspeed record in the Hughes Racer, one at Lake Mead in 1943, and the near-fatal crash of the Hughes XF-11 in 1946. At Rogers Airport in Los Angeles, he learned to fly from pioneer aviators, including Moye Stephens and J.B. Alexander. He set many world records and commissioned the construction of custom aircraft for himself while heading Hughes Aircraft at the airport in Glendale, California. The most technologically important aircraft he commissioned there was the Hughes H-1 Racer. On September 13, 1935, Hughes, flying the H-1, set the landplane airspeed record over his test course near Santa Ana, California (Giuseppe Motta had reached 362 mph in 1929 and George Stainforth 407.5 mph in 1931, both in seaplanes). This was the last time in history that the world airspeed record was set in an aircraft built by a private individual. A year and a half later, on January 19, 1937, flying the same H-1 Racer fitted with longer wings, Hughes set a new transcontinental airspeed record by flying non-stop from Los Angeles to Newark in seven hours, 28 minutes, and 25 seconds, beating his own previous record of nine hours, 27 minutes.
The H-1 Racer featured a number of design innovations: it had retractable landing gear (as the Boeing Monomail had five years before) and all rivets and joints set flush into the body of the aircraft to reduce drag. The H-1 Racer is thought to have influenced the design of a number of World War II fighters such as the Mitsubishi A6M Zero, Focke-Wulf Fw 190, and F8F Bearcat, although that has never been reliably confirmed. The H-1 Racer was donated to the Smithsonian.
On July 14, 1938, Hughes set another record by completing a flight around the world in just 91 hours (three days, 19 hours, 17 minutes), beating the previous record, set in 1933 by Wiley Post in a single-engine Lockheed Vega, by almost four days. Taking off from New York City, Hughes flew to Paris, Moscow, Omsk, Yakutsk, Fairbanks, and Minneapolis before returning to New York City; he flew so fast that he returned home ahead of photographs of his flight. For this flight he flew a Lockheed 14 Super Electra (NX18973, a twin-engine transport with a four-man crew) fitted with the latest radio and navigational equipment. Harry Connor was the co-pilot, Thomas Thurlow the navigator, Richard Stoddart the engineer, and Ed Lund the mechanic; Albert Lodwick of Mystic, Iowa, served as flight operations manager. Hughes wanted the flight to be a triumph of American aviation technology, illustrating that safe, long-distance air travel was possible. While he had previously been relatively obscure despite his wealth, better known for dating Katharine Hepburn, New York City now gave Hughes a ticker-tape parade in the Canyon of Heroes. In 1938, the William P. Hobby Airport in Houston, Texas, known at the time as Houston Municipal Airport, was renamed after Hughes, but the name was changed back after people objected to naming the airport after a living person.
Hughes also had a role in the design and financing of both the Boeing 307 Stratoliner and Lockheed L-049 Constellation.
He received many awards as an aviator, including the Harmon Trophy in 1936 and 1938, the Collier Trophy and the Bibesco Cup of the Fédération Aéronautique Internationale in 1938, the Octave Chanute Award in 1940, and a special Congressional Gold Medal in 1939 "in recognition of the achievements of Howard Hughes in advancing the science of aviation and thus bringing great credit to his country throughout the world."
Hughes had declined to go to the White House after his around-the-world flight to collect the Congressional Gold Medal; President Harry S. Truman eventually sent it to him after the F-11 crash.
The Hughes D-2 was conceived in 1939 as a bomber with five crew members, powered by 42-cylinder Wright R-2160 Tornado engines. In the end it appeared as a two-seat fighter-reconnaissance aircraft, designated the D-2A, powered by two Pratt & Whitney R-2800-49 engines. The aircraft was constructed using the Duramold process. The prototype was brought to Harper Dry Lake, California, in great secrecy in 1943 and first flew on June 20 of that year. Acting on a recommendation of the president's son, Colonel Elliott Roosevelt, who had become friends with Hughes, in September 1943 the USAAF ordered 100 of a reconnaissance development of the D-2, known as the F-11. Hughes then attempted to get the military to pay for the development of the D-2. In November 1944, the hangar containing the D-2A was reportedly hit by lightning and the aircraft was destroyed. The D-2 design was abandoned, but it led to the extremely controversial Hughes XF-11, a large all-metal, two-seat reconnaissance aircraft powered by two Pratt & Whitney R-4360-31 engines, each driving a set of contra-rotating propellers. Only the two prototypes were completed, the second one with a single propeller per side.
In the spring of 1943, Hughes spent nearly a month in Las Vegas, test flying his Sikorsky S-43 amphibian aircraft, practicing touch-and-go landings on Lake Mead in preparation for flying the H-4 Hercules. The weather conditions at the lake during the day were ideal and he enjoyed Las Vegas at night. On May 17, 1943, Hughes flew the Sikorsky from California carrying two CAA aviation inspectors, two of his employees, and actress Ava Gardner. Hughes dropped Gardner off in Las Vegas and proceeded to Lake Mead to conduct qualifying tests in the S-43. The test flight did not go well. The Sikorsky crashed into Lake Mead, killing CAA inspector Ceco Cline and Hughes' employee Richard Felt. Hughes suffered a severe gash on the top of his head when he hit the upper control panel and had to be rescued by one of the others on board. Hughes paid divers $100,000 to raise the aircraft and later spent more than $500,000 restoring it. Hughes sent the plane to Houston, where it remained for many years.
Hughes was involved in another near-fatal aircraft accident on July 7, 1946, while performing the first flight of the prototype U.S. Army Air Forces reconnaissance aircraft, the XF-11, near Hughes airfield at Culver City, California. An oil leak caused one of the contra-rotating propellers to reverse pitch, causing the aircraft to yaw sharply and lose altitude rapidly. Hughes attempted to save the aircraft by landing it at the Los Angeles Country Club golf course, but just seconds before reaching the course, the XF-11 started to drop dramatically and crashed in the Beverly Hills neighborhood surrounding the country club.
When the XF-11 finally came to a halt after destroying three houses, the fuel tanks exploded, setting fire to the aircraft and a nearby home at 808 North Whittier Drive, owned by Lt. Col. Charles E. Meyer. Hughes managed to pull himself out of the flaming wreckage but lay beside the aircraft until he was rescued by Marine Master Sgt. William L. Durkin, who happened to be in the area visiting friends. Hughes sustained significant injuries in the crash, including a crushed collarbone, multiple cracked ribs, and a crushed chest with a collapsed left lung that shifted his heart to the right side of the chest cavity, as well as numerous third-degree burns. An oft-told story said that Hughes sent a check to the Marine weekly for the remainder of his life as a sign of gratitude. However, Durkin's daughter denied knowing that he received any money for his rescue of Hughes, while Noah Dietrich asserted that Hughes did send Durkin $200 a month.
Despite his physical injuries, Hughes was proud that his mind was still working. As he lay in his hospital bed, he decided that he did not like the bed's design. He called in plant engineers to design a customized bed, equipped with hot and cold running water, built in six sections, and operated by 30 electric motors, with push-button adjustments. The hospital bed was designed by Hughes specifically to alleviate the pain caused by moving with severe burn injuries. Although he never used the bed that he designed, Hughes' bed served as a prototype for the modern hospital bed. Hughes' doctors considered his recovery almost miraculous.
Many attribute his long-term dependence on opiates to his use of codeine as a painkiller during his convalescence. Yet, Dietrich asserts that Hughes recovered the "hard way - no sleeping pills, no opiates of any kind." The trademark mustache he wore afterward was used to hide a scar on his upper lip resulting from the accident.
The War Production Board (not the military) originally contracted with Henry Kaiser and Hughes to produce the gigantic HK-1 Hercules flying boat for use during World War II to transport troops and equipment across the Atlantic as an alternative to seagoing troop transport ships that were vulnerable to German U-boats. The project was opposed by the military services, thinking it would siphon resources from higher priority programs, but was advocated by Hughes' powerful allies in Washington, D.C. After disputes, Kaiser withdrew from the project and Hughes elected to continue it as the H-4 Hercules. However, the aircraft was not completed until after the end of World War II.
The Hercules was the world's largest flying boat, the largest aircraft made from wood, and had the longest wingspan of any aircraft built up to that time. (The Hercules is no longer the longest nor heaviest aircraft ever built; both of those titles are currently held by the Antonov An-225 "Mriya".)
The Hercules flew only once, on November 2, 1947, with Hughes at the controls, covering one mile (1.6 km) above the water.
The Hercules was nicknamed the "Spruce Goose" by its critics, although it was actually made largely of birch rather than spruce; wood was used instead of aluminum because the contract required that Hughes build the aircraft of "non-strategic materials". It was built in Hughes' Westchester, California, facility. In 1947, Howard Hughes was summoned to testify before the Senate War Investigating Committee to explain why the H-4 development had been so troubled and why $22 million had produced only two prototypes of the XF-11. General Elliott Roosevelt and numerous other USAAF officers were also called to testify in hearings that transfixed the nation during August and November 1947. In hotly disputed testimony over TWA's route awards and malfeasance in the defense acquisition process, Hughes turned the tables on his main interlocutor, Maine Senator Owen Brewster, and the hearings were widely interpreted as a Hughes victory. After being displayed at the harbor of Long Beach, California, the Hercules was moved to McMinnville, Oregon, where it is now part of the Evergreen Aviation & Space Museum.
On November 4, 2017, the 70th anniversary of the only flight of the H-4 Hercules was celebrated at the Evergreen Aviation & Space Museum with Hughes' paternal cousin Michael Wesley Summerlin and Brian Palmer Evans, son of Hughes radio technology pioneer Dave Evans, taking their positions in the recreation of a photo that was previously taken of Hughes, Dave Evans and Joe Petrali on board the H-4 Hercules.
Hughes Aircraft Company, a division of Hughes Tool Company, was founded by Hughes in 1932, in a rented corner of a Lockheed Aircraft Corporation hangar in Burbank, California, to build the H-1 Racer. During and after World War II, Hughes fashioned his company into a major defense contractor. The Hughes Helicopters division started in 1947, when helicopter manufacturer Kellett sold its latest design to Hughes for production. The company became a major American aerospace and defense contractor, manufacturing numerous technology-related products including spacecraft, military aircraft, radar systems, electro-optical systems, the first working laser, aircraft computer systems, missile systems, ion-propulsion engines (for space travel), commercial satellites, and other electronic systems.
In 1948, Hughes created a new division of the company, the Hughes Aerospace Group. The Hughes Space and Communications Group and the Hughes Space Systems Division were later spun off to form their own divisions and ultimately became the Hughes Space and Communications Company in 1961. In 1953, Howard Hughes gave all his stock in the Hughes Aircraft Company to the newly formed Howard Hughes Medical Institute, thereby turning the aerospace and defense contractor into a tax-exempt charitable organization. The Howard Hughes Medical Institute sold Hughes Aircraft in 1985 to General Motors for $5.2 billion. In 1997, General Motors sold Hughes Aircraft to Raytheon, and in 2000 it sold Hughes Space & Communications to Boeing. A combination of Boeing, GM, and Raytheon acquired the Hughes Research Laboratories, which focuses on advanced developments in microelectronics, information and systems sciences, materials, sensors, and photonics; its work spans from basic research to product delivery. It has particularly emphasized capabilities in high-performance integrated circuits, high-power lasers, antennas, networking, and smart materials.
In 1939, at the urging of Jack Frye, president of Transcontinental & Western Airlines, the predecessor of Trans World Airlines (TWA), Hughes began to quietly purchase a majority share of TWA stock, and he took a controlling interest in the airline by 1944. Although he never had an official position with TWA, Hughes handpicked the board of directors, which included Noah Dietrich, and often issued orders directly to airline staff. Hughes Tool Co. purchased the first six Stratoliners Boeing manufactured; Hughes used one personally and let TWA operate the other five.
Hughes is commonly credited as the driving force behind the Lockheed Constellation airliner, which he and Frye ordered in 1939 as a long-range replacement for TWA's fleet of Boeing 307 Stratoliners. Hughes personally financed TWA's acquisition of 40 Constellations for $18 million, the largest aircraft order in history up to that time. The Constellations were among the highest-performing commercial aircraft of the late 1940s and 1950s, and they allowed TWA to pioneer nonstop transcontinental service. During World War II, Hughes leveraged political connections in Washington to obtain rights for TWA to serve Europe, making it the only U.S. carrier with a combination of domestic and transatlantic routes.
After the Boeing 707 was announced, Hughes opted to pursue a more advanced jet aircraft for TWA and approached Convair in late 1954. Convair proposed two concepts to Hughes, but he was unable to decide which to adopt, and Convair eventually abandoned its initial jet project after the mockups of the 707 and Douglas DC-8 were unveiled. Even after competitors such as United Airlines, American Airlines, and Pan American World Airways had placed large orders for the 707, Hughes placed only eight orders for 707s through the Hughes Tool Company and forbade TWA from using the aircraft. After finally beginning to reserve 707 orders in 1956, Hughes embarked on a plan to build his own "superior" jet aircraft for TWA, applied for CAB permission to sell Hughes aircraft to TWA, and began negotiations with the state of Florida to build a manufacturing plant there. However, he abandoned this plan around 1958 and, in the interim, negotiated new contracts for 707 and Convair 880 aircraft and engines totaling $400 million.
The financing of TWA's jet orders precipitated the end of Hughes' relationship with Noah Dietrich, and ultimately Hughes' ouster from control of TWA. Hughes did not have enough cash on hand or future cash flow to pay for the orders, and he did not immediately seek bank financing. His refusal to heed Dietrich's financing advice led to a major rift between the two by the end of 1956. Hughes believed that Dietrich wished to have him committed as mentally incompetent, although the evidence of this is inconclusive. Dietrich resigned by telephone in May 1957 after repeated requests for stock options, which Hughes refused to grant, and with no further progress on the jet financing. As Hughes' mental state worsened, he ordered various tactics to delay payments to Boeing and Convair; his behavior led TWA's banks to insist that he be removed from management as a condition for further financing.
In 1960, Hughes was ultimately forced out of management of TWA, although he continued to own 78% of the company. In 1961, TWA filed suit against Hughes Tool Company, claiming that the latter had violated antitrust law by using TWA as a captive market for aircraft trading. The claim depended largely on obtaining testimony from Hughes himself, but he went into hiding and refused to testify. A default judgment was issued against Hughes Tool Company for $135 million in 1963, but it was overturned by the Supreme Court of the United States in 1973, on the basis that Hughes was immune from prosecution. In 1966, Hughes was forced to sell his TWA shares; the sale brought him $546,549,771.
Hughes acquired control of Boston-based Northeast Airlines in 1962. However, the airline's lucrative route authority between major northeastern cities and Miami was terminated by a CAB decision around the time of the acquisition, and Hughes sold control of the company to a trustee in 1964. Northeast went on to merge with Delta Air Lines in 1972.
In 1970, Hughes acquired San Francisco-based Air West and renamed it Hughes Airwest. Air West had been formed in 1968 by the merger of Bonanza Air Lines, Pacific Air Lines, and West Coast Airlines, all of which operated in the western U.S. By the late 1970s, Hughes Airwest operated an all-jet fleet of Boeing 727-200, Douglas DC-9-10, and McDonnell Douglas DC-9-30 jetliners serving an extensive route network in the western U.S. with flights to Mexico and western Canada as well. By 1980, the airline's route system reached as far east as Houston (Hobby Airport) and Milwaukee with a total of 42 destinations being served. Hughes Airwest was then acquired by and merged into Republic Airlines (1979–1986) in late 1980. Republic was subsequently acquired by and merged into Northwest Airlines which in turn was ultimately merged into Delta Air Lines in 2008.
Hughes made numerous business partnerships through industrialist and producer David Charnay. Their friendship and many partnerships began with the film "The Conqueror", first released in 1956. The film was controversial both as a critical flop and for its radioactive shooting location in St. George, Utah; Hughes eventually bought up nearly every copy of the film he could, only to watch it at home repeatedly for many nights in a row. Charnay later bought Four Star, the film and television production company that produced "The Conqueror". Hughes and Charnay's most publicized dealings were a contested leveraged buyout (LBO) of Air West, the first of its kind in its complexity. Charnay led the buyout group through which Hughes and their partners acquired the airline. Hughes, Charnay, and three others were indicted by U.S. Attorney DeVoe Heaton, who accused the group of conspiring to drive down the stock price of Air West in order to pressure company directors to sell to Hughes. The charges were dismissed after Judge Thompson determined that the indictment failed to allege any illegal action by Hughes, Charnay, or the other accused; he called it one of the worst indictments he had ever seen. The charges were filed a second time by Heaton's assistant, Dean Vernon. Ruling on November 13, 1974, the judge said the case suggested a "reprehensible misuse of the power of great wealth," but held that, in his judicial opinion, "no crime had been committed." The aftermath of the Air West deal was later settled with the SEC by paying former stockholders for alleged losses from the sale of their investment in Air West stock. As noted above, Air West was subsequently renamed Hughes Airwest.
In the years following the dismissal of the charges against Hughes, Charnay, and their partners, Hughes died mid-flight while on the way to Houston from Acapulco. No further attempts were made to file any indictments after his death.
In 1953, reflecting his lifelong interest in science and technology, Hughes launched the Howard Hughes Medical Institute in Miami, Florida (currently located in Chevy Chase, Maryland), with the express goal of basic biomedical research, including trying to understand, in Hughes' words, the "genesis of life itself". Hughes' first will, which he signed in 1925 at the age of 19, stipulated that a portion of his estate should be used to create a medical institute bearing his name. When a major battle with the IRS loomed, Hughes gave all his stock in the Hughes Aircraft Company to the Institute, thereby turning the aerospace and defense contractor into the for-profit arm of a fully tax-exempt charity. Hughes' internist, Verne Mason, who treated Hughes after his 1946 aircraft crash, was chairman of the Institute's medical advisory committee. The Howard Hughes Medical Institute's new board of trustees sold Hughes Aircraft in 1985 to General Motors for $5.2 billion, allowing the Institute to grow dramatically.
In 1954, Hughes transferred Hughes Aircraft to the foundation, which paid Hughes Tool Co. $18,000,000 for the assets. The foundation leased the land from Hughes Tool Co., which then subleased it to Hughes Aircraft Corp. The difference in rent, $2,000,000 per year, became the foundation's working capital.
The deal was the topic of a protracted legal battle between Hughes and the Internal Revenue Service, which Hughes ultimately won. After his death in 1976, many thought that the balance of Hughes' estate would go to the Institute, although it was ultimately divided among his cousins and other heirs, given the lack of a will to the contrary. The HHMI became the fourth-largest private organization and one of the largest devoted to biological and medical research, with an endowment of $20.4 billion.
In 1972, during the Cold War era, Hughes was approached by the CIA through his longtime partner, David Charnay, to help secretly recover the Soviet submarine K-129, which had sunk near Hawaii four years earlier. Hughes' involvement provided the CIA with a plausible cover story: conducting expensive civilian marine research at extreme depths and mining undersea manganese nodules. The recovery plan used the special-purpose salvage vessel "Glomar Explorer". In the summer of 1974, "Glomar Explorer" attempted to raise the Soviet vessel. However, during the recovery a mechanical failure in the ship's grapple caused half of the submarine to break off and fall to the ocean floor. This section is believed to have held many of the most sought-after items, including its code book and nuclear missiles. Two nuclear-tipped torpedoes and some cryptographic machines were recovered, along with the bodies of six Soviet submariners, who were subsequently given formal burial at sea in a filmed ceremony. The operation, known as Project Azorian (but incorrectly referred to by the press as Project Jennifer), became public in February 1975 after the release of secret documents stolen from Hughes' headquarters in a June 1974 burglary. Although he lent his name and his company's resources to the operation, Hughes and his companies had no operational involvement in the project. The "Glomar Explorer" was eventually acquired by Transocean and was sent to the scrap yard in 2015 during a large decline in oil prices.
In 1929, Hughes' wife, Ella, returned to Houston and filed for divorce. Hughes dated many famous women, including Billie Dove, Faith Domergue, Bette Davis, Ava Gardner, Olivia de Havilland, Katharine Hepburn, Hedy Lamarr, Ginger Rogers, Janet Leigh, Rita Hayworth, Mamie Van Doren and Gene Tierney. He also proposed to Joan Fontaine several times, according to her autobiography "No Bed of Roses". Jean Harlow accompanied him to the premiere of "Hell's Angels", but Noah Dietrich wrote many years later that the relationship was strictly professional, as Hughes apparently disliked Harlow personally. In his 1971 book, "Howard: The Amazing Mr. Hughes", Dietrich said that Hughes genuinely liked and respected Jane Russell, but never sought romantic involvement with her. According to Russell's autobiography, however, Hughes once tried to bed her after a party. Russell (who was married at the time) refused him, and Hughes promised it would never happen again. The two maintained a professional and private friendship for many years. Hughes remained good friends with Tierney who, after his failed attempts to seduce her, was quoted as saying "I don't think Howard could love anything that did not have a motor in it." Later, when Tierney's daughter Daria was born deaf and blind and with a severe learning disability because of Tierney's being exposed to rubella during her pregnancy, Hughes saw to it that Daria received the best medical care and paid all expenses.
In 1933, Hughes purchased, sight unseen, a luxury steam yacht named the "Rover", previously owned by British shipping magnate Lord Inchcape. "I have never seen the "Rover"," Hughes said, "but bought it on the blueprints, photographs and the reports of Lloyd's surveyors. My experience is that the English are the most honest race in the world." Hughes renamed the yacht "Southern Cross" and later sold her to Swedish entrepreneur Axel Wenner-Gren.
On July 11, 1936, Hughes struck and killed a pedestrian named Gabriel S. Meyer with his car at the corner of 3rd Street and Lorraine in Los Angeles. After the crash, Hughes was taken to the hospital and certified as sober, but an attending doctor made a note that Hughes had been drinking. A witness to the crash told police that Hughes was driving erratically and too fast, and that Meyer had been standing in the safety zone of a streetcar stop. Hughes was booked on suspicion of negligent homicide and held overnight in jail until his attorney, Neil S. McCarthy, obtained a writ of habeas corpus for his release pending a coroner's inquest. By the time of the coroner's inquiry, however, the witness had changed his story and claimed that Meyer had moved directly in front of Hughes' car. Nancy Bayly (Watts), who was in the car with Hughes at the time of the crash, corroborated this version of the story. On July 16, 1936, Hughes was held blameless by a coroner's jury at the inquest into Meyer's death. Hughes told reporters outside the inquiry, "I was driving slowly and a man stepped out of the darkness in front of me."
On January 12, 1957, Hughes married actress Jean Peters at a small hotel in Tonopah, Nevada. The couple had met in the 1940s, before Peters became a film actress. They had a highly publicized romance in 1947 and there was talk of marriage, but she said she could not combine it with her career. Some later claimed that Peters was "the only woman [Hughes] ever loved," and he reportedly had his security officers follow her everywhere, even when the two were not in a relationship. Such reports were confirmed by actor Max Showalter, who became a close friend of Peters while shooting "Niagara" (1953). Showalter said in an interview that because he frequently met with Peters, Hughes' men threatened to ruin his career if he did not leave her alone.
Shortly before the 1960 presidential election, Richard Nixon was alarmed by the revelation that his brother, Donald, had received a $205,000 loan from Hughes. It has long been speculated that Nixon's drive to learn what the Democrats were planning in 1972 was based in part on his belief that the Democrats knew about a later bribe that his friend Bebe Rebozo had received from Hughes after Nixon took office.
In late 1971, Donald Nixon was collecting intelligence for his brother in preparation for the upcoming presidential election. One of his sources was John H. Meier, a former business adviser of Hughes who had also worked with Democratic National Committee Chairman Larry O'Brien.
Meier, in collaboration with former Vice President Hubert Humphrey and others, wanted to feed misinformation to the Nixon campaign. Meier told Donald that he was sure the Democrats would win the election because Larry O'Brien had a great deal of information on Richard Nixon's illicit dealings with Howard Hughes that had never been released; O'Brien did not actually have any such information, but Meier wanted Nixon to think he did. Donald told his brother that O'Brien was in possession of damaging Hughes information that could destroy his campaign. Terry Lenzner, who was the chief investigator for the Senate Watergate Committee, speculates that it was Nixon's desire to know what O'Brien knew about Nixon's dealings with Hughes that may have partially motivated the Watergate break-in.
Hughes was eccentric, and suffered from severe obsessive-compulsive disorder (OCD).
Dietrich wrote that Hughes always ate the same thing for dinner, a New York strip steak cooked medium rare, dinner salad, and peas, but only the smaller ones, pushing the larger ones aside. For breakfast, Hughes wanted his eggs cooked the way his family cook, Lily, made them. Hughes had a "phobia about germs", and "his passion for secrecy became a mania."
While directing "The Outlaw", Hughes became fixated on a small flaw in one of Jane Russell's blouses, claiming that the fabric bunched up along a seam and gave the appearance of two nipples on each breast. He wrote a detailed memorandum to the crew on how to fix the problem. Richard Fleischer, who directed "His Kind of Woman" with Hughes as executive producer, wrote at length in his autobiography about the difficulty of dealing with the tycoon. In his book, "Just Tell Me When to Cry", Fleischer explained that Hughes was fixated on trivial details and was alternately indecisive and obstinate. He also revealed that Hughes' unpredictable mood swings made him wonder if the film would ever be completed.
In 1958, Hughes told his aides that he wanted to screen some movies at a film studio near his home. He stayed in the studio's darkened screening room for more than four months, never leaving. He ate only chocolate bars and chicken and drank only milk, and was surrounded by dozens of Kleenex boxes that he continuously stacked and re-arranged. He wrote detailed memos to his aides giving them explicit instructions neither to look at him nor speak to him unless spoken to. Throughout this period, Hughes sat fixated in his chair, often naked, continually watching movies. When he finally emerged in the summer of 1958, his hygiene was terrible. He had neither bathed nor cut his hair and nails for weeks; this may have been due to allodynia, which results in a pain response to stimuli that would normally not cause pain.
After the screening room incident, Hughes moved into a bungalow at the Beverly Hills Hotel where he also rented rooms for his aides, his wife, and numerous girlfriends. He would sit naked in his bedroom with a pink hotel napkin placed over his genitals, watching movies. This may have been because Hughes found the touch of clothing painful due to allodynia. He may have watched movies to distract himself from his pain—a common practice among patients with intractable pain, especially those who do not receive adequate treatment. In one year, Hughes spent an estimated $11 million at the hotel.
Hughes began purchasing restaurant chains and four-star hotels that had been founded within the state of Texas. This included, if only for a short period, many little-known franchises that are now out of business. He placed ownership of the restaurants with the Howard Hughes Medical Institute, and all licenses were resold shortly after.
Another time, he became obsessed with the 1968 film "Ice Station Zebra", and had it run on a continuous loop in his home. According to his aides, he watched it 150 times. Feeling guilty about the commercial, critical, and literal toxicity of his film "The Conqueror", he bought every copy for $12 million and watched it on repeat. Paramount Pictures acquired the rights to the film in 1979, three years after his death.
Hughes insisted on using tissues to pick up objects to insulate himself from germs. He would also notice dust, stains, or other imperfections on people's clothes and demand that they take care of them. Once one of the most visible men in America, Hughes ultimately vanished from public view, although tabloids continued to follow rumors of his behavior and whereabouts. He was reported to be terminally ill, mentally unstable, or even dead.
Injuries from numerous aircraft crashes caused Hughes to spend much of his later life in pain, and he eventually became addicted to codeine, which he injected intramuscularly. Hughes had his hair cut and nails trimmed only once a year, likely due to the pain caused by the RSD/CRPS, which was caused by the plane crashes. He also stored his urine in bottles.
The wealthy and aging Hughes, accompanied by his entourage of personal aides, began moving from one hotel to another, always taking up residence in the top floor penthouse. In the last ten years of his life, 1966 to 1976, Hughes lived in hotels in many cities—including Beverly Hills, Boston, Las Vegas, Nassau, Freeport, and Vancouver.
On November 24, 1966 (Thanksgiving Day), Hughes arrived in Las Vegas by railroad car and moved into the Desert Inn. Because he refused to leave the hotel and to avoid further conflicts with the owners, Hughes bought the Desert Inn in early 1967. The hotel's eighth floor became the nerve center of Hughes' empire and the ninth-floor penthouse became his personal residence. Between 1966 and 1968, he bought several other hotel-casinos, including the Castaways, New Frontier, the Landmark Hotel and Casino, and the Sands. He bought the small Silver Slipper casino for the sole purpose of moving its trademark neon silver slipper, which was visible from Hughes' bedroom and had apparently kept him awake at night.
After Hughes left the Desert Inn, hotel employees discovered that his drapes had not been opened during the time he lived there and had rotted through.
Hughes wanted to change the image of Las Vegas to something more glamorous. He wrote in a memo to an aide, "I like to think of Las Vegas in terms of a well-dressed man in a dinner jacket and a beautifully jeweled and furred female getting out of an expensive car." Hughes bought several local television stations (including KLAS-TV).
Hughes' considerable business holdings were overseen by a small panel unofficially dubbed "The Mormon Mafia" because of the many Latter-day Saints on the committee, led by Frank William Gay. In addition to supervising day-to-day business operations and Hughes' health, they also went to great pains to satisfy Hughes' every whim. For example, Hughes once became fond of Baskin-Robbins' banana nut ice cream, so his aides sought to secure a bulk shipment for him, only to discover that Baskin-Robbins had discontinued the flavor. They put in a request for the smallest amount the company could provide for a special order, 350 gallons (1,300 L), and had it shipped from Los Angeles. A few days after the order arrived, Hughes announced he was tired of banana nut and wanted only French vanilla ice cream. The Desert Inn ended up distributing free banana nut ice cream to casino customers for a year. In a 1996 interview, ex–Howard Hughes Chief of Nevada Operations Robert Maheu said, "There is a rumor that there is still some banana nut ice cream left in the freezer. It is most likely true."
As an owner of several major Las Vegas businesses, Hughes wielded much political and economic influence in Nevada and elsewhere. During the 1960s and early 1970s, he disapproved of underground nuclear testing at the Nevada Test Site. Hughes was concerned about the risk from residual nuclear radiation, and attempted to halt the tests. When the tests finally went through despite Hughes' efforts, the detonations were powerful enough that the entire hotel where he was staying trembled due to the shock waves. In two separate, last-ditch maneuvers, Hughes instructed his representatives to offer million-dollar bribes to both Presidents Lyndon B. Johnson and Richard Nixon.
In 1970, Jean Peters filed for divorce. The two had not lived together for many years. Peters requested a lifetime alimony payment of $70,000 a year, adjusted for inflation, and waived all claims to Hughes' estate. Hughes offered her a settlement of over a million dollars, but she declined it. Hughes did not insist on a confidentiality agreement from Peters as a condition of the divorce. Aides reported that Hughes never spoke ill of her. She refused to discuss her life with Hughes and declined several lucrative offers from publishers and biographers. Peters would state only that she had not seen Hughes for several years before their divorce and had dealt with him only by phone.
Hughes was living in the Intercontinental Hotel near Lake Managua in Nicaragua, seeking privacy and security, when a magnitude 6.5 earthquake damaged Managua in December 1972. As a precaution, Hughes moved first to a rather large tent, facing the hotel, then after a few days there to the Nicaraguan National Palace and stayed there as a guest of Anastasio Somoza Debayle before leaving for Florida on a private jet the following day. He subsequently moved into the Penthouse at the Xanadu Princess Resort on Grand Bahama Island, which he had recently purchased. He lived almost exclusively in the penthouse of the Xanadu Beach Resort & Marina for the last four years of his life.
Hughes spent a total of $300 million on his many properties in Las Vegas.
In 1972, author Clifford Irving caused a media sensation when he claimed he had co-written an authorized autobiography of Hughes. Hughes was so reclusive that he did not immediately publicly refute Irving's statement, leading many to believe the Irving book was genuine. However, before the book's publication, Hughes finally denounced Irving in a teleconference and the entire project was eventually exposed as a hoax. Irving was later convicted of fraud and spent 17 months in prison. In 1974, the Orson Welles film "F for Fake" included a section on the Hughes biography hoax, leaving a question open as to whether it was actually Hughes who took part in the teleconference (since so few people had actually heard or seen him in recent years). In 1977, "The Hoax" by Clifford Irving was published in the United Kingdom, telling his story of these events. The 2006 film "The Hoax", starring Richard Gere, is also based on these events.
Hughes is reported to have died on April 5, 1976, at 1:27 p.m. on board an aircraft, Learjet 24B N855W, owned by Robert Graf and piloted by Jeff Abrams. He was en route from his penthouse at the Acapulco Fairmont Princess Hotel in Mexico to the Methodist Hospital in Houston.
His reclusiveness and possibly his drug use made him practically unrecognizable. His hair, beard, fingernails, and toenails were long, his tall frame had become emaciated, and the FBI had to use fingerprints to conclusively identify the body. Howard Hughes' alias, John T. Conover, was used when his body arrived at a morgue in Houston on the day of his death.
A subsequent autopsy recorded kidney failure as the cause of death. Hughes was in extremely poor physical condition at the time of his death. He suffered from malnutrition and was covered in bedsores. While his kidneys were damaged, his other internal organs, including his brain, were deemed healthy, with no visible damage. X-rays revealed five broken-off hypodermic needles in the flesh of his arms. To inject codeine into his muscles, Hughes had used glass syringes with metal needles that easily became detached.
Hughes is buried next to his parents at Glenwood Cemetery in Houston.
Approximately three weeks after Hughes' death, a handwritten will was found on the desk of an official of The Church of Jesus Christ of Latter-Day Saints in Salt Lake City, Utah. The so-called "Mormon Will" gave $1.56 billion to various charitable organizations (including $625 million to the Howard Hughes Medical Institute), nearly $470 million to the upper management in Hughes' companies and to his aides, $156 million to first cousin William Lummis, and $156 million split equally between his two ex-wives Ella Rice and Jean Peters.
A further $156 million was endowed to a gas-station owner, Melvin Dummar, who told reporters that in 1967, he found a disheveled and dirty man lying along U.S. Route 95, just north of Las Vegas. The man asked for a ride to Vegas. Dropping him off at the Sands Hotel, Dummar said the man told him that he was Hughes. Dummar later claimed that days after Hughes' death a "mysterious man" appeared at his gas station, leaving an envelope containing the will on his desk. Unsure if the will was genuine and unsure of what to do, Dummar left the will at the LDS Church office. In 1978, a Nevada court ruled the Mormon Will a forgery, and officially declared that Hughes had died intestate (without a valid will). Dummar's story was later adapted into Jonathan Demme's film "Melvin and Howard" in 1980.
Hughes' $2.5 billion estate was eventually split in 1983 among 22 cousins, including William Lummis, who serves as a trustee of the Howard Hughes Medical Institute. The Supreme Court of the United States ruled that Hughes Aircraft was owned by the Howard Hughes Medical Institute, which sold it to General Motors in 1985 for $5.2 billion. The court rejected suits by the states of California and Texas that claimed they were owed inheritance tax.
In 1984 Hughes' estate paid an undisclosed amount to Terry Moore, who claimed she and Hughes had secretly married on a yacht in international waters off Mexico in 1949 and never divorced. Moore never produced proof of a marriage, but her book, "The Beauty and the Billionaire," became a bestseller.
The moving image collection of Howard Hughes is held at the Academy Film Archive. The collection consists of over 200 items including 35mm and 16mm elements of feature films, documentaries, and television programs made or accumulated by Hughes.
Hugh Binning
Hugh Binning (1627–1653) was a Scottish philosopher and theologian. He was born in Scotland during the reign of Charles I and was ordained in the (Presbyterian) Church of Scotland. He died in 1653, during the time of Oliver Cromwell and the Commonwealth of England.
Hugh Binning was the son of John Binning of Dalvennan, Straiton, and Margaret M'Kell. Margaret was the daughter of Rev. Matthew M'Kell, who was a minister in the parish of Bothwell, Scotland, and sister of Hugh M'Kell, a minister in Edinburgh.
Binning was born on his father's estate in Dalvennan, Straiton, in the shire of Ayr. The family owned other lands in the parishes of Straiton and Colmonell as well as Maybole in Carrick.
A precocious child, Binning was admitted to the study of philosophy at the University of Glasgow at age thirteen. Binning has been described as "an extraordinary instance of precocious learning and genius."
In 1645, James Dalrymple, 1st Viscount of Stair, who was Hugh's master (primary professor) in the study of philosophy, announced he was retiring from the University of Glasgow. Dalrymple was afterward President of the Court of Session. After a national search for a replacement on the faculty, three men were selected to compete for the position. Binning was one of those selected, but was at a disadvantage because of his extreme youth and because he was not of noble birth. However, he had strong support from the existing faculty, who suggested that the candidates speak extemporaneously on any topic of the candidate's choice. After hearing Hugh speak, the other candidates withdrew, making Hugh a regent and professor of philosophy while he was still 18 years old.
On 7 February 1648 (at the age of 21), Hugh was appointed an Advocate before the Court of Sessions (an attorney). In the same year, he married Barbara Simpson (sometimes called Mary), daughter of Rev. James Simpson, a minister in Ireland. Their son, John, was born in 1650.
Binning was called to the ministry of Govan on 25 October 1649, as the successor of Mr. William Wilkie. His ordination took place on 8 January 1649, at the age of 22, with Mr. David Dickson, one of the theological professors at the College of Glasgow and author of "Therapeutica Sacra", presiding. He held his regency until 14 May that year. At that time Govan was a separate town rather than part of Glasgow.
Hugh died around September 1653 and was buried in the churchyard of Govan, where Patrick Gillespie, then principal of the University of Glasgow, ordered a monument with a Latin inscription.
Hugh's widow, Barbara (sometimes called Mary), then remarried James Gordon, an Anglican priest at Comber in Ireland. Together they had a daughter, Jean, who married Daniel MacKenzie. MacKenzie was on the winning side of the Battle of Bothwell Bridge, serving as an ensign under Lieutenant-Colonel William Ramsay (who became the third Earl of Dalhousie) in the Earl of Mar's Regiment of Foot.
Binning's son, John Binning, married Hanna Keir, who was born in Ireland. The Binnings were Covenanters, a resistance movement that objected to the return of Charles II (who was received into the Catholic Church on his deathbed). They were on the losing side in the 1679 Battle of Bothwell Bridge. Most of the rebels who were not executed were exiled to the Americas; about 30 Covenanters were exiled to the Carolinas on the Carolina Merchant in 1684. After the battle, John and Hanna were separated.
In the aftermath of the battle at Bothwell Bridge, Binning's widow (now Barbara Gordon) tried to reclaim the family estate at Dalvennan by claiming that John and his wife owed his stepfather a considerable sum of money. The legal action was successful, and Dalvennan became the possession of John's half-sister Jean and her husband Daniel MacKenzie. In addition, Jean came into possession of Hanna Keir's property in Ireland.
By 1683, Jean was widowed. John Binning was branded a traitor, was sentenced to death and forfeited his property to the Crown. John's wife (Hanna Keir) was branded as a traitor and forfeited her property in Ireland. In 1685, Jean "donated" the Binning family's home at Dalvennan and other properties, along with the Keir properties to Roderick MacKenzie, who was a Scottish advocate of James II (James VII of Scotland), and the baillie of Carrick. According to an act of the Scottish Parliament, Roderick MacKenzie was also very effective in "suppressing the rebellious, fanatical party in the western and other shires of this realm, and putting the laws to vigorous execution against them".
Since Bothwell Bridge, Hanna had been hiding from the authorities. In 1685, Hanna was in Edinburgh, where she was found during a sweep for subversives and imprisoned in the Tolbooth of Edinburgh, a combination city hall and prison. Those arrested with Hanna were exiled to North America; however, she developed dysentery and remained behind. By 1687, near death, Hanna petitioned the Privy Council of Scotland for her release; she was exiled to her family in Ireland, where she died around 1692.
In 1690, the Scottish Parliament rescinded John's fines and forfeiture, but he was unable to recover his family's estates, the courts suggesting that he had relinquished his claim to Dalvennan in exchange for forgiveness of debt, rather than forfeiture.
There is little documentation about John after his wife's death. John received a small income from royalties on his father's works after parliament extended copyrights on Binning's writings to him. However, the income was not significant and John made several petitions to the Scottish parliament for money, the last occurring in 1717. It is thought that he died in Somerset county, in southwestern England.
He died of consumption at the age of 26 in September 1653. He was remarkably popular as a preacher, having been considered "the most accomplished philosopher and divine in his time, and styled the Scottish Cicero." He married (cont. 17 May 1650) Mary (who died at Paisley in 1694) and had a son, John of Dalvennan. She was the daughter of Richard Simson, minister of Sprouston. After John's early death Mary married her second husband, James Gordon, minister of Comber, in Ireland. A marble tablet, with an inscription in classical Latin, was erected to his memory by his friend Mr Patrick Gillespie, who was then Principal of the University of Glasgow. It has been placed in the vestibule of the present parish church. The whole of his works are posthumous publications.
He was a follower of James Dalrymple. In later life, he was well known as an evangelical Christian.
Hugh Binning was born two years after Charles I became monarch of England, Ireland, and Scotland. At the time, each was an independent country sharing the same monarch. The Acts of Union 1707 integrated Scotland and England to form the Kingdom of Great Britain, and the Acts of Union 1800 integrated Ireland to form the United Kingdom of Great Britain and Ireland.
The period was dominated by both political and religious strife between the three independent countries. Religious disputes centered on questions such as whether religion was to be dictated by the monarch or was to be the choice of the people, and whether individuals had a direct relationship with God or needed to use an intermediary. Civil disputes centered on debates about the extent of the King's power (a question of the Divine right of kings), and specifically whether the King had the right to raise taxes and armed forces without the consent of the governed. These wars ultimately changed the relationship between king and subjects.
In 1638, the General Assembly of the Church of Scotland voted to remove bishops and the "Book of Common Prayer" that had been introduced by Charles I to impose the Anglican model on the Presbyterian Church of Scotland. Public riots followed, culminating in the Wars of the Three Kingdoms, an interrelated series of conflicts that took place in the three countries. The first conflict, which was also the first of the Bishops' Wars, took place in 1639 and was a single border skirmish between England and Scotland, also known as "the war the armies did not want to fight."
To maintain his English power base, Charles I made secret alliances with Catholic Ireland and Presbyterian Scotland to invade Anglican England, promising that each country could establish their own separate state religion. Once these secret entreaties became known to the English Long Parliament, the Congregationalist faction (of which Oliver Cromwell was a primary spokesman) took matters into its own hands and Parliament established an army separate from the King. Charles I was executed in January 1649, which led to the rule of Cromwell and the establishment of the Commonwealth. The conflicts concluded with The English Restoration of the monarchy and the return of Charles II in 1660.
The Act of Classes was passed by the Parliament of Scotland on 23 January 1649; the act banned Royalists (people supporting the monarchy) from holding political or military office. In exile, Charles II signed the Treaty of Breda (1650) with the Scottish Parliament; among other things, the treaty established Presbyterianism as the national religion. Charles was crowned King of Scots at Scone in January 1651. By September 1651, Scotland was annexed by England, its legislative institutions abolished, Presbyterianism dis-established, and Charles was forced into exile in France.
The Scottish Parliament rescinded the Act of Classes in 1651, which produced a split within Scottish society. The sides of the conflict were called the Resolutioners (who supported the rescission of the act, the monarchy, and the Scottish House of Stewart) and the Protesters (who supported Cromwell and the Commonwealth). Binning joined the Protesters in 1651. When Cromwell sent troops to Scotland and also attempted to disestablish Presbyterianism and the Church of Scotland, Binning spoke out against him.
On Saturday 19 April 1651, Cromwell entered Glasgow, and the next day he heard a sermon by three ministers who condemned him for invading Scotland. That evening, Cromwell summoned those ministers, among others, to a debate on the issue: a discussion of some of the controverted points of the times, held in his presence between his chaplains (the learned Dr John Owen, Joseph Caryl, and others) on the one side, and some Scots ministers on the other. Binning, who was one of the disputants, apparently nonplussed the Independents, which led Cromwell to ask who the learned and bold young man was. Told it was Binning, he said: "He hath bound well, indeed, but," laying his hand on his sword, "this will lose all again." The late Mr. Orme was of the opinion that there is nothing improbable in this account of the meeting; that such a meeting took place is certain, as appears from two letters written by Principal Robert Baillie, who was then Professor of Theology at the University of Glasgow. At the debate, Rev. Hugh Binning is said to have out-debated Cromwell's ministers so completely that he silenced them.
Hugh Binning's political views were based on his theology. Binning was a Covenanter, a movement that began in Scotland at Greyfriars Kirkyard in 1638 with the National Covenant and continued with the 1643 Solemn League and Covenant, in effect a treaty between the English Long Parliament and Scotland for the preservation of the reformed religion in exchange for troops to confront the threat of Irish Catholic troops joining the Royalist army. Binning could also be described as a Protester; both political positions were taken because of their religious implications. However, though he saw the evils of the politics of his day, he was not a "fomenter of factions", and wrote "A Treatise of Christian Love" as a response.
Because of the tumultuous time in which Hugh Binning lived, politics and religion were inexorably intertwined. Binning was a Calvinist and follower of John Knox. By profession, Binning was trained as a philosopher, and he believed that philosophy was the servant of theology. He thought that philosophy and theology should be taught in parallel. Binning's writing, which is primarily a collection of his sermons, "forms an important bridge between the 17th century, when philosophy in Scotland was heavily dominated by Calvinism, and the 18th century when figures such as Francis Hutcheson re-asserted a greater degree of independence between the two and allied philosophy with the developing human sciences."
Religiously, Hugh Binning was, what we would call today, an Evangelical Calvinist. He spoke on the primacy of God's love as the ground of salvation:
"... our salvation is not the business of Christ alone, but the whole Godhead is interested in it deeply, so deeply that you cannot say who loves it most, or who likes it most. The Father is the very fountain of it, his love is the spring of all."
With regard to the extent of the atonement, Hugh Binning did not hold that the offer of redemption applied only to the few that are elect, but said that "the ultimate ground of faith is in the electing will of God." In Scotland during the 1600s, the questions concerning atonement revolved around the terms in which the offer was expressed.
Binning believed that "forgiveness is based on Christ's death, understood as a satisfaction and as a sacrifice: 'If he had pardoned sin without any satisfaction what rich grace it had been! But truly, to provide the Lamb and sacrifice himself, to find out the ransom, and to exact it of his own Son, in our name, is a testimony of mercy and grace far beyond that. But then, his justice is very conspicuous in this work'."
All of the works of Hugh Binning were published posthumously and were primarily collections of his sermons. Of his speaking style, it was said: "There is originality without any affectation, a rich imagination, without anything fanciful or extravagant, the utmost simplicity, without anything mean or trifling."
His published works include "The Common Principles of the Christian Religion" and "A Treatise of Christian Love".
Henry Home, Lord Kames
Henry Home, Lord Kames (1696 – 27 December 1782) was a Scottish advocate, judge, philosopher, writer and agricultural improver. A central figure of the Scottish Enlightenment, a founding member of the Philosophical Society of Edinburgh, and active in the Select Society, he acted as patron to some of the most influential thinkers of the Scottish Enlightenment, including the philosopher David Hume, the economist Adam Smith, the writer James Boswell, the chemical philosopher William Cullen, and the naturalist John Walker.
He was born at Kames House, between Eccles and Birgham, Berwickshire, the son of George Home of Kames. He was educated at home by a private tutor until the age of 16.
In 1712 he was apprenticed as a lawyer under a Writer to the Signet in Edinburgh, and was called to the Scottish bar as an advocate in 1724. He soon acquired a reputation through a number of publications on civil and Scottish law, and was one of the leaders of the Scottish Enlightenment. In 1752, he was "raised to the bench", thus acquiring the title of Lord Kames.
Kames held a primary interest in the production of linen in Scotland and encouraged the development of linen manufacture. He was one of the original proprietors of the British Linen Company, and a director between 1754 and 1756.
Home was on the panel of judges in the Joseph Knight case which ruled that there could be no slavery in Scotland.
His address in 1775 is shown as New Street on the Canongate. Cassell's clarifies that this was a very fine mansion at the head of the street, on its east side, facing onto the Canongate.
He is buried in the Home-Drummond plot at Kincardine-in-Menteith just west of Blair Drummond.
Home wrote much about the importance of property to society. In his "Essay Upon Several Subjects Concerning British Antiquities", written just after the Jacobite rising of 1745, he showed that the politics of Scotland were based not on loyalty to Kings, as the Jacobites had said, but on the royal land grants that lay at the base of feudalism, the system whereby the sovereign maintained "an immediate hold of the persons and property of his subjects".
In "Historical Law Tracts" Home described a four-stage model of social evolution that became "a way of organizing the history of Western civilization". The first stage was that of the hunter-gatherer, wherein families avoided each other as competitors for the same food. The second was that of the herder of domestic animals, which encouraged the formation of larger groups but did not result in what Home considered a true society. No laws were needed at these early stages except those given by the head of the family, clan, or tribe. Agriculture was the third stage, wherein new occupations such as "plowman, carpenter, blacksmith, stonemason" made "the industry of individuals profitable to others as well as to themselves", and a new complexity of relationships, rights, and obligations required laws and law enforcers. A fourth stage evolved with the development of market towns and seaports, "commercial society", bringing yet more laws and complexity but also providing more benefit. Lord Kames could see these stages within Scotland itself, with the pastoral Highlands, the agricultural Lowlands, the "polite" commercial towns of Glasgow and Edinburgh, and in the Western Isles a remaining culture of rude huts where fishermen and gatherers of seaweed eked out their subsistence living.
Home was a polygenist: he believed God had created different races on earth in separate regions. In his 1774 book "Sketches of the History of Man", Home claimed that environment, climate, or the state of society could not account for racial differences, so the races must have come from distinct, separate stocks.
The above studies created the genre of the story of civilization, defined the fields of anthropology and sociology, and thereby shaped the modern study of history for two hundred years.
In the popular book "Elements of Criticism" (1762) Home interrogated the notion of fixed or arbitrary rules of literary composition, and endeavoured to establish a new theory based on the principles of human nature. The late eighteenth-century tradition of sentimental writing was associated with his notion that 'the genuine rules of criticism are all of them derived from the human heart'. Prof Neil Rhodes has argued that Lord Kames played a significant role in the development of English as an academic discipline in the Scottish universities.
He enjoyed intelligent conversation and cultivated a large number of intellectual associates, among them John Home, David Hume and James Boswell. Lord Monboddo was also a frequent debating opponent of Kames, although the two usually had a fiercely competitive and adversarial relationship.
He was married to Agatha Drummond of Blair Drummond. Their children included George Drummond-Home. | https://en.wikipedia.org/wiki?curid=14064 |
Harwich
Harwich is a town in Essex, England, and one of the Haven ports, located on the coast with the North Sea to the east. It is in the Tendring district. Nearby places include Felixstowe to the northeast, Ipswich to the northwest, Colchester to the southwest and Clacton-on-Sea to the south. It is the northernmost coastal town within Essex.
Its position on the estuaries of the Stour and Orwell rivers and its usefulness to mariners as the only safe anchorage between the Thames and the Humber led to a long period of maritime significance, both civil and military. The town became a naval base in 1657 and was heavily fortified, with Harwich Redoubt, Beacon Hill Battery, and Bath Side Battery.
Harwich is the likely launch point of the "Mayflower" which carried English Puritans to North America, and is the presumed birthplace of "Mayflower" captain Christopher Jones.
Harwich today is contiguous with Dovercourt and the two, along with Parkeston, are often referred to collectively as Harwich.
The town's name means "military settlement", from Old English "here-wic".
The town received its charter in 1238, although there is evidence of earlier settlement – for example, a record of a chapel in 1177, and some indications of a possible Roman presence.
The town was the target of an abortive raid by French forces under Ayton Doria on 24 March 1339 during the Hundred Years' War.
Because of its strategic position, Harwich was the target for the invasion of Britain by William of Orange on 11 November 1688. However, unfavourable winds forced his fleet to sail into the English Channel instead and eventually land at Torbay. Due to the involvement of the Schomberg family in the invasion, Charles Louis Schomberg was made Marquess of Harwich.
Writer Daniel Defoe devotes a few pages to the town in "A tour thro' the Whole Island of Great Britain". Visiting in 1722, he noted its formidable fort and harbour "of a vast extent". The town, he recounts, was also known for an unusual chalybeate spring rising on Beacon Hill (a promontory to the north-east of the town), which "petrified" clay, allowing it to be used to pave Harwich's streets and build its walls. The locals also claimed that "the same spring is said to turn wood into iron", but Defoe put this down to the presence of "copperas" in the water. Regarding the atmosphere of the town, he states: "Harwich is a town of hurry and business, not much of gaiety and pleasure; yet the inhabitants seem warm in their nests and some of them are very wealthy".
Harwich played an important part in the Napoleonic Wars and, more especially, in the two world wars. Of particular note:
1793–1815—Post Office station for communication with Europe; one of the embarkation and evacuation bases for expeditions to Holland in 1799, 1809 and 1813/14; a base for capturing enemy privateers. The dockyard built many ships for the Navy, including HMS "Conqueror", which captured the French admiral Villeneuve at the Battle of Trafalgar. The Redoubt and the now-demolished Ordnance Building date from that era.
1914–18—base for the Royal Navy's Harwich Force light cruisers and destroyers under Commodore Tyrwhitt, and for British submarines. In November 1918 the German U-boat fleet surrendered to the Royal Navy in the harbour.
1939–1945—one of the main East Coast minesweeping and destroyer bases, and at one period a base for British and French submarines; assembled fleets for the Dutch and Dunkirk evacuations and the follow-up to D-Day; unusually, a target in 1940 for Italian bombers.
Harwich Dockyard was established as a Royal Navy Dockyard in 1652. It ceased to operate as a Royal Dockyard in 1713 (though a Royal Navy presence was maintained until 1829). During the various wars with France and Holland, through to 1815, the dockyard was responsible for both building and repairing numerous warships. HMS "Conqueror", a 74-gun ship completed in 1801, captured the French admiral Villeneuve at Trafalgar.
The yard was then a semi-private concern, with the actual shipbuilding contracted to Joseph Graham, who was sometimes mayor of the town.
During World War II parts of Harwich were again requisitioned for naval use and ships were based at HMS "Badger"; "Badger" was decommissioned in 1946, but the Royal Naval Auxiliary Service maintained a headquarters on the site until 1992.
The Royal Navy no longer has a presence in Harwich but Harwich International Port at nearby Parkeston continues to offer regular ferry services to the Hook of Holland (Hoek van Holland) in the Netherlands. Mann Lines operates a roll-on roll-off ferry service from Harwich Navyard to Bremerhaven, Cuxhaven, Paldiski and Turku.
Many operations of the Port of Felixstowe and of Trinity House, the lighthouse authority, are managed from Harwich.
The Mayflower railway line serves Harwich, and there are three operational passenger stations. The line also allows freight trains to access the port.
The port is famous for the phrase "Harwich for the Continent", seen on road signs and in London & North Eastern Railway (LNER) advertisements.
At least three pairs of lighthouses have been built over recent centuries as leading lights, to help guide vessels into Harwich. The earliest pair were wooden structures: the High Light stood on top of the old Town Gate, whilst the Low Light (featured in a painting by Constable) stood on the foreshore. Both were coal-fired.
In 1818 these were replaced by stone structures, designed by John Rennie Senior, which can still be seen today (they no longer function as lighthouses: one houses the town's maritime museum, while the other was, as of 2015, also being converted into a museum). They were owned by General Rebow of Wivenhoe Park, who was able to charge 1d per ton on all cargo entering the port for upkeep of the lights. In 1836 Rebow's lease on the lights was purchased by Trinity House, but in 1863 they were declared redundant due to a change in the position of the channel used by ships entering and leaving the port, caused by shifting sands.
They were in turn replaced by the pair of cast iron lights at nearby Dovercourt; these too remain in situ, but were decommissioned (again due to shifting of the channel) in 1917.
Despite, or perhaps because of, its small size, Harwich is highly regarded in terms of architectural heritage, and the whole of the older part of the town, excluding Navyard Wharf, is a conservation area.
The regular street plan with principal thoroughfares connected by numerous small alleys indicates the town's medieval origins, although many buildings of this period are hidden behind 18th century facades.
The extant medieval structures are largely private homes. A sailmaker's house on Kings Head Street, thought to have been built circa 1600, is unique in the town. Notable public buildings include the parish church of St. Nicholas (1821) in a restrained Gothic style, with many original furnishings, including a somewhat altered organ in the west end gallery. There is also the Guildhall of 1769, the only Grade I listed building in Harwich.
The Pier Hotel of 1860 and the building that was the Great Eastern Hotel of 1864 can both be seen on the quayside, both reflecting the town's new importance to travellers following the arrival of the Great Eastern Main Line from Colchester in 1854. In 1923 the Great Eastern Hotel was closed by the newly formed LNER, as the Great Eastern Railway had opened a new hotel of the same name at the new passenger port at Parkeston Quay, causing a decline in numbers.
The hotel became the Harwich Town Hall, which included the Magistrates Court and, following changes in local government, was sold and divided into apartments.
Also of interest are the High Lighthouse (1818), the unusual Treadwheel Crane (late 17th century), the Old Custom Houses on West Street, a number of Victorian shopfronts and the Electric Palace Cinema (1911), one of the oldest purpose-built cinemas to survive complete with its ornamental frontage and original projection room still intact and operational.
There is little notable building from the later parts of the 20th century, but major recent additions include the lifeboat station and two new structures for Trinity House. The Trinity House office building, next door to the Old Custom Houses, was completed in 2005. All three additions are influenced by the high-tech style.
Harwich has also historically hosted a number of notable inhabitants, linked with Harwich's maritime past.
Harwich is home to Harwich & Parkeston F.C.; Harwich and Dovercourt RFC; Harwich Rangers FC; Harwich & Dovercourt Sailing Club; Harwich, Dovercourt & Parkeston Swimming Club; Harwich & Dovercourt Rugby Union Football Club; Harwich & Dovercourt Cricket Club; and Harwich Runners who with support from Harwich Swimming Club host the annual Harwich Triathlons. | https://en.wikipedia.org/wiki?curid=14065 |
Hans Baldung
Hans Baldung (1484 or 1485 – September 1545), also called Hans Baldung Grien, the "Grien" element being an early nickname after his preferred colour green, was a German artist in painting and printmaking who was considered the most gifted student of Albrecht Dürer. Throughout his lifetime, Baldung developed a distinctive style, full of color, expression and imagination. His talents were varied, and he produced a great and extensive variety of work including portraits, woodcuts, drawings, tapestries, altarpieces, stained glass, allegories and mythological motifs.
Hans was born in the small free city of Schwäbisch Gmünd, part of the East Württemberg region in former Swabia, Germany, in 1484 or 1485, into a family of intellectuals, academics and professionals. His father, Johann Baldung, a university-educated jurist, was an episcopal official between 1492 and 1505. His mother, Margarethe Herlin, was the daughter of Arbogast Herlin, a man of some property but unknown occupation. His uncle, Hieronymus Baldung, was a doctor of medicine, whose son Pius Hieronymus, Hans's cousin, taught law at Freiburg and became chancellor of the Tyrol by 1527. Baldung was in fact the first male in his family not to attend university, but was one of the first German artists to come from an academic family. His earliest training as an artist began around 1500 in the Upper Rhineland under an artist from Strasbourg.
Beginning in 1503, during the "Wanderjahre" ("Hiking years") required of artists of the time, Baldung became an assistant to Albrecht Dürer. Here, he may have been given his nickname "Grien". This name is thought to have come foremost from a preference for the color green: he seems to have worn green clothing. He probably also got this nickname to distinguish him from at least two other Hanses in Dürer's shop, Hans Schäufelein and Hans Suess von Kulmbach. He later included the name "Grien" in his monogram, and it has also been suggested that the name came from, or consciously echoed, "grienhals", a German word for witch—one of his signature themes. Hans quickly picked up Dürer's influence and style, and they became friends: Baldung seems to have managed Dürer's workshop during the latter's second sojourn in Venice. On a later trip to the Netherlands in 1521, Dürer's account book records that he took with him and sold prints by Baldung. On Dürer's death Baldung was sent a lock of his hair, which suggests a close friendship. Near the end of his Nuremberg years, Grien oversaw the production by Dürer of stained glass, woodcuts and engravings, and therefore developed an affinity for these media and for the Nuremberg master's handling of them.
In 1509, when Baldung's time in Nuremberg was complete, he moved back to Strasbourg and became a citizen there. He became a celebrity of the town and received many important commissions. The following year he married Margarethe Herlin, a local merchant's daughter, joined the guild "Zur Steltz", opened a workshop, and began signing his works with the HGB monogram that he used for the rest of his career. His style also became much more deliberately individual—a tendency art historians used to term "mannerist". He also stayed in Freiburg im Breisgau in 1513–1516, where he made, among other things, the .
In addition to traditional religious subjects, Baldung was concerned during these years with the profane theme of the imminence of death and with scenes of sorcery and witchcraft. He helped introduce supernatural and erotic themes into German art, although these were already amply present in Dürer's work. Most famously, he depicted witches, also a local interest: Strasbourg's humanists studied witchcraft and its bishop was charged with finding and prosecuting witches. His most characteristic works in this area are small in scale and mostly in the medium of drawing; these include a series of puzzling, often erotic allegories and mythological works executed in quill pen and ink and white body color on primed paper. The number of Hans Baldung's religious works diminished with the Protestant Reformation, which generally repudiated church art as either wasteful or idolatrous. But earlier, around the same time that he produced an important chiaroscuro woodcut of Adam and Eve, the artist became interested in themes related to death, the supernatural, witchcraft, sorcery, and the relation between the sexes. Baldung's fascination with witchcraft began early, with his first chiaroscuro print (1510), and lasted to the end of his career.
Hans Baldung Grien's work depicting witches was produced in the first half of the 16th century, before witch hunting became a widespread cultural phenomenon in Europe. According to one view, Baldung's work did not represent widespread cultural beliefs at the time of creation but reflected largely individual choices. On the other hand, through his family Baldung stood closer to the leading intellectuals of the day than any of his contemporaries, and could draw on a burgeoning literature on witchcraft, as well as on developing juridical and forensic strategies for witch-hunting. Furthermore, Baldung never worked directly with any Reformation leaders to spread religious ideals through his artwork, despite living in fervently religious Strasbourg and supporting the movement; he worked on the high altar in the city of Münster, Germany.
Baldung was the first artist to heavily incorporate witches and witchcraft into his artwork (his mentor Albrecht Dürer had sporadically included them, but not as prominently as Baldung would). During his lifetime there were few witch trials; some therefore believe Baldung's depictions of witchcraft to be based on folklore rather than the cultural beliefs of his time. By contrast, throughout the early sixteenth century humanism became very popular, and within this movement Latin literature was valorized, particularly poetry and satire, some of which included views on witches that could be combined with the witch lore massively accumulated in works such as the Malleus Maleficarum. Baldung partook in this culture, producing not only many works depicting Strasbourg humanists and scenes from ancient art and literature, but also what an earlier literature on the artist described as a satirical take on witches. Gert von der Osten comments on this aspect of "Baldung [treating] his witches humorously, an attitude that reflects the dominant viewpoint of the humanists in Strasbourg at this time who viewed witchcraft as 'lustig,' a matter that was more amusing than serious". However, the separation of a satirical tone from deadly serious vilifying intent proves as difficult to maintain for Baldung as it is for many other artists, including his rough contemporary Hieronymus Bosch. Baldung's art simultaneously represents ideals presented in ancient Greek and Roman poetry, such as the pre-16th-century notion that witches could control the weather, to which Baldung is believed to have alluded in his 1523 oil painting "Weather Witches", which shows two attractive, naked witches in front of a stormy sky.
Baldung also regularly incorporated scenes of flying witches into his art, a motif that had been contested for centuries before his artwork came into being. Flying was inherently attributed to witches by those who, like Baldung, believed in the myth of the Sabbath (without their ability to fly, the myth fragmented); he depicted it in works like "Witches Preparing for the Sabbath Flight" (1514).
Throughout his life, Baldung painted numerous portraits, known for their sharp characterizations. While Dürer rigorously details his models, Baldung's style differs, focusing more on the personality of the represented character, an abstract conception of the model's state of mind. Baldung eventually settled in Strasbourg and then in Freiburg im Breisgau, where he executed what is held to be his masterpiece: an eleven-panel altarpiece for Freiburg Cathedral, still intact today, depicting scenes from the life of the Virgin, including the Annunciation, the Visitation, the Nativity, the Flight into Egypt, the Crucifixion, Four Saints and the Donators. These panels form a large part of the artist's greater body of work containing several renowned depictions of the Virgin.
The earliest pictures assigned to him by some are altar-pieces with the monogram H. B. interlaced and the date 1496, in the monastery chapel of Lichtenthal near Baden-Baden. Another early work is a portrait of the emperor Maximilian, drawn in 1501 on a leaf of a sketch-book now in the print-room at Karlsruhe. The "Martyrdom of St Sebastian" and the "Epiphany" (now in Berlin, 1507) were painted for the market-church of Halle in Saxony.
Baldung's prints, though Düreresque, are very individual in style, and often in subject. They show little direct Italian influence. His paintings are less important than his prints. He worked mainly in woodcut, although he made six engravings, one very fine. He joined in the fashion for chiaroscuro woodcuts, adding a tone block to a woodcut of 1510. Most of his hundreds of woodcuts were commissioned for books, as was usual at the time; his "single-leaf" woodcuts (i.e. prints not for book illustration) are fewer than 100, though no two catalogues agree as to the exact number.
Unconventional as a draughtsman, his treatment of human form is often exaggerated and eccentric (hence his linkage, in the art historical literature, with European Mannerism), whilst his ornamental style—profuse, eclectic, and akin to the self-consciously "German" strain of contemporary limewood sculptors—is equally distinctive. Though Baldung has been commonly called the Correggio of the north, his compositions are a curious medley of glaring and heterogeneous colours, in which pure black is contrasted with pale yellow, dirty grey, impure red and glowing green. Flesh is a mere glaze under which the features are indicated by lines.
His works are notable for their individualistic departure from the Renaissance composure of his model, Dürer, for the wild and fantastic strength that some of them display, and for their remarkable themes. In the field of painting, his "Eve, the Serpent and Death" (National Gallery of Canada) shows his strengths well. There is special force in the "Death and the Maiden" panel of 1517 (Basel), in the "Weather Witches" (Frankfurt), in the monumental panels of "Adam" and "Eve" (Madrid), and in his many powerful portraits. Baldung's most sustained effort is the altarpiece of Freiburg, where the Coronation of the Virgin, and the Twelve Apostles, the Annunciation, Visitation, Nativity and Flight into Egypt, and the Crucifixion, with portraits of donors, are executed with some of that fanciful power that Martin Schongauer bequeathed to the Swabian school.
He is well known as a portrait painter; his works include historical pictures and portraits, among the latter those of Maximilian I and Charles V. His bust of Margrave Philip in the Munich Gallery shows that he was connected with the reigning family of Baden as early as 1514. At a later period he had sittings with Margrave Christopher of Baden, Ottilia his wife, and all their children, and the picture containing these portraits is still in the gallery at Karlsruhe. Like Dürer and Cranach, Baldung supported the Protestant Reformation. He was present at the Diet of Augsburg in 1518, and one of his woodcuts represents Luther in quasi-saintly guise, under the protection of (or being inspired by) the Holy Spirit, which hovers over him in the shape of a dove.
| https://en.wikipedia.org/wiki?curid=14068 |
Hammered dulcimer
The hammered dulcimer (also called the hammer dulcimer, dulcimer, or tympanon) is a percussion-stringed instrument which consists of strings typically stretched over a trapezoidal resonant sound board. The hammered dulcimer is set before the musician, who in more traditional styles may sit cross-legged on the floor, or in a more modern style may stand or sit at a wooden support with legs. The player holds a small spoon-shaped mallet hammer in each hand to strike the strings (see Appalachian dulcimer). The Graeco-Roman "dulcimer" ("sweet song") derives from the Latin "dulcis" (sweet) and the Greek "melos" (song). The dulcimer, in which the strings are beaten with small hammers, originated from the psaltery, in which the strings are plucked.
Hammered dulcimers and other similar instruments are traditionally played in Iraq, India, Iran, Southwest Asia, China, Korea, and parts of Southeast Asia, Central Europe (Hungary, Slovenia, Romania, Slovakia, Poland, Czech Republic, Switzerland (particularly Appenzell), Austria and Bavaria), the Balkans, Eastern Europe (Ukraine and Belarus), and Scandinavia. The instrument is also played in the United Kingdom (Wales, East Anglia, Northumbria), and the US, where its traditional use in folk music saw a notable revival in the late 20th century.
A dulcimer usually has two bridges, a bass bridge near the right and a treble bridge on the left side. The bass bridge holds up bass strings, which are played to the left of the bridge. The treble strings can be played on either side of the treble bridge. In the usual construction, playing them on the left side gives a note a fifth higher than playing them on the right of the bridge.
The dulcimer comes in various sizes, identified by the number of strings that cross each of the bridges. A 15/14, for example, has 15 strings crossing the treble bridge and 14 crossing the bass bridge, and can span three octaves. The strings of a hammered dulcimer are usually found in pairs, two strings for each note (though some instruments have three or four strings per note). Each set of strings is tuned in unison and is called a course. As with a piano, the purpose of using multiple strings per course is to make the instrument louder, although as the courses are rarely in perfect unison, a chorus effect usually results, as with a mandolin. A hammered dulcimer, like an autoharp, harp, or piano, requires a tuning wrench for tuning, since the dulcimer's strings are wound around tuning pins with square heads. (Ordinarily, 5 mm "zither pins" are used, similar to, but smaller in diameter than, piano tuning pins, which come in various sizes ranging upwards from "1/0" or 7 mm.)
The strings of the hammered dulcimer are often tuned according to a circle-of-fifths pattern. Typically, the lowest note (often a G or D) is struck at the lower right-hand of the instrument, just to the left of the right-hand (bass) bridge. As a player strikes the courses above in sequence, they ascend following a repeating sequence of two whole steps and a half step. With this tuning, a diatonic scale is broken into two tetrachords, or groups of four notes. For example, on an instrument with D as the lowest note, the D major scale is played starting in the lower-right corner and ascending the bass bridge: D – E – F♯ – G. This is the lower tetrachord of the D major scale. At this point the player returns to the bottom of the instrument and shifts to the treble strings to the right of the treble bridge to play the higher tetrachord: A – B – C♯ – D. The player can continue up the scale on the right side of the treble bridge with E – F♯ – G – A – B, but the next note will be C♮, not C♯, so he or she must switch to the left side of the treble bridge (and closer to the player) to continue the D major scale. In solfège terms, "DO" would correspond to D (see Movable do solfège).
The shift from the bass bridge to the treble bridge is required because the bass bridge's fourth string G is the start of the lower tetrachord of the G scale. The player could go on up a couple of notes (G – A – B), but the next note will be a flatted seventh (C natural in this case), because this note is drawn from the G tetrachord. This D major scale with a flatted seventh is the mixolydian mode in D.
The same thing happens as the player goes up the treble bridge – after getting to La (B in this case), one has to go to the left of the treble bridge. Moving from the left side of the bass bridge to the right side of the treble bridge is analogous to moving from the right side of the treble bridge to the left side of the treble bridge.
The whole pattern can be shifted up by three courses, so that instead of a D-major scale one would have a G-major scale, and so on. This transposes one equally tempered scale to another. Shifting down three courses transposes the D-major scale to A-major, but of course the first Do-Re-Mi would be shifted off the instrument.
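The course layout and transposition behaviour described above can be sketched in a few lines of code. This is an illustrative sketch only: the starting pitch (A3 on the right side of the treble bridge), the number of courses, and the note-naming convention are assumptions chosen for the example, not properties of any particular instrument.

```python
# Illustrative sketch (assumptions, not from the article): right side of the
# treble bridge starts on A3 (MIDI 57); 8 courses; the left side of each
# course sounds a fifth (7 semitones) above its right side.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def course_pitches(start_midi, n_courses):
    """Successive courses ascend by the repeating whole-whole-half
    (2, 2, 1 semitone) pattern described in the text."""
    pattern = [2, 2, 1]
    pitches = [start_midi]
    for i in range(n_courses - 1):
        pitches.append(pitches[-1] + pattern[i % 3])
    return pitches

def name(midi):
    """MIDI number to note name (middle C = 60 = C4)."""
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

right = course_pitches(57, 8)   # right side of the treble bridge
left = [p + 7 for p in right]   # left side: a fifth higher

print([name(p) for p in right])
# ['A3', 'B3', 'C#4', 'D4', 'E4', 'F#4', 'G4', 'A4']
print([name(p) for p in left])
# ['E4', 'F#4', 'G#4', 'A4', 'B4', 'C#5', 'D5', 'E5']
```

Because the 2-2-1 pattern repeats every three courses, shifting the whole pattern up three courses (here, 57 → 62, i.e. starting on D4) reproduces the same interval sequence from a new tonic, which mirrors the transposition by three courses described above.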
This tuning results in most, but not all, notes of the chromatic scale being available. To fill in the gaps, many modern dulcimer builders include extra short bridges at the top and bottom of the soundboard, where extra strings are tuned to some or all of the missing pitches. Such instruments are often called "chromatic dulcimers" as opposed to the more traditional "diatonic dulcimers".
The tetrachord markers found on the bridges of most hammered dulcimers in the English-speaking world were introduced by the American player and maker Sam Rizzetta in the 1960s.
In the Alps there are also chromatic dulcimers with crossed strings, in which the courses in each row lie a whole tone apart. This chromatic "Salzburger hackbrett" was developed in the mid-1930s from the diatonic hammered dulcimer by Tobi Reizer and his son, along with Franz Peyer and Heinrich Bandzauner. In the postwar period it was one of the instruments taught in state-sponsored music schools.
Hammered dulcimers of non-European descent may have other tuning patterns, and builders of European-style dulcimers sometimes experiment with alternate tuning patterns.
The instrument is referred to as "hammered" in reference to the small mallets (referred to as "hammers") that players use to strike the strings. Hammers are usually made of wood (most likely hardwoods such as maple, cherry, padauk, oak, walnut, or any other hardwood), but can also be made from any material, including metal and plastic. In the Western hemisphere, hammers are usually stiff, but in Asia, flexible hammers are often used. The head of the hammer can be left bare for a sharp attack sound, or can be covered with adhesive tape, leather, or fabric for a softer sound. Two-sided hammers are also available. The heads of two sided hammers are usually oval or round. Most of the time, one side is left as bare wood while the other side may be covered in leather or a softer material such as piano felt.
Several traditional players have used hammers that differ substantially from those in common use today. Paul Van Arsdale (1920–2018), a player from upstate New York, used flexible hammers made from hacksaw blades, with leather-covered wooden blocks attached to the ends (these were modeled after the hammers used by his grandfather, Jesse Martin). The Irish player John Rea (1915–1983) used hammers made of thick steel wire, which he made himself from old bicycle spokes wrapped with wool. Billy Bennington (1900–1986), a player from Norfolk, England, used cane hammers bound with wool.
The hammered dulcimer was extensively used during the Middle Ages in England, France, Italy, Germany, the Netherlands, and Spain. Although it had a distinctive name in each country, it was everywhere regarded as a kind of psalterium. The importance of the method of setting the strings in vibration by means of hammers, and its bearing on the acoustics of the instrument, were recognized only when the invention of the pianoforte had become a matter of history. It was then perceived that the psalterium (in which the strings were plucked) and the dulcimer (in which they were struck), when provided with keyboards would give rise to two distinct families of instruments, differing essentially in tone quality, in technique and in capabilities. The evolution of the psalterium resulted in the harpsichord; that of the dulcimer produced the pianoforte.
Versions of the hammered dulcimer are used throughout the world. In Eastern Europe, a larger descendant of the hammered dulcimer called the cimbalom is played and has been used by a number of classical composers, including Zoltán Kodály, Igor Stravinsky, and Pierre Boulez. The khim is the name used by the Thai, the Khmer, and the Laotians for the hammered dulcimer. | https://en.wikipedia.org/wiki?curid=14070 |
Humanae vitae
Humanae vitae (Latin: "Of Human Life") is an encyclical written by Pope Paul VI and dated 25 July 1968. The text was issued at a Vatican press conference on 29 July. Subtitled "On the Regulation of Birth", it re-affirmed the teaching of the Catholic Church regarding married love, responsible parenthood, and the rejection of artificial contraception. In formulating his teaching he explained why he did not accept the conclusions of the Pontifical Commission on Birth Control established by his predecessor, Pope John XXIII, a commission he himself had expanded.
Mainly because of its restatement of the Church's opposition to artificial contraception, the encyclical was politically controversial. It affirmed traditional Church moral teaching on the sanctity of life and the procreative and unitive nature of conjugal relations.
It was the last of Paul's seven encyclicals.
In this encyclical Paul VI reaffirmed the Catholic Church's view of marriage and marital relations and a continued condemnation of "artificial" birth control. Two papal committees and numerous independent experts had looked into the latest advances of science on the question of artificial birth control, which were noted by the Pope in his encyclical. The expressed views of Paul VI reflected the teachings of his predecessors, especially Pius XI, Pius XII and John XXIII, all of whom had insisted on the divine obligations of the marital partners in light of their partnership with God the creator.
Paul VI himself, even as commission members issued their personal views over the years, always reaffirmed the teachings of the Church, repeating them more than once in the first years of his Pontificate.
To Pope Paul VI, marital relations are much more than a union of two people. In his view, they constitute a union of the loving couple with a loving God, in which the two persons generate the matter for the body, while God creates the unique soul of a person. For this reason, Paul VI teaches in the first sentence of "Humanae Vitae", that the "transmission of human life is a most serious role in which married people collaborate freely and responsibly with God the Creator." This is divine partnership, so Paul VI does not allow for arbitrary human decisions, which may limit divine providence. According to Paul VI, marital relations are a source of great joy, but also of difficulties and hardships. The question of human procreation with God, exceeds in the view of Paul VI specific disciplines such as biology, psychology, demography or sociology. According to Paul VI, married love takes its origin from God, who is love, and from this basic dignity, he defines his position:
The encyclical opens with an assertion of the competency of the magisterium of the Catholic Church to decide questions of morality. It then goes on to observe that circumstances often dictate that married couples should limit the number of children, and that the sexual act between husband and wife is still worthy even if it can be foreseen not to result in procreation. Nevertheless, it is held that the sexual act must retain its intrinsic relationship to the procreation of human life.
Every action specifically intended to prevent procreation is forbidden, as directly contradicting the moral order established by God. Abortion, even for therapeutic reasons, is absolutely forbidden, as is sterilization, even if temporary. Therapeutic means necessary to cure diseases are exempted, even if a foreseeable impediment to procreation should result, but only if infertility is not directly intended (e.g., a hysterectomy performed because the uterus is cancerous, where the preservation of life is intended). If there are well-grounded reasons, arising from the physical or psychological condition of husband or wife, or from external circumstances, natural family planning methods (abstaining from intercourse during certain parts of the menstrual cycle) are allowed, since they take advantage of a faculty provided by nature.
The acceptance of artificial methods of birth control is then claimed to result in several negative consequences: a general lowering of moral standards resulting from sex without consequences; the danger that men may reduce women to mere instruments for the satisfaction of their own desires; the abuse of power by public authorities; and a false sense of autonomy.
Public authorities should oppose laws which undermine natural law; scientists should further study effective methods of natural birth control; doctors should further familiarize themselves with this teaching in order to be able to give advice to their patients; and priests must spell out clearly and completely the Church's teaching on marriage. The encyclical acknowledges that "perhaps not everyone will easily accept this particular teaching", but that "...it comes as no surprise to the church that she, no less than her Divine founder is destined to be a sign of contradiction." It notes the duty of proclaiming the entire moral law, "both natural and evangelical." The encyclical also points out that the Roman Catholic Church cannot "declare lawful what is in fact unlawful", because she is concerned with "safeguarding the holiness of marriage, in order to guide married life to its full human and Christian perfection." This is to be the priority for his fellow bishops, priests and lay people. The Pope predicts that future progress in the social, cultural and economic spheres will make marital and family life more joyful, provided God's design for the world is faithfully followed. The encyclical closes with an appeal to observe the natural laws of the Most High God: "These laws must be wisely and lovingly observed."
There had been a long-standing general Christian prohibition on contraception and abortion, with such Church Fathers as Clement of Alexandria and Saint Augustine condemning the practices. It was not until the 1930 Lambeth Conference that the Anglican Communion allowed for contraception in limited circumstances. Mainline Protestant denominations have since removed prohibitions against artificial contraception. In partial reaction, Pope Pius XI wrote the encyclical "Casti connubii" ("On Christian Marriage") in 1930, reaffirming the Catholic Church's belief in various traditional Christian teachings on marriage and sexuality, including the prohibition of artificial birth control even within marriage. "Casti connubii" condemned contraception but, regarding natural family planning, allowed married couples to use their nuptial rights "in the proper manner" when, because of either timing or defects, new life could not be brought forth.
With the appearance of the first oral contraceptives in 1960, dissenters in the Church argued for a reconsideration of the Church positions. In 1963 Pope John XXIII established a commission of six European non-theologians to study questions of birth control and population. It met once in 1963 and twice in 1964. As Vatican Council II was concluding, Pope Paul VI enlarged it to fifty-eight members, including married couples, laywomen, theologians and bishops. The last document issued by the council ("Gaudium et spes") contained a section titled "Fostering the Nobility of Marriage" (1965, nos. 47-52), which discussed marriage from the personalist point of view. The "duty of responsible parenthood" was affirmed, but the determination of licit and illicit forms of regulating birth was reserved to Pope Paul VI. In the spring of 1966, following the close of the council, the commission held its fifth and final meeting, having been enlarged again to include sixteen bishops as an executive committee. The commission was only consultative but it submitted a report approved by a majority of 64 members to Paul VI. It proposed he approve of artificial contraception without distinction of the various means. A minority of four members opposed this report and issued a parallel report to the Pope. Arguments in the minority report, against change in the church's teaching, were that "we should have to concede frankly that the Holy Spirit had been on the side of the Protestant churches in 1930" (when "Casti connubii" was promulgated) and that "it should likewise have to be admitted that for a half a century the Spirit failed to protect Pius XI, Pius XII, and a large part of the Catholic hierarchy from a very serious error."
After two more years of study and consultation, the pope issued "Humanae vitae", which removed any doubt that the Church views hormonal anti-ovulants as contraceptive. He explained why he did not accept the opinion of the majority report of the commission (1968, #6). Arguments were raised in the decades that followed that his decision had never passed the condition of "reception" to become church doctrine.
In his role as Theologian of the Pontifical Household Mario Luigi Ciappi advised Pope Paul VI during the drafting of "Humanae vitae". Ciappi, a doctoral graduate of the "Pontificium Athenaeum Internationale Angelicum", the future Pontifical University of Saint Thomas Aquinas, "Angelicum", served as professor of dogmatic theology there and was Dean of the "Angelicum's" Faculty of Theology from 1935 to 1955.
According to George Weigel, Paul VI named Archbishop Karol Wojtyła (later Pope John Paul II) to the commission, but Polish government authorities would not permit him to travel to Rome. Wojtyła had earlier defended the church's position from a philosophical standpoint in his 1960 book "Love and Responsibility". Wojtyła's position was strongly considered and it was reflected in the final draft of the encyclical, although much of his language and arguments were not incorporated. Weigel attributes much of the poor reception of the encyclical to the omission of many of Wojtyła's arguments.
In 2017, anticipating the 50th anniversary of the encyclical, four theologians led by Mgr. Gilfredo Marengo, a professor of theological anthropology at the Pontifical John Paul II Institute for Studies on Marriage and Family, launched a research project he called "a work of historical-critical investigation without any aim other than reconstructing as well as possible the whole process of composing the encyclical". Using the resources of the Vatican Secret Archives and the Congregation for the Doctrine of the Faith, they hope to detail the writing process and the interaction between the commission, publicity surrounding the commission's work, and Paul's own authorship.
13. Men rightly observe that a conjugal act imposed on one's partner without regard to his or her condition or personal and reasonable wishes in the matter, is no true act of love, and therefore offends the moral order in its particular application to the intimate relationship of husband and wife. If they further reflect, they must also recognize that an act of mutual love which impairs the capacity to transmit life which God the Creator, through specific laws, has built into it, frustrates His design which constitutes the norm of marriage, and contradicts the will of the Author of life. Hence to use this divine gift while depriving it, even if only partially, of its meaning and purpose, is equally repugnant to the nature of man and of woman, and is consequently in opposition to the plan of God and His holy will. But to experience the gift of married love while respecting the laws of conception is to acknowledge that one is not the master of the sources of life but rather the minister of the design established by the Creator. Just as man does not have unlimited dominion over his body in general, so also, and with more particular reason, he has no such dominion over his specifically sexual faculties, for these are concerned by their very nature with the generation of life, of which God is the source. "Human life is sacred—all men must recognize that fact," Our predecessor Pope John XXIII recalled. "From its very inception it reveals the creating hand of God."
15. ...the Church does not consider at all illicit the use of those therapeutic means necessary to cure bodily diseases, even if a foreseeable impediment to procreation should result therefrom — provided such impediment is not directly intended.
16. ...If therefore there are well-grounded reasons for spacing births, arising from the physical or psychological condition of husband or wife, or from external circumstances, the Church teaches that married people may then take advantage of the natural cycles immanent in the reproductive system and engage in marital intercourse only during those times that are infertile, thus controlling birth in a way which does not in the least offend the moral principles which We have just explained.
18. It is to be anticipated that perhaps not everyone will easily accept this particular teaching. There is too much clamorous outcry against the voice of the Church, and this is intensified by modern means of communication. But it comes as no surprise to the Church that it, no less than its divine Founder, is destined to be a "sign of contradiction." The Church does not, because of this, evade the duty imposed on it of proclaiming humbly but firmly the entire moral law, both natural and evangelical. Since the Church did not make either of these laws, it cannot be their arbiter—only their guardian and interpreter. It could never be right for the Church to declare lawful what is in fact unlawful, since that, by its very nature, is always opposed to the true good of man. In preserving intact the whole moral law of marriage, the Church is convinced that it is contributing to the creation of a truly human civilization. The Church urges man not to betray his personal responsibilities by putting all his faith in technical expedients. In this way it defends the dignity of husband and wife. This course of action shows that the Church, loyal to the example and teaching of the divine Savior, is sincere and unselfish in its regard for men whom it strives to help even now during this earthly pilgrimage "to share God's life as sons of the living God, the Father of all men".
23. We are fully aware of the difficulties confronting the public authorities in this matter, especially in the developing countries. In fact, We had in mind the justifiable anxieties which weigh upon them when We published Our encyclical letter "Populorum Progressio". But now We join Our voice to that of Our predecessor John XXIII of venerable memory, and We make Our own his words: "No statement of the problem and no solution to it is acceptable which does violence to man's essential dignity; those who propose such solutions base them on an utterly materialistic conception of man himself and his life. The only possible solution to this question is one which envisages the social and economic progress both of individuals and of the whole of human society, and which respects and promotes true human values." No one can, without being grossly unfair, make divine Providence responsible for what clearly seems to be the result of misguided governmental policies, of an insufficient sense of social justice, of a selfish accumulation of material goods, and finally of a culpable failure to undertake those initiatives and responsibilities which would raise the standard of living of peoples and their children.
Cardinal Leo Joseph Suenens, a moderator of the ecumenical council, questioned "whether moral theology took sufficient account of scientific progress, which can help determine what is according to nature. I beg you, my brothers, let us avoid another Galileo affair. One is enough for the Church." In an interview in "Informations Catholiques Internationales" on 15 May 1969, he criticized the Pope's decision again as frustrating the collegiality defined by the Council, calling it a non-collegial or even an anti-collegial act. He was supported by Vatican II theologians such as Karl Rahner and Hans Küng, by several episcopal conferences, e.g. those of Austria, Germany, and Switzerland, as well as by several bishops, including Christopher Butler, who called it one of the most important contributions to contemporary discussion in the Church.
The publication of the encyclical marks the first time in the twentieth century that open dissent from the laity about teachings of the Church was voiced widely and publicly. The teaching has been criticized by development organizations and others who claim that it limits the methods available to fight worldwide population growth and the struggle against HIV/AIDS. Within two days of the encyclical's release, a group of dissident theologians, led by Rev. Charles Curran, then of The Catholic University of America, issued a statement declaring that "spouses may responsibly decide according to their conscience that artificial contraception in some circumstances is permissible and indeed necessary to preserve and foster the value and sacredness of marriage."
Two months later, the controversial "Winnipeg Statement" issued by the Canadian Conference of Catholic Bishops stated that those who cannot accept the teaching should not be considered shut off from the Catholic Church, and that individuals can in good conscience use contraception as long as they have first made an honest attempt to accept the difficult directives of the encyclical.
The Dutch Catechism of 1966, based on the Dutch bishops' interpretation of the just-completed Vatican Council and the first post-Council comprehensive Catholic catechism, noted the lack of mention of artificial contraception in the Council: "As everyone can ascertain nowadays, there are several methods of regulating births. The Second Vatican Council did not speak of any of these concrete methods… This is a different standpoint than that taken under Pius XI some thirty years [ago], which was also maintained by his successor ... we can sense here a clear development in the Church, a development, which is also going on outside the Church."
In the Soviet Union, "Literaturnaja Gazeta", a publication of Soviet intellectuals, included an editorial and statement by Russian physicians against the encyclical.
Ecumenical reactions were mixed. Liberal and Moderate Lutherans and the World Council of Churches were disappointed. Eugene Carson Blake criticised the concepts of nature and natural law, which, in his view, still dominated Catholic theology, as outdated. This concern dominated several articles in Catholic and non-Catholic journals at the time. Patriarch Athenagoras I stated his full agreement with Pope Paul VI: “He could not have spoken in any other way.”
In Latin America, much support developed for the Pope and his encyclical. After World Bank President Robert McNamara declared at the 1968 Annual Meeting of the International Monetary Fund and the World Bank Group that countries permitting birth control practices would get preferential access to resources, doctors in La Paz, Bolivia, called it insulting that money should be exchanged for the conscience of a Catholic nation. In Colombia, Cardinal Aníbal Muñoz Duque declared that if American conditionality undermined Papal teachings, "we prefer not to receive one cent". The Senate of Bolivia passed a resolution stating that "Humanae vitae" can be discussed in its implications for individual consciences, but is of greatest significance because it defends the rights of developing nations to determine their own population policies. The Jesuit journal "Sic" dedicated one edition to the encyclical, with supportive contributions. However, when eighteen insubordinate priests, professors of theology at the Pontifical Catholic University of Chile, rejected the encyclical, and the ensuing conspiracy of silence practiced by the Chilean episcopate had to be censured by the Nuncio in Santiago at the behest of Cardinal Gabriel-Marie Garrone, prefect of the Congregation for Catholic Education, eventually triggering a media conflict, Plinio Corrêa de Oliveira expressed his affliction with the lamentations of Jeremiah: "O ye all that pass through the way…"
In the book "Nighttime conversations in Jerusalem. On the risk of faith." well-known liberal Cardinal Carlo Maria Martini accused Paul VI of deliberately concealing the truth, leaving it to theologians and pastors to fix things by adapting precepts to practice:
"I knew Paul VI well. With the encyclical, he wanted to express consideration for human life. He explained his intention to some of his friends by using a comparison: although one must not lie, sometimes it is not possible to do otherwise; it may be necessary to conceal the truth, or it may be unavoidable to tell a lie. It is up to the moralists to explain where sin begins, especially in the cases in which there is a higher duty than the transmission of life."
Pope Paul VI was troubled by the encyclical's reception in the West. Acknowledging the controversy, Paul VI stated in a letter to the Congress of German Catholics (30 August 1968): "May the lively debate aroused by our encyclical lead to a better knowledge of God's will." In March 1969, he had a meeting with one of the main critics of "Humanae vitae", Cardinal Leo Joseph Suenens. Paul heard him out and said merely, "Yes, pray for me; because of my weaknesses, the Church is badly governed." And to jog the memory of his critics, he put in their minds the experience of no less a figure than Pope Saint Peter: "[n]ow I understand St Peter: he came to Rome twice, the second time to be crucified", thereby directing their attention to his rejoicing in glorifying the Lord. Increasingly convinced that "the smoke of Satan entered the temple of God from some fissure", Paul VI reaffirmed "Humanae vitae" on 23 June 1978, weeks before his death, in an address to the College of Cardinals: the encyclical had followed "the confirmations of serious science" and sought to affirm the principle of respect for the laws of nature and of "a conscious and ethically responsible paternity".
Polls show that most Catholics use artificial means of contraception, and very few use natural family planning. However, John L. Allen, Jr. wrote in 2008: "Three decades of bishops' appointments by John Paul II and Benedict XVI, both unambiguously committed to "Humanae Vitae", mean that senior leaders in Catholicism these days are far less inclined than they were in 1968 to distance themselves from the ban on birth control, or to soft-pedal it. Some Catholic bishops have brought out documents of their own defending "Humanae Vitae"." Also, developments in fertility awareness since the 1960s have given rise to natural family planning organizations and methods such as the Billings Ovulation Method, the Couple to Couple League and the Creighton Model FertilityCare System, which actively provide formal instruction on the use and reliability of natural methods of birth control.
Albino Luciani's views on "Humanae vitae" have been debated. Journalist John L. Allen, Jr. claims that "it's virtually certain that John Paul I would not have reversed Paul VI's teaching, particularly since he was no doctrinal radical. Moreover, as Patriarch in Venice some had seen a hardening of his stance on social issues as the years went by." According to Allen, "...it is reasonable to assume that John Paul I would not have insisted upon the negative judgment in "Humanae Vitae" as aggressively and publicly as John Paul II did, and probably would not have treated it as a quasi-infallible teaching. It would have remained a more 'open' question". Other sources take a different view and note that during his time as Patriarch of Venice, "Luciani was intransigent with his upholding of the teaching of the Church and severe with those [who], through intellectual pride and disobedience, paid no attention to the Church's prohibition of contraception", though while not condoning the sin, he was tolerant of those who sincerely tried and failed to live up to the Church's teaching. The book states that "...if some people think that his compassion and gentleness in this respect implies he was against "Humanae Vitae", one can only infer it was wishful thinking on their part and an attempt to find an ally in favor of artificial contraception."
After he became pope in 1978, John Paul II continued the Catholic theology of the body of his predecessors with a series of lectures, entitled "Theology of the Body", in which he talked about an "original unity between man and woman", purity of heart (on the Sermon on the Mount), marriage and celibacy, and reflections on "Humanae vitae", focusing largely on responsible parenthood and marital chastity.
In 1981, the Pope's apostolic exhortation "Familiaris consortio" reaffirmed the Church's opposition to artificial birth control expressed in "Humanae vitae".
John Paul II readdressed some of the same issues in his 1993 encyclical "Veritatis splendor". He reaffirmed much of "Humanae vitae", and specifically described the practice of artificial contraception as an act not permitted by Catholic teaching in any circumstances. The same encyclical also clarifies the use of conscience in arriving at moral decisions, including in the use of contraception. However, John Paul also said, “It is not right then to regard the moral conscience of the individual and the magisterium of the Church as two contenders, as two realities in conflict. The authority which the magisterium enjoys by the will of Christ exists so that the moral conscience can attain the truth with security and remain in it.” John Paul quoted "Humanae vitae" as a compassionate encyclical, "Christ has come not to judge the world but to save it, and while he was uncompromisingly stern towards sin, he was patient and rich in mercy towards sinners".
Pope John Paul's 1995 encyclical, "Evangelium vitae" ("The Gospel of Life"), affirmed the Church's position on contraception and multiple topics related to the culture of life.
On 12 May 2008, Benedict XVI accepted an invitation to talk to participants in the International Congress organized by the Pontifical Lateran University on the 40th anniversary of "Humanae vitae". He put the encyclical in the broader view of love in a global context, a topic he called "so controversial, yet so crucial for humanity's future." "Humanae vitae" became "a sign of contradiction but also of continuity of the Church's doctrine and tradition... What was true yesterday is true also today." The Church continues to reflect "in an ever new and deeper way on the fundamental principles that concern marriage and procreation." The key message of "Humanae vitae" is love. Benedict states that the fullness of a person is achieved by a unity of soul and body, but neither spirit nor body alone can love; only the two together. If this unity is broken, if only the body is satisfied, love becomes a commodity.
On 16 January 2015, Pope Francis said to a meeting with families in Manila, insisting on the need to protect the family: "The family is ...threatened by growing efforts on the part of some to redefine the very institution of marriage, by relativism, by the culture of the ephemeral, by a lack of openness to life. I think of Blessed Paul VI. At a time when the problem of population growth was being raised, he had the courage to defend openness to life in families. He knew the difficulties that are there in every family, and so in his Encyclical he was very merciful towards particular cases, and he asked confessors to be very merciful and understanding in dealing with particular cases. But he also had a broader vision: he looked at the peoples of the earth and he saw this threat of the destruction of the family through the privation of children [original Spanish: destrucción de la familia por la privación de los hijos]. Paul VI was courageous; he was a good pastor and he warned his flock of the wolves who were coming."
A year earlier, on 1 May 2014, Pope Francis, in an interview given to the Italian newspaper "Corriere della Sera", expressed his praise for "Humanae Vitae": "Everything depends on how "Humanae Vitae" is interpreted. Paul VI himself, in the end, urged confessors to be very merciful and pay attention to concrete situations. But his genius was prophetic, he had the courage to take a stand against the majority, to defend moral discipline, to exercise a cultural restraint, to oppose present and future neo-Malthusianism. The question is not of changing doctrine, but of digging deep and making sure that pastoral care takes into account situations and what it is possible for persons to do."
History of Wikipedia
Wikipedia began with its first edit on 15 January 2001, two days after the domain was registered by Jimmy Wales and Larry Sanger. Its technological and conceptual underpinnings predate this; the earliest known proposal for an online encyclopedia was made by Rick Gates in 1993, and the concept of a free-as-in-freedom online encyclopedia (as distinct from mere open source) was proposed by Richard Stallman in December 2000.
Crucially, Stallman's concept specifically included the idea that no central organization should control editing. This characteristic greatly contrasted with contemporary digital encyclopedias such as Microsoft Encarta, "Encyclopædia Britannica", and even Bomis's Nupedia, which was Wikipedia's direct predecessor. In 2001, the license for Nupedia was changed to GFDL, and Wales and Sanger launched Wikipedia using the concept and technology of a wiki pioneered in 1995 by Ward Cunningham. Initially, Wikipedia was intended to complement Nupedia, an online encyclopedia project edited solely by experts, by providing additional draft articles and ideas for it. In practice, Wikipedia quickly overtook Nupedia, becoming a global project in multiple languages and inspiring a wide range of other online reference projects.
According to Alexa Internet, Wikipedia is the world's ninth most popular website in terms of global internet engagement. Wikipedia's worldwide monthly readership is approximately 495 million. In September 2018, WMF Labs tallied 15.5 billion page views worldwide for the month. According to comScore, Wikipedia receives over 117 million monthly unique visitors from the United States alone.
The concept of compiling the world's knowledge in a single location dates back to the ancient Libraries of Alexandria and Pergamum, but the modern concept of a general-purpose, widely distributed, printed encyclopedia originated with Denis Diderot and the 18th-century French encyclopedists. The idea of using automated machinery beyond the printing press to build a more useful encyclopedia can be traced to Paul Otlet's 1934 book "Traité de documentation"; Otlet also founded the Mundaneum, an institution dedicated to indexing the world's knowledge, in 1910. This concept of a machine-assisted encyclopedia was further expanded in H. G. Wells' book of essays "World Brain" (1938) and Vannevar Bush's future vision of the microfilm-based Memex in his essay "As We May Think" (1945). Another milestone was Ted Nelson's hypertext design Project Xanadu, which was begun in 1960.
Advances in information technology in the late 20th century led to changes in the form of encyclopedias. While previous encyclopedias, notably the "Encyclopædia Britannica", were book-based, Microsoft's Encarta, published in 1993, was available on CD-ROM and hyperlinked. The development of the World Wide Web led to many attempts to develop internet encyclopedia projects. An early proposal for an online encyclopedia was Interpedia in 1993 by Rick Gates; this project died before generating any encyclopedic content. Free software proponent Richard Stallman described the usefulness of a "Free Universal Encyclopedia and Learning Resource" in 1999. His published document "aims to lay out what the free encyclopedia needs to do, what sort of freedoms it needs to give the public, and how we can get started on developing it." On Wednesday 17 January 2001, two days after the founding of Wikipedia, the Free Software Foundation's (FSF) GNUPedia project went online, competing with Nupedia, but today the FSF encourages people "to visit and contribute to [Wikipedia]".
Wikipedia co-founder Jimmy Wales has stated that the germ of the concept for Wikipedia came to him when he was a graduate student at Indiana University, where he was impressed by the successes of the open-source movement and found Richard Stallman's Emacs Manifesto, with its promotion of free software and a sharing economy, quite interesting. At the time, Wales was studying finance and was intrigued by the incentives of the many people who contributed as volunteers toward creating free software, an effort with many examples of excellent results.
Wikipedia was initially conceived as a feeder project for the Wales-founded Nupedia, an earlier project to produce a free online encyclopedia, funded by Bomis, a web-advertising firm owned by Jimmy Wales, Tim Shell and Michael E. Davis. Nupedia was founded upon the use of highly qualified volunteer contributors and an elaborate multi-step peer review process. Despite its mailing list of interested editors, and the presence of a full-time editor-in-chief, Larry Sanger, a graduate philosophy student hired by Wales, the writing of content for Nupedia was extremely slow, with only 12 articles written during the first year.
Wales and Sanger discussed various ways to create content more rapidly. The idea of a wiki-based complement originated from a conversation between Larry Sanger and Ben Kovitz, a computer programmer and regular on Ward Cunningham's revolutionary wiki, "the WikiWikiWeb". Over dinner on Tuesday 2 January 2001, Kovitz explained to Sanger what wikis were, at that time a difficult concept to understand. Wales first stated, in October 2001, that "Larry had the idea to use Wiki software", though he later stated, in December 2005, that Jeremy Rosenfeld, a Bomis employee, introduced him to the concept.
Because Wikipedia biographies are often updated as soon as new information comes to light, they are often used as a reference source on the lives of notable people. This has led to attempts to manipulate and falsify Wikipedia articles for promotional or defamatory purposes (see Controversies). It has also led to novel uses of the biographical material provided. Some notable people's lives are being affected by their Wikipedia biography.
Sanger played an important role in the early stages of creating Wikipedia. Wales says that Sanger was his subordinate employee. Sanger initially brought the wiki concept to Wales and suggested it be applied to Nupedia; after some initial skepticism, Wales agreed to try it. It was Jimmy Wales, along with other people, who came up with the broader idea of an open-source, collaborative encyclopedia that would accept contributions from ordinary people, and it was Wales who invested in it. Wales stated in October 2001 that "Larry had the idea to use Wiki software." Sanger coined the portmanteau "Wikipedia" as the project name. In summary, Sanger conceived of a wiki-based encyclopedia as a strategic solution to Nupedia's inefficiency problems, spearheaded the project as its leader in its first year, and did most of the early work in formulating policies (including "Ignore all rules" and "Neutral point of view") and building up the community. Upon his departure in March 2002, Sanger emphasized that the main issues were the cessation of Bomis' funding for his role, which was not viable part-time, and his changing personal priorities; by 2004, however, the two had drifted apart and Sanger became more critical. Two weeks after the launch of Citizendium, Sanger criticized Wikipedia, describing it as "broken beyond repair." By 2005, three years after Sanger left, Wales began to dispute Sanger's role in the project.
In 2005, Wales described himself simply as the founder of Wikipedia; however, according to Brian Bergstein of the Associated Press, "Sanger has long been cited as a co-founder." There is evidence that Sanger was called co-founder, along with Wales, as early as 2001, and he is referred to as such in early Wikipedia press releases and Wikipedia articles and in a September 2001 "New York Times" article for which both were interviewed. In 2006, Wales said, "He used to work for me [...] I don't agree with calling him a co-founder, but he likes the title"; nonetheless, before January 2004, Wales did not dispute Sanger's status as co-founder and, indeed, identified himself as "co-founder" as late as August 2002. In Sanger's introductory message to the Nupedia mailing list, he said that "Jimmy Wales contacted me and asked me to apply as editor-in-chief of Nupedia. Apparently, Bomis, Inc. (which owns Nupedia)... who could manage this sort of long-term project, he thought I would be perfect for the job. This is indeed my dream job". Sanger said "He [Wales] had had the idea for Nupedia since at least last fall".
As of March 2007, Wales emphasized this employer–employee relationship and his ultimate authority, terming himself Wikipedia's sole founder, while Sanger emphasized their status as co-founders, citing earlier versions of Wikipedia pages (2004, 2006), press releases (2002–2004), and media coverage from the time of his involvement that routinely described them in this manner.
The goals which led to GNUpedia were published at least as early as 18 December 2000, and these exact goals were finalized on the 12th and 13th of January 2001, albeit with a copyright of 1999, from when Stallman had first started considering the problem. The only sentence added between 18 December and the unveiling of GNUpedia the week of 12–16 January was this: "The GNU Free Documentation License would be a good license to use for courses."
GNUpedia was "formally" announced on the "slashdot" website, on 16 January, the same day that their mailing list first went online with a test-message. Wales posted to the list on 17 January, the first full day of messages, explaining the discussions with Stallman concerning the change in Nupedia content-licensing, and suggesting cooperation. Stallman himself first posted on 19 January, and, in his second post on 22 January, mentioned that discussions about merging Wikipedia and GNUpedia were ongoing. Within a couple of months, Wales had changed his email signature from the open source encyclopedia to the free encyclopedia; both Nupedia and Wikipedia had adopted the GFDL; and the merger of GNUpedia into Wikipedia was effectively accomplished.
In a separate but similar incident, the campaign manager for Cathy Cox, Morton Brilliant, resigned after being found to have added negative information to the Wikipedia entries of political opponents. Following media publicity, the incidents tapered off around August 2006.
The editor, Ryan Jordan, became a Wikia employee in January 2007 and divulged his real name; this was noticed by Daniel Brandt of Wikipedia Watch, and communicated to the original article author. (See: Essjay controversy)
There are a number of forks and derivatives of Wikipedia. Other sites also use the MediaWiki software and concept popularized by Wikipedia. No comprehensive list of them is maintained.
Specialized foreign language forks using the Wikipedia concept include Enciclopedia Libre (Spanish), "Wikiweise" (German), WikiZnanie (Russian), Susning.nu (Swedish), and Baidu Baike (Chinese). Some of these (such as "Enciclopedia Libre") use GFDL or compatible licenses as used by Wikipedia, leading to exchange of material with their respective language Wikipedias.
In 2006, Larry Sanger founded Citizendium, based upon a modified version of MediaWiki. The site said it aimed 'to improve on the Wikipedia model with "gentle expert oversight", among other things'. (See also Nupedia).
The German Wikipedia was the first to be partly published in media other than the internet, including releases on CD in November 2004 and more extensive versions on CD or DVD in April 2005 and December 2006. In December 2005, the publisher Zenodot Verlagsgesellschaft mbH, a sister company of Directmedia, published a 139-page book explaining Wikipedia, its history, and its policies, accompanied by a 7.5 GB DVD containing 300,000 articles and 100,000 images from the German Wikipedia. Directmedia also announced plans to print the German Wikipedia in its entirety, in 100 volumes of 800 pages each, with publication due to begin in October 2006 and finish in 2010. In March 2006, however, this project was called off.
In September 2008, Bertelsmann published a 1,000-page volume containing a selection of popular German Wikipedia articles. Bertelsmann voluntarily paid one euro per copy sold to Wikimedia Deutschland.
The first CD version containing a selection of articles from the English Wikipedia was published in April 2006 by as the "2006 Wikipedia CD Selection". In April 2007, "Wikipedia Version 0.5", a CD containing around 2000 articles selected from the online encyclopedia was published by the Wikimedia Foundation and Linterweb. The selection of articles included was based on both the quality of the online version and the importance of the topic to be included. This CD version was created as a test-case in preparation for a DVD version including far more articles. The CD version can be purchased online, downloaded as a DVD image file or , or accessed online at the project's website.
A free software project has also been launched to make a static version of Wikipedia available for use on iPods. The "Encyclopodia" project was started around March 2006 and can currently be used on 1st to 4th generation iPods.
In limited ways, the Wikimedia Foundation is protected by Section 230 of the Communications Decency Act. In the defamation action "Bauer et al. v. Glatzer et al.", it was held that Wikimedia had no case to answer because of this section. A similar law in France caused a lawsuit to be dismissed in October 2007. In 2013, a German appeals court (the Higher Regional Court of Stuttgart) ruled that Wikipedia is a "service provider" not a "content provider", and as such is immune from liability as long as it takes down content that is accused of being illegal.
| https://en.wikipedia.org/wiki?curid=14072 |
Hydropower
Hydropower or water power (from Greek ὕδωρ, "water") is power derived from the energy of falling or fast-running water, which may be harnessed for useful purposes. Since ancient times, hydropower from many kinds of watermills has been used as a renewable energy source for irrigation and the operation of various mechanical devices, such as gristmills, sawmills, textile mills, trip hammers, dock cranes, domestic lifts, and ore mills. A trompe, which produces compressed air from falling water, is sometimes used to power other machinery at a distance.
In the late 19th century, hydropower became a source for generating electricity. Cragside in Northumberland was the first house powered by hydroelectricity in 1878 and the first commercial hydroelectric power plant was built at Niagara Falls in 1879. In 1881, street lamps in the city of Niagara Falls were powered by hydropower.
Since the early 20th century, the term has been used almost exclusively in conjunction with the modern development of hydroelectric power. International institutions such as the World Bank view hydropower as a means for economic development without adding substantial amounts of carbon to the atmosphere,
but dams can have significant negative social and environmental impacts.
The earliest evidence of water wheels and watermills dates back to the ancient Near East in the 4th century BC, specifically to the Persian Empire before 350 BC, in the regions of Iraq, Iran, and Egypt.
In the Roman Empire water-powered mills were described by Vitruvius by the first century BC. The Barbegal mill had sixteen water wheels processing up to 28 tons of grain per day. Roman waterwheels were also used for sawing marble such as the Hierapolis sawmill of the late 3rd century AD. Such sawmills had a waterwheel which drove two crank-and-connecting rods to power two saws. It also appears in two 6th century Eastern Roman saw mills excavated at Ephesus and Gerasa respectively. The crank and connecting rod mechanism of these Roman watermills converted the rotary motion of the waterwheel into the linear movement of the saw blades.
In China, it was once theorized that the water-powered trip hammers and bellows in use as early as the Han dynasty (202 BC – 220 AD) were driven by water scoops, but later historians concluded that they were powered by waterwheels, on the basis that water scoops would not have had the motive force to operate blast furnace bellows. Evidence of Han vertical waterwheels can be seen in two contemporary funeral ware models depicting water-powered trip hammers. The earliest texts to describe the device are the "Jijiupian" dictionary of 40 BC, Yang Xiong's text known as the "Fangyan" of 15 BC, and the "Xin Lun" written by Huan Tan about 20 AD. It was also during this time that the engineer Du Shi (c. AD 31) applied the power of waterwheels to piston-bellows in forging cast iron.
The power of a wave of water released from a tank was used for extraction of metal ores in a method known as hushing. The method was first used at the Dolaucothi Gold Mines in Wales from 75 AD onwards, but had been developed in Spain at such mines as Las Médulas. Hushing was also widely used in Britain in the Medieval and later periods to extract lead and tin ores. It later evolved into hydraulic mining when used during the California Gold Rush.
In the Muslim world during the Islamic Golden Age and Arab Agricultural Revolution (8th–13th centuries), engineers made wide use of hydropower as well as early uses of tidal power, and large hydraulic factory complexes. A variety of water-powered industrial mills were used in the Islamic world, including fulling mills, gristmills, paper mills, hullers, sawmills, ship mills, stamp mills, steel mills, sugar mills, and tide mills. By the 11th century, every province throughout the Islamic world had these industrial mills in operation, from Al-Andalus and North Africa to the Middle East and Central Asia. Muslim engineers also used water turbines, employed gears in watermills and water-raising machines, and pioneered the use of dams as a source of water power, used to provide additional power to watermills and water-raising machines.
Islamic mechanical engineer Al-Jazari (1136–1206) described designs for 50 devices, many of them water powered, in his book, "The Book of Knowledge of Ingenious Mechanical Devices", including clocks, a device to serve wine, and five devices to lift water from rivers or pools, though three are animal-powered and one can be powered by animal or water. These include an endless belt with jugs attached, a cow-powered shadoof, and a reciprocating device with hinged valves.
In 1753, French engineer Bernard Forest de Bélidor published "Architecture Hydraulique" which described vertical- and horizontal-axis hydraulic machines. The growing demand for the Industrial Revolution would drive development as well.
Hydraulic power networks used pipes to carry pressurized water and transmit mechanical power from the source to end users. The power source was normally a head of water, which could also be assisted by a pump. These were extensive in Victorian cities in the United Kingdom. A hydraulic power network was also developed in Geneva, Switzerland. The world-famous Jet d'Eau was originally designed as the over-pressure relief valve for the network.
At the beginning of the Industrial Revolution in Britain, water was the main source of power for new inventions such as Richard Arkwright's water frame. Although the use of water power gave way to steam power in many of the larger mills and factories, it was still used during the 18th and 19th centuries for many smaller operations, such as driving the bellows in small blast furnaces (e.g. the Dyfi Furnace) and gristmills, such as those built at Saint Anthony Falls, which use the 50-foot (15 m) drop in the Mississippi River.
In the 1830s, at the early peak in the US canal-building, hydropower provided the energy to transport barge traffic up and down steep hills using inclined plane railroads. As railroads overtook canals for transportation, canal systems were modified and developed into hydropower systems; the history of Lowell, Massachusetts is a classic example of commercial development and industrialization, built upon the availability of water power.
Technological advances had moved the open water wheel into an enclosed turbine or water motor. In 1848 James B. Francis, while working as head engineer of Lowell's Locks and Canals company, improved on these designs to create a turbine with 90% efficiency. He applied scientific principles and testing methods to the problem of turbine design. His mathematical and graphical calculation methods allowed the confident design of high-efficiency turbines to exactly match a site's specific flow conditions. The Francis reaction turbine is still in wide use today. In the 1870s, deriving from uses in the California mining industry, Lester Allan Pelton developed the high efficiency Pelton wheel impulse turbine, which utilized hydropower from the high head streams characteristic of the mountainous California interior.
A hydropower resource can be evaluated by its available power. Power is a function of the hydraulic head and volumetric flow rate. The head is the energy per unit weight (or unit mass) of water. The static head is proportional to the difference in height through which the water falls. Dynamic head is related to the velocity of moving water. Each unit of water can do an amount of work equal to its weight times the head.
The power available from falling water can be calculated from the flow rate and density of water, the height of fall, and the local acceleration due to gravity: P = η ρ Q g h, where P is the power (in watts), η the dimensionless turbine efficiency, ρ the density of water (about 1000 kg/m³), Q the volumetric flow rate (m³/s), g the acceleration due to gravity (about 9.81 m/s²), and h the head (m).
To illustrate, the power output of a turbine that is 85% efficient, with a flow rate of 80 cubic metres per second (2800 cubic feet per second) and a head of 145 metres (480 feet), is 97 megawatts: P = 0.85 × 1000 kg/m³ × 80 m³/s × 9.81 m/s² × 145 m ≈ 97 MW.
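The calculation above (power = efficiency × density × flow × gravity × head) can be checked with a short sketch; the function name is illustrative:

```python
def hydro_power(flow_m3_s, head_m, efficiency=1.0,
                density=1000.0, gravity=9.81):
    """Power in watts available from falling water: P = eta * rho * Q * g * h."""
    return efficiency * density * flow_m3_s * gravity * head_m

# Worked example from the text: 85% efficient turbine,
# 80 m^3/s flow, 145 m head -> roughly 97 MW.
p = hydro_power(80, 145, efficiency=0.85)
print(round(p / 1e6))  # 97
```

With efficiency left at 1.0 the same function gives the theoretical (gross) power of the site.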
Operators of hydroelectric stations will compare the total electrical energy produced with the theoretical potential energy of the water passing through the turbine to calculate efficiency. Procedures and definitions for calculation of efficiency are given in test codes such as ASME PTC 18 and IEC 60041. Field testing of turbines is used to validate the manufacturer's guaranteed efficiency. Detailed calculation of the efficiency of a hydropower turbine will account for the head lost due to flow friction in the power canal or penstock, rise in tail water level due to flow, the location of the station and effect of varying gravity, the temperature and barometric pressure of the air, the density of the water at ambient temperature, and the altitudes above sea level of the forebay and tailbay. For precise calculations, errors due to rounding and the number of significant digits of constants must be considered.
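The comparison operators make can be sketched in simplified form, ignoring the many corrections the test codes require (penstock friction, tailwater rise, local gravity, air density, and so on); the function name is an assumption for illustration:

```python
def turbine_efficiency(energy_out_j, volume_m3, head_m,
                       density=1000.0, gravity=9.81):
    """Simplified efficiency: measured electrical energy divided by the
    theoretical potential energy (rho * g * V * h) of the water passed
    through the turbine. Real test codes (ASME PTC 18, IEC 60041)
    apply corrections omitted here."""
    theoretical_j = density * gravity * volume_m3 * head_m
    return energy_out_j / theoretical_j

# 1000 m^3 of water through a 100 m head producing 8.8e8 J -> ~0.90
eff = turbine_efficiency(8.8e8, 1000, 100)
```

As the surrounding paragraph notes, a field measurement made this way is only meaningful once the head losses and ambient conditions are accounted for.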
Some hydropower systems such as water wheels can draw power from the flow of a body of water without necessarily changing its height. In this case, the available power is the kinetic energy of the flowing water. Over-shot water wheels can efficiently capture both types of energy.
The water flow in a stream can vary widely from season to season. Development of a hydropower site requires analysis of flow records, sometimes spanning decades, to assess the reliable annual energy supply. Dams and reservoirs provide a more dependable source of power by smoothing seasonal changes in water flow. However reservoirs have significant environmental impact, as does alteration of naturally occurring stream flow. The design of dams must also account for the worst-case, "probable maximum flood" that can be expected at the site; a spillway is often included to bypass flood flows around the dam. A computer model of the hydraulic basin and rainfall and snowfall records are used to predict the maximum flood.
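The flow-record analysis described above is often summarized with a flow-duration (exceedance) calculation. The sketch below is a generic illustration of that idea, not a method the text prescribes; the function name and the exceedance level are assumptions:

```python
def dependable_flow(flows, exceedance=0.95):
    """Flow (m^3/s) equaled or exceeded the given fraction of the
    time, estimated from a record of observed flows."""
    ranked = sorted(flows, reverse=True)
    # Index of the flow exceeded `exceedance` fraction of the record.
    idx = min(int(exceedance * len(ranked)), len(ranked) - 1)
    return ranked[idx]

# A toy ten-sample record; real studies use decades of observations.
record = [120, 95, 60, 45, 200, 30, 25, 80, 150, 55]
print(dependable_flow(record, 0.5))   # median-like flow: 60
```

A reservoir raises the dependable flow by shifting water from wet seasons to dry ones, which is why the paragraph treats storage and flow records together.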
Large dams can ruin river ecosystems, flood large areas of land (causing greenhouse gas emissions from rotting underwater vegetation), and displace thousands of people, disrupting their livelihoods.
Where there is a plentiful head of water it can be made to generate compressed air directly without moving parts. In these designs, a falling column of water is purposely mixed with air bubbles generated through turbulence or a venturi pressure reducer at the high-level intake. This is allowed to fall down a shaft into a subterranean, high-roofed chamber where the now-compressed air separates from the water and becomes trapped. The height of the falling water column maintains compression of the air in the top of the chamber, while an outlet, submerged below the water level in the chamber allows water to flow back to the surface at a lower level than the intake. A separate outlet in the roof of the chamber supplies the compressed air. A facility on this principle was built on the Montreal River at Ragged Shutes near Cobalt, Ontario in 1910 and supplied 5,000 horsepower to nearby mines.
Hydroelectricity is the application of hydropower to generate electricity.
It is the primary use of hydropower today.
Hydroelectric power plants can include a reservoir (generally created by a dam) to exploit the energy of falling water, or can use the kinetic energy of water as in run-of-the-river hydroelectricity.
Hydroelectric plants can vary in size from small community sized plants (micro hydro) to very large plants supplying power to a whole country.
As of 2019, the five largest power stations in the world are conventional hydroelectric power stations with dams.
Hydroelectricity can also be used to store energy in the form of potential energy between two reservoirs at different heights with pumped-storage hydroelectricity.
Water is pumped uphill into reservoirs during periods of low demand to be released for generation when demand is high or system generation is low.
Other forms of electricity generation with hydropower include tidal stream generators, which use tidal energy from oceans, rivers, and human-made canal systems to generate electricity. | https://en.wikipedia.org/wiki?curid=14073 |
Horse breed
A horse breed is a selectively bred population of domesticated horses, often with pedigrees recorded in a breed registry. However, the term is sometimes used in a very broad sense to define landrace animals, or naturally selected horses of a common phenotype located within a limited geographic region. Depending on definition, hundreds of "breeds" exist today, developed for many different uses. Horse breeds are loosely divided into three categories based on general temperament: spirited "hot bloods" with speed and endurance; "cold bloods," such as draft horses and some ponies, suitable for slow, heavy work; and "warmbloods," developed from crosses between hot bloods and cold bloods, often focusing on creating breeds for specific riding purposes, particularly in Europe.
Horse breeds are groups of horses with distinctive characteristics that are transmitted consistently to their offspring, such as conformation, color, performance ability, or disposition. These inherited traits are usually the result of a combination of natural crosses and artificial selection methods aimed at producing horses for specific tasks. Certain breeds are known for certain talents. For example, Standardbreds are known for their speed in harness racing. Some breeds have been developed through centuries of crossings with other breeds, while others, such as Tennessee Walking Horses and Morgans, developed from a single sire from which all current breed members descend. More than 300 horse breeds exist in the world today.
Modern horse breeds developed in response to a need for "form to function", the necessity to develop certain physical characteristics to perform a certain type of work. Thus, powerful but refined breeds such as the Andalusian or the Lusitano developed in the Iberian peninsula as riding horses that also had a great aptitude for dressage, while heavy draft horses such as the Clydesdale and the Shire developed out of a need to perform demanding farm work and pull heavy wagons. Ponies of all breeds originally developed mainly from the need for a working animal that could fulfill specific local draft and transportation needs while surviving in harsh environments. However, by the 20th century, many pony breeds had Arabian and other blood added to make a more refined pony suitable for riding. Other horse breeds developed specifically for light agricultural work, heavy and light carriage and road work, various equestrian disciplines, or simply as pets.
Horses have been selectively bred since their domestication. Today, over 300 breeds of horses are known in the world. However, the concept of purebred bloodstock and a controlled, written breed registry only became of significant importance in modern times. Today, the standards for defining and registration of different breeds vary. Sometimes, purebred horses are called Thoroughbreds, which is incorrect; "Thoroughbred" is a specific breed of horse, while a "purebred" is a horse (or any other animal) with a defined pedigree recognized by a breed registry.
An early example of people who practiced selective horse breeding were the Bedouin, who had a reputation for careful breeding practices, keeping extensive pedigrees of their Arabian horses and placing great value upon pure bloodlines. Though these pedigrees were originally transmitted by an oral tradition, written pedigrees of Arabian horses can be found that date to the 14th century. In the same period of the early Renaissance, the Carthusian monks of southern Spain bred horses and kept meticulous pedigrees of the best bloodstock; the lineage survives to this day in the Andalusian horse. One of the earliest formal registries was the General Stud Book for Thoroughbreds, which began in 1791 and traced back to the Arabian stallions imported to England from the Middle East that became the foundation stallions for the breed.
Some breed registries have a closed stud book, where registration is based on pedigree, and no outside animals can gain admittance. For example, a registered Thoroughbred or Arabian must have two registered parents of the same breed.
Other breeds have a partially closed stud book, but still allow certain infusions from other breeds. For example, the modern Appaloosa must have at least one Appaloosa parent, but may also have a Quarter Horse, Thoroughbred, or Arabian parent, so long as the offspring exhibits appropriate color characteristics. The Quarter Horse normally requires both parents to be registered Quarter Horses, but allows "Appendix" registration of horses with one Thoroughbred parent, and the horse may earn its way to full registration by completing certain performance requirements.
Open stud books exist for horse breeds that either have not yet developed a rigorously defined standard phenotype, or for breeds that register animals that conform to an ideal via the process of passing a studbook selection process. Most of the warmblood breeds used in sport horse disciplines have open stud books to varying degrees. While pedigree is considered, outside bloodlines are admitted to the registry if the horses meet the set standard for the registry. These registries usually require a studbook selection process involving judging of an individual animal's quality, performance, and conformation before registration is finalized. A few "registries," particularly some color breed registries, are very open and will allow membership of all horses that meet limited criteria, such as coat color and species, regardless of pedigree or conformation.
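The three registry models described above (closed, partially closed, and open) amount to different eligibility predicates. As a purely illustrative sketch of the partially closed case, using the Appaloosa rule from the text (function name hypothetical, real registry rules are far more detailed):

```python
# Breeds the text names as approved outcrosses for the Appaloosa registry.
APPROVED_OUTCROSSES = {"Quarter Horse", "Thoroughbred", "Arabian"}

def appaloosa_eligible(sire_breed, dam_breed, shows_color):
    """Simplified 'partially closed stud book' check: at least one
    Appaloosa parent, any other parent from an approved breed, and
    the foal must exhibit the characteristic color traits."""
    parents = [sire_breed, dam_breed]
    if "Appaloosa" not in parents:
        return False
    others_ok = all(b == "Appaloosa" or b in APPROVED_OUTCROSSES
                    for b in parents)
    return shows_color and others_ok
```

A closed stud book like the Thoroughbred's would instead require both parents to be registered members of the breed, and an open stud book would replace the pedigree test with a studbook-selection inspection.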
Breed registries also differ as to their acceptance or rejection of breeding technology. For example, all Jockey Club Thoroughbred registries require that a registered Thoroughbred be a product of a natural mating, so-called "live cover". A foal born of two Thoroughbred parents, but by means of artificial insemination or embryo transfer, cannot be registered in the Thoroughbred studbook. However, since the advent of DNA testing to verify parentage, most breed registries now allow artificial insemination, embryo transfer, or both. The high value of stallions has helped with the acceptance of these techniques because they allow a stallion to breed more mares with each "collection" and greatly reduce the risk of injury during mating. Cloning of horses is highly controversial, and at the present time most mainstream breed registries will not accept cloned horses, though several cloned horses and mules have been produced. Such restrictions have led to legal challenges in the United States, sometime based on state law and sometimes based on antitrust laws.
Horses can crossbreed with other equine species to produce hybrids. These hybrid types are not breeds, but they resemble breeds in that crosses between certain horse breeds and other equine species produce characteristic offspring. The most common hybrid is the mule, a cross between a "jack" (male donkey) and a mare. A related hybrid, the hinny, is a cross between a stallion and a jenny (female donkey). Most other hybrids involve the zebra (see Zebroid). With rare exceptions, most equine hybrids are sterile and cannot reproduce. A notable exception is hybrid crosses between horses and "Equus ferus przewalskii", commonly known as Przewalski's horse. | https://en.wikipedia.org/wiki?curid=14076 |
Horse breeding
Horse breeding is reproduction in horses, and particularly the human-directed process of selective breeding of animals, particularly purebred horses of a given breed. Planned matings can be used to produce specifically desired characteristics in domesticated horses. Furthermore, modern breeding management and technologies can increase the rate of conception, a healthy pregnancy, and successful foaling.
The male parent of a horse, a stallion, is commonly known as the "sire" and the female parent, the mare, is called the "dam". Both are genetically important, as each parent provides half of the genetic makeup of the ensuing offspring, called a foal. Contrary to popular misuse, "colt" refers to a young male horse only; "filly" is a young female. Though many horse owners may simply breed a family mare to a local stallion in order to produce a companion animal, most professional breeders use selective breeding to produce individuals of a given phenotype, or breed. Alternatively, a breeder could, using individuals of differing phenotypes, create a new breed with specific characteristics.
A horse is "bred" where it is foaled (born). Thus a colt conceived in England but foaled in the United States is regarded as being bred in the US. In some cases, most notably in the Thoroughbred breeding industry, American- and Canadian-bred horses may also be described by the state or province in which they are foaled. Some breeds denote the country, or state, where conception took place as the origin of the foal.
Similarly, the "breeder", is the person who owned or leased the mare at the time of foaling. That individual may not have had anything to do with the mating of the mare. It is important to review each breed registry's rules to determine which applies to any specific foal.
In the horse breeding industry, the term "half-brother" or "half-sister" only describes horses which have the same dam, but different sires. Horses with the same sire but different dams are simply said to be "by the same sire", and no sibling relationship is implied. "Full" (or "own") siblings have both the same dam and the same sire. The terms paternal half-sibling, and maternal half-sibling are also often used. Three-quarter siblings are horses out of the same dam, and are by sires that are either half-brothers (i.e. same dam) or who are by the same sire.
Thoroughbreds and Arabians are also classified through the "distaff" or direct female line, known as their "family" or "tail female" line, tracing back to their taproot foundation bloodstock or the beginning of their respective stud books. The female line of descent always appears at the bottom of a tabulated pedigree and is therefore often known as the "bottom line". In addition, the maternal grandfather of a horse has a special term: damsire.
"Linebreeding" technically is the duplication of fourth generation or more distant ancestors. However, the term is often used more loosely, describing horses with duplication of ancestors closer than the fourth generation. It also is sometimes used as a euphemism for the practice of inbreeding, a practice that is generally frowned upon by horse breeders, though used by some in an attempt to fix certain traits.
The estrous cycle (also spelled oestrous) controls when a mare is sexually receptive toward a stallion, and helps to physically prepare the mare for conception. It generally occurs during the spring and summer months, although some mares may be sexually receptive into the late fall, and is controlled by the photoperiod (length of the day), the cycle first triggered when the days begin to lengthen. The estrous cycle lasts about 19–22 days, with the average being 21 days. As the days shorten, the mare returns to a period when she is not sexually receptive, known as anestrus. Anestrus – occurring in the majority of, but not all, mares – prevents the mare from conceiving in the winter months, as that would result in her foaling during the harshest part of the year, a time when it would be most difficult for the foal to survive.
This cycle contains two phases: estrus, when the mare is sexually receptive, and diestrus, when she is not.
Depending on breed, on average, 16% of mares have double ovulations, allowing them to twin, though this does not affect the length of time of estrus or diestrus.
Changes in hormone levels can have great effects on the physical characteristics of the reproductive organs of the mare, thereby preparing, or preventing, her from conceiving.
The cycle is controlled by several hormones which regulate the estrous cycle, the mare's behavior, and the reproductive system of the mare. The cycle begins when the increased day length causes the pineal gland to reduce the levels of melatonin, thereby allowing the hypothalamus to secrete GnRH.
While horses in the wild mate and foal in mid to late spring, in the case of horses domestically bred for competitive purposes, especially horse racing, it is desirable that they be born as close to January 1 in the northern hemisphere or August 1 in the southern hemisphere as possible, so as to be at an advantage in size and maturity when competing against other horses in the same age group. When an early foal is desired, barn managers will put the mare "under lights" by keeping the barn lights on in the winter to simulate a longer day, thus bringing the mare into estrus sooner than she would in nature. Mares signal estrus and ovulation by urination in the presence of a stallion, raising the tail and revealing the vulva. A stallion, approaching with a high head, will usually nicker, nip and nudge the mare, as well as sniff her urine to determine her readiness for mating.
Once fertilized, the oocyte (egg) remains in the oviduct for approximately 5.5 more days, and then descends into the uterus. The initial single cell combination is already dividing and by the time of entry into the uterus, the egg might have already reached the blastocyst stage.
The gestation period lasts about eleven months, or about 340 days (normal range 320–370 days). During the early days of pregnancy, the conceptus is mobile, moving about in the uterus until about day 16, when "fixation" occurs. Shortly after fixation, the embryo proper (so called up to about 35 days) becomes visible on trans-rectal ultrasound (about day 21), and a heartbeat should be visible by about day 23. After the formation of the endometrial cups and the initiation of early placentation (35–40 days of gestation), the terminology changes and the embryo is referred to as a fetus. True implantation – invasion into the endometrium of any sort – does not occur until about day 35 of pregnancy, with the formation of the endometrial cups; true placentation (formation of the placenta) is not initiated until about days 40–45 and is not completed until about day 140 of pregnancy. The fetus's sex can be determined by day 70 of gestation using ultrasound. Halfway through gestation the fetus is between the size of a rabbit and that of a beagle. The most dramatic fetal development occurs in the last three months of pregnancy, when 60% of fetal growth occurs.
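The milestones above carry specific day numbers; they can be restated compactly as data (approximate values taken from the text, with a small lookup helper added for illustration):

```python
# Approximate equine gestation milestones (day of gestation, event),
# as described in the text.
MILESTONES = [
    (16, "fixation of the mobile conceptus"),
    (21, "embryo visible on trans-rectal ultrasound"),
    (23, "heartbeat visible"),
    (35, "endometrial cups form; 'embryo' becomes 'fetus'"),
    (70, "sex determinable by ultrasound"),
    (140, "placentation complete"),
    (340, "average foaling"),
]

def milestones_reached(day):
    """Milestones expected on or before a given day of gestation."""
    return [label for d, label in MILESTONES if d <= day]

print(milestones_reached(30))
```

All values are averages; as the text notes, normal gestation itself spans roughly 320–370 days.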
Colts are carried on average about 4 days longer than fillies.
Domestic mares receive specific care and nutrition to ensure that they and their foals are healthy. Mares are given vaccinations against diseases such as the Rhinopneumonitis (EHV-1) virus (which can cause miscarriage) as well as vaccines for other conditions that may occur in a given region of the world. Pre-foaling vaccines are recommended 4–6 weeks prior to foaling to maximize the immunoglobulin content of the colostrum in the first milk. Mares are dewormed a few weeks prior to foaling, as the mare is the primary source of parasites for the foal.
Mares can be used for riding or driving during most of their pregnancy. Exercise is healthy, though it should be moderated when a mare is heavily in foal. Exercise in excessively high temperatures has been suggested as detrimental to pregnancy maintenance during the embryonic period; however, ambient temperatures encountered during the research were around 100 degrees F, and the same results may not occur in regions with lower ambient temperatures.
During the first several months of pregnancy, the nutritional requirements do not increase significantly since the rate of growth of the fetus is very slow. However, during this time, the mare may be provided supplemental vitamins and minerals, particularly if forage quality is questionable. During the last 3–4 months of gestation, rapid growth of the fetus increases the mare's nutritional requirements. Energy requirements during these last few months, and during the first few months of lactation are similar to those of a horse in full training. Trace minerals such as copper are extremely important, particularly during the tenth month of pregnancy, for proper skeletal formation. Many feeds designed for pregnant and lactating mares provide the careful balance required of increased protein, increased calories through extra fat as well as vitamins and minerals. Overfeeding the pregnant mare, particularly during early gestation, should be avoided, as excess weight may contribute to difficulties foaling or fetal/foal related problems.
Mares due to foal are usually separated from other horses, both for the benefit of the mare and the safety of the soon-to-be-delivered foal. In addition, separation allows the mare to be monitored more closely by humans for any problems that may occur while giving birth. In the northern hemisphere, a special foaling stall that is large and clutter free is frequently used, particularly by major breeding farms. Originally, this was due in part to a need for protection from the harsh winter climate present when mares foal early in the year, but even in moderate climates, such as Florida, foaling stalls are still common because they allow closer monitoring of mares. Smaller breeders often use a small pen with a large shed for foaling, or they may remove a wall between two box stalls in a small barn to make a large stall. In the milder climates seen in much of the southern hemisphere, most mares foal outside, often in a paddock built specifically for foaling, especially on the larger stud farms. Many stud farms worldwide employ technology to alert human managers when the mare is about to foal, including webcams, closed-circuit television, or assorted types of devices that alert a handler via a remote alarm when a mare lies down in a position to foal.
On the other hand, some breeders, particularly those in remote areas or with extremely large numbers of horses, may allow mares to foal out in a field amongst a herd, but may also see higher rates of foal and mare mortality in doing so.
Most mares foal at night or early in the morning, and prefer to give birth alone when possible. Labor is rapid, often no more than 30 minutes, and from the time the feet of the foal appear to full delivery is often only about 15 to 20 minutes. Once the foal is born, the mare will lick the newborn foal to clean it and help blood circulation. In a very short time, the foal will attempt to stand and get milk from its mother. A foal should stand and nurse within the first hour of life.
To create a bond with her foal, the mare licks and nuzzles the foal, enabling her to distinguish the foal from others. Some mares are aggressive when protecting their foals, and may attack other horses or unfamiliar humans that come near their newborns.
After birth, a foal's navel is dipped in antiseptic to prevent infection. The foal is sometimes given an enema to help clear the meconium from its digestive tract. The newborn is monitored to ensure that it stands and nurses without difficulty. While most horse births happen without complications, many owners have first aid supplies prepared and a veterinarian on call in case of a birthing emergency. People who supervise foaling should also watch the mare to be sure that she passes the placenta in a timely fashion, and that it is complete with no fragments remaining in the uterus. Retained fetal membranes can cause a serious inflammatory condition (endometritis) and/or infection. If the placenta is not removed from the stall after it is passed, a mare will often eat it, an instinct from the wild, where blood would attract predators.
Foals develop rapidly, and within a few hours a wild foal can travel with the herd. In domestic breeding, the foal and dam are usually separated from the herd for a while, but within a few weeks are typically pastured with the other horses. A foal will begin to eat hay, grass and grain alongside the mare at about 4 weeks old; by 10–12 weeks the foal requires more nutrition than the mare's milk can supply. Foals are typically weaned at 4–8 months of age, although in the wild a foal may nurse for a year.
Beyond the appearance and conformation of a specific type of horse, breeders aspire to improve physical performance abilities. This concept, known as matching "form to function," has led to the development of not only different breeds, but also families or bloodlines within breeds that are specialists for excelling at specific tasks.
For example, the Arabian horse of the desert naturally developed speed and endurance to travel long distances and survive in a harsh environment, and domestication by humans added a trainable disposition to the animal's natural abilities. Meanwhile, in northern Europe, the locally adapted heavy horse with a thick, warm coat was domesticated and put to work as a farm animal that could pull a plow or wagon. This animal was later adapted through selective breeding to create a strong but rideable animal suitable for the heavily armored knight in warfare.
Then, centuries later, when people in Europe wanted faster horses than could be produced from local horses through simple selective breeding, they imported Arabians and other oriental horses to breed as an outcross to the heavier, local animals. This led to the development of breeds such as the Thoroughbred, a horse taller than the Arabian and faster over the distances of a few miles required of a European race horse or light cavalry horse. Another cross between oriental and European horses produced the Andalusian, a horse developed in Spain that was powerfully built, but extremely nimble and capable of the quick bursts of speed over short distances necessary for certain types of combat as well as for tasks such as bullfighting.
Later, the people who settled America needed a hardy horse that was capable of working with cattle. Thus, Arabians and Thoroughbreds were crossed on Spanish horses, both domesticated animals descended from those brought over by the Conquistadors, and feral horses such as the Mustangs, descended from the Spanish horse, but adapted by natural selection to the ecology and climate of the west. These crosses ultimately produced new breeds such as the American Quarter Horse and the Criollo of Argentina. In Canada, the Canadian Horse descended from the French stock Louis XIV sent to Canada in the late 17th century.[6] The initial shipment, in 1665, consisted of two stallions and twenty mares from the Royal Stables in Normandy and Brittany, the centre of French horse breeding.[7] Only 12 of the 20 mares survived the trip. Two more shipments followed, one in 1667 of 14 horses (mostly mares, but with at least one stallion), and one in 1670 of 11 mares and a stallion. The shipments included a mix of draft horses and light horses, the latter of which included both pacing and trotting horses.[1] The exact origins of all the horses are unknown, although the shipments probably included Bretons, Normans, Arabians, Andalusians and Barbs.
In modern times, these breeds themselves have since been selectively bred to further specialize at certain tasks. One example of this is the American Quarter Horse. Once a general-purpose working ranch horse, different bloodlines now specialize in different events. For example, larger, heavier animals with a very steady attitude are bred to give competitors an advantage in events such as team roping, where a horse has to start and stop quickly, but also must calmly hold a full-grown steer at the end of a rope. On the other hand, for an event known as cutting, where the horse must separate a cow from a herd and prevent it from rejoining the group, the best horses are smaller, quick, alert, athletic and highly trainable. They must learn quickly, have conformation that allows quick stops and fast, low turns, and the best competitors have a certain amount of independent mental ability to anticipate and counter the movement of a cow, popularly known as "cow sense."
Another example is the Thoroughbred. While most representatives of this breed are bred for horse racing, there are also specialized bloodlines suitable as show hunters or show jumpers. The hunter must have a tall, smooth build that allows it to trot and canter smoothly and efficiently. Instead of speed, value is placed on appearance and upon giving the equestrian a comfortable ride, with natural jumping ability that shows bascule and good form.
A show jumper, however, is bred less for overall form and more for power over tall fences, along with speed, scope, and agility. This favors a horse with a good galloping stride, powerful hindquarters that can change speed or direction easily, plus a good shoulder angle and length of neck. A jumper has a more powerful build than either the hunter or the racehorse.
The history of horse breeding goes back millennia. Though the precise date is in dispute, humans could have domesticated the horse as far back as approximately 4500 BCE. However, evidence of planned breeding has a murkier history. It is well known, for example, that the Romans bred horses and valued them in their armies, but little is known regarding their breeding and husbandry practices: all that remains are statues and artwork. Plenty of equestrian statues of Roman emperors survive, horses are mentioned in the Odyssey by Homer, and hieroglyphics and paintings left behind by the Egyptians tell stories of pharaohs hunting elephants from chariots. Yet nearly nothing is known of what became of the horses these cultures bred for hippodromes, for warfare, or even for farming.
One of the earliest people known to document the breedings of their horses were the Bedouin of the Middle East, the breeders of the Arabian horse. While it is difficult to determine how far back the Bedouin passed on pedigree information via an oral tradition, there were written pedigrees of Arabian horses by CE 1330. The Akhal-Teke of West-Central Asia is another breed with roots in ancient times that was also bred specifically for war and racing. The nomads of the Mongolian steppes bred horses for several thousand years as well, and the Caspian horse is believed to be a very close relative of Ottoman horses from the earliest origins of the Turks in Central Asia.
The types of horse bred varied with culture and with the times. The uses to which a horse was put also determined its qualities, including smooth amblers for riding, fast horses for carrying messengers, heavy horses for plowing and pulling heavy wagons, ponies for hauling cars of ore from mines, packhorses, carriage horses and many others.
Medieval Europe bred large horses specifically for war, called destriers. These horses were the ancestors of the great heavy horses of today, and their size was preferred not simply because of the weight of the armor, but also because a large horse provided more power for the knight's lance. Weighing almost twice as much as a normal riding horse, the destrier was a powerful weapon in battle meant to act like a giant battering ram that could quite literally run down men on an enemy line.
On the other hand, during this same time, lighter horses were bred in northern Africa and the Middle East, where a faster, more agile horse was preferred. The lighter horse suited the raids and battles of desert people, allowing them to outmaneuver rather than overpower the enemy. When Middle Eastern warriors and European knights collided in warfare, the heavy knights were frequently outmaneuvered. The Europeans, however, responded by crossing their native breeds with "oriental" type horses such as the Arabian, Barb, and Turkoman horse. This cross-breeding led to a nimbler war horse, such as today's Andalusian horse, and also created a type of horse known as a courser, a predecessor to the Thoroughbred, which was used as a message horse.
During the Renaissance, horses were bred not only for war, but for haute ecole riding, derived from the most athletic movements required of a war horse, and popular among the elite nobility of the time. Breeds such as the Lipizzan and the now extinct Neapolitan horse were developed from Spanish-bred horses for this purpose, and also became the preferred mounts of cavalry officers, who were derived mostly from the ranks of the nobility. It was during this time that firearms were developed, and so the light cavalry horse, a faster and quicker war horse, was bred for "shoot and run" tactics rather than the shock action as in the Middle Ages. Fine horses usually had a well muscled, curved neck, slender body, and sweeping mane, as the nobility liked to show off their wealth and breeding in paintings of the era.
After Charles II retook the British throne in 1660, horse racing, which had been banned by Cromwell, was revived. The Thoroughbred was developed 40 years later, bred to be the ultimate racehorse, through the lines of three foundation Arabian stallions and one Turkish horse.
In the 18th century, James Burnett, Lord Monboddo noted the importance of selecting appropriate parentage to achieve desired outcomes of successive generations. Monboddo worked more broadly in the abstract thought of species relationships and evolution of species. The Thoroughbred breeding hub in Lexington, Kentucky was developed in the late 18th century, and became a mainstay in American racehorse breeding.
The 17th and 18th centuries saw more of a need for fine carriage horses in Europe, bringing in the dawn of the warmblood. The warmblood breeds have been exceptionally good at adapting to changing times, and from their carriage horse beginnings they easily transitioned during the 20th century into a sport horse type. Today's warmblood breeds, although still used for competitive driving, are more often seen competing in show jumping or dressage.
The Thoroughbred continues to dominate the horse racing world, although its lines have been more recently used to improve warmblood breeds and to develop sport horses. The French saddle horse is an excellent example as is the Irish Sport Horse, the latter being an unusual combination between a Thoroughbred and a draft breed.
The American Quarter Horse was developed early in the 18th century, mainly for quarter racing (racing ¼ of a mile). Colonists did not have racetracks or any of the trappings of Europe that the earliest Thoroughbreds had at their disposal, so instead the owners of Quarter Horses would run their horses on roads that led through town as a form of local entertainment. As the USA expanded westward, the breed went with settlers as a farm and ranch animal, and "cow sense" was particularly valued: the horses were increasingly used for herding cattle on rough, dry terrain that often required sitting in the saddle for long hours.
However, the original ¼-mile races that the colonists held never went out of fashion, so today there are three types: the stock horse type, the racer, and the more recently evolving sport type. The racing type most resembles the finer-boned ancestors of the first racing Quarter Horses, and is still used for ¼-mile races. The stock horse type, used in western events and as a farm and patrol animal, is bred for a shorter stride, an ability to stop and turn quickly, and an unflappable attitude that remains calm and focused even in the face of an angry charging steer. The first two types are still bred for explosive speed that exceeds the Thoroughbred's over short distances, clocked as high as 55 mph, while retaining the gentle, calm, and kindly temperament of their ancestors that makes them easily handled.
The Canadian horse's origin corresponds to shipments of French horses, some of which came from Louis XIV's own stable and most likely were Baroque horses meant to be gentlemen's mounts. These were ill-suited to farm work and to the hardscrabble life of the New World, so, like the Americans, early Canadians crossed their horses with native escapees. In time, they evolved along similar lines as the Quarter Horse to the south, as both the US and Canada spread westward and needed a calm and tractable horse versatile enough to carry the farmer's son to school but still capable of running fast and hard as a cavalry horse, a stock horse, or a horse to pull a Conestoga wagon.
Other North American horses retained a hint of their mustang origins, being derived from stock bred by Native Americans that came in a rainbow of colors, like the Appaloosa and American Paint Horse, while those east of the Mississippi River were increasingly bred to impress and mimic the trends of the upper classes of Europe: the Tennessee Walking Horse and Saddlebred were originally plantation horses bred for their gait and comfortable ride in the saddle, as a plantation master would survey his vast lands like an English lord.
Horses were needed for heavy draft and carriage work until replaced by the automobile, truck, and tractor. After this time, draft and carriage horse numbers dropped significantly, though light riding horses remained popular for recreational pursuits. Draft horses today are used on a few small farms, but are seen mainly at pulling and plowing competitions rather than in farm work. Heavy harness horses are now used as an outcross with lighter breeds, such as the Thoroughbred, to produce the modern warmblood breeds popular in sport horse disciplines, particularly at the Olympic level.
Breeding a horse is an endeavor where the owner, particularly of the mare, will usually need to invest considerable time and money. For this reason, a horse owner needs to consider several factors, including:
There are value judgements involved in considering whether an animal is suitable breeding stock, hotly debated by breeders. Additional personal beliefs may come into play when considering a suitable level of care for the mare and ensuing foal, the potential market or use for the foal, and other tangible and intangible benefits to the owner.
If the breeding endeavor is intended to make a profit, there are additional market factors to consider, which may vary considerably from year to year, from breed to breed, and by region of the world. In many cases, the low end of the market is saturated with horses, and the law of supply and demand thus allows little or no profit to be made from breeding unregistered animals or animals of poor quality, even if registered.
The minimum cost of breeding for a mare owner includes the stud fee, and the cost of proper nutrition, management and veterinary care of the mare throughout gestation, parturition, and care of both mare and foal up to the time of weaning. Veterinary expenses may be higher if specialized reproductive technologies are used or health complications occur.
Making a profit in horse breeding is often difficult. While some owners of only a few horses may keep a foal for purely personal enjoyment, many individuals breed horses in hopes of making some money in the process.
A rule of thumb is that a foal intended for sale should be worth three times the cost of the stud fee if it were sold at the moment of birth. From birth forward, the costs of care and training are added to the value of the foal, with a sale price going up accordingly. If the foal wins awards in some form of competition, that may also enhance the price.
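The rule of thumb above can be sketched as simple arithmetic. This is an illustrative example only, with hypothetical function names and figures; it is not a valuation method from the text beyond the stated 3× rule and the idea that care, training, and awards add to the price.

```python
def value_at_birth(stud_fee: float) -> float:
    """Rule of thumb: a foal intended for sale should be worth
    about three times the stud fee at the moment of birth."""
    return 3 * stud_fee

def asking_price(stud_fee: float, care_and_training: float,
                 award_premium: float = 0.0) -> float:
    """Target sale price: birth value plus accumulated care/training
    costs, plus any premium from competition awards."""
    return value_at_birth(stud_fee) + care_and_training + award_premium

# Hypothetical figures: a $2,000 stud fee and $5,000 of care and training.
print(asking_price(stud_fee=2000, care_and_training=5000))  # 11000.0
```

If the asking price the market will bear falls below this figure, the breeding is likely to be sold at a loss, which is the scenario the following paragraph warns about.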
On the other hand, without careful thought, foals bred without a potential market for them may wind up being sold at a loss, and in a worst-case scenario, sold for "salvage" value—a euphemism for sale to slaughter as horsemeat.
Therefore, a mare owner must consider their reasons for breeding, asking hard questions of themselves as to whether their motivations are based on either emotion or profit and how realistic those motivations may be.
The stallion should be chosen to complement the mare, with the goal of producing a foal that has the best qualities of both animals yet avoids the weaker qualities of either parent. Generally, the stallion should have proven himself in the discipline or sport the mare owner wishes for the "career" of the ensuing foal. Mares should also have a competition record showing that they too have suitable traits, though this is less common.
Some breeders consider the quality of the sire to be more important than the quality of the dam. However, other breeders maintain that the mare is the most important parent. Because stallions can produce far more offspring than mares, a single stallion can have a greater overall impact on a breed. However, the mare may have a greater influence on an individual foal because its physical characteristics influence the developing foal in the womb and the foal also learns habits from its dam when young. Foals may also learn the "language of intimidation and submission" from their dam, and this imprinting may affect the foal's status and rank within the herd. Many times, a mature horse will achieve status in a herd similar to that of its dam; the offspring of dominant mares become dominant themselves.
A purebred horse is usually worth more than a horse of mixed breeding, though this matters more in some disciplines than others. The breed of the horse is sometimes secondary when breeding for a sport horse, but some disciplines may prefer a certain breed or a specific phenotype of horse. Sometimes, purebred bloodlines are an absolute requirement: For example, most racehorses in the world must be recorded with a breed registry in order to race.
Bloodlines are often considered, as some bloodlines are known to cross well with others. If the parents have not yet proven themselves by competition or by producing quality offspring, the bloodlines of the horse are often a good indicator of quality and possible strengths and weaknesses. Some bloodlines are known not only for athletic ability, but may also carry a conformational or genetic defect, poor temperament, or a medical problem. Some bloodlines are also fashionable or otherwise marketable, which is an important consideration should the mare owner wish to sell the foal.
Horse breeders also consider conformation, size and temperament. All of these traits are heritable, and will determine if the foal will be a success in its chosen discipline. The offspring, or "get", of a stallion are often excellent indicators of his ability to pass on his characteristics, and the particular traits he actually passes on. Some stallions are fantastic performers but never produce offspring of comparable quality. Others sire fillies of great abilities but not colts. At times, a horse of mediocre ability sires foals of outstanding quality.
Mare owners also look into whether the stallion is fertile and has successfully "settled" (i.e., impregnated) mares. A stallion may not be able to breed naturally, or old age may decrease his performance. Mare care boarding fees and semen collection fees can be a major cost.
Breeding a horse can be an expensive endeavor, whether breeding a backyard competition horse or the next Olympic medalist. Costs may include:
Stud fees are determined by the quality of the stallion, his performance record, the performance record of his get (offspring), as well as the sport and general market that the animal is standing for.
The highest stud fees are generally for racing Thoroughbreds, which may charge from two to three thousand dollars for a breeding to a new or unproven stallion, to several hundred thousand dollars for a breeding to a proven producer of stakes winners. Stallions in other disciplines often have stud fees that begin in the range of $1,000 to $3,000, with top contenders who produce champions in certain disciplines able to command as much as $20,000 for one breeding. The lowest stud fees to breed to a grade horse or an animal of low-quality pedigree may only be $100–$200, but there are trade-offs: the horse will probably be unproven, and likely to produce lower-quality offspring than a horse with a stud fee that is in the typical range for quality breeding stock.
As a stallion's career, either performance or breeding, improves, his stud fee tends to increase in proportion. If one or two offspring are especially successful, winning several stakes races or an Olympic medal, the stud fee will generally greatly increase. Younger, unproven stallions will generally have a lower stud fee earlier on in their careers.
To help decrease the risk of financial loss should the mare die or abort the foal while pregnant, many studs offer a live foal guarantee (LFG) – also known as "no foal, free return" or "NFFR" – allowing the owner a free breeding to the same stallion the next year. However, this is not offered for every breeding.
There are two general ways to "cover" or breed the mare:
After the mare is bred or artificially inseminated, she is checked using ultrasound 14–16 days later to see if she "took", and is pregnant. A second check is usually performed at 28 days. If the mare is not pregnant, she may be bred again during her next cycle.
It is considered safe to breed a mare to a stallion of much larger size. Because of the mare's type of placenta and its attachment and blood supply, the foal will be limited in its growth within the uterus to the size of the mare's uterus, but will grow to its genetic potential after it is born. Test breedings have been done with draft horse stallions bred to small mares with no increase in the number of difficult births.
When breeding live cover, the mare is usually boarded at the stud. She may be "teased" several times with a stallion that will not breed to her, usually with the stallion being presented to the mare over a barrier. Her reaction to the teaser, whether hostile or passive, is noted. A mare that is in heat will generally tolerate a teaser (although this is not always the case), and may present herself to him, holding her tail to the side. A veterinarian may also determine if the mare is ready to be bred, by ultrasound or by palpating daily to determine if ovulation has occurred. Live cover can also be done at liberty in a paddock or on pasture, although due to safety and efficacy concerns, it is not common at professional breeding farms.
When it has been determined that the mare is ready, both the mare and intended stud will be cleaned. The mare will then be presented to the stallion, usually with one handler controlling the mare and one or more handlers in charge of the stallion. Multiple handlers are preferred, as the mare and stallion can be easily separated should there be any trouble.
The Jockey Club, the organization that oversees the Thoroughbred industry in the United States, requires all registered foals to be bred through live cover. Artificial insemination, described below, is not permitted. Similar rules apply in other countries.
By contrast, the U.S. standardbred industry allows registered foals to be bred by live cover, or by artificial insemination (AI) with fresh or frozen (not dried) semen. No other artificial fertility treatment is allowed. In addition, foals bred via AI of frozen semen may only be registered if the stallion's sperm was collected during his lifetime, and used no later than the calendar year of his death or castration.
Artificial insemination (AI) has several advantages over live cover, and has a very similar conception rate:
A stallion is usually trained to mount a phantom (or dummy) mare, although a live mare may be used, and he is most commonly collected using an artificial vagina (AV) which is heated to simulate the vagina of the mare. The AV has a filter and collection area at one end to capture the semen, which can then be processed in a lab. The semen may be chilled or frozen and shipped to the mare owner or used to breed mares "on-farm". When the mare is in heat, the person inseminating introduces the semen directly into her uterus using a syringe and pipette.
Often an owner does not want to take a valuable competition mare out of training to carry a foal. This presents a problem, as the mare will usually be quite old by the time she is retired from her competitive career, at which time it is more difficult to impregnate her. Other times, a mare may have physical problems that prevent or discourage breeding. However, there are now several options for breeding these mares. These options also allow a mare to produce multiple foals each breeding season, instead of the usual one. Therefore, mares may have an even greater value for breeding.
The world's first cloned horse, Prometea, was born in 2003. Other notable instances of horse cloning are:
Heterosexuality
Heterosexuality is romantic attraction, sexual attraction or sexual behavior between persons of the opposite sex or gender. As a sexual orientation, heterosexuality is "an enduring pattern of emotional, romantic, and/or sexual attractions" to persons of the opposite sex; it "also refers to a person's sense of identity based on those attractions, related behaviors, and membership in a community of others who share those attractions." Someone who is heterosexual is commonly referred to as "straight."
Along with bisexuality and homosexuality, heterosexuality is one of the three main categories of sexual orientation within the heterosexual–homosexual continuum. Across cultures, most people are heterosexual, and heterosexual activity is by far the most common type of sexual activity.
Scientists do not know the exact cause of sexual orientation, but they theorize that it is caused by a complex interplay of genetic, hormonal, and environmental influences, and do not view it as a choice. Although no single theory on the cause of sexual orientation has yet gained widespread support, scientists favor biologically-based theories. There is considerably more evidence supporting nonsocial, biological causes of sexual orientation than social ones, especially for males.
The term "heterosexual" or "heterosexuality" is usually applied to humans, but heterosexual behavior is observed in all mammals and in other animals, as it is necessary for sexual reproduction.
"Hetero-" comes from the Greek word "ἕτερος" [héteros], meaning "other party" or "another", used in science as a prefix meaning "different"; and the Latin word for sex (that is, characteristic sex or sexual differentiation).
The current use of the term "heterosexual" has its roots in the broader 19th century tradition of personality taxonomy. The term "heterosexual" was coined alongside the word "homosexual" by Karl Maria Kertbeny in 1869. The terms were not in current use during the late nineteenth century, but were reintroduced by Richard von Krafft-Ebing and Albert Moll around 1890. The noun came into wider use from the early 1920s, but did not enter common use until the 1960s. The colloquial shortening "hetero" is attested from 1933. The abstract noun "heterosexuality" is first recorded in 1900. The word "heterosexual" was listed in Merriam-Webster's "New International Dictionary" in 1923 as a medical term for "morbid sexual passion for one of the opposite sex"; however, in 1934 in their "Second Edition Unabridged" it is defined as a "manifestation of sexual passion for one of the opposite sex; normal sexuality".
In LGBT slang, the term "breeder" has been used as a denigrating phrase to deride heterosexuals. Hyponyms of heterosexual include "heteroflexible".
The word can be informally shortened to "hetero". The term "straight" originated as a mid-20th century gay slang term for heterosexuals, ultimately coming from the phrase "to go straight" (as in "straight and narrow"), i.e., to stop engaging in homosexual sex. One of the first uses of the word in this way was in 1941 by author G. W. Henry. Henry's book concerned conversations with homosexual males and used this term in connection with people who are identified as ex-gays. It is now simply a colloquial term for "heterosexual", having changed in primary meaning over time. Some object to usage of the term "straight" because it implies that non-heterosexual people are crooked.
In their 2016 literature review, Bailey "et al." stated that they "expect that in all cultures the vast majority of individuals are sexually predisposed exclusively to the other sex (i.e., heterosexual)" and that there is no persuasive evidence that the demographics of sexual orientation have varied much across time or place. Heterosexual activity between only one male and one female is by far the most common type of sociosexual activity.
According to several major studies, 89% to 98% of people have had only heterosexual contact within their lifetime; this percentage falls to 79–84% when same-sex attraction, same-sex behavior, or both are also counted.
A 1992 study reported that 93.9% of males in Britain have only ever had heterosexual experience, while in France the number was reported at 95.9%. According to a 2008 poll, 85% of Britons have had only opposite-sex sexual contact, while 94% of Britons identify themselves as heterosexual. Similarly, a survey by the UK Office for National Statistics (ONS) in 2010 found that 95% of Britons identified as heterosexual, 1.5% of Britons identified themselves as homosexual or bisexual, and the remaining 3.5% gave more vague answers such as "don't know", "other", or did not respond to the question. In the United States, according to a Williams Institute report in April 2011, 96%, or approximately 250 million, of the adult population are heterosexual.
An October 2012 Gallup poll provided unprecedented demographic information about those who identify as heterosexual, arriving at the conclusion that 96.6% of all U.S. adults, with a margin of error of ±1%, identify as heterosexual.
In a 2015 YouGov survey of 1,000 adults of the United States, 89% of the sample identified as heterosexual, 4% as homosexual (2% as homosexual male and 2% as homosexual female) and 4% as bisexual (of either sex).
Bailey "et al.", in their 2016 review, stated that in recent Western surveys, about 93% of men and 87% of women identify as completely heterosexual, and about 4% of men and 10% of women as mostly heterosexual.
No simple and singular determinant for sexual orientation has been conclusively demonstrated, but scientists believe that a combination of genetic, hormonal, and environmental factors determine sexual orientation. They favor biological theories for explaining the causes of sexual orientation, as there is considerably more evidence supporting nonsocial, biological causes than social ones, especially for males.
Factors related to the development of a heterosexual orientation include genes, prenatal hormones, and brain structure, and their interaction with the environment.
The neurobiology of the masculinization of the brain is fairly well understood. Testosterone, which the enzyme 5α-reductase converts into dihydrotestosterone, and estradiol act upon androgen receptors in the brain to masculinize it. If there are few androgen receptors (as in people with androgen insensitivity syndrome) or too much androgen (as in females with congenital adrenal hyperplasia), there can be physical and psychological effects. It has been suggested that both male and female heterosexuality are results of this process. In these studies, heterosexuality in females is linked to a lower degree of masculinization than is found in lesbian females, while for male heterosexuality there are results supporting both higher and lower degrees of masculinization than in homosexual males.
Sexual reproduction in the animal world is facilitated through opposite-sex sexual activity, although there are also animals that reproduce asexually, including protozoa and lower invertebrates.
Reproductive sex does not require a heterosexual orientation, since sexual orientation typically refers to a long-term enduring pattern of sexual and emotional attraction often leading to long-term social bonding, while reproduction requires as little as a single act of copulation to fertilize the ovum by sperm.
Often, sexual orientation and sexual orientation identity are not distinguished, which can impact accurately assessing sexual identity and whether or not sexual orientation is able to change; sexual orientation identity can change throughout an individual's life, and may or may not align with biological sex, sexual behavior or actual sexual orientation. Sexual orientation is stable and unlikely to change for the vast majority of people, but some research indicates that some people may experience change in their sexual orientation, and this is more likely for women than for men.
Hopewell Centre (Hong Kong)
Hopewell Centre is a 64-storey skyscraper at 183 Queen's Road East, in Wan Chai, Hong Kong Island in Hong Kong. The tower is the first circular skyscraper in Hong Kong. It is named after Hong Kong–listed property firm Hopewell Holdings Limited, which constructed the building. Hopewell Holdings Limited's headquarters are in the building, and its chief executive officer, Gordon Wu, has his office on the top floor.
Construction started in 1977 and was completed in 1980. Upon completion, Hopewell Centre surpassed Jardine House as Hong Kong's tallest building. It was also the second tallest building in Asia at the time. It kept its title in Hong Kong until 1989, when the Bank of China Tower was completed. The building is now the 20th tallest building in Hong Kong.
The building has a circular floor plan. Although the front entrance is on the 'ground floor', commuters are taken through a set of escalators to the 3rd floor lift lobby. Hopewell Centre stands on the slope of a hill so steep that the building has its back entrance on the 17th floor towards Kennedy Road. There is a circular private swimming pool on the roof of the building built for feng shui reasons.
A revolving restaurant located on the 62nd floor, called "Revolving 66", overlooks other tall buildings below and the harbour. It was originally called Revolving 62, but soon changed its name as locals kept calling it Revolving 66. It completes a 360-degree rotation each hour. Passengers take either office lifts (faster) or the scenic lifts (with a view) to the 56/F, where they transfer to smaller lifts up to the 62/F. The restaurant is now named The Grand Buffet.
The building comprises several groups of lifts. Lobbies are on the 3rd and 17th floor, and are connected to Queen's Road East and Kennedy Road respectively. A mini-skylobby is on the 56th floor and serves as a transfer floor for diners heading to the 60/F and 62/F restaurants. The building's white 'bumps' between the windows have built in window-washer guide rails.
This skyscraper was the filming location for R&B group Dru Hill's music video for "How Deep Is Your Love," directed by Brett Ratner, who also directed the movie Rush Hour, whose soundtrack features the song. The circular private swimming pool is clearly visible in the video. The pool has also been featured in an Australian television advertisement by Tattersall's Limited, one of that country's major gaming companies, promoting a weekly lottery competition.
Harwich, Massachusetts
Harwich is a New England town on Cape Cod, in Barnstable County in the state of Massachusetts in the United States. At the 2010 census it had a population of 12,243. The town is a popular vacation spot, located near the Cape Cod National Seashore. Harwich's beaches are on the Nantucket Sound side of Cape Cod. Harwich has three active harbors. Saquatucket, Wychmere and Allen Harbors are all in Harwich Port. The town of Harwich includes the villages of Pleasant Lake, West Harwich, East Harwich, Harwich Port, Harwich Center, North Harwich and South Harwich.
Harwich was first settled by Europeans in 1670 as part of Yarmouth. The town was officially incorporated in 1694, and originally included the lands of the current town of Brewster. Early industry involved fishing and farming. The town is considered by some to be the birthplace of the cranberry industry, with the first commercial operation opened in 1846. There are still many bogs in the town, although the economy is now more centered on tourism and as a residential community. The town is also the site of the start/finish line of the "Sail Around the Cape", which rounds the Cape counter-clockwise, returning via the Cape Cod Canal.
Since 1976, the town has hosted the annual Harwich Cranberry Festival, noted for its fireworks display, in September.
In the summer, the town is host to the Harwich Mariners of the Cape Cod Baseball League. The Mariners were the 2008 league champions. The team plays at Whitehouse Field.
The Patriot Square Shopping Center in neighboring South Dennis is convenient for residents of North Harwich and West Harwich. The plaza contains a Stop & Shop supermarket and other stores around it. Supermarkets in Harwich include a Shaw's Star Market on the Harwich Port/West Harwich border and another Stop & Shop in East Harwich.
According to the United States Census Bureau, 36.97% of the town's total area is water, the remainder land. The seven villages of Harwich are West Harwich, North Harwich, East Harwich, South Harwich, Harwich Center, Harwich Port and Pleasant Lake. These are also referred to as the Harwiches.
Harwich is on the southern side of Cape Cod, just west of the southeastern corner. It is bordered by Dennis to the west, Brewster to the north, Orleans to the northeast, Chatham to the east, and Nantucket Sound to the south. Harwich lies east of Barnstable, east of the Cape Cod Canal, south of Provincetown, and southeast of Boston.
The town shares the largest lake on the Cape, called Long Pond, with the town of Brewster. Long Pond serves as a private airport for planes with the ability to land on water. The village of Pleasant Lake is at the southwest corner of the lake. Numerous other smaller bodies of water dot the town. Sand Pond, a public beach and swimming area, is located off Great Western Road in North Harwich.
The shore is home to several harbors and rivers, including the Herring River, Allens Harbor, Wychmere Harbor, Saquatucket Harbor, and the Andrews River. The town is also home to Hawksnest State Park, as well as a marina and several beaches, including two on Long Pond. There are also many beaches in West Harwich and South Harwich.
According to the Köppen climate classification system, Harwich, Massachusetts has a warm-summer humid continental climate ("Dfb") with no dry season. Dfb climates are characterized by at least one month having an average mean temperature ≤ 32.0 °F (≤ 0.0 °C), at least four months with an average mean temperature ≥ 50.0 °F (≥ 10.0 °C), all months with an average mean temperature ≤ 71.6 °F (≤ 22.0 °C), and no significant precipitation difference between seasons. The average seasonal (Nov–Apr) snowfall total is around 30 in (76 cm). February is on average the snowiest month, corresponding with the annual peak in nor'easter activity. The plant hardiness zone is 7a, with an average annual extreme minimum air temperature of 4.0 °F (−15.6 °C).
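The numeric thresholds above can be checked mechanically. The following sketch tests the "Dfb" temperature criteria against twelve monthly mean temperatures; the sample values are illustrative, not measured data for Harwich, and the even-precipitation ("f") condition is assumed rather than computed:

```python
def koppen_dfb(monthly_means_f):
    """Check the Dfb temperature criteria against 12 monthly mean
    temperatures in °F. The 'f' (no dry season) condition is assumed."""
    assert len(monthly_means_f) == 12
    coldest = min(monthly_means_f)
    warmest = max(monthly_means_f)
    warm_months = sum(1 for t in monthly_means_f if t >= 50.0)
    return (coldest <= 32.0        # 'D': at least one month averaging <= 32.0 °F
            and warm_months >= 4   # 'b': four or more months averaging >= 50.0 °F
            and warmest <= 71.6)   # not 'a': no month averaging above 71.6 °F

# Illustrative monthly means resembling a coastal New England profile:
sample = [30, 31, 38, 47, 56, 65, 71, 70, 64, 54, 45, 35]
print(koppen_dfb(sample))  # True
```

A profile with no month at or below freezing, or with a month averaging above 71.6 °F, would fall into a different Köppen class ("C" or "Dfa" respectively).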
According to the A. W. Küchler U.S. potential natural vegetation types, Harwich, Massachusetts would primarily contain a Northeastern Oak/Pine ("110") vegetation type with a Southern Mixed Forest ("26") vegetation form.
As of the census of 2000, there were 12,386 people, 5,471 households, and 3,545 families residing in the town. The population density was 588.6 people per square mile (227.3/km²). There were 9,450 housing units at an average density of 449.1 per square mile (173.4/km²). The racial makeup of the town was 95.41% White, 0.71% Black or African American, 0.19% Native American, 0.22% Asian, 0.05% Pacific Islander, 2.03% from other races, and 1.40% from two or more races. 0.96% of the population were Hispanic or Latino of any race.
There were 5,471 households out of which 21.3% had children under the age of 18 living with them, 53.4% were married couples living together, 9.0% had a female householder with no husband present, and 35.2% were non-families. 29.8% of all households were made up of individuals and 16.9% had someone living alone who was 65 years of age or older. The average household size was 2.20 and the average family size was 2.72.
In the town, the population was spread out with 18.3% under the age of 18, 4.2% from 18 to 24, 22.1% from 25 to 44, 25.8% from 45 to 64, and 29.6% who were 65 years of age or older. The median age was 49 years. For every 100 females, there were 84.5 males. For every 100 females age 18 and over, there were 79.7 males.
The median income for a household in the town was $41,552, and the median income for a family was $51,070. Males had a median income of $38,948 versus $27,439 for females. The per capita income for the town was $23,063. About 2.9% of families and 15.5% of the population were below the poverty line, including 8.4% of those under age 18 and 8.1% of those age 65 or over.
The town of Harwich contains several smaller census-designated places (CDPs) for which the U.S. Census reports more focused geographic and demographic information. The CDPs in Harwich are Harwich Center, Harwich Port (including South Harwich), East Harwich and Northwest Harwich (including West Harwich, North Harwich, and Pleasant Lake).
Harwich is represented in the Massachusetts House of Representatives as a part of the Fourth Barnstable district, which includes (with the exception of Brewster) all the towns east and north of Harwich on the Cape. The town is represented in the Massachusetts Senate as a part of the Cape and Islands District, which includes all of Cape Cod, Martha's Vineyard and Nantucket except the towns of Bourne, Falmouth, Sandwich and a portion of Barnstable. The town is patrolled by the Second (Yarmouth) Barracks of Troop D of the Massachusetts State Police.
On the national level, Harwich is a part of Massachusetts's 9th congressional district, and is currently represented by William R. Keating. The state's senior member of the United States Senate is Elizabeth Warren, elected in 2012. The junior senator is Ed Markey, elected in 2013.
Harwich is governed by the open town meeting form of government, led by a town administrator and a board of selectmen.
There are three libraries in the town. The municipal library, the Brooks Free Library in Harwich Center, is the largest and is a member of the Cape Libraries Automated Materials Sharing (CLAMS) library network. There are two smaller non-municipal libraries – the Chase Library on Route 28 in West Harwich at the Dennis town line, and the Harwich Port Library on Lower Bank Street in Harwich Port.
Harwich is the site of the Long Pond Medical Center, which serves the southeastern Cape region.
Harwich has police and fire departments, with one fire and police station headquarters, and Station 2 in East Harwich.
There are post offices in Harwich Port, South Harwich, West Harwich, and East Harwich.
Harwich's schools are part of the Monomoy Regional School District. Harwich Elementary School serves students from pre-school through fourth grade; Monomoy Regional Middle School, shared with the neighboring town of Chatham, serves grades 5–7; and Monomoy Regional High School serves grades 8–12 for both Harwich and Chatham. Monomoy's teams are known as the Sharks. Harwich is known for its excellent boys' basketball, girls' basketball, girls' field hockey, softball and baseball teams.
The Lighthouse Charter School moved into the former Harwich Cinema building.
Harwich is the site of Cape Cod Regional Technical High School, a grades 9–12 high school which serves most of Cape Cod. The town is also home to Holy Trinity PreSchool, a Catholic pre-school which serves pre-kindergarten in West Harwich.
Two of Massachusetts major routes, U.S. Route 6 and Massachusetts Route 28, cross the town. The town has the southern termini of Routes 39 and 124, and a portion of Route 137 passes through the town. Route 39 leads east through East Harwich to Orleans. Route 28 passes through West Harwich and Harwich Port, connecting the towns of Dennis and Chatham. Route 124 leads from Harwich Center to Brewster, and Route 137 cuts through East Harwich leading from Chatham to Brewster.
A portion of the Cape Cod Rail Trail, as well as several other bicycle routes, are in town. There is no rail service in town, but the Cape Cod Rail Trail rotary is located in North Harwich near Main Street.
Other than the occasional sea plane landing on the pond, the nearest airport is in neighboring Chatham; the nearest regional service is at Barnstable Municipal Airport; and the nearest national and international air service is at Logan International Airport in Boston.
In recent years parts of Cape Cod have introduced bus service, especially during the summer, to help cut down on traffic.
Hull classification symbol
The United States Navy, United States Coast Guard, and United States National Oceanic and Atmospheric Administration (NOAA) use a hull classification symbol (sometimes called hull code or hull number) to identify their ships by type and by individual ship within a type. The system is analogous to the pennant number system that the Royal Navy and other European and Commonwealth navies use.
The U.S. Navy began to assign unique Naval Registry Identification Numbers to its ships in the 1890s. The system was a simple one in which each ship received a number which was appended to its ship type, fully spelled out, and added parenthetically after the ship's name when deemed necessary to avoid confusion between ships. Under this system, for example, the battleship "Indiana" was USS "Indiana" (Battleship No. 1), the cruiser "Olympia" was USS "Olympia" (Cruiser No. 6), and so on. Beginning in 1907, some ships also were referred to alternatively by single-letter or three-letter codes—for example, USS "Indiana" (Battleship No. 1) could be referred to as USS "Indiana" (B-1) and USS "Olympia" (Cruiser No. 6) could also be referred to as USS "Olympia" (C-6), while USS "Pennsylvania" (Armored Cruiser No. 4) could be referred to as USS "Pennsylvania" (ACR-4). However, rather than replacing it, these codes coexisted and were used interchangeably with the older system until the modern system was instituted on 17 July 1920.
During World War I, the U.S. Navy acquired large numbers of privately owned and commercial ships and craft for use as patrol vessels, mine warfare vessels, and various types of naval auxiliary ships, some of them with identical names. To keep track of them all, the Navy assigned unique identifying numbers to them. Those deemed appropriate for patrol work received section patrol numbers (SP), while those intended for other purposes received "identification numbers", generally abbreviated "Id. No." or "ID"; some ships and craft changed from an SP to an ID number or vice versa during their careers, without their unique numbers themselves changing, and some ships and craft assigned numbers in anticipation of naval service were never acquired by the Navy. The SP/ID numbering sequence was unified and continuous, with no SP number repeated in the ID series or vice versa, so that there could not be, for example, both an "SP-435" and an "Id. No. 435". The SP and ID numbers were used parenthetically after each boat's or ship's name to identify it; although this system pre-dated the modern hull classification system and its numbers were not referred to at the time as "hull codes" or "hull numbers," it was used in a similar manner to today's system and can be considered its precursor.
The United States Revenue Cutter Service, which merged with the United States Lifesaving Service in January 1915 to form the modern United States Coast Guard, began following the Navy's lead in the 1890s, with its cutters having parenthetical numbers called Naval Registry Identification Numbers following their names, such as (Cutter No. 1), etc. This persisted until the Navy's modern hull classification system's introduction in 1920, which included Coast Guard ships and craft.
Like the U.S. Navy, the United States Coast and Geodetic Survey – a uniformed seagoing service of the United States Government and a predecessor of the National Oceanic and Atmospheric Administration (NOAA) – adopted a hull number system for its fleet in the 20th century. Its largest vessels, "Category I" oceanographic survey ships, were classified as "ocean survey ships" and given the designation "OSS". Intermediate-sized "Category II" oceanographic survey ships received the designation "MSS" for "medium survey ship," and smaller "Category III" oceanographic survey ships were given the classification "CSS" for "coastal survey ship." A fourth designation, "ASV" for "auxiliary survey vessel," included even smaller vessels. In each case, a particular ship received a unique designation based on its classification and a unique hull number separated by a space rather than a hyphen; for example, the third Coast and Geodetic Survey ship named "Pioneer" was an ocean survey ship officially known as USC&GS "Pioneer" (OSS 31). The Coast and Geodetic Survey's system persisted after the creation of NOAA in 1970, when NOAA took control of the Survey's fleet, but NOAA later changed to its modern hull classification system.
The U.S. Navy instituted its modern hull classification system on 17 July 1920, doing away with section patrol numbers, "identification numbers", and the other numbering systems described above. In the new system, all hull classification symbols are at least two letters; for basic types the symbol is the first letter of the type name, doubled, except for aircraft carriers.
The combination of symbol and hull number identifies a modern Navy ship uniquely. A heavily modified or re-purposed ship may receive a new symbol, and either retain the hull number or receive a new one. For example, one heavy gun cruiser converted to a gun/missile cruiser had its designation changed to CAG-1. Also, the system of symbols has changed a number of times both since it was introduced in 1907 and since the modern system was instituted in 1920, so ships' symbols sometimes change without anything being done to the physical ship.
Hull numbers are assigned by classification. Duplication between, but not within, classifications is permitted. Hence, CV-1 was the aircraft carrier "Langley" and BB-1 was the battleship "Indiana".
Ship types and classifications have come and gone over the years, and many of the symbols listed below are not presently in use. The Naval Vessel Register maintains an online database of U.S. Navy ships showing which symbols are presently in use.
After World War II until 1975, the U.S. Navy defined a "frigate" as a type of surface warship larger than a destroyer and smaller than a cruiser. In other navies, such a ship generally was referred to as a "flotilla leader", or "destroyer leader". Hence the U.S. Navy's use of "DL" for "frigate" prior to 1975, while "frigates" in other navies were smaller than destroyers and more like what the U.S. Navy termed a "destroyer escort", "ocean escort", or "DE". The United States Navy 1975 ship reclassification of cruisers, frigates, and ocean escorts brought U.S. Navy classifications into line with other nations' classifications, at least cosmetically in terms of terminology, and eliminated the perceived "cruiser gap" with the Soviet Navy by redesignating the former "frigates" as "cruisers".
If a U.S. Navy ship's hull classification symbol begins with "T-", it is part of the Military Sealift Command, has a primarily civilian crew, and is a United States Naval Ship (USNS) in non-commissioned service – as opposed to a commissioned United States Ship (USS) with an all-military crew.
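As a sketch of how the designation format described above decomposes, the following illustrative parser splits a designation into an optional Military Sealift Command "T-" prefix, a type code, and a hull number. The type-name table is a small illustrative subset chosen from codes mentioned in this article, not the official list, which is maintained in the Naval Vessel Register:

```python
import re

# Illustrative subset of type codes; the authoritative list is governed
# by SECNAVINST 5030.8B and the Naval Vessel Register.
TYPE_NAMES = {"BB": "Battleship", "CV": "Aircraft carrier",
              "DD": "Destroyer", "SS": "Submarine", "FF": "Frigate"}

def parse_hull_symbol(designation):
    """Split a designation like 'CV-1' or 'T-AO-187' into
    (is_msc, type_code, hull_number)."""
    m = re.fullmatch(r"(T-)?([A-Z]+)-(\d+)", designation)
    if not m:
        raise ValueError(f"not a hull designation: {designation!r}")
    msc, code, number = m.groups()
    return bool(msc), code, int(number)

print(parse_hull_symbol("CV-1"))      # (False, 'CV', 1)
print(parse_hull_symbol("T-AO-187"))  # (True, 'AO', 187)
```

Because duplication is permitted between classifications but not within them, the pair (type code, hull number) rather than the number alone is the unique key.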
If a ship's hull classification symbol begins with "W", it is a commissioned cutter of the United States Coast Guard. Until 1965, the Coast Guard used U.S. Navy hull classification codes, prepending a "W" to their beginning. In 1965, it retired some of the less mission-appropriate Navy-based classifications and developed new ones of its own, most notably WHEC for "high endurance cutter" and WMEC for "medium endurance cutter".
The National Oceanic and Atmospheric Administration (NOAA), a component of the United States Department of Commerce, includes the National Oceanic and Atmospheric Administration Commissioned Officer Corps (or "NOAA Corps"), one of the eight uniformed services of the United States, and operates a fleet of seagoing research and survey ships. The NOAA fleet also uses a hull classification symbol system, which it also calls "hull numbers," for its ships.
After NOAA took over the former Coast and Geodetic Survey fleet in 1970 along with research vessels of other government agencies, it adopted a new system of ship classification. In its system, the NOAA fleet is divided into two broad categories, research ships and survey ships. The research ships, which include oceanographic and fisheries research vessels, are given hull numbers beginning with "R", while the survey ships, generally hydrographic survey vessels, receive hull numbers beginning with "S". The letter is followed by a three-digit number; the first digit indicates the NOAA "class" (i.e., size) of the vessel, which NOAA assigns based on the ship's gross tonnage and horsepower, while the next two digits combine with the first digit to create a unique three-digit identifying number for the ship.
Generally, each NOAA hull number is written with a space between the letter and the three-digit number, as in, for example, "S 222".
Unlike the Navy, NOAA may reassign the hull number of a ship that has left service to a newer one; for example, "S 222" was assigned to NOAAS "Mount Mitchell" (S 222) and later to NOAAS "Thomas Jefferson" (S 222), which entered NOAA service after "Mount Mitchell" was stricken.
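The NOAA numbering structure described above lends itself to a simple parser. This sketch assumes only the letter/class/number layout given in the text ("R" for research ships, "S" for survey ships, and three digits whose first digit is the NOAA size class):

```python
def parse_noaa_hull_number(hull):
    """Parse a NOAA hull number like 'S 222' into
    (category, size_class, identifying_number)."""
    letter, digits = hull.split()
    if letter not in ("R", "S") or len(digits) != 3 or not digits.isdigit():
        raise ValueError(f"not a NOAA hull number: {hull!r}")
    category = "research" if letter == "R" else "survey"
    # The first digit is the size class; all three digits together
    # form the ship's unique identifying number within the fleet.
    return category, int(digits[0]), digits

print(parse_noaa_hull_number("S 222"))  # ('survey', 2, '222')
```

Note that, unlike Navy designations, the letter and number are separated by a space rather than a hyphen, and a number may be reused once its previous holder leaves service.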
The U.S. Navy's system of alpha-numeric ship designators, and its associated hull numbers, have been for several decades a unique method of categorizing ships of all types: combatants, auxiliaries and district craft. Though considerably changed in detail and expanded over the years, this system remains essentially the same as when formally implemented in 1920. It is a very useful tool for organizing and keeping track of naval vessels, and also provides the basis for the identification numbers painted on the bows (and frequently the sterns) of most U.S. Navy ships.
The ship designator and hull number system's roots extend back to the late 1880s when ship type serial numbers were assigned to most of the new-construction warships of the emerging "Steel Navy". During the course of the next thirty years, these same numbers were combined with filing codes used by the Navy's clerks to create an informal version of the system that was put in place in 1920. Limited usage of ship numbers goes back even earlier, most notably to the "Jeffersonian Gunboats" of the early 1800s and the "Tinclad" river gunboats of the Civil War Mississippi Squadron.
It is important to understand that hull number-letter prefixes are not acronyms, and should not be carelessly treated as abbreviations of ship type classifications. Thus, "DD" does not stand for anything more than "Destroyer". "SS" simply means "Submarine". And "FF" is the post-1975 type code for "Frigate."
The hull classification codes for ships in active duty in the United States Navy are governed under Secretary of the Navy Instruction 5030.8B (SECNAVINST 5030.8B).
Warships are designed to participate in combat operations.
The origin of the two-letter code derives from the need to distinguish various cruiser subtypes.
Aircraft carriers are ships designed primarily for the purpose of conducting combat operations by aircraft which engage in attacks against airborne, surface, sub-surface and shore targets. Contrary to popular belief, the "CV" hull classification symbol does not stand for "carrier vessel". "CV" derives from the cruiser designation, with the v for French "voler", "to fly". Aircraft carriers are designated in two sequences: the first sequence runs from CV-1 USS "Langley" to the very latest ships, and the second sequence, "CVE" for escort carriers, ran from CVE-1 "Long Island" to CVE-127 "Okinawa" before being discontinued.
Surface combatants are ships which are designed primarily to engage enemy forces on the high seas. The primary surface combatants are battleships, cruisers and destroyers. Battleships are very heavily armed and armored; cruisers moderately so; destroyers and smaller warships, less so. Before 1920, ships were referred to by their spelled-out type and serial number, such as "Battleship No. X", commonly abbreviated in ship lists to "B-X", "C-X", "D-X", et cetera—for example, before 1920, USS "Minnesota" would have been called "USS "Minnesota", Battleship No. 22" orally and "USS "Minnesota", B-22" in writing. After 1920, the ship's name would have been both written and pronounced "USS "Minnesota" (BB-22)". In generally decreasing size, the types are:
Submarines are all self-propelled submersible types (with symbols usually starting with SS) regardless of whether employed as combatant, auxiliary, or research and development vehicles, so long as they have at least a residual combat capability. While some classes, including all diesel-electric submarines, are retired from USN service, non-U.S. navies continue to employ SS, SSA, SSAN, SSB, SSC, SSG, SSM, and SST types. With the advent of new Air Independent Propulsion/Power (AIP) systems, both SSI and SSP are used to distinguish the types within the USN, but SSP has been declared the preferred term. SSK, retired by the USN, continues to be used colloquially and interchangeably with SS for diesel-electric attack/patrol submarines within the USN and, more formally, by the Royal Navy and British firms such as Jane's Information Group.
Patrol combatants are ships whose mission may extend beyond coastal duties and whose characteristics include adequate endurance and seakeeping, providing a capability for operations exceeding 48 hours on the high seas without support. This notably included Brown Water Navy/Riverine Forces during the Vietnam War. Few of these ships are in service today.
Amphibious warfare vessels include all ships having organic capability for amphibious warfare and which have characteristics enabling long duration operations on the high seas. There are two classifications of craft: amphibious warfare ships which are built to cross oceans, and landing craft, which are designed to take troops from ship to shore in an invasion.
Ships
Landing Craft
Ships operated by the Military Sealift Command carry the ship prefix "USNS" and hull codes beginning with "T-".
Ships which have the capability to provide underway replenishment to fleet units.
Mine warfare ships are those ships whose primary function is mine warfare on the high seas.
Coastal defense ships are those whose primary function is coastal patrol and interdiction.
Mobile logistics ships have the capability to provide direct material support to other deployed units operating far from home ports.
An auxiliary ship is designed to operate in any number of roles supporting combatant ships and other naval operations.
Although technically an aircraft, pre-World War II rigid airships (e.g., zeppelins) were treated like commissioned surface warships and submarines, flew the U.S. ensign from their stern and carried a United States Ship (USS) designation. Non-rigid airships (e.g., blimps) continued to fly the U.S. ensign from their stern but were always considered to be primarily aircraft.
Support ships are not designed to participate in combat and are generally not armed. For ships with civilian crews (owned by and/or operated for Military Sealift Command and the Maritime Administration), the prefix T- is placed at the front of the hull classification.
Support ships are designed to operate in the open ocean in a variety of sea states to provide general support to either combatant forces or shore-based establishments. They include smaller auxiliaries which, by the nature of their duties, leave inshore waters.
Service craft are navy-subordinated craft (including non-self-propelled) designed to provide general support to either combatant forces or shore-based establishments. The suffix "N" refers to non-self-propelled variants.
Prior to 1965, U.S. Coast Guard cutters used the same designation as naval ships but preceded by a "W" to indicate Coast Guard commission. The U.S. Coast Guard considers any ship over 65 feet in length with a permanently assigned crew to be a cutter.
United States Navy Designations (Temporary) are a form of U.S. Navy ship designation, intended for temporary identification use. Such designations usually occur during periods of sudden mobilization, such as that which occurred prior to, and during, World War II or the Korean War, when it was determined that a sudden temporary need arose for a ship for which there was no official Navy designation.
During World War II, for example, a number of commercial vessels were requisitioned, or acquired, by the U.S. Navy to meet the sudden requirements of war. A yacht acquired by the U.S. Navy at the start of World War II might have seemed desirable even though the Navy's use for the vessel had not been fully developed or explored at the time of acquisition.
On the other hand, a U.S. Navy vessel, such as the yacht in the example above, already in commission or service, might be desired, or found useful, for another need or purpose for which there is no official designation.
Numerous other U.S. Navy vessels were launched with a temporary, or nominal, designation, such as YMS or PC, since it could not be determined, at the time of construction, what they should be used for. Many of these were vessels in the 150 to 200 feet length class with powerful engines, whose function could be that of a minesweeper, patrol craft, submarine chaser, seaplane tender, tugboat, or other role. Once their role or capability was determined, such vessels were reclassified with their actual designation.
The letter is paired with a three-digit number. The first digit of the number is determined by the ship's "power tonnage," defined as the sum of its shaft horsepower and gross international tonnage, as follows:
The second and third digits are assigned to create a unique three-digit hull number.
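The numbering rule above can be sketched in code. Note that the actual table mapping power tonnage to the first digit is not reproduced in this text, so the threshold bands below are purely illustrative assumptions, not the Navy's real values; only the definition of power tonnage (shaft horsepower plus gross international tonnage) comes from the text.

```python
def power_tonnage(shaft_horsepower: int, gross_tonnage: int) -> int:
    """'Power tonnage' as defined above: shaft horsepower plus
    gross international tonnage."""
    return shaft_horsepower + gross_tonnage


# Illustrative placeholder bands only -- the real power-tonnage table
# is not given in the text. Each entry is (exclusive upper bound, digit).
ILLUSTRATIVE_BANDS = [
    (1_000, 1),
    (10_000, 2),
    (100_000, 3),
]


def first_digit(pt: int, bands=ILLUSTRATIVE_BANDS) -> int:
    """Map a power tonnage onto the first digit of the hull number,
    using the supplied (hypothetical) threshold bands."""
    for upper_bound, digit in bands:
        if pt < upper_bound:
            return digit
    return bands[-1][1] + 1  # above the highest band


def hull_number(shaft_horsepower: int, gross_tonnage: int, serial: int) -> str:
    """Combine the power-tonnage digit with a two-digit serial
    to form the unique three-digit hull number."""
    pt = power_tonnage(shaft_horsepower, gross_tonnage)
    return f"{first_digit(pt)}{serial:02d}"
```

For example, under these assumed bands a vessel with 5,000 shaft horsepower and 2,000 gross tons (power tonnage 7,000) assigned serial 05 would receive hull number "205".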
Habeas corpus
Habeas corpus (Medieval Latin meaning "[we, a Court, command] that you have the body [of the detainee brought before us]") is a recourse in law through which a person can report an unlawful detention or imprisonment to a court and request that the court order the custodian of the person, usually a prison official, to bring the prisoner to court, to determine whether the detention is lawful.
The writ of "habeas corpus" is known as the "great and efficacious writ in all manner of illegal confinement". It is a summons with the force of a court order; it is addressed to the custodian (a prison official, for example) and demands that a prisoner be brought before the court, and that the custodian present proof of authority, allowing the court to determine whether the custodian has lawful authority to detain the prisoner. If the custodian is acting beyond their authority, then the prisoner must be released. Any prisoner, or another person acting on their behalf, may petition the court, or a judge, for a writ of "habeas corpus". One reason for the writ to be sought by a person other than the prisoner is that the detainee might be held incommunicado. Most civil law jurisdictions provide a similar remedy for those unlawfully detained, but this is not always called "habeas corpus". For example, in some Spanish-speaking nations, the equivalent remedy for unlawful imprisonment is the "amparo de libertad" ("protection of freedom").
"Habeas corpus" has certain limitations. Though a writ of right, it is not a writ of course. It is technically only a procedural remedy; it is a guarantee against any detention that is forbidden by law, but it does not necessarily protect other rights, such as the entitlement to a fair trial. So if an imposition such as internment without trial is permitted by the law, then "habeas corpus" may not be a useful remedy. In some countries, the writ has been temporarily or permanently suspended under the pretext of war or state of emergency.
The right to petition for a writ of "habeas corpus" has nonetheless long been celebrated as the most efficient safeguard of the liberty of the subject. The jurist Albert Venn Dicey wrote that the British Habeas Corpus Acts "declare no principle and define no rights, but they are for practical purposes worth a hundred constitutional articles guaranteeing individual liberty".
The writ of "habeas corpus" is one of what are called the "extraordinary", "common law", or "prerogative writs", which were historically issued by the English courts in the name of the monarch to control inferior courts and public authorities within the kingdom. The most common of the other such prerogative writs are "quo warranto", "prohibito", "mandamus", "procedendo", and "certiorari". The due process for such petitions is not simply civil or criminal, because they incorporate the presumption of non-authority. The official who is the respondent must prove their authority to do or not do something. Failing this, the court must decide for the petitioner, who may be any person, not just an interested party. This differs from a motion in a civil process in which the movant must have standing, and bears the burden of proof.
From Latin "habeas", 2nd person singular present subjunctive active of "habere", "to have", "to hold"; and "corpus", accusative singular of "corpus", "body". In reference to more than one person, "habeas corpora".
Literally, the phrase means "[we command] that you should have the [detainee's] body [brought to court]". The complete phrase "habeas corpus [coram nobis] ad subjiciendum" means "that you have the person [before us] for the purpose of subjecting (the case to examination)". These are words of writs included in a 14th-century Anglo-French document requiring a person to be brought before a court or judge, especially to determine if that person is being legally detained.
The full name of the writ is often used to distinguish it from similar ancient writs, also named "habeas corpus". These include:
"Habeas corpus" originally stems from the Assize of Clarendon, a re-issuance of rights during the reign of Henry II of England in the 12th century. The foundations for "habeas corpus" are "wrongly thought" to have originated in Magna Carta. This charter declared that:
However, the preceding article of Magna Carta, no. 38, declares:
Pursuant to that language, a person may not be subjected to any legal proceeding, such as arrest and imprisonment, without sufficient evidence having already been collected to show that there is a "prima facie" case to answer. This evidence must be collected beforehand, because it must be available to be exhibited in a public hearing within hours, or at the most days, after arrest, not months or longer as may happen in other jurisdictions that apply Napoleonic-inquisitorial criminal laws where evidence is commonly sought after a suspect's incarceration. Any charge leveled at the hearing thus must be based on evidence already collected, and an arrest and incarceration order is not lawful if not supported by sufficient evidence.
In contrast with the common law approach, consider "Luciano Ferrari-Bravo v. Italy", in which the European Court of Human Rights ruled that "detention is intended to facilitate … the preliminary investigation". Ferrari-Bravo sought relief after nearly five years of preventive detention, and his application was rejected. The court deemed the five-year detention to be "reasonable" under Article 6 of the European Convention on Human Rights, which provides that a prisoner has a right to a public hearing before an impartial tribunal within a "reasonable" time after arrest. After his eventual trial, the evidence against Ferrari-Bravo was deemed insufficient and he was found not guilty.
William Blackstone cites the first recorded usage of "habeas corpus ad subjiciendum" in 1305, during the reign of King Edward I. However, other writs were issued with the same effect as early as the reign of Henry II in the 12th century. Blackstone explained the basis of the writ, saying "[t]he king is at all times entitled to have an account, why the liberty of any of his subjects is restrained, wherever that restraint may be inflicted." The procedure for issuing a writ of "habeas corpus" was first codified by the Habeas Corpus Act 1679, following judicial rulings which had restricted the effectiveness of the writ. A previous law (the Habeas Corpus Act 1640) had been passed forty years earlier to overturn a ruling that the command of the King was a sufficient answer to a petition of "habeas corpus". The cornerstone purpose of the "writ of habeas corpus" was to limit the King's Chancery's ability to undermine the surety of law by allowing courts of justice decisions to be overturned in favor and application of "equity", a process managed by the Chancellor (a bishop) with the King's authority.
The 1679 codification of "habeas corpus" took place in the context of a sharp confrontation between King Charles II and the Parliament, which was dominated by the then sharply oppositional, nascent Whig Party. The Whig leaders had good reasons to fear the King moving against them through the courts (as indeed happened in 1681) and regarded "habeas corpus" as safeguarding their own persons. The short-lived Parliament which made this enactment came to be known as the "Habeas Corpus Parliament" – being dissolved by the King immediately afterwards.
Then, as now, the writ of "habeas corpus" was issued by a superior court in the name of the Sovereign, and commanded the addressee (a lower court, sheriff, or private subject) to produce the prisoner before the royal courts of law. A "habeas corpus" petition could be made by the prisoner him or herself or by a third party on his or her behalf and, as a result of the Habeas Corpus Acts, could be made regardless of whether the court was in session, by presenting the petition to a judge. Since the 18th century the writ has also been used in cases of unlawful detention by private individuals, most famously in "Somersett's Case" (1772), where the black slave, Somersett, was ordered to be freed. During that case, these famous words are said to have been uttered: "... that the air of England was too pure for slavery." (although it was the lawyers in argument who expressly used this phrase – referenced from a much earlier argument heard in The Star Chamber – and not Lord Mansfield himself). During the Seven Years' War and later conflicts, the Writ was used on behalf of soldiers and sailors pressed into military and naval service. The Habeas Corpus Act 1816 introduced some changes and expanded the territoriality of the legislation.
The privilege of "habeas corpus" has been suspended or restricted several times during English history, most recently during the 18th and 19th centuries. Although internment without trial has been authorised by statute since that time, for example during the two World Wars and the Troubles in Northern Ireland, the "habeas corpus" procedure has in modern times always technically remained available to such internees. However, as "habeas corpus" is only a procedural device to examine the lawfulness of a prisoner's detention, so long as the detention is in accordance with an Act of Parliament, the petition for "habeas corpus" is unsuccessful. Since the passage of the Human Rights Act 1998, the courts have been able to declare an Act of Parliament to be incompatible with the European Convention on Human Rights, but such a declaration of incompatibility has no legal effect unless and until it is acted upon by the government.
The wording of the writ of "habeas corpus" implies that the prisoner is brought to the court for the legality of the imprisonment to be examined. However, rather than issuing the writ immediately and waiting for the return of the writ by the custodian, modern practice in England is for the original application to be followed by a hearing with both parties present to decide the legality of the detention, without any writ being issued. If the detention is held to be unlawful, the prisoner can usually then be released or bailed by order of the court without having to be produced before it. With the development of modern public law, applications for habeas corpus have been to some extent discouraged, in favour of applications for judicial review. The writ, however, maintains its vigour, and was held by the UK Supreme Court to be available in respect of a prisoner captured by British forces in Afghanistan, albeit that the Secretary of State made a valid return to the writ justifying the detention of the claimant.
The writ of "habeas corpus" as a procedural remedy is part of Australia's English law inheritance. In 2005, the Australian parliament passed the Australian Anti-Terrorism Act 2005. Some legal experts questioned the constitutionality of the act, due in part to limitations it placed on "habeas corpus".
"Habeas corpus" rights are part of the British legal tradition inherited by Canada. The rights exist in the common law but have been enshrined in section 10(c) of the "Charter of Rights and Freedoms", which states that "[e]veryone has the right on arrest or detention … to have the validity of the detention determined by way of "habeas corpus" and to be released if the detention is not lawful". The test for "habeas corpus" in Canada was recently laid down by the Supreme Court of Canada in "Mission Institution v Khela", as follows:To be successful, an application for "habeas corpus" must satisfy the following criteria. First, the applicant [i.e., the person seeking "habeas corpus" review] must establish that he or she has been deprived of liberty. Once a deprivation of liberty is proven, the applicant must raise a legitimate ground upon which to question its legality. If the applicant has raised such a ground, the onus shifts to the respondent authorities [i.e., the person or institution detaining the applicant] to show that the deprivation of liberty was lawful.Suspension of the writ in Canadian history occurred famously during the October Crisis, during which the "War Measures Act" was invoked by the Governor General of Canada on the constitutional advice of Prime Minister Pierre Trudeau, who had received a request from the Quebec Cabinet. The Act was also used to justify German, Slavic, and Ukrainian Canadian internment during the First World War, and the internment of German-Canadians, Italian-Canadians and Japanese-Canadians during the Second World War. The writ was suspended for several years following the Battle of Fort Erie (1866) during the Fenian Rising, though the suspension was only ever applied to suspects in the Thomas D'Arcy McGee assassination.
The writ is available where there is no other adequate remedy. However, a superior court always has the discretion to grant the writ even in the face of an alternative remedy (see "May v Ferndale Institution"). Under the "Criminal Code" the writ is largely unavailable if a statutory right of appeal exists, whether or not this right has been exercised.
A fundamental human right proclaimed in the "1789 Declaration of the Rights of Man", drafted by Lafayette in cooperation with Thomas Jefferson, the guarantees against arbitrary detention are enshrined in the French Constitution and regulated by the Penal Code. The safeguards are equivalent to the "habeas corpus" provisions found in Germany, the United States and several Commonwealth countries. The French system of accountability prescribes severe penalties for ministers, police officers and civil and judiciary authorities who either violate or fail to enforce the law.
"Article 7 of [1789] Declaration also provides that 'No individual may be accused, arrested, or detained except where the law so prescribes, and in accordance with the procedure it has laid down.' ... The Constitution further states that 'No one may be arbitrarily detained. The judicial authority, guardian of individual liberty, ensures the observance of this principle under the condition specified by law.' Its article 5 provides that everyone has the right to liberty and sets forth permissible circumstances under which people may be deprived of their liberty and procedural safeguards in case of detention. In particular, it states that 'anyone deprived of his liberty by arrest or detention shall be entitled to take proceedings by which the lawfulness of his detention shall be decided speedily by a court and his release ordered if the detention is not lawful'."
France and the United States played a synergistic role in the international team, led by Eleanor Roosevelt, which crafted the Universal Declaration of Human Rights. The French judge and Nobel Peace Laureate René Cassin produced the first draft and argued against arbitrary detentions. René Cassin and the French team subsequently championed the "habeas corpus" provisions enshrined in the European Convention for the Protection of Human Rights and Fundamental Freedoms.
Germany has constitutional guarantees against improper detention and these have been implemented in statutory law in a manner that can be considered as equivalent to writs of habeas corpus.
Article 104, paragraph 1 of the Basic Law for the Federal Republic of Germany provides that deprivations of liberty may be imposed only on the basis of a specific enabling statute that also must include procedural rules. Article 104, paragraph 2 requires that any arrested individual be brought before a judge by the end of the day following the day of the arrest. For those detained as criminal suspects, article 104, paragraph 3 specifically requires that the judge must grant a hearing to the suspect in order to rule on the detention.
Restrictions on the power of the authorities to arrest and detain individuals also emanate from article 2 paragraph 2 of the Basic Law which guarantees liberty and requires a statutory authorization for any deprivation of liberty. In addition, several other articles of the Basic Law have a bearing on the issue. The most important of these are article 19, which generally requires a statutory basis for any infringements of the fundamental rights guaranteed by the Basic Law while also guaranteeing judicial review; article 20, paragraph 3, which guarantees the rule of law; and article 3 which guarantees equality.
In particular, a constitutional obligation to grant remedies for improper detention is required by article 19, paragraph 4 of the Basic Law, which provides as follows: "Should any person's right be violated by public authority, he may have recourse to the courts. If no other jurisdiction has been established, recourse shall be to the ordinary courts."
The Indian judiciary, in a catena of cases, has effectively resorted to the writ of "habeas corpus" to secure release of a person from illegal detention. For example, in October 2009, the Karnataka High Court heard a "habeas corpus" petition filed by the parents of a girl who married a Muslim boy from Kannur district and was allegedly confined in a "madrasa" in Malapuram town. Usually, in most other jurisdictions, the writ is directed at police authorities. The extension to non-state authorities has its grounds in two cases: the 1898 Queen's Bench case of "Ex Parte Daisy Hopkins", wherein the Proctor of Cambridge University detained and arrested Hopkins outside his jurisdiction and Hopkins was released, and that of "Somerset v Stewart", in which an African slave whose master had moved to London was freed by action of the writ.
The Indian judiciary has dispensed with the traditional doctrine of "locus standi", so that if a detained person is not in a position to file a petition, it can be moved on his behalf by any other person. The scope of "habeas" relief has expanded in recent times by actions of the Indian judiciary.
In 1976, the "habeas" writ was used in the Rajan case, a student victim of torture in local police custody during the nationwide Emergency in India. On 12 March 2014, Subrata Roy's counsel approached the Chief Justice moving a "habeas corpus" petition. It was also filed by the Panthers Party to protest the imprisonment of Anna Hazare, a social activist.
In the Republic of Ireland, the writ of "habeas corpus" is available at common law and under the Habeas Corpus Acts of 1782 and 1816. A remedy equivalent to "habeas corpus" is also guaranteed by Article 40 of the 1937 constitution.
The article guarantees that "no citizen shall be deprived of his personal liberty save in accordance with law" and outlines a specific procedure for the High Court to enquire into the lawfulness of any person's detention. It does not mention the Latin term, "habeas corpus", but includes the English phrase "produce the body".
Article 40.4.2° provides that a prisoner, or anyone acting on his behalf, may make a complaint to the High Court (or to any High Court judge) of unlawful detention. The court must then investigate the matter "forthwith" and may order that the defendant bring the prisoner before the court and give reasons for his detention. The court must immediately release the detainee unless it is satisfied that he is being held lawfully. The remedy is available not only to prisoners of the state, but also to persons unlawfully detained by any private party. However the constitution provides that the procedure is not binding on the Defence Forces during a state of war or armed rebellion.
The full text of Article 40.4.2° is as follows:
The writ of "habeas corpus" continued as part of the Irish law when the state seceded from the United Kingdom in 1922. A remedy equivalent to "habeas corpus" was also guaranteed by Article 6 of the Constitution of the Irish Free State, enacted in 1922. That article used similar wording to Article 40.4 of the current constitution, which replaced it 1937.
The relationship between the Article 40 and the Habeas Corpus Acts of 1782 and 1816 is ambiguous, and Forde and Leonard write that "The extent if any to which Article 40.4 has replaced these Acts has yet to be determined". In "The State (Ahern) v. Cotter" (1982) Walsh J. opined that the ancient writ referred to in the Habeas Corpus Acts remains in existence in Irish law as a separate remedy from that provided for in Article 40.
In 1941, the Article 40 procedure was restricted by the Second Amendment. Prior to the amendment, a prisoner had the constitutional right to apply to any High Court judge for an enquiry into her detention, and to as many High Court judges as she wished. If the prisoner successfully challenged her detention before the High Court she was entitled to immediate, unconditional release.
The Second Amendment provided that a prisoner has only the right to apply to a single judge, and, once a writ has been issued, the President of the High Court has authority to choose the judge or panel of three judges who will decide the case. If the High Court finds that the prisoner's detention is unlawful due to the unconstitutionality of a law, the judge must refer the matter to the Supreme Court, and until the Supreme Court's decision is rendered the prisoner may be released only on bail.
The power of the state to detain persons prior to trial was extended by the Sixteenth Amendment, in 1996. In 1965, the Supreme Court ruled in the "O'Callaghan" case that the constitution required that an individual charged with a crime could be refused bail only if she was likely to flee or to interfere with witnesses or evidence. Since the Sixteenth Amendment, it has been possible for a court to take into account whether a person has committed serious crimes while on bail in the past.
The right to freedom from arbitrary detention is guaranteed by Article 13 of the Constitution of Italy, which states:
This implies that within 48 hours every arrest made by a police force must be validated by a court.
Furthermore, even if subject to a valid detention, an arrested person can ask for a review of the detention by another court, called the Review Court ("Tribunale del Riesame", also known as the Freedom Court, "Tribunale della Libertà").
In Malaysia, the remedy of "habeas corpus" is guaranteed by the federal constitution, although not by name. Article 5(2) of the Constitution of Malaysia provides that "Where complaint is made to a High Court or any judge thereof that a person is being unlawfully detained the court shall inquire into the complaint and, unless satisfied that the detention is lawful, shall order him to be produced before the court and release him".
As there are several statutes, for example, the Internal Security Act 1960, that still permit detention without trial, the procedure is usually effective in such cases only if it can be shown that there was a procedural error in the way that the detention was ordered.
In New Zealand, "habeas corpus" may be invoked against the government or private individuals. In 2006, a child was allegedly kidnapped by his maternal grandfather after a custody dispute. The father began "habeas corpus" proceedings against the mother, the grandfather, the grandmother, the great grandmother, and another person alleged to have assisted in the kidnap of the child. The mother did not present the child to the court and so was imprisoned for contempt of court. She was released when the grandfather came forward with the child in late January 2007.
Issuance of a writ is an exercise of an extraordinary jurisdiction of the superior courts in Pakistan. A writ of habeas corpus may be issued by any High Court of a province in Pakistan. Article 199 of the 1973 Constitution of the Islamic Republic of Pakistan specifically provides for the issuance of a writ of habeas corpus, empowering the courts to exercise this prerogative. Under Article 199 of the Constitution, "A High Court may, if it is satisfied that no other adequate remedy is provided by law, on the application of any person, make an order that a person in custody within the territorial jurisdiction of the Court be brought before it so that the Court may satisfy itself that he is not being held in custody without a lawful authority or in an unlawful manner". The hallmark of extraordinary constitutional jurisdiction is to keep various functionaries of State within the ambit of their authority. Once a High Court has assumed jurisdiction to adjudicate the matter before it, justiciability of the issue raised before it is beyond question. The Supreme Court of Pakistan has stated clearly that the use of the words "in an unlawful manner" implies that the court may examine, if a statute has allowed such detention, whether it was a colorable exercise of the power of authority. Thus, the court can examine the "mala fides" of the action taken.
In the Bill of Rights of the Philippine constitution, "habeas corpus" is guaranteed in terms almost identically to those used in the U.S. Constitution. Article 3, Section 15 of the Constitution of the Philippines states that "The privilege of the writ of "habeas corpus" shall not be suspended except in cases of invasion or rebellion when the public safety requires it".
In 1971, after the Plaza Miranda bombing, the Marcos administration, under Ferdinand Marcos, suspended "habeas corpus" in an effort to stifle the oncoming insurgency, having blamed the Filipino Communist Party for the events of August 21. Many considered this to be a prelude to martial law. In December 2009, "habeas corpus" was suspended in Maguindanao as the province was placed under martial law in response to the Maguindanao massacre; after widespread protests, the Arroyo administration lifted the declaration and restored the writ.
In 2016, President Rodrigo Duterte said he was planning to suspend "habeas corpus".
At 10 pm on 23 May 2017 Philippine time, President Rodrigo Duterte declared martial law on the whole island of Mindanao, including Sulu and Tawi-Tawi, for a period of 60 days due to the series of attacks mounted by the Maute group, an ISIS-linked terrorist organization. The declaration also suspended the writ.
The Parliament of Scotland passed a law to have the same effect as "habeas corpus" in the 18th century. This is now known as the Criminal Procedure Act 1701 c.6. It was originally called "the Act for preventing wrongful imprisonment and against undue delays in trials". It is still in force although certain parts have been repealed.
The present Constitution of Spain states that "A "habeas corpus" procedure shall be provided for by law to ensure the immediate handing over to the judicial authorities of any person illegally arrested". The statute which regulates the procedure is the "Law of Habeas Corpus of 24 May 1984", which provides that a person imprisoned may, on her or his own or through a third person, allege that she or he is imprisoned unlawfully and request to appear before a judge. The request must specify the grounds on which the detention is considered to be unlawful, which can be, for example, that the custodian holding the prisoner does not have the legal authority, that the prisoner's constitutional rights have been violated, or that he has been subjected to mistreatment. The judge may then request additional information if needed, and may issue a "habeas corpus" order, at which point the custodian has 24 hours to bring the prisoner before the judge.
Historically, many of the territories of Spain had remedies equivalent to the "habeas corpus", such as the privilege of "manifestación" in the Crown of Aragon or the right of the Tree in Biscay.
The United States inherited "habeas corpus" from the English common law. In England, the writ was issued in the name of the monarch. When the original thirteen American colonies declared independence, and became a republic based on popular sovereignty, any person, in the name of the people, acquired authority to initiate such writs. The U.S. Constitution specifically includes the "habeas" procedure in the Suspension Clause (Clause 2), located in Article One, Section 9. This states that "The privilege of the writ of "habeas corpus" shall not be suspended, unless when in cases of rebellion or invasion the public safety may require it".
The writ of "habeas corpus ad subjiciendum" is a civil, not criminal, "ex parte" proceeding in which a court inquires as to the legitimacy of a prisoner's custody. Typically, "habeas corpus" proceedings are to determine whether the court that imposed sentence on the defendant had jurisdiction and authority to do so, or whether the defendant's sentence has expired. "Habeas corpus" is also used as a legal avenue to challenge other types of custody such as pretrial detention or detention by the United States Bureau of Immigration and Customs Enforcement pursuant to a deportation proceeding.
Presidents Abraham Lincoln and Ulysses Grant suspended "habeas corpus" during the Civil War and Reconstruction for some places or types of cases. During World War II, President Franklin D. Roosevelt suspended "habeas corpus". Following the September 11 attacks, President George W. Bush attempted to place Guantanamo Bay detainees outside of the jurisdiction of "habeas corpus", but the Supreme Court of the United States overturned this action in "Boumediene v. Bush".
In 1526, the "Fuero Nuevo of the Señorío de Vizcaya" ("New Charter of the Lordship of Biscay") established a form of "habeas corpus" in the territory of the "Señorío de Vizcaya", nowadays part of Spain. This revised version of the "Fuero Viejo" (Old Charter) of 1451 codified the medieval custom whereby no person could be arbitrarily detained without being summoned first to the Oak of Gernika, an ancestral oak tree located in the outskirts of Gernika under which all laws of the Lordship of Biscay were passed.
The New Charter formalised that no one could be detained without a court order (Law 26 of Chapter 9) nor for debts (Law 3 of Chapter 16). It also established that no one could be arrested without previously having been summoned to the Oak of Gernika and given 30 days to answer the said summons, and that upon presenting themselves under the Tree, they had to be provided with all evidence and accusations so that they could defend themselves (Law 7 of Chapter 9). No one could be sent to prison or deprived of their freedom until formally tried, and no one could be accused of a different crime until their current court trial was over (Law 5 of Chapter 5). Those fearing they were being arrested illegally could appeal to the "Regimiento General" so that their rights would be upheld. The "Regimiento" (the executive arm of the Juntas Generales of Biscay) would demand that the prisoner be handed over to it, and thereafter the prisoner would be released and placed under the protection of the "Regimiento" while awaiting trial.
The Crown of Aragon also had a remedy equivalent to the "habeas corpus" called the "manifestación de personas" (literally, "demonstration of persons"). According to the right of "manifestación", the Justicia de Aragon (lit. "Justice of Aragon", an Aragonese judiciary figure similar to an ombudsman, but with far-reaching executive powers) could require a judge, a court of justice, or any other official to hand over to the "Justicia" (i.e., to "demonstrate") anyone being prosecuted, so as to guarantee that this person's rights were upheld and that no violence would befall them prior to sentencing. Furthermore, the "Justicia" retained the right to examine the judgement and decide whether it satisfied the conditions of a fair trial; if the "Justicia" was not satisfied, he could refuse to hand the accused back to the authorities. The right of "manifestación" acted like a habeas corpus: since an appeal to the "Justicia" would immediately follow any unlawful detention, such detentions were effectively impossible. Equally, torture (which had been banned in Aragon since 1325) could never take place. In some cases, people exerting their right of "manifestación" were kept under the "Justicia"'s watch in "manifestación" prisons (famous for their mild and easy conditions) or under house arrest; more generally, however, the person was released from confinement and placed under the "Justicia"'s protection, awaiting trial. The "Justicia" always granted the right of "manifestación" by default, but only really had to act in extreme cases, as famously happened in 1590 when Antonio Pérez, the disgraced secretary to Philip II of Spain, fled from Castile to Aragon and invoked his Aragonese ancestry to appeal to the "Justicia" for the right of "manifestación", thereby preventing his arrest at the King's behest.
The right of "manifestación" was codified in 1325 in the "Declaratio Privilegii generalis" passed by the Aragonese Corts under King James II of Aragon. It had been practiced since the inception of the Kingdom of Aragon in the 11th century, and therefore predates the "habeas corpus" itself.
In 1430, King Władysław II Jagiełło of Poland granted the Privilege of Jedlnia, which proclaimed, "Neminem captivabimus nisi iure victum" ("We will not imprison anyone except if convicted by law"). This revolutionary innovation in civil libertarianism gave Polish citizens due process-style rights that did not exist in any other European country for another 250 years. Originally, the Privilege of Jedlnia was restricted to the nobility (the szlachta), but it was extended to cover townsmen in the 1791 Constitution. Importantly, social classifications in the Polish–Lithuanian Commonwealth were not as rigid as in other European countries; townspeople and Jews were sometimes ennobled. The Privilege of Jedlnia provided broader coverage than many subsequently enacted habeas corpus laws, because Poland's nobility constituted an unusually large percentage of the country's total population, which was Europe's largest. As a result, by the 16th century, it was protecting the liberty of between five hundred thousand and a million Poles.
In South Africa and other countries whose legal systems are based on Roman-Dutch law, the "interdictum de homine libero exhibendo" is the equivalent of the writ of "habeas corpus". In South Africa, it has been entrenched in the Bill of Rights, which provides in section 35(2)(d) that every detained person has the right to challenge the lawfulness of the detention in person before a court and, if the detention is unlawful, to be released.
In the 1950s, American lawyer Luis Kutner began advocating an international writ of "habeas corpus" to protect individual human rights. In 1952, he filed a petition for a "United Nations Writ of Habeas Corpus" on behalf of William N. Oatis, an American journalist jailed the previous year by the Communist government of Czechoslovakia.
Prince Henry the Navigator
Infante Dom Henrique of Portugal, Duke of Viseu (4 March 1394 – 13 November 1460), better known as Prince Henry the Navigator, was a central figure in the early days of the Portuguese Empire and in the 15th-century European maritime discoveries and maritime expansion. Through his administrative direction, he is regarded as the main initiator of what would be known as the Age of Discovery. Henry was the fourth child of the Portuguese king John I, who founded the House of Aviz.
Henry was responsible for the early development of Portuguese exploration and maritime trade with other continents through the systematic exploration of Western Africa, the islands of the Atlantic Ocean, and the search for new routes. He encouraged his father to conquer Ceuta (1415), the Muslim port on the North African coast across the Straits of Gibraltar from the Iberian Peninsula. He learned of the opportunities offered by the Saharan trade routes that terminated there, and became fascinated with Africa in general; he was most intrigued by the Christian legend of Prester John and the expansion of Portuguese trade. He is regarded as the patron of Portuguese exploration.
Henry was the third surviving son of King John I and his wife Philippa, sister of King Henry IV of England. He was baptized in Porto, and may have been born there, probably when the royal couple was living in the city's old mint, now called Casa do Infante (Prince's House), or in the region nearby. Another possibility is that he was born at the Monastery of Leça do Bailio, in Leça de Palmeira, during the same period of the royal couple's residence in the city of Porto.
Henry was 21 when he and his father and brothers captured the Moorish port of Ceuta in northern Morocco. Ceuta had long been a base for Barbary pirates who raided the Portuguese coast, depopulating villages by capturing their inhabitants to be sold in the African slave trade. Following this success, Henry began to explore the coast of Africa, most of which was unknown to Europeans. His objectives included finding the source of the West African gold trade and the legendary Christian kingdom of Prester John, and stopping the pirate attacks on the Portuguese coast.
At that time, the ships of the Mediterranean were too slow and too heavy to make these voyages. Under his direction, a new and much lighter ship was developed, the caravel, which could sail further and faster, and, above all, was highly maneuverable and could sail much nearer the wind, or "into the wind". This made the caravel largely independent of the prevailing winds.
With the caravel, Portuguese mariners explored rivers and shallow waters as well as the open ocean with wide autonomy. In fact, the invention of the caravel left Portugal poised to take the lead in transoceanic exploration.
In 1419, Henry's father appointed him governor of the province of the Algarve.
On 25 May 1420, Henry gained appointment as the Grand Master of the Military Order of Christ, the Portuguese successor to the Knights Templar, which had its headquarters at Tomar, in central Portugal. Henry held this position for the remainder of his life, and the Order was an important source of funds for Henry's ambitious plans, especially his persistent attempts to conquer the Canary Islands, which the Portuguese had claimed to have discovered before the year 1346.
In 1425, his second brother the Infante Peter, Duke of Coimbra, made a tour of Europe. While largely a diplomatic mission, among his goals was to seek out geographic material for his brother Henry. Peter returned from Venice with a current world map drafted by a Venetian cartographer.
In 1431, he donated houses for the "Estudo Geral" to reunite all the sciences—grammar, logic, rhetoric, arithmetic, music, and astronomy—into what would later become the University of Lisbon. For other subjects like medicine or philosophy, he ordered that each room should be decorated according to each subject that was being taught.
Henry also had other resources. When John I died in 1433, Henry's eldest brother Edward of Portugal became king. He granted Henry all profits from trading within the areas he discovered as well as the sole right to authorize expeditions beyond Cape Bojador. Henry also held a monopoly on tuna fishing in the Algarve. When Edward died eight years later, Henry supported his brother Peter, Duke of Coimbra for the regency during the minority of Edward's son Afonso V, and in return received a confirmation of this levy.
Henry functioned as a primary organizer of the disastrous expedition to Tangier in 1437. Henry's younger brother Ferdinand was given as a hostage to guarantee that the Portuguese would fulfill the terms of the peace agreement that had been made with Çala Ben Çala. The Portuguese Cortes refused to approve the return of Ceuta in exchange for the Infante Ferdinand who remained in captivity until his death six years later.
Prince Regent Peter had an important role and responsibility in the Portuguese maritime expansion in the Atlantic Ocean and Africa during his administration. Henry promoted the colonization of the Azores during Peter's regency (1439–1448).
For most of the latter part of his life, Henry concentrated on his maritime activities, or on Portuguese court politics.
According to João de Barros, in the Algarve he repopulated a village that he called Terçanabal (from "terça nabal" or "tercena nabal"). This village was situated in a strategic position for his maritime enterprises and was later called Vila do Infante ("Estate or Town of the Prince").
It is traditionally suggested that Henry gathered at his villa on the Sagres peninsula a school of navigators and map-makers. However, modern historians hold this to be a misconception. He did employ some cartographers to chart the coast of Mauritania after the voyages he sent there, but there was no center of navigation science or observatory in the modern sense of the word, nor an organized navigational center.
Referring to Sagres, sixteenth-century Portuguese mathematician and cosmographer Pedro Nunes remarked, "from it our sailors went out well taught and provided with instruments and rules which all map makers and navigators should know."
The view that Henry's court rapidly grew into the technological base for exploration, with a naval arsenal and an observatory, etc., although repeated in popular culture, has never been established. Henry did possess geographical curiosity, and employed cartographers. Jehuda Cresques, a noted cartographer, has been said to have accepted an invitation to come to Portugal to make maps for the infante. Prestage makes the argument that the presence of the latter at the Prince's court "probably accounts for the legend of the School of Sagres, which is now discredited."
The first contacts with the African slave market were made by expeditions to ransom Portuguese subjects enslaved by pirate attacks on Portuguese ships or villages. As Sir Peter Russell remarks in his biography, "In Henryspeak, conversion and enslavement were interchangeable terms."
Henry sponsored voyages, collecting a 20% tax ("o quinto") on the profits made by naval expeditions, which was the usual practice in the Iberian states of that time. The nearby port of Lagos provided a convenient harbor from which these expeditions left. The voyages were made in very small ships, mostly the caravel, a light and maneuverable vessel. The caravel used the lateen sail, the prevailing rig in Christian Mediterranean navigation since late antiquity. Most of the voyages sent out by Henry consisted of one or two ships that navigated by following the coast, stopping at night to tie up along some shore.
During Prince Henry's time and after, the Portuguese navigators discovered and perfected the North Atlantic "Volta do Mar" (the "turn of the sea" or "return from the sea"): the dependable pattern of trade winds blowing largely from the east near the equator and the returning westerlies in the mid-Atlantic. This was a major step in the history of navigation, when an understanding of oceanic wind patterns was crucial to Atlantic navigation, from Africa and the open ocean to Europe, and enabled the main route between the New World and Europe in the North Atlantic in future voyages of discovery. Although the lateen sail allowed sailing upwind to some extent, it was worth even major extensions of course to have a faster and calmer following wind for most of a journey. Portuguese mariners who sailed south and southwest towards the Canary Islands and West Africa would afterwards sail far to the northwest—that is, away from continental Portugal, and seemingly in the wrong direction—before turning northeast near the Azores islands and finally east to Europe in order to have largely following winds for their full journey. Christopher Columbus used this on his transatlantic voyages.
The first explorations followed not long after the capture of Ceuta in 1415. Henry was interested in locating the source of the caravans that brought gold to the city. During the reign of his father, John I, João Gonçalves Zarco and Tristão Vaz Teixeira were sent to explore along the African coast. Zarco, a knight in service to Prince Henry, had commanded the caravels guarding the coast of Algarve from the incursions of the Moors. He had also been at Ceuta.
In 1418, Zarco and Teixeira were blown off-course by a storm while making the "volta do mar" westward swing to return to Portugal. They found shelter at an island they named Porto Santo. Henry directed that Porto Santo be colonized. The move to claim the Madeiran islands was probably a response to Castile's efforts to claim the Canary Islands. In 1420, settlers then moved to the nearby island of Madeira.
A chart drawn by the Catalan cartographer, Gabriel de Vallseca of Mallorca, has been interpreted to indicate that the Azores were first discovered by Diogo de Silves in 1427. In 1431, Gonçalo Velho was dispatched with orders to determine the location of "islands" first identified by de Silves. Velho apparently got as far as the Formigas, in the eastern archipelago, before having to return to Sagres, probably due to bad weather.
By this time the Portuguese navigators had also reached the Sargasso Sea (western North Atlantic region), naming it after the "Sargassum" seaweed growing there ("sargaço" / "sargasso" in Portuguese).
Until Henry's time, Cape Bojador remained the most southerly point known to Europeans on the desert coast of Africa. Superstitious seafarers held that beyond the cape lay sea monsters and the edge of the world. In 1434, Gil Eanes, the commander of one of Henry's expeditions, became the first European known to pass Cape Bojador.
Using the new ship type, the expeditions then pushed onwards. Nuno Tristão and Antão Gonçalves reached Cape Blanco in 1441. The Portuguese sighted the Bay of Arguin in 1443 and built an important fort there around the year 1448. Dinis Dias soon came across the Senegal River and rounded the peninsula of Cap-Vert in 1444. By this stage the explorers had passed the southern boundary of the desert, and from then on Henry had one of his wishes fulfilled: the Portuguese had circumvented the Muslim land-based trade routes across the western Sahara Desert, and slaves and gold began arriving in Portugal. This rerouting of trade devastated Algiers and Tunis, but made Portugal rich. By 1452, the influx of gold permitted the minting of Portugal's first gold "cruzado" coins. A cruzado was equal to 400 reis at the time. From 1444 to 1446, as many as forty vessels sailed from Lagos on Henry's behalf, and the first private mercantile expeditions began.
Alvise Cadamosto explored the Atlantic coast of Africa and discovered several islands of the Cape Verde archipelago between 1455 and 1456. In his first voyage, which started on 22 March 1455, he visited the Madeira Islands and the Canary Islands. On the second voyage, in 1456, Cadamosto became the first European to reach the Cape Verde Islands. António Noli later claimed the credit. By 1462, the Portuguese had explored the coast of Africa as far as present-day Sierra Leone. Twenty-eight years later, Bartolomeu Dias proved that Africa could be circumnavigated when he reached the southern tip of the continent, now known as the Cape of Good Hope. In 1498, Vasco da Gama became the first European sailor to reach India by sea.
No one used the nickname "Navigator" to refer to Prince Henry during his lifetime or in the following three centuries. The term was coined by two nineteenth-century German historians: Heinrich Schaefer and Gustave de Veer. Later on it was made popular by two British authors who included it in the titles of their biographies of the prince: Henry Major in 1868 and Raymond Beazley in 1895. In Portuguese, even in modern times, it is uncommon to call him by this epithet; the preferred use is "Infante D. Henrique".
Human cloning
Human cloning is the creation of a genetically identical copy (or clone) of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissue. It does not refer to the natural conception and delivery of identical twins. The possibility of human cloning has raised controversies. These ethical concerns have prompted several nations to pass laws regarding human cloning and its legality.
Two commonly discussed types of theoretical human cloning are "therapeutic cloning" and "reproductive cloning". Therapeutic cloning would involve cloning cells from a human for use in medicine and transplants, and is an active area of research, but is not in medical practice anywhere in the world. Two common methods of therapeutic cloning that are being researched are somatic-cell nuclear transfer and, more recently, pluripotent stem cell induction. Reproductive cloning would involve making an entire cloned human, instead of just specific cells or tissues.
Although the possibility of cloning humans had been the subject of speculation for much of the 20th century, scientists and policymakers began to take the prospect seriously in 1969. J. B. S. Haldane was the first to introduce the idea of human cloning, for which he used the terms "clone" and "cloning", which had been used in agriculture since the early 20th century. In his speech on "Biological Possibilities for the Human Species of the Next Ten Thousand Years" at the "Ciba Foundation Symposium on Man and his Future" in 1963, he said:
Nobel Prize-winning geneticist Joshua Lederberg advocated cloning and genetic engineering in an article in "The American Naturalist" in 1966 and again, the following year, in "The Washington Post". He sparked a debate with conservative bioethicist Leon Kass, who wrote at the time that "the programmed reproduction of man will, in fact, dehumanize him." Another Nobel Laureate, James D. Watson, publicized the potential and the perils of cloning in his "Atlantic Monthly" essay, "Moving Toward the Clonal Man", in 1971.
With the cloning of a sheep known as Dolly in 1996 by somatic cell nuclear transfer (SCNT), the idea of human cloning became a hot debate topic. Many nations outlawed it, while a few scientists promised to make a clone within the next few years. The first hybrid human clone was created in November 1998, by Advanced Cell Technology. It was created using SCNT; a nucleus was taken from a man's leg cell and inserted into a cow's egg from which the nucleus had been removed, and the hybrid cell was cultured and developed into an embryo. The embryo was destroyed after 12 days.
In 2004 and 2005, Hwang Woo-suk, a professor at Seoul National University, published two separate articles in the journal "Science" claiming to have successfully harvested pluripotent, embryonic stem cells from a cloned human blastocyst using SCNT techniques. Hwang claimed to have created eleven different patient-specific stem cell lines. This would have been the first major breakthrough in human cloning. However, in 2006 "Science" retracted both of his articles on clear evidence that much of his data from the experiments was fabricated.
In January 2008, Dr. Andrew French and Samuel Wood of the biotechnology company Stemagen announced that they successfully created the first five mature human embryos using SCNT. In this case, each embryo was created by taking a nucleus from a skin cell (donated by Wood and a colleague) and inserting it into a human egg from which the nucleus had been removed. The embryos were developed only to the blastocyst stage, at which point they were studied in processes that destroyed them. Members of the lab said that their next set of experiments would aim to generate embryonic stem cell lines; these are the "holy grail" that would be useful for therapeutic or reproductive cloning.
In 2011, scientists at the New York Stem Cell Foundation announced that they had succeeded in generating embryonic stem cell lines, but their process involved leaving the oocyte's nucleus in place, resulting in triploid cells, which would not be useful for cloning.
In 2013, a group of scientists led by Shoukhrat Mitalipov published the first report of embryonic stem cells created using SCNT. In this experiment, the researchers developed a protocol for using SCNT in human cells, which differs slightly from the one used in other organisms. Four embryonic stem cell lines from human fetal somatic cells were derived from those blastocysts. All four lines were derived using oocytes from the same donor, ensuring that all mitochondrial DNA inherited was identical. A year later, a team led by Robert Lanza at Advanced Cell Technology reported that they had replicated Mitalipov's results and further demonstrated the effectiveness by cloning adult cells using SCNT.
In 2018, the first successful cloning of primates using SCNT was reported with the birth of two live female clones, crab-eating macaques named Zhong Zhong and Hua Hua.
Internet troll
In Internet slang, a troll is a person who starts flame wars or intentionally upsets people on the Internet by posting inflammatory and digressive, extraneous, or off-topic messages in an online community (such as a newsgroup, forum, chat room, or blog) with the intent of provoking readers into displaying emotional responses and normalizing tangential discussion, either for the troll's amusement or a specific gain.
Both the noun and the verb forms of "troll" are associated with Internet discourse. However, the word has also been used more widely. Media attention in recent years has equated trolling with online harassment. For example, the mass media have used "troll" to mean "a person who defaces Internet tribute sites with the aim of causing grief to families". In addition, depictions of trolling have been included in popular fictional works, such as the HBO television program "The Newsroom", in which a main character encounters harassing persons online and tries to infiltrate their circles by posting negative sexual comments.
Application of the term "troll" is subjective. Some readers may characterize a post as "trolling", while others may regard the same post as a legitimate contribution to the discussion, even if controversial. Like any pejorative term, it can be used as an "ad hominem" attack, suggesting a negative motivation.
As noted in an "OS News" article titled "Why People Troll and How to Stop Them" (25 January 2012), "The traditional definition of trolling includes intent. That is, trolls purposely disrupt forums. This definition is too narrow. Whether someone intends to disrupt a thread or not, the results are the same if they do." Others have addressed the same issue, e.g., Claire Hardaker, in her Ph.D. thesis "Trolling in asynchronous computer-mediated communication: From user discussions to academic definitions." Popular recognition of the existence (and prevalence) of non-deliberate, "accidental trolls" has been documented widely, in sources as diverse as Nicole Sullivan's keynote speech at the 2012 Fluent Conference, titled "Don't Feed the Trolls", a Gizmodo article, online opinions on the subject written by Silicon Valley executives, and comics.
Regardless of the circumstances, controversial posts may attract a particularly strong response from those unfamiliar with the robust dialogue found in some online, rather than physical, communities. Experienced participants in online forums know that the most effective way to discourage a troll is usually to ignore them, because responding tends to encourage trolls to continue disruptive posts; hence the often-seen warning: "Please do not feed the trolls". Some believe this to be bad or incomplete advice for effectively dealing with trolls.
The "trollface" is an image occasionally used to indicate trolling in Internet culture.
At times the word is incorrectly used to refer to anyone with controversial or differing opinions. Such usage goes against the ordinary meaning of troll in multiple ways. While psychologists have determined that the dark triad traits are common among Internet trolls, some observers claim that trolls do not actually believe the controversial views they voice. Farhad Manjoo criticises this view, noting that if the person really is trolling, they are more intelligent than their critics would believe.
There are competing theories of where and when "troll" was first used in Internet slang, with numerous unattested accounts of BBS and UseNet origins in the early 1980s or before.
The English noun "troll" in the standard sense of ugly dwarf or giant dates to 1610 and comes from the Old Norse word "troll" meaning giant or demon. The word evokes the trolls of Scandinavian folklore and children's tales: antisocial, quarrelsome and slow-witted creatures which make life difficult for travellers. Trolls have existed in folklore and fantasy literature for centuries, but online trolling has been around for as long as the internet has existed.
In modern English usage, "trolling" may describe the fishing technique of slowly dragging a lure or baited hook from a moving boat, whereas "trawling" describes the generally commercial act of dragging a fishing net. Early non-Internet slang use of "trolling" can be found in the military: by 1972 the term "trolling for MiGs" was documented in use by US Navy pilots in Vietnam. It referred to use of "...decoys, with the mission of drawing...fire away..."
The contemporary use of the term is said to have appeared on the Internet in the late 1980s, but the earliest known attestation according to the "Oxford English Dictionary" is in 1992.
The context of the quote cited in the "Oxford English Dictionary" sets the origin in Usenet in the early 1990s as in the phrase "trolling for newbies", as used in "alt.folklore.urban" (AFU). Commonly, what is meant is a relatively gentle inside joke by veteran users, presenting questions or topics that had been so overdone that only a new user would respond to them earnestly. For example, a veteran of the group might make a post on the common misconception that glass flows over time. Long-time readers would both recognize the poster's name and know that the topic had been discussed repeatedly, but new subscribers to the group would not realize, and would thus respond. These types of trolls served as a practice to identify group insiders. This definition of trolling, considerably narrower than the modern understanding of the term, was considered a positive contribution. One of the most notorious AFU trollers, David Mikkelson, went on to create the urban folklore website Snopes.com.
By the late 1990s, "alt.folklore.urban" had such heavy traffic and participation that trolling of this sort was frowned upon. Others expanded the term to include the practice of playing a seriously misinformed user, even in newsgroups where one was not a regular; these were often attempts at humor rather than provocation. The noun "troll" usually referred to an act of trolling – or to the resulting discussion – rather than to the author, though some posts punned on the dual meaning of "troll."
In Chinese, trolling is referred to as "bái mù", which can be straightforwardly explained as "eyes without pupils", in the sense that whilst the pupil of the eye is used for vision, the white section of the eye cannot see: trolling involves blindly talking nonsense over the Internet, with total disregard for sensitivities or obliviousness to the situation at hand, akin to having eyes without pupils. An alternative term is "bái làn", which describes a post that is completely nonsensical and full of folly, made to upset others, and derives from a Taiwanese slang term for the male genitalia, where genitalia that is pale white in colour represents that someone is young, and thus foolish. Both terms originate from Taiwan, and are also used in Hong Kong and mainland China. Another term, "xiǎo bái", is a derogatory term for both "bái mù" and "bái làn" that is used on anonymous posting Internet forums. Another common term for a troll used in mainland China is "pēn zi".
In Japanese, one term means "fishing" and refers to intentionally misleading posts whose only purpose is to get the readers to react, i.e. get trolled; another, meaning "laying waste", can also be used to refer to simple spamming.
In Icelandic, "þurs" (a thurs) or "tröll" (a troll) may refer to trolls; the verbs "þursa" (to troll) or "þursast" (to be trolling, to troll about) may be used.
In Korean, "nak-si" (낚시) means "fishing" and refers to Internet trolling attempts, as well as to purposefully misleading post titles. A person who recognizes the troll after having responded (or, in the case of a "nak-si" post title, having read the actual post) will often refer to himself as a caught fish.
In Portuguese, more commonly in its Brazilian variant, (produced in most of Brazil as a spelling pronunciation) is the usual term to denote Internet trolls (examples of common derivative terms are "trollismo" or "trollagem", "trolling", and the verb "trollar", "to troll", which entered popular use), but an older expression, used by those who want to avoid anglicisms or slang, is "" to denote trolling behavior, and "pombos enxadristas" (literally, "chessplayer pigeons") or simply "pombos" are the terms used to name the trolls. The terms are explained by an adage or popular saying: "Arguing with "fulano" (i.e., John Doe) is the same as playing chess with a pigeon: it defecates on the table, drops the pieces and simply flies off, claiming victory."
In Thai, the term "krian" (เกรียน) has been adopted to address Internet trolls. According to the Royal Institute of Thailand, the term, which literally refers to a closely cropped hairstyle worn by schoolboys in Thailand, derives from the behaviour of these schoolboys, who usually gather to play online games and, in doing so, make annoying, disruptive, impolite, or unreasonable expressions. The term "top krian" (ตบเกรียน; "slap a cropped head") refers to the act of posting intellectual replies to refute and cause the messages of Internet trolls to be perceived as unintelligent.
Early incidents of trolling were considered to be the same as flaming, but this has changed with modern usage by the news media to refer to the creation of any content that targets another person. The Internet dictionary NetLingo suggests there are four grades of trolling: playtime trolling, tactical trolling, strategic trolling, and domination trolling. The relationship between trolling and flaming was observed in open-access forums in California, on a series of modem-linked computers. "CommuniTree" was begun in 1978 but was closed in 1982 when accessed by high school teenagers, becoming a ground for trashing and abuse. Some psychologists have suggested that flaming is caused by deindividuation or decreased self-evaluation: the anonymity of online postings leads to disinhibition amongst individuals. Others have suggested that although flaming and trolling are often unpleasant, they may be a form of normative behavior that expresses the social identity of a certain user group. According to Tom Postmes, a professor of social and organisational psychology at the universities of Exeter, England, and Groningen, the Netherlands, and the author of "Individuality and the Group", who has studied online behavior for 20 years, "Trolls aspire to violence, to the level of trouble they can cause in an environment. They want it to kick off. They want to promote antipathetic emotions of disgust and outrage, which morbidly gives them a sense of pleasure." Someone who deliberately brings an off-topic subject into a conversation in order to anger its participants is trolling.
The practice of trolling has been documented by a number of academics as early as the 1990s. This included Steven Johnson in 1997 in the book "Interface Culture", and a paper by Judith Donath in 1999. Donath's paper outlines the ambiguity of identity in a disembodied "virtual community" such as Usenet:
Donath provides a concise overview of identity deception games which trade on the confusion between physical and epistemic community:
Trolls can be costly in several ways. A troll can disrupt the discussion on a newsgroup or online forum, disseminate bad advice, and damage the feeling of trust in the online community. Furthermore, in a group that has become sensitized to trolling (where the rate of deception is high) many honestly naïve questions may be quickly rejected as trolling. This can be quite off-putting to the new user who, upon venturing a first posting, is immediately bombarded with angry accusations. Even if the accusation is unfounded, being branded a troll may be damaging to one's online reputation.
Susan Herring and colleagues in "Searching for Safety Online: Managing 'Trolling' in a Feminist Forum" point out the difficulty inherent in monitoring trolling and maintaining freedom of speech in online communities: "harassment often arises in spaces known for their freedom, lack of censure, and experimental nature". Free speech may lead to tolerance of trolling behavior, complicating the members' efforts to maintain an open, yet supportive discussion area, especially for sensitive topics such as race, gender, and sexuality.
In an effort to reduce uncivil behavior by increasing accountability, many web sites (e.g. Reuters, Facebook, and Gizmodo) now require commenters to register their names and e-mail addresses.
Organizations and countries may utilize trolls to manipulate public opinion as part of an astroturfing initiative. Teams of sponsored trolls are sometimes referred to as sockpuppet armies.
A 2016 study by Harvard political scientist Gary King reported that the Chinese government's 50 Cent Party creates 440 million pro-government social media posts per year. The report said that government employees were paid to create pro-government posts around the time of national holidays to avoid mass political protests. The Chinese Government ran an editorial in the state-funded "Global Times" defending censorship and 50 Cent Party trolls.
A 2016 study for the NATO Strategic Communications Centre of Excellence on hybrid warfare notes that the Ukrainian crisis "demonstrated how fake identities and accounts were used to disseminate narratives through social media, blogs, and web commentaries in order to manipulate, harass, or deceive opponents." The NATO report describes that a "Wikipedia troll" uses a type of message design where a troll does not add "emotional value" to reliable "essentially true" information in re-posts, but presents it "in the wrong context, intending the audience to draw false conclusions." For example, information, without context, from Wikipedia about the military history of the United States "becomes value-laden if it is posted in the comment section of an article criticizing Russia for its military actions and interests in Ukraine. The Wikipedia troll is 'tricky', because in terms of actual text, the information is true, but the way it is expressed gives it a completely different meaning to its readers."
Unlike "classic trolls," Wikipedia trolls "have no emotional input, they just supply misinformation" and are one of "the most dangerous" as well as one of "the most effective trolling message designs." Even among people who are "emotionally immune to aggressive messages" and apolitical, "training in critical thinking" is needed, according to the NATO report, because "they have relatively blind trust in Wikipedia sources and are not able to filter information that comes from platforms they consider authoritative." While Russian-language hybrid trolls use the Wikipedia troll message design to promote anti-Western sentiment in comments, they "mostly attack aggressively to maintain emotional attachment to issues covered in articles." Discussions about topics other than international sanctions during the Ukrainian crisis "attracted very aggressive trolling" and became polarized according to the NATO report, which "suggests that in subjects in which there is little potential for re-educating audiences, emotional harm is considered more effective" for pro-Russian Latvian-language trolls.
"The New York Times" reported in late October 2018 that Saudi Arabia used an online army of Twitter trolls to harass the late Saudi dissident journalist Jamal Khashoggi and other critics of the Saudi government.
In October 2018, "The Daily Telegraph" reported that Facebook "banned hundreds of pages and accounts which it says were fraudulently flooding its site with partisan political content – although they came from the US instead of being associated with Russia."
Researcher Ben Radford wrote about the phenomenon of clowns throughout history and in the modern day in his book "Bad Clowns", and found that bad clowns have evolved into Internet trolls. They do not dress up as traditional clowns but, for their own amusement, they tease and exploit "human foibles" in order to speak the "truth" and gain a reaction. Like clowns in make-up, Internet trolls hide behind "anonymous accounts and fake usernames." In their eyes they are the trickster, performing for a nameless audience via the Internet. Trolling correlates positively with sadism, psychopathy, and Machiavellianism. Trolls take pleasure from causing pain. Their ability to upset or harm gives them a feeling of power.
A "concern troll" is a false flag pseudonym created by a user whose actual point of view is opposed to the one that the troll claims to hold. The concern troll posts in web forums devoted to its declared point of view and attempts to sway the group's actions or opinions while claiming to share their goals, but with professed "concerns". The goal is to sow fear, uncertainty, and doubt within the group often by appealing to outrage culture. This is a particular case of sockpuppeting and safe-baiting.
An example of this occurred in 2006 when Tad Furtado, a staffer for then-Congressman Charles Bass (R-NH), was caught posing as a "concerned" supporter of Bass's opponent, Democrat Paul Hodes, on several liberal New Hampshire blogs, using the pseudonyms "IndieNH" or "IndyNH". "IndyNH" expressed concern that Democrats might just be wasting their time or money on Hodes, because Bass was unbeatable. Hodes eventually won the election.
Although the term "concern troll" originated in discussions of online behavior, it now sees increasing use to describe similar behaviors that take place offline. For example, James Wolcott of "Vanity Fair" accused a conservative "New York Daily News" columnist of "concern troll" behavior in his efforts to downplay the Mark Foley scandal. Wolcott links what he calls concern trolls to what Saul Alinsky calls "Do-Nothings", giving a long quote from Alinsky on the Do-Nothings' method and effects:
"The Hill" published an op-ed piece by Markos Moulitsas of the liberal blog "Daily Kos" titled "Dems: Ignore 'Concern Trolls'". The concern trolls in question were not Internet participants but rather Republicans offering public advice and warnings to the Democrats. The author defines "concern trolling" as "offering a poisoned apple in the form of advice to political opponents that, if taken, would harm the recipient". Concern trolls use a different type of bait than the more stereotypical troll in their attempts to manipulate participants and disrupt conversations.
A "The New York Times" article discussed troll activity at 4chan and at Encyclopedia Dramatica, which it described as "an online compendium of troll humor and troll lore". 4chan's /b/ board is recognized as "one of the Internet's most infamous and active trolling hotspots". This site and others are often used as a base to troll against sites that their members cannot normally post on. These trolls feed off the reactions of their victims because "their agenda is to take delight in causing trouble". Places like Reddit, 4chan, and other anonymous message boards are prime real estate for online trolls. Because there is no way of tracing who someone is, trolls can post very inflammatory content without repercussion.
The online French group Ligue du LOL has been accused of organized harassment and described as a troll group.
Mainstream media outlets have focused their attention on the willingness of some Internet users to go to extreme lengths to participate in organized psychological harassment.
In February 2010, the Australian government became involved after users defaced the Facebook tribute pages of murdered children Trinity Bates and Elliott Fletcher. Australian communications minister Stephen Conroy decried the attacks, committed mainly by 4chan users, as evidence of the need for greater Internet regulation, stating, "This argument that the Internet is some mystical creation that no laws should apply to, that is a recipe for anarchy and the wild west." Facebook responded by strongly urging administrators to be aware of ways to ban users and remove inappropriate content from Facebook pages. In 2012, the "Daily Telegraph" started a campaign to take action against "Twitter trolls", who abuse and threaten users. Several high-profile Australians including Charlotte Dawson, Robbie Farah, Laura Dundovic, and Ray Hadley have been victims of this phenomenon.
Newslaundry covered the phenomenon of "Twitter trolling" in its "Criticles". It has also been characterising Twitter trolls in its weekly podcasts.
In the United Kingdom, contributions made to the Internet are covered by the Malicious Communications Act 1988 as well as Section 127 of the Communications Act 2003, under which jail sentences were, until 2015, limited to a maximum of six months. In October 2014, the UK's Justice Secretary, Chris Grayling, said that "Internet trolls" would face up to two years in jail, under measures in the Criminal Justice and Courts Bill that extend the maximum sentence and time limits for bringing prosecutions. The House of Lords Select Committee on Communications had earlier recommended against creating a specific offence of trolling. Sending messages which are "grossly offensive or of an indecent, obscene or menacing character" is an offence whether they are received by the intended recipient or not. Several people have been imprisoned in the UK for online harassment.
Trolls of the testimonial page of Georgia Varley faced no prosecution due to misunderstandings of the legal system in the wake of the term trolling being popularized. In October 2012, a twenty-year-old man was jailed for twelve weeks for posting offensive jokes to a support group for friends and family of April Jones.
As Phillips states, trolling embodies the values that are said to make America the greatest and most powerful nation on earth, due to the liberty and freedom of expression it encourages.
On 31 March 2010, NBC's "Today" ran a segment detailing the deaths of three separate adolescent girls and trolls' subsequent reactions to their deaths. Shortly after the suicide of high school student Alexis Pilkington, anonymous posters began performing organized psychological harassment across various message boards, referring to Pilkington as a "suicidal slut", and posting graphic images on her Facebook memorial page. The segment also included an exposé of a 2006 accident, in which an eighteen-year-old fatally crashed her father's car into a highway pylon; trolls emailed her grieving family the leaked pictures of her mutilated corpse (see Nikki Catsouras photographs controversy).
In 2007, the media was fooled by trolls into believing that students were consuming a drug called Jenkem, purportedly made of human waste. A user named Pickwick on TOTSE posted pictures implying that he was inhaling this drug. Major news corporations such as Fox News Channel reported the story and urged parents to warn their children about this drug. Pickwick's pictures of Jenkem were fake and did not actually feature human waste.
In August 2012, the subject of trolling was featured on the HBO television series "The Newsroom". The character of Neal Sampat encounters harassing individuals online, particularly looking at 4chan, and he ends up choosing to post negative comments himself on an economics-related forum. The attempt by the character to infiltrate trolls' inner circles attracted debate from media reviewers critiquing the series.
The publication of the 2015 non-fiction book "The Dark Net: Inside the Digital Underworld" by Jamie Bartlett, a journalist and a representative of the British think tank Demos, attracted some attention for its depiction of misunderstood sections of the Internet, describing interactions on encrypted sites such as those accessible with the software Tor. Detailing trolling-related groups and the harassment created by them, Bartlett advocated for greater awareness of them and monitoring of their activities. Professor Matthew Wisnioski wrote for "The Washington Post" that a "league of trolls, anarchists, perverts and drug dealers is at work building a digital world beyond the Silicon Valley offices where our era's best and brightest have designed a Facebook-friendly" surface and agreed with Bartlett that the activities of trolls go back decades to the Usenet "flame wars" of the 1990s and even earlier.
In February 2019, Glenn Greenwald wrote that the cybersecurity company New Knowledge "was caught just six weeks ago engaging in a massive scam to create fictitious Russian troll accounts on Facebook and Twitter in order to claim that the Kremlin was working to defeat Democratic Senate nominee Doug Jones in Alabama". "The New York Times", when exposing the scam, quoted a New Knowledge report that boasted of its fabrications: "We orchestrated an elaborate 'false flag' operation that planted the idea that the [Roy] Moore campaign was amplified on social media by a Russian botnet."
The 2020 Democratic presidential candidate Bernie Sanders has faced criticism for the behavior of some of his supporters online, but has deflected such criticism, suggesting that "Russians" were impersonating people claiming to be "Bernie Bro" supporters. Twitter rejected Sanders' suggestion that Russia could be responsible for the bad reputation of his supporters. A Twitter spokesperson told CNBC: "Using technology and human review in concert, we proactively monitor Twitter to identify attempts at platform manipulation and mitigate them. As is standard, if we have reasonable evidence of state-backed information operations, we’ll disclose them following our thorough investigation to our public archive — the largest of its kind in the industry." Twitter had suspended 70 troll accounts that posted content in support of Michael Bloomberg's presidential campaign.
As reported on 8 April 1999, investors became victims of trolling via an online financial discussion regarding PairGain, a telephone equipment company based in California. Trolls operating in the stock's Yahoo Finance chat room posted a fabricated Bloomberg News article stating that an Israeli telecom company could potentially acquire PairGain. As a result, PairGain's stock jumped by 31%. However, the stock promptly crashed after the reports were identified as false.
So-called Gold Membership trolling originated in 2007 on 4chan boards, when users posted fake images claiming to offer upgraded 4chan account privileges; without a "Gold" account, one could not view certain content. This turned out to be a hoax designed to fool board members, especially newcomers. It was copied and became an Internet meme. In some cases, this type of troll has been used as a scam, most notably on Facebook, where fake Facebook Gold Account upgrade ads have proliferated in order to link users to dubious websites and other content.
The case of "Zeran v. America Online, Inc." resulted primarily from trolling. Six days after the Oklahoma City bombing, anonymous users posted advertisements for shirts celebrating the bombing on AOL message boards, claiming that the shirts could be obtained by contacting Mr. Kenneth Zeran. The posts listed Zeran's address and home phone number. Zeran was subsequently harassed.
Anti-scientology protests by Anonymous, commonly known as Project Chanology, are sometimes labeled as "trolling" by media such as "Wired", and the participants sometimes explicitly self-identify as "trolls".
Neo-Nazi website "The Daily Stormer" orchestrates what it calls a "Troll Army", and has encouraged trolling of Jewish MP Luciana Berger and Muslim activist Mariam Veiszadeh.
In 2012, after feminist Anita Sarkeesian started a Kickstarter campaign to fund a series of YouTube videos chronicling misogyny in video games, she received bomb threats at speaking engagements, doxxing threats, rape threats and an unwanted starring role in a video game called Beat Up Anita Sarkeesian.
Geography of India
India lies on the Indian Plate, the northern part of the Indo-Australian Plate, whose continental crust forms the Indian subcontinent. The country is situated north of the equator between 8°4' north to 37°6' north latitude and 68°7' east to 97°25' east longitude. It is the seventh-largest country in the world, with a total area of . India measures from north to south and from east to west. It has a land frontier of and a coastline of .
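The stated latitude extremes (8°4' N to 37°6' N) can be turned into a rough north–south extent using the common approximation of about 111 km per degree of latitude. The sketch below is a back-of-envelope estimate under that assumption, not a geodetic computation, and the constant and the resulting figure are not taken from the source:

```python
# Rough conversion of India's latitude span to a north-south distance.
# The ~111 km-per-degree constant is an approximation (it varies
# slightly with latitude); this is an illustrative estimate only.

KM_PER_DEG_LAT = 111.0

def dms_to_deg(deg: int, minutes: int) -> float:
    """Convert degrees and arcminutes to decimal degrees."""
    return deg + minutes / 60

south = dms_to_deg(8, 4)    # 8 deg 4 min N, southern extreme
north = dms_to_deg(37, 6)   # 37 deg 6 min N, northern extreme
span_deg = north - south

print(f"Latitude span: {span_deg:.2f} degrees")
print(f"Approximate north-south extent: {span_deg * KM_PER_DEG_LAT:,.0f} km")
```

The result lands near the commonly cited north–south extent of the country, which is a useful sanity check on the quoted coordinates.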
On the south, India projects into and is bounded by the Indian Ocean—in particular, by the Arabian Sea on the west, the Lakshadweep Sea to the southwest, the Bay of Bengal on the east, and the Indian Ocean proper to the south. The Palk Strait and Gulf of Mannar separate India from Sri Lanka to its immediate southeast, and the Maldives are some to the south of India's Lakshadweep Islands across the Eight Degree Channel. India's Andaman and Nicobar Islands, some southeast of the mainland, share maritime borders with Myanmar, Thailand and Indonesia. Kanyakumari at 8°4′41″N and 77°55′23″E is the southernmost tip of the Indian mainland, while the southernmost point in India is Indira Point on Great Nicobar Island. The northernmost point which is under Indian administration is Indira Col, Siachen Glacier. India's territorial waters extend into the sea to a distance of from the coast baseline. India has the 18th largest Exclusive Economic Zone of .
The northern frontiers of India are defined largely by the Himalayan mountain range, where the country borders China, Bhutan, and Nepal. Its western border with Pakistan lies in the Karakoram range, Punjab Plains, the Thar Desert and the Rann of Kutch salt marshes. In the far northeast, the Chin Hills and Kachin Hills, deeply forested mountainous regions, separate India from Burma. On the east, its border with Bangladesh is largely defined by the Khasi Hills and Mizo Hills, and the watershed region of the Indo-Gangetic Plain.
The Ganga is the longest river originating in India. The Ganga–Brahmaputra system occupies most of northern, central, and eastern India, while the Deccan Plateau occupies most of southern India. Kangchenjunga, in the Indian state of Sikkim, is the highest point in India at and the world's third-highest peak. The climate across India ranges from equatorial in the far south, to alpine and tundra in the upper regions of the Himalayas.
India is situated entirely on the Indian Plate, a major tectonic plate that was formed when it split off from the ancient continent Gondwanaland (an ancient landmass consisting of the southern part of the supercontinent Pangea). The Indo-Australian Plate is subdivided into the Indian and Australian plates. About 90 million years ago, during the late Cretaceous Period, the Indian Plate began moving north at about 15 cm/year (6 in/yr). About 50 to 55 million years ago, in the Eocene Epoch of the Cenozoic Era, the plate collided with Asia after covering a distance of , having moved faster than any other known plate. In 2007, German geologists determined that the Indian Plate was able to move so quickly because it is only half as thick as the other plates which formerly constituted Gondwanaland. The collision with the Eurasian Plate along the modern border between India and Nepal formed the orogenic belt that created the Tibetan Plateau and the Himalayas. At present, the Indian Plate is moving northeast at 5 cm/yr (2 in/yr), while the Eurasian Plate is moving north at only 2 cm/yr (0.8 in/yr). India is thus referred to as the "fastest continent". This is causing the Eurasian Plate to deform, and the Indian Plate to compress at a rate of 4 cm/yr (1.6 in/yr).
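The rates quoted above allow a quick consistency check: a plate drifting at roughly 15 cm/yr for the ~40 million years between the start of the northward motion (~90 Mya) and the collision (~50 Mya) covers about 6,000 km. This is only a back-of-envelope sketch assuming a constant rate, which real plate motion does not maintain:

```python
# Back-of-envelope check of the plate-motion figures quoted in the text.
# Assumes a constant drift rate, which is a simplification.

CM_PER_KM = 100_000

def drift_distance_km(rate_cm_per_yr: float, years: float) -> float:
    """Distance covered by a plate moving at a constant rate."""
    return rate_cm_per_yr * years / CM_PER_KM

# ~15 cm/yr sustained over ~40 million years (90 Mya to 50 Mya):
distance = drift_distance_km(15, 40e6)
print(f"Approximate distance covered before collision: {distance:,.0f} km")
# prints: Approximate distance covered before collision: 6,000 km
```

The same helper applied to today's rates (5 cm/yr for India versus 2 cm/yr for Eurasia) shows a differential motion of a few cm/yr, consistent with the compression rate the text reports.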
India is divided into 28 states (further subdivided into districts) and 8 union territories, including the National Capital Territory of Delhi.
India's borders run a total length of .
Its borders with Pakistan and Bangladesh were delineated according to the Radcliffe Line, which was created in 1947 during the Partition of India. Its western border with Pakistan extends up to , dividing the Punjab region and running along the boundaries of the Thar Desert and the Rann of Kutch. This border runs along the Indian states of Jammu and Kashmir, Rajasthan, Gujarat, and Punjab. Both nations delineated a Line of Control (LoC) to serve as the informal boundary between the Indian- and Pakistani-administered areas of Jammu and Kashmir. India claims the whole state of Jammu and Kashmir, which includes Pakistan-administered Kashmir and China-administered Aksai Chin, areas which India regards as illegally occupied.
India's border with Bangladesh runs . West Bengal, Assam, Meghalaya, Tripura and Mizoram are the states which share the border with Bangladesh. Before 2015, there were 92 enclaves of Bangladesh on Indian soil and 106 enclaves of India on Bangladeshi soil. These enclaves were eventually exchanged in order to simplify the border. After the exchange, India lost roughly 40 km2 (10,000 acres) to Bangladesh.
The Line of Actual Control (LAC) is the effective border between India and the People's Republic of China. It traverses 4,057 km along the Indian states of Jammu and Kashmir, Uttarakhand, Himachal Pradesh, Sikkim and Arunachal Pradesh. The border with Burma (Myanmar) extends up to along the southern borders of India's northeastern states viz. Arunachal Pradesh, Nagaland, Manipur and Mizoram. Located amidst the Himalayan range, India's border with Bhutan runs . Sikkim, West Bengal, Assam and Arunachal Pradesh are the states which share the border with Bhutan. The border with Nepal runs along the foothills of the Himalayas in northern India. Uttarakhand, Uttar Pradesh, Bihar, West Bengal and Sikkim are the states which share the border with Nepal. The Siliguri Corridor, narrowed sharply by the borders of Bhutan, Nepal and Bangladesh, connects peninsular India with the northeastern states.
Cratons are a specific kind of continental crust made up of a top layer called the platform and an older layer called the basement. A shield is the part of a craton where basement rock crops out of the ground; it is the relatively older and more stable section, unaffected by plate tectonics.
The Indian Craton can be divided into five major cratons, as follows:
India can be divided into six physiographic regions. They are:
An arc of mountains consisting of the Himalayas, Hindu Kush, and Patkai ranges defines the northern frontiers of the Indian subcontinent. These were formed by the ongoing collision of the Indian and Eurasian tectonic plates. The mountains in these ranges include some of the world's tallest, which act as a natural barrier to cold polar winds. They also facilitate the monsoon winds, which in turn influence the climate in India. Rivers originating in these mountains flow through the fertile Indo–Gangetic plains. These mountains are recognised by biogeographers as the boundary between two of the Earth's great biogeographic realms: the temperate Palearctic realm that covers most of Eurasia, and the tropical and subtropical Indomalayan realm, which includes the Indian subcontinent, Southeast Asia and Indonesia.
The Himalayan range is the world's highest mountain range, with its tallest peak Mt. Everest () on the Nepal–China border. They form India's northeastern border, separating it from northeastern Asia. They are one of the world's youngest mountain ranges and extend almost uninterrupted for , covering an area of . The Himalayas extend from Jammu and Kashmir in the north to Arunachal Pradesh in the east. These states along with Himachal Pradesh, Uttarakhand, and Sikkim lie mostly in the Himalayan region. Numerous Himalayan peaks rise over and the snow line ranges between in Sikkim to around in Kashmir. Kanchenjunga—on the Sikkim–Nepal border—is the highest point in the area administered by India. Most peaks in the Himalayas remain snowbound throughout the year. The Himalayas act as a barrier to the frigid katabatic winds flowing down from Central Asia. Thus, northern India is kept warm or only mildly cooled during winter; in summer, the same phenomenon makes India relatively hot.
The main features of the Indian Craton are:
The Indo-Gangetic plains, also known as the "Great Plains" are large alluvial plains dominated by three main rivers, the Indus, Ganges, and Brahmaputra. They run parallel to the Himalayas, from Jammu and Kashmir in the west to Assam in the east, and drain most of northern and eastern India. The plains encompass an area of . The major rivers in this region are the Ganges, Indus, and Brahmaputra along with their main tributaries—Yamuna, Chambal, Gomti, Ghaghara, Kosi, Sutlej, Ravi, Beas, Chenab, and Tista—as well as the rivers of the Ganges Delta, such as the Meghna.
The great plains are sometimes classified into four divisions:
The Indo-Gangetic belt is the world's most extensive expanse of uninterrupted alluvium, formed by the deposition of silt by the numerous rivers. The plains are flat, making them conducive to irrigation through canals. The area is also rich in groundwater sources. The plains are one of the world's most intensely farmed areas. The main crops grown are rice and wheat, which are grown in rotation. Other important crops grown in the region include maize, sugarcane and cotton. The Indo-Gangetic plains rank among the world's most densely populated areas.
The Thar Desert (also known as the Great Indian Desert) is by some calculations the world's seventh-largest desert, by others the tenth. It forms a significant portion of western India and covers an area of . The desert continues into Pakistan as the Cholistan Desert. Most of the Thar Desert is situated in Rajasthan, covering 61% of that state's geographic area.
About 10 percent of this region consists of sand dunes, and the remaining 90 percent consists of craggy rock forms, compacted salt-lake bottoms, and interdunal and fixed dune areas. Annual temperatures can range from in the winter to over during the summer. Most of the rainfall received in this region is associated with the short July–September southwest monsoon that brings of precipitation. Water is scarce and occurs at great depths, ranging from below the ground level. Rainfall is precarious and erratic, ranging from below in the extreme west to eastward. The only river in this region is the Luni. The soils of the arid region are generally sandy to sandy-loam in texture. Their consistency and depth vary with the topographical features. The low-lying loams are heavier and may have a hard pan of clay, calcium carbonate or gypsum.
In western India, the Kutch region in Gujarat and Koyna in Maharashtra are classified as a Zone IV region (high risk) for earthquakes. The Kutch city of Bhuj was the epicentre of the 2001 Gujarat earthquake, which claimed the lives of more than 1,337 people and injured 166,836 while destroying or damaging near a million homes. The 1993 Latur earthquake in Maharashtra killed 7,928 people and injured 30,000. Other areas have a moderate to low risk of an earthquake occurring.
The Eastern Coastal Plain is a wide stretch of land lying between the Eastern Ghats and the oceanic boundary of India. It stretches from Tamil Nadu in the south to West Bengal in the north. The Mahanadi, Godavari, Kaveri, and Krishna rivers drain these plains. The temperature in the coastal regions often exceeds , coupled with high levels of humidity. The region receives both the northeast monsoon and southwest monsoon rains. The southwest monsoon splits into two branches, the Bay of Bengal branch and the Arabian Sea branch. The Bay of Bengal branch moves northwards, crossing northeast India in early June. The Arabian Sea branch moves northwards and discharges much of its rain on the windward side of the Western Ghats. Annual rainfall in this region averages between . The width of the plains varies between . The plains are divided into six regions—the Mahanadi delta, the southern Andhra Pradesh plain, the Krishna–Godavari deltas, the Kanyakumari coast, the Coromandel Coast, and the sandy coastal belt.
The Western Coastal Plain is a narrow strip of land sandwiched between the Western Ghats and the Arabian Sea, ranging from in width. It extends from Gujarat in the north through Maharashtra, Goa, Karnataka, and Kerala. Numerous rivers and backwaters inundate the region. Mostly originating in the Western Ghats, the rivers are fast-flowing, usually perennial, and empty into estuaries. Major rivers flowing into the sea are the Tapti, Narmada, Mandovi and Zuari. Vegetation is mostly deciduous, but the Malabar Coast moist forests constitute a unique ecoregion. The Western Coastal Plain can be divided into two parts, the Konkan and the Malabar Coast.
The Lakshadweep and the Andaman and Nicobar Islands are India's two major island formations and are classified as union territories.
The Lakshadweep Islands lie off the coast of Kerala in the Arabian sea with an area of . They consist of twelve atolls, three reefs, and five submerged banks, with a total of about 35 islands and islets.
The Andaman and Nicobar Islands are located between 6° and 14° north latitude and 92° and 94° east longitude. They consist of 572 islands, lying in the Bay of Bengal near the Myanmar coast and running along a north–south axis for approximately 910 km. They are located from Kolkata (Calcutta) and from Cape Negrais in Burma. The territory consists of two island groups, the Andaman Islands and the Nicobar Islands. The Andaman group has 325 islands covering an area of 6,170 km2 (2,382 sq mi), while the Nicobar group has only 247 islands with an area of 1,765 km2 (681 sq mi). India's only active volcano, Barren Island, is situated here; it last erupted in 2017. Narcondam is a dormant volcano, and there is a mud volcano at Baratang. Indira Point, India's southernmost land point, is situated in the Nicobar Islands at 6°45′10″N and 93°49′36″E, and lies just from the Indonesian island of Sumatra, to the southeast. The highest point is Mount Thuillier at .
Other significant islands in India include Diu, a former Portuguese colony; Majuli, a river island of the Brahmaputra; Elephanta in Bombay Harbour; and Sriharikota, a barrier island in Andhra Pradesh. Salsette Island is India's most populous island on which the city of Mumbai (Bombay) is located. Forty-two islands in the Gulf of Kutch constitute the Marine National Park.
India has around 14,500 km of inland navigable waterways. There are twelve rivers which are classified as major rivers, with a total catchment area exceeding . All major rivers of India originate from one of three main watersheds: the Himalayas, the Vindhya–Satpura ranges of central India, or the Western Ghats.
The Himalayan river networks are snow-fed and have a perennial supply throughout the year. The other two river systems are dependent on the monsoons and shrink into rivulets during the dry season. The Himalayan rivers that flow westward into Punjab are the Indus, Jhelum, Chenab, Ravi, Beas, and Sutlej.
The Ganges–Brahmaputra–Meghna system has the largest catchment area, of about . The Ganges Basin alone has a catchment of about . The Ganges originates from the Gangotri Glacier in Uttarakhand and flows southeast, draining into the Bay of Bengal. The Yamuna and Gomti rivers also rise in the western Himalayas and join the Ganges in the plains. The Brahmaputra originates in Tibet, China, where it is known as the Yarlung Tsangpo River (or "Tsangpo"). It enters India in the far-eastern state of Arunachal Pradesh, then flows west through Assam. The Brahmaputra merges with the Ganges in Bangladesh, where it is known as the Jamuna River.
The Chambal, another tributary of the Ganges, via the Yamuna, originates from the Vindhya-Satpura watershed. The river flows eastward. Westward-flowing rivers from this watershed are the Narmada and Tapi, which drain into the Arabian Sea in Gujarat. The river network that flows from east to west constitutes 10% of the total outflow.
The Western Ghats are the source of all Deccan rivers, which include the Godavari, the Krishna and the Kaveri, all draining into the Bay of Bengal. These rivers constitute 20% of India's total outflow.
The heavy southwest monsoon rains cause the Brahmaputra and other rivers to distend their banks, often flooding surrounding areas. Though they provide rice paddy farmers with a largely dependable source of natural irrigation and fertilisation, such floods have killed thousands of people and tend to cause displacements of people in such areas.
Major gulfs include the Gulf of Cambay, Gulf of Kutch, and the Gulf of Mannar. Straits include the Palk Strait, which separates India from Sri Lanka; the Ten Degree Channel, which separates the Andamans from the Nicobar Islands; and the Eight Degree Channel, which separates the Laccadive and Amindivi Islands from the Minicoy Island to the south. Important capes include the Kanyakumari (formerly called Cape Comorin), the southern tip of mainland India; Indira Point, the southernmost point in India (on Great Nicobar Island); Rama's Bridge, and Point Calimere. The Arabian Sea lies to the west of India, the Bay of Bengal and the Indian Ocean lie to the east and south, respectively. Smaller seas include the Laccadive Sea and the Andaman Sea. There are four coral reefs in India, located in the Andaman and Nicobar Islands, the Gulf of Mannar, Lakshadweep, and the Gulf of Kutch. Important lakes include Sambhar Lake, the country's largest saltwater lake in Rajasthan, Vembanad Lake in Kerala, Kolleru Lake in Andhra Pradesh, Loktak Lake in Manipur, Dal Lake in Kashmir, Chilka Lake (lagoon lake) in Odisha, and Sasthamkotta Lake in Kerala.
India's wetland ecosystems are widely distributed, from the cold and arid ones of the Ladakh region of Jammu and Kashmir to those in the wet and humid climate of peninsular India. Most of the wetlands are directly or indirectly linked to river networks. The Indian government has identified a total of 71 wetlands for conservation, which form part of sanctuaries and national parks. Mangrove forests are present all along the Indian coastline in sheltered estuaries, creeks, backwaters, salt marshes and mudflats. The mangrove area covers a total of , which comprises 7% of the world's total mangrove cover. Prominent mangrove covers are located in the Andaman and Nicobar Islands, the Sundarbans delta, the Gulf of Kutch and the deltas of the Mahanadi, Godavari and Krishna rivers. Parts of Maharashtra, Karnataka and Kerala also have large mangrove covers.
The Sundarbans delta is home to the largest mangrove forest in the world. It lies at the mouth of the Ganges and spreads across areas of Bangladesh and West Bengal. The Sundarbans is a UNESCO World Heritage Site, but is identified separately as the Sundarbans (Bangladesh) and the Sundarbans National Park (India). The Sundarbans are intersected by a complex network of tidal waterways, mudflats and small islands of salt-tolerant mangrove forests. The area is known for its diverse fauna, being home to a large variety of species of birds, spotted deer, crocodiles and snakes. Its most famous inhabitant is the Bengal tiger. It is estimated that there are now 400 Bengal tigers and about 30,000 spotted deer in the area.
The Rann of Kutch is a marshy region located in northwestern Gujarat and the bordering Sindh province of Pakistan. It occupies a total area of . The region was originally a part of the Arabian Sea. Geologic forces such as earthquakes resulted in the damming up of the region, turning it into a large saltwater lagoon. This area gradually filled with silt, turning it into a seasonal salt marsh. During the monsoons, the area turns into a shallow marsh, often flooding to knee depth. After the monsoons, the region turns dry and becomes parched.
Based on the Köppen system, India hosts six major climatic subtypes, ranging from arid desert in the west and alpine tundra and glaciers in the north to humid tropical regions supporting rainforests in the southwest and the island territories. The nation has four seasons: winter (January–February), summer (March–May), a monsoon (rainy) season (June–September) and a post-monsoon period (October–December).
The Himalayas act as a barrier to the frigid katabatic winds flowing down from Central Asia. Thus, northern India is kept warm or only mildly cooled during winter; in summer, the same phenomenon makes India relatively hot. Although the Tropic of Cancer—the boundary between the tropics and subtropics—passes through the middle of India, the whole country is considered to be tropical.
Summer lasts between March and June in most parts of India. Temperatures can exceed during the day. Coastal regions see temperatures exceeding , coupled with high levels of humidity. In the Thar Desert area temperatures can exceed . The rain-bearing monsoon clouds are attracted to the low-pressure system created by the Thar Desert. The southwest monsoon splits into two arms, the Bay of Bengal arm and the Arabian Sea arm. The Bay of Bengal arm moves northwards, crossing northeast India in early June. The Arabian Sea arm moves northwards and deposits much of its rain on the windward side of the Western Ghats. Winters in peninsular India see mild to warm days and cool nights; further north the temperatures are cooler. Temperatures in some parts of the Indian plains sometimes fall below freezing, and most of northern India is plagued by fog during this season. The highest temperature recorded in India was in Phalodi, Rajasthan. The lowest was in Kashmir.
India's geological features are classified based on their era of formation. The Precambrian formations of the Cuddapah and Vindhyan systems are spread out over the eastern and southern states; a small portion of formations from this period is found in western and central India. The Paleozoic formations from the Cambrian, Ordovician, Silurian and Devonian systems are found in the Western Himalaya region in Kashmir and Himachal Pradesh. The Mesozoic Deccan Traps formation is seen over most of the northern Deccan; it is believed to be the result of sub-aerial volcanic activity. The Trap soil is black in colour and conducive to agriculture. The Carboniferous, Permian and Triassic systems are seen in the western Himalayas. The Jurassic system is seen in the western Himalayas and Rajasthan.
Tertiary imprints are seen in parts of Manipur, Nagaland, Arunachal Pradesh and along the Himalayan belt. The Cretaceous system is seen in central India in the Vindhyas and in parts of the Indo-Gangetic plains. The Gondwana system is seen in the Narmada River area in the Vindhyas and Satpuras. The Eocene system is seen in the western Himalayas and Assam. Oligocene formations are seen in Kutch and Assam. The Pleistocene system is found over central India. The Andaman and Nicobar Islands are thought to have been formed in this era by volcanoes. The Himalayas were formed by the convergence and deformation of the Indo-Australian and Eurasian Plates; their continued convergence raises the height of the Himalayas by one centimetre each year.
Soils in India can be classified into eight categories: alluvial, black, red, laterite, forest, arid and desert, saline and alkaline, and peaty and organic soils. Alluvial soil constitutes the largest soil group in India, covering 80% of the total land surface. It is derived from the deposition of silt carried by rivers and is found in the Great Northern Plains from Punjab to the Assam valley. Alluvial soils are generally fertile but lack nitrogen and tend to be phosphoric. The National Disaster Management Authority says that 60% of the Indian landmass is prone to earthquakes and 8% is susceptible to cyclone risks.
Black soils are well developed in the Deccan lava region of Maharashtra, Gujarat, and Madhya Pradesh. These contain a high percentage of clay and are moisture-retentive. Red soils are found in Tamil Nadu, the Karnataka plateau, the Andhra plateau, the Chota Nagpur plateau and the Aravallis. These are deficient in nitrogen, phosphorus and humus. Laterite soils form in tropical regions with heavy rainfall, which leaches out all the soluble material from the top layer of the soil. These are generally found in the Western Ghats, the Eastern Ghats and the hilly areas of the northeastern states that receive heavy rainfall. Forest soils occur on the slopes of mountains and hills in the Himalayas, Western Ghats and Eastern Ghats. These generally contain large amounts of dead leaves and other organic matter called humus.
India's total renewable water resources are estimated at 1,907.8 km³ a year. Its annual supply of usable and replenishable groundwater amounts to 350 billion cubic metres. Only 35% of groundwater resources are being utilised. About 44 million tonnes of cargo is moved annually through the country's major rivers and waterways. Groundwater supplies 40% of the water in India's irrigation canals. 56% of the land is arable and used for agriculture. Black soils are moisture-retentive and are preferred for dry farming and for growing cotton, linseed, and similar crops. Forest soils are used for tea and coffee plantations. Red soils have a wide diffusion of iron content.
Most of India's estimated oil reserves are located in the Mumbai High, upper Assam, Cambay, Krishna–Godavari and Cauvery basins. India possesses about seventeen trillion cubic feet of natural gas in Andhra Pradesh, Gujarat and Odisha. Uranium is mined in Andhra Pradesh. India has 400 medium-to-high-enthalpy thermal springs for producing geothermal energy, spread across seven "provinces", including the Himalayas, Sohana, Cambay, the Narmada–Tapti delta, the Godavari delta and the Andaman and Nicobar Islands (specifically the volcanic Barren Island).
India is the world's biggest producer of mica blocks and mica splittings. India ranks second among the world's largest producers of barite and chromite. The Pleistocene system is rich in minerals. India is the third-largest coal producer in the world and ranks fourth in the production of iron ore. It is the fifth-largest producer of bauxite, the second-largest producer of crude steel as of February 2018 (replacing Japan), the seventh-largest producer of manganese ore and the eighth-largest of aluminium. India has significant sources of titanium ore, diamonds and limestone. India possesses 24% of the world's known and economically viable thorium, which is mined along the shores of Kerala. Gold was mined in the now-defunct Kolar Gold Fields in Karnataka.
Demographics of India
India is the second-most-populous country in the world, home to nearly a fifth of the world's population.
During 1975–2010, the population doubled to 1.2 billion. The Indian population reached the billion mark in 1998. India is projected to be the world's most populous country by 2024, surpassing the population of China. It is expected to become the first country to be home to more than 1.5 billion people by 2030, and its population is set to reach 1.7 billion by 2050. Its population growth rate is 1.13%, ranking 112th in the world in 2017.
India has more than 50% of its population below the age of 25 and more than 65% below the age of 35. It is expected that, in 2020, the average age of an Indian will be 29 years, compared to 37 for China and 48 for Japan; and, by 2030, India's dependency ratio should be just over 0.4. However, the number of children in India peaked more than a decade ago and is now falling: the number of children under the age of five peaked in 2007, and the number of Indians under 15 years old peaked slightly later (in 2011); both are now declining.
India has more than two thousand ethnic groups, and every major religion is represented, as are four major families of languages (Indo-European, Dravidian, Austroasiatic and Sino-Tibetan languages) as well as two language isolates (the Nihali language, spoken in parts of Maharashtra, and the Burushaski language, spoken in parts of Jammu and Kashmir). Around 1,000,000 people in India are Anglo-Indians, and some 700,000 people from the United States live in India; together they represent over 0.1% of India's total population.
Further complexity is lent by the great variation that occurs across this population on social parameters such as income and education. Only the continent of Africa exceeds the linguistic, genetic and cultural diversity of the nation of India.
The sex ratio is 944 females per 1000 males (2016) (940 per 1000 in 2011). This ratio has shown an upward trend for the last two decades, after a continuous decline in the last century.
The following table lists estimates for the population of India (including what are now Pakistan and Bangladesh) from prehistory up until 1820. It includes estimates and growth rates according to five different economic historians, along with interpolated estimates and overall aggregate averages derived from their estimates.
The population grew from the South Asian Stone Age in 10,000 BC to the Maurya Empire in 200 BC at a steadily increasing growth rate, before population growth slowed down in the classical era up to 500 AD, and then became largely stagnant during the early medieval era up to 1000 AD. The population growth rate then increased in the late medieval era (during the Delhi Sultanate) from 1000 to 1500.
India's population growth rate under the Mughal Empire (16th–18th centuries) was higher than during any previous period in Indian history. Under the Mughal Empire, India experienced an unprecedented economic and demographic upsurge, due to Mughal agrarian reforms that intensified agricultural production, proto-industrialisation that established India as the most important centre of manufacturing in international trade, and a relatively high degree of urbanisation for its time: 15% of the population lived in urban centres, a higher share than in 19th-century British India or in contemporary Europe up until the 19th century.
Under the reign of Akbar (reigned 1556–1605) in 1600, the Mughal Empire's urban population was up to 17 million people, larger than the urban population in Europe. By 1700, Mughal India had an urban population of 23 million people, larger than British India's urban population of 22.3 million in 1871. Nizamuddin Ahmad (1551–1621) reported that, under Akbar's reign, Mughal India had 120 large cities and 3,200 townships. A number of cities in India had a population between a quarter-million and half-million people, with larger cities including Agra (in Agra Subah) with up to 800,000 people and Dhaka (in Bengal Subah) with over 1 million people. Mughal India also had a large number of villages, with 455,698 villages by the time of Aurangzeb (reigned 1658–1707).
In the early 18th century, the average life expectancy in Mughal India was 35 years. In comparison, the average life expectancy in several European nations in the 18th century was 34 years in early modern England, up to 30 years in France, and about 25 years in Prussia.
The total fertility rate is the number of children born per woman. It is based on fairly good data for the entire period. Sources: Our World In Data and Gapminder Foundation.
Life expectancy from 1881 to 1950
The population of India under the British Raj (including what are now Pakistan and Bangladesh) according to censuses:
Studies of India's population since 1881 have focused on such topics as total population, birth and death rates, growth rates, geographic distribution, literacy, the rural and urban divide, cities of a million, and the three cities with populations over eight million: Delhi, Greater Mumbai (Bombay), and Kolkata (Calcutta).
Mortality rates fell in the period 1920–45, primarily due to biological immunisation. Other factors included rising incomes, better living conditions, improved nutrition, a safer and cleaner environment, and better official health policies and medical care.
India occupies 2.41% of the world's land area but supports over 18% of the world's population. At the 2001 census 72.2% of the population lived in about 638,000 villages and the remaining 27.8% lived in more than 5,100 towns and over 380 urban agglomerations.
India's population exceeded that of the entire continent of Africa by 200 million people in 2010. However, because Africa's population growth rate is nearly double India's, Africa's population is expected to surpass both China's and India's by 2025.
The table below summarises India's demographics by religion at the 2011 census, in per cent (excluding the Mao-Maram, Paomata and Purul subdivisions of Senapati District of Manipur state due to cancellation of census results). The data is "unadjusted" (without excluding Assam and Jammu and Kashmir); the 1981 census was not conducted in Assam and the 1991 census was not conducted in Jammu and Kashmir.
The table below represents the infant mortality rate trends in India, based on sex, over the last 15 years. In the urban areas of India, average male infant mortality rates are slightly higher than average female infant mortality rates.
Some activists believe India's 2011 census shows a serious decline in the number of girls under the age of seven; they posit that eight million female fetuses may have been aborted between 2001 and 2011. These claims are controversial. Scientists who study human sex ratios and demographic trends suggest that a birth sex ratio between 1.08 and 1.12 can result from natural factors, such as the age of the mother at the baby's birth, the age of the father, the number of babies per couple, economic stress, and endocrinological factors. The 2011 census birth sex ratio in India, of 917 girls to 1000 boys, is similar to the 870–930 girls to 1000 boys birth sex ratios observed in Japanese, Chinese, Cuban, Filipino and Hawaiian ethnic groups in the United States between 1940 and 2005. It is also similar to birth sex ratios below 900 girls to 1000 boys observed in mothers of different age groups and gestation periods in the United States.
41.03% of Indians speak Hindi, while the rest speak Assamese, Bengali, Gujarati, Maithili, Kannada, Malayalam, Marathi, Odia, Punjabi, Tamil, Telugu, Urdu and a variety of other languages. There are a total of 122 languages and 234 mother tongues. Twenty-two of these languages are specified in the Eighth Schedule of the Indian Constitution; the remaining 100 are non-specified.
The table immediately below excludes Mao-Maram, Paomata and Purul subdivisions of Senapati District of Manipur state due to cancellation of census results.
Source: "UN World Population Prospects"
Structure of the population by age (9 February 2011 census; includes data for the Indian-administered part of Jammu and Kashmir) is shown below:
Population pyramid 2016 (estimates):
From the Demographic Health Survey:
The following demographic statistics are from the CIA World Factbook, unless otherwise indicated.
1,166,079,217 (July 2009 est. CIA), 1,210 million (2011 census), 1,281,935,911 (July 2017 est.)
72.2%; male: 381,668,992, female: 360,948,755 (2001 census)
"0–14 years:" 27.34% (male 186,087,665/female 164,398,204)
"15-24 years:" 17.9% (male 121,879,786/female 107,583,437)
"25-54 years:" 41.08% (male 271,744,709/female 254,834,569)
"55-64 years:" 7.45% (male 47,846,122/female 47,632,532)
"65+ years:" 6.24% (male 37,837,801/female 42,091,086) (2017 est.)
Total: 28.7 years
Male: 28 years
female: 29.5 years (2020 est.)
1.1% (2020 est)
74% (age 7 and above, in 2011)
81.4% (total population, age 15–25, in 2006)
22% (2006 est.)
7.8%
0.00 migrant(s)/1,000 population (2020 est.)
"At birth:"
1.12 male(s)/female
"Under 10 years:"
1.13 male(s)/female
"15–24 years:"
1.13 male(s)/female
"24–64 years:"
1.06 male(s)/female
"65 years and over:"
0.9 male(s)/female
"Total population:"
1.08 male(s)/female (2017 est.)
"Total population:" 69.7 years
"Male:" 68.4 years
"Female:" 71.2 years (2020 est.)
2.35 (2020 est.)
The TFR (total number of children born per woman) by religion in 2005–2006 was: Hindus, 2.7; Muslims, 3.1; Christians, 2.4; and Sikhs, 2.0.
Hindus 79.5%, Muslims 15%, Christian 2.3%, Sikh 1.7%, other and unspecified 2% (2011 est.)
Scheduled castes: 16.6% (2011 census);
scheduled tribes: 8.6% (2011 census)
See Languages of India and List of Indian languages by total speakers. There are 216 languages with more than 10,000 native speakers in India. The largest of these is Hindi with some 337 million, and the second largest is Bengali with 238 million. 22 languages are recognised as official languages. In India, there are 1,652 languages and dialects in total.
Caste and community statistics as recorded by the "Socially and Educationally Backward Classes Commission" (SEBC), or Mandal Commission, of 1979, which completed its work in 1983.
There has not yet been a proper consensus on contemporary figures.
The following data are from the Mandal report.
India is projected to overtake China as the world's most populous nation by 2027. Note that these projections make assumptions about future fertility and death rates which may not turn out to be correct. Fertility rates also vary from region to region, with some higher than the national average and some lower.
In millions
The national Census of India does not recognise racial or ethnic groups within India, but recognises many of the tribal groups as Scheduled Castes and Tribes (see list of Scheduled Tribes in India).
According to a 2009 study published by Reich "et al.", the modern Indian population is composed of two genetically divergent and heterogeneous populations which mixed in ancient times (about 1,200–3,500 BP), known as Ancestral North Indians (ANI) and Ancestral South Indians (ASI). ASI corresponds to the Dravidian-speaking population of southern India, whereas ANI corresponds to the Indo-Aryan-speaking population of northern India.
For a list of ethnic groups in the Republic of India (as well as neighbouring countries) see ethnic groups of the Indian subcontinent.
Y-chromosome DNA (Y-DNA) represents the male lineage. The Indian Y-chromosome pool may be summarised as follows: haplogroups R-M420, H, R2, L and NOP together comprise generally more than 80% of the total chromosomes.
Mitochondrial DNA (mtDNA) represents the female lineage. The Indian mitochondrial DNA pool is primarily made up of haplogroup M.
Numerous genomic studies have been conducted in the last 15 years to seek insights into India's demographic and cultural diversity. These studies paint a complex and conflicting picture.
Politics of India
The politics of India works within the framework of the country's constitution. India is a federal parliamentary democratic republic in which the President of India is the head of state and the Prime Minister of India is the head of government. India follows a dual polity system, i.e. a double government (federal in nature) consisting of the central authority at the centre and the states at the periphery. The constitution defines the organisational powers and limitations of both central and state governments; it is well recognised, rigid and considered supreme, i.e. the laws of the nation must conform to it.
There is a provision for a bicameral legislature consisting of an upper house, the Rajya Sabha (Council of States), which represents the states of the Indian federation, and a lower house, the Lok Sabha (House of the People), which represents the people of India as a whole. The Indian constitution provides for an independent judiciary, which is headed by the Supreme Court. The court's mandate is to protect the constitution, to settle disputes between the central government and the states, to settle inter-state disputes, to nullify any central or state laws that go against the constitution and to protect the fundamental rights of citizens, issuing writs for their enforcement in cases of violation.
There are 543 members in the Lok Sabha, who are elected from the 543 Indian constituencies.
There are 245 members in the Rajya Sabha, of which 233 are elected indirectly by single transferable vote by the members of the state legislative assemblies, and the other 12 are nominated by the President of India.
Governments are formed through elections held every five years (unless otherwise specified), by parties that secure a majority of members in their respective lower houses (the Lok Sabha in the central government and the Vidhan Sabha in the states). India had its first general election in 1951, which was won by the Indian National Congress, a political party that went on to dominate subsequent elections until 1977, when a non-Congress government was formed for the first time in independent India. The 1990s saw the end of single-party domination and the rise of coalition governments. The elections for the 16th Lok Sabha, held in April–May 2014, once again brought back single-party rule, with the Bharatiya Janata Party claiming a majority in the Lok Sabha.
In recent decades, Indian politics has become a dynastic affair. Possible reasons for this include the absence of stable party organisations and of independent civil society associations that mobilise support for the parties, as well as the centralised financing of elections.
When compared to other democracies, India has had a large number of political parties during its history under democratic governance. It has been estimated that over 200 parties were formed after India became independent in 1947. Leadership of political parties in India is commonly interwoven with well-known families whose dynastic leaders actively play the dominant role in a party. Further, party leadership roles are often transferred to subsequent generations in the same families. The two main parties in India are the Bharatiya Janata Party, also known as the BJP, which is the leading right-wing party, and the Indian National Congress, commonly called the INC or Congress, which is the leading centre-left party. These two parties currently dominate national politics, each adhering loosely to its place on the left–right political spectrum. At present, there are eight national parties and many more state parties.
Every political party in India, whether a national or a regional/state party, must have a symbol and must be registered with the Election Commission of India. Symbols are used in the Indian political system to identify political parties, in part so that illiterate people can vote by recognising the party symbols.
In the current amendment to the Symbols Order, the Commission has asserted the following five principles:
Criteria
A party is recognised as a national party if it wins at least two per cent (2%) of the seats in the House of the People (i.e., 11 seats in the existing House of 543 members), with these members elected from at least three different states.
A party is recognised as a state party if it wins at least three per cent (3%) of the total number of seats in the Legislative Assembly of the State, or at least three seats in the Assembly, whichever is more.
Since 1984, when a strict anti-defection law was passed, there has been a tendency amongst the Indian politicians to float their own parties rather than join a broad based party such as the Congress or the BJP. For example, between the 1984 and 1989 elections, the number of parties contesting elections increased from 33 to 113. In the decades since, this fragmentation has continued.
India has a history of party alliances and breakdowns of alliances. However, three party alliances regularly align on a national level in competing for government positions. The member parties work in harmony to further national interests, although parties can switch alliances. The three alliances are–
India has seen political corruption for decades. After the British left the subcontinent, corruption became increasingly pronounced in the country. Democratic institutions soon became federally owned, dissent was eliminated, and a majority of citizens paid the price. Political corruption in India is weakening its democracy and has led to an erosion of the general public's trust in the political system. A large amount of money is required for elections, which is the source of the political-capitalist nexus.
Pre-election alliances are common in India with parties deciding to share seats. This is seen mainly on a state by state basis rather than on the national level. Candidate selection starts after seat sharing has been agreed by alliance partners.
Indian political parties have low level of internal party democracy and therefore, in Indian elections, both at the state or national level, party candidates are typically selected by the party elites, more commonly called the party high command. The party elites use a number of criteria for selecting candidates. These include the ability of the candidates to finance their own election, their educational attainment, and the level of organization the candidates have in their respective constituencies. Quite often the last criterion is associated with candidate criminality.
Panchayati Raj institutions, or local self-government bodies, play a crucial role in Indian politics, as they focus on grassroots-level administration in India.
On 24 April 1993, the Constitutional (73rd Amendment) Act, 1992 came into force to provide constitutional status to the Panchayati Raj institutions. This Act was extended to Panchayats in the tribal areas of eight States, namely Andhra Pradesh, Bihar, Gujarat, Himachal Pradesh, Maharashtra, Madhya Pradesh, Odisha and Rajasthan from 24 December 1996.
The Act aims to provide a 3-tier system of Panchayati Raj for all states with a population of over 2 million, to hold Panchayat elections regularly every 5 years, to provide reservation of seats for Scheduled Castes, Scheduled Tribes and women, to appoint a State Finance Commission to make recommendations regarding the financial powers of the Panchayats, and to constitute a District Planning Committee to prepare a draft development plan for the district.
As in any other democracy, political parties represent different sections of Indian society and its regions, and their core values play a major role in the politics of India. Both the executive and legislative branches of the government are run by representatives of the political parties who have been elected through elections. Through the electoral process, the people of India choose which representatives and which political party should run the government. Any party may gain a simple majority in the lower house; if no single party does, coalitions are formed by the political parties. A party or coalition cannot form a government unless it has a majority in the lower house.
India has a multi-party system, with a number of national as well as regional parties. A regional party may gain a majority and rule a particular state. A party represented in more than four states is labelled a national party. As of January 2020, the Indian National Congress (INC) had ruled India for 53 of the 72 years since independence.
The party enjoyed a parliamentary majority save for two brief periods during the 1970s and late 1980s. Its rule was interrupted between 1977 and 1980, when the Janata Party coalition won the election owing to public discontent with the controversial state of emergency declared by the then Prime Minister, Indira Gandhi. The Janata Dal won the 1989 election, but its government managed to hold on to power for only two years.
Between 1996 and 1998, there was a period of political flux, with the government formed first by the nationalist Bharatiya Janata Party (BJP) and then by a left-leaning United Front coalition. In 1998, the BJP formed the National Democratic Alliance with smaller regional parties, and became the first non-INC coalition government to complete a full five-year term. The 2004 Indian elections saw the INC winning the largest number of seats and forming a government leading the United Progressive Alliance, supported by left parties and those opposed to the BJP.
On 22 May 2004, Manmohan Singh was appointed Prime Minister of India following the victory of the INC and the left front in the 2004 Lok Sabha election. The UPA later ruled India without the support of the left front. Previously, Atal Bihari Vajpayee had taken office in October 1999 after a general election in which a BJP-led coalition of 13 parties called the National Democratic Alliance emerged with a majority. In May 2014, Narendra Modi of the BJP was elected Prime Minister of India.
Formation of coalition governments reflects the transition in Indian politics away from the national parties toward smaller, more narrowly based regional parties. Some regional parties, especially in South India, are deeply aligned to the ideologies of the region unlike the national parties and thus the relationship between the central government and the state government in various states has not always been free of rancor. Disparity between the ideologies of the political parties ruling the centre and the state leads to severely skewed allocation of resources between the states.
The lack of homogeneity in the Indian population causes division between different sections of the people based on religion, region, language, caste and ethnicity. This has led to the rise of political parties with agendas catering to one or a mix of these groups. Parties in India also court voters disaffected with other parties and use them as an electoral asset.
Some parties openly profess their focus on a particular group; for example, the Dravida Munnetra Kazhagam's and the All India Anna Dravida Munnetra Kazhagam's focus on the Dravidian population and Tamil identity; the Biju Janata Dal's championing of Odia culture; the Shiv Sena's pro-Marathi agenda; the Naga People's Front's demand for protection of Naga tribal identity; the People's Democratic Party's and National Conference's calls for Kashmiri Muslim identity; and the Telugu Desam Party, formed in united Andhra Pradesh by the late N. T. Rama Rao, which champions the rights and needs of the people of that state alone. Some other parties claim to be universal in nature, but tend to draw support from particular sections of the population. For example, the Rashtriya Janata Dal (translated as National People's Party) has a vote bank among the Yadav and Muslim population of Bihar, and the All India Trinamool Congress does not have any significant support outside West Bengal.
The narrow focus and vote-bank politics of most parties, even in the central government and central legislature, sideline national issues such as economic welfare and national security. Moreover, internal security is also threatened, as incidents of political parties instigating and leading violence between opposing groups of people are a frequent occurrence.
Economic issues such as poverty, unemployment and development are the main issues that influence politics. "Garibi Hatao" (eradicate poverty) has long been a slogan of the Indian National Congress. The Bharatiya Janata Party encourages a free-market economy; its better-known slogan in this field is "Sabka Saath, Sabka Vikas" (Cooperation with all, progress of all). The Communist Party of India (Marxist) vehemently supports left-wing policies like land-for-all and the right to work, and strongly opposes neoliberal policies such as globalisation, capitalism and privatisation.
Terrorism, Naxalism, religious violence and caste-related violence are important issues that affect the political environment of the Indian nation. Stringent anti-terror legislation such as TADA, POTA and MCOCA has received much political attention, both for and against; some of these laws were eventually repealed due to human rights violations, though the UAPA was reinforced by new legislation in 2019.
Terrorism has affected Indian politics since independence, whether terrorism supported from Pakistan or internal guerrilla groups such as the Naxalites. In 1991, former Prime Minister Rajiv Gandhi was assassinated during an election campaign. The suicide bomber was later linked to the Sri Lankan terrorist group Liberation Tigers of Tamil Eelam; it was later revealed that the killing was an act of vengeance for Rajiv Gandhi's sending troops to Sri Lanka against them in 1987.
The demolition of the Babri Masjid by Hindu karsevaks on 6 December 1992 and the Godhra train killings both resulted in nationwide communal riots; the worst, in Mumbai within two months of the demolition, left at least 900 dead. Those riots were followed by the 1993 Mumbai bomb blasts, which resulted in more deaths.
Law and order issues, such as action against organised crime, do not generally affect the outcomes of elections. On the other hand, there is a criminal–politician nexus, and many elected legislators have criminal cases against them. In July 2008, the "Washington Post" reported that nearly a fourth of the 540 Indian Parliament members faced criminal charges, "including human trafficking, child prostitution, immigration rackets, embezzlement, rape and even murder".
The Constitution of India lays down that the Head of State and of the Union Executive is the President of India. He/she is elected for a five-year term by an electoral college consisting of members of both Houses of Parliament and members of the legislative assemblies of the states. The President is eligible for re-election; however, in India's independent history, only one president, Rajendra Prasad, has been re-elected.
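The weighting of votes in this electoral college can be sketched in code. This is a simplified illustration only: the state population and MLA count below are hypothetical, and the actual constitutional formula (based on the 1971 census) carries specific rounding rules that are approximated here with plain rounding.

```python
def mla_vote_value(state_population_1971: int, elected_mlas: int) -> int:
    # Each MLA's vote value = state population (1971 census)
    # divided by (number of elected MLAs * 1000), rounded.
    return round(state_population_1971 / (elected_mlas * 1000))

def mp_vote_value(total_mla_vote_value: int, elected_mps: int) -> int:
    # Each MP's vote value = total value of all MLA votes
    # divided by the number of elected MPs, rounded.
    return round(total_mla_vote_value / elected_mps)

# Hypothetical state: 50,000,000 people (1971 census), 200 elected MLAs.
print(mla_vote_value(50_000_000, 200))  # 250
```

This weighting is what makes the college proportional: more populous states carry more aggregate voting power, while the MP value balances Parliament against the state assemblies.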
The President appoints the Prime Minister of India from the party or coalition which commands maximum support of the Lok Sabha, on whose recommendation he/she nominates the other ministers. The President also appoints judges of the Supreme Court and High Court. It is on the President's recommendation that the Houses of Parliament meet, and only the president has the power to dissolve the Lok Sabha. Furthermore, no bill passed by Parliament can become law without the president's assent.
However, the role of the president of India is largely ceremonial. All the powers of the president mentioned above are exercised on the recommendation of the Union Cabinet, and the president does not have much discretion in any of these matters. Nor does the president have discretion in the exercise of his executive powers, as the real executive authority lies with the cabinet. The current President is Ram Nath Kovind.
The Office of the Vice-President of India is constitutionally the second most senior office in the country, after the President. The vice-president is also elected by an electoral college, consisting of members of both houses of parliament.
Like the president, the role of the Vice-President is also ceremonial, with no real authority vested in him/her. The Vice-President fills in a vacancy in the office of President (till the election of a new president). His only regular function is that he functions as the Chairman of the Rajya Sabha. No other duties/powers are vested in the office. The current Vice President is Venkaiah Naidu.
The Union Council of Ministers, headed by the Prime Minister, is the body with which the real executive power resides. The Prime Minister is the recognized head of the government.
The Union Council of Ministers is the body of ministers with which the PM works on a day-to-day basis. Work is divided among ministers into various departments and ministries. The Union Cabinet is a smaller body of ministers within the Council of Ministers; it is the most powerful set of people in the country, playing an instrumental role in legislation and execution alike.
All members of the Union Council of ministers must be members of either House of Parliament at the time of appointment or must get elected/nominated to either House within six months of their appointment.
It is the Union Cabinet that co-ordinates all foreign and domestic policy of the Union. It exercises immense control over administration, finance, legislation, military, etc. The Head of the Union Cabinet is the Prime Minister. The current Prime Minister of India is Narendra Modi.
India has a federal form of government, and hence each state also has its own government. The executive of each state is the Governor (equivalent to the president of India), whose role is ceremonial. The real power resides with the Chief Minister (equivalent to the Prime Minister) and the state council of ministers. States may either have a unicameral or bicameral legislature, varying from state to state. The Chief Minister and other state ministers are also members of the legislature.
Since the 1980s, Indian politics has become dynastic, possibly due to the absence of party organization, of independent civil society associations that mobilize support for parties, and of centralized financing of elections. This phenomenon is seen from the national level down to the district level. One example of dynastic politics has been the Nehru–Gandhi family, which produced three Indian prime ministers. Family members have also led the Congress party for most of the period since 1978, when Indira Gandhi floated the then Congress(I) faction of the party. The ruling Bharatiya Janata Party also features several senior leaders who are dynasts. Dynastic politics is also prevalent in a number of political parties with a regional presence, such as the All India Majlis-e-Ittehadul Muslimeen, Dravida Munnetra Kazhagam, Indian National Lok Dal, Jammu & Kashmir National Conference, Jammu and Kashmir Peoples Democratic Party, Janata Dal (Secular), Jharkhand Mukti Morcha, National People's Party, Nationalist Congress Party, Rashtriya Janata Dal, Rashtriya Lok Dal, Samajwadi Party, Shiromani Akali Dal, Shiv Sena, Telangana Rashtra Samithi and Telugu Desam Party.
Telecommunications in India
India's telecommunication network is the second largest in the world by number of telephone users (both fixed and mobile phones), with 1.1724 billion subscribers as of 31 December 2019. It has one of the lowest call tariffs in the world, enabled by mega telecom operators and hyper-competition among them. As of 31 December 2019, India has the world's second-largest Internet user base, with 661.94 million broadband internet subscribers in the country. As of 31 December 2018, India had a population of 130 crore (1.3 billion), 123 crore (1.23 billion) Aadhaar digital biometric identity cards, 121 crore (1.21 billion) mobile phones, 44.6 crore (446 million) smartphones, and 56 crore (560 million, or 43% of the total population) internet users, up from 481 million people (35% of the country's total population) in December 2017, with 51 per cent growth in e-commerce.
The major sectors of the Indian telecommunication industry are telephony, internet and television broadcasting. The industry, which is in an ongoing process of transforming into a next-generation network, employs an extensive system of modern network elements such as digital telephone exchanges, mobile switching centres, media gateways and signalling gateways at the core, interconnected by a wide variety of transmission systems using fibre optics or microwave radio relay networks. The access network, which connects the subscriber to the core, is highly diversified, with different copper-pair, optic-fibre and wireless technologies. DTH, a relatively new broadcasting technology, has attained significant popularity in the television segment. The introduction of private FM has given a fillip to radio broadcasting in India. Telecommunication in India has been greatly supported by the country's INSAT system, one of the largest domestic satellite systems in the world. India possesses a diversified communications system which links all parts of the country by telephone, Internet, radio, television and satellite.
The Indian telecom industry has undergone rapid market liberalisation and growth since the 1990s and has become the world's most competitive and one of the fastest-growing telecom markets. The industry has grown more than twenty times in just ten years, from under 37 million subscribers in 2001 to over 846 million in 2011, and 1.1514 billion at the end of December 2019.
As of Dec 2019, India has the world's second-largest mobile phone user base with over 1.1514 billion users.
Telecommunication has supported the socioeconomic development of India and has played a significant role in narrowing the rural-urban digital divide to some extent. It has also helped to increase the transparency of governance with the introduction of e-governance in India. The government has pragmatically used modern telecommunication facilities to deliver mass education programmes to the rural folk of India.
According to the London-based telecom trade body GSMA, the telecom sector accounted for 6.5% of India's GDP in 2015 and supported direct employment for 2.2 million people in the country. GSMA estimates that the Indian telecom sector will contribute further to the economy and support 3 million direct jobs and 2 million indirect jobs by 2020.
Telecommunications in India began with the introduction of the telegraph. The Indian postal and telecom sectors are among the world's oldest. In 1850, the first experimental electric telegraph line was started between Calcutta and Diamond Harbour. In 1851, it was opened for the use of the British East India Company. The Posts and Telegraphs department occupied a small corner of the Public Works Department at that time.
The construction of telegraph lines began in November 1853. These connected Kolkata (then Calcutta) and Peshawar in the north; Agra; Mumbai (then Bombay) through the Sindwa Ghats; Chennai (then Madras) in the south; and Ootacamund and Bangalore. William O'Shaughnessy, who pioneered the telegraph and telephone in India, belonged to the Public Works Department and worked towards the development of telecom throughout this period. A separate department was opened in 1854 when telegraph facilities were opened to the public.
In 1880, two telephone companies, The Oriental Telephone Company Ltd. and The Anglo-Indian Telephone Company Ltd., approached the Government of India to establish telephone exchanges in India. Permission was refused on the grounds that the establishment of telephones was a Government monopoly and that the Government itself would undertake the work. In 1881, the Government reversed its earlier decision, and a licence was granted to the Oriental Telephone Company Limited of England to open telephone exchanges at Calcutta, Bombay, Madras and Ahmedabad; the first formal telephone service was thus established in the country. On 28 January 1882, Major E. Baring, a Member of the Governor-General of India's Council, declared open the telephone exchanges in Calcutta, Bombay and Madras. The exchange in Calcutta, named the "Central Exchange", had a total of 93 subscribers in its early stage. Later that year, Bombay also witnessed the opening of a telephone exchange.
Development of broadcasting: Radio broadcasting was initiated in 1927 but became a state responsibility only in 1930. In 1937 it was given the name "All India Radio", and since 1957 it has been called "Akashvani". Limited-duration television programming began in 1959, and complete broadcasting followed in 1965. The Ministry of Information and Broadcasting owned and maintained the audio-visual apparatus, including the television channel "Doordarshan", prior to the economic reforms of 1991. In 1997, an autonomous body named Prasar Bharati was established to take care of public service broadcasting under the Prasar Bharati Act. All India Radio and Doordarshan, which had earlier worked as media units under the Ministry of I&B, became constituents of the body.
Pre-liberalisation statistics: While all the major cities and towns in the country were linked with telephones during the British period, the total number of telephones in 1948 was only around 80,000. Post-independence, growth remained slow because the telephone was seen more as a status symbol than as an instrument of utility. The number of telephones grew slowly to 980,000 in 1971, 2.15 million in 1981 and 5.07 million in 1991, the year economic reforms were initiated in the country.
Liberalisation of the Indian telecommunication industry started in 1981, when Prime Minister Indira Gandhi signed contracts with Alcatel CIT of France to merge with the state-owned telecom company ITI in an effort to set up 5,000,000 lines per year. But the policy soon foundered because of political opposition. Attempts to liberalise the telecommunication industry were continued by the following government under the prime ministership of Rajiv Gandhi. He invited Sam Pitroda, a US-based non-resident Indian (NRI) and former Rockwell International executive, to set up the Centre for Development of Telematics (C-DOT), which manufactured electronic telephone exchanges in India for the first time. Sam Pitroda played a significant role as a consultant and adviser in the development of telecommunication in India.
In 1985, the Department of Telecom (DoT) was separated from the Indian Posts & Telecommunications Department. DoT was responsible for telecom services in the entire country until 1986, when Mahanagar Telephone Nigam Limited (MTNL) and Videsh Sanchar Nigam Limited (VSNL) were carved out of DoT to run the telecom services of the metro cities (Delhi and Mumbai) and international long-distance operations respectively.
The demand for telephones was ever increasing, and in the 1990s the Indian government came under growing pressure to open up the telecom sector to private investment as part of the liberalisation-privatisation-globalisation policies that the government had to accept to overcome the severe fiscal crisis and resultant balance-of-payments issue of 1991. Consequently, private investment in the sector of value-added services (VAS) was allowed, and the cellular telecom sector was opened up to competition from private investment. It was during this period that the Narasimha Rao-led government introduced the National Telecommunications Policy (NTP) in 1994, which brought changes in the following areas: ownership, service and regulation of telecommunications infrastructure. The policy introduced the concept of "telecommunication for all", and its vision was to expand telecommunication facilities to all the villages in India. Liberalisation of the basic telecom sector was also envisaged in this policy. The government was also successful in establishing joint ventures between state-owned telecom companies and international players. Foreign firms were eligible for up to 49% of the total stake. The multinationals were involved only in technology transfer, not policy making.
During this period, the World Bank and ITU had advised the Indian government to liberalise long-distance services in order to break the monopoly of the state-owned DoT and VSNL and to enable competition in the long-distance carrier business, which would help reduce tariffs and benefit the country's economy. The Rao-led government instead liberalised local services, taking the opposition parties into confidence and assuring foreign involvement in the long-distance business after 5 years. The country was divided into 20 telecommunication circles for basic telephony and 18 circles for mobile services. These circles were divided into categories A, B and C depending on the value of the revenue in each circle. The government opened the bids to one private company per circle alongside the government-owned DoT in each circle. For cellular service, two service providers were allowed per circle, and a 15-year licence was given to each provider. During all these improvements, the government did face opposition from ITI, DoT, MTNL, VSNL and other labour unions, but it managed to overcome the hurdles.
In 1997, the government set up the Telecom Regulatory Authority of India (TRAI), which reduced government interference in deciding tariffs and policymaking. Political power changed hands in 1999, and the new government under the leadership of Atal Bihari Vajpayee was more pro-reform and introduced better liberalisation policies. In 2000, the Vajpayee government constituted the Telecom Disputes Settlement and Appellate Tribunal (TDSAT) through an amendment of the TRAI Act, 1997. The primary objective of TDSAT's establishment was to release TRAI from adjudicatory and dispute-settlement functions in order to strengthen the regulatory framework. Any dispute involving parties such as the licensor, licensees, service providers and consumers is resolved by TDSAT, and any direction, order or decision of TRAI can be challenged by appealing to TDSAT. The government corporatised the operations wing of DoT on 1 October 2000, naming it the Department of Telecommunication Services (DTS), which was later renamed Bharat Sanchar Nigam Limited (BSNL). The proposal to raise the stake of foreign investors from 49% to 74% was rejected by the opposition parties and leftist thinkers. Domestic business groups wanted the government to privatise VSNL. Finally, in April 2002, the government decided to cut its stake in VSNL from 53% to 26% and to open it for sale to private enterprises. Tata finally took a 25% stake in VSNL.
This was a gateway for many foreign investors to enter the Indian telecom markets. After March 2000, the government became more liberal in making policies and issuing licences to private operators. The government further reduced licence fees for cellular service providers and increased the allowable stake for foreign companies to 74%. Because of all these factors, service fees finally fell and call costs were cut greatly, enabling every common middle-class family in India to afford a cell phone. Nearly 32 million handsets were sold in India. The data reveal the real growth potential of the Indian mobile market. Many private operators, such as Reliance Communications, Jio, Tata Indicom, Vodafone, Loop Mobile, Airtel and Idea, successfully entered the high-potential Indian telecom market. In the initial 5-6 years, average monthly subscriber additions were only around 0.05 to 0.1 million, and the total mobile subscriber base in December 2002 stood at 10.5 million. However, after a number of proactive initiatives by regulators and licensors, the total number of mobile subscribers increased rapidly to over 929 million as of May 2012.
In March 2008, the total GSM and CDMA mobile subscriber base in the country was 375 million, which represented a nearly 50% growth when compared with previous year.
Because unbranded Chinese cell phones that lack International Mobile Equipment Identity (IMEI) numbers pose a serious security risk to the country, mobile network operators suspended the usage of around 30 million such phones (about 8% of all mobiles in the country) by 30 April 2009. Phones without a valid IMEI cannot be connected to cellular operators.
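An IMEI's structural validity can be checked offline: the standard 15-digit IMEI ends in a Luhn check digit, so a simple checksum distinguishes well-formed numbers from garbage (though not from cloned ones). The sketch below is illustrative, not any operator's actual validation logic; the sample IMEI is the commonly cited example value.

```python
def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:        # summing the digits of a doubled value
                d -= 9
        total += d
    return total % 10 == 0

def imei_valid(imei: str) -> bool:
    """A 15-digit IMEI is structurally valid if it passes the Luhn check."""
    return len(imei) == 15 and imei.isdigit() and luhn_valid(imei)

print(imei_valid("490154203237518"))  # True
print(imei_valid("490154203237519"))  # False (wrong check digit)
```

The same Luhn routine is reused for credit card numbers, which is why it catches single-digit typos and most adjacent transpositions.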
India has opted for the use of both the GSM (global system for mobile communications) and CDMA (code-division multiple access) technologies in the mobile sector. In addition to landline and mobile phones, some of the companies also provide the WLL service. The mobile tariffs in India have also become the lowest in the world. A new mobile connection can be activated with a monthly commitment of US$0.15 only.
On 2 February 2012, the Supreme Court ruled on petitions filed by Subramanian Swamy and the Centre for Public Interest Litigation (CPIL), represented by Prashant Bhushan, challenging the 2008 allotment of 2G licences: it cancelled all 122 spectrum licences granted during the term of A. Raja (Minister of Communications & IT from 2007 to 2009), the primary accused official, and described the allocation of 2G spectrum as "unconstitutional and arbitrary". The bench of G. S. Singhvi and Asok Kumar Ganguly imposed fines on Unitech Wireless, Swan Telecom and Tata Teleservices, and on Loop Telecom, S Tel, Allianz Infratech and Sistema Shyam TeleServices. According to the ruling, the licences already granted would remain in place for four months, after which the government would reissue them.
After Reliance Jio began commercial operations in September 2016, the telecom market saw a huge change in terms of falling tariff rates and reduced data charges, which changed the economics for some of the telecom players. This resulted in the exit of many smaller players from the market. Players like Videocon and Sistema sold their spectrum under spectrum-trading agreements to Airtel and RCOM respectively in Q4 2016.
On 23 February 2017, Telenor India announced that Bharti Airtel would take over all its business and assets in India, with the deal to be completed within 12 months. On 14 May 2018, the Department of Telecom approved the merger of Telenor India with Bharti Airtel, paving the way for the final commercial closing of the merger between the two companies. Telenor India was acquired by Airtel at almost no cost.
On 12 October 2017, Bharti Airtel announced that it would acquire the consumer mobile businesses of Tata Teleservices Ltd (TTSL) and Tata Teleservices Maharashtra Ltd (TTML) in a debt-free, cash-free deal. The deal was essentially free for Airtel, which assumed TTSL's unpaid spectrum payment liability. TTSL continues to operate its enterprise, fixed-line and broadband businesses and its stake in the tower company Viom Networks. The consumer mobile businesses of Tata Docomo, TTSL and TTML were merged into Bharti Airtel from 1 July 2019.
Reliance Communications had to shut down its 2G and 3G services, including all voice services, and offer only 4G data services from 29 December 2017, as a result of debt and a failed merger with Aircel. The shutdown came shortly after the completion of its acquisition of MTS India on 31 October 2017. In February 2019, the company filed for bankruptcy, as it was unable to sell assets to repay its debt. It has an estimated debt of ₹57,383 crore against assets worth ₹18,000 crore.
Aircel shut down its operations in unprofitable circles, including Gujarat, Maharashtra, Haryana, Himachal Pradesh, Madhya Pradesh and Uttar Pradesh (West), from 30 January 2018. On 1 March 2018, Aircel, along with its units Aircel Cellular and Dishnet Wireless, filed for bankruptcy in the National Company Law Tribunal (NCLT) in Mumbai due to intense competition and high levels of debt.
Vodafone and Idea Cellular completed their merger on 31 August 2018, and the merged entity was renamed Vodafone Idea Limited. The merger created the largest telecom company in India by subscribers and by revenue, and the second-largest mobile network in the world by number of subscribers. Under the terms of the deal, the Vodafone Group holds a 45.1% stake in the combined entity, the Aditya Birla Group holds 26%, and the remaining shares are held by the public. However, even after the merger, both brands have continued to operate independently.
With all this consolidation, the Indian mobile market has turned into a four-player market: Jio is the number-one player with a revenue market share of 31%, Vodafone Idea Limited is in second position with 30%, and Airtel India follows with 28%. The government operator BSNL/MTNL is a distant fourth, with an approximate market share of 11%.
Private-sector businesses and two state-run businesses dominate the telephony segment. Most companies were formed during a recent revolution and restructuring launched within a decade, directed by the Ministry of Communications and IT, the Department of Telecommunications and the Ministry of Finance. Since then, most companies have gained 2G, 3G and 4G licences and engaged in fixed-line, mobile and internet business in India. On landlines, intra-circle calls are considered local calls while inter-circle calls are considered long-distance calls. The Foreign Direct Investment policy increased the foreign ownership cap from 49% to 100%. The government is working to integrate the whole country into one telecom circle. For long-distance calls, the area code prefixed with a zero is dialled first, followed by the number (i.e., to call Delhi, 011 would be dialled first, followed by the phone number). For international calls, "00" must be dialled first, followed by the country code, area code and local phone number. The country code for India is 91. International fibre-optic links include those to Japan, South Korea, Hong Kong, Russia and Germany. Major telecom operators in India include the privately owned Vodafone Idea, Airtel and Reliance Jio, and the state-owned BSNL and MTNL.
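The dialling conventions described above (zero plus area code for domestic long distance, "00" plus country code 91 for international) can be expressed as a small helper. This is a hypothetical illustration of the prefix rules only; the function name and the sample Delhi subscriber number are invented for the example.

```python
def format_dialled_number(local_number: str, area_code: str,
                          call_type: str = "local",
                          country_code: str = "91") -> str:
    """Build the digit string to dial under Indian conventions:
       local         -> subscriber number only
       long_distance -> 0 + area code + number (inter-circle)
       international -> 00 + country code + area code + number
    """
    if call_type == "local":
        return local_number
    if call_type == "long_distance":
        return "0" + area_code + local_number
    if call_type == "international":
        return "00" + country_code + area_code + local_number
    raise ValueError(f"unknown call type: {call_type}")

# Calling a Delhi landline (area code 11) from another circle:
print(format_dialled_number("23456789", "11", "long_distance"))  # 01123456789
```

Dialled from abroad, the same number becomes "00" + "91" + "11" + subscriber number, matching the international convention stated above.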
Before the New Telecom Policy was announced in 1999, only the government-owned BSNL and MTNL were allowed to provide land-line phone services through copper wire in India, with MTNL operating in Delhi and Mumbai and BSNL serving all other areas of the country. Due to the rapid growth of the cellular phone industry in India, landlines face stiff competition from cellular operators, with the number of wireline subscribers falling from 37.90 million in December 2008 to 21 million in December 2019. This has forced land-line service providers to become more efficient and improve their quality of service.
In August 1995, Jyoti Basu, then Chief Minister of West Bengal, made the first mobile phone call in India to then Union Telecom Minister Sukhram. Seventeen years later, in 2012, 4G services were launched in Kolkata.
With a subscriber base of more than 1.1514 billion (as of December 2019), the mobile telecommunications system in India is the second-largest in the world; it was opened to private players in the 1990s. GSM comfortably maintained its position as the dominant mobile technology with 80% of the mobile subscriber market, while CDMA's market share appeared to have stabilised at 20% for the time being.
The country is divided into multiple zones, called circles (roughly along state boundaries). The government and several private players run local and long-distance telephone services. Competition, especially after the entry of Reliance Jio, has caused prices to drop across India, where they were already among the cheapest in the world. Rates are expected to fall further with new measures to be taken by the Information Ministry.
In September 2004, the number of mobile phone connections crossed the number of fixed-line connections, and the mobile segment now dwarfs the wireline segment substantially. The mobile subscriber base has grown from 5 million subscribers in 2001 to over 1,179.32 million subscribers as of July 2018. India primarily follows the GSM mobile system, in the 900 MHz band; recent operators also operate in the 1800 MHz band. The dominant players are Vodafone Idea, Airtel, Jio, and BSNL/MTNL. International roaming agreements exist between most operators and many foreign carriers. The government allowed mobile number portability (MNP), which enables mobile telephone users to retain their numbers when changing from one mobile network operator to another. In 2014, Trivandrum became the first city in India to cross the mobile penetration milestone of 100 mobile connections per 100 people. In 2015, three more cities from Kerala, Kollam, Kochi and Kottayam, crossed the 100 mark. In 2017, many other major cities in the country, such as Chennai, Mysore, Mangalore, Bangalore and Hyderabad, also crossed the milestone. Currently Trivandrum tops Indian cities with a mobile penetration of 168.4, followed by Kollam at 143.2 and Kochi at 141.7.
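Penetration figures like those above are simply active connections per 100 residents, which is why they can exceed 100. A minimal sketch (the 1,684,000/1,000,000 split below is a hypothetical round-number example, not a reported statistic):

```python
def mobile_penetration(connections: int, population: int) -> float:
    """Mobile connections per 100 residents. Values above 100 simply
    mean there are more active connections (SIMs) than people."""
    return connections / population * 100.0

# Hypothetical round numbers: 1,684,000 connections among 1,000,000
# residents give a penetration of 168.4, the figure cited for Trivandrum.
print(mobile_penetration(1_684_000, 1_000_000))  # -> 168.4
```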
As of 2016, India has deployed telecom operations in a total of 8 radio frequency bands.
India is divided into 22 telecom circles:
Population statistics are available state-wise only.
West Bengal circle includes Andaman-Nicobar and Sikkim
The history of the Internet in India started with the launch of services by VSNL on 15 August 1995. VSNL was able to add about 10,000 Internet users within six months. However, for the next 10 years the Internet experience in the country remained unattractive, with narrow-band dial-up connections at speeds below 56 kbit/s. In 2004, the government formulated its broadband policy, which defined broadband as "an always-on Internet connection with a download speed of 256 kbit/s or above." From 2005 onward the growth of the broadband sector in the country accelerated, but remained below the growth estimates of the government and related agencies due to resource issues in last-mile access, which was predominantly wired-line technology. This bottleneck was removed in 2010 when the government auctioned 3G spectrum, followed by an equally high-profile auction of 4G spectrum that set the scene for a competitive and invigorated wireless broadband market. Internet access in India is now provided by both public and private companies using a variety of technologies and media, including dial-up (PSTN), xDSL, coaxial cable, Ethernet, FTTH, ISDN, HSDPA (3G), 4G, WiFi and WiMAX, at a wide range of speeds and costs.
According to the Internet and Mobile Association of India (IAMAI), the Internet user base in the country stood at 190 million at the end of June 2013, rising to 378.10 million by January 2018. The compound annual growth rate (CAGR) of broadband during the five-year period between 2005 and 2010 was about 117 per cent.
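A CAGR of 117 per cent means the subscriber base more than doubled every year. The standard formula can be sketched as follows; the 0.2 million and 9.6 million figures below are hypothetical round numbers chosen to reproduce roughly the cited rate, not reported statistics:

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate, returned as a fraction
    (1.17 means 117% growth per year)."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Hypothetical illustration: a base growing roughly 48-fold over five
# years (say 0.2 million to 9.6 million subscribers) corresponds to a
# CAGR of about 117 per cent, the order of the figure cited above.
print(round(cagr(0.2, 9.6, 5) * 100))  # -> 117
```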
There were 204 Internet Service Providers (ISPs) offering broadband services in India as of 31 December 2017. As of January 2018, the top five ISPs in terms of subscriber base were Reliance Jio (168.39 million), Bharti Airtel (75.01 million), Vodafone (54.83 million), Idea Cellular (37.33 million) and BSNL (21.81 million). In 2009, about 37 per cent of users accessed the Internet from cyber cafes, 30 per cent from an office, and 23 per cent from home. However, the number of mobile Internet users increased rapidly from 2009 onward, and there were about 359.80 million mobile users at the end of January 2018, a majority of them on 4G mobile networks.
One of the major issues facing the Internet segment in India is the lower average bandwidth of broadband connections compared to that of developed countries. According to 2007 statistics, the average download speed in India hovered at about 40 KB per second (256 kbit/s), the minimum speed set by TRAI, whereas the international average was 5.6 Mbit/s during the same period. To address this infrastructure issue, the government declared 2007 "the year of broadband". To bring the Indian definition of broadband up to international standards, the government took the aggressive step of proposing a $13 billion national broadband network to connect all cities, towns and villages with a population of more than 500, in two phases targeted for completion by 2012 and 2013. The network was to provide speeds up to 10 Mbit/s in 63 metropolitan areas and 4 Mbit/s in an additional 352 cities. In February 2018, the average fixed-line broadband speed in India was 20.72 Mbit/s, below the global average download speed of 42.71 Mbit/s. In terms of mobile internet speed, India performed quite poorly, with an average speed of 9.01 Mbit/s compared with a global average of 22.16 Mbit/s.
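Line rates are quoted in kilobits per second while download speeds are often shown in kilobytes per second; the conversion is a factor of eight. A minimal sketch (note that at exactly 256 kbit/s the ideal transfer rate is 32 KB/s, so the ~40 KB/s figure above implies measured rates somewhat above TRAI's floor):

```python
def kbit_to_kbyte(kbit_per_s: float) -> float:
    """Convert a line rate in kbit/s to an ideal transfer rate in KB/s
    (8 bits per byte; real throughput is lower due to protocol overhead)."""
    return kbit_per_s / 8.0

print(kbit_to_kbyte(256))  # TRAI's 256 kbit/s broadband floor -> 32.0 KB/s
```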
As of December 2017, according to the Internet and Mobile Association of India, the Internet penetration rate in India was one of the lowest in the world, at only 35% of the population, compared to a global average of over 54.4%. Another issue is the digital divide, with growth biased in favour of urban areas: according to December 2017 statistics, internet penetration in urban India was 64.84%, whereas in rural India it was only 20.26%. Regulators have tried to boost the growth of broadband in rural areas by promoting higher investment in rural infrastructure and establishing subsidised tariffs for rural subscribers under the Universal Service Obligation scheme of the Indian government.
As of May 2014, the Internet was delivered to India mainly by 9 different undersea fibres, including SEA-ME-WE 3, Bay of Bengal Gateway and Europe India Gateway, arriving at 5 different landing points.
In March 2015, the TRAI released a formal consultation paper on "Regulatory Framework for Over-the-top (OTT)" services, seeking comments from the public. The consultation paper was criticised for being one-sided and containing confusing statements. It was condemned by various politicians and internet users. By 18 April 2015, over 800,000 emails had been sent to TRAI demanding net neutrality.
The TRAI on 8 February 2016, notified the Prohibition of Discriminatory Tariffs for Data Services Regulations, 2016 which barred telecom service providers from charging differential rates for data services.
The 2016 Regulation stipulates that:
Television broadcasting began in India in 1959 with "Doordarshan", a state-run medium of communication, and expanded slowly for more than two decades. The policy reforms of the government in the 1990s attracted private initiatives in this sector, and since then, satellite television has increasingly shaped popular culture and Indian society. Still, only the government-owned "Doordarshan" has a licence for terrestrial television broadcasting. Private companies reach the public using satellite channels; both cable television and DTH have obtained a wide subscriber base in India. In 2012, India had about 148 million TV homes, of which 126 million had access to cable and satellite services.
Following the economic reforms in the 1990s, satellite television channels from around the world—BBC, CNN, CNBC, and other private television channels—gained a foothold in the country. There are no regulations controlling the ownership of satellite dish antennas or the operation of cable television systems in India, which in turn has helped fuel impressive growth in viewership. The growth in the number of satellite channels was triggered by corporate business houses such as the Star TV group and Zee TV. Initially restricted to music and entertainment channels, viewership grew, giving rise to several channels in regional languages, especially Hindi. The main news channels available were CNN and BBC World. In the late 1990s, many current affairs and news channels sprouted, becoming immensely popular because of the alternative viewpoint they offered compared to Doordarshan. Notable ones include Aaj Tak (run by the India Today group) and STAR News, CNN-IBN and Times Now, initially run by the NDTV group and their lead anchor, Prannoy Roy (NDTV now has its own channels, NDTV 24x7, NDTV Profit and NDTV India). Over the years, Doordarshan's services have also grown from a single national channel to six national and eleven regional channels. Nonetheless, it has lost its market leadership, despite undergoing many phases of modernisation to contain tough competition from private channels.
Today, television is the most penetrative medium in India, with industry estimates indicating over 554 million TV consumers, 462 million of them with satellite connections, compared to other forms of mass media such as radio or the internet. The Government of India has used the popularity of TV and radio among rural people for the implementation of many social programmes, including mass education. On 16 November 2006, the Government of India released the community radio policy, which allowed agricultural centres, educational institutions and civil society organisations to apply for community-based FM broadcasting licences. Community radio is allowed 100 watts of Effective Radiated Power (ERP) with a maximum tower height of 30 metres. The licence is valid for five years and one organisation can only get one licence, which is non-transferable and is to be used for community development purposes.
As of June 2018, there are 328 private FM radio stations in India. Apart from the private FM radio stations, All India Radio, the national public radio broadcaster of India, runs multiple radio channels. AIR's service comprises 420 stations located across the country, reaching nearly 92% of the country's area and 99.19% of the total population. AIR originates programming in 23 languages and 179 dialects.
Historically, the role of telecommunication has evolved from plain information exchange to a multi-service field, with "Value Added Services (VAS)" integrated with various discrete networks like the PSTN, PLMN and Internet backbone. However, decreasing average revenue per user and increasing demand for VAS have become compelling reasons for service providers to consider converging these parallel networks into a single core network with service layers separated from the network layer. Next-generation networking is such a convergence concept, which according to ITU-T is:
Access network: The user can connect to the IP core of NGN in various ways, most of which use the standard Internet Protocol (IP). User terminals such as mobile phones, personal digital assistants (PDAs) and computers can register directly on the NGN core, even when they are roaming in another network or country. The only requirement is that they can use IP and the Session Initiation Protocol (SIP). Fixed access (e.g., digital subscriber line (DSL), cable modems, Ethernet), mobile access (e.g., W-CDMA, CDMA2000, GSM, GPRS) and wireless access (e.g., WLAN, WiMAX) are all supported. Other phone systems, like the plain old telephone service and non-compatible VoIP systems, are supported through gateways. With the deployment of the NGN, users may subscribe to multiple simultaneous access providers offering telephony, internet or entertainment services, giving end-users virtually unlimited options to choose between service providers in the NGN environment.
The hyper-competition in the telecom market, which was effectively caused by the introduction of the "Universal Access Service (UAS)" licence in 2003, became much tougher after the competitive 3G and 4G auctions. About 670,000 route-kilometres (419,000 miles) of optical fibre have been laid in India by the major operators, including in the financially nonviable rural areas, and the process continues. Keeping in mind the viability of providing services in rural areas, the government of India also took a proactive role in promoting NGN implementation in the country; an expert committee called "NGN eCO" was constituted to deliberate on the licensing, interconnection and Quality of Service (QoS) issues related to NGN, and it submitted its report on 24 August 2007. Telecom operators found the NGN model advantageous, but the huge investment required has prompted them to adopt a multi-phase migration, and they have already started the migration to NGN with the implementation of an IP-based core network.
LIRNEasia's Telecommunications Regulatory Environment (TRE) index, which summarises stakeholders' perception of certain TRE dimensions, provides insight into how conducive the environment is for further development and progress. The most recent survey was conducted in July 2008 in eight Asian countries: Bangladesh, India, Indonesia, Sri Lanka, the Maldives, Pakistan, Thailand, and the Philippines. The tool measured seven dimensions: i) market entry; ii) access to scarce resources; iii) interconnection; iv) tariff regulation; v) anti-competitive practices; vi) universal service; and vii) quality of service, for the fixed, mobile and broadband sectors.
The results for India indicate that stakeholders perceive the TRE to be most conducive for the mobile sector, followed by fixed and then broadband. Other than for access to scarce resources, the fixed sector lags behind the mobile sector. The fixed and mobile sectors have the highest scores for tariff regulation. Market entry also scores well for the mobile sector, as competition is well entrenched in most of the circles, with 4–5 mobile service providers each. The broadband sector has the lowest score in the aggregate. The low broadband penetration of a mere 3.87 million against the policy objective of 9 million at the end of 2007 clearly indicates that the regulatory environment is not very conducive.
In 2013 the home ministry stated that legislation must ensure that law enforcement agencies are empowered to intercept communications.
In India, electromagnetic spectrum, being a scarce resource for wireless communication, is auctioned by the Government of India to telecom companies for use. As an example of its value, in 2010, 20 MHz of 3G spectrum was auctioned for . This part of the spectrum is allocated for terrestrial communication (cell phones). However, in January 2005, Antrix Corporation (commercial arm of ISRO) signed an agreement with Devas Multimedia (a private company formed by former ISRO employees and venture capitalists from USA) for lease of S band transponders (amounting to 70 MHz of spectrum) on two ISRO satellites (GSAT 6 and GSAT 6A) for a price of , to be paid over a period of 12 years. The spectrum used in these satellites (2500 MHz and above) is allocated by the International Telecommunication Union specifically for satellite-based communication in India. Hypothetically, if the spectrum allocation is changed for utilisation for terrestrial transmission and if this 70 MHz of spectrum were sold at the 2010 auction price of the 3G spectrum, its value would have been over . This was a hypothetical situation. However, the Comptroller and Auditor General of India considered this hypothetical situation and estimated the difference between the prices as a loss to the Indian Government.
There were lapses on implementing Government of India procedures. Antrix/ISRO had allocated the capacity of the above two satellites to Devas Multimedia on an exclusive basis, while rules said it should always be non-exclusive. The Cabinet was misinformed in November 2005 that several service providers were interested in using satellite capacity, while the Devas deal was already signed. Also, the Space Commission was kept in the dark while taking approval for the second satellite (its cost was diluted so that Cabinet approval was not needed). ISRO committed to spending of public money on building, launching, and operating two satellites that were leased out for Devas.
In late 2009, some ISRO insiders exposed information about the Devas-Antrix deal, and the ensuing investigations resulted in the deal being annulled. G. Madhavan Nair (ISRO Chairperson when the agreement was signed) was barred from holding any post under the Department of Space. Some former scientists were found guilty of "acts of commission" or "acts of omission". Devas and Deutsche Telekom demanded US$2 billion and US$1 billion, respectively, in damages.
The Central Bureau of Investigation concluded its investigation into the Antrix-Devas scam and on 18 March 2015 registered a case under Section 120-B and Section 420 of the IPC and Section 13(2) read with 13(1)(d) of the PC Act, 1988, against the then Executive Director of Antrix Corporation, two officials of a USA-based company, a Bangalore-based private multimedia company, and other unknown officials of Antrix Corporation or the Department of Space.
Devas Multimedia started arbitration proceedings against Antrix in June 2011. In September 2015, the International Court of Arbitration of the International Chamber of Commerce ruled in favour of Devas, and directed Antrix to pay US$672 million (Rs 44.35 billion) in damages to Devas. Antrix opposed the Devas plea for tribunal award in the Delhi High Court.
The adjusted gross revenue in the telecom service sector was in 2017 as against in 2016, registering a negative growth of 18.87%. The major contributions to this revenue are as follows (in INR crores):
Transport in India
Transport in India consists of transport by land, water and air. Public transport is the primary mode of road transport for most Indian citizens, and India's public transport systems are among the most heavily used in the world.
India's road network is the second-largest and one of the busiest in the world, transporting 8.225 billion passengers and over 980 million tonnes of freight annually, as of 2015. India's rail network is the fourth-largest and second-busiest in the world, transporting 8.44 billion passengers and 1.23 billion tonnes of freight annually. Aviation in India, broadly divided into military and civil aviation, is the fastest-growing aviation market in the world (IATA data), and Bangalore, with a 65% national share, is the largest aviation manufacturing hub of India. India's waterways network, in the form of rivers, canals, backwaters and creeks, is the ninth-largest waterway network in the world. Freight transport by waterways is highly under-utilised in India, with the total cargo moved (in tonne kilometres) by inland waterways being 0.1 percent of the total inland traffic in India. Roads in India are maintained by the National Highways Authority of India (NHAI).
In total, about 21 percent of households have two-wheelers, whereas 4.7 percent of households in India have cars or vans, as per the 2011 Census. The automobile industry in India is currently growing rapidly, with an annual production of over 4.6 million vehicles and an annual growth rate of 10.5%; vehicle volume is expected to rise greatly in the future.
Walking was a major mode of transport in ancient times, and people used to cover long distances on foot or by bullock cart. For instance, Adi Sankaracharya travelled all over India from Kalady near Kochi. Walking still constitutes an important mode of transport in rural areas. In the city of Mumbai, to further improve transit conditions for pedestrians, the Mumbai Metropolitan Region Development Authority has commenced the construction of more than 50 skywalks as part of the Mumbai Skywalk project. The Dakshineswar skywalk has also come up in West Bengal.
Palanquins, also known as "palkis", were a luxurious method of transport used primarily by the rich and noblemen for travelling, and also to carry a deity (idol) of a god. Many temples have sculptures of gods being carried in "palkis". Modern use of the palanquin is limited to Indian weddings, pilgrimages and carrying idols of gods.
Bullock carts have traditionally been used for transport, especially in rural India. The arrival of the British brought drastic improvements to the horse carriages that had been used for transport since early days. Today, they are used in smaller towns and are referred to as tangas or "buggies". The Victorias of Mumbai, which were used for tourist purposes, are now banned, and plans are afoot to replace them with Victoria-styled electric carriages. Horse carriages are now rarely found in Indian cities, barring tourist areas and hill stations. In recent years, cities have banned the movement of slow-moving vehicles on main roads.
Bicycles, or cycles, have ownership rates ranging from around 30% to 75% at the state level. Along with walking, cycling accounts for 50 to 80% of commuter trips for those in the informal sector in urban areas. Recent developments suggest that bicycle riding is quickly becoming popular in the metropolitan cities of India, and government development authorities across India now encourage the setup and use of separate bicycle lanes alongside roads to combat pollution and ease traffic congestion.
Human-pulled rickshaws are nowadays rarely found in the cities and villages of the country. Many local governments have proposed banning these rickshaws, describing them as "inhuman", but in practice the rickshaws have not yet been banned. The Government of West Bengal proposed a ban in 2005. Though a bill aiming to address this issue, termed the Calcutta Hackney Carriage Bill, was passed by the West Bengal Assembly in 2006, it has not been implemented yet. The Government of West Bengal is working on an amendment to this bill to close the loopholes that were exposed when the Hand-pulled Rickshaw Owners' Association filed a petition against the bill.
Cycle rickshaws were introduced in India in the 1940s. They are large tricycles in which two people sit on an elevated seat at the back while a person pedals from the front. In the late 2000s, they were banned in several cities for causing traffic congestion. The Delhi Police recently submitted an affidavit against the plying of cycle rickshaws to ease traffic congestion in the city, but it was dismissed by the Delhi High Court. In addition, environmentalists have supported the retention of cycle rickshaws as a non-polluting mode of transport.
As per 2017 estimates, the total road length in India is , making the Indian road network the second-largest in the world after the United States. At 0.66 km of highway per square kilometre of land, the density of India's highway network is higher than that of the United States (0.65) and far higher than that of China (0.16) or Brazil (0.20).
India has a network of National Highways connecting all major cities and state capitals, forming the economic backbone of the country. As of 2013, India has a total of of National Highways, of which are classified as expressways. Although India has a large network of four-or-more-lane highways of international quality, those without access control (entry/exit control) are not classed as expressways but simply as highways.
As per the National Highways Authority of India, about 65% of freight and 80% of passenger traffic is carried by the roads. The National Highways carry about 40% of total road traffic, though only about 2% of the road network is covered by these roads. Average growth of the number of vehicles has been around 10.16% per annum over recent years.
Under National Highways Development Project (NHDP), work is under progress to equip national highways with at least four lanes; also there is a plan to convert some stretches of these roads to six lanes. All national highways are metalled, but very few are constructed of concrete, the most notable being the Mumbai-Pune Expressway. In recent years construction has commenced on a nationwide system of multi-lane highways, including the Golden Quadrilateral and North-South and East-West Corridors which link the largest cities in India.
In 2000, around 40% of villages in India lacked access to all-weather roads and remained isolated during the monsoon season. To improve rural connectivity, "Pradhan Mantri Gram Sadak Yojana" (Prime Minister's Rural Road Program), a project funded by the Central Government with the help of World Bank, was launched in 2000 to build all-weather roads to connect all habitations with a population of 500 or above (250 or above for hilly areas).
Buses are an important means of public transport in India. Due to this social significance, urban bus transport is often owned and operated by public agencies, and most state governments operate bus services through a State Road Transport Corporation. These corporations have proven extremely useful in connecting villages and towns across the country. Alongside the public companies there are many private bus fleets: as of 2012, there were 131,800 publicly owned buses in India, but 1,544,700 buses owned by private companies.
However, the share of buses is negligible in most Indian cities as compared to personalised vehicles, and two-wheelers and cars account for more than 80 percent of the vehicle population in most large cities.
Bus rapid transit systems (BRTS) exist in several cities of the country. Buses take up over 90% of public transport in Indian cities and serve as an important mode of transport. Services are mostly run by state-government-owned transport corporations. In the 1990s, all government state transport corporations introduced various facilities like low-floor buses for the disabled and air-conditioned buses to attract private car owners and help decongest roads. The Ahmedabad Bus Rapid Transit System won the prestigious Sustainable Transport Award from the Transportation Research Board in Washington in 2010.
Rainbow BRTS in Pune is the first BRTS system in the country. Mumbai introduced air-conditioned buses in 1998. Bangalore was the first city in India to introduce Volvo B7RLE intra-city buses, in January 2005.
Bangalore is the first Indian city to have an air-conditioned bus stop, located near Cubbon Park. It was built by Airtel. The city of Chennai houses one of Asia's largest bus terminus, the Chennai Mofussil Bus Terminus.
Motorised two-wheelers like scooters, motorcycles and mopeds are a very popular mode of transport due to their fuel efficiency and ease of use on congested roads and streets. The number of two-wheelers sold is several times that of cars; there were 47.5 million powered two-wheelers in India in 2003 compared with just 8.6 million cars.
Manufacture of motorcycles in India started when Royal Enfield began assembly in its plant in Chennai in 1948. Royal Enfield, an iconic brand name in the country, manufactures different variants of the British Bullet motorcycle which is a classic motorcycle that is still in production. Hero MotoCorp (formerly Hero Honda), Honda, Bajaj Auto, Yamaha, TVS Motors and Mahindra 2 Wheelers are the largest two-wheeler companies in terms of market-share.
Manufacture of scooters in India started when "Automobile Products of India (API)", set up at Mumbai and incorporated in 1949, began assembling Innocenti-built Lambretta scooters in India. It eventually acquired a licence for the Li150 series model, which it put into full-fledged production from the early sixties onwards. In 1972, "Scooters India Ltd (SIL)", a state-run enterprise based in Lucknow, Uttar Pradesh, bought the entire manufacturing rights of the last Innocenti Lambretta model. API has infrastructural facilities at Mumbai, Aurangabad, and Chennai but has been non-operational since 2002. SIL stopped producing scooters in 1998.
Motorcycles and scooters can be rented in many cities; Wicked Ride, Metro bikes and many other companies are working with state governments to solve last-mile connectivity problems alongside mass transit solutions. Wearing protective headgear is mandatory for both the rider and the pillion-rider in most cities.
Private automobiles account for 30% of the total transport demand in urban areas of India. An average of 963 new private vehicles are registered every day in Delhi alone. The number of automobiles produced in India rose from 6.3 million in 2002–03 to 11 million (11.2 million) in 2008–09. There is substantial variation among different cities and states in terms of dependence on private cars: Bangalore, Chennai, Delhi and Kolkata have 185, 127, 157 and 140 cars per 1000 people respectively. This reflects different levels of urban density and varied qualities of public transport infrastructure. Nationwide, India still has a very low rate of car ownership. When comparing car ownership between BRIC developing countries, it is on a par with China, and exceeded by Brazil and Russia.
Compact cars, especially hatchbacks, predominate due to affordability, fuel efficiency, congestion, and lack of parking space in most cities. Chennai is known as the "Detroit of India" for its automobile industry. Maruti, Hyundai and Tata Motors are the most popular brands in order of their market share. The Ambassador once had a monopoly but is now an icon of pre-liberalisation India, and is still used by taxi companies. The Maruti 800, launched in 1984, created the first revolution in the Indian auto sector because of its low price. It had the highest market share until 2004, when it was overtaken by other low-cost models: the Alto and the Wagon R from Maruti, the Indica from Tata Motors and the Santro from Hyundai. Over the 20-year period since its introduction, about 2.4 million units of the Maruti 800 were sold. However, with the launch of the Tata Nano, the least expensive production car in the world, the Maruti 800 lost its popularity.
India is also known for a variety of indigenous vehicles made in villages out of simple motors and vehicle spare parts. A few of these innovations are the "Jugaad", "Maruta", "Chhakda", "Peter Rehda" and the "Fame".
In the city of Bangalore, Radio One and the Bangalore Traffic Police launched a carpooling drive involving celebrities such as Robin Uthappa and Rahul Dravid, encouraging the public to carpool. The initiative got a good response, and by the end of May 2009, 10,000 people were said to have carpooled in the city.
There have been efforts to improve the energy efficiency of transport systems in Indian cities, including by introducing performance standards for private automobiles and by banning particularly polluting older cars. The city of Kolkata, for example, passed a law in 2009/10 phasing out vehicles over 15 years old with the purpose of reducing air pollution in the city. However, the distributional effects were mixed. On the one hand, poorer urban residents are more likely to see public health improvements from better air quality, since they are more likely than richer urban residents to live in polluted areas and work outdoors. On the other hand, drivers of such vehicles lost their livelihoods as a result of this environmental regulation.
The first utility vehicle in India was manufactured by Mahindra. It was a copy of the original Jeep and was manufactured under licence. The vehicle was an instant hit and made Mahindra one of the top companies in India. The Indian Army and police extensively use Mahindra vehicles along with Maruti Gypsys for transporting personnel and equipment.
Tata Motors, the automobile manufacturing arm of the Tata Group, launched its first utility vehicle, the Tata Sumo, in 1994. The Sumo, owing to its then-modern design, captured a 31% share of the market within two years. Until recently, the Tempo Trax from Force Motors dominated rural areas. Sport utility vehicles now form a sizeable part of the passenger vehicle market. Models from Tata, Honda, Hyundai, Ford, Chevrolet and other brands are available.
Most of the taxicabs in Kolkata and Mumbai are either Premier Padmini or Hindustan Ambassador cars; in other cities, all modern cars are available. With app-based taxi services like Uber coming to India, as well as homegrown Indian app-based services like Ola coming to the fore, taxicabs now include sedans, SUVs and even motorcycle taxis. Depending on the city or state, taxis can either be hailed or hired from taxi stands. In cities such as Bangalore, Chennai, Hyderabad and Ahmedabad, taxis need to be hired over the phone, whereas in cities like Kolkata and Mumbai, taxis can be hailed on the street. According to Government of India regulations, all taxis are required to have a fare meter installed. There are additional surcharges for luggage and late-night rides, and toll taxes are paid by the passenger. Since 2006, radio taxis have become increasingly popular with the public for reasons of safety and convenience.
In cities and localities where taxis are expensive or do not ply at the government- or municipality-regulated fares, people use share taxis. These are normal taxis which carry one or more passengers travelling to destinations either on one route to the final destination, or near the final destination. The passengers are charged according to the number of people with different destinations. The city of Mumbai will soon be the first city in India to have an "in-taxi" magazine, titled "MumBaee", which will be issued to taxis that are part of the Mumbai Taximen's Union. The magazine debuted on 13 July 2009. In Kolkata, many "no refusal" taxis, painted white and blue, are available.
An auto is a three-wheeler vehicle for hire that does not have doors and is generally characterised by a small cabin for the driver in the front and a seat for passengers in the rear. Generally it is painted yellow, green or black and has a black, yellow or green canopy on top, but designs vary considerably from place to place. The colour of the autorickshaw is also determined by the fuel that powers it: for example, Agartala, Ahmedabad, Mumbai, Pune and Delhi have green or black autos indicating the use of compressed natural gas (CNG), whereas Kolkata, Bangalore and Hyderabad have green autos indicating the use of LPG.
In Mumbai and other metropolitan cities, 'autos' or 'rickshaws', as they are popularly known, have regulated metered fares. A recent law prohibits auto rickshaw drivers from charging more than the specified fare, or charging night fare before midnight, and also prohibits the driver from refusing to go to a particular location. Mumbai and Kolkata are also the only two cities which prohibit auto rickshaws from entering a certain part of the city, in these cases South Mumbai and certain parts of downtown Kolkata. However, in cities like Chennai, it is common to see autorickshaw drivers demand more than the specified fare and refuse to use the fare meter.
Airports and railway stations at many cities such as Howrah, Chennai and Bangalore provide a facility of prepaid auto booths, where the passenger pays a fixed fare as set by the authorities for various locations.
The electric rickshaw is a popular new means of transport, rapidly growing in number in India due to its low initial and running costs and other economic and environmental benefits. E-rickshaws have fibreglass or metal bodies and are powered by a BLDC electric motor with a maximum power of 2000 W and a speed of 25 km/h.
Country-wide rail services in India are provided by the state-run Indian Railways under the supervision of the Ministry of Railways. IR is divided into eighteen zones, including the Kolkata Metro Railway. These are further sub-divided into sixty-seven divisions, each with a divisional headquarters.
The railway network travels across the country, covering more than 7,321 stations over a total route length of more than and a track length of about as of March 2019. About or 50.90% of the route-kilometres were electrified as of March 2019. IR provides an important mode of transport in India, transporting 23.1 million passengers and 3.3 million tons of freight daily as of March 2019. IR is the world's eighth-largest employer, with 1.227 million employees as of March 2019. In terms of rolling stock, IR owns over 289,185 freight wagons, 74,003 coaches and 12,147 locomotives as of March 2019. It also owns locomotive and coach production facilities. It operates both long-distance and suburban rail systems.
The IR runs a number of special types of services which are given higher priority. The fastest train at present is the Vande Bharat Express, with operational speeds of up to 180 km/h, though the fastest service is the Gatimaan Express, with an operational speed of and average speed of , since the Vande Bharat Express is capped at 120 km/h for safety reasons. The Rajdhani trains, introduced in 1969, provide connectivity between the national capital, Delhi, and the capitals of the states. The Shatabdi Express, on the other hand, provides connectivity between centres of tourism, pilgrimage or business. The Shatabdi Express trains run over short to medium distances and do not have sleepers, while the Rajdhani Expresses run over longer distances and have only sleeping accommodation. Both series of trains have a maximum permissible speed of 110 to 140 km/h (68 to 87 mph) but an average speed of less than 100 km/h.
Besides, the IR also operates a number of luxury trains which cater to various tourist circuits. For instance, the Palace on Wheels serves the Rajasthan circuit and The Golden Chariot serves the Karnataka and Goa circuits. There are two UNESCO World Heritage Sites on IR, the Chhatrapati Shivaji Maharaj Terminus and the Mountain railways of India. The latter consists of three separate railway lines located in different parts of India, the Darjeeling Himalayan Railway, a narrow gauge railway in Lesser Himalayas in West Bengal, the Nilgiri Mountain Railway, a rack railway in the Nilgiri Hills in Tamil Nadu and the Kalka-Shimla Railway, a narrow gauge railway in the Siwalik Hills in Himachal Pradesh.
In the freight segment, IR ferries various commodities and fuels in the industrial, consumer, and agricultural segments across the length and breadth of India. IR has historically subsidised the passenger segment with income from the freight business. As a result, freight services are unable to compete with other modes of transport on both cost and speed of delivery, leading to continuous erosion of market share. To counter this downward trend, IR has started new initiatives in the freight segment, including upgrading existing goods sheds, attracting private capital to build multi-commodity multi-modal logistics terminals, changing container sizes, operating time-tabled freight trains, and tweaking the freight pricing/product mix.
In 1999, the Konkan Railway Corporation introduced the Roll on Roll off (RORO) service, a unique road-rail synergy system, on the section between Kolad in Maharashtra and Verna in Goa, which was extended up to Surathkal in Karnataka in 2004. The RORO service, the first of its kind in India, allowed trucks to be transported on flatbed trailers. It was highly popular, carrying about trucks and bringing in about 740 million worth of earnings to the corporation until 2007.
Perhaps the game changer for IR in the freight segment is the new dedicated freight corridors that are expected to be completed by 2020. When fully implemented, the new corridors, spanning around 3300 km, could support the hauling of trains up to 1.5 km in length with 32.5-ton axle loads at speeds of . They will also free up capacity on dense passenger routes and allow IR to run more trains at higher speeds. Additional corridors are being planned to augment the freight infrastructure in the country.
In many Indian metropolitan regions, rail is the more efficient and affordable mode of public transport for the daily commute. Examples include long-established local or suburban rail services such as those in Mumbai, the century-old tram service in Kolkata, the more recent metro service in Delhi, and the monorail feeder service in Mumbai.
The Mumbai Suburban Railway, the first rail system in India, began services in Mumbai in 1853; it transports 6.3 million passengers daily and has the highest passenger density in the world. The Kolkata Suburban Railway was established in Kolkata in 1854.
The operational suburban rail systems in India are the Mumbai Suburban Railway, Kolkata Suburban Railway, Lucknow-Kanpur Suburban Railway, Chennai Suburban Railway, Delhi Suburban Railway, Pune Suburban Railway, Hyderabad Multi-Modal Transport System, Barabanki-Lucknow Suburban Railway and the Karwar railway division.
Other planned systems are Bengaluru Commuter Rail, Ahmedabad Suburban Railway and Coimbatore Suburban Railway.
The first modern rapid transit in India is the Kolkata Metro which started its operations in 1984 as the 17th Zone of the Indian Railways. The Delhi Metro in New Delhi is India's second conventional metro and began operations in 2002. The Namma Metro in Bangalore is India's third operational rapid transit and began operations in 2011.
The operational systems are Kolkata Metro, Delhi Metro, Namma Metro, Rapid Metro, Mumbai Metro, Jaipur Metro, Chennai Metro, Kochi Metro, Lucknow Metro, Nagpur Metro and Hyderabad Metro.
The planned systems are Noida Metro, Ghaziabad Metro, Navi Mumbai Metro, Nagpur Metro, Metro-Link Express for Gandhinagar and Ahmedabad, Varanasi Metro, Kanpur Metro, Bareilly Metro, Pune Metro, Vijayawada Metro, Patna Metro, Meerut Metro, Guwahati Metro, Chandigarh Metro, Bhopal Metro, Kozhikode Light Metro, Indore Metro, Thiruvananthapuram Light Metro, Agra Metro, Coimbatore Metro, Visakhapatnam Metro, Surat Metro, Srinagar Metro, Greater Gwalior Metro, Jabalpur Metro, Greater Nashik Metro and Bengaluru Metro.
Currently, rapid transit is under construction or in planning in several major cities of India, with new systems expected to open in the coming years.
Monorail is generally considered a feeder system for the metro trains in India. In 2004, a monorail was first proposed for Kolkata, but the idea was later put on hold due to lack of funds and infeasibility. The Mumbai Monorail, which started in 2014, is the first operational monorail network in India (excluding the Skybus Metro) since the Patiala State Monorail Trainways closed in 1927.
Other planned systems are Chennai Monorail, Kolkata Monorail, Allahabad Monorail, Bengaluru Monorail, Delhi Monorail, Indore Monorail, Kanpur Monorail, Navi Mumbai Monorail, Patna Monorail, Pune Monorail, Ahmedabad Monorail, Aizawl Monorail, Bhubaneswar Monorail, Jodhpur Monorail, Kota Monorail, Nagpur Monorail and Nashik Monorail.
In addition to trains, trams were introduced in many cities in the late 19th century, though almost all of these have been phased out. The Kolkata tram network is currently the only tram system in the country. The Calcutta Tramways Company is in the process of upgrading the existing tramway network at a cost of .
Rail links between India and neighbouring countries are not well-developed. Two trains operate to Pakistan—the "Samjhauta Express" between Delhi and Lahore, and the "Thar Express" between Jodhpur and Karachi. Bangladesh is connected by a biweekly train, the "Maitree Express" that runs from Kolkata to Dhaka and a weekly train, the "Bandhan Express" that runs from Kolkata to Khulna. Two rail links to Nepal exist—passenger services between Jainagar and Bijalpura, and freight services between Raxaul and Birganj.
No rail link exists with Myanmar, but a railway line is to be built from Jiribam (in Manipur) to Tamu via Imphal and Moreh. The construction of this missing link, as per the feasibility study conducted by the Ministry of External Affairs through RITES Ltd, is estimated to cost . An 18 km railway link with Bhutan is being constructed from Hashimara in West Bengal to Toribari in Bhutan. No rail link exists with either China or Sri Lanka.
India does not have any railways classified as high-speed rail (HSR), which have operational speeds in excess of .
Prior to the 2014 general election, the two major national parties (Bharatiya Janata Party and Indian National Congress) pledged to introduce high-speed rail. The INC pledged to connect all of India's million-plus cities by high-speed rail, whereas the BJP, which won the election, promised to build the Diamond Quadrilateral project, which would connect the cities of Chennai, Delhi, Kolkata, and Mumbai via high-speed rail. This project was approved as a priority for the new government in the incoming prime minister's speech. Construction of one kilometre of high-speed railway track will cost – 10–14 times as much as the construction of standard railway.
The Indian government approved the choice of Japan to build India's first high-speed railway. The planned line would run some between Mumbai and the western city of Ahmedabad, at a top speed of . Under the proposal, construction is expected to begin in 2017 and be completed in 2023. It would cost about and be financed by a low-interest loan from Japan. India will use wheel-based 300 km/h HSR technology, instead of the newer 600 km/h maglev technology Japan uses on the Chūō Shinkansen. India is expected to have its HSR line operational from 2025 onwards, once the safety checks are completed.
Like monorail, light rail is also considered a feeder system for the metro systems. The planned systems are the Kolkata Light Rail Transit and Delhi Light Rail Transit.
Directorate General of Civil Aviation is the national regulatory body for the aviation industry. It is controlled by the Ministry of Civil Aviation. The ministry also controls aviation related autonomous organisations like the Airports Authority of India (AAI), Bureau of Civil Aviation Security (BCAS), Indira Gandhi Rashtriya Uran Akademi and Public Sector Undertakings including Air India, Pawan Hans Helicopters Limited and Hindustan Aeronautics Limited.
Air India is India's national flag carrier after merging with Indian (airline) in 2011 and plays a major role in connecting India with the rest of the world. IndiGo, Air India, SpiceJet and GoAir are the major carriers in order of their market share. These airlines connect more than 80 cities across India and also operate overseas routes following the liberalisation of Indian aviation. Several other foreign airlines connect Indian cities with other major cities across the globe. However, a large section of the country's air transport potential remains untapped, even though the Mumbai-Delhi air corridor was ranked the world's tenth-busiest route by Amadeus in 2012.
While there are 346 civilian airfields in India – 253 with paved runways and 93 with unpaved runways, only 132 were classified as "airports" as of November 2014. Of these, Indira Gandhi International Airport in Delhi is the busiest in the country. The operations of the major airports in India have been privatised over the past five years and this has resulted in better equipped and cleaner airports. The terminals have either been refurbished or expanded.
India also has 33 "ghost airports," which were built in an effort to make air travel more accessible for those in remote regions but are now non-operational due to a lack of demand. The Jaisalmer Airport in Rajasthan, for example, was completed in 2013 and was expected to host 300,000 passengers a year but has yet to see any commercial flights take off. Despite the number of non-operational airports, India is currently planning on constructing another 200 "low-cost" airports over the next 20 years.
As of 2013, there are 45 heliports in India. India also has the world's highest helipad at the Siachen Glacier at a height of 6400 m (21,000 ft) above mean sea level.
Pawan Hans Helicopters Limited is a public sector company that provides helicopter services to ONGC at its off-shore locations, and also to various state governments in India, particularly in North-East India.
India has a coastline of , and thus ports are the main centres of trade.
India also has an extensive network of inland waterways.
In India about 96% of the foreign trade by quantity and 70% by value takes place through the ports. Mumbai Port & JNPT(Navi Mumbai) handles 70% of maritime trade in India. There are twelve major ports: Navi Mumbai, Mumbai, Kochi, Kolkata (including Haldia), Paradip, Visakhapatnam, Ennore, Chennai, Thoothukudi, New Mangaluru, Mormugao and Kandla. Other than these, there are 187 minor and intermediate ports, 43 of which handle cargo.
Maritime transportation in India is managed by the Shipping Corporation of India, a government-owned company that also manages offshore and other marine transport infrastructure in the country. It owns and operates about 35% of Indian tonnage and operates in practically all areas of the shipping business, servicing both national and international trades. Tamil Nadu is the only Indian state with three major ports: Ennore, Chennai and Tuticorin.
It has a fleet of 79 ships of 2,750,000 GT (4.8 million DWT) and also manages 53 research, survey and support vessels of 120,000 GT (60,000 DWT) on behalf of various government departments and other organisations. Personnel are trained at the Maritime Training Institute in Mumbai, a branch of the World Maritime University, which was set up in 1987. The Corporation also operates in Malta and Iran through joint ventures.
The distinction between major and minor ports is not based on the amount of cargo handled. The major ports are managed by port trusts which are regulated by the central government. They come under the purview of the Major Port Trusts Act, 1963. The minor ports are regulated by the respective state governments and many of these ports are private ports or captive ports. The total amount of traffic handled at the major ports in 2005–2006 was 382.33 Mt.
India has an extensive network of inland waterways in the form of rivers, canals, backwaters and creeks. The total navigable length is , out of which about of river and of canals can be used by mechanised crafts. Freight transport by waterways is highly underutilised in India compared to other large countries. The total cargo moved by inland waterways is just 0.15% of the total inland traffic in India, compared to the corresponding figures of 20% for Germany and 32% for Bangladesh.
Cargo that is transported in an organised manner is confined to a few waterways in Goa, West Bengal, Assam and Kerala. The Inland Waterways Authority of India (IWAI) is the statutory authority in charge of the waterways in India. It does the function of building the necessary infrastructure in these waterways, surveying the economic feasibility of new projects and also administration and regulation. The following waterways have been declared as National Waterways:
Oil and gas industry in India imports 82% of its oil needs and aims to bring that down to 67% by 2022 by replacing it with local exploration, renewable energy and indigenous ethanol fuel (c. Jan 2018).
India's ranking on the World Bank's Global Logistics Performance Index moved up to 35th place in 2016 from 54th in 2014. Government strategy aims to raise the share of global trade in India's GDP (US$2.7 trillion in FY 2017–18) to 40%, including half of it (20% of GDP) from exports (c. Jan 2018). The cost of logistics in India is 14% of GDP, which is higher than in developed nations, and government reforms aim to bring it down to 10% of GDP by 2022 (c. Jan 2018). The Ministry of Commerce and Industry has created a new dedicated centralised Logistics division, in collaboration with Singapore and Japan, to handle logistics, which was earlier handled by several different ministries, such as railways, roads, shipping and aviation. To boost exports, each state will have an exports and logistics policy, and nodal officers will be appointed at the district level (c. Jan 2018). There are 64 transactions and 37 government agencies in the end-to-end production-to-export process. To further improve the ranking, the speed of logistics, and the ease of doing business, and to reduce the cost of logistics, India is creating a "common online integrated logistics e-marketplace portal" that will cover all transactions in production and export, connecting buyers with logistics service providers and government agencies such as the customs department's Icegate system, Port Community Systems, sea and air port terminals, shipping lines, railways, etc. (c. Jan 2018).
As part of the US$125 billion port-led development project Sagarmala, the government will define the regulatory framework for Indian logistics operational standards by benchmarking India's 300 dry-port logistics parks (inland container depots or ICDs) against the top 10 international best-practice nations, to boost exports, remove supply chain bottlenecks, reduce transaction costs, optimise the logistics mix, and set up new hub-and-spoke dry ports (c. Jan 2018). To reduce logistics costs by 10% and CO2 emissions by 12%, the government is also developing 35 new "Multimodal Logistics Parks" (MMLPs) on 36 ring roads, which will handle 50% of the freight moved in India. Land has been earmarked and a pre-feasibility study is underway for 6 of these MMLPs (c. May 2017).
The Confederation of Indian Industry (CII) and the government will organise an annual National Logistics convention. Major supply chain solution providers include Container Corporation of India and Transport Corporation of India, and Logistics Management India is one of the industry publications.
In 1998, the Supreme Court of India published a Directive that specified the date of April 2001 as deadline to replace or convert all buses, three-wheelers and taxis in Delhi to compressed natural gas.
The Karnataka State Road Transport Corporation was the first State Transport Undertaking in India to utilise bio-fuels and ethanol-blended fuels. KSRTC took an initiative to do research in alternative fuel forms by experimenting with various alternatives— blending diesel with biofuels such as honge, palm, sunflower, groundnut, coconut and sesame. In 2009, the corporation decided to promote the use of biofuel buses.
In 2017, the government announced that by 2030, only electric vehicles would be sold in the country. It also announced that by 2022 all trains would be electric trains.
In March 2020, India's central government suspended all passenger rail, metro and bus services due to COVID-19.
Foreign relations of India
The Ministry of External Affairs of India (MEA), also known as the Foreign Ministry, is the government agency responsible for the conduct of foreign relations of India. With the world's third largest military expenditure, largest armed force, fifth largest economy by nominal rates and third largest economy in terms of purchasing power parity, India is a regional power, a nuclear power, a nascent global power and a potential superpower. India has a growing international influence and a prominent voice in global affairs.
India faces serious economic and social issues as a result of centuries of economic exploitation by colonial powers. However, since gaining independence from Britain in 1947, India has become a newly industrialised country, has a history of collaboration with several countries, is a component of the BRICS and a major part of the developing world. India was one of the founding members of several international organisations—the United Nations, the Asian Development Bank, the New Development Bank (BRICS Bank), and the G-20—and the founder of the Non-Aligned Movement.
India has also played an important and influential role in other international organisations like East Asia Summit, World Trade Organization, International Monetary Fund (IMF), G8+5 and IBSA Dialogue Forum. India is also a member of the Asian Infrastructure Investment Bank and the Shanghai Cooperation Organisation.
Regionally, India is a part of SAARC and BIMSTEC. India has taken part in several UN peacekeeping missions and in 2007, it was the second-largest troop contributor to the United Nations. India is currently seeking a permanent seat in the UN Security Council, along with the other G4 nations.
India wields enormous influence in global affairs and can be classified as an emerging superpower.
India's relations with the world have evolved since the British Raj (1857–1947), when the British Empire monopolised external and defence relations. When India gained independence in 1947, few Indians had experience in making or conducting foreign policy. However, the country's oldest political party, the Indian National Congress, had established a small foreign department in 1925 to make overseas contacts and to publicise its independence struggle. From the late 1920s on, Jawaharlal Nehru, who among the independence leaders had a long-standing interest in world affairs, formulated the Congress stance on international issues. As Prime Minister from 1947, Nehru articulated India's approach to the world.
India's international influence varied over the years after independence. Indian prestige and moral authority were high in the 1950s and facilitated the acquisition of developmental assistance from both East and West. Although the prestige stemmed from India's nonaligned stance, the nation was unable to prevent Cold War politics from becoming intertwined with interstate relations in South Asia. On the intensely debated Kashmir issue with Pakistan, India lost credibility by rejecting United Nations calls for a plebiscite in the disputed area.
In the 1960s and 1970s India's international position among developed and developing countries faded in the course of wars with China and Pakistan, disputes with other countries in South Asia, and India's attempt to match Pakistan's support from the United States and China by signing the Indo-Soviet Treaty of Friendship and Cooperation in August 1971. Although India obtained substantial Soviet military and economic aid, which helped to strengthen the nation, India's influence was undercut regionally and internationally by the perception that its friendship with the Soviet Union prevented a more forthright condemnation of the Soviet presence in Afghanistan. In the late 1980s, India improved relations with the United States, other developed countries, and China while continuing close ties with the Soviet Union. Relations with its South Asian neighbours, especially Pakistan, Sri Lanka, and Nepal, occupied much of the energies of the Ministry of External Affairs.
Even before independence, the British-controlled Government of India maintained semi-autonomous diplomatic relations. It had colonies (such as the Aden Settlement), who sent and received full missions. India was a founder member of both the League of Nations and the United Nations. After India gained independence from the United Kingdom in 1947, it soon joined the Commonwealth of Nations and strongly supported independence movements in other colonies, like the Indonesian National Revolution. The partition and various territorial disputes, particularly that over Kashmir, would strain its relations with Pakistan for years to come. During the Cold War, India adopted a foreign policy of not aligning itself with any major power bloc. However, India developed close ties with the Soviet Union and received extensive military support from it.
The end of the Cold War significantly affected India's foreign policy, as it did for much of the world. The country now seeks to strengthen its diplomatic and economic ties with the United States, the European Union trading bloc, Japan, Israel, Mexico, and Brazil. India has also forged close ties with the member states of the Association of Southeast Asian Nations, the African Union, the Arab League and Iran.
Though India continues to have a military relationship with Russia, Israel has emerged as India's second largest military partner while India has built a strong strategic partnership with the United States. The foreign policy of Narendra Modi indicated a shift towards focusing on the Asian region and, more broadly, trade deals.
India's foreign policy has always regarded the concept of neighbourhood as one of widening concentric circles, around a central axis of historical and cultural commonalities.
As many as 44 million people of Indian origin live and work abroad and constitute an important link with the mother country. An important role of India's foreign policy has been to ensure their welfare and wellbeing within the framework of the laws of the country where they live.
Jawaharlal Nehru, India's first Prime Minister, promoted a strong personal role for the Prime Minister but a weak institutional structure. Nehru served concurrently as Prime Minister and Minister of External Affairs; he made all major foreign policy decisions himself after consulting with his advisers and then entrusted the conduct of international affairs to senior members of the Indian Foreign Service. He was one of the main founding fathers of the Panchsheel, or the Five Principles of Peaceful Co-existence.
His successors continued to exercise considerable control over India's international dealings, although they generally appointed separate ministers of external affairs.
India's second prime minister, Lal Bahadur Shastri (1964–66), expanded the Prime Minister's Office (sometimes called the Prime Minister's Secretariat) and enlarged its powers. By the 1970s, the Office of the Prime Minister had become the de facto coordinator and supraministry of the Indian government. The enhanced role of the office strengthened the prime minister's control over foreign policy making at the expense of the Ministry of External Affairs. Advisers in the office provided channels of information and policy recommendations in addition to those offered by the Ministry of External Affairs. A subordinate part of the office—the Research and Analysis Wing (RAW)—functioned in ways that significantly expanded the information available to the prime minister and his advisers. The RAW gathered intelligence, provided intelligence analysis to the Office of the Prime Minister, and conducted covert operations abroad.
The prime minister's control and reliance on personal advisers in the Office of the Prime Minister was particularly strong under the tenures of Indira Gandhi (1966–77 and 1980–84) and her son, Rajiv (1984–89), who succeeded her, and weaker during the periods of coalition governments. Observers find it difficult to determine whether the locus of decision-making authority on any particular issue lies with the Ministry of External Affairs, the Council of Ministers, the Office of the Prime Minister, or the prime minister himself.
The Prime Minister is, however, free to appoint advisers and special committees to examine various foreign policy options and areas of interest. In one instance, Manmohan Singh appointed K. Subrahmanyam in 2005 to head a special government task force to study 'Global Strategic Developments' over the next decade. The Task Force submitted its conclusions to the Prime Minister in 2006; the report has not yet been released in the public domain.
The Ministry of External Affairs is the Indian government's agency responsible for the foreign relations of India. The Minister of External Affairs holds cabinet rank as a member of the Council of Ministers.
Subrahmanyam Jaishankar is the current Minister of External Affairs. The Ministry also has a Minister of State, V. Muraleedharan. The Indian Foreign Secretary is the head of the Indian Foreign Service (IFS) and therefore serves as the head of all Indian ambassadors and high commissioners. Harsh Vardhan Shringla is the current Foreign Secretary of India.
In the post-Cold War era, a significant aspect of India's foreign policy has been the Look East Policy. During the Cold War, India's relations with its South East Asian neighbours were not very strong. After the end of the Cold War, the government of India recognised the importance of redressing this imbalance in India's foreign policy. Consequently, the Narasimha Rao government unveiled the Look East Policy in the early 1990s. Initially it focused on renewing political and economic contacts with the countries of East and South-East Asia.
At present, under the Look East Policy, the Government of India places special emphasis on the economic development of India's backward north-eastern region, taking advantage of the huge ASEAN market as well as the energy resources available in some ASEAN member countries such as Burma.
The Look East Policy was launched in 1991, just after the end of the Cold War and the dissolution of the Soviet Union. Coming after the start of liberalisation, it was a strategic foreign-policy decision by the government. To quote Prime Minister Manmohan Singh, "it was also a strategic shift in India's vision of the world and India's place in the evolving global economy".
The policy was given an initial thrust with the then Prime Minister Narasimha Rao visiting China, Japan, South Korea, Vietnam and Singapore and India becoming an important dialogue partner with ASEAN in 1992. Since the beginning of this century, India has given a big push to this policy by becoming a summit level partner of ASEAN (2002) and getting involved in some regional initiatives such as the BIMSTEC and the Ganga–Mekong Cooperation and now becoming a member of the East Asia Summit (EAS) in December 2005.
Since the dissolution of the Soviet Union, India has forged a closer partnership with Western powers.
In the 1990s, India's economic problems and the demise of the bipolar world political system forced India to reassess its foreign policy and adjust its foreign relations. Previous policies proved inadequate to cope with the serious domestic and international problems facing India. The end of the Cold War gutted the core meaning of nonalignment and left Indian foreign policy without significant direction. The hard, pragmatic considerations of the early 1990s were still viewed within the nonaligned framework of the past, but the disintegration of the Soviet Union removed much of India's international leverage, for which relations with Russia and the other post-Soviet states could not compensate. After the dissolution of the Soviet Union, India improved its relations with the United States, Canada, France, Japan and Germany. In 1992, India established formal diplomatic relations with Israel and this relationship grew during the tenures of the Bharatiya Janata Party (BJP) government and the subsequent UPA (United Progressive Alliance) governments.
In the mid-1990s, India drew the world's attention to Pakistan-backed terrorism in Kashmir. The Kargil War resulted in a major diplomatic victory for India. The United States and the European Union recognised that the Pakistani military had illegally infiltrated Indian territory and pressured Pakistan to withdraw from Kargil. Several anti-India militant groups based in Pakistan were designated as terrorist groups by the United States and the European Union.
In 1998, India tested nuclear weapons for the second time (see Pokhran-II), which resulted in several US, Japanese and European sanctions on India. India's then-defence minister, George Fernandes, said that India's nuclear programme was necessary as it provided a deterrent against a potential Chinese nuclear threat. Most of the sanctions imposed on India were removed by 2001.
After the September 11 attacks in 2001, Indian intelligence agencies provided the US with significant information on the activities of Al-Qaeda and related groups in Pakistan and Afghanistan. India's extensive contribution to the War on Terror, coupled with a surge in its economy, has helped strengthen India's diplomatic relations with several countries. In recent years, India has held numerous joint military exercises with the US and European nations that have strengthened the US–India and EU–India bilateral relationships. India's bilateral trade with Europe and the United States more than doubled in the five years after 2003.
India has been pushing for reforms in the UN and the WTO, with mixed results. India's candidature for a permanent seat on the UN Security Council is currently backed by several countries, including France, Russia, the United Kingdom, Germany, Japan, Brazil, Australia and the UAE. In 2004, the United States signed a nuclear co-operation agreement with India even though the latter is not a party to the Nuclear Non-Proliferation Treaty. The US argued that India's strong nuclear non-proliferation record made it an exception; however, this has not persuaded other Nuclear Suppliers Group members to sign similar deals with India. During a state visit to India in November 2010, US President Barack Obama announced US support for India's bid for permanent membership of the UN Security Council as well as India's entry into the Nuclear Suppliers Group, the Wassenaar Arrangement, the Australia Group and the Missile Technology Control Regime. As of January 2018, India has become a member of the Wassenaar Arrangement, the Australia Group and the Missile Technology Control Regime.
India's growing economy, strategic location, friendly and diplomatic foreign policy, and large and vibrant diaspora have won it more allies than enemies. India has friendly relations with several countries in the developing world. Though India is not part of any major military alliance, it has close strategic and military relationships with most other major powers.
Countries considered India's closest partners include the Russian Federation, Israel, Afghanistan, France, Bhutan, Bangladesh, and the United States. Russia is the largest supplier of military equipment to India, followed by Israel and France. According to some analysts, Israel is set to overtake Russia as India's largest military and strategic partner. The two countries also collaborate extensively in counter-terrorism and space technology. India also enjoys strong military relations with several other countries, including the United Kingdom, the United States, Japan, Singapore, Brazil, South Africa and Italy. In addition, India operates an airbase in Tajikistan, signed a landmark defence accord with Qatar in 2008, and in 2015 leased Assumption Island from Seychelles to build a naval base.
India has also forged relationships with developing countries, especially South Africa, Brazil, and Mexico. These countries often represent the interests of the developing world through economic forums such as the G8+5, IBSA and the WTO. India was seen as one of the standard-bearers of the developing world and claimed to speak for a collection of more than 30 other developing nations at the Doha Development Round. India's Look East Policy has helped it develop greater economic and strategic partnerships with Southeast Asian countries, South Korea, Japan, and Taiwan. India also enjoys friendly relations with the Persian Gulf countries and most members of the African Union.
The Foundation for National Security Research in New Delhi published "India's Strategic Partners: A Comparative Assessment", ranking India's top strategic partners with a score out of 90 points: Russia comes out on top with 62, followed by the United States (58), France (51), the UK (41), Germany (37), and Japan (34).
India has signed strategic partnership agreements with more than two dozen countries/supranational entities listed here in the chronological order of the pacts:
Currently, India is taking steps towards establishing strategic partnerships with Canada and Argentina. Although India has not signed any formal strategic partnership agreements with Bhutan and Qatar, its foreign ministry often describes relations with these countries as 'strategic'.
Certain aspects of India's relations within the subcontinent are conducted through the South Asian Association for Regional Cooperation (SAARC). Other than India, its members are Afghanistan, Bangladesh, Bhutan, Maldives, Nepal, Pakistan and Sri Lanka. Established in 1985, SAARC encourages co-operation in agriculture, rural development, science and technology, culture, health, population control, narcotics control and anti-terrorism.
SAARC has intentionally stressed these "core issues" and avoided more divisive political issues, although political dialogue is often conducted on the margins of SAARC meetings. In 1993, India and its SAARC partners signed an agreement to gradually lower tariffs within the region. Forward movement in SAARC has come to a standstill because of the tension between India and Pakistan, and the SAARC Summit originally scheduled for, but not held in, November 1999 has not been rescheduled. The Fourteenth SAARC Summit was held during 3–4 April 2007 in New Delhi.
The most recent SAARC summit, scheduled to be held in Islamabad, was postponed due to terrorist acts, particularly the Uri attack.
Bilateral relations between India and Afghanistan have been traditionally strong and friendly. While India was the only South Asian country to recognise the Soviet-backed Democratic Republic of Afghanistan in the 1980s, its relations diminished during the Afghan civil wars and the rule of the Islamist Taliban in the 1990s. India aided the overthrow of the Taliban and became the largest regional provider of humanitarian and reconstruction aid.
The new democratically elected Afghan government strengthened its ties with India in wake of persisting tensions and problems with Pakistan, which is continuing to shelter and support the Taliban. India pursues a policy of close co-operation to bolster its standing as a regional power and contain its rival Pakistan, which it maintains is supporting Islamic militants in Kashmir and other parts of India. India is the largest regional investor in Afghanistan, having committed more than US$2.2 billion for reconstruction purposes.
India was the second country to recognise Bangladesh as a separate and independent state, doing so on 6 December 1971. India fought alongside the Bangladeshis to liberate Bangladesh from West Pakistan in 1971.
Bangladesh's relationship with India has been difficult over irrigation and land border disputes after 1976. However, India has enjoyed a favourable relationship with Bangladesh during governments formed by the Awami League in 1972 and 1996. The recent resolution of land and maritime boundary disputes has removed irritants in ties.
At the outset, India's relations with Bangladesh could not have been stronger because of India's unalloyed support for its independence and opposition to Pakistan in 1971. During the independence war, many refugees fled to India. When the resistance struggle matured in November 1971, India intervened militarily and may have helped bring international attention to the issue through Indira Gandhi's visit to Washington, D.C. Afterwards, India furnished relief and reconstruction aid. India extended recognition to Bangladesh prior to the end of the war in 1971 (the second country to do so, after Bhutan) and subsequently lobbied others to follow suit. India also withdrew its military from Bangladesh when Sheikh Mujibur Rahman requested Indira Gandhi to do so during the latter's visit to Dhaka in 1972.
Indo-Bangladesh relations have been somewhat less friendly since the fall of the Mujib government in August 1975, strained over the years by issues such as South Talpatti Island, the Tin Bigha Corridor and access to Nepal, the Farakka Barrage and water sharing, border conflicts near Tripura, and the construction of a fence along most of the border, which India explains as a security provision against migrants, insurgents and terrorists. Many Bangladeshis feel India likes to play "big brother" to smaller neighbours, including Bangladesh. Bilateral relations warmed in 1996, due to a softer Indian foreign policy and the new Awami League government. A 30-year water-sharing agreement for the Ganges River was signed in December 1996, after an earlier bilateral water-sharing agreement for the river lapsed in 1988. Both nations have also cooperated on flood warning and preparedness. The Bangladesh government and tribal insurgents signed a peace accord in December 1997, which allowed for the return of tribal refugees who had fled into India, beginning in 1986, to escape violence caused by an insurgency in their homeland in the Chittagong Hill Tracts. The Bangladesh Army maintains a very strong presence in the area to this day and is increasingly concerned about the growing cultivation of illegal drugs.
There are also small pieces of land along the border that Bangladesh is diplomatically trying to reclaim. Padua, part of Sylhet Division before 1971, has been under Indian control since the war of 1971. This small strip of land was re-occupied by the BDR in 2001, but was later given back to India after the Bangladesh government decided to resolve the issue through diplomatic negotiations. The Indian New Moore island no longer exists, though Bangladesh had repeatedly claimed it as part of its Satkhira district.
In recent years, India has increasingly complained that Bangladesh does not secure its border properly. It fears an increasing flow of poor Bangladeshis, and it accuses Bangladesh of harbouring Indian separatist groups like ULFA and alleged terrorist groups. The Bangladesh government has refused to accept these allegations. India estimates that over 20 million Bangladeshis are living illegally in India; one Bangladeshi official responded that "there is not a single Bangladeshi migrant in India". Since 2002, India has been constructing an India–Bangladesh fence along much of the 2,500-mile border. The failure to resolve migration disputes bears a human cost for illegal migrants, such as imprisonment and health risks (namely HIV/AIDS).
India's Prime Minister Narendra Modi and his Bangladeshi counterpart Sheikh Hasina completed a landmark deal redrawing their messy shared border, thereby resolving long-standing disputes between India and Bangladesh. Bangladesh has also given India a transit route through Bangladesh to its north-eastern states. India and Bangladesh also signed a free trade agreement on 7 June 2015.
Both countries resolved their border dispute on 6 June 2015.
To connect Kolkata with Tripura via Bangladesh by railway, the Union Government sanctioned about 580 crore rupees on 10 February 2016. The project, expected to be completed by 2017, will pass through Bangladesh.
The Agartala–Akhaura rail link between Indian Railways and Bangladesh Railway will reduce the current 1,700 km road distance from Kolkata to Agartala via Siliguri to just 350 km by rail.
The project ranks high on the Prime Minister's 'Act East' Policy and is expected to increase connectivity and boost trade between India and Bangladesh.
Historically, Bhutan has had close ties with India. Both countries signed a friendship treaty in 1949, under which India would assist Bhutan in foreign relations. On 8 February 2007, the Indo-Bhutan Friendship Treaty was substantially revised under the Bhutanese King, Jigme Khesar Namgyel Wangchuck. In the Treaty of 1949, Article 2 read: "The Government of India undertakes to exercise no interference in the internal administration of Bhutan. On its part the Government of Bhutan agrees to be guided by the advice of the Government of India in regard to its external relations."
In the revised treaty it now reads as, "In keeping with the abiding ties of close friendship and cooperation between Bhutan and India, the Government of the Kingdom of Bhutan and the Government of the Republic of India shall cooperate closely with each other on issues relating to their national interests. Neither government shall allow the use of its territory for activities harmful to the national security and interest of the other". The revised treaty also includes in it the preamble "Reaffirming their respect for each other's independence, sovereignty and territorial integrity", an element that was absent in the earlier version. The Indo-Bhutan Friendship Treaty of 2007 strengthens Bhutan's status as an independent and sovereign nation.
India continues to be the largest trade and development partner of Bhutan. Planned development efforts in Bhutan began in the early 1960s. The First Five Year Plan (FYP) of Bhutan was launched in 1961. Since then, India has been extending financial assistance to Bhutan's FYPs. The 10th FYP ended in June 2013. India's overall assistance to the 10th FYP was a little over Rs. 5000 crores, excluding grants for hydropower projects. India has committed Rs. 4500 crores for Bhutan's 11th FYP along with Rs. 500 crores as an Economic Stimulus Package.
The hydropower sector is one of the main pillars of bilateral co-operation, exemplifying mutually beneficial synergy by providing clean energy to India and export revenue to Bhutan (power contributes 14% to the Bhutanese GDP, comprising about 35% of Bhutan's total exports). Three hydroelectric projects (HEPs) totalling 1,416 MW (the 336 MW Chukha HEP, the 60 MW Kurichu HEP, and the 1,020 MW Tala HEP) are already exporting electricity to India. In 2008 the two governments identified ten more projects for development with a total generation capacity of 10,000 MW. Of these, three projects totalling 2,940 MW (the 1,200 MW Punatsangchu-I, 1,020 MW Punatsangchu-II and 720 MW Mangdechu HEPs) are under construction and are scheduled to be commissioned in the last quarter of 2017–18. Of the remaining 7 HEPs, 4 projects totalling 2,120 MW (600 MW Kholongchhu, 180 MW Bunakha, 570 MW Wangchu and 770 MW Chamkarchu) will be constructed under a joint venture model, for which a Framework Inter-Governmental Agreement was signed between both governments in 2014. Of these 4 JV-model projects, pre-construction activities for the Kholongchhu HEP have commenced. Tata Power is also building a hydroelectric dam in Bhutan.
India established diplomatic relations with Burma after its independence from Great Britain in 1948. For many years, Indo-Burmese relations were strong due to cultural links, flourishing commerce, common interests in regional affairs and the presence of a significant Indian community in Burma. India provided considerable support when Burma struggled with regional insurgencies. However, the overthrow of the democratic government by the Burmese military led to strains in ties. Along with much of the world, India condemned the suppression of democracy, and Burma ordered the expulsion of the Burmese Indian community, increasing its own isolation from the world. Only China maintained close links with Burma, while India supported the pro-democracy movement.
However, due to geo-political concerns, India revived its relations and recognised the military junta ruling Burma in 1993, overcoming strains over drug trafficking, the suppression of democracy and the rule of the military junta in Burma. Burma is situated to the south of the states of Mizoram, Manipur, Nagaland and Arunachal Pradesh in Northeast India, and the proximity of the People's Republic of China gives strategic importance to Indo-Burmese relations. The Indo-Burmese border stretches over 1,600 kilometres, and some insurgents in North-east India seek refuge in Burma. Consequently, India has been keen on increasing military co-operation with Burma in its counter-insurgency activities. In 2001, the Indian Army completed the construction of a major road along its border with Burma. India has also been building major roads, highways, ports and pipelines within Burma in an attempt to increase its strategic influence in the region and to counter China's growing strides in the Indochina peninsula. Indian companies have also sought active participation in oil and natural gas exploration in Burma. In February 2007, India announced a plan to develop the Sittwe port, which would enable ocean access for Indian north-eastern states such as Mizoram via the Kaladan River.
India is a major customer of Burmese oil and gas. In 2007, Indian exports to Burma totalled US$185 million, while its imports from Burma were valued at around US$810 million, consisting mostly of oil and gas. India has granted US$100 million in credit to fund highway infrastructure projects in Burma, while US$57 million has been offered to upgrade Burmese railways. A further US$27 million in grants has been pledged for road and rail projects. India is one of the few countries that has provided military assistance to the Burmese junta, although there has been increasing pressure on India to cut some of its military supplies. Relations between the two remain close, which was evident in the aftermath of Cyclone Nargis, when India was one of the few countries whose relief and rescue aid proposals were accepted by Burma's ruling junta.
Both India and the PRC maintain embassies in Rangoon and consulates-general in Mandalay.
Despite lingering suspicions remaining from the 1962 Sino-Indian War, the 1967 Nathu La and Cho La incidents, and continuing boundary disputes over Aksai Chin and Arunachal Pradesh, Sino-Indian relations have improved gradually since 1988. Both countries have sought to reduce tensions along the frontier, expand trade and cultural ties, and normalise relations.
A series of high-level visits between the two nations have helped improve relations. In December 1996, PRC President Jiang Zemin visited India during a tour of South Asia. While in New Delhi, he signed with the Indian Prime Minister a series of confidence-building measures for the disputed borders. Sino-Indian relations suffered a brief setback in May 1998 when the Indian Defence minister justified the country's nuclear tests by citing potential threats from the PRC. However, in June 1999, during the Kargil crisis, then-External Affairs Minister Jaswant Singh visited Beijing and stated that India did not consider China a threat. By 2001, relations between India and the PRC were on the mend, and the two sides handled the move from Tibet to India of the 17th Karmapa in January 2000 with delicacy and tact. In 2003, India formally recognised Tibet as a part of China, and China recognised Sikkim as a formal part of India in 2004.
Since 2004, the economic rise of both China and India has also helped forge closer relations between the two. Sino-Indian trade reached US$65.47 billion in 2013–14, making China India's single largest trading partner. The increasing economic interdependence between India and China has also brought the two nations closer politically, with both eager to resolve their boundary dispute. They have also collaborated on several issues, ranging from the WTO's Doha round in 2008 to a regional free trade agreement. Similar to the Indo-US nuclear deal, India and China have agreed to co-operate in the field of civilian nuclear energy. However, China's economic interests have clashed with those of India: both countries are the largest Asian investors in Africa and have competed for control over its large natural resources.
India enjoys considerable influence over the Maldives' foreign policy and provides extensive security co-operation, especially after Operation Cactus in 1988, during which India repelled Tamil mercenaries who had invaded the country.
As a founding member in 1985 of the South Asian Association for Regional Cooperation (SAARC), which brings together Afghanistan, Bangladesh, Bhutan, India, Maldives, Nepal, Pakistan and Sri Lanka, the Maldives plays a very active role in the organisation. The Maldives has taken the lead in calling for a South Asian Free Trade Agreement, the formulation of a Social Charter, the initiation of informal political consultations in SAARC forums, the lobbying for greater action on environmental issues, the proposal of numerous human rights measures such as the regional convention on child rights, and the setting up of a SAARC Human Rights Resource Centre. The Maldives is also an advocate of a greater international profile for SAARC, such as through formulating common positions at the UN.
India is starting the process of bringing the island country into India's security grid. The move comes after the moderate Islamic nation approached New Delhi earlier this year over fears that one of its island resorts could be taken over by terrorists, given its lack of military assets and surveillance capabilities.
India also signed an agreement with the Maldives in 2011 which is centred around the following:
Relations between India and Nepal are close yet fraught with difficulties stemming from border disputes, geography, economics, the problems inherent in big power–small power relations, and common ethnic and linguistic identities that overlap the two countries' borders. In 1950, New Delhi and Kathmandu initiated their intertwined relationship with the Treaty of Peace and Friendship and accompanying secret letters that defined security relations between the two countries, and an agreement governing both bilateral trade and trade transiting Indian soil. The 1950 treaty and letters stated that "neither government shall tolerate any threat to the security of the other by a foreign aggressor" and obligated both sides "to inform each other of any serious friction or misunderstanding with any neighboring state likely to cause any breach in the friendly relations subsisting between the two governments", and also granted Indian and Nepali citizens the right to engage in economic activities, such as work and business, in each other's territory. These accords cemented a "special relationship" between India and Nepal that granted Nepalese in India the same economic and educational opportunities as Indian citizens.
Relations between India and Nepal reached their lowest point in 1989, when India imposed a 13-month-long economic blockade of Nepal. Indian PM Narendra Modi visited Nepal in 2014, the first visit by an Indian PM in nearly 17 years.
In 2015, a blockade of the India–Nepal border affected relations. The blockade was led by ethnic communities angered by Nepal's newly promulgated constitution. The Nepalese government accuses India of deliberately worsening the embargo, which India denies.
Despite historical, cultural and ethnic links between them, relations between India and Pakistan have been plagued by years of mistrust and suspicion ever since the partition of India in 1947. The principal source of contention between India and its western neighbour has been the Kashmir conflict. After an invasion by Pashtun tribesmen and Pakistani paramilitary forces, the Hindu Maharaja of the Dogra Kingdom of Jammu and Kashmir, Hari Singh, and its Muslim Prime Minister, Sheikh Abdullah, signed an Instrument of Accession with New Delhi. The First Kashmir War started after the Indian Army entered Srinagar, the capital of the state, to secure the area from the invading forces. The war ended in December 1948 with the Line of Control dividing the erstwhile princely state into territories administered by Pakistan (northern and western areas) and India (southern, central and northeastern areas). Pakistan contested the legality of the Instrument of Accession, since the Dogra Kingdom had signed a standstill agreement with it. The Indo-Pakistani War of 1965 started following the failure of Pakistan's Operation Gibraltar, which was designed to infiltrate forces into Jammu and Kashmir to precipitate an insurgency against Indian rule. The five-week war caused thousands of casualties on both sides. It ended in a United Nations (UN) mandated ceasefire and the subsequent issuance of the Tashkent Declaration.
India and Pakistan went to war again in 1971, this time the conflict being over East Pakistan. The large-scale atrocities committed there by the Pakistan army led to millions of Bengali refugees pouring over into India. India, along with the Mukti Bahini, defeated Pakistan and the Pakistani forces surrendered on the eastern front. The war resulted in the creation of Bangladesh.
In 1998, India carried out the Pokhran-II nuclear tests, which were followed by Pakistan's Chagai-I tests. Following the Lahore Declaration in February 1999, relations briefly improved. A few months later, however, Pakistani paramilitary forces and the Pakistan Army infiltrated in large numbers into the Kargil district of Indian Kashmir. This initiated the Kargil War after India moved in thousands of troops to flush out the infiltrators. Although the conflict did not result in a full-scale war between India and Pakistan, relations between the two reached an all-time low, worsening further after the involvement of Pakistan-based terrorists in the hijacking of Indian Airlines Flight 814 in December 1999. Attempts to normalise relations, such as the Agra summit held in July 2001, failed. An attack on the Indian Parliament in December 2001, which India blamed on Pakistan-based groups even though Pakistan condemned the attack, caused a military standoff between the two countries that lasted for nearly a year and raised fears of nuclear warfare. However, a peace process initiated in 2003 led to improved relations in the following years.
Since the initiation of the peace process, several confidence-building measures (CBMs) between India and Pakistan have taken shape. The Samjhauta Express and the Delhi–Lahore Bus service are two of these successful measures, which have played a crucial role in expanding people-to-people contact between the two countries. The initiation of the Srinagar–Muzaffarabad Bus service in 2005 and the opening of a historic trade route across the Line of Control in 2008 further reflect the increasing eagerness of both sides to improve relations. Although bilateral trade between India and Pakistan was a modest US$1.7 billion in March 2007, it is expected to cross US$10 billion by 2010. After the 2005 Kashmir earthquake, India sent aid to affected areas in Pakistani Kashmir and Punjab as well as Indian Kashmir.
The 2008 Mumbai attacks seriously undermined relations between the two countries. India accused Pakistan of harbouring militants on its soil, while Pakistan vehemently denied such claims.
A new chapter in India–Pakistan relations began when the new NDA government took charge in Delhi after its victory in the 2014 election and invited the leaders of SAARC member states to the oath-taking ceremony. This was followed by an informal visit by the Indian Prime Minister on 25 December to wish Pakistani Prime Minister Nawaz Sharif on his birthday and attend his daughter's wedding. It was hoped that relations between the neighbours would improve, but the attack on an Indian army camp by Pakistani infiltrators on 18 September 2016 and the subsequent surgical strike by India aggravated the already strained relations between the nations.
A SAARC summit scheduled in Islamabad was called off after a boycott by India, subsequently joined by other SAARC members.
Relations took a further nosedive after an attack on the CRPF in February 2019 by a terrorist associated with the Pakistan-based terror organisation Jaish-e-Mohammed, who rammed a vehicle packed with explosives into a bus carrying CRPF jawans in Pulwama, Kashmir, killing 40. India blamed Pakistan, a charge denied by the Pakistani establishment.
Bilateral relations between Sri Lanka and India have been generally friendly, but were affected by the Sri Lankan Civil War, the failure of Indian intervention in it, and India's support for Tamil Tiger militants. India is Sri Lanka's only neighbour, separated by the Palk Strait; both nations occupy a strategic position in South Asia and have sought to build a common security umbrella in the Indian Ocean.
India-Sri Lanka relations have undergone a qualitative and quantitative transformation in the recent past. Political relations are close, trade and investments have increased dramatically, infrastructural linkages are constantly being augmented, defence collaboration has increased and there is a general, broad-based improvement across all sectors of bilateral co-operation. India was the first country to respond to Sri Lanka's request for assistance after the tsunami in December 2004. In July 2006, India evacuated 430 Sri Lankan nationals from Lebanon, first to Cyprus by Indian Navy ships and then to Delhi and Colombo by special Air India flights.
There exists a broad consensus within the Sri Lankan polity on the primacy of India in Sri Lanka's external relations matrix. Both the major political parties in Sri Lanka, the Sri Lanka Freedom Party and the United National Party, have contributed to the rapid development of bilateral relations in the last ten years. Sri Lanka has supported India's candidature for permanent membership of the UN Security Council.
India and Australia are both Commonwealth members. Sporting and cultural ties are significant. Australian cricketers often undertake large commercial ventures in India, enhanced by the IPL and, to a lesser degree, the ICL. Bollywood productions enjoy a large market in Australia. In 2007, Prime Minister John Howard visited Mumbai and its entertainment industry in an effort to increase tourism from India to Australia.
There are ongoing strategic attempts to form an "Asian NATO" with India, Japan, the US and Australia through the Quadrilateral Security Dialogue. During the first decade of the 21st century, the deepening of strategic relations between the two nations was prevented by a range of policy disagreements, such as India's refusal to sign the NPT and Australia's consequent refusal to provide India with uranium. Australia's parliament later allowed for the sale of uranium to India, following changes in government. Closer strategic cooperation between India, Japan, the United States and Australia also began during the second half of the 2010s, which some analysts attributed to a desire to balance Chinese initiatives in the Indo-Pacific region.
Brunei has a high commission in New Delhi, and India has a high commission in Bandar Seri Begawan. Both countries are full members of the Commonwealth of Nations.
Fiji's relationship with the Republic of India is often seen by observers against the backdrop of the sometimes tense relations between its indigenous people and the 44 percent of the population who are of Indian descent. India has used its influence in international forums such as the Commonwealth of Nations and United Nations on behalf of ethnic Indians in Fiji, lobbying for sanctions against Fiji in the wake of the 1987 coups and the 2000 coup, both of which removed governments, one dominated and one led, by Indo-Fijians.
Ties between Indonesia and India date back to the time of the Ramayana: "Yawadvipa" (Java) is mentioned in India's earliest epic, the Ramayana. Sugriva, the chief of Rama's army, dispatched his men to Yawadvipa, the island of Java, in search of Sita. Indonesians have absorbed many aspects of Indian culture over almost two millennia. The most obvious trace is the large-scale adoption of Sanskrit into the Indonesian language. Several Indonesian toponyms have Indian parallels or origins, such as Madura with Mathura, the Serayu and Sarayu rivers, Kalingga from the Kalinga Kingdom, and Ngayogyakarta from Ayodhya. Indianised Hindu–Buddhist kingdoms, such as Kalingga, Srivijaya, Medang i Bhumi Mataram, Sunda, Kadiri, Singhasari and Majapahit, were the predominant governments in Indonesia from about 200 to the 1500s, with the last remaining in Bali. Examples of the profound Hindu–Buddhist influence in Indonesian history are the 9th-century Prambanan and Borobudur temples.
In 1950, Sukarno, the first President of Indonesia, called upon the peoples of Indonesia and India to "intensify the cordial relations" that had existed between the two countries "for more than 1000 years" before they had been "disrupted" by colonial powers. In the spring of 1966, the foreign ministers of both countries began speaking again of an era of friendly relations. India had supported Indonesian independence, and Nehru had raised the Indonesian question in the United Nations Security Council.
India has an embassy in Jakarta and Indonesia operates an embassy in Delhi. India regards Indonesia as a key member of ASEAN. Today, the two countries maintain cooperative and friendly relations and are among the few (and largest) functioning democracies in Asia. Both nations have agreed to establish a strategic partnership. As fellow Asian democracies that share common values, it is natural for them to nurture and foster a strategic alliance. Indonesia and India are member states of the G-20, the E7, the Non-Aligned Movement, and the United Nations.
India–Japan relations have always been strong. India has culturally influenced Japan through Buddhism. During World War II, the Imperial Japanese Army helped Netaji Subhash Chandra Bose's Indian National Army. Relations have remained warm since India's independence, despite Japan imposing sanctions on India after the 1998 Pokhran-II nuclear tests (the sanctions were lifted in 2001). Japanese companies such as Sony, Toyota, and Honda have manufacturing facilities in India, and with the growth of the Indian economy, India is a big market for Japanese firms. The most prominent Japanese investment in India is that of the automobile giant Suzuki, whose Indian joint venture Maruti Suzuki is the largest car manufacturer in India. Honda was also a partner in Hero Honda, one of the largest motorcycle sellers in the world (the companies split in 2011).
According to Prime Minister Shinzō Abe's "arc of freedom" theory, it is in Japan's interests to develop closer ties with India, the world's most populous democracy, while its relations with China remain chilly. To this end, Japan has funded many infrastructure projects in India, most notably the Delhi Metro.
In December 2006, Prime Minister Manmohan Singh's visit to Japan culminated in the signing of the "Joint Statement Towards Japan-India Strategic and Global Partnership". Indian applicants were welcomed in 2006 to the JET Programme, starting with just one slot available in 2006 and 41 in 2007. Also, in 2007, the Japan Self-Defense Forces took part in a naval exercise in the Indian Ocean, known as Malabar 2007, which also involved the naval forces of India, Australia, Singapore and the United States.
In October 2008, Japan signed an agreement with India under which it would grant the latter a low-interest loan worth US$4.5 billion to construct a high-speed rail line between Delhi and Mumbai. This is the single largest overseas project being financed by Japan and reflects growing economic partnership between the two. India and Japan signed a security co-operation agreement in which both will hold military exercises, police the Indian Ocean and conduct military-to-military exchanges on fighting terrorism, making India one of only three countries, the other two being the United States and Australia, with which Japan has such a security pact. There are 25,000 Indians in Japan as of 2008.
In recent years, India has endeavoured to build relations with this small Southeast Asian nation. The two have strong military relations, and India will build an air force academy in Laos.
India has a high commission in Kuala Lumpur, and Malaysia has a high commission in New Delhi. Both countries are full members of the Commonwealth of Nations. India and Malaysia are also connected by various cultural and historical ties that date back to antiquity. The two countries are on friendly terms with each other, and Malaysia is home to a small population of Indian immigrants. Mahathir bin Mohamad, the fourth and longest-serving Prime Minister of Malaysia, is of partial Indian origin: his father, Mohamad Iskandar, was a Malayalee Muslim who migrated from Kerala, and his mother, Wan Tampawan, was a Malay.
India–Nauru relations have existed since the island nation's independence in 1968. Leaders of both countries have met on the sidelines of international forums of which both nations are part, such as the United Nations and the Non-Aligned Movement. India is one of the largest donors to the island, supporting its education sector and providing transportation and computer connections for the MPs and the Speaker of the Parliament of Nauru. The President of Nauru has made numerous visits to India to further strengthen ties and co-operation.
Bilateral relations were established between India and New Zealand in 1952. India has a High Commission in Wellington with an Honorary Consulate in Auckland, while New Zealand has a High Commission in New Delhi along with a Consulate in Mumbai, trade offices in New Delhi and Mumbai and an Honorary Consulate in Chennai.
India–New Zealand relations were cordial but not extensive after Indian independence. More recently, New Zealand has shown interest in extending ties with India due to India's impressive GDP growth.
India and North Korea have growing trade and diplomatic relations. India maintains a fully functioning embassy in Pyongyang, and North Korea has an embassy in New Delhi. India has said that it wants the "reunification" of Korea.
India and Papua New Guinea established relations in 1975, following PNG's independence from Australia. Since 1975, relations have grown between the two nations. India maintains a High Commission in Port Moresby, while Papua New Guinea maintains a High Commission in New Delhi. In the 2010 fiscal year, trade between the two nations grew to US$239 million. PNG has sent numerous military officers and students to be trained and educated in India's academies and universities. In recent years, India and PNG have signed an Economic Partnership Agreement, allowing India to invest further in PNG's infrastructure, telecommunications and educational institutions.
Through the Srivijaya and Majapahit empires, Hindu influence has been visible in Philippine history from the 10th to 14th centuries. During the 18th century, there was robust trade between Manila and the Coromandel Coast and Bengal, involving Philippine exports of tobacco, silk, cotton, indigo, sugar cane and coffee.
Formal diplomatic relations between the Philippines and India were established on 16 November 1949. The first Philippine envoy to India was the late Foreign Secretary Narciso Ramos. Five years after India's independence in 1947, the Philippines and India signed a Treaty of Friendship on 11 July 1952 in Manila to strengthen the friendly relations existing between the two countries. Soon after, the Philippine Legation in New Delhi was established and then elevated to an embassy. However, due to foreign policy differences arising from the bipolar alliance structure of the Cold War, the development of bilateral relations was stunted. It was only in 1976 that relations started to normalise, when Aditya Birla, one of India's most successful industrialists, met with then President Ferdinand E. Marcos to explore possibilities of setting up joint ventures in the Philippines.
Today, like India, the Philippines is a leading voice-operated business process outsourcing (BPO) destination in terms of revenue (US$5.7 billion) and the number of people employed in the sector (500,000). India has 20 IT/BPO companies operating in the Philippines. Philippines–India bilateral trade stood at US$986.60 million in 2009, up from US$600 million in 2004. Both countries aim to reach US$1 billion by 2010. There are 60,000 Indians living in the Philippines. In October 2007, the Philippines and India signed the Framework for Bilateral Cooperation, which created the PH-India JCBC. It has working groups on trade, agriculture, tourism, health and renewable energy, along with a regular policy consultation mechanism and security dialogue.
Both countries established diplomatic relations in June 1970.
India and Singapore share long-standing cultural, commercial and strategic relations, with Singapore being a part of the "Greater India" cultural and commercial region. More than 300,000 people of Indian Tamil origin live in Singapore. Following its independence in 1965, Singapore was concerned with China-backed communist threats as well as domination by Malaysia and Indonesia, and sought a close strategic relationship with India, which it saw as a counterbalance to Chinese influence and a partner in achieving regional security. Singapore had always been an important strategic trading post, giving India trade access to Maritime Southeast Asia and the Far East. Although the rival positions of both nations over the Vietnam War and the Cold War caused consternation between India and Singapore, their relationship expanded significantly in the 1990s; Singapore was one of the first to respond to India's Look East policy of expanding its economic, cultural and strategic ties in Southeast Asia to strengthen its standing as a regional power. Singapore, and especially Singaporean Foreign Minister George Yeo, have taken an interest in re-establishing the ancient Indian university, Nalanda University.
Singapore is the 8th largest source of investment in India and the largest amongst ASEAN member nations. It is also India's 9th biggest trading partner as of 2005–06. Its cumulative investment in India totalled US$3 billion as of 2006 and was expected to rise to US$5 billion by 2010 and US$10 billion by 2015. India's economic liberalisation and its "Look East" policy have led to a major expansion in bilateral trade, which grew from US$2.2 billion in 2001 to US$9–10 billion in 2006 – a 400% growth in the span of five years – and to US$50 billion by 2010. Singapore accounts for 38% of India's trade with ASEAN member nations and 3.4% of its total foreign trade. India's main exports to Singapore in 2005 included petroleum, gemstones, jewellery and machinery, and its imports from Singapore included electronic goods, organic chemicals and metals. More than half of Singapore's exports to India are essentially "re-exports" – items that had been imported from India.
The cordial relationship between the two countries extends back to AD 48, when Queen Suro, or Princess Heo, travelled from the kingdom of Ayodhya to Korea. According to the Samguk Yusa, the princess had a dream about a heavenly king who was awaiting heaven's anointed bride. After Princess Heo had the dream, she asked her parents, the king and queen, for permission to set out and seek the man, which they granted, believing the match to be divinely ordained. Upon approval, she set out on a boat, carrying gold, silver, a tea plant, and a stone which calmed the waters. Archaeologists discovered a stone with two fish kissing each other, a symbol of the Gaya kingdom that is unique to the Mishra royal family in Ayodhya, India. This royal link provides further evidence of active commercial engagement between India and Korea since the queen's arrival. Her descendants live in the city of Kimhae as well as abroad, in the American states of New Jersey and Kentucky. Many of them became prominent and well known around the world, such as President Kim Dae-jung and Prime Minister Kim Jong-pil.
Relations between the countries have been relatively limited, although much progress has been made over the past three decades. Since the formal establishment of diplomatic ties between the two countries in 1973, several trade agreements have been reached. Trade between the two nations has increased exponentially, from $530 million in the 1992–1993 fiscal year to $10 billion in 2006–2007. During the 1997 Asian financial crisis, South Korean businesses sought to increase access to global markets and began trade investments with India. The last two presidential visits from South Korea to India were in 1996 and 2006, and the embassy work between the two countries is seen as needing improvement. Recently, there have been acknowledgements in the Korean public and political spheres that expanding relations with India should be a major economic and political priority for South Korea. Much of South Korea's economic investment has been directed into China; however, South Korea is currently the fifth largest source of investment in India. Speaking to The Times of India, President Roh Moo-hyun voiced his opinion that co-operation between India's software and Korea's IT industries would bring very efficient and successful outcomes. The two countries agreed to shift their focus to the revision of visa policies, expansion of trade, and establishment of a free trade agreement to encourage further investment. Korean companies such as LG, Hyundai and Samsung have established manufacturing and service facilities in India, and several Korean construction companies won grants for a portion of the many infrastructural building plans in India, such as the National Highway Development Project. Tata Motors' purchase of Daewoo Commercial Vehicles for $102 million highlights India's investments in Korea, which consist mostly of subcontracting.
India's Look East policy saw it grow relations with ASEAN countries including Thailand, while Thailand's Look West policy saw it grow its relations with India. Both countries are members of BIMSTEC. Indian Prime Ministers Rajiv Gandhi, P. V. Narasimha Rao, Atal Bihari Vajpayee and Manmohan Singh have visited Thailand, visits reciprocated by their Thai counterparts Chatichai Choonhavan, Thaksin Shinawatra and Surayud Chulanont. In 2003, a Free Trade Agreement was signed between the two countries. India is the 13th largest investor in Thailand. The spheres of trade include chemicals, pharmaceuticals, textiles, nylon, tyre cord, real estate, rayon fibres, paper-grade pulps, steel wires and rods; however, IT services and manufacturing are the main spheres. Through Buddhism, India has culturally influenced Thailand. The Indian epics, the Mahabharata and the Ramayana, are popular and widely taught in schools as part of the curriculum in Thailand. Examples can also be seen in temples around Thailand, where the story of the Ramayana and renowned Indian folk stories are depicted on temple walls. Thailand has become a big tourist destination for Indians.
India supported Vietnam's independence from France, opposed US involvement in the Vietnam War and supported unification of Vietnam. India established official diplomatic relations in 1972 and maintained friendly relations, especially in the wake of Vietnam's hostile relations with the People's Republic of China, which had become India's strategic rival.
India granted "Most favoured nation" status to Vietnam in 1975, and both nations signed a bilateral trade agreement in 1978 and the Bilateral Investment Promotion and Protection Agreement (BIPPA) on 8 March 1997. In 2007, a fresh joint declaration was issued during the state visit of the Prime Minister of Vietnam Nguyen Tan Dung. Bilateral trade has increased rapidly since the liberalisation of the economies of both Vietnam and India. India is the 13th-largest exporter to Vietnam, with exports having grown steadily from US$11.5 million in 1985–86 to US$395.68 million by 2003. Vietnam's exports to India rose to US$180 million, including agricultural products, handicrafts, textiles, electronics and other goods. Between 2001 and 2006, the volume of bilateral trade expanded at 20–30% per annum to reach US$1 billion by 2006. Continuing this rapid pace of growth, bilateral trade is expected to rise to US$2 billion by 2008, two years ahead of the official target. India and Vietnam have also expanded co-operation in information technology, education and collaboration between the respective national space programmes. Direct air links and lax visa regulations have been established to bolster tourism.
India and Vietnam are members of the Mekong–Ganga Cooperation, created to enhance close ties between India and the nations of Southeast Asia. Vietnam has supported India's bid to become a permanent member of the United Nations Security Council and to join the Asia-Pacific Economic Cooperation (APEC). In the 2003 joint declaration, India and Vietnam envisaged creating an "Arc of Advantage and Prosperity" in Southeast Asia; to this end, Vietnam has backed a more important relationship and role between India and the Association of Southeast Asian Nations (ASEAN) and its negotiation of an Indo–ASEAN free trade agreement. India and Vietnam have also built a strategic partnership, including extensive co-operation on developing nuclear power, enhancing regional security and fighting terrorism, transnational crime and drug trafficking.
India's interaction with ASEAN during the Cold War was very limited. India declined to get associated with ASEAN in the 1960s when full membership was offered even before the grouping was formed.
It was only with the formulation of the Look East policy in 1992 that India started giving this region due importance in its foreign policy. India became a sectoral dialogue partner with ASEAN in 1992, a full dialogue partner in 1995, a member of the ASEAN Regional Forum (ARF) in 1996, and a summit-level partner (on par with China, Japan and Korea) in 2002.
The first India–ASEAN Business Summit was held at New Delhi in October 2002. The then Prime Minister A. B. Vajpayee addressed this meet and since then this business summit has become an annual feature before the India–ASEAN Summits, as a forum for networking and exchange of business experiences between policy makers and business leaders from ASEAN and India.
Four India-ASEAN Summits, first in 2002 at Phnom Penh (Cambodia), second in 2003 at Bali, Indonesia, third in 2004 at Vientiane, Laos, and the fourth in 2005 at Kuala Lumpur, Malaysia, have taken place.
The following agreements have been entered into with ASEAN:
The following proposals were announced by the Prime Minister at the 4th ASEAN-India Summit:
The ASEAN region has an abundance of natural resources and significant technological skills. These provide a natural base for the integration between ASEAN and India in both trade and investment. The present level of bilateral trade with ASEAN of nearly US$18 billion is reportedly increasing by about 25% per year. India hopes to reach the level of US$30 billion by 2007. India is also improving its relations with the help of other policy decisions like offers of lines of credit, better connectivity through air (open skies policy), rail and road links.
India's commonalities with developing nations in Latin America, especially Brazil and Mexico, have continued to grow. India and Brazil continue to work together on the reform of the Security Council through the G4 nations, while they have also increased strategic and economic co-operation through the IBSA Dialogue Forum. The process of finalising a Preferential Trade Agreement (PTA) with MERCOSUR (Brazil, Argentina, Uruguay, and Paraguay) is on the agenda, and negotiations are being held with Chile. Brazilian President Luiz Inácio Lula da Silva was the guest of honour at the 2004 Republic Day celebrations in New Delhi.
Both countries have established diplomatic relations and have an Extradition Arrangement.
Formal relations between both the countries were first established in 1949. India has an embassy in Buenos Aires and Argentina has an embassy in New Delhi. The current Indian Ambassador to Argentina (concurrently accredited to Uruguay and Paraguay) is R Viswanathan.
According to the Ministry of External Affairs of the Government of India, "Under the 1968 Visa agreement, (Argentine) fees for transit and tourist visas have been abolished. Under the new visa agreement signed during the Argentine Presidential visit in October 2009, it has been agreed that five-year multi-entry business visas would be given free of cost." The Embassy of India in Buenos Aires offers "Cafe Con Visa" (coffee with visa) to Argentine visitors: applicants are invited for coffee and the visa is issued immediately. This has been praised by the Argentine media, the public and the Foreign Minister himself.
India and Barbados established diplomatic relations on 30 November 1966 (the date of Barbados' national independence). On that date, the government of India gifted Barbados the throne in Barbados' national House of Assembly. India is represented in Barbados through its embassy in Suriname and an Indian consulate in Holetown, St. James. In 2011–12 the Indian-based firm Era's Lucknow Medical College and Hospital established the American University of Barbados (AUB) as the island's first medical school for international students. In 2015 the governments of Barbados and India signed a joint Open Skies Agreement.
Today around 3,000 persons from India call Barbados home. Two-thirds are from India's Surat district in Gujarat and are known as Suratis; most of the Suratis are involved in trading. The rest are mainly of Sindhi ancestry.
India has an Honorary Consulate in Belize City and Belize has an Honorary Consulate in New Delhi. Bilateral trade stood at US$45.3 million in 2014 and has steadily increased since. Belize and India have engaged in dialogue through the Central American Integration System (SICA), discussing anti-terrorism, climate change and food security. India signed a Tax Information Exchange Agreement with Belize in 2013. India also provides Belize US$30 million as part of its foreign aid commitment to SICA countries. Citizens of Belize are eligible for scholarships at Indian universities under the Indian Technical and Economic Cooperation Programme and the Indian Council for Cultural Relations.
The two nations share a close cultural link due to Belize's large East Indian population, estimated at 4% of the total population.
Relations between Brazil and India have extended to diverse areas such as science and technology, pharmaceuticals and space, as both are member nations of BRICS. Two-way trade in 2007 nearly tripled to US$3.12 billion from US$1.2 billion in 2004. India attaches tremendous importance to its relationship with this Latin American giant and hopes to see the areas of co-operation expand in the coming years.
Both countries want the participation of developing countries in the UNSC's permanent membership, since the underlying philosophy for both is that the UNSC should be more democratic, legitimate and representative; the G4 is a novel grouping for this realisation.
Brazil and India are deeply committed to IBSA (South-South co-operation) initiatives and attach utmost importance to this trilateral co-operation between the three large, multi-ethnic, multi-racial and multi-religious developing countries, which are bound by the common principle of pluralism and democracy.
Indo-Canadian relations are the longstanding bilateral relations between India and Canada, built upon a "mutual commitment to democracy", "pluralism" and "people-to-people links", according to the government of Canada. In 2004, bilateral trade between India and Canada stood at about C$2.45 billion. However, India's Smiling Buddha nuclear test led to connections between the two countries being frozen, with allegations that India had broken the terms of the Colombo Plan, and the botched handling of the Air India investigation and the case in general dealt a further setback to Indo-Canadian relations. Although Jean Chrétien and Roméo LeBlanc both visited India in the late 1990s, relations were again halted after the Pokhran-II tests.
Canada-India relations have been on an upward trajectory since 2005. Governments at all levels, private-sector organisations, academic institutes in two countries, and people-to-people contacts—especially diaspora networks—have contributed through individual and concerted efforts to significant improvements in the bilateral relationship.
The two governments have agreed on important policy frameworks to advance the bilateral relationship. In particular, the Nuclear Cooperation Agreement (signed in June 2010) and the current successful negotiations of the Comprehensive Economic Partnership Agreement (CEPA) constitute a watershed in Canada-India relations.
The two governments have attempted to make up for lost time and are eager to complete CEPA negotiations by 2013 and ensure its ratification by 2014. After the conclusion of CEPA, Canada and India must define the areas of their partnership, which will depend on their ability to convert common interests into common action and respond effectively to ensure steady co-operation. For example, during "pull-aside" meetings between Prime Minister Manmohan Singh and Stephen Harper at the G-20 summit in Mexico in June 2012, and an earlier meeting in Toronto between External Affairs Minister S. M. Krishna and John Baird, the leaders discussed developing a more comprehensive partnership going beyond food security and including the possibility of tie-ups in the energy sector, mainly hydrocarbons.
Both countries established diplomatic ties on 19 January 1959. Since then the relationship between the two countries has gradually strengthened, with more frequent diplomatic visits to promote political, commercial, cultural and academic exchanges. Colombia is currently the commercial point of entry into Latin America for Indian companies.
Relations between India and Cuba are relatively warm. Both nations are part of the Non-Aligned Movement. Cuba has repeatedly called for a more "democratic" representation of the United Nations Security Council and supports India's candidacy as a permanent member on a reformed Security Council. Fidel Castro said that "The maturity of India…, its unconditional adherence to the principles which lay at the foundation of the Non-Aligned Movement give us the assurances that under the wise leadership of Indira Gandhi (the former Prime Minister of India), the non-aligned countries will continue advancing in their inalienable role as a bastion for peace, national independence and development…"
India has an embassy in Havana, the capital of Cuba which opened in January 1960. This had particular significance as it symbolised Indian solidarity with the Cuban revolution. India had been one of the first countries in the world to have recognised the new Cuban government after the Cuban Revolution.
Cuba has an embassy in New Delhi, the Indian capital.
Relations between India and Jamaica are generally cordial and close. There are many cultural and political connections inherited from British colonisation, such as membership in the Commonwealth of Nations, parliamentary democracy, the English language and cricket.
Both nations are members of the Non-Aligned Movement, the United Nations and the Commonwealth, and Jamaica supports India's candidacy for permanent membership on a reformed UN Security Council.
During the British era, Indians voluntarily went to work in Jamaica and the West Indies. This created a considerable population of people of Indian origin in Jamaica.
India has a High Commission in Kingston, whilst Jamaica has a consulate in New Delhi and plans to upgrade it to a High Commission soon.
Mexico is a very important and major economic partner of India. Nobel Prize laureate Octavio Paz, who served as Mexico's ambassador to India, wrote his book "In Light of India", an analysis of Indian history and culture. Both nations are regional powers and members of the G-20 major economies.
Bilateral relations between India and Nicaragua have been limited to SICA dialogue and visits by Nicaraguan ministers to India. India maintains an honorary consul general in Nicaragua, concurrently accredited to the Indian embassy in Panama City. Nicaragua used to maintain an embassy in India, but it was reduced to an honorary consulate general in New Delhi. The current Foreign Minister, Samuel Santos López, visited India in 2008 for the SICA-India Foreign Ministers' meeting and in 2013 for high-level talks with the then External Affairs Minister Salman Khurshid; bilateral trade also expanded, with the two countries reaching a total of US$60.12 million during 2012–13.
Bilateral relations between Panama and India have been growing steadily, reflecting the crucial role the Panama Canal plays in global trade and commerce. Moreover, with over 15,000 Indians living in Panama, diplomatic ties have considerably increased over the past decade.
The opening of the expanded Canal in 2016 is expected to provide new prospects for maritime connectivity. Seeking to strengthen trade relations rapidly enough to triple the flow of trade between the two countries, India is keen to leverage these transit trade facilities in Panama to access the wider market of Latin America. Along with pursuing a free trade agreement, India wants to promote investment in various sectors of Panama's economy, including the banking and maritime industries and the multimodal centre of the Colón Free Trade Zone.
The bilateral relations between the Republic of India and Paraguay have been traditionally strong due to commercial, cultural and strategic co-operation. India is represented in Paraguay through its embassy in Buenos Aires, Argentina, and also has an Honorary Consul-General in Asunción. Paraguay opened its embassy in India in 2005.
Bilateral relations between the Republic of India and the Republic of Trinidad and Tobago have considerably expanded in recent years with both nations building strategic and commercial ties. Both nations formally established diplomatic relations in 1962.
Both nations were colonised by the British Empire; India supported independence of Trinidad and Tobago from colonial rule and established its diplomatic mission in 1962 – the year that Trinidad and Tobago officially gained independence from British rule. They possess diverse natural and economic resources and are the largest economies in their respective regions. Both are members of the Commonwealth of Nations, the United Nations, G-77 and the Non-Aligned Movement (NAM).
The Republic of India operates a High Commission in Port of Spain, whilst the Republic of Trinidad and Tobago operates a High Commission in New Delhi.
Historically, the United States gave strong support to the Indian independence movement in defiance of the British Empire. Relations between India and the United States were lukewarm following Indian independence, as India took a leading position in the Non-Aligned Movement and received support from the Soviet Union. The US provided support to India in 1962 during its war with China. For most of the Cold War, the US tended to have warmer relations with Pakistan, primarily as a way to contain Soviet-friendly India and to use Pakistan to back the Afghan Mujahideen against the Soviet occupation of Afghanistan. The Indo-Soviet Treaty of Friendship and Cooperation, signed in 1971, also positioned India against the US.
After the Sino-Indian War and the Indo-Pakistani War of 1965, India made considerable changes to its foreign policy. It developed a close relationship with the Soviet Union and started receiving massive military equipment and financial assistance from the USSR. This had an adverse effect on the Indo-US relationship. The United States saw Pakistan as a counterweight to pro-Soviet India and started giving the former military assistance. This created an atmosphere of suspicion between India and the US. The Indo-US relationship suffered a considerable setback when the Soviets took over Afghanistan and India overtly supported the Soviet Union.
Relations between India and the United States came to an all-time low during the early 1970s. Despite reports of atrocities in East Pakistan, and being told, most notably in the "Blood telegram", of genocidal activities being perpetrated by Pakistani forces, US Secretary of State Henry Kissinger and US President Richard Nixon did nothing to discourage then Pakistani President Yahya Khan and the Pakistan Army. Kissinger was particularly concerned about Soviet expansion into South Asia as a result of a treaty of friendship that had recently been signed between India and the Soviet Union, and sought to demonstrate to the People's Republic of China the value of a tacit alliance with the United States. During the Indo-Pakistani War of 1971, the Indian Armed Forces, along with the Mukti Bahini, succeeded in liberating East Pakistan, which soon declared independence. Nixon feared that an Indian invasion of West Pakistan would mean total Soviet domination of the region, and that it would seriously undermine the global position of the United States and the regional position of America's new tacit ally, China. To demonstrate to China the "bona fides" of the United States as an ally, and in direct violation of the Congress-imposed sanctions on Pakistan, Nixon sent military supplies to Pakistan, routing them through Jordan and Iran, while also encouraging China to increase its arms supplies to Pakistan.
When Pakistan's defeat in the eastern sector seemed certain, Nixon sent the aircraft carrier USS "Enterprise" to the Bay of Bengal, a move deemed by the Indians a nuclear threat. The "Enterprise" arrived on station on 11 December 1971. On 6 and 13 December, the Soviet Navy dispatched two groups of ships, armed with nuclear missiles, from Vladivostok; they trailed US Task Force 74 into the Indian Ocean from 18 December 1971 until 7 January 1972. The Soviets also sent nuclear submarines to ward off the threat posed by USS "Enterprise" in the Indian Ocean.
Though American efforts had no effect in turning the tide of the war, the incident involving USS "Enterprise" is viewed as the trigger for India's subsequent interest in developing nuclear weapons. American policy towards the end of the war was dictated primarily by a need to restrict the escalation of war on the western sector to prevent the 'dismemberment' of West Pakistan. Years after the war, many American writers criticised the White House policies during the war as being badly flawed and ill-serving the interests of the United States. India carried out nuclear tests a few years later, resulting in sanctions being imposed by the United States and further driving the two countries apart. In recent years, Kissinger came under fire for comments made during the Indo-Pakistan War in which he described Indians as "bastards." Kissinger has since expressed his regret over the comments.
Since the end of the Cold War, India-US relations have improved dramatically, fostered largely by the fact that the United States and India are both democracies with a large and growing trade relationship. During the Gulf War, the economy of India went through an extremely difficult phase, and the Government of India adopted liberalising economic reforms. After the break-up of the Soviet Union, India improved diplomatic relations with members of NATO, particularly Canada, France and Germany. In 1992, India established formal diplomatic relations with Israel.
In 1998, India tested nuclear weapons, which resulted in several US, Japanese and European sanctions on India. India's then defence minister, George Fernandes, said that India's nuclear programme was necessary as it provided a deterrent against potential nuclear threats. Most of the sanctions imposed on India were removed by 2001. India has categorically stated that it will never use nuclear weapons first, but will defend itself if attacked.
The economic sanctions imposed by the United States in response to India's nuclear tests in May 1998 appeared, at least initially, to seriously damage Indo-American relations. President Bill Clinton imposed wide-ranging sanctions pursuant to the 1994 Nuclear Proliferation Prevention Act, targeting Indian entities involved in the nuclear industry and opposing international financial institution loans for non-humanitarian assistance projects in India. The United States encouraged India to sign the Comprehensive Nuclear-Test-Ban Treaty (CTBT) immediately and without condition, and also called for restraint in missile and nuclear testing and deployment by both India and Pakistan. The non-proliferation dialogue initiated after the 1998 nuclear tests has bridged many of the gaps in understanding between the countries.
Diplomatic relations between India and Venezuela were established on 1 October 1959. India maintains an embassy in Caracas, while Venezuela maintains an embassy in New Delhi.
There have been several visits by heads of state and government, and other high-level officials between the countries. President Hugo Chávez visited New Delhi on 4–7 March 2005. Chávez met with Indian President APJ Abdul Kalam and Prime Minister Manmohan Singh. The two countries signed six agreements including one to establish a Joint Commission to promote bilateral relations and another on cooperation in the hydrocarbon sector. Foreign Minister Nicolás Maduro visited India to attend the First Meeting of the India-CELAC Troika Foreign Ministers meeting in New Delhi on 7 August 2012.
The Election Commission of India (ECI) and the National Electoral Council (CNE) of Venezuela signed an MoU during a visit by Indian Election Commissioner V S Sampath to Caracas in 2012. The Minister of State for Corporate Affairs visited Venezuela to attend the state funeral of President Chávez in March 2013. The President and Prime Minister of India expressed condolences on the death of Chávez, and the Rajya Sabha, the upper house of Parliament, observed a minute's silence to mark his death. Ambassador Smita Purushottam represented India at the swearing-in ceremony of Chávez's successor Nicolás Maduro on 19 April 2013.
Citizens of Venezuela are eligible for scholarships under the Indian Technical and Economic Cooperation Programme and the Indian Council for Cultural Relations.
The first contacts between the two civilisations date back some 2,500 years, to the 5th century BC. In modern times, India recognised Armenia on 26 December 1991. India has an embassy in Yerevan. Since 1999, Armenia has had an embassy in New Delhi and two honorary consulates, in Mumbai and Chennai.
Armenia recognises Jammu and Kashmir as part of India and not of Pakistan. Armenia supports India's bid for a permanent seat in the UNSC.
Austria–India relations refers to the bilateral ties between Austria and India. Indo-Austrian relations were established in May 1949 by the first Prime Minister of India, Jawaharlal Nehru, and the Chancellor of Austria, Leopold Figl. Historically, Indo-Austrian ties have been particularly strong, and India intervened in June 1953 in Austria's favour whilst negotiations were going on with the Soviet Union about the Austrian State Treaty. There is a fully functioning Indian embassy in Vienna, Austria's capital, which is concurrently accredited to the United Nations offices in the city. Austria is represented in India by its embassy and trade commission in New Delhi, India's capital, as well as honorary consulates in Mumbai, Kolkata, Chennai and Goa.
Czech-Indian relations were established in 1921 with a consulate in Bombay. The Czech Republic has an embassy in New Delhi and consulates in Chennai, Mumbai and Kolkata. India has an embassy in Prague.
Denmark has an embassy in New Delhi, and India has an embassy in Copenhagen.
Tranquebar, a town in the southern Indian state of Tamil Nadu, was a Danish colony in India from 1620 to 1845. It is spelled "Trankebar" or "Tranquebar" in Danish, which comes from the native Tamil, Tarangambadi, meaning "place of the singing waves". It was sold, along with the other Danish settlements in mainland India, most notably Serampore (now in West Bengal), to Great Britain in 1845. The Nicobar Islands were also colonised by Denmark, until sold to the British in 1868, who made them part of their colony of British India.
After Independence in 1947, Indian prime minister Jawaharlal Nehru's visit to Denmark in 1957 laid the foundation for a friendly relationship between India and Denmark that has endured ever since. The bilateral relations between India and Denmark are cordial and friendly, based on synergies in political, economic, academic and research fields. There have been periodic high level visits between the two countries.
Anders Fogh Rasmussen, former Prime Minister of Denmark, accompanied by a large business delegation, paid a state visit to India from 4 to 8 February 2008. He visited Infosys, Biocon and IIM Bangalore in Bangalore, as well as Agra. He launched an 'India Action Plan', which called for strengthening the political dialogue and co-operation in trade and investment, research in science and technology, energy, climate and environment, culture, education and student exchanges, and for attracting skilled manpower and IT experts to Denmark for short periods. The two countries signed an agreement establishing a Bilateral Joint Commission for Cooperation.
In July 2012, the Government of India decided to scale down its diplomatic ties with Denmark after that country's refusal to appeal in its Supreme Court against a lower-court decision rejecting the extradition of Kim Davy (a.k.a. Niels Holck), the prime accused in the Purulia arms drop case. Agitated over Denmark's refusal to act on India's repeated requests to appeal in its apex court to facilitate Davy's extradition, the government issued a circular directing all senior officials not to meet or entertain any Danish diplomat posted in India.
India's first recognition of Estonia came on 22 September 1921, when Estonia had just been admitted to the League of Nations. India re-recognised Estonia on 9 September 1991, and diplomatic relations were established on 2 December of the same year in Helsinki. Neither country has a resident ambassador. Estonia is represented in India by two honorary consulates (in Mumbai and New Delhi); India is represented in Estonia through its embassy in Helsinki, Finland, and through an honorary consulate in Tallinn.
France and India established diplomatic relationships soon after India's independence from the United Kingdom in 1947. France's Indian possessions were returned to India after a treaty of cession was signed by the two countries in May 1956. On 16 August 1962, India and France exchanged the instruments of ratification under which France ceded to India full sovereignty over the territories it held. Pondicherry and the other enclaves of Karaikal, Mahe and Yanam came to be administered as the Union Territory of Puducherry from 1 July 1963.
France, Russia and Israel were the only countries that did not condemn India's decision to go nuclear in 1998. In 2003, France became the largest supplier of nuclear fuel and technology to India and remains a large military and economic trade partner. India's candidacy for permanent membership in the UN Security Council found very strong support from former French President Nicolas Sarkozy. The Indian Government's decisions to purchase French submarines worth US$3 billion and 43 Airbus aircraft for Air India worth US$2.5 billion have further cemented the strategic, military and economic co-operation between India and France.
France's decision to ban schoolchildren from wearing head-dresses and veils had the unintended consequence of affecting Sikh children, who have been refused entry to public schools. The Indian Government, citing the historic traditions of the Sikh community, has requested that French authorities review the situation so as not to exclude Sikh children from education.
Nicolas Sarkozy visited India in January 2008 and was the Chief Guest of the Republic Day parade in New Delhi. France was the first country to sign a nuclear energy co-operation agreement with India; this was done during Prime Minister Singh's visit, following the waiver by the Nuclear Suppliers Group. During the Bastille Day celebrations on 14 July 2009, a detachment of 400 Indian troops marched alongside the French troops and the then Indian Prime Minister Manmohan Singh was the guest of honour.
During the Cold War India maintained diplomatic relations with both West Germany and East Germany. Since the fall of the Berlin Wall, and the reunification of Germany, relations have further improved.
Germany is India's largest trade partner in Europe. Between 2004 and 2013, Indo-German trade grew in volume but dropped in relative importance. According to Indian Ministry of Commerce data, total trade between India and Germany was $5.5 billion (a 3.8% share of Indian trade, ranked 6th) in 2004 and $21.6 billion (a 2.6% share, ranked 9th) in 2013. Indian exports to Germany were $2.54 billion (3.99%, ranked 6th) in 2004 and $7.3 billion (2.41%, ranked 10th) in 2013. Indian imports from Germany were $2.92 billion (3.73%, ranked 6th) in 2004 and $14.33 billion (2.92%, ranked 10th) in 2013.
Indo-German ties are transactional. The strategic relationship between Germany and India suffers from sustained anti-Asian sentiment, institutionalised discrimination against minority groups, and xenophobic incidents against Indians in Germany. The 2007 Mügeln mob attack on Indians and the 2015 Leipzig University internship controversy have clouded the predominantly commercial relationship between the two countries. Stiff competition among foreign manufactured goods within the Indian market has seen machine-tools, automotive parts and medical supplies from the German "Mittelstand" ceding ground to high-technology imports manufactured by companies located in ASEAN and BRICS countries. The Volkswagen emissions scandal drew the spotlight to corrupt behaviour in German boardrooms and brought back memories of the HDW bribery scandal surrounding the procurement of submarines by the Indian Navy. The India-Germany strategic relationship is further limited by Germany's lack of geopolitical influence and strategic footprint in Asia. Germany, like India, is working towards gaining a permanent seat in the United Nations Security Council.
In modern time, diplomatic relations between Greece and India were established in May 1950. The new Greek Embassy building in New Delhi was inaugurated on 6 February 2001.
Iceland and India established diplomatic relations in 1972. The Embassy of Iceland in London was accredited to India and the Embassy of India in Oslo, Norway, was accredited to Iceland. However, it was only after 2003 that the two countries began a close diplomatic and economic relationship. In 2003, President of Iceland Ólafur Ragnar Grímsson visited India on a diplomatic mission, the first visit by an Icelandic President to India. During the visit, Iceland pledged support to New Delhi's candidature for a permanent seat in the United Nations Security Council, thus becoming the first Nordic country to do so. This was followed by an official visit of President of India A. P. J. Abdul Kalam to Iceland in May 2005. A new embassy of Iceland was opened in New Delhi on 26 February 2006, and soon afterwards an Indian Navy team visited Iceland on a friendly mission. Gunnar Pálsson is the ambassador of Iceland to India. The Embassy's area of accreditation, apart from India, includes Bangladesh, Indonesia, the Seychelles, Singapore, Sri Lanka, Malaysia, Maldives, Mauritius and Nepal. India appointed S. Swaminathan as its first resident ambassador to Iceland in March 2008.
Indo-Irish relations picked up steam during the two countries' freedom struggles against a common imperial power, the United Kingdom. Political relations between the two states have largely been based on socio-cultural ties, although political and economic ties have also helped build relations. Indo-Irish relations were greatly strengthened by such luminaries as Pandit Nehru, Éamon de Valera, Rabindranath Tagore, W. B. Yeats, James Joyce, and, above all, Annie Besant. Politically, relations have been neither cold nor warm. Mutual benefit has led to economic ties that are fruitful for both states, and visits by government leaders at regular intervals have kept relations cordial.
India maintains an embassy in Rome and a consulate-general in Milan. Italy has an embassy in New Delhi and consulates-general in Mumbai and Calcutta.
Indo-Italian relations have historically been cordial. In recent times, their state has mirrored the political fortunes of Sonia Maino-Gandhi, the Italian-born leader of the Indian National Congress and "de facto" leader of the UPA government of Manmohan Singh.
Since 2012 the relationship has been affected by the ongoing Enrica Lexie case: two Indian fishermen were killed on the Indian fishing vessel "St. Antony" as a result of gunshot wounds following a confrontation with the Italian oil tanker "Enrica Lexie" in international waters, off the Kerala coast.
After a period of tensions, in 2017 Italian Prime Minister Paolo Gentiloni visited India and met his Indian counterpart Narendra Modi; they held extensive talks in order to strengthen the political cooperation and to boost the bilateral trade.
There are around 150,000 people of Indian origin living in Italy. Around 1,000 Italian citizens reside in India, mostly working on behalf of Italian industrial groups.
Relations were established in 1947, following India's independence. Luxembourg operates an embassy in New Delhi whilst India operates a Consulate General in Luxembourg City. Bilateral trade stood at US$37 million in 2014 and continues to grow every year. Diplomats from both countries have visited the other several times. In 2019, Luxembourg plans to host the annual meeting of the Asian Infrastructure Investment Bank and open an economic mission in India.
Both countries established diplomatic relations in March 1993.
India–Netherlands relations refer to foreign relations between India and the Netherlands. India maintains an embassy in The Hague, Netherlands and the Netherlands maintains an embassy in New Delhi and a consulate general in Mumbai. Both countries established diplomatic relations in 1947.
Both countries established diplomatic relations in 1996.
In 2012, Trond Giske met with Minister of Finance Pranab Mukherjee in an effort to protect Telenor's investments, putting forth Norway's "strong wish" that there be no waiting period between the confiscation of telecom licences and their re-sale. The head of Telenor attended the meeting.
Diplomatic ties with Spain started in 1956, and the first Spanish embassy was established in Delhi in 1958. India and Spain have had a cordial relationship, especially since the establishment of democracy in Spain in 1978. Spain has been a major tourist destination for Indians over the years, and several Indian presidents, including Pratibha Patil, have visited Spain.
The Spanish royal family has long appreciated the humble nature of the Indian government and has paid several visits to India.
There were no direct flights from India to Spain until 1986, when Iberia began flying directly from Mumbai to Madrid; however, the service was discontinued after 22 months. In 2006 the issue of direct flights was reconsidered in order to improve ties between India and Spain. The film "Zindagi Na Milegi Dobara" was shot entirely in Spain in 2011, and Spain's tourism ministry has used the film to promote Spanish tourism in India.
India is one of Switzerland's most important partners in Asia. Bilateral and political contacts are constantly developing, and trade and scientific co-operation between the two countries are flourishing. Switzerland was the first country in the world to sign a friendship treaty with India, in 1947.
Diplomatic relations between India and Ukraine were established in January 1992. The Indian Embassy in Kiev was opened in May 1992 and Ukraine opened its mission in New Delhi in February 1993. The Consulate General of India in Odessa functioned from 1962 till its closure in March 1999.
India has a high commission in London and two consulates-general in Birmingham and Edinburgh. The United Kingdom has a high commission in New Delhi and five deputy high commissions in Mumbai, Chennai, Bangalore, Hyderabad and Kolkata. Since 1947, India's relations with the United Kingdom have been conducted bilaterally as well as through the Commonwealth of Nations framework. Although the Sterling Area no longer exists and the Commonwealth is much more an informal forum, India and the UK still have many enduring links. This is in part due to the significant number of people of Indian origin living in the UK. The large South Asian population in the UK results in steady travel and communication between the two countries. The British Raj allowed both cultures to imbibe tremendously from each other. The English language and cricket are perhaps the two most evident British exports, whilst in the UK food from the Indian subcontinent is very popular. The United Kingdom's favourite food is often reported to be Indian cuisine, although no official study reports this.
Economically the relationship between Britain and India is also strong. India is the second largest investor in Britain after the US. Britain is also one of the largest investors in India.
Formal bilateral relations between India and the Vatican City have existed since 12 June 1948. An Apostolic Delegation existed in India from 1881. The Holy See has a nunciature in New Delhi whilst India has accredited its embassy in Bern, Switzerland to the Holy See as well. India's Ambassador in Bern has traditionally been accredited to the Holy See.
The connections between the Catholic Church and India can be traced back to the apostle St. Thomas, who, according to tradition, came to India in 52 CE. In the 9th century, the patriarch of the Nestorians in Persia sent bishops to India. There is a record of an Indian bishop visiting Rome in the early part of the 12th century.
The diplomatic mission was established as the Apostolic Delegation to the East Indies in 1881 and included Ceylon; it was extended to Malacca in 1889, then to Burma in 1920, and eventually included Goa in 1923. It was raised to an Internunciature by Pope Pius XII on 12 June 1948 and to a full Apostolic Nunciature by Pope Paul VI on 22 August 1967.
There have been three Papal visits to India. The first Pope to visit India was Pope Paul VI, who visited Mumbai in 1964 to attend the Eucharistic Congress. Pope John Paul II visited India in February 1986 and November 1999. Several Indian dignitaries have, from time to time, called on the Pope in the Vatican. These include Prime Minister Indira Gandhi in 1981 and Prime Minister I. K. Gujral in September 1987. Prime Minister Atal Bihari Vajpayee called on the Pope in June 2000 during his official visit to Italy. Vice-President Bhairon Singh Shekhawat represented the country at the funeral of Pope John Paul II.
India was one of the first countries to develop relations with the European Union. The Joint Political Statement of 1993 and the 1994 Co-operation Agreement were the foundational agreements for the bilateral partnership. In 2004, India and the European Union became "Strategic Partners". A Joint Action Plan was agreed upon in 2005 and updated in 2008. India-EU Joint Statements were published in 2009 and 2012 following the India-European Union Summits.
India and the European Commission initiated negotiations on a "Broad-based Trade and Investment Agreement" (BTIA) in 2007. Seven rounds of negotiations have been completed without reaching a Free Trade Agreement.
According to the Government of India, trade between India and the EU was $57.25 billion between April and October 2014 and stood at $101.5 billion for the fiscal period of 2014–2015.
The European Union is India's second largest trading bloc, accounting for around 20% of Indian trade (Gulf Cooperation Council is the largest trading bloc with almost $160 billion in total trade). India was the European Union's 8th largest trading partner in 2010. EU-India trade grew from €28.6 billion in 2003 to €72.7 billion in 2013.
France, Germany and the UK collectively represent the major part of EU-India trade. Annual trade in commercial services tripled from €5.2 billion in 2002 to €17.9 billion in 2010.
Denmark, Sweden, Finland and the Netherlands are the other more prominent European Union countries who trade with India.
India and the Arab states of the Persian Gulf enjoy strong cultural and economic ties. This is reflected in the fact that more than 50% of the oil consumed by India comes from the Persian Gulf countries and Indian nationals form the largest expatriate community in the Arabian peninsula. The annual remittance by Indian expatriates in the region amounted to US$20 billion in 2007. India is one of the largest trading partners of the CCASG with non-oil trade between India and Dubai alone amounting to US$19 billion in 2007. The Persian Gulf countries have also played an important role in addressing India's energy security concerns, with Saudi Arabia and Kuwait regularly increasing their oil supply to India to meet the country's rising energy demand. In 2005, Kuwait increased its oil exports to India by 10% increasing the net oil trade between the two to US$4.5 billion. In 2008, Qatar decided to invest US$5 billion in India's energy sector.
India has maritime security arrangements in place with Oman and Qatar. In 2008, a landmark defence pact was signed, under which India committed its military assets to protect "Qatar from external threats".
There has been progress in a proposed deep-sea gas pipeline from Qatar, via Oman, to India.
India is a close ally of Bahrain. The Kingdom, along with its GCC partners, is (according to Indian officials) among the most prominent backers of India's bid for a permanent seat on the UN Security Council, and Bahraini officials have urged India to play a greater role in international affairs. For instance, over concerns about Iran's nuclear programme, Bahrain's Crown Prince appealed to India to play an active role in resolving the crisis.
Ties between India and Bahrain go back generations, with many of Bahrain's most prominent figures having close connections: poet and constitutionalist Ebrahim Al-Arrayedh grew up in Bombay, while 17th century Bahraini theologians Sheikh Salih Al-Karzakani and Sheikh Ja'far bin Kamal al-Din were influential figures in the Kingdom of Golkonda and the development of Shia thought in the sub-continent.
Bahraini politicians have sought to enhance these long-standing ties, with Parliamentary Speaker Khalifa Al Dhahrani in 2007 leading a delegation of parliamentarians and business leaders to meet the then Indian President Pratibha Patil and the then opposition leader L K Advani, and to take part in training and media interviews. Politically, it is easier for Bahrain's politicians to seek training and advice from India than from the United States or other western alternatives.
Adding further strength to the ties, Sheikh Hamad Bin Isa Al-Khalifa visited India during which MOUs and bilateral deals worth $450 million were approved. India expressed its support for Bahrain's bid for a non-permanent seat in the UNSC in 2026–27.
Modern Egypt-India relations go back to the contacts between Saad Zaghloul and Mohandas Gandhi on the common goals of their respective movements of independence. In 1955, Egypt under Gamal Abdul Nasser and India under Jawaharlal Nehru became founders of the Non-Aligned Movement. During the 1956 War, Nehru stood by Egypt to the point of threatening to withdraw his country from the Commonwealth of Nations. In 1967, following the Arab–Israeli conflict, India supported Egypt and the Arabs. In 1977, New Delhi described the visit of President Anwar al-Sadat to Jerusalem as a "brave" move and considered the peace treaty between Egypt and Israel a primary step on the path of a just settlement of the Middle East problem. Major Egyptian exports to India include raw cotton, raw and manufactured fertilisers, oil and oil products, organic and non-organic chemicals, and leather and iron products. Major imports into Egypt from India are cotton yarn, sesame, coffee, herbs, tobacco, lentils, pharmaceutical products and transport equipment. The Egyptian Ministry of Petroleum is also currently negotiating the establishment of a natural gas-operated fertiliser plant with another Indian company. In 2004, the Gas Authority of India Limited bought 15% of the Egypt Nat Gas distribution and marketing company. In 2008, Egyptian investment in India was worth some 750 million dollars, according to the Egyptian ambassador.
After the Arab Spring of 2011 and the ousting of Hosni Mubarak, Egypt asked for India's help in conducting nationwide elections.
Independent India and Iran established diplomatic links on 15 March 1950.
After the Iranian Revolution of 1979, Iran withdrew from CENTO and dissociated itself from US-friendly countries, including Pakistan, which in turn meant improved relations with the Republic of India.
Currently, the two countries have friendly relations in many areas. There are significant trade ties, particularly in crude oil imports into India and diesel exports to Iran. Iran frequently objected to Pakistan's attempts to draft anti-India resolutions at international organisations such as the OIC. India welcomed Iran's inclusion as an observer state in the SAARC regional organisation. Lucknow continues to be a major centre of Shiite culture and Persian study in the subcontinent.
In the 1990s, India and Iran both supported the Northern Alliance in Afghanistan against the Taliban regime. They continue to collaborate in supporting the broad-based anti-Taliban government led by Hamid Karzai and backed by the United States.
However, one complex issue in Indo-Iran relations is Iran's nuclear programme, on which India tries to strike a delicate balance. According to Rejaul Laskar, an Indian expert on international relations, "India's position on Iran's nuclear programme has been consistent, principled and balanced, and makes an endeavour to reconcile Iran's quest for energy security with the international community's concerns on proliferation. So, while India acknowledges and supports Iran's ambitions to achieve energy security and in particular, its quest for peaceful use of nuclear energy, it is also India's principled position that Iran must meet all its obligations under the international law, particularly its obligations under the nuclear Non Proliferation Treaty (NPT) and other such treaties to which it is a signatory."
Following an attack on an Israeli diplomat in India in February 2012, the Delhi Police contended that the Iranian Revolutionary Guard Corps had some involvement in the attack. This was subsequently confirmed in July 2012, after a report by the Delhi Police found evidence that members of Iranian Revolutionary Guard Corps had been involved in the 13 February bomb attack in the capital.
Iraq was one of the few countries in the Middle East with which India established diplomatic relations at the embassy level immediately after its independence in 1947. Both nations signed the "Treaty of Perpetual Peace and Friendship" in 1952 and an agreement of co-operation on cultural affairs in 1954. India was amongst the first to recognise the Ba'ath Party-led government, and Iraq remained neutral during the Indo-Pakistani War of 1965. However, Iraq sided alongside other Persian Gulf states in supporting Pakistan against India during the Indo-Pakistani War of 1971, which saw the creation of Bangladesh. The eight-year-long Iran–Iraq War caused a steep decline in trade and commerce between the two nations.
During the 1991 Persian Gulf War, India remained neutral but permitted refuelling for US aircraft. It opposed UN sanctions on Iraq, but the period of war and Iraq's isolation further diminished India's commercial and diplomatic ties. From 1999 onwards, Iraq and India began to work towards a stronger relationship. Iraq had supported India's right to conduct nuclear tests following its tests of five nuclear weapons on 11 and 13 May 1998. In 2000, the then-Vice-President of Iraq Taha Yassin Ramadan visited India, and on 6 August 2002 President Saddam Hussein conveyed Iraq's "unwavering support" to India over the Kashmir conflict with Pakistan. India and Iraq established joint ministerial committees and trade delegations to promote extensive bilateral co-operation. Although initially disrupted during the 2003 invasion of Iraq, diplomatic and commercial ties between India and the new democratic government of Iraq have since been normalised.
The establishment of Israel at the end of World War II was a complex issue for India. Based on its own experience during partition, when 14 million people were displaced and an estimated 200,000 to 500,000 were killed in Punjab Province, India had recommended a single state, as did Iran and Yugoslavia (later to undergo its own genocidal partition). Such a state could have contained Arab- and Jewish-majority provinces, with the goal of preventing the partition of historic Palestine and averting widespread conflict. However, the final UN resolution recommended the partition of Mandatory Palestine into Arab and Jewish states based on religious and ethnic majorities. India opposed this in the final vote, as it did not agree with the concept of partition on the basis of religion.
Due to the security threat from a US-backed Pakistan and its nuclear programme in the 1980s, Israel and India started a clandestine relationship that involved co-operation between their respective intelligence agencies. Israel shared India's concerns about the growing danger posed by Pakistan and nuclear proliferation to Iran and other Arab states. After the end of the Cold War, formal relations with Israel started improving significantly.
Since the establishment of full diplomatic relations with Israel in 1992, India has improved its relation with the Jewish state. India is regarded as Israel's strongest ally in Asia, and Israel is India's second-largest arms supplier. Since India achieved its independence in 1947, it has supported Palestinian self-determination. India recognised Palestine's statehood following Palestine's declaration on 18 November 1988 and Indo-Palestinian relations were first established in 1974. This has not adversely affected India's improved relations with Israel.
India has entertained the Israeli Prime Minister in a visit in 2003, and Israel has entertained Indian dignitaries such as Finance Minister Jaswant Singh in diplomatic visits. India and Israel collaborate in scientific and technological endeavours. Israel's Minister for Science and Technology has expressed interest in collaborating with the Indian Space Research Organisation (ISRO) towards using satellites to better manage land and other resources. Israel has also expressed interest in participating in ISRO's Chandrayaan Mission involving an unmanned mission to the moon. On 21 January 2008, India successfully launched an Israeli spy satellite into orbit from Sriharikota space station in southern India.
Israel and India share intelligence on terrorist groups. They have developed close defence and security ties since establishing diplomatic relations in 1992. India has bought more than $5 billion worth of Israeli equipment since 2002. In addition, Israel is training Indian military units and in 2008 was discussing an arrangement to give Indian commandos instruction in counter-terrorist tactics and urban warfare. In December 2008, Israel and India signed a memorandum to set up an Indo-Israel Legal Colloquium to facilitate discussions and exchange programmes between judges and jurists of the two countries.
Following the Israeli invasion of Lebanon in 2006, India stated that the Israeli use of force was "disproportionate and excessive."
India and Lebanon enjoy cordial and friendly relations based on many complementarities, such as political systems based on parliamentary democracy, non-alignment, human rights, commitment to a just world order, regional and global peace, a liberal market economy and a vibrant entrepreneurial spirit. India contributes a peacekeeping force to the United Nations Interim Force in Lebanon (UNIFIL): one infantry battalion of about 900 personnel is stationed in the eastern part of southern Lebanon. The force has also provided non-patrol aid to citizens.
India and Lebanon have enjoyed very good relations since the 1950s.
India–Oman relations are foreign relations between India and the Sultanate of Oman. India has an embassy in Muscat, Oman. The Indian consulate was opened in Muscat in February 1955 and five years later it was upgraded to a consulate general and later developed into a full-fledged embassy in 1971. The first Ambassador of India arrived in Muscat in 1973. Oman established its embassy in New Delhi in 1972 and a consulate general in Mumbai in 1976.
Plans for a $5.6 billion Oman–India energy pipeline are progressing: the Fox Petroleum Group envisions a roughly five-year timeframe for the execution of the pipeline project.
Ajay Kumar, the chairman and managing director of Fox Petroleum, based in New Delhi, which is an associate company of Fox Petroleum FZC in the UAE, said that Mr Modi had "fired the best weapon of economic development and growth". "He has given a red carpet for global players to invest in India," Mr Kumar added. "It will boost all sectors of industry – especially for small-scale manufacturing units and heavy industries too."
After India achieved its independence in 1947, it moved to support Palestinian self-determination following the partition of British India. In light of the religious partition between India and Pakistan, the impetus to boost ties with Muslim states around the world further reinforced India's support for the Palestinian cause. Though that support started to waver in the late 1980s and 1990s, as the recognition of Israel led to diplomatic exchanges, support for the Palestinian cause remained an underlying concern.
Beyond the recognition of Palestinian self-determination, ties have been largely dependent upon socio-cultural bonds, while economic relations have been neither cold nor warm.
India recognised Palestine's statehood following its declaration on 18 November 1988, although relations were first established in 1974.
PNA President Abbas paid a State visit to India in September 2012, during which India pledged $10 million as aid. Indian officials said it was the third such donation, adding that New Delhi was committed to helping other development projects. India also pledged support to Palestine's bid for full and equal membership of the UN.
Bilateral relations between India and Saudi Arabia have strengthened considerably owing to co-operation in regional affairs and trade. Saudi Arabia is one of the largest suppliers of oil to India, which is one of Saudi Arabia's top seven trading partners and its fifth-biggest investor.
India was one of the first nations to establish ties with the Third Saudi State. During the 1930s, India heavily funded Nejd through financial subsidies.
India's strategic relations with Saudi Arabia have been affected by the latter's close ties with Pakistan. Saudi Arabia supported Pakistan's stance on the Kashmir conflict and during the Indo-Pakistani War of 1971 at the expense of its relations with India. The Soviet Union's close relations with India also served as a source of consternation. During the Persian Gulf War (1990–91), India officially maintained neutrality. Saudi Arabia's close military and strategic ties with Pakistan have also been a source of continuing strain.
Since the 1990s, both nations have taken steps to improve ties. Saudi Arabia has supported granting observer status to India in the Organisation of Islamic Cooperation (OIC) and has expanded its co-operation with India to fight terrorism. In January 2006, King Abdullah of Saudi Arabia made a special visit to India, becoming the first Saudi monarch in 51 years to do so. The Saudi king and former Prime Minister of India Manmohan Singh signed an agreement forging a strategic energy partnership that was termed the "Delhi Declaration". The pact provides for a "reliable, stable and increased volume of crude oil supplies to India through long-term contracts." Both nations also agreed on joint ventures and the development of oil and natural gas in public and private sectors. An Indo-Saudi joint declaration in the Indian capital New Delhi described the king's visit as "heralding a new era in India-Saudi Arabia relations."
Bilateral relations between India and Syria are historic, and the two countries have ancient civilizational ties. Both lay on the Silk Road, through which civilizational exchanges took place for centuries.
Syriac Christianity, originating in ancient Syria, spread further to the East and created the first Christian communities in ancient India.
Due to controversial issues such as Turkey's close relationship with Pakistan, relations between the two countries have been strained at certain times and better at others. India and Turkey's relationship alternates between wariness and collaboration, as the two nations work together to combat terrorism in Central and South Asia and the Middle East. India and Turkey are also connected by history, having known each other since the days of the Ottoman Empire, and India was one of the countries to send aid to Turkey following its war of independence. The Indian firm GMR has invested in and is working towards the modernisation of Istanbul's Sabiha Gökçen International Airport.
The dissolution of the Soviet Union and the emergence of the Commonwealth of Independent States (CIS) had major repercussions for Indian foreign policy. Substantial trade with the former Soviet Union plummeted after the Soviet collapse and has yet to recover. Longstanding military supply relationships were similarly disrupted due to questions over financing, although Russia continues to be India's largest supplier of military systems and spare parts.
The relationship with USSR was tested (and proven) during the 1971 war with Pakistan, which led to the subsequent liberation of Bangladesh. Soon after the victory of the Indian Armed Forces, one of the foreign delegates to visit India was Admiral S.G. Gorshkov, Chief of the Soviet Navy. During his visit to Mumbai (Bombay) he came on board INS "Vikrant". During a conversation with Vice Admiral Swaraj Prakash, Gorshkov asked the Vice Admiral, "Were you worried about a battle against the American carrier?" He answered himself: "Well, you had no reason to be worried, as I had a Soviet nuclear submarine trailing the American task force all the way into the Indian Ocean."
India's ties with the Russian Federation are time-tested and based on continuity, trust and mutual understanding. There is national consensus in both countries on the need to preserve and strengthen India-Russia relations and further consolidate the strategic partnership between the two countries. A Declaration on Strategic Partnership was signed by Russian President Vladimir Putin and Indian Prime Minister Atal Bihari Vajpayee in October 2000.
Russia and India decided not to renew the 1971 Indo-Soviet Peace and Friendship Treaty and have sought to follow what both describe as a more pragmatic, less ideological relationship. Russian President Yeltsin's visit to India in January 1993 helped cement this new relationship, and ties grew stronger with President Vladimir Putin's 2004 visit. The pace of high-level visits has since increased, as has discussion of major defence purchases. Russia is working on the development of the Kudankulam Nuclear Power Plant, which will be capable of producing 1000 MW of electricity, while Gazprom is working on the development of oil and natural gas in the Bay of Bengal. India and Russia have collaborated extensively on space technology; other areas of collaboration include software and ayurveda. The two countries have set a target of increasing trade to $10 billion. Cooperation between clothing manufacturers of the two countries continues to strengthen: India and Russia signed an agreement on joint efforts to increase investment and trade volumes in the textile industry of both countries. The signatories included representatives of the Russian Union of Entrepreneurs of Textile and Light Industry and the Apparel Export Promotion Council of India (AEPC). The agreement provides, inter alia, for the exchange of technology and know-how in textile production, and a special Textile Communication Committee has been set up for this purpose. Counter-terrorism co-operation is also in place between Russia and India. President Vladimir Putin was the guest of honour at the Republic Day celebration on 26 January 2007, and 2008 was declared by both countries the Russia-India Friendship Year. Bollywood films are quite popular in Russia. The Indian public sector oil company ONGC bought Imperial Energy Corporation in 2008. In December 2008, during President Medvedev's visit to New Delhi, India and Russia signed a nuclear energy co-operation agreement.
In March 2010, Russian Prime Minister Vladimir Putin signed an additional 19 pacts with India, covering civilian nuclear energy, space and military co-operation and the final sale of the aircraft carrier Admiral Gorshkov along with MiG-29K fighter jets.
During the 2014 Crimean crisis, India refused to support American sanctions against Russia, and India's National Security Adviser Shivshankar Menon was reported to have said, "There are legitimate Russian and other interests involved and we hope they are discussed and resolved."
From 7 August 2014, India and Russia held a joint counter-terrorism exercise near Russia's boundary with China and Mongolia, involving the use of tanks and armoured vehicles.
India and Russia have so far conducted three rounds of the INDRA exercises. The first was carried out in 2005 in Rajasthan, followed by a second at Pskov in Russia. The third was conducted at Chaubattia in the Kumaon hills in October 2010.
India is working towards developing strong relations with this resource-rich Central Asian country. The Indian oil company Oil and Natural Gas Corporation has been granted oil exploration and petroleum development rights in Kazakhstan. The two countries are collaborating in petrochemicals, information technology and space technology, and Kazakhstan has offered India five blocks for oil and gas exploration. India and Kazakhstan are to set up joint projects in construction, minerals and metallurgy. India also signed four other pacts, including an extradition treaty, in the presence of President Pratibha Patil and her Kazakh counterpart Nursultan Nazarbayev. Kazakhstan will provide uranium and related products under an MoU between Nuclear Power Corp. of India and KazatomProm. The MoU also opens possibilities of joint exploration of uranium in Kazakhstan, which has the world's second largest reserves, and of India building atomic power plants in the Central Asian country.
Relations between India and Mongolia are still at a nascent stage, and Indo-Mongolian co-operation is limited to diplomatic visits, the provision of soft loans and financial aid, and collaboration in the IT sector.
India established diplomatic relations with Mongolia in December 1955, becoming the first country outside the Soviet bloc to do so. Since then, the two countries have signed treaties of mutual friendship and co-operation in 1973, 1994, 2001 and 2004.
Diplomatic relations were established between India and Tajikistan following Tajikistan's independence after the 1991 dissolution of the Soviet Union, which had been friendly with India. Tajikistan occupies a strategically important position in Central Asia, bordering Afghanistan and the People's Republic of China, and separated from Pakistan by a small strip of Afghan territory. India's role in fighting the Taliban and Al-Qaeda and its strategic rivalry with both China and Pakistan have made its ties with Tajikistan important to its strategic and security policies. Despite their common efforts, bilateral trade has been comparatively low, valued at USD 12.09 million in 2005; India's exports to Tajikistan were valued at USD 6.2 million and its imports at USD 5.89 million. India's military presence and activities have been significant, beginning with India's extensive support to the anti-Taliban Afghan Northern Alliance (ANA). India began renovating the Farkhor Air Base and stationed aircraft of the Indian Air Force there. The Farkhor Air Base became fully operational in 2006, and 12 MiG-29 fighters and trainer aircraft are planned to be stationed there.
India has an embassy in Tashkent, and Uzbekistan has an embassy in New Delhi. Uzbekistan has had a great impact on Indian culture, mostly through the Mughal Empire, which was founded by Babur of Ferghana (in present-day Uzbekistan), who expanded his empire southward, first into Afghanistan and then into India.
As of 2011, India's total trade with Africa was over US$46 billion, with total investment of over US$11 billion and a US$5.7 billion line of credit for executing various projects in Africa.
India has had good relationships with most sub-Saharan African nations for most of its history. During the Prime Minister's visit to Mauritius in 1997, the two countries concluded a new credit agreement of INR 105 million (US$3 million) to finance the import by Mauritius of capital goods, consultancy services and consumer durables from India. The government of India also secured a rice and medicine agreement with Seychelles. India continued to build upon its historically close relations with Ethiopia, Kenya, Uganda and Tanzania. Visits by Ethiopian ministers provided opportunities for strengthening bilateral co-operation between the two countries in the fields of education and technical training, water resources management and the development of small industries. This has allowed India to gain benefits from nations that are often overlooked by Western nations. The South African President, Thabo Mbeki, has called for a strategic relationship between India and South Africa to avoid imposition by Western nations. India continued to build upon its close and friendly relations with Angola, Botswana, Lesotho, Malawi, Mozambique, Namibia, Swaziland, Zambia and Zimbabwe. The Minister of Foreign Affairs arranged for Special Envoys to be sent to each of these countries during 1996–97 as a reaffirmation of India's commitment to strengthening co-operation with them in a spirit of South-South partnership. These relations have given India a position of strength with African nations that other nations may not possess.
India and Ethiopia have warm bilateral ties based on mutual co-operation and support. India has been a partner in Ethiopia's developmental efforts, training Ethiopian personnel under its ITEC programme, providing it with several lines of credit and launching the Pan-African e-Network Project there in 2007. The Second India–Africa Forum Summit was held in Addis Ababa in 2011. India is also Ethiopia's second largest source of foreign direct investment.
Gabon maintains an embassy in New Delhi. The Embassy of India in Kinshasa, Democratic Republic of Congo is jointly accredited to Gabon.
Relations between Ghana and India are generally close and cordial, with growing economic and cultural connections. Trade between India and Ghana amounted to US$818 million in 2010–11 and is expected to be worth US$1 billion by 2013. Ghana imports automobiles and buses from India, and companies like Tata Motors and Ashok Leyland have a significant presence in the country. Ghanaian exports to India consist of gold, cocoa and timber, while Indian exports to Ghana comprise pharmaceuticals, agricultural machinery, electrical equipment, plastics, steel and cement.
The Government of India has extended $228 million in lines of credit to Ghana which has been used for projects in sectors like agro-processing, fish processing, waste management, rural electrification and the expansion of Ghana's railways. India has also offered to set up an India-Africa Institute of Information Technology (IAIIT) and a Food Processing Business Incubation Centre in Ghana under the India–Africa Forum Summit.
India is among the largest foreign investors in Ghana's economy. At the end of 2011, Indian investments in Ghana amounted to $550 million covering some 548 projects. Indian investments are primarily in the agriculture and manufacturing sectors of Ghana while Ghanaian companies manufacture drugs in collaboration with Indian companies. The IT sector in Ghana too has a significant Indian presence in it. India and Ghana also have a Bilateral Investment Protection Agreement between them. India's Rashtriya Chemicals and Fertilisers is in the process of setting up a fertiliser plant in Ghana at Nyankrom in the Shama District of the Western Region of Ghana. The project entails an investment of US$1.3 billion and the plant would have an annual production capacity of 1.1 million tonnes, the bulk of which would be exported to India. There are also plans to develop a sugar processing plant entailing an investment of US$36 million. Bank of Baroda, Bharti Airtel, Tata Motors and Tech Mahindra are amongst the major Indian companies in Ghana.
There are about seven to eight thousand Indians and Persons of Indian Origin living in Ghana today, some of whom have been there for over 70 years. Ghana is also home to a growing indigenous Hindu population that today numbers 3,000 families. Hinduism first came to Ghana only in the late 1940s, with the Sindhi traders who migrated there following India's Partition. It has been growing in Ghana and neighbouring Togo since the mid-1970s, when an African Hindu monastery was established in Accra.
The bilateral relations between India and Ivory Coast have expanded considerably in recent years as India seeks to develop an extensive commercial and strategic partnership in the West African region. The Indian diplomatic mission in Abidjan was opened in 1979. Ivory Coast opened its resident mission in New Delhi in September 2004. Both nations are currently fostering efforts to increase trade, investments and economic co-operation.
As littoral states of the Indian Ocean, trade links and commercial ties between India and Kenya go back several centuries. Kenya has a large minority of Indians and Persons of Indian Origin living there who are descendants of labourers who were brought in by the British to construct the Uganda Railway and Gujarati merchants.
India and Kenya have growing trade and commercial ties. Bilateral trade amounted to $2.4 billion in 2010–2011 but with Kenyan imports from India accounting for $2.3 billion, the balance of trade was heavily in India's favour. India is Kenya's sixth largest trading partner and the largest exporter to Kenya. Indian exports to Kenya include pharmaceuticals, steel, machinery and automobiles while Kenyan exports to India are largely primary commodities such as soda ash, vegetables and tea. Indian companies have a significant presence in Kenya with Indian corporates like the Tata Group, Essar Group, Reliance Industries and Bharti Airtel operating there.
India operates a High Commission in Pretoria which serves Lesotho, and Lesotho operates a residential mission in India. Lesotho and India have strong ties: Lesotho has backed India's bid for a permanent UN Security Council seat and has also recognised Jammu and Kashmir as a part of India. India exported US$11 million of goods to Lesotho in 2010–2011 while importing only US$1 million in goods from Lesotho. Since 2001, an Indian Army Training Team has trained several soldiers of the LDF.
The bilateral relations between the Republic of India and the Republic of Liberia have expanded on growing bilateral trade and strategic co-operation.
India is represented in Liberia through its embassy in Abidjan (Ivory Coast) and an active honorary consulate in Monrovia since 1984. Liberia was represented in India through its resident mission in New Delhi which subsequently closed due to budgetary constraints.
India is represented in Mauritania by its embassy in Bamako, Mali. India also has an honorary consulate in Nouakchott.
Relations between India and Mauritius have existed since 1730, and diplomatic relations were established in 1948, before Mauritius became an independent state. The relationship is very cordial due to the cultural affinities and long historical ties between the two nations. More than 68% of the Mauritian population is of Indian origin, most commonly known as Indo-Mauritians. Economic and commercial co-operation has been increasing over the years: India has been Mauritius' largest source of imports since 2007, and Mauritius imported US$816 million worth of goods from India in the April 2010 – March 2011 financial year. Mauritius has remained the largest source of FDI for India for more than a decade, with FDI equity inflows totalling US$55.2 billion in the period April 2000 to April 2011. India and Mauritius co-operate in combating piracy, which has emerged as a major threat in the Indian Ocean region, and Mauritius supports India's stand against terrorism.
The relationship between Mauritius and India dates back to the early 1730s, when artisans were brought from Puducherry and Tamil Nadu. Diplomatic relations between India and Mauritius were established in 1948. Mauritius maintained contacts with India through successive Dutch, French and British occupations. From the 1820s, Indian workers started coming into Mauritius to work on sugar plantations, and from 1834, when slavery was abolished by the British Parliament, large numbers of Indian workers began to be brought into Mauritius as indentured labourers. On 2 November 1834, the ship 'Atlas' docked in Mauritius carrying the first batch of Indian indentured labourers.
Morocco has an embassy in New Delhi. It also has an Honorary Consul based in Mumbai. India operates an embassy in Rabat. Both nations are part of the Non-Aligned Movement.
In the United Nations, India supported the decolonisation of Morocco and the Moroccan freedom movement. India recognised Morocco on 20 June 1956 and established relations in 1957. The Ministry of External Affairs of the Government of India states that "India and Morocco have enjoyed cordial and friendly relations and over the years bilateral relations have witnessed significant depth and growth."
The Indian Council for Cultural Relations promotes Indian culture in Morocco. Morocco seeks to increase its trade ties with India and is seeking Indian investment in various sectors. Bilateral relations between India and Morocco strengthened after the Moroccan Ambassador to India spent nearly a week in Srinagar, the capital city of Jammu and Kashmir, showing Moroccan solidarity with India in regard to Kashmir.
India has a high commissioner in Maputo and Mozambique has a high commissioner in New Delhi.
Relations between India and Namibia are warm and cordial.
India was one of SWAPO's earliest supporters during the Namibian liberation movement. The first SWAPO embassy was established in India in 1986. India's observer mission was converted to a full High Commission on Namibia's independence day, 21 March 1990. India has helped train the Namibian Air Force since its creation in 1995. The two countries work closely in mutual multilateral organisations such as the United Nations, the Non-Aligned Movement and the Commonwealth of Nations. Namibia supports expansion of the United Nations Security Council to include a permanent seat for India.
In 2008–09, trade between the two countries stood at approximately US$80 million. Namibia's main imports from India were drugs and pharmaceuticals, chemicals, agricultural machinery, automobiles and automobile parts, glass and glassware, and plastic and linoleum products. India primarily imported nonferrous metals, ores and metal scraps. Indian products are also exported to neighbouring South Africa and re-imported to Namibia as South African imports. Namibian diamonds are often exported to European diamond markets before being imported to India in turn. In 2009, the first direct sale of Namibian diamonds to India took place. In 2008, two Indian companies won a US$105 million contract from NamPower to lay a high-voltage direct current bi-polar line from Katima Mulilo to Otjiwarongo. Namibia is a beneficiary of the Indian Technical and Economic Cooperation (ITEC) programme for telecommunications professionals from developing countries.
India has a high commissioner in Windhoek and Namibia has a high commissioner in New Delhi. Namibia's high commissioner is also accredited for Bangladesh, the Maldives and Sri Lanka.
India has close relations with this oil-rich West African country. Twenty percent of India's crude oil needs are met by Nigeria. Trade between the two countries stood at $875 million in 2005–2006. Indian companies have also invested in manufacturing, pharmaceuticals, iron ore, steel, information technology, and communications, amongst other things. Both India and Nigeria are members of the Commonwealth of Nations, the G-77, and the Non-Aligned Movement. Former Nigerian President Olusegun Obasanjo was the guest of honour at the Republic Day parade in 1999, and Indian Prime Minister Manmohan Singh visited Nigeria in 2007 and addressed the Nigerian Parliament.
Indo-Rwandan relations are the foreign relations between the Republic of India and the Republic of Rwanda. India is represented in Rwanda through its honorary consulate in Kigali. Rwanda has been operating its Embassy in New Delhi since 1998 and appointed its first resident Ambassador in 2001.
India–Seychelles relations are bilateral relations between the Republic of India and the Republic of Seychelles. India has a High Commission in Victoria while Seychelles maintains a High Commission in New Delhi.
India and South Africa have always had strong relations, even though India revoked diplomatic relations in protest against the apartheid regime in the mid-20th century. The history of British rule connects both lands. There is a large community of Indian South Africans. Mahatma Gandhi spent many years in South Africa, during which time he fought for the rights of the ethnic Indians. Nelson Mandela was inspired by Gandhi. After India's independence, India strongly condemned apartheid and refused diplomatic relations while apartheid was conducted as state policy in South Africa.
The two countries now have close economic, political, and sports relations. Trade between the two countries grew from $3 million in 1992–1993 to $4 billion in 2005–2006, with an aim of reaching $12 billion by 2010. One third of India's imports from South Africa consists of gold bars. Diamonds mined in South Africa are polished in India. Nelson Mandela was awarded the Gandhi Peace Prize. The two countries are also members of the IBSA Dialogue Forum, together with Brazil. India hopes to obtain large amounts of uranium from resource-rich South Africa for its growing civilian nuclear energy sector.
India recognised South Sudan on 10 July 2011, a day after South Sudan became an independent state. At the moment relations are primarily economic. Pramit Pal Chaudhuri wrote in the "Hindustan Times" that South Sudan "has other attractions. As the Indian Foreign Ministry's own literature notes, South Sudan [is] 'reported to has [sic] some of the largest oil reserves in Africa outside Nigeria and Angola.'" An article in "The Telegraph" read that South Sudan is "one of the poorest [countries] in the world, [but] is oil rich. Foreign ministry officials said New Delhi has [a] keen interest in increasing its investments in the oil fields in South Sudan, which now owns over two-thirds of the erstwhile united Sudan's oil fields."
In return for the oil resources that South Sudan can provide, India said it was willing to assist in developing infrastructure and in training officials in health, education and rural development: "We have compiled a definite road map using [sic] which India can help South Sudan."
Indo-Sudanese relations have always been characterised as longstanding, close, and friendly, ever since the early development stages of the two countries. At the time of Indian independence, Sudan contributed 70,000 pounds, which was used to build part of the National Defence Academy in Pune; the main building of the NDA is called the Sudan Block. The two nations established diplomatic relations shortly after Sudan's independence, India being one of the first Asian countries to recognise the newly independent African country. India and Sudan also share geographic and historical similarities, as well as economic interests: both countries are former British colonies, and both lie across a body of water from Saudi Arabia. India and Sudan continue to have cordial relations, despite issues such as India's close relationship with Israel, India's solidarity with Egypt over its border issues with Sudan, and Sudan's intimate bonds with Pakistan and Bangladesh. India has also contributed troops to the United Nations peacekeeping force in Darfur.
Togo opened its embassy in New Delhi in October 2010. The High Commission of India in Accra, Ghana is concurrently accredited to Togo. Togolese President Gnassingbé Eyadéma made an official state visit to India in September 1994. During the visit, the two countries agreed to establish a Joint Commission.
India and Uganda established diplomatic relations in 1965 and each maintain a High Commissioner in the other's capital. The Indian High Commission in Kampala has concurrent accreditation to Burundi and Rwanda. Uganda hosts a large Indian community and India–Uganda relations cover a broad range of sectors including political, economic, commercial, cultural and scientific co-operation.
Relations between India and Uganda began with the arrival of over 30,000 Indians in Uganda in the 19th century who were brought there to construct the Mombasa–Kampala railway line. Ugandan independence activists were inspired in their struggle for Ugandan independence by the success of the Indian independence movement and were also supported in their struggle by the Prime Minister of India Jawaharlal Nehru.
Indo-Ugandan relations have been good since Uganda's independence, except during the regime of Idi Amin. In 1972, Amin expelled over 55,000 people of Indian origin and 5,000 Indians, who had largely formed the commercial and economic backbone of the country, accusing them of exploiting native Ugandans. Since the mid-1980s, when President Yoweri Museveni came to power, relations have steadily improved. Today some 20,000 Indians and PIOs live or work in Uganda. Ethnic tensions between Indians and Ugandans have been a recurring issue in bilateral relations, given the role of Indians in the Ugandan economy.
India participates in the following international organisations:
India became independent within the British Commonwealth in August 1947 as the Dominion of India after the partition of India into India and the Dominion of Pakistan. King George VI, the last Emperor of India, became the King of India, with the Governor-General of India as his viceregal representative.
India became the very first Commonwealth republic on 26 January 1950, as a result of the London Declaration.
India played an important role in the multilateral movements of colonies and newly independent countries that developed into the Non-Aligned Movement.
Nonalignment had its origins in India's colonial experience and the nonviolent Indian independence movement led by the Congress, which left India determined to be the master of its fate in an international system dominated politically by Cold War alliances and economically by Western capitalism and Soviet communism. The principles of nonalignment, as articulated by Nehru and his successors, were the preservation of India's freedom of action internationally through refusal to align India with any bloc or alliance, particularly those led by the United States or the Soviet Union, and nonviolence and international co-operation as a means of settling international disputes.
Nonalignment was a consistent feature of Indian foreign policy by the late 1940s and enjoyed strong, almost unquestioning support among the Indian elite.
The term "Non-Alignment" was coined by V K Menon in his speech at UN in 1953 which was later used by Indian Prime Minister, Jawaharlal Nehru during his speech in 1954 in Colombo, Sri Lanka. In this speech, Nehru described the five pillars to be used as a guide for China–India relations, which were first put forth by PRC Premier Zhou Enlai. Called Panchsheel (five restraints), these principles would later serve as the basis of the Non-Aligned Movement. The five principles were:
Jawaharlal Nehru's concept of nonalignment brought India considerable international prestige among newly independent states that shared India's concerns about the military confrontation between the superpowers and the influence of the former colonial powers. New Delhi used nonalignment to establish a significant role for itself as a leader of the newly independent world in such multilateral organisations as the United Nations (UN) and the Nonaligned Movement. The signing of the Treaty of Peace, Friendship, and Cooperation between India and the Soviet Union in 1971 and India's involvement in the internal affairs of its smaller neighbours in the 1970s and 1980s tarnished New Delhi's image as a nonaligned nation and led some observers to note that in practice, nonalignment applied only to India's relations with countries outside South Asia.
India was among the original members of the United Nations that signed the Declaration by United Nations at Washington on 1 January 1942 and also participated in the United Nations Conference on International Organization at San Francisco from 25 April to 26 June 1945. As a founding member of the United Nations, India strongly supports the purposes and principles of the UN and has made significant contributions to implementing the goals of the Charter, and the evolution of the UN's specialised programmes and agencies. India is a charter member of the United Nations and participates in all of its specialised agencies and organisations. India has contributed troops to United Nations peacekeeping efforts in Korea, Egypt and the Congo in its earlier years and in Somalia, Angola, Haiti, Liberia, Lebanon and Rwanda in recent years, and more recently in the South Sudan conflict. India has been a member of the UN Security Council for six terms (a total of 12 years), and was a member for the term 2011–12. India is a member of the G4 group of nations who back each other in seeking a permanent seat on the security council and advocate in favour of the reformation of the UNSC. India is also part of the Group of 77.
Described by the WTO's former chief, Pascal Lamy, as one of the organisation's "big brothers", India was instrumental in bringing down the Doha Development Round of talks in 2008. It has played an important role in representing as many as 100 developing nations during WTO summits.
India's territorial disputes with neighbouring Pakistan and the People's Republic of China have played a crucial role in its foreign policy. India is also involved in minor territorial disputes with neighbouring Bangladesh, Nepal and the Maldives. India currently maintains two manned stations in Antarctica but has made some unofficial territorial claims, which are yet to be clarified.
India is involved in the following international disputes:
Kalapani village of India is claimed by Nepal and Susta village in Nawalparasi district of Nepal is claimed by India.
The dispute between India and Nepal involves an area in Kalapani where China, India, and Nepal meet. Indian forces occupied the area in 1962 after China and India fought their border war. Three villages are located in the disputed zone: Kuti [Kuthi, 30°19'N, 80°46'E], Gunji, and Knabe. India and Nepal disagree about how to interpret the 1816 Sugauli treaty between the British East India Company and Nepal, which delimited the boundary along the Maha Kali River (Sarda River in India). The dispute intensified in 1997 as the Nepali parliament considered a treaty on hydro-electric development of the river. India and Nepal differ as to which stream constitutes the source of the river. Nepal regards the Limpiyadhura as the source; India claims the Lipu Lekh. Nepal has reportedly tabled an 1856 map from the British India Office to support its position. The countries have held several meetings about the dispute and discussed conducting a joint survey to resolve the issue. Although the Indo-Nepali dispute appears to be minor, it was aggravated in 1962 by tensions between China and India. Because the disputed area lies near the Sino-Indian frontier, it gains strategic value.
Two regions are claimed by both India and China. Aksai Chin lies in the disputed territory of Jammu and Kashmir, at the junction of India, Tibet and Xinjiang; India claims the 38,000-square-kilometre territory, which has been administered by China since the Sino-Indian War. India also considers the cession of the Shaksgam Valley to China by Pakistan illegal, and regards the valley as a part of its territory.
Arunachal Pradesh is a state of India in the country's northeast, bordering on Bhutan, Burma and China's Tibet. Though it has been under Indian administration since 1914, China claims the 90,000-square-kilometre area as South Tibet. The boundary between the north Indian states of Himachal Pradesh and Uttarakhand and China's Tibet is also not properly demarcated, with some portions under the de facto administration of India.
Indian religions
Indian religions, sometimes also termed Dharmic religions or Indic religions, are the religions that originated in the Indian subcontinent; namely Hinduism, Jainism, Buddhism, and Sikhism. These religions are also all classified as Eastern religions. Although Indian religions are connected through the history of India, they constitute a wide range of religious communities, and are not confined to the Indian subcontinent.
Evidence attesting to prehistoric religion in the Indian subcontinent derives from scattered Mesolithic rock paintings. The Harappan people of the Indus Valley Civilisation, which lasted from 3300 to 1300 BCE (mature period 2600–1900 BCE), had an early urbanized culture which predates the Vedic religion.
The documented history of Indian religions begins with the historical Vedic religion, the religious practices of the early Indo-Aryans, which were collected and later redacted into the "Vedas". The period of the composition, redaction and commentary of these texts is known as the Vedic period, which lasted from roughly 1750 to 500 BCE. The philosophical portions of the Vedas were summarized in the Upanishads, which are commonly referred to as "Vedānta", variously interpreted to mean either the "last chapters, parts of the Veda" or "the object, the highest purpose of the Veda". The early Upanishads all predate the Common Era; five of the eleven principal Upanishads were composed in all likelihood before the 6th century BCE, and contain the earliest mentions of "Yoga" and Moksha.
The Shramanic Period between 800 and 200 BCE marks a "turning point between the Vedic Hinduism and Puranic Hinduism". The Shramana movement, an ancient Indian religious movement parallel to but separate from the Vedic tradition, often defied many of the Vedic and Upanishadic concepts of soul (Atman) and the ultimate reality (Brahman). In the 6th century BCE, the Shramanic movement matured into Jainism and Buddhism and was responsible for the schism of Indian religions into two main philosophical branches: astika, which venerates the Vedas (e.g., the six orthodox schools of Hinduism), and nastika (e.g., Buddhism, Jainism, Charvaka, etc.). However, both branches shared the related concepts of Yoga, "saṃsāra" (the cycle of birth and death) and "moksha" (liberation from that cycle).
The Puranic Period (200 BCE – 500 CE) and Early Medieval period (500–1100 CE) gave rise to new configurations of Hinduism, especially bhakti and Shaivism, Shaktism, Vaishnavism, Smarta, and smaller groups like the conservative Shrauta.
The early Islamic period (1100–1500 CE) also gave rise to new movements. Sikhism was founded in the 15th century on the teachings of Guru Nanak and the nine successive Sikh Gurus in Northern India. The vast majority of its adherents originate in the Punjab region.
With the colonial dominance of the British a reinterpretation and synthesis of Hinduism arose, which aided the Indian independence movement.
James Mill (1773–1836), in his "The History of British India" (1817), distinguished three phases in the history of India, namely Hindu, Muslim and British civilisations. This periodisation has been criticised for the misconceptions it has given rise to. Another periodisation is the division into "ancient, classical, medieval and modern periods", although this periodisation has also received criticism.
Romila Thapar notes that the division of Indian history into Hindu, Muslim and British periods gives too much weight to "ruling dynasties and foreign invasions," neglecting the social-economic history, which often showed a strong continuity. The division into ancient, medieval and modern periods overlooks the fact that the Muslim conquests took place between the eighth and the fourteenth centuries, while the south was never completely conquered. According to Thapar, a periodisation could also be based on "significant social and economic changes," which are not strictly related to a change of ruling powers.
Smart and Michaels seem to follow Mill's periodisation, while Flood and Muesse follow the "ancient, classical, mediaeval and modern periods" periodisation. An elaborate periodisation may be as follows:
Evidence attesting to prehistoric religion in the Indian subcontinent derives from scattered Mesolithic rock paintings such as at Bhimbetka, depicting dances and rituals. Neolithic agriculturalists inhabiting the Indus River Valley buried their dead in a manner suggestive of spiritual practices that incorporated notions of an afterlife and belief in magic. Other South Asian Stone Age sites, such as the Bhimbetka rock shelters in central Madhya Pradesh and the Kupgal petroglyphs of eastern Karnataka, contain rock art portraying religious rites and evidence of possible ritualised music.
The religion and belief system of the Indus valley people have received considerable attention, especially from the view of identifying precursors to deities and religious practices of Indian religions that later developed in the area. However, due to the sparsity of evidence, which is open to varying interpretations, and the fact that the Indus script remains undeciphered, the conclusions are partly speculative and largely based on a retrospective view from a much later Hindu perspective. An early and influential work in the area that set the trend for Hindu interpretations of archaeological evidence from the Harappan sites was that of John Marshall, who in 1931 identified the following as prominent features of the Indus religion: a Great Male God and a Mother Goddess; deification or veneration of animals and plants; symbolic representation of the phallus (linga) and vulva (yoni); and use of baths and water in religious practice. Marshall's interpretations have been much debated, and sometimes disputed, over the following decades.
One Indus valley seal shows a seated, possibly ithyphallic and tricephalic, figure with a horned headdress, surrounded by animals. Marshall identified the figure as an early form of the Hindu god Shiva (or Rudra), who is associated with asceticism, yoga, and the linga; regarded as a lord of animals; and often depicted as having three eyes. The seal has hence come to be known as the Pashupati Seal, after "Pashupati" (lord of all animals), an epithet of Shiva. While Marshall's work has earned some support, many critics and even supporters have raised several objections. Doris Srinivasan has argued that the figure does not have three faces or a yogic posture, and that in Vedic literature Rudra was not a protector of wild animals. Herbert Sullivan and Alf Hiltebeitel also rejected Marshall's conclusions, with the former claiming that the figure was female, while the latter associated the figure with "Mahisha", the Buffalo God, and the surrounding animals with vahanas (vehicles) of deities for the four cardinal directions. Writing in 2002, Gregory L. Possehl concluded that while it would be appropriate to recognise the figure as a deity, with its association with the water buffalo and its posture as one of ritual discipline, regarding it as a proto-Shiva would be going too far. Despite the criticisms of Marshall's association of the seal with a proto-Shiva icon, it has been interpreted as the Tirthankara Rishabha by Jain scholars such as Vilas Sangave, or as an early Buddha by Buddhists. Historians like Heinrich Zimmer and Thomas McEvilley are of the opinion that there exists some link between the first Jain Tirthankara Rishabha and the Indus Valley civilisation.
Marshall hypothesized the existence of a cult of Mother Goddess worship based upon the excavation of several female figurines, and thought that this was a precursor of the Hindu sect of Shaktism. However, the function of the female figurines in the life of Indus Valley people remains unclear, and Possehl does not regard the evidence for Marshall's hypothesis to be "terribly robust". Some of the baetyls interpreted by Marshall to be sacred phallic representations are now thought to have been used as pestles or game counters instead, while the ring stones that were thought to symbolise "yoni" were determined to be architectural features used to stand pillars, although the possibility of their religious symbolism cannot be eliminated. Many Indus Valley seals show animals, with some depicting them being carried in processions, while others show chimeric creations. One seal from Mohenjo-daro shows a half-human, half-buffalo monster attacking a tiger, which may be a reference to the Sumerian myth of such a monster created by the goddess Aruru to fight Gilgamesh.
In contrast to contemporary Egyptian and Mesopotamian civilisations, the Indus valley lacks any monumental palaces, even though excavated cities indicate that the society possessed the requisite engineering knowledge. This may suggest that religious ceremonies, if any, may have been largely confined to individual homes, small temples, or the open air. Several sites have been proposed by Marshall and later scholars as possibly devoted to religious purposes, but at present only the Great Bath at Mohenjo-daro is widely thought to have been so used, as a place for ritual purification. The funerary practices of the Harappan civilisation are marked by their diversity, with evidence of supine burial; fractional burial, in which the body is reduced to skeletal remains by exposure to the elements before final interment; and even cremation.
The early Dravidian religion constituted a non-Vedic form of Hinduism in that it was either historically or is at present Āgamic. The Agamas are non-Vedic in origin and have been dated either as post-Vedic texts or as pre-Vedic oral compositions. The "Agamas" are a collection of Tamil and later Sanskrit scriptures chiefly setting out the methods of temple construction and creation of "murti", the means of worship of deities, philosophical doctrines, meditative practices, the attainment of sixfold desires and four kinds of yoga. The worship of tutelary deities and of sacred flora and fauna in Hinduism is also recognised as a survival of the pre-Vedic Dravidian religion.
The ancient Tamil grammatical work Tolkappiyam, the ten anthologies "Pattuppāṭṭu" and the eight anthologies "Eṭṭuttokai" also shed light on the early religion of the ancient Dravidians. "Seyon" was glorified as "the red god seated on the blue peacock, who is ever young and resplendent," and as "the favored god of the Tamils." Sivan was also seen as the supreme God. The early iconography of Seyyon and Sivan and their association with native flora and fauna go back to the Indus Valley Civilisation. The Sangam landscape was classified into five categories, "thinais", based on the mood, the season and the land. The Tolkappiyam mentions that each of these "thinai" had an associated deity, such as Seyyon in "Kurinji" (the hills), Thirumaal in "Mullai" (the forests), Kotravai in "Marutham" (the plains), and Wanji-ko in "Neithal" (the coasts and the seas). Other gods mentioned were Mayyon and Vaali, who were all assimilated into Hinduism over time. Dravidian linguistic influence on early Vedic religion is evident; many of these features are already present in the oldest known Indo-Aryan language, the language of the "Rigveda" (c. 1500 BCE), which also includes over a dozen words borrowed from Dravidian. This represents an early religious and cultural fusion or synthesis between ancient Dravidians and Indo-Aryans, which became more evident over time with sacred iconography, traditions, philosophy, and flora and fauna that went on to influence Hinduism, Buddhism, Charvaka, Sramana and Jainism.
Throughout Tamilakam, a king was considered to be divine by nature and possessed religious significance. The king was "the representative of God on earth" and lived in a "koyil", which means the "residence of a god"; the modern Tamil word for temple is koil. Ritual worship was also given to kings. Modern words for god like "kō" ("king"), "iṟai" ("emperor") and "āṇḍavar" ("conqueror") now primarily refer to gods. These elements were incorporated later into Hinduism, like the legendary marriage of Shiva to Queen Mīnātchi, who ruled Madurai, or Wanji-ko, a god who later merged into Indra. Tolkappiyar refers to the Three Crowned Kings as the "Three Glorified by Heaven". In the Dravidian-speaking South, the concept of divine kingship led to the assumption of major roles by state and temple.
The cult of the mother goddess is treated as an indication of a society which venerated femininity. This mother goddess was conceived as a virgin, one who had given birth to all, and is typically associated with Shaktism. The temples of the Sangam days, mainly of Madurai, seem to have had priestesses to the deity, which also appears predominantly as a goddess. In the Sangam literature, there is an elaborate description of the rites performed by the Kurava priestess in the shrine Palamutircholai. Among the early Dravidians the practice of erecting memorial stones, "natukal" or hero stones, had appeared, and it continued for quite a long time after the Sangam age, down to about the 16th century. It was customary for people who sought victory in war to worship these hero stones to be blessed with victory.
The documented history of Indian religions begins with the historical Vedic religion, the religious practices of the early Indo-Aryans, which were collected and later redacted into the "Samhitas" (usually known as the Vedas), four canonical collections of hymns or mantras composed in archaic Sanskrit. These texts are the central "shruti" (revealed) texts of Hinduism. The period of the composition, redaction and commentary of these texts is known as the Vedic period, which lasted from roughly 1750 to 500 BCE.
The Vedic Period is most significant for the composition of the four Vedas, Brahmanas and the older Upanishads (both presented as discussions on the rituals, mantras and concepts found in the four Vedas), which today are some of the most important canonical texts of Hinduism, and are the codification of much of what developed into the core beliefs of Hinduism.
Some modern Hindu scholars use "Vedic religion" synonymously with "Hinduism." According to Sundararajan, Hinduism is also known as the Vedic religion. Other authors state that the Vedas contain "the fundamental truths about Hindu Dharma", which is called "the modern version of the ancient Vedic Dharma". The Arya Samajis recognise the Vedic religion as true Hinduism. Nevertheless, according to Jamison and Witzel,
The rishis, the composers of the hymns of the Rigveda, were considered inspired poets and seers.
The mode of worship was the performance of Yajna, sacrifices which involved the sacrifice and sublimation of the havana sāmagrī (herbal preparations) in the fire, accompanied by the singing of Samans and the 'mumbling' of Yajus, the sacrificial mantras. The sublime meaning of the word yajna is derived from the Sanskrit verb yaj, which has a threefold meaning of worship of deities (devapujana), unity (saṅgatikaraṇa) and charity (dāna). An essential element was the sacrificial fire – the divine Agni – into which oblations were poured, as everything offered into the fire was believed to reach God.
Central concepts in the Vedas are Satya and Rta. "Satya" is derived from Sat, the present participle of the verbal root "as", "to be, to exist, to live". "Sat" means "that which really exists [...] the really existent truth; the Good", and "Sat-ya" means "is-ness". "Rta", "that which is properly joined; order, rule; truth", is the principle of natural order which regulates and coordinates the operation of the universe and everything within it. "Satya (truth as being) and rita (truth as law) are the primary principles of Reality and its manifestation is the background of the canons of dharma, or a life of righteousness." "Satya is the principle of integration rooted in the Absolute, rita is its application and function as the rule and order operating in the universe." Conformity with Ṛta would enable progress whereas its violation would lead to punishment. Panikkar remarks:
The term rta is inherited from the Proto-Indo-Iranian religion, the religion of the Indo-Iranian peoples prior to the earliest Vedic (Indo-Aryan) and Zoroastrian (Iranian) scriptures. "Asha" is the Avestan language term (corresponding to Vedic language ṛta) for a concept of cardinal importance to Zoroastrian theology and doctrine. The term "dharma" was already used in Brahmanical thought, where it was conceived as an aspect of Rta.
Major philosophers of this era were Rishis Narayana, Kanva, Rishaba, Vamadeva, and Angiras.
During the Middle Vedic period, Rgveda X, the mantras of the Yajurveda and the older Brahmana texts were composed. The Brahmans became powerful intermediaries.
The historical roots of Jainism in India are traced back to the 9th century BCE, with the rise of Parshvanatha and his non-violent philosophy.
The Vedic religion evolved into Hinduism and Vedanta, a religious path considering itself the 'essence' of the Vedas, interpreting the Vedic pantheon as a unitary view of the universe with 'God' (Brahman) seen as immanent and transcendent in the forms of Ishvara and Brahman. These post-Vedic systems of thought, along with the Upanishads and later texts like the epics (namely the Gita of the Mahabharata), are a major component of modern Hinduism. The ritualistic traditions of Vedic religion are preserved in the conservative Śrauta tradition.
Since Vedic times, "people from many strata of society throughout the subcontinent tended to adapt their religious and social life to Brahmanic norms", a process sometimes called Sanskritization. It is reflected in the tendency to identify local deities with the gods of the Sanskrit texts.
During the time of the shramanic reform movements "many elements of the Vedic religion were lost". According to Michaels, "it is justified to see a turning point between the Vedic religion and Hindu religions".
The late Vedic period (9th to 6th centuries BCE) marks the beginning of the Upanisadic or Vedantic period. This period heralded the beginning of much of what became classical Hinduism, with the composition of the Upanishads, later the Sanskrit epics, still later followed by the Puranas.
Upanishads form the speculative-philosophical basis of classical Hinduism and are known as Vedanta (conclusion of the Vedas). The older Upanishads launched attacks of increasing intensity on the ritual. Anyone who worships a divinity other than the Self is called a domestic animal of the gods in the Brihadaranyaka Upanishad. The Mundaka launches the most scathing attack on the ritual by comparing those who value sacrifice with an unsafe boat that is endlessly overtaken by old age and death.
Scholars believe that Parsva, the 23rd Jain "tirthankara", lived during this period, in the 9th century BCE.
Jainism and Buddhism belong to the sramana tradition. These religions rose to prominence in 700–500 BCE in the Magadha kingdom, reflecting "the cosmology and anthropology of a much older, pre-Aryan upper class of northeastern India", and were responsible for the related concepts of "saṃsāra" (the cycle of birth and death) and "moksha" (liberation from that cycle).
The shramana movements challenged the orthodoxy of the rituals. The shramanas were wandering ascetics distinct from Vedism. Mahavira, proponent of Jainism, and Buddha (c. 563–483 BCE), founder of Buddhism, were the most prominent icons of this movement.
Shramana gave rise to the concept of the cycle of birth and death, the concept of samsara, and the concept of liberation. The influence of the Upanishads on Buddhism has been a subject of debate among scholars. While Radhakrishnan, Oldenberg and Neumann were convinced of Upanishadic influence on the Buddhist canon, Eliot and Thomas highlighted the points where Buddhism was opposed to the Upanishads. Buddhism may have been influenced by some Upanishadic ideas; however, it discarded their orthodox tendencies. In Buddhist texts the Buddha is presented as rejecting avenues of salvation as "pernicious views".
Jainism was established by a lineage of 24 enlightened beings culminating with Parsva (9th century BCE) and Mahavira (6th century BCE).
The 24th Tirthankara of Jainism, Mahavira, stressed five vows, including "ahimsa" (non-violence), "satya" (truthfulness), "asteya" (non-stealing) and "aparigraha" (non-attachment). Jain orthodoxy believes the teachings of the Tirthankaras predate all known time, and scholars believe Parshva, accorded status as the 23rd Tirthankara, was a historical figure. The Vedas are believed to have documented a few Tirthankaras and an ascetic order similar to the shramana movement.
Buddhism was historically founded by Siddhartha Gautama, a Kshatriya prince-turned-ascetic, and was spread beyond India through missionaries. It later experienced a decline in India, but survived in Nepal and Sri Lanka, and remains more widespread in Southeast and East Asia.
Gautama Buddha, who was called an "awakened one" (Buddha), was born into the Shakya clan living at Kapilavastu and Lumbini in what is now southern Nepal. The Buddha was born at Lumbini, as emperor Ashoka's Lumbini pillar records, just before the kingdom of Magadha (which traditionally is said to have lasted from c. 546–324 BCE) rose to power. The Shakyas claimed Angirasa and Gautama Maharishi lineage, via descent from the royal lineage of Ayodhya.
Buddhism emphasises enlightenment (nibbana, nirvana) and liberation from the rounds of rebirth. This objective is pursued through two schools, Theravada, the Way of the Elders (practised in Sri Lanka, Burma, Thailand, SE Asia, etc.) and Mahayana, the Greater Way (practised in Tibet, China, Japan etc.). There may be some differences in the practice between the two schools in reaching the objective.
In the Theravada practice this is pursued in seven stages of purification (visuddhi): physical purification by taking precepts (sila visuddhi), mental purification by insight meditation (citta visuddhi), followed by purification of views and concepts (ditthi visuddhi), purification by overcoming doubts (kankha vitarana visuddhi), purification by acquiring knowledge and wisdom of the right path (maggamagga-nanadassana visuddhi), attaining knowledge and wisdom through the course of practice (patipada-nanadassana visuddhi), and purification by attaining knowledge and insight wisdom (nanadassana visuddhi).
Both Jainism and Buddhism spread throughout India during the period of the Magadha empire.
Buddhism in India spread during the reign of Ashoka of the Maurya Empire, who patronised Buddhist teachings and unified the Indian subcontinent in the 3rd century BCE. He sent missionaries abroad, allowing Buddhism to spread across Asia. Jainism began its golden period during the reign of Emperor Kharavela of Kalinga in the 2nd century BCE.
Flood and Muesse take the period between 200 BCE and 500 CE as a separate period, in which the epics and the first puranas were being written. Michaels takes a greater timespan, namely the period between 200 BCE and 1100 CE, which saw the rise of so-called "Classical Hinduism", with its "golden age" during the Gupta Empire.
According to Alf Hiltebeitel, a period of consolidation in the development of Hinduism took place between the time of the late Vedic Upanishads (c. 500 BCE) and the period of the rise of the Guptas (c. 320–467 CE), which he calls the "Hindu synthesis", "Brahmanic synthesis", or "orthodox synthesis". It develops in interaction with other religions and peoples:
The end of the Vedantic period around the 2nd century CE spawned a number of branches that furthered Vedantic philosophy, and which ended up being seminaries in their own right. Prominent amongst these developers were Yoga, Dvaita, Advaita and the medieval Bhakti movement.
The "smriti" texts of the period between 200 BCE-100 CE proclaim the authority of the Vedas, and "nonrejection of the Vedas comes to be one of the most important touchstones for defining Hinduism over and against the heterodoxies, which rejected the Vedas." Of the six Hindu darsanas, the Mimamsa and the Vedanta "are rooted primarily in the Vedic "sruti" tradition and are sometimes called "smarta" schools in the sense that they develop "smarta" orthodox current of thoughts that are based, like "smriti", directly on "sruti"." According to Hiltebeitel, "the consolidation of Hinduism takes place under the sign of "bhakti"." It is the "Bhagavadgita" that seals this achievement. The result is a universal achievement that may be called "smarta". It views Shiva and Vishnu as "complementary in their functions but ontologically identical".
In earlier writings, Sanskrit 'Vedānta' simply referred to the Upanishads, the most speculative and philosophical of the Vedic texts. However, in the medieval period of Hinduism, the word Vedānta came to mean the school of philosophy that interpreted the Upanishads. Traditional Vedānta considers scriptural evidence, or shabda pramāna, as the most authentic means of knowledge, while perception, or pratyaksa, and logical inference, or anumana, are considered to be subordinate (but valid).
The systematisation of Vedantic ideas into one coherent treatise was undertaken by Badarāyana in the Brahma Sutras, which was composed around 200 BCE. The cryptic aphorisms of the Brahma Sutras are open to a variety of interpretations. This resulted in the formation of numerous Vedanta schools, each interpreting the texts in its own way and producing its own sub-commentaries.
After 200 CE several schools of thought were formally codified in Indian philosophy, including Samkhya, Yoga, Nyaya, Vaisheshika, Mimāṃsā and Advaita Vedanta. Hinduism, otherwise a highly polytheistic, pantheistic or monotheistic religion, also tolerated atheistic schools. The thoroughly materialistic and anti-religious philosophical Cārvāka school that originated around the 6th century BCE is the most explicitly atheistic school of Indian philosophy. Cārvāka is classified as a "nāstika" ("heterodox") system; it is not included among the six schools of Hinduism generally regarded as orthodox. It is noteworthy as evidence of a materialistic movement within Hinduism. Our understanding of Cārvāka philosophy is fragmentary, based largely on criticism of the ideas by other schools, and it is no longer a living tradition. Other Indian philosophies generally regarded as atheistic include Samkhya and Mimāṃsā.
Two of Hinduism's most revered "epics", the Mahabharata and the Ramayana, were compositions of this period. Devotion to particular deities was reflected in the composition of texts devoted to their worship. For example, the "Ganapati Purana" was written for devotion to Ganapati (or Ganesh). Popular deities of this era were Shiva, Vishnu, Durga, Surya, Skanda, and Ganesh (including the forms/incarnations of these deities).
In the latter Vedantic period, several texts were also composed as summaries/attachments to the Upanishads. These texts, collectively called the Puranas, allowed for a divine and mythical interpretation of the world, not unlike the ancient Hellenic or Roman religions. Legends and epics with a multitude of gods and goddesses with human-like characteristics were composed.
The Gupta period marked a watershed of Indian culture: the Guptas performed Vedic sacrifices to legitimize their rule, but they also patronized Buddhism, which continued to provide an alternative to Brahmanical orthodoxy. Buddhism continued to have a significant presence in some regions of India until the 12th century.
Several dynasties whose kings worshiped Vishnu also patronised Buddhism, such as the Gupta, Pala, Malla, Somavanshi, and Satavahana dynasties; under them Buddhism survived alongside Hinduism.
Tantrism originated in the early centuries CE and developed into a fully articulated tradition by the end of the Gupta period. According to Michaels this was the "Golden Age of Hinduism" (c. 320–650 CE), which flourished during the Gupta Empire (320 to 550 CE) until the fall of the Harsha Empire (606 to 647 CE). During this period, power was centralised, along with a growth of long-distance trade, standardisation of legal procedures, and a general spread of literacy. Mahayana Buddhism flourished, but the orthodox Brahmana culture began to be rejuvenated by the patronage of the Gupta Dynasty. The position of the Brahmans was reinforced, and the first Hindu temples emerged during the late Gupta age.
After the end of the Gupta Empire and the collapse of the Harsha Empire, power became decentralised in India. Several larger kingdoms emerged, with "countless vassal states". The kingdoms were ruled via a feudal system. Smaller kingdoms were dependent on the protection of the larger kingdoms. "The great king was remote, was exalted and deified", as reflected in the Tantric Mandala, which could also depict the king as the centre of the mandala.
The disintegration of central power also led to regionalisation of religiosity, and religious rivalry. Local cults and languages were enhanced, and the influence of "Brahmanic ritualistic Hinduism" was diminished. Rural and devotional movements arose, along with Shaivism, Vaisnavism, Bhakti and Tantra, though "sectarian groupings were only at the beginning of their development". Religious movements had to compete for recognition by the local lords. Buddhism lost its position, and began to disappear in India.
In the same period Vedanta changed, incorporating Buddhist thought and its emphasis on consciousness and the working of the mind. Buddhism, which was supported by the ancient Indian urban civilisation, lost influence to the traditional religions, which were rooted in the countryside. In Bengal, Buddhism was even persecuted. But at the same time, Buddhism was incorporated into Hinduism, when Gaudapada used Buddhist philosophy to reinterpret the Upanishads. This also marked a shift from Atman and Brahman as a "living substance" to "maya-vada", where Atman and Brahman are seen as "pure knowledge-consciousness". According to Scheepers, it is this "maya-vada" view which has come to dominate Indian thought.
Between 400 and 1000 CE Hinduism expanded as the decline of Buddhism in India continued. Buddhism subsequently became effectively extinct in India but survived in Nepal and Sri Lanka.
The Bhakti movement began with the emphasis on the worship of God, regardless of one's status – whether priestly or lay, man or woman, of higher or lower social status. The movements were mainly centered on the forms of Vishnu (Rama and Krishna) and Shiva. There were, however, also popular devotees of Durga in this era. The best-known devotees are the Nayanars from southern India. The most popular Shaiva teacher of the south was Basava, while of the north it was Gorakhnath. Female saints include figures like Akkamadevi, Lalleshvari and Molla.
The "alwar" or "azhwars" (, "āzvārkaḷ" , those immersed in god) were Tamil poet-saints of south India who lived between the 6th and 9th centuries CE and espoused "emotional devotion" or bhakti to Visnu-Krishna in their songs of longing, ecstasy and service. The most popular Vaishnava teacher of the south was Ramanuja, while of the north it was Ramananda.
Several important icons were women. For example, within the Mahanubhava sect, the women outnumbered the men, and the administration was often composed mainly of women. Mirabai is the most popular female saint in India.
Sri Vallabha Acharya (1479–1531) is a very important figure from this era. He founded the Shuddha Advaita ("Pure Non-dualism") school of Vedanta thought.
According to "The Centre for Cultural Resources and Training",
In the 12th and 13th centuries, Turks and Afghans invaded parts of northern India and established the Delhi Sultanate in the former Rajput holdings. The subsequent Slave dynasty of Delhi managed to conquer large areas of northern India, approximately equal in extent to the ancient Gupta Empire, while the Khalji dynasty conquered most of central India but were ultimately unsuccessful in conquering and uniting the subcontinent. The Sultanate ushered in a period of Indian cultural renaissance. The resulting "Indo-Muslim" fusion of cultures left lasting syncretic monuments in architecture, music, literature, religion, and clothing.
During the 14th to 17th centuries, a great "Bhakti" movement swept through central and northern India, initiated by a loosely associated group of teachers or "Sants". Ramananda, Ravidas, Srimanta Sankardeva, Chaitanya Mahaprabhu, Vallabha Acharya, Sur, Meera, Kabir, Tulsidas, Namdev, Dnyaneshwar, Tukaram and other mystics spearheaded the Bhakti movement in the North while Annamacharya, Bhadrachala Ramadas, Tyagaraja among others propagated Bhakti in the South. They taught that people could cast aside the heavy burdens of ritual and caste, and the subtle complexities of philosophy, and simply express their overwhelming love for God. This period was also characterized by a spate of devotional literature in vernacular prose and poetry in the ethnic languages of the various Indian states or provinces.
Lingayatism is a distinct Shaivite tradition in India, established in the 12th century by the philosopher and social reformer Basavanna.
The adherents of this tradition are known as Lingayats. The term is derived from Lingavantha in Kannada, meaning 'one who wears "Ishtalinga" on their body' ("Ishtalinga" is the representation of the God). In Lingayat theology, "Ishtalinga" is an oval-shaped emblem symbolising Parasiva, the absolute reality. Contemporary Lingayatism follows a progressive, reform-based theology, which has great influence in South India, especially in the state of Karnataka.
According to Nicholson, already between the 12th and 16th century,
The tendency of "a blurring of philosophical distinctions" has also been noted by Burley. Lorenzen locates the origins of a distinct Hindu identity in the interaction between Muslims and Hindus, and a process of "mutual self-definition with a contrasting Muslim other", which started well before 1800. Both the Indian and the European thinkers who developed the term "Hinduism" in the 19th century were influenced by these philosophers.
Sikhism originated in 15th-century Punjab, Delhi Sultanate (present-day India and Pakistan) with the teachings of Nanak and nine successive gurus. The principal belief in Sikhism is faith in "Vāhigurū"— represented by the sacred symbol of "ēk ōaṅkār" [meaning one god]. Sikhism's traditions and teachings are distinctly associated with the history, society and culture of the Punjab. Adherents of Sikhism are known as Sikhs ("students" or "disciples") and number over 27 million across the world.
According to Gavin Flood, the modern period in India begins with the first contacts with western nations around 1500. The period of Mughal rule in India saw the rise of new forms of religiosity.
In the 19th century, under influence of the colonial forces, a synthetic vision of Hinduism was formulated by Raja Ram Mohan Roy, Swami Vivekananda, Sri Aurobindo, Sarvepalli Radhakrishnan and Mahatma Gandhi. These thinkers have tended to take an inclusive view of India's religious history, emphasising the similarities between the various Indian religions.
The modern era has given rise to dozens of Hindu saints with international influence. For example, Brahma Baba established the Brahma Kumaris, one of the largest new Hindu religious movements which teaches the discipline of Raja Yoga to millions. Representing traditional Gaudiya Vaishnavism, Prabhupada founded the Hare Krishna movement, another organisation with a global reach. In late 18th-century India, Swaminarayan founded the Swaminarayan Sampraday. Anandamurti, founder of the Ananda Marga, has also influenced many worldwide. Through the international influence of all of these new Hindu denominations, many Hindu practices such as yoga, meditation, mantra, divination, and vegetarianism have been adopted by new converts.
Jainism continues to be an influential religion, with Jain communities living in the Indian states of Gujarat, Rajasthan, Madhya Pradesh, Maharashtra, Karnataka and Tamil Nadu. Over a considerable period of time, Jains authored several classical books in different Indian languages.
The Dalit Buddhist movement, also referred to as Navayana, is a 19th- and 20th-century Buddhist revival movement in India. It received its most substantial impetus from B. R. Ambedkar's 1956 call for the conversion of Dalits to Buddhism, which offered them an opportunity to escape the caste-based society that considered them to be the lowest in the hierarchy.
According to Tilak, the religions of India can be interpreted "differentially" or "integrally", that is by either highlighting the differences or the similarities. According to Sherma and Sarma, western Indologists have tended to emphasise the differences, while Indian Indologists have tended to emphasise the similarities.
Hinduism, Buddhism, Jainism, and Sikhism share certain key concepts, which are interpreted differently by different groups and individuals. Until the 19th century, adherents of those various religions did not tend to label themselves as in opposition to each other, but "perceived themselves as belonging to the same extended cultural family."
Hinduism, Buddhism, Jainism, and Sikhism share the concept of moksha, liberation from the cycle of rebirth. They differ however on the exact nature of this liberation.
Common traits can also be observed in ritual. The head-anointing ritual of "abhiseka" is of importance in three of these distinct traditions, excluding Sikhism (in Buddhism it is found within Vajrayana). Other noteworthy rituals are the cremation of the dead, the wearing of vermilion on the head by married women, and various marital rituals. In literature, many classical narratives and purana have Hindu, Buddhist or Jain versions. All four traditions have notions of "karma", "dharma", "samsara", "moksha" and various "forms of Yoga".
Rama is a heroic figure in all of these religions. In Hinduism he is the God-incarnate in the form of a princely king; in Buddhism, he is a Bodhisattva-incarnate; in Jainism, he is the perfect human being. Among the Buddhist Ramayanas are: "Vessantarajataka", Reamker, Ramakien, Phra Lak Phra Lam, Hikayat Seri Rama etc. There also exists the "Khamti Ramayana" among the Khamti tribe of Asom wherein Rama is an Avatar of a Bodhisattva who incarnates to punish the demon king Ravana (B.Datta 1993). The "Tai Ramayana" is another book retelling the divine story in Asom.
Critics point out that there exist vast differences between and even within the various Indian religions. All major religions are composed of innumerable sects and subsects.
For a Hindu, "dharma" is his duty. For a Jain, "dharma" is righteousness, his conduct. For a Buddhist, "dharma" is usually taken to be the Buddha's teachings.
Indian mythology also reflects the competition between the various Indian religions. A popular story tells how Vajrapani kills Mahesvara, a manifestation of Shiva depicted as an evil being. The story occurs in several scriptures, most notably the "Sarvatathagatatattvasamgraha" and the "Vajrapany-abhiseka-mahatantra". According to Kalupahana, the story "echoes" the story of the conversion of Ambattha. It is to be understood in the context of the competition between Buddhist institutions and Shaivism.
"Āstika" and "nāstika" are variously defined terms sometimes used to categorise Indian religions. The traditional definition, followed by Adi Shankara, classifies religions and persons as "āstika" and "nāstika" according to whether they accept the authority of the main Hindu texts, the Vedas, as supreme revealed scriptures, or not. By this definition, Nyaya, Vaisheshika, Samkhya, Yoga, Purva Mimamsa and Vedanta are classified as "āstika" schools, while Charvaka is classified as a "nāstika" school. Buddhism and Jainism are also thus classified as "nāstika" religions since they do not accept the authority of the Vedas.
Another set of definitions—notably distinct from the usage of Hindu philosophy—loosely characterise "āstika" as "theist" and "nāstika" as "atheist". By these definitions, "Sāṃkhya" can be considered a "nāstika" philosophy, though it is traditionally classed among the Vedic "āstika" schools. From this point of view, Buddhism and Jainism remain "nāstika" religions.
Buddhists and Jains have disagreed that they are nastika and have redefined the terms āstika and nāstika in their own view. Jains assign the term nastika to one who is ignorant of the meaning of the religious texts, or to one who denies the existence of the soul.
Frawley and Malhotra use the term "Dharmic traditions" to highlight the similarities between the various Indian religions. According to Frawley, "all religions in India have been called the Dharma", and can be
According to Paul Hacker, as described by Halbfass, the term "dharma"
The emphasis on the similarities and integral unity of the dharmic faiths has been criticised for neglecting the vast differences between and even within the various Indian religions and traditions. According to Richard E. King it is typical of the "inclusivist appropriation of other traditions" of Neo-Vedanta:
The "Council of Dharmic Faiths" (UK) regards Zoroastrianism, whilst not originating in the Indian subcontinent, also as a Dharmic religion.
The inclusion of Buddhists, Jains and Sikhs within Hinduism is part of the Indian legal system. The 1955 Hindu Marriage Act "[defines] as Hindus all Buddhists, Jains, Sikhs and anyone who is not a Christian, Muslim, Parsee (Zoroastrian) or Jew". And the Indian Constitution says that "reference to Hindus shall be construed as including a reference to persons professing the Sikh, Jaina or Buddhist religion".
In a judicial reminder, the Indian Supreme Court observed Sikhism and Jainism to be sub-sects or "special" faiths within the larger Hindu fold, and that Jainism is a denomination within the Hindu fold. Although the government of British India counted Jains in India as a major religious community right from the first Census conducted in 1873, after independence in 1947 Sikhs and Jains were not treated as national minorities. In 2005 the Supreme Court of India declined to issue a writ of Mandamus granting Jains the status of a religious minority throughout India. The Court however left it to the respective states to decide on the minority status of Jain religion.
However, some individual states have over the past few decades differed on whether Jains, Buddhists and Sikhs are religious minorities or not, by either pronouncing judgments or passing legislation. One example is the judgment passed by the Supreme Court in 2006, in a case pertaining to the state of Uttar Pradesh, which declared Jainism to be indisputably distinct from Hinduism, but mentioned that, "The question as to whether the Jains are part of the Hindu religion is open to debate." However, the Supreme Court also noted various court cases that have held Jainism to be a distinct religion.
Another example is the Gujarat Freedom of Religion Bill, an amendment to legislation that sought to define Jains and Buddhists as denominations within Hinduism. Ultimately, on 31 July 2007, finding it not in conformity with the concept of freedom of religion as embodied in Article 25(1) of the Constitution, Governor Naval Kishore Sharma returned the Gujarat Freedom of Religion (Amendment) Bill, 2006, citing the widespread protests by the Jains as well as the Supreme Court's extrajudicial observation that Jainism is a "special religion formed on the basis of quintessence of Hindu religion".
Idaho
Idaho () is a state in the Pacific Northwest region of the United States. It borders the state of Montana to the east and northeast, Wyoming to the east, Nevada and Utah to the south, and Washington and Oregon to the west. To the north, it shares a small portion of the Canadian border with the province of British Columbia. With a population of approximately 1.7 million, Idaho is the 14th largest by area, the 12th least populous and the 7th least densely populated of the 50 U.S. states. The state's capital and largest city is Boise.
For thousands of years Idaho has been inhabited by Native American peoples. In the early 19th century, Idaho was considered part of the Oregon Country, an area disputed between the United States and the United Kingdom. It officially became U.S. territory with the signing of the Oregon Treaty of 1846, but a separate Idaho Territory was not organized until 1863, instead being included for periods in Oregon Territory and Washington Territory. Idaho was eventually admitted to the Union on July 3, 1890, becoming the 43rd state.
Forming part of the Pacific Northwest (and the associated Cascadia bioregion), Idaho is divided into several distinct geographic and climatic regions. The state's north, the relatively isolated Idaho Panhandle, is closely linked with Eastern Washington with which it shares the Pacific Time Zone—the rest of the state uses the Mountain Time Zone. The state's south includes the Snake River Plain (which has most of the population and agricultural land). The state's southeast incorporates part of the Great Basin. Idaho is quite mountainous, and contains several stretches of the Rocky Mountains. The United States Forest Service holds about 38% of Idaho's land, the highest proportion of any state.
Industries significant for the state economy include manufacturing, agriculture, mining, forestry, and tourism. A number of science and technology firms are either headquartered in Idaho or have factories there, and the state also contains the Idaho National Laboratory, which is the country's largest Department of Energy facility. Idaho's agricultural sector supplies many products, but the state is best known for its potato crop, which comprises around one-third of the nationwide yield. The official state nickname is the "Gem State", which references Idaho's natural beauty.
The name's origin remains a mystery. In the early 1860s, when the U.S. Congress was considering organizing a new territory in the Rocky Mountains, the name "Idaho" was suggested by George M. Willing, a politician posing as an unrecognized delegate from the unofficial Jefferson Territory. Willing claimed that the name was derived from a Shoshone term meaning "the sun comes from the mountains" or "gem of the mountains", but it was revealed later that there was no such term and Willing claimed that he had been inspired to coin the name when he met a little girl named "Ida". Since the name appeared to be fabricated, the U.S. Congress ultimately decided to name the area Colorado Territory instead when it was created in February 1861, but by the time this decision was made, the town of Idaho Springs, Colorado had already been named after Willing's proposal.
The same year Congress created Colorado Territory, a county called Idaho County was created in eastern Washington Territory. The county was named after a steamship named Idaho, which was launched on the Columbia River in 1860. It is unclear whether the steamship was named before or after Willing's claim was revealed. Regardless, part of Washington Territory, including Idaho County, was used to create Idaho Territory in 1863, which would later become the U.S. state.
Despite this lack of evidence for the origin of the name, many textbooks well into the 20th century repeated as fact Willing's account that the name "Idaho" derived from the Shoshone term "ee-da-how". A 1956 Idaho history textbook says:
"Idaho" is a Shoshoni Indian exclamation. The word consists of three parts. The first is "Ee", which in English conveys the idea of "coming down". The second is "dah" which is the Shoshoni stem or root for both "sun" and "mountain". The third syllable, "how", denotes the exclamation and stands for the same thing in Shoshoni that the exclamation mark (!) does in English. The Shoshoni word is "Ee-dah-how", and the Indian thought thus conveyed when translated into English means, "Behold! the sun coming down the mountain.
An alternative etymology attributes the name to the Plains Apache word "ídaahę́" (enemy) that was used in reference to the Comanche.
Idaho borders six U.S. states and one Canadian province. The states of Washington and Oregon are to the west, Nevada and Utah are to the south, and Montana and Wyoming are to the east. Idaho also shares a short border with the Canadian province of British Columbia to the north.
The landscape is rugged with some of the largest unspoiled natural areas in the United States. For example, at 2.3 million acres (930,000 ha), the Frank Church-River of No Return Wilderness Area is the largest contiguous area of protected wilderness in the continental United States. Idaho is a Rocky Mountain state with abundant natural resources and scenic areas. The state has snow-capped mountain ranges, rapids, vast lakes and steep canyons. The waters of the Snake River run through Hells Canyon, the deepest gorge in the United States. Shoshone Falls plunges over cliffs from a height greater than that of Niagara Falls.
The most important river in Idaho is the major tributary of the Columbia River, the Snake River, which flows out from Yellowstone in northwestern Wyoming through the Snake River Plain in southern Idaho before turning north, leaving the state at Lewiston before joining the Columbia in Kennewick. Other major rivers are the Clark Fork/Pend Oreille River, the Spokane River, and major tributaries of the Snake River, including the Clearwater River, the Salmon River, the Boise River, and the Payette River. The Salmon River empties into the Snake in Hells Canyon and forms the southern boundary of Nez Perce County on its north shore, of which Lewiston is the county seat. The Port of Lewiston, at the confluence of the Clearwater and the Snake rivers, is the farthest inland seaport on the West Coast, at 465 river miles from the Pacific at Astoria, Oregon.
The vast majority of Idaho's population lives in the Snake River Plain, a valley running across the entirety of southern Idaho from east to west. The valley contains the major cities of Boise, Meridian, Nampa, Caldwell, Twin Falls, Idaho Falls, and Pocatello. The plain served as an easy pass through the Rocky Mountains for westward-bound settlers on the Oregon Trail, and many settlers chose to settle the area rather than risk the treacherous route through the Blue Mountains and the Cascade Range to the west. The western region of the plain is known as the Treasure Valley, bounded by the Owyhee Mountains to the southwest and the Boise Mountains to the northeast. The central region of the Snake River Plain is known as the Magic Valley.
Idaho's highest point is Borah Peak, at 12,662 feet (3,859 m), in the Lost River Range north of Mackay. Idaho's lowest point, 710 feet (216 m), is in Lewiston, where the Clearwater River joins the Snake River and continues into Washington. The Sawtooth Range is often considered Idaho's most famous mountain range. Other mountain ranges in Idaho include the Bitterroot Range, the White Cloud Mountains, the Lost River Range, the Clearwater Mountains, and the Salmon River Mountains.
Idaho has two time zones, with the dividing line approximately midway between Canada and Nevada. Southern Idaho, including the Boise metropolitan area, Idaho Falls, Pocatello, and Twin Falls, is in the Mountain Time Zone. A legislative error (15 U.S.C. §264) theoretically placed this region in the Central Time Zone, but this was corrected with a 2007 amendment. Areas north of the Salmon River, including Coeur d'Alene, Moscow, Lewiston, and Sandpoint, are in the Pacific Time Zone, which contains less than a quarter of the state's population and land area.
Idaho's climate varies widely. Although the state's western border is about 350 miles (560 km) from the Pacific Ocean, the maritime influence is still felt in Idaho, especially in the winter when cloud cover, humidity, and precipitation are at their maximum extent. This influence has a moderating effect in the winter, where temperatures are not as low as would otherwise be expected for a northern state with predominantly high elevations. The maritime influence is least prominent in the state's eastern part, where the precipitation patterns are often reversed, with wetter summers and drier winters, and seasonal temperature differences are more extreme, showing a more semi-arid continental climate.
Idaho can be hot, although extended periods over 98 °F (37 °C) are rare, except at the lowest elevations, such as Lewiston, which correspondingly sees little snow. Hot summer days are tempered by low relative humidity and cooler evenings, since for most of the state the diurnal temperature difference is greatest in summer. Winters can be cold, although extended periods of bitter cold weather below zero are unusual. Idaho's all-time highest temperature of 118 °F (48 °C) was recorded at Orofino on July 28, 1934; the all-time lowest temperature of −60 °F (−51 °C) was recorded at Island Park Dam on January 18, 1943.
Humans may have been present in the Idaho area as long as 14,500 years ago. Excavations at Wilson Butte Cave near Twin Falls in 1959 revealed evidence of human activity, including arrowheads, that rank among the oldest dated artifacts in North America. American Indian peoples predominant in the area included the Nez Percé in the north and the Northern and Western Shoshone in the south.
A Late Upper Paleolithic site was identified at Cooper's Ferry in western Idaho near the town of Cottonwood by archaeologists in 2019. Based on evidence found at the site, people first lived in this area 15,300 to 16,600 years ago, predating by about a thousand years the opening of the inland ice-free corridor once thought to be the first migration route into the Americas. The discoverers, anthropology professor Loren Davis and colleagues, emphasized that the tools and artifacts found at the site resemble ones discovered in Japan dating from 16,000 to 13,000 years ago. The discovery also suggests that the first people may not have come to North America by land, as previously theorized, but rather by water, along a Pacific coastal route.
An early presence of French-Canadian trappers is visible in names and toponyms: Nez Percé, Cœur d'Alène, Boisé, Payette, some preexisting the Lewis and Clark and Astorian expeditions which themselves included significant numbers of French and Métis guides recruited for their familiarity with the terrain.
Idaho, as part of the Oregon Country, was claimed by both the United States and Great Britain until the United States gained undisputed jurisdiction in 1846. From 1843 to 1849, present-day Idaho was under the de facto jurisdiction of the Provisional Government of Oregon. When Oregon became a state in 1859, what is now Idaho lay within the portion of the original Oregon Territory that was not included in the new state, and it was annexed to the Washington Territory.
Between then and the creation of the Idaho Territory on March 4, 1863, at Lewiston, parts of the present-day state were included in the Oregon, Washington, and Dakota Territories. The new territory included present-day Idaho, Montana, and most of Wyoming. The Lewis and Clark expedition crossed Idaho in 1805 on the way to the Pacific and in 1806 on the return, largely following the Clearwater River in both directions. The first non-indigenous settlement was Kullyspell House, established on the shore of Lake Pend Oreille for fur trading in 1809 by David Thompson of the North West Company. In 1812 Donald Mackenzie, working for the Pacific Fur Company at the time, established a post on the lower Clearwater River near present-day Lewiston. This post, known as "MacKenzie's Post" or "Clearwater", operated until the Pacific Fur Company was bought out by the North West Company in 1813, after which it was abandoned. The first attempts at organized communities within the present borders of Idaho were established in 1860. The first permanent, substantial incorporated community was Lewiston in 1861.
After some tribulation as a territory, including the chaotic transfer of the territorial capital from Lewiston to Boise, the disenfranchisement of Mormon polygamists (upheld by the U.S. Supreme Court in 1877), and a federal attempt to split the territory between the Washington Territory (which gained statehood in 1889, a year before Idaho) and the state of Nevada (a state since 1864), Idaho achieved statehood in 1890.
Idaho was one of the hardest hit of the Pacific Northwest states during the Great Depression. Prices plummeted for Idaho's major crops: in 1932 a bushel of potatoes brought only ten cents compared to $1.51 in 1919, while Idaho farmers saw their annual income of $686 in 1929 drop to $250 by 1932.
In recent years, Idaho has expanded its commercial base as a tourism and agricultural state to include science and technology industries. Science and technology have become the largest single economic center (over 25% of the state's total revenue) within the state and are greater than agriculture, forestry and mining combined.
The United States Census Bureau estimates Idaho's population was 1,787,065 on July 1, 2019, a 14% increase since 2010.
Idaho had an estimated population of 1,754,208 in 2018, which was an increase of 37,265 from the prior year and an increase of 186,626, or 11.91%, since 2010. This includes a natural increase since the last census of 58,884 (111,131 births minus 52,247 deaths) and an increase due to net migration of 75,795 people into the state. There are large numbers of Americans of English and German ancestry in Idaho. Immigration from outside the United States resulted in a net increase of 14,522 people, and migration within the country produced a net increase of 61,273 people.
This made Idaho the tenth fastest-growing state, after the District of Columbia (+16.74%), Utah (+14.37%), Texas (+14.14%), Florida (+13.29%), Colorado (+13.25%), North Dakota (+13.01%), Nevada (+12.36%), Arizona (+12.20%) and Washington. From 2017 to 2018, Idaho grew the second-fastest, surpassed only by Nevada.
Nampa, about 20 miles (30 km) west of downtown Boise, became the state's second largest city in the late 1990s, passing Pocatello and Idaho Falls. Nampa's population was under 29,000 in 1990 and grew to over 81,000 by 2010. Located between Nampa and Boise, Meridian also experienced high growth, from fewer than 10,000 residents in 1990 to more than 75,000 in 2010, and is now Idaho's third largest city. Growth of 5% or more over the same period has also been observed in Caldwell, Coeur d'Alene, Post Falls, and Twin Falls.
From 1990 to 2010, Idaho's population increased by over 560,000 (55%). The Boise metropolitan area (officially known as the Boise City-Nampa, ID Metropolitan Statistical Area) is Idaho's largest metropolitan area. Other metropolitan areas in order of size are Coeur d'Alene, Idaho Falls, Pocatello and Lewiston.
The table below shows the racial composition of Idaho's population as of 2016.
According to the 2017 American Community Survey, 12.2% of Idaho's population were of Hispanic or Latino origin (of any race): Mexican (10.6%), Puerto Rican (0.2%), Cuban (0.1%), and other Hispanic or Latino origin (1.3%). The five largest ancestry groups were: German (17.5%), English (16.4%), Irish (9.3%), American (8.1%), and Scottish (3.2%).
"Note: Births in table don't add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number."
According to the Pew Research Center on Religion & Public Life, the self-identified religious affiliations of Idahoans over the age of 18 in 2008 and 2014 were:
According to the Association of Religion Data Archives, the largest denominations by number of members in 2010 were The Church of Jesus Christ of Latter-day Saints with 409,265; the Catholic Church with 123,400; the non-denominational Evangelical Protestant with 62,637; and the Assemblies of God with 22,183.
English is the state's predominant language. Minority languages include Spanish and various Native American languages.
Gross state product for 2015 was $64.9 billion, and the per capita income based on 2015 GDP and 2015 population estimates was $39,100.
Idaho is an important agricultural state, producing nearly one-third of the potatoes grown in the United States. All three varieties of wheat (dark northern spring, hard red, and soft white) are grown in the state. Nez Perce County is considered a premier soft white growing locale.
Important industries in Idaho are food processing, lumber and wood products, machinery, chemical products, paper products, electronics manufacturing, silver and other mining, and tourism. The world's largest factory for barrel cheese, the raw product for processed cheese, is in Gooding, Idaho. It has a capacity of 120,000 metric tons per year of barrel cheese and belongs to the Glanbia group. The Idaho National Laboratory (INL) is the largest Department of Energy facility in the country by area. INL is an important part of the eastern Idaho economy. Idaho also is home to three facilities of Anheuser-Busch which provide a large part of the malt for breweries across the nation.
A variety of other industries are important. Outdoor recreation ranges from numerous snowmobile, downhill, and cross-country ski areas in winter to the evolution of Lewiston as a retirement community, based on its mild winters, dry year-round climate, and one of the lowest median wind velocities anywhere, combined with rivers that support a wide variety of activities. Other examples include ATK Corporation, which operates three ammunition and ammunition-components plants in Lewiston; two serve the sporting market and one fulfills defense contracts. The Lewis-Clark Valley has an additional independent ammunition-components manufacturer, and was home to the Chipmunk rifle factory until it was purchased in 2007 by Keystone Sporting Arms, which moved production to Milton, Pennsylvania. Four of the world's six manufacturers of welded aluminum jet boats (for running river rapids) are in the Lewiston-Clarkston valley. Wine grapes were grown between Kendrick and Juliaetta in the Idaho Panhandle by the French Rothschilds until Prohibition. While there are no large wineries or breweries in Idaho, there is a growing number of award-winning boutique wineries and microbreweries in the northern part of the state.
Today, Idaho's largest industry is the science and technology sector. It accounts for over 25% of the state's revenue and over 70% of the state's exports. Idaho's industrial economy is growing, with high-tech products leading the way. Since the late 1970s, Boise has emerged as a center for semiconductor manufacturing. Boise is the home of Micron Technology, the only U.S. manufacturer of dynamic random-access memory (DRAM) chips. Micron at one time manufactured desktop computers, but with very limited success. Hewlett-Packard has operated a large plant in Boise since the 1970s, devoted primarily to LaserJet printer production. Boise-based Clearwater Analytics is a rapidly growing investment accounting and reporting software firm, reporting on over $1 trillion in assets. ON Semiconductor, whose worldwide headquarters is in Pocatello, is a widely recognized innovator of modern integrated mixed-signal semiconductor products, mixed-signal foundry services, and structured digital products. Coldwater Creek, a women's clothing retailer, is headquartered in Sandpoint. Sun Microsystems (now a part of Oracle Corporation) has two offices in Boise and a parts depot in Pocatello. Sun brings $4 million in salaries and over $300 million of revenue to the state each year.
A number of Fortune 500 companies started in or trace their roots to Idaho, including Safeway in American Falls, Albertsons in Boise, JR Simplot across southern Idaho, and Potlatch Corp. in Lewiston. Zimmerly Air Transport in Lewiston-Clarkston was one of the five companies in the merger centered around Varney Air Lines of Pasco, Washington, which became United Airlines and subsequently Varney Air Group which became Continental Airlines.
In 2014, Idaho emerged as the second most small business friendly state, ranking behind Utah, based on a study drawing upon data from more than 12,000 small business owners.
Idaho has a state gambling lottery which contributed $333.5 million in payments to all Idaho public schools and Idaho higher education from 1990 to 2006.
Tax is collected by the Idaho State Tax Commission.
The state personal income tax ranges from 1.6% to 7.8% in eight income brackets. Idahoans may apply for state tax credits for taxes paid to other states, as well as for donations to Idaho state educational entities and some nonprofit youth and rehabilitation facilities.
The state sales tax is 6%, with a very limited, selective local option of up to 6.5%. Sales tax applies to the sale, rental or lease of tangible personal property and some services. Food is taxed, but prescription drugs are not. Hotel, motel, and campground accommodations are taxed at a higher rate (7% to 11%).
The sales tax was introduced at 3% in 1965, easily approved by voters, where it remained at 3% until 1983.
As of 2017, the primary energy source in Idaho was hydropower, and electric utilities had total retail sales of 23,793,790 megawatt-hours (MWh). As of 2017, Idaho had a regulated electricity market, with the Idaho Public Utilities Commission regulating the three major utilities of Avista Utilities, Idaho Power, and Rocky Mountain Power.
Idaho's energy landscape is favorable to the development of renewable energy systems. The state is rich in renewable energy resources but has limited fossil fuel resources. The Snake River Plain and smaller river basins provide Idaho with some of the nation's best hydroelectric power resources and its geologically active mountain areas have significant geothermal power and wind power potential. These realities have shaped much of the state's energy landscape.
Idaho imports most of the energy it consumes. Imports account for more than 80% of energy consumption, including all of Idaho's natural gas and petroleum supplies and more than half of its electricity. Of the electricity consumed in Idaho in 2005, 48% came from hydroelectricity, 42% was generated by burning coal and 9% was generated by burning natural gas. The remainder came from other renewable sources such as wind.
The state's numerous river basins allow hydroelectric power plants to provide 556,000 MWh, which amounts to about three-fourths of Idaho's electricity generated in the state. Washington State provides most of the natural gas used in Idaho through one of the two major pipeline systems supplying the state. Although the state relies on out-of-state sources for its entire natural gas supply, it uses natural gas-fired plants to generate 127,000 MWh, or about ten percent of its output. Coal-fired generation and the state's small array of wind turbines supply the remainder of the state's electricity output. The state produces 739,000 MWh but still needs to import half of its electricity from out-of-state to meet demand.
While Idaho's total energy consumption is low compared with other states, representing just 0.5% of United States consumption, the state also has the nation's 11th-smallest population, 1.5 million, so its per capita energy consumption is just above the national average. As the 13th-largest state by land area, Idaho also faces the additional problem of "line loss": when the length of an electrical transmission line is doubled, the resistance to an electric current passing through it is also doubled.
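The "line loss" point follows from the standard conductor resistance formula R = ρL/A. The short sketch below, using illustrative values rather than actual Idaho grid figures, shows that doubling a line's length doubles its resistance and therefore doubles the resistive power loss I²R at a fixed current:

```python
# DC resistance of a uniform conductor: R = rho * L / A.
# Doubling length L doubles R; dissipated power grows as P_loss = I^2 * R.
# All values are illustrative, not actual Idaho grid figures.

def line_resistance(resistivity_ohm_m, length_m, cross_section_m2):
    return resistivity_ohm_m * length_m / cross_section_m2

RHO_ALUMINUM = 2.82e-8  # ohm*m, a typical value for aluminum conductor

r1 = line_resistance(RHO_ALUMINUM, 100_000, 5e-4)  # 100 km line
r2 = line_resistance(RHO_ALUMINUM, 200_000, 5e-4)  # 200 km line
assert abs(r2 / r1 - 2.0) < 1e-9  # doubling length doubles resistance

current_a = 400.0
print(f"100 km loss: {current_a**2 * r1 / 1e3:.0f} kW")
print(f"200 km loss: {current_a**2 * r2 / 1e3:.0f} kW")
```

This is also why real grids transmit at high voltage: delivering the same power at higher voltage needs less current, and the I²R loss falls with the square of that current.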
Idaho also has the 6th fastest-growing population in the United States, with the population expected to increase by 31% from 2008 to 2030. This projected increase will contribute to a 42% increase in demand by 2030, further straining Idaho's finite hydroelectric resources.
Idaho has an upper-boundary estimate of development potential to generate 44,320 GWh/year from 18,076 MW of wind power, and 7,467,000 GWh/year from solar power using 2,061,000 MW of photovoltaics (PV), including 3,224 MW of rooftop photovoltaics, and 1,267,000 MW of concentrated solar power.
The Idaho Transportation Department is the government agency responsible for Idaho's transportation infrastructure, including operations and maintenance as well as planning for future needs. The agency is also responsible for overseeing the disbursement of federal, state, and grant funding for the transportation programs of the state.
Idaho is among the few states in the nation without a major freeway linking its two largest metropolitan areas, Boise in the south and Coeur d'Alene in the north. US-95 links the two ends of the state, but like many other highways in Idaho, it is badly in need of repair and upgrade. In 2007, the Idaho Transportation Department stated the state's highway infrastructure faces a $200 million per year shortfall in maintenance and upgrades. I-84 is the main highway linking the southeast and southwest portions of the state, along with I-86 and I-15.
Major federal aid highways in Idaho:
Major airports include the Boise International Airport, which serves the southwest region of Idaho, and the Spokane International Airport (in Spokane, Washington), which serves northern Idaho. Other airports with scheduled service are the Pullman-Moscow Regional Airport serving the Palouse; the Lewiston-Nez Perce County Airport, serving the Lewis-Clark Valley and north central and west central Idaho; the Magic Valley Regional Airport in Twin Falls; the Idaho Falls Regional Airport; and the Pocatello Regional Airport.
Idaho is served by three transcontinental railroads. The Burlington Northern Santa Fe (BNSF) connects the Idaho Panhandle with Seattle, Portland, and Spokane to the west, and Minneapolis and Chicago to the east. The BNSF travels through Kootenai, Bonner, and Boundary counties. The Union Pacific Railroad crosses North Idaho, entering from Canada through Boundary and Bonner counties and proceeding to Spokane. Canadian Pacific Railway uses Union Pacific Railroad tracks in North Idaho, carrying products from Alberta to Spokane and Portland, Oregon. Amtrak's Empire Builder crosses northern Idaho, with its only stop being in Sandpoint. Montana Rail Link also operates between Billings, Montana, and Sandpoint, Idaho.
The Union Pacific Railroad also crosses southern Idaho traveling between Portland, Oregon, Green River, Wyoming, and Ogden, Utah and serves Boise, Nampa, Twin Falls, and Pocatello.
The Port of Lewiston is the farthest inland Pacific port on the west coast. A series of dams and locks on the Snake River and Columbia River facilitate barge travel from Lewiston to Portland, where goods are loaded on ocean-going vessels.
The constitution of Idaho is roughly modeled on the national constitution, with several additions. The constitution defines the form and functions of the state government, and may be amended through plebiscite. Notably, the state constitution presently requires the state government to maintain a balanced budget. As a result, Idaho has limited debt (construction bonds, etc.).
All of Idaho's state laws are contained in the Idaho Code and Statutes. The code is amended through the legislature with the approval of the governor. Idaho still operates under its original (1889) state constitution.
The constitution of Idaho provides for three branches of government: the executive, legislative and judicial branches. Idaho has a bicameral legislature, elected from 35 legislative districts, each represented by one senator and two representatives.
Since 1946, statewide elected constitutional officers have been elected to four-year terms. They include: Governor, Lieutenant Governor, Secretary of State, Idaho state controller (Auditor before 1994), Treasurer, Attorney General, and Superintendent of Public Instruction.
The Inspector of Mines was originally an elected constitutional office; it was last contested in 1966, afterward became an appointed position, and was abolished entirely in 1974.
Idaho is an alcoholic beverage control state; its government holds a monopoly on the sale of distilled spirits.
The governor of Idaho serves a four-year term, and is elected during what is nationally referred to as midterm elections. As such, the governor is not elected in the same election year as the president of the United States. The current governor is Republican Brad Little, who was elected in 2018.
Idaho's legislature is part-time. However, the session may be extended if necessary, and often is. Because of this, Idaho's legislators are considered "citizen legislators", meaning their position as a legislator is not their main occupation.
Terms for both the Senate and House of Representatives are two years. Legislative elections occur every even-numbered year.
The Idaho Legislature has been continuously controlled by the Republican Party since the late 1950s, although Democratic legislators are routinely elected from Boise, Pocatello, Blaine County and the northern Panhandle.
The highest court in Idaho is the Idaho Supreme Court. There is also an intermediate appellate court, the Idaho Court of Appeals, which hears cases assigned to it from the Supreme Court. The state's District Courts serve seven judicial districts.
Idaho is divided into political jurisdictions designated as "counties". Since 1919 there have been 44 counties in the state, varying widely in size.
Total Counties: 44. Total 2018 Population Est.: 1,754,208.
Three counties were first designated as such by the Washington Territorial Legislature in 1861; they were subsequently redesignated as Idaho counties in 1864. The 1861 Nez Percé county has since been broken up into Nez Percé, Lewis, Boundary, Benewah, Latah, Kootenai, and Clearwater counties.
Idaho license plates begin with a county designation based on the first letter of the county's name. Where more than one county name begins with the same letter, a number precedes the letter, assigned in alphabetical order of the county names. This reflects the coincidence that ten counties begin with B, seven with C, and four with L, accounting for 21 of the 44 counties.
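The prefix rule can be sketched in code. The function below, and the small county subset it is run on, are illustrative rather than official DMV data:

```python
# Illustrative sketch of Idaho-style plate prefixes: a county whose
# initial letter is unique gets the bare letter; when several counties
# share an initial, each gets a number reflecting its alphabetical
# position among them (e.g. 1B, 2B, ...).
from collections import defaultdict

def plate_prefixes(counties):
    by_initial = defaultdict(list)
    for name in sorted(counties):           # alphabetical order
        by_initial[name[0]].append(name)
    prefixes = {}
    for initial, names in by_initial.items():
        if len(names) == 1:
            prefixes[names[0]] = initial    # unique initial: letter only
        else:
            for i, name in enumerate(names, start=1):
                prefixes[name] = f"{i}{initial}"
    return prefixes

# A small illustrative subset of Idaho's 44 counties:
print(plate_prefixes(["Ada", "Adams", "Bannock", "Benewah", "Kootenai"]))
```

Run on that subset, the sketch yields "1A" for Ada, "2A" for Adams, and a bare "K" for Kootenai, the only K-county in the sample.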
After the Civil War, many Midwestern and Southern Democrats moved to the Idaho Territory. As a result, the early territorial legislatures were solidly Democrat-controlled. In contrast, most of the territorial governors were appointed by Republican presidents and were Republicans. This led to sometimes-bitter clashes between the two parties, including a range war with the Democrats backing the sheepherders and the Republicans the cattlemen. That ended with the "Diamondfield" Jack Davis murder trial. In the 1880s, Republicans became more prominent in local politics.
In 1864, Clinton DeWitt Smith removed the territorial seal and the state constitution from a locked safe, and took them to Boise. This effectively moved the capital from where they were stored (Lewiston, Idaho) to the current capital Boise.
Since statehood, the Republican Party has usually been the dominant party in Idaho. At one time, Idaho had two Democratic parties, one being the mainstream and the other called the Anti-Mormon Democrats, lasting into the early 20th century. In the 1890s and early 1900s, the Populist Party enjoyed prominence, while the Democratic Party maintained a brief dominance in the 1930s during the Great Depression. Since World War II most statewide-elected officials have been Republicans, though the Democrats did hold the majority in the House (by one seat) in 1958 and the governorship from 1971 to 1995.
Idaho Congressional delegations have also been generally Republican since statehood. Several Idaho Democrats have had electoral success in the House over the years, but the Senate delegation has been a Republican stronghold for decades. Several Idaho Republicans, including current Senator Mike Crapo, have won reelection to the Senate, but only Frank Church has won reelection as a Democrat. Church was the last Idaho Democrat to win a U.S. Senate race, in 1974. Walt Minnick's 2008 win in the First Congressional District was the state's first Democratic Congressional victory in 16 years.
In modern times, Idaho has been a reliably Republican state in presidential politics. It has not supported a Democrat for president since 1964. Even in that election, Lyndon Johnson defeated Barry Goldwater in the state by fewer than two percentage points, compared to a landslide nationally. In 2004, Republican George W. Bush carried Idaho by a margin of 38 percentage points and with 68.4% of the vote, winning in 43 of 44 counties. Only Blaine County, which contains the Sun Valley ski resort, supported John Kerry, who owns a home in the area. In 2008 Barack Obama's 36.1 percent showing was the best for a Democratic presidential candidate in Idaho since 1976. However, Republican margins were narrower in 1992 and 1976.
In the 2006 elections, Republicans, led by gubernatorial candidate CL "Butch" Otter, won all the state's constitutional offices and retained both of the state's seats in the United States House of Representatives. However, Democrats picked up several seats in the Idaho Legislature, notably in the Boise area.
Republicans lost one of the House seats in 2008 to Minnick, but Republican Jim Risch retained Larry Craig's Senate seat for the GOP by a comfortable margin. Minnick lost his seat in the 2010 election to Republican State Rep. Raul Labrador.
As of January 2020, the State of Idaho contains 105 school districts and 62 charter schools. The school districts range in enrollment from two to 39,507 students.
Idaho school districts are governed by elected school boards, which are elected in November of odd-numbered years, except for the Boise School District, whose elections are held in September.
The Idaho State Board of Education oversees three comprehensive universities. The University of Idaho in Moscow was the first university in the state (founded in 1889). It opened its doors in 1892 and is the land-grant institution and primary research university of the state. Idaho State University in Pocatello opened in 1901 as the Academy of Idaho, attained four-year status in 1947 and university status in 1963. Boise State University is the most recent school to attain university status in Idaho. The school opened in 1932 as Boise Junior College and became Boise State University in 1974. Lewis-Clark State College in Lewiston is the only public, non-university four-year college in Idaho. It opened as a normal school in 1893.
Idaho has four regional community colleges: North Idaho College in Coeur d'Alene; College of Southern Idaho in Twin Falls; College of Western Idaho in Nampa, which opened in 2009; and College of Eastern Idaho in Idaho Falls, which transitioned from a technical college in 2017.
Private institutions in Idaho are Boise Bible College, affiliated with congregations of the Christian churches and churches of Christ; Brigham Young University-Idaho in Rexburg, which is affiliated with The Church of Jesus Christ of Latter-day Saints and a sister college to Brigham Young University; The College of Idaho in Caldwell, which still maintains a loose affiliation with the Presbyterian Church; Northwest Nazarene University in Nampa; and New Saint Andrews College in Moscow, of Reformed Christian theological background. McCall College is a non-affiliated two-year private college in McCall, founded in 2011 and opened in 2013.
Central Idaho is home to one of North America's oldest ski resorts, Sun Valley, where the world's first chairlift was installed in 1936. Other noted outdoor sites include Hells Canyon, the Salmon River, and its embarkation point of Riggins.
The Boise Open professional golf tournament has been played at Hillcrest Country Club since 1990 as part of the Web.com Tour.
High school sports are overseen by the Idaho High School Activities Association (IHSAA).
In 2016, Meridian's Michael Slagowski ran 800 meters in 1:48.70. That is one of the 35 fastest 800-meter times ever run by a high school boy in the United States. Weeks later, he would become only the ninth high school boy to complete a mile in under four minutes, running 3:59.53.
Judy Garland performed the elaborate song-and-dance routine "Born in a Trunk", whose lyrics reference the Princess Theater in Pocatello, Idaho, in the 1954 version of the film "A Star Is Born".
The 1985 film "Pale Rider" was primarily filmed in the Boulder Mountains and the Sawtooth National Recreation Area in central Idaho, just north of Sun Valley.
The 1988 film "Moving", starring Richard Pryor, has the main character take a promotion in Idaho.
River Phoenix and Keanu Reeves starred in the 1991 movie "My Own Private Idaho", portions of which take place in Idaho.
The 1997 film "Dante's Peak" was filmed in Wallace.
The 2004 cult film "Napoleon Dynamite" takes place in Preston, Idaho; the film's director, Jared Hess, attended Preston High School.
Interrogatories
In law, interrogatories (also known as requests for further information) are a formal set of written questions propounded by one litigant and required to be answered by an adversary in order to clarify matters of fact and help to determine in advance what facts will be presented at any trial in the case.
Interrogatories are used to gain information from the other party relevant to the issues in a lawsuit. The law and issues will differ depending upon the facts of a case and the laws of the jurisdiction in which a lawsuit is filed. For some types of cases there are standard sets of interrogatories available that cover the essential facts, and may be modified for the case in which they are used.
When a lawsuit is filed, the pleadings filed by the parties are intended to let the other parties know what each side intends to prove at trial, and what legal case they have to answer. However, in most cases, the parties will require additional information to fully understand each other's legal and factual claims. The discovery process, including the use of interrogatories, can help the parties obtain that information from each other.
As an example of how interrogatories may be used, consider a motor vehicle accident lawsuit in which an injured plaintiff asserts that the defendant driver committed the tort of negligence in causing the accident. To prove negligence, the law requires the injured plaintiff to show that the driver owed them a duty of care and breached it, causing the injury. Assuming that the defendant did not dispute driving a vehicle that was involved in the accident that injured the plaintiff, the case would come down to whether the driver drove in accordance with the standard of a reasonable driver, and whether the injured person's injuries are a foreseeable consequence of the driving.
The parties may use interrogatories to seek information, including concessions as to how the accident occurred, from each other. The injured plaintiff might serve interrogatories on the defendant driver seeking information that would support the plaintiff's theory of the case. If the plaintiff is alleging that the defendant was speeding, the plaintiff might ask the defendant to state the speed of the defendant's vehicle at the time of the accident. If the plaintiff alleges that the defendant failed to control the car properly or failed to pay proper attention to the road and other vehicles, the plaintiff could ask interrogatory questions that would help prove those allegations or require disclosure of the basis of any denial of negligence by the defendant. The driver may have a defense to those allegations, perhaps if the accident occurred at low speed, and was unavoidable (maybe due to some third party intervention). The injured person may, however, argue that the driver was still responsible (perhaps the driver should have used the horn of the vehicle to alert the third party), or there may be other allegations.
The defense may similarly use interrogatories to help build legal and factual defenses to the plaintiff's case. Continuing with the example of a car accident, the defendant may seek information or concessions from the plaintiff that would suggest that a different driver was partially or wholly responsible for the accident, or that under the facts the accident was unavoidable despite the proper exercise of care.
In England and Wales, this procedure is governed by Part 18 of the Civil Procedure Rules. It is known as a "Request for Further Information".
In the "Request for Further Information" procedure, use of standard pre-printed forms is not common, and any such request would almost certainly be looked upon critically by the courts. Standard forms, rather than requests tailored specifically to the case, are likely to offend against the 'Overriding Objective': they are unlikely to be proportionate to the case, and instead force the parties or their lawyers to spend time, money and resources answering the questions. Under the rules, this could easily result in the party making the request having to pay both their own costs and the costs of the opponent, even if they win the case at the end.
In England and Wales, firstly the person wanting to know the information requests it in writing, either in letter form or, more usually, on a blank document with the questions on one side of the page and space for the answers on the other side. A deadline is set for the opponent to answer the request. If they fail to answer, the person requesting can make an Application on Notice to the court and ask the procedural judge to make an order compelling the opponent to answer the questions. Whether the judge will make an order is discretionary and will be determined in accordance with the overriding objective, and in the context of the questions asked.
In particular, the procedure is not intended to be used to ask questions that would ordinarily be dealt with at trial.
In the United States, use of interrogatories is governed by the law where the case has been filed. All federal courts operate under the Federal Rules of Civil Procedure, which place various limitations on the use of this device, including a limit of twenty-five questions per party unless the court permits more. Interrogatories are typically "verified", meaning that the response will include an affidavit and will therefore be under oath. The affidavit distinguishes interrogatories from requests for admission, which are not normally answered under oath.
California, on the other hand, operates under the Civil Discovery Act of 1986 (a revision of an older 1957 act), which is codified in the California Code of Civil Procedure. The Discovery Act allows up to thirty-five specially prepared interrogatories per party, but this limit may be exceeded simply by executing and serving a declaration of necessity with the interrogatories. However, because the declaration of necessity must be executed under penalty of perjury, it can expose an attorney to "personal" sanctions for propounding an excessive number of harassing and burdensome interrogatories.
In nearly all U.S. jurisdictions, interrogatories are called just that and are supposed to be custom-written, although many questions can be reused from one case to the next. In the U.S. states of California, New Jersey, and Florida, the courts have promulgated standard "form" interrogatories. In California these come on an official court form promulgated by the Judicial Council of California and a party may ask another party to answer any of them by checking the appropriate boxes. The advantage of the California form interrogatories is that they do not count against the limit of 35 (except when used in limited civil cases); the disadvantage is that they are written in a very generic fashion, so about half of the questions are useful only in the simplest cases. In turn, California calls custom-written interrogatories "specially prepared interrogatories."
Because interrogatories are so heavily used in American discovery, there are two major compilations of generic interrogatories covering almost every conceivable type of legal case: "Bender's Forms of Discovery: Interrogatories" (published by LexisNexis) and "Pattern Discovery" (published by West). | https://en.wikipedia.org/wiki?curid=14612 |
Intel
Intel Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, in Silicon Valley. It is the world's largest semiconductor chip manufacturer by revenue, and is the developer of the x86 series of microprocessors, the processors found in most personal computers (PCs). Intel ranked No. 46 in the 2018 "Fortune" 500 list of the largest United States corporations by total revenue. Intel is incorporated in Delaware.
Intel supplies microprocessors for computer system manufacturers such as Apple, Lenovo, HP, and Dell. Intel also manufactures motherboard chipsets, network interface controllers and integrated circuits, flash memory, graphics chips, embedded processors and other devices related to communications and computing.
Intel Corporation was founded on July 18, 1968, by semiconductor pioneers Robert Noyce and Gordon Moore (of Moore's law), and is associated with the executive leadership and vision of Andrew Grove. The company's name was conceived as a portmanteau of the words "int"egrated and "el"ectronics, with co-founder Noyce having been a key inventor of the integrated circuit (the microchip). The fact that "intel" is the term for intelligence information also made the name appropriate. Intel was an early developer of SRAM and DRAM memory chips, which represented the majority of its business until 1981. Although Intel created the world's first commercial microprocessor chip in 1971, it was not until the success of the personal computer (PC) that this became its primary business.
During the 1990s, Intel invested heavily in new microprocessor designs fostering the rapid growth of the computer industry. During this period Intel became the dominant supplier of microprocessors for PCs and was known for aggressive and anti-competitive tactics in defense of its market position, particularly against Advanced Micro Devices (AMD), as well as a struggle with Microsoft for control over the direction of the PC industry.
The Open Source Technology Center at Intel hosts PowerTOP and LatencyTOP, and supports other open-source projects such as Wayland, Mesa3D, Intel Array Building Blocks, Threading Building Blocks (TBB), and Xen.
In 2017, Dell accounted for about 16% of Intel's total revenues, Lenovo accounted for 13% of total revenues, and HP Inc. accounted for 11% of total revenues.
According to IDC, while Intel enjoyed the biggest market share in both the overall worldwide PC microprocessor market (73.3%) and the mobile PC microprocessor (80.4%) in the second quarter of 2011, the numbers decreased by 1.5% and 1.9% compared to the first quarter of 2011.
Intel's market share decreased significantly in the enthusiast market as of 2019, and the company has faced delays for its 10 nm products. According to Intel CEO Bob Swan, the delay was caused by the company's overly aggressive strategy for moving to its next node. Some OEMs, for example Microsoft, began shipping products with AMD CPUs.
In the 1980s Intel was among the top ten sellers of semiconductors (10th in 1987) in the world. In 1992, Intel became the biggest chip maker by revenue and has held the position ever since. Other top semiconductor companies include TSMC, Advanced Micro Devices, Samsung, Texas Instruments, Toshiba and STMicroelectronics.
Competitors in PC chipsets include Advanced Micro Devices, VIA Technologies, Silicon Integrated Systems, and Nvidia. Intel's competitors in networking include NXP Semiconductors, Infineon, Broadcom Limited, Marvell Technology Group and Applied Micro Circuits Corporation, and competitors in flash memory include Spansion, Samsung, Qimonda, Toshiba, STMicroelectronics, and SK Hynix.
The only major competitor in the x86 processor market is Advanced Micro Devices (AMD), with which Intel has had full cross-licensing agreements since 1976: each partner can use the other's patented technological innovations without charge after a certain time. However, the cross-licensing agreement is canceled in the event of an AMD bankruptcy or takeover.
Some smaller competitors such as VIA Technologies produce low-power x86 processors for small form factor computers and portable equipment. However, the advent of such mobile computing devices, in particular, smartphones, has in recent years led to a decline in PC sales. Since over 95% of the world's smartphones currently use processors designed by ARM Holdings, ARM has become a major competitor for Intel's processor market. ARM is also planning to make inroads into the PC and server market.
Intel has been involved in several disputes regarding violation of antitrust laws, which are noted below.
Intel was founded in Mountain View, California, in 1968 by Gordon E. Moore (known for "Moore's law"), a chemist, and Robert Noyce, a physicist and co-inventor of the integrated circuit. Arthur Rock (investor and venture capitalist) helped them find investors, while Max Palevsky was on the board from an early stage. Moore and Noyce had left Fairchild Semiconductor to found Intel. Rock was not an employee, but he was an investor and was chairman of the board. The total initial investment in Intel was $2.5 million in convertible debentures and $10,000 from Rock. Just two years later, Intel became a public company via an initial public offering (IPO), raising $6.8 million ($23.50 per share). Intel's third employee was Andy Grove, a chemical engineer, who later ran the company through much of the 1980s and the high-growth 1990s.
In deciding on a name, Moore and Noyce quickly rejected "Moore Noyce", a near homophone for "more noise" – an ill-suited name for an electronics company, since noise in electronics is usually undesirable and typically associated with bad interference. Instead, they founded the company as N M Electronics on July 18, 1968, but by the end of the month had changed the name to Intel, which stood for Integrated Electronics. Since "Intel" was already trademarked by the hotel chain Intelco, they had to buy the rights for the name.
At its founding, Intel was distinguished by its ability to make logic circuits using semiconductor devices. The founders' goal was the semiconductor memory market, widely predicted to replace magnetic-core memory. Its first product, a quick entry into the small, high-speed memory market in 1969, was the 3101 Schottky TTL bipolar 64-bit static random-access memory (SRAM), which was nearly twice as fast as earlier Schottky diode implementations by Fairchild and the Electrotechnical Laboratory in Tsukuba, Japan. In the same year, Intel also produced the 3301 Schottky bipolar 1024-bit read-only memory (ROM) and the first commercial metal–oxide–semiconductor field-effect transistor (MOSFET) silicon gate SRAM chip, the 256-bit 1101. While the 1101 was a significant advance, its complex static cell structure made it too slow and costly for mainframe memories. The three-transistor cell implemented in the first commercially available dynamic random-access memory (DRAM), the 1103 released in 1970, solved these issues. The 1103 was the bestselling semiconductor memory chip in the world by 1972, as it replaced core memory in many applications. Intel's business grew during the 1970s as it expanded and improved its manufacturing processes and produced a wider range of products, still dominated by various memory devices.
Intel created the first commercially available microprocessor (Intel 4004) in 1971. The microprocessor represented a notable advance in the technology of integrated circuitry, as it miniaturized the central processing unit of a computer, which then made it possible for small machines to perform calculations that in the past only very large machines could do. Considerable technological innovation was needed before the microprocessor could actually become the basis of what was first known as a "mini computer" and then known as a "personal computer". Intel also created one of the first microcomputers in 1973. Intel opened its first international manufacturing facility in 1972, in Malaysia, which would host multiple Intel operations, before opening assembly facilities and semiconductor plants in Singapore and Jerusalem in the early 1980s, and manufacturing and development centres in China, India and Costa Rica in the 1990s. By the early 1980s, its business was dominated by dynamic random-access memory (DRAM) chips. However, increased competition from Japanese semiconductor manufacturers had, by 1983, dramatically reduced the profitability of this market. The growing success of the IBM personal computer, based on an Intel microprocessor, was among factors that convinced Gordon Moore (CEO since 1975) to shift the company's focus to microprocessors and to change fundamental aspects of that business model. Moore's decision to sole-source Intel's 386 chip played into the company's continuing success.
By the end of the 1980s, buoyed by its fortuitous position as microprocessor supplier to IBM and IBM's competitors within the rapidly growing personal computer market, Intel embarked on a 10-year period of unprecedented growth as the primary (and most profitable) hardware supplier to the PC industry, part of the winning 'Wintel' combination. Moore handed over to Andy Grove in 1987. By launching its Intel Inside marketing campaign in 1991, Intel was able to associate brand loyalty with consumer selection, so that by the end of the 1990s, its line of Pentium processors had become a household name.
After 2000, growth in demand for high-end microprocessors slowed. Competitors, notably AMD (Intel's largest competitor in its primary x86 architecture market), garnered significant market share, initially in low-end and mid-range processors but ultimately across the product range, and Intel's dominant position in its core market was greatly reduced, mostly due to the controversial NetBurst microarchitecture. In the early 2000s, then-CEO Craig Barrett attempted to diversify the company's business beyond semiconductors, but few of these activities were ultimately successful.
Intel had also for a number of years been embroiled in litigation. US law did not initially recognize intellectual property rights related to microprocessor topology (circuit layouts), until the Semiconductor Chip Protection Act of 1984, a law sought by Intel and the Semiconductor Industry Association (SIA). During the late 1980s and 1990s (after this law was passed), Intel also sued companies that tried to develop competitor chips to the 80386 CPU. The lawsuits were noted to significantly burden the competition with legal bills, even if Intel lost the suits. Antitrust allegations had been simmering since the early 1990s and had been the cause of one lawsuit against Intel in 1991. In 2004 and 2005, AMD brought further claims against Intel related to unfair competition.
In 2005, CEO Paul Otellini reorganized the company to refocus its core processor and chipset business on platforms (enterprise, digital home, digital health, and mobility).
In 2006, Intel unveiled its Core microarchitecture to widespread critical acclaim; the product range was perceived as an exceptional leap in processor performance that at a stroke regained much of its leadership of the field. In 2008, Intel had another "tick" when it introduced the 45 nm Penryn microarchitecture. Later that year, Intel released a processor with the Nehalem architecture, which received positive reviews.
On June 27, 2006, the sale of Intel's XScale assets was announced. Intel agreed to sell the XScale processor business to Marvell Technology Group for an estimated $600 million and the assumption of unspecified liabilities. The move was intended to permit Intel to focus its resources on its core x86 and server businesses, and the acquisition completed on November 9, 2006.
In 2010, Intel purchased McAfee, a manufacturer of computer security technology, for $7.68 billion. As a condition for regulatory approval of the transaction, Intel agreed to provide rival security firms with all necessary information that would allow their products to use Intel's chips and personal computers. After the acquisition, Intel had about 90,000 employees, including about 12,000 software engineers. In September 2016, Intel sold a majority stake in its computer-security unit to TPG Capital, reversing the five-year-old McAfee acquisition.
In August 2010, Intel and Infineon Technologies announced that Intel would acquire Infineon's Wireless Solutions business. Intel planned to use Infineon's technology in laptops, smart phones, netbooks, tablets and embedded computers in consumer products, eventually integrating its wireless modem into Intel's silicon chips.
In March 2011, Intel bought most of the assets of Cairo-based SySDSoft.
In July 2011, Intel announced that it had agreed to acquire Fulcrum Microsystems Inc., a company specializing in network switches. The company used to be included on the EE Times list of 60 Emerging Startups.
In October 2011, Intel reached a deal to acquire Telmap, an Israeli-based navigation software company. The purchase price was not disclosed, but Israeli media reported values around $300 million to $350 million.
In July 2012, Intel agreed to buy 10% of the shares of ASML Holding NV for $2.1 billion and another $1 billion for 5% of the shares that need shareholder approval to fund relevant research and development efforts, as part of a €3.3 billion ($4.1 billion) deal to accelerate the development of 450-millimeter wafer technology and extreme ultraviolet lithography by as much as two years.
In July 2013, Intel confirmed the acquisition of Omek Interactive, an Israeli company that makes technology for gesture-based interfaces, without disclosing the monetary value of the deal. An official statement from Intel read: "The acquisition of Omek Interactive will help increase Intel's capabilities in the delivery of more immersive perceptual computing experiences." One report estimated the value of the acquisition between US$30 million and $50 million.
The acquisition of a Spanish natural language recognition startup, Indisys, was announced in September 2013. The terms of the deal were not disclosed but an email from an Intel representative stated: "Intel has acquired Indisys, a privately held company based in Seville, Spain. The majority of Indisys employees joined Intel. We signed the agreement to acquire the company on May 31 and the deal has been completed." Indisys explains that its artificial intelligence (AI) technology "is a human image, which converses fluently and with common sense in multiple languages and also works in different platforms."
In December 2014, Intel bought PasswordBox.
In January 2015, Intel purchased a 30% stake in Vuzix, a smart glasses manufacturer. The deal was worth $24.8 million.
In February 2015, Intel announced its agreement to purchase German network chipmaker Lantiq, to aid in its expansion of its range of chips in devices with Internet connection capability.
In June 2015, Intel announced its agreement to purchase FPGA design company Altera for $16.7 billion, in its largest acquisition to date. The acquisition completed in December 2015.
In October 2015, Intel bought cognitive computing company Saffron Technology for an undisclosed price.
In August 2016, Intel purchased deep-learning startup Nervana Systems for $350 million.
In December 2016, Intel acquired computer vision startup Movidius for an undisclosed price.
In March 2017, Intel announced that they had agreed to purchase Mobileye, an Israeli developer of "autonomous driving" systems for US$15.3 billion.
In June 2017, Intel Corporation announced an investment of over Rs.1100 crore ($170 million) for its upcoming Research and Development (R&D) centre in Bangalore.
In January 2019, Intel announced an investment of over $11 billion in a new Israeli chip plant, according to the Israeli Finance Minister.
In 2008, Intel spun off key assets of a solar startup business effort to form an independent company, SpectraWatt Inc. In 2011, SpectraWatt filed for bankruptcy.
In February 2011, Intel began to build a new microprocessor manufacturing facility in Chandler, Arizona, completed in 2013 at a cost of $5 billion. The building was never used. The company produces three-quarters of its products in the United States, although three-quarters of its revenue come from overseas.
In April 2011, Intel began a pilot project with ZTE Corporation to produce smartphones using the Intel Atom processor for China's domestic market.
In December 2011, Intel announced that it reorganized several of its business units into a new mobile and communications group that would be responsible for the company's smartphone, tablet, and wireless efforts.
Finding itself with excess fab capacity after the failure of the Ultrabook to gain market traction and with PC sales declining, in 2013 Intel reached a foundry agreement to produce chips for Altera using a 14 nm process. General Manager of Intel's custom foundry division Sunit Rikhi indicated that Intel would pursue further such deals in the future. This was after poor sales of Windows 8 hardware caused a major retrenchment for most of the major semiconductor manufacturers, except for Qualcomm, which continued to see healthy purchases from its largest customer, Apple.
As of July 2013, five companies were using Intel's fabs via the "Intel Custom Foundry" division: Achronix, Tabula, Netronome, Microsemi, and Panasonic; most are field-programmable gate array (FPGA) makers, but Netronome designs network processors. Only Achronix had begun shipping chips made by Intel using the 22-nm Tri-Gate process. Several other customers also existed but were not announced at the time.
The Alliance for Affordable Internet (A4AI) was launched in October 2013 and Intel is part of the coalition of public and private organisations that also includes Facebook, Google, and Microsoft. Led by Sir Tim Berners-Lee, the A4AI seeks to make Internet access more affordable so that access is broadened in the developing world, where only 31% of people are online. Google will help to decrease Internet access prices so that they fall below the UN Broadband Commission's worldwide target of 5% of monthly income.
In October 2018, Arm Holdings partnered with Intel in order to share code for embedded systems through the Yocto Project.
On July 25, 2019, Apple and Intel announced an agreement for Apple to acquire the smartphone modem business of Intel Mobile Communications for US$1 billion.
Intel's first products were shift register memory and random-access memory integrated circuits, and Intel grew to be a leader in the fiercely competitive DRAM, SRAM, and ROM markets throughout the 1970s. Concurrently, Intel engineers Marcian Hoff, Federico Faggin, Stanley Mazor and Masatoshi Shima invented Intel's first microprocessor. Originally developed for the Japanese company Busicom to replace a number of ASICs in a calculator already produced by Busicom, the Intel 4004 was introduced to the mass market on November 15, 1971, though the microprocessor did not become the core of Intel's business until the mid-1980s. (Note: Intel is usually given credit with Texas Instruments for the almost-simultaneous invention of the microprocessor)
In 1983, at the dawn of the personal computer era, Intel's profits came under increased pressure from Japanese memory-chip manufacturers, and then-president Andy Grove focused the company on microprocessors. Grove described this transition in the book "Only the Paranoid Survive". A key element of his plan was the notion, then considered radical, of becoming the single source for successors to the popular 8086 microprocessor.
Until then, the manufacture of complex integrated circuits was not reliable enough for customers to depend on a single supplier, but Grove began producing processors in three geographically distinct factories, and ceased licensing the chip designs to competitors such as Zilog and AMD. When the PC industry boomed in the late 1980s and 1990s, Intel was one of the primary beneficiaries.
Despite the ultimate importance of the microprocessor, the 4004 and its successors the 8008 and the 8080 were never major revenue contributors at Intel. When the next processor, the 8086 (and its variant the 8088), was completed in 1978, Intel embarked on a major marketing and sales campaign for that chip nicknamed "Operation Crush", intended to win as many customers for the processor as possible. One design win was the newly created IBM PC division, though the importance of this was not fully realized at the time.
IBM introduced its personal computer in 1981, and it was rapidly successful. In 1982, Intel created the 80286 microprocessor, which, two years later, was used in the IBM PC/AT. Compaq, the first IBM PC "clone" manufacturer, produced a desktop system based on the faster 80286 processor in 1985 and in 1986 quickly followed with the first 80386-based system, beating IBM and establishing a competitive market for PC-compatible systems and setting up Intel as a key component supplier.
In 1975, the company had started a project to develop a highly advanced 32-bit microprocessor, finally released in 1981 as the Intel iAPX 432. The project was too ambitious and the processor was never able to meet its performance objectives, and it failed in the marketplace. Intel extended the x86 architecture to 32 bits instead.
During this period Andrew Grove dramatically redirected the company, closing much of its DRAM business and directing resources to the microprocessor business. Of perhaps greater importance was his decision to "single-source" the 386 microprocessor. Prior to this, microprocessor manufacturing was in its infancy, and manufacturing problems frequently reduced or stopped production, interrupting supplies to customers. To mitigate this risk, these customers typically insisted that multiple manufacturers produce chips they could use to ensure a consistent supply. The 8080 and 8086-series microprocessors were produced by several companies, notably AMD, with which Intel had a technology-sharing contract. Grove made the decision not to license the 386 design to other manufacturers, instead, producing it in three geographically distinct factories: Santa Clara, California; Hillsboro, Oregon; and Chandler, a suburb of Phoenix, Arizona. He convinced customers that this would ensure consistent delivery. In doing this, Intel breached its contract with AMD, which sued and was paid millions of dollars in damages but could not manufacture new Intel CPU designs any longer. (Instead, AMD started to develop and manufacture its own competing x86 designs.) As the success of Compaq's Deskpro 386 established the 386 as the dominant CPU choice, Intel achieved a position of near-exclusive dominance as its supplier. Profits from this funded rapid development of both higher-performance chip designs and higher-performance manufacturing capabilities, propelling Intel to a position of unquestioned leadership by the early 1990s.
Intel introduced the 486 microprocessor in 1989, and in 1990 established a second design team, designing the processors code-named "P5" and "P6" in parallel and committing to a major new processor every two years, versus the four or more years such designs had previously taken. Engineers Vinod Dham and Rajeev Chandrasekhar (Member of Parliament, India) were key figures on the core team that invented the 486 chip and later, Intel's signature Pentium chip. The P5 project was earlier known as "Operation Bicycle," referring to the cycles of the processor through two parallel execution pipelines. The P5 was introduced in 1993 as the Intel Pentium, substituting a registered trademark name for the former part number (numbers, such as 486, cannot be legally registered as trademarks in the United States). The P6 followed in 1995 as the Pentium Pro and improved into the Pentium II in 1997. New architectures were developed alternately in Santa Clara, California and Hillsboro, Oregon.
The Santa Clara design team embarked in 1993 on a successor to the x86 architecture, codenamed "P7". The first attempt was dropped a year later but quickly revived in a cooperative program with Hewlett-Packard engineers, though Intel soon took over primary design responsibility. The resulting implementation of the IA-64 64-bit architecture was the Itanium, finally introduced in June 2001. The Itanium's performance running legacy x86 code did not meet expectations, and it failed to compete effectively with x86-64, which was AMD's 64-bit extension of the 32-bit x86 architecture (Intel uses the name Intel 64, previously EM64T). In 2017, Intel announced that the Itanium 9700 series (code-named Kittson) would be the last Itanium chips produced.
The Hillsboro team designed the Willamette processors (initially code-named P68), which were marketed as the Pentium 4.
In June 1994, Intel engineers discovered a flaw in the floating-point math subsection of the P5 Pentium microprocessor. Under certain data-dependent conditions, the low-order bits of the result of a floating-point division would be incorrect. The error could compound in subsequent calculations. Intel corrected the error in a future chip revision, and under public pressure it issued a total recall and replaced the defective Pentium CPUs (which were limited to some 60, 66, 75, 90, and 100 MHz models) on customer request.
The bug was discovered independently in October 1994 by Thomas Nicely, Professor of Mathematics at Lynchburg College. He contacted Intel but received no response. On October 30, he posted a message about his finding on the Internet. Word of the bug spread quickly and reached the industry press. The bug was easy to replicate; a user could enter specific numbers into the calculator on the operating system. Consequently, many users did not accept Intel's statements that the error was minor and "not even an erratum." During Thanksgiving, in 1994, "The New York Times" ran a piece by journalist John Markoff spotlighting the error. Intel changed its position and offered to replace every chip, quickly putting in place a large end-user support organization. This resulted in a $475 million charge against Intel's 1994 revenue. Dr. Nicely later learned that Intel had discovered the FDIV bug in its own testing a few months before him (but had decided not to inform customers).
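The flawed divisions were easy to demonstrate because the error appeared with ordinary decimal inputs. A minimal sketch (in Python, for illustration only) using the most widely publicized FDIV test values of the era:

```python
# The most widely cited FDIV test case: 4195835 / 3145727.
# A correct FPU yields approximately 1.333820449, while the flawed
# Pentium returned approximately 1.333739068, an error in the fourth
# decimal place, vastly larger than normal rounding error.
x = 4195835.0
y = 3145727.0

quotient = x / y

# An equivalent check often run in spreadsheets at the time:
# x - (x / y) * y should be essentially zero on a correct FPU,
# but was reported as 256 on an affected Pentium.
residual = x - (x / y) * y

print(quotient)   # ~1.333820449 on any correct FPU
print(residual)   # ~0.0 on any correct FPU
```

Running this on any modern (correct) hardware shows the expected values; only an affected P5 Pentium would produce the erroneous quotient.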
The "Pentium flaw" incident, Intel's response to it, and the surrounding media coverage propelled Intel from being a technology supplier generally unknown to most computer users to a household name. Dovetailing with an uptick in the "Intel Inside" campaign, the episode is considered to have been a positive event for Intel, changing some of its business practices to be more end-user focused and generating substantial public awareness, while avoiding a lasting negative impression.
During this period, Intel undertook two major supporting advertising campaigns. The first campaign, the 1991 "Intel Inside" marketing and branding campaign, is widely known and has become synonymous with Intel itself. The idea of "ingredient branding" was new at the time, with only NutraSweet and a few others making attempts to do so. This campaign established Intel, which had been a component supplier little-known outside the PC industry, as a household name.
The second campaign promoted Intel's Systems Group, which began manufacturing PC motherboards in the early 1990s. The motherboard is the main board of a personal computer, into which the processor (CPU) and memory (RAM) chips are plugged. The Systems Group campaign was less well known than the Intel Inside campaign.
Shortly after, Intel began manufacturing fully configured "white box" systems for the dozens of PC clone companies that rapidly sprang up. At its peak in the mid-1990s, Intel manufactured over 15% of all PCs, making it the third-largest supplier at the time.
During the 1990s, Intel Architecture Labs (IAL) was responsible for many of the hardware innovations for the PC, including the PCI Bus, the PCI Express (PCIe) bus, and Universal Serial Bus (USB). IAL's software efforts met with a more mixed fate; its video and graphics software was important in the development of software digital video, but later its efforts were largely overshadowed by competition from Microsoft. The competition between Intel and Microsoft was revealed in testimony by then IAL Vice-President Steven McGeady at the Microsoft antitrust trial ("United States v. Microsoft Corp.").
In early January 2018, it was reported that all Intel processors made since 1995 (besides Intel Itanium and pre-2013 Intel Atom) have been subject to two security flaws dubbed Meltdown and Spectre.
The impact on performance resulting from software patches is "workload-dependent". Several procedures to help protect home computers and related devices from the Spectre and Meltdown security vulnerabilities have been published. Spectre patches have been reported to significantly slow down performance, especially on older computers; on the newer 8th generation Core platforms, benchmark performance drops of 2–14 percent have been measured. Meltdown patches may also produce performance loss. It is believed that "hundreds of millions" of systems could be affected by these flaws.
On March 15, 2018, Intel announced that it would redesign its processors (performance losses to be determined) to protect against the Spectre security vulnerability, and expected to release the newly redesigned processors later in 2018.
On May 3, 2018, eight additional Spectre-class flaws were reported. Intel said it was preparing new patches to mitigate them.
On August 14, 2018, Intel disclosed three additional chip flaws referred to as L1 Terminal Fault (L1TF). It reported that previously released microcode updates, along with new pre-release microcode updates, can be used to mitigate these flaws.
On May 14, 2019, Intel disclosed three new vulnerabilities affecting all Intel CPUs, named "Fallout", "RIDL", and "ZombieLoad", which allow a program to read information recently written, read data in the line-fill buffers and load ports, and leak information from other processes and virtual machines. Recent Coffee Lake-series CPUs are even more vulnerable, due to hardware mitigations for Spectre.
On March 5, 2020, computer security experts reported another Intel chip security flaw, beyond the Meltdown and Spectre flaws, known informally as the "Intel CSME bug". This newly found flaw cannot be fixed with a firmware update, and affects nearly "all Intel chips released in the past five years".
Intel decided to discontinue its Intel Remote Keyboard Android app after several security bugs were found in it. The app was launched in early 2015 to help users control Intel single-board computers and Intel NUC devices. The company asked Remote Keyboard users to delete the app at their earliest convenience.
In 2008, Intel began shipping mainstream solid-state drives (SSDs) with up to 160 GB storage capacities. As with their CPUs, Intel develops SSD chips using ever-smaller nanometer processes. These SSDs make use of industry standards such as NAND flash, mSATA, PCIe, and NVMe. In 2017, Intel introduced SSDs based on 3D XPoint technology under the Optane brand name.
The Intel Scientific Computers division was founded in 1984 by Justin Rattner to design and produce parallel computers based on Intel microprocessors connected in a hypercube internetwork topology. In 1992, the name was changed to the Intel Supercomputing Systems Division, and development of the iWarp architecture was also subsumed. The division designed several supercomputer systems, including the Intel iPSC/1, iPSC/2, iPSC/860, Paragon and ASCI Red. In November 2014, Intel revealed plans to use light beams to speed up supercomputers.
In 2007, Intel formed the Moblin project to create an open source Linux operating system for x86-based mobile devices. Following the success of Google's Android platform which ran exclusively on ARM processors, Intel announced on February 15, 2010, that it would partner with Nokia and merge Moblin with Nokia's ARM-based Maemo project to create MeeGo. MeeGo was supported by the Linux Foundation.
In February 2011, Nokia left the project after partnering with Microsoft, leaving Intel in sole charge of MeeGo. An Intel spokeswoman said the company was "disappointed" by Nokia's decision but that Intel remained committed to MeeGo. In September 2011, Intel stopped working on MeeGo and partnered with Samsung to create Tizen, a new project hosted by the Linux Foundation. Intel has since been co-developing the Tizen operating system, which runs on several Samsung devices.
Two factors combined to end this dominance: the slowing of PC demand growth beginning in 2000 and the rise of the low-cost PC. By the end of the 1990s, microprocessor performance had outstripped software demand for that CPU power. Aside from high-end server systems and software, whose demand dropped with the end of the "dot-com bubble", consumer systems ran effectively on increasingly low-cost systems after 2000. Intel's strategy of producing ever-more-powerful processors and obsoleting their predecessors stumbled, leaving an opportunity for rapid gains by competitors, notably AMD. This, in turn, lowered the profitability of the processor line and ended an era of unprecedented dominance of the PC hardware by Intel.
Intel's dominance in the x86 microprocessor market led to numerous charges of antitrust violations over the years, including FTC investigations in both the late 1980s and in 1999, and civil actions such as the 1997 suit by Digital Equipment Corporation (DEC) and a patent suit by Intergraph. Intel's market dominance (at one time it controlled over 85% of the market for 32-bit x86 microprocessors) combined with Intel's own hardball legal tactics (such as its infamous 338 patent suit versus PC manufacturers) made it an attractive target for litigation, but few of the lawsuits ever amounted to anything.
A case of industrial espionage arose in 1995 that involved both Intel and AMD. Bill Gaede, an Argentine formerly employed both at AMD and at Intel's Arizona plant, was arrested for attempting in 1993 to sell the i486 and P5 Pentium designs to AMD and to certain foreign powers. Gaede videotaped data from his computer screen at Intel and mailed it to AMD, which immediately alerted Intel and authorities, resulting in Gaede's arrest. Gaede was convicted and sentenced to 33 months in prison in June 1996.
On June 6, 2005, Steve Jobs, then CEO of Apple, announced that Apple would be transitioning from its long favored PowerPC architecture to the Intel x86 architecture because the future PowerPC road map was unable to satisfy Apple's needs. The first Macintosh computers containing Intel CPUs were announced on January 10, 2006, and Apple had its entire line of consumer Macs running on Intel processors by early August 2006. The Apple Xserve server was updated to Intel Xeon processors from November 2006 and was offered in a configuration similar to Apple's Mac Pro.
On June 22, 2020, during the virtual WWDC, Apple announced that they would be switching some of their Mac line to their own ARM-based designs.
In July 2007, the company released a print advertisement for its Intel Core 2 Duo processor featuring six black runners appearing to bow down to a Caucasian male in an office setting (due to the posture taken by runners on starting blocks). According to Nancy Bhagat, Vice President of Intel Corporate Marketing, viewers found the ad to be "insensitive and insulting", and several Intel executives made public apologies.
The Classmate PC was the company's first low-cost netbook computer. In 2014, the company released an updated version of the Classmate PC.
In June 2011, Intel introduced the first Pentium mobile processor based on the Sandy Bridge core. The B940, clocked at 2 GHz, is faster than existing or upcoming mobile Celerons, although it is almost identical to dual-core Celeron CPUs in all other aspects. According to IHS iSuppli's report on September 28, 2011, Sandy Bridge chips have helped Intel increase its market share in global processor market to 81.8%, while AMD's market share dropped to 10.4%.
Intel planned to introduce Medfield – a processor for tablets and smartphones – to the market in 2012, as an effort to compete with ARM. As a 32-nanometer processor, Medfield is designed to be energy-efficient, which is one of the core features in ARM's chips.
At the Intel Developer Forum (IDF) 2011 in San Francisco, Intel's partnership with Google was announced; by January 2012, Google's Android 2.3 would run on Intel's Atom microprocessor.
In July 2011, Intel announced that its server chips, the Xeon series, would use new sensors that can improve data center cooling efficiency.
In 2011, Intel announced the Ivy Bridge processor family at the Intel Developer Forum. Ivy Bridge supports both DDR3 memory and DDR3L chips.
As part of its efforts in the Positive Energy Buildings Consortium, Intel has been developing an application called the Personal Office Energy Monitor (POEM) to help office buildings become more energy-efficient. With this application, employees can see the power consumption of their office machines and find better ways to save energy in their working environment.
Intel has introduced some simulation games, starting in 2009 with a web-based title in which the player manages a company's IT department. The goal is to apply technology and skill to enable the company to grow from a small business into a global enterprise. The game has since been discontinued; its 2012 successor, a web-based multiplayer game, is also no longer available.
In 2011, Intel announced that it was working on a car security system that connects to smartphones via an application. The system works by streaming video to a cloud service if a car equipped with it is broken into.
Intel also developed High-Bandwidth Digital Content Protection (HDCP) to prevent copying of digital audio and video content as it travels across connections.
In 2013, Intel's Kirk Skaugen said that Intel's exclusive focus on Microsoft platforms was a thing of the past and that they would now support all "tier-one operating systems" such as Linux, Android, iOS, and Chrome.
In 2014, Intel cut thousands of employees in response to "evolving market trends", and offered to subsidize manufacturers for the extra costs involved in using Intel chips in their tablets.
In June 2013, Intel unveiled its fourth generation of Intel Core processors (Haswell) at the Computex trade show in Taipei.
On January 6, 2014, Intel announced that it was "teaming with the Council of Fashion Designers of America, Barneys New York and Opening Ceremony around the wearable tech field."
Intel developed a reference design for wearable smart earbuds that provide biometric and fitness information. The Intel smart earbuds provide full stereo audio, and monitor heart rate, while the applications on the user's phone keep track of run distance and calories burned.
CNBC reported that Intel eliminated the division that worked on health wearables in 2017.
On November 19, 2015, Intel, alongside ARM Holdings, Dell, Cisco Systems, Microsoft, and Princeton University, founded the OpenFog Consortium, to promote interests and development in fog computing. Intel's Chief Strategist for the IoT Strategy and Technology Office, Jeff Faders, became the consortium's first president.
In 2009, Intel announced that it planned to undertake an effort to remove conflict resources—materials sourced from mines whose profits are used to fund armed militant groups, particularly within the Democratic Republic of the Congo—from its supply chain. Intel sought conflict-free sources of the precious metals common to electronics from within the country, using a system of first- and third-party audits, as well as input from the Enough Project and other organizations. During a keynote address at Consumer Electronics Show 2014, Intel CEO at the time, Brian Krzanich, announced that the company's microprocessors would henceforth be conflict free. In 2016, Intel stated that it had expected its entire supply chain to be conflict-free by the end of the year.
Intel is one of the biggest stakeholders in the self-driving car industry, having entered the race in mid-2017 through its acquisition of Mobileye. The company was also one of the first in the sector to research consumer acceptance, after an AAA report cited a 78% nonacceptance rate for the technology in the US.
Safety levels of the technology, the thought of ceding control to a machine, and the psychological comfort of passengers in such situations were the major discussion topics initially. The commuters also stated that they did not want to see everything the car was doing, primarily a reference to the self-turning steering wheel with no one in the driver's seat. Intel also learned that voice control is vital, and that a good human-machine interface eases the discomfort and restores some sense of control. Notably, Intel included only 10 people in this study, which limits its credibility; in a video posted on YouTube, Intel acknowledged this and called for further testing.
Robert Noyce was Intel's CEO at its founding in 1968, followed by co-founder Gordon Moore in 1975. Andy Grove became the company's president in 1979 and added the CEO title in 1987 when Moore became chairman. In 1998, Grove succeeded Moore as chairman, and Craig Barrett, already company president, took over as CEO. On May 18, 2005, Barrett handed the reins of the company over to Paul Otellini, who had been the company president and COO and who was responsible for Intel's design win in the original IBM PC. The board of directors elected Otellini as President and CEO, and Barrett replaced Grove as Chairman of the Board. Grove stepped down as chairman but was retained as a special adviser. In May 2009, Barrett stepped down as chairman of the Board and was succeeded by Jane Shaw. In May 2012, Intel vice chairman Andy Bryant, who had held the posts of CFO (1994) and Chief Administrative Officer (2007) at Intel, succeeded Shaw as executive chairman.
In November 2012, president and CEO Paul Otellini announced that he would step down in May 2013 at the age of 62, three years before the company's mandatory retirement age. During a six-month transition period, Intel's board of directors commenced a search process for the next CEO, in which it considered both internal managers and external candidates such as Sanjay Jha and Patrick Gelsinger. Financial results revealed that, under Otellini, Intel's revenue increased by 55.8 percent (from US$34.2 billion to US$53.3 billion), while its net income increased by 46.7 percent (from US$7.5 billion to US$11 billion).
On May 2, 2013, Executive Vice President and COO Brian Krzanich was elected as Intel's sixth CEO, a selection that became effective on May 16, 2013, at the company's annual meeting. Reportedly, the board concluded that an insider could step into the role and have an impact more quickly, without needing to learn Intel's processes, and Krzanich was selected on that basis. Intel's software head Renée James was selected as president of the company, a role second only to the CEO position.
As of May 2013, Intel's board of directors consisted of Andy Bryant, John Donahoe, Frank Yeary, Ambassador Charlene Barshefsky, Susan Decker, Reed Hundt, Paul Otellini, James Plummer, David Pottruck, David Yoffie, and creative director will.i.am. The board was described by former "Financial Times" journalist Tom Foremski as "an exemplary example of corporate governance of the highest order" and received a rating of ten from GovernanceMetrics International, a form of recognition that has been awarded to only twenty-one other corporate boards worldwide.
On June 21, 2018, Intel announced the resignation of Brian Krzanich as CEO following the exposure of a relationship he had had with an employee. Bob Swan was named interim CEO as the board began a search for a permanent CEO.
On January 31, 2019, Swan transitioned from his role as CFO and interim CEO and was named by the Board as the 7th CEO to lead the company.
As of 2017, Intel shares were mainly held by institutional investors (The Vanguard Group, BlackRock, Capital Group Companies, State Street Corporation, and others).
The firm promotes very heavily from within, most notably in its executive suite. The company has resisted the trend toward outsider CEOs. Paul Otellini was a 30-year veteran of the company when he assumed the role of CEO. All of his top lieutenants have risen through the ranks after many years with the firm. In many cases, Intel's top executives have spent their entire working careers with Intel.
Intel has a mandatory retirement policy for its CEOs when they reach age 65. Andy Grove retired at 62, while both Robert Noyce and Gordon Moore retired at 58. Grove retired as Chairman and as a member of the board of directors in 2005 at age 68.
Intel's headquarters are located in Santa Clara, California, and the company has operations around the world. Its largest workforce concentration is in Washington County, Oregon (in the Portland metropolitan area's "Silicon Forest"), with 18,600 employees at several facilities. Outside the United States, the company has facilities in China, Costa Rica, Malaysia, Israel, Ireland, India, Russia, Argentina and Vietnam, in 63 countries and regions internationally. In the U.S., Intel employs significant numbers of people in California, Colorado, Massachusetts, Arizona, New Mexico, Oregon, Texas, Washington and Utah. In Oregon, Intel is the state's largest private employer. The company is the largest industrial employer in New Mexico, while in Arizona it has over 10,000 employees.
Intel invests heavily in research in China; about 100 researchers, or 10% of the total number of researchers at Intel, are located in Beijing.
In 2011, the Israeli government offered Intel $290 million to expand in the country. As a condition, Intel would employ 1,500 more workers in Kiryat Gat and between 600 and 1,000 workers in the north.
In January 2014, it was reported that Intel would cut about 5,000 jobs from its work force of 107,000. The announcement was made a day after it reported earnings that missed analyst targets.
In March 2014, it was reported that Intel would embark upon a $6 billion plan to expand its activities in Israel. The plan calls for continued investment in existing and new Intel plants until 2030. Intel employs 10,000 workers at four development centers and two production plants in Israel.
Intel has a Diversity Initiative, including employee diversity groups as well as supplier diversity programs. Like many companies with employee diversity groups, they include groups based on race and nationality as well as sexual identity and religion. In 1994, Intel sanctioned one of the earliest corporate Gay, Lesbian, Bisexual, and Transgender employee groups, and supports a Muslim employees group, a Jewish employees group, and a Bible-based Christian group.
Intel has received a 100% rating on numerous Corporate Equality Indices released by the Human Rights Campaign including the first one released in 2002. In addition, the company is frequently named one of the 100 Best Companies for Working Mothers by "Working Mother" magazine.
In January 2015, Intel announced the investment of $300 million over the next five years to enhance gender and racial diversity in their own company as well as the technology industry as a whole.
In February 2016, Intel released its Global Diversity & Inclusion 2015 Annual Report. The male-female mix of US employees was reported as 75.2% men and 24.8% women. For US employees in technical roles, the mix was reported as 79.8% male and 20.1% female. NPR reports that Intel is facing a retention problem (particularly for African Americans), not just a pipeline problem.
In 2011, ECONorthwest conducted an economic impact analysis of Intel's economic contribution to the state of Oregon. The report found that in 2009 "the total economic impacts attributed to Intel's operations, capital spending, contributions and taxes amounted to almost $14.6 billion in activity, including $4.3 billion in personal income and 59,990 jobs". Through multiplier effects, every 10 Intel jobs were found to create, on average, 31 jobs in other sectors of the economy.
In Rio Rancho, New Mexico, Intel is the leading employer. In 1997, a community partnership between Sandoval County and Intel Corporation funded and built Rio Rancho High School.
In 2011, Intel Capital announced a new fund to support startups working on technologies in line with the company's concept for next generation notebooks. The company is setting aside a $300 million fund to be spent over the next three to four years in areas related to ultrabooks. Intel announced the ultrabook concept at Computex in 2011. The ultrabook is defined as a thin (less than 0.8 inches [~2 cm] thick) notebook that utilizes Intel processors and also incorporates tablet features such as a touch screen and long battery life.
At the Intel Developer Forum in 2011, four Taiwanese ODMs showed prototype ultrabooks that used Intel's Ivy Bridge chips. Intel planned to improve the power consumption of its chips for ultrabooks; the new Ivy Bridge processors in 2013 would have a default thermal design power of only 10 W.
Intel's price goal for the ultrabook was below US$1,000; however, according to two presidents from Acer and Compal, this goal would not be achieved if Intel did not lower the price of its chips.
Intel has become one of the world's most recognizable computer brands following its long-running "Intel Inside" campaign. The idea for "Intel Inside" came out of a meeting between Intel and one of the major computer resellers, MicroAge.
In the late 1980s, Intel's market share was being seriously eroded by upstart competitors such as Advanced Micro Devices (now AMD), Zilog, and others who had started to sell their less expensive microprocessors to computer manufacturers. This was because, by using cheaper processors, manufacturers could make cheaper computers and gain more market share in an increasingly price-sensitive market. In 1989, Intel's Dennis Carter visited MicroAge's headquarters in Tempe, Arizona, to meet with MicroAge's VP of Marketing, Ron Mion. MicroAge had become one of the largest distributors of Compaq, IBM, HP, and others and thus was a primary although indirect driver of demand for microprocessors. Intel wanted MicroAge to petition its computer suppliers to favor Intel chips. However, Mion felt that the marketplace should decide which processors they wanted. Intel's counterargument was that it would be too difficult to educate PC buyers on why Intel microprocessors were worth paying more for ... and they were right. But Mion felt that the public didn't really need to fully understand why Intel chips were better, they just needed to feel they were better. So Mion proposed a market test. Intel would pay for a MicroAge billboard somewhere saying, "If you're buying a personal computer, make sure it has Intel inside." In turn, MicroAge would put "Intel Inside" stickers on the Intel-based computers in their stores in that area. To make the test easier to monitor, Mion decided to do the test in Boulder, Colorado, where it had a single store. Virtually overnight, the sales of personal computers in that store dramatically shifted to Intel-based PCs. Intel very quickly adopted "Intel Inside" as its primary branding and rolled it out worldwide.
As is often the case with computer lore, other tidbits have been combined to explain how things evolved. "Intel Inside" has not escaped that tendency, and there are other "explanations" that have been floating around.
Intel's branding campaign started with "The Computer Inside" tagline in 1990 in the US and Europe. The Japan chapter of Intel proposed an "Intel in it" tagline and kicked off the Japanese campaign by hosting EKI-KON (meaning "Station Concert" in Japanese) at the Tokyo railway station dome on Christmas Day, December 25, 1990. Several months later, "The Computer Inside" incorporated the Japan idea to become "Intel Inside", which was eventually elevated to a worldwide branding campaign in 1991 by Intel marketing manager Dennis Carter. A case study, "Inside Intel Inside", was put together by Harvard Business School. The five-note jingle was introduced in 1994 and by its tenth anniversary was being heard in 130 countries around the world. The initial branding agency for the "Intel Inside" campaign was DahlinSmithWhite Advertising of Salt Lake City. The Intel "swirl" logo was the work of DahlinSmithWhite art director Steve Grigg under the direction of Intel president and CEO Andy Grove.
The "Intel Inside" advertising campaign sought public brand loyalty and awareness of Intel processors in consumer computers. Intel paid some of the advertiser's costs for an ad that used the "Intel Inside" logo and xylo-marimba jingle.
In 2008, Intel planned to shift the emphasis of its Intel Inside campaign from traditional media such as television and print to newer media such as the Internet. Intel required that a minimum of 35% of the money it provided to the companies in its co-op program be used for online marketing. The Intel 2010 annual financial report indicated that $1.8 billion (6% of the gross margin and nearly 16% of the total net income) was allocated to all advertising with Intel Inside being part of that.
The famous D♭ D♭ G♭ D♭ A♭ xylophone/xylomarimba jingle (Intel's sonic logo, or audio mnemonic) was produced by Musikvergnuegen and written by Walter Werzowa, once a member of the Austrian 1980s sampling band Edelweiss. The sonic Intel logo was remade in 1999 to coincide with the launch of the Pentium III, and a second time in 2004 to coincide with the new logo change (although it overlapped with the 1999 version and was not mainstreamed until the launch of the Core processors in 2006), with the melody unchanged. Advertisements for products featuring Intel processors with prominent MMX branding featured a version of the jingle with an embellishment (shining sound) after the final note.
In 2006, Intel expanded its promotion of open specification platforms beyond Centrino, to include the Viiv media center PC and the business desktop Intel vPro.
In mid-January 2006, Intel announced that it was dropping the long-running "Pentium" name from its processors. The Pentium name was first used for the P5-core Intel processors; it was adopted to comply with court rulings that prevented the trademarking of a string of numbers, so that competitors could not simply give their processors the same name, as had happened with the prior 386 and 486 processors (both of which had copies manufactured by IBM and AMD). Intel phased out the Pentium name from mobile processors first, when the new Yonah chips, branded Core Solo and Core Duo, were released. The desktop processors changed when the Core 2 line of processors was released. By 2009, Intel was using a good-better-best strategy, with Celeron being good, Pentium better, and the Intel Core family representing the best the company had to offer.
According to spokesman Bill Calder, Intel has maintained only the Celeron brand, the Atom brand for netbooks and the vPro lineup for businesses. Since late 2009, Intel's mainstream processors have been called Celeron, Pentium, Core i3, Core i5, Core i7, and Core i9 in order of performance from lowest to highest. The first-generation Core products carry a 3-digit name, such as the i5 750, and the second-generation products carry a 4-digit name, such as the i5 2500. In both cases, a trailing K indicates an unlocked processor, enabling additional overclocking (for instance, the 2500K). vPro products carry the Intel Core i7 vPro processor or the Intel Core i5 vPro processor name. In October 2011, Intel started to sell its Core i7-2700K "Sandy Bridge" chip to customers worldwide.
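The numbering scheme just described can be decoded mechanically. The sketch below is a hypothetical helper (not an Intel utility), and it covers only the two generations and the K suffix described here:

```python
import re

# Illustrative decoder for the early Core model-number scheme described
# above: first-generation parts carry a 3-digit number (e.g. "i5 750"),
# second-generation parts a 4-digit number (e.g. "i5 2500"), and a
# trailing "K" marks an unlocked, overclockable part (e.g. "i7 2700K").
# This is a sketch under those assumptions, not a full product-name parser.
def parse_core_name(name):
    m = re.fullmatch(r"(i[3579])[ -](\d{3,4})(K?)", name)
    if not m:
        raise ValueError("unrecognized model name: " + name)
    family, digits, k = m.groups()
    return {
        "family": family,
        "generation": 1 if len(digits) == 3 else 2,  # per the scheme above
        "unlocked": k == "K",
    }

print(parse_core_name("i5 2500K"))
# {'family': 'i5', 'generation': 2, 'unlocked': True}
```

Later generations lengthened and complicated the numbering (as the Core i7-2700K example shows, hyphenated forms also appear), so a real decoder would need a broader pattern.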
Since 2010, "Centrino" has been applied only to Intel's WiMAX and Wi-Fi technologies.
Neo Sans Intel is a customized version of Neo Sans, based on Neo Sans and Neo Tech, which were designed by Sebastian Lester in 2004.
Intel Clear is a global font announced in 2014, designed to be used across all communications. The font family was designed by Red Peak Branding and Dalton Maag Ltd. Initially available in Latin, Greek and Cyrillic scripts, it replaced Neo Sans Intel as the company's corporate typeface. Intel Clear Hebrew and Intel Clear Arabic were later added by Dalton Maag Ltd.
Intel's brand book, produced by Red Peak Branding as part of the new brand identity campaign, celebrates Intel's achievements while setting a new standard for what Intel looks, feels and sounds like.
Intel has participated significantly in open source communities since 1999. For example, in 2006 Intel released MIT-licensed X.org drivers for the integrated graphics of its i965 family of chipsets. Intel released FreeBSD drivers for some networking cards, available under a BSD-compatible license, which were also ported to OpenBSD. Binary firmware files for non-wireless Ethernet devices were also released under a BSD licence allowing free redistribution. Intel ran the Moblin project until April 23, 2009, when it handed the project over to the Linux Foundation. Intel also ran the "LessWatts.org" campaign.
However, after the release of the wireless products called Intel Pro/Wireless 2100, 2200BG/2225BG/2915ABG and 3945ABG in 2005, Intel was criticized for not granting free redistribution rights for the firmware that must be included in the operating system for the wireless devices to operate. As a result, Intel became a target of campaigns to allow free operating systems to include binary firmware on terms acceptable to the open source community. Linspire-Linux creator Michael Robertson outlined the difficult position that Intel was in releasing to open source, as Intel did not want to upset its large customer Microsoft. Theo de Raadt of OpenBSD also claimed that Intel was being "an Open Source fraud" after an Intel employee presented a distorted view of the situation at an open-source conference. In spite of the significant negative attention Intel received as a result of the wireless dealings, the binary firmware still has not gained a license compatible with free software principles.
The Firmware Support Package (FSP) is a proprietary firmware library developed by Intel for platform initialization and can be integrated into other firmware.
Due to declining PC sales, in 2016 Intel cut 12,000 jobs.
In October 2006, Transmeta filed a lawsuit against Intel for patent infringement covering computer architecture and power-efficiency technologies. The lawsuit was settled in October 2007, with Intel agreeing to pay US$150 million initially and US$20 million per year for the next five years. Both companies agreed to drop lawsuits against each other, and Intel was granted a perpetual non-exclusive license to use current and future patented Transmeta technologies in its chips for 10 years.
In September 2005, Intel filed a response to an AMD lawsuit, disputing AMD's claims, and claiming that Intel's business practices are fair and lawful. In a rebuttal, Intel deconstructed AMD's offensive strategy and argued that AMD struggled largely as a result of its own bad business decisions, including underinvestment in essential manufacturing capacity and excessive reliance on contracting out chip foundries. Legal analysts predicted the lawsuit would drag on for a number of years since Intel's initial response indicated its unwillingness to settle with AMD. In 2008 a court date was finally set, but in 2009, Intel settled with a $1.25 billion payout to AMD (see below).
On November 4, 2009, New York's attorney general filed an antitrust lawsuit against Intel Corp, claiming the company used "illegal threats and collusion" to dominate the market for computer microprocessors.
On November 12, 2009, AMD agreed to drop the antitrust lawsuit against Intel in exchange for $1.25 billion. A joint press release published by the two chip makers stated "While the relationship between the two companies has been difficult in the past, this agreement ends the legal disputes and enables the companies to focus all of our efforts on product innovation and development."
An antitrust lawsuit and a class-action suit relating to cold calling employees of other companies has been settled.
In 2005, Japan's Fair Trade Commission found that Intel violated the Japanese Antimonopoly Act. The commission ordered Intel to eliminate discounts that had discriminated against AMD. To avoid a trial, Intel agreed to comply with the order.
In July 2007, the European Commission accused Intel of anti-competitive practices, mostly against AMD. The allegations, going back to 2003, include giving preferential prices to computer makers buying most or all of their chips from Intel, paying computer makers to delay or cancel the launch of products using AMD chips, and providing chips at below standard cost to governments and educational institutions. Intel responded that the allegations were unfounded and instead qualified its market behavior as consumer-friendly. General counsel Bruce Sewell responded that the Commission had misunderstood some factual assumptions as to pricing and manufacturing costs.
In February 2008, Intel stated that its office in Munich had been raided by European Union regulators. Intel reported that it was cooperating with investigators. Intel faced a fine of up to 10% of its annual revenue, if found guilty of stifling competition. AMD subsequently launched a website promoting these allegations. In June 2008, the EU filed new charges against Intel. In May 2009, the EU found that Intel had engaged in anti-competitive practices and subsequently fined Intel €1.06 billion (US$1.44 billion), a record amount. Intel was found to have paid companies, including Acer, Dell, HP, Lenovo and NEC, to exclusively use Intel chips in their products, and therefore harmed other companies including AMD. The European Commission said that Intel had deliberately acted to keep competitors out of the computer chip market and in doing so had made a "serious and sustained violation of the EU's antitrust rules". In addition to the fine, Intel was ordered by the Commission to immediately cease all illegal practices. Intel has stated that they will appeal against the Commission's verdict. In June 2014, the General Court, which sits below the European Court of Justice, rejected the appeal.
In September 2007, South Korean regulators accused Intel of breaking antitrust law. The investigation began in February 2006, when officials raided Intel's South Korean offices. The company risked a penalty of up to 3% of its annual sales, if found guilty. In June 2008, the Fair Trade Commission ordered Intel to pay a fine of US$25.5 million for taking advantage of its dominant position to offer incentives to major Korean PC manufacturers on the condition of not buying products from AMD.
New York started an investigation of Intel in January 2008 on whether the company violated antitrust laws in pricing and sales of its microprocessors. In June 2008, the Federal Trade Commission also began an antitrust investigation of the case. In December 2009, the FTC announced it would initiate an administrative proceeding against Intel in September 2010.
In November 2009, following a two-year investigation, New York Attorney General Andrew Cuomo sued Intel, accusing them of bribery and coercion, claiming that Intel bribed computer makers to buy more of their chips than those of their rivals, and threatened to withdraw these payments if the computer makers were perceived as working too closely with its competitors. Intel has denied these claims.
On July 22, 2010, Dell agreed to a settlement with the U.S. Securities and Exchange Commission (SEC) to pay $100M in penalties resulting from charges that Dell did not accurately disclose accounting information to investors. In particular, the SEC charged that from 2002 to 2006, Dell had an agreement with Intel to receive rebates in exchange for not using chips manufactured by AMD. These substantial rebates were not disclosed to investors, but were used to help meet investor expectations regarding the company's financial performance; "These exclusivity payments grew from 10 percent of Dell's operating income in FY 2003 to 38 percent in FY 2006, and peaked at 76 percent in the first quarter of FY 2007." Dell eventually did adopt AMD as a secondary supplier in 2006, and Intel subsequently stopped their rebates, causing Dell's financial performance to fall.
Intel has been accused by some residents of Rio Rancho, New Mexico of allowing VOCs to be released in excess of their pollution permit. One resident claimed that a release of 1.4 tons of carbon tetrachloride was measured from one acid scrubber during the fourth quarter of 2003 but an emission factor allowed Intel to report no carbon tetrachloride emissions for all of 2003.
Another resident alleged that Intel was responsible for the release of other VOCs from its Rio Rancho site and that a necropsy of lung tissue from two deceased dogs in the area indicated trace amounts of toluene, hexane, ethylbenzene, and xylene isomers, all of which are solvents used in industrial settings but also commonly found in gasoline, retail paint thinners and retail solvents. During a sub-committee meeting of the New Mexico Environment Improvement Board, a resident claimed that Intel's own reports documented releases of VOCs in June and July 2006.
Intel's environmental performance is published annually in their corporate responsibility report.
In its 2012 rankings on the progress of consumer electronics companies relating to conflict minerals, the Enough Project rated Intel the best of 24 companies, calling it a "Pioneer of progress". In 2014, chief executive Brian Krzanich urged the rest of the industry to follow Intel's lead by also shunning conflict minerals.
Intel has faced complaints of age discrimination in firing and layoffs. Intel was sued in 1993 by nine former employees over allegations that they were laid off because they were over the age of 40.
A group called FACE Intel (Former and Current Employees of Intel) claims that Intel weeds out older employees. FACE Intel claims that more than 90 percent of people who have been laid off or fired from Intel are over the age of 40. "Upside" magazine requested data from Intel breaking out its hiring and firing by age, but the company declined to provide any. Intel has denied that age plays any role in its employment practices. FACE Intel was founded by Ken Hamidi, who was fired from Intel in 1995 at the age of 47. Hamidi was blocked by a 1999 court decision from using Intel's email system to distribute criticism of the company to employees; the decision was overturned in 2003 in Intel Corp. v. Hamidi.
In August 2016, Indian officials of the Bruhat Bengaluru Mahanagara Palike (BBMP) parked garbage trucks on Intel's campus and threatened to dump them for evading payment of property taxes between 2007 and 2008, to the tune of 340 million Indian rupees (US$4.9 million). Intel had reportedly been paying taxes as a non-air-conditioned office, when the campus in fact had central air conditioning. Other factors, such as land acquisition and construction improvements, added to the tax burden. Previously, Intel had appealed the demand in the Karnataka high court in July, during which the court ordered Intel to pay BBMP half the owed amount (170 million rupees, or US$2.4 million) plus arrears by August 28 of that year.
İsmet İnönü
Mustafa İsmet İnönü (24 September 1884 – 25 December 1973) was a Turkish general and statesman who served as the second President of Turkey from 11 November 1938 to 22 May 1950, when his Republican People's Party was defeated in Turkey's second free elections. He also served as the first Chief of the General Staff from 1922 to 1924, and as the first Prime Minister after the declaration of the Republic, serving three terms: from 1923 to 1924, 1925 to 1937, and 1961 to 1965. As President, he was granted the official title of "Millî Şef" (National Chief). He was the first President of Turkey with Kurdish ancestry.
When the 1934 Surname Law was adopted, Mustafa Kemal gave him a surname taken from İnönü, where he had commanded the forces of the Army of the Grand National Assembly, while serving as Chief of the General Staff, during the Greco-Turkish War of 1919–1922. These battles afterwards came to be known as the First Battle of İnönü and the Second Battle of İnönü.
İsmet İnönü was born in Smyrna (now known in English as İzmir), in the Aidin Vilayet, to Hacı Reşit and Cevriye (later Cevriye Temelli), and was of Kurdish descent on his father's side and of Turkish descent through his mother. Hacı Reşit, who was born in Malatya and was a member of the Kürümoğulları family of Bitlis, had retired from the First Examinant Department of the Legal Affairs Bureau of the War Ministry ("Harbiye Nezareti Muhakemat Dairesi Birinci Mümeyyizliği"). Cevriye was a daughter of "Müderris" (professor) Hasan Efendi, who belonged to the ulema and was a member of a Turkish family of Razgrad. Due to his father's assignments, the family moved from one city to another. İsmet thus completed his primary education in Sivas, graduating from the Sivas Military Junior High School ("Sivas Askerî Rüştiyesi") in 1894, and then studied at the Sivas School for Civil Servants ("Sivas Mülkiye İdadisi") for a year.
İsmet graduated from the Imperial School of Military Engineering ("Mühendishane-i Berrî-i Hümâyûn") in 1903 as a gunnery officer and received his first military assignment in the Ottoman Army. He joined the Committee of Union and Progress. He won his first military victories by suppressing two major revolts against the struggling Ottoman Empire, first in Rumelia and later in Yemen, where the rebellion was led by Yahya Muhammad Hamid ed-Din. He served as a military officer during the Balkan Wars on the Ottoman-Bulgarian front. During World War I, he held the Ottoman military rank of Miralay (roughly the equivalent of Colonel or Senior Colonel (Brigadier)) and worked under Mustafa Kemal Pasha during his assignments on the Caucasus and Palestine fronts.
During the war, on 13 April 1916, İsmet married Mevhibe, a daughter of Zühtü Efendi, an Ashraf ("Eşraf") of Ziştovi (present-day Svishtov). They had three children: Ömer, Erdal and Özden (married to Metin Toker).
After losing the Battle of Megiddo against General Edmund Allenby during the last days of World War I, he went to Istanbul and was appointed Undersecretary of the Ministry of War and then General Secretary of Documentation in the Military Council.
After the military occupation of Constantinople on 16 March 1920, he decided to cross to Anatolia to join the Turkish National Movement. He and his chief of staff, Major Saffet (Arıkan), put on soldiers' uniforms, left Maltepe on the evening of 19 March 1920, and arrived at Ankara on 9 April 1920.
He was appointed the commander of the Western Front of the Army of the Grand National Assembly (GNA), a position in which he remained during the Turkish War of Independence. He was promoted to the rank of Mirliva (roughly the equivalent of Brigadier General or Major General; the most junior general rank carrying the title Pasha in the Ottoman and pre-1934 Turkish Army) after winning the First Battle of İnönü, which took place between 9 and 11 January 1921. He also won the subsequent Second Battle of İnönü, fought between 26 and 31 March 1921. During the Turkish War of Independence, he was also a member of the GNA in Ankara.
İnönü was replaced by Mustafa Fevzi Pasha, who was also the Prime Minister and Minister of Defense at the time, as the Chief of Staff of the Army of the GNA after the Turkish forces lost major battles against the advancing Greek Army in July 1921, as a result of which the cities Afyonkarahisar, Kütahya and Eskişehir were temporarily lost. He participated as a staff officer (with the rank Brigadier General) to the later battles, until the final Turkish victory in September 1922.
After the War of Independence was won, İsmet Pasha was appointed as the chief negotiator of the Turkish delegation, both for the Armistice of Mudanya and for the Treaty of Lausanne.
The Lausanne conference convened in late 1922 to settle the terms of a new treaty that would take the place of the Treaty of Sèvres. Inönü became famous for his stubborn resolve in determining Ankara's position as the legitimate, sovereign government of Turkey. After delivering his position, Inönü turned off his hearing aid during the speeches of British foreign secretary Lord Curzon. When Curzon had finished, Inönü reiterated his position as if Curzon had never said a word.
İnönü later served as the Prime Minister of Turkey for several terms, maintaining the system that Mustafa Kemal had put in place. He acted after every major crisis (such as the Sheikh Said rebellion or the attempted assassination of Mustafa Kemal in İzmir) to restore peace in the country. In October 1923 he suggested making Ankara the capital of Turkey, which was subsequently approved by the parliament. He briefly stepped down as prime minister, but replaced prime minister Fethi Okyar once the seriousness of the situation around the Sheikh Said Rebellion was realized by the Turkish government in spring 1925. While dealing with the Sheikh Said revolt he proclaimed a Turkish nationalist policy and encouraged the Turkification of the non-Turkish population. In September 1925, following the suppression of the Sheikh Said rebellion, he presided over the Reform Council for the East, which prepared the Report for Reform in the East; the report recommended impeding the establishment of a Kurdish elite, forbidding non-Turkish languages, and creating regional administrative units called Inspectorates-General, which were to be governed under martial law. Following this report, three Inspectorates-General were established in the Kurdish areas, comprising several provinces.
He tried to manage the economy with heavy-handed government intervention, especially after the 1929 economic crisis, by implementing an economic plan inspired by the "Five Year Plan" of the Soviet Union. In doing so, he took much private property under government control. Due to his efforts, to this day, more than 70% of land in Turkey is still owned by the state.
Desiring a more liberal economic system, Atatürk dissolved the government of İnönü and appointed Celâl Bayar, the founder of the first Turkish commercial bank Türkiye İş Bankası, as Prime Minister.
After the death of Atatürk on 10 November 1938, İnönü was viewed as the most appropriate candidate to succeed him, and was elected the second President of the Republic of Turkey. He enjoyed the official title of "Millî Şef", i.e. "National Chief". He became the first Turkish president of Kurdish descent.
World War II broke out in the first year of his presidency, and both the Allies and the Axis pressured İnönü to bring Turkey into the war on their side. The Germans sent Franz von Papen to Ankara in April 1939, while the British sent Hughe Knatchbull-Hugessen and the French René Massigli. On 23 April 1939, Turkish Foreign Minister Şükrü Saracoğlu told Knatchbull-Hugessen of his nation's fears of Italian claims of the Mediterranean as "Mare Nostrum" and German control of the Balkans, and suggested an Anglo-Soviet-Turkish alliance as the best way of countering the Axis. In May 1939, during the visit of Maxime Weygand to Turkey, İnönü told the French Ambassador René Massigli that he believed that the best way of stopping Germany was an alliance of Turkey, the Soviet Union, France and Britain; that if such an alliance came into being, the Turks would allow Soviet ground and air forces onto their soil; and that he wanted a major programme of French military aid to modernize the Turkish armed forces. The signing of the Molotov–Ribbentrop Pact on 23 August 1939 drew Turkey away from the Allies; the Turks had always believed that it was essential to have the Soviet Union as an ally to counter Germany, and thus the signing of the German-Soviet pact completely undercut the assumptions behind Turkish security policy. With the signing of the Molotov-Ribbentrop pact, İnönü chose to be neutral in World War II, as taking on Germany and the Soviet Union at the same time would be too much for Turkey, though he signed a treaty of alliance with Britain and France on 19 October 1939. It was only with France's defeat in June 1940 that İnönü abandoned the pro-Allied neutrality that he had followed since the beginning of the war.
A major embarrassment for the Turks occurred in July 1940 when the Germans captured and published documents from the Quai d'Orsay in Paris showing the Turks were aware of Operation Pike—as the Anglo-French plan in the winter of 1939–40 to bomb the oil fields in the Soviet Union from Turkey was codenamed—which was intended by Berlin to worsen relations between Ankara and Moscow. In turn, worsening relations between the Soviet Union and Turkey were intended to drive Turkey into the arms of the "Reich". After the publication of the French documents relating to Operation Pike, İnönü had to fire Saracoğlu as Foreign Minister following Soviet complaints and signed an economic treaty with Germany that placed Turkey within the German economic sphere of influence, but İnönü would go no further towards the Axis.
In the first half of 1941, Germany, intent upon invading the Soviet Union, went out of its way to improve relations with Turkey, as the "Reich" hoped for a benevolent Turkish neutrality when the German-Soviet war began. At the same time, the British had great hopes in the spring of 1941, when they dispatched an expeditionary force to Greece, that İnönü could be persuaded to enter the war on the Allied side; the British leadership hoped to create a Balkan front that would tie down German forces, and accordingly launched a major diplomatic offensive, with the Foreign Secretary Sir Anthony Eden visiting Ankara several times to meet with İnönü. İnönü always told Eden that the Turks would not join the British forces in Greece, and that Turkey would only enter the war if Germany attacked it. For his part, Papen offered İnönü parts of Greece if Turkey were to enter the war on the Axis side, an offer İnönü declined. In May 1941, when the Germans dispatched an expeditionary force to Iraq to fight against the British, İnönü refused Papen's request that the German forces be allowed transit rights to Iraq.
British Prime Minister Winston Churchill travelled to Ankara on 30 January 1943 for a conference with President İnönü, to urge Turkey's entry into the war on the Allied side. Churchill met secretly with İnönü in January 1943, inside a railroad car at the Yenice Station near Adana. However, by December 4–6, 1943, İnönü felt confident enough about the outcome of the war that he met openly with Franklin D. Roosevelt and Winston Churchill at the Second Cairo Conference. Until 1941, both Roosevelt and Churchill had thought that Turkey's continuing neutrality would serve the interests of the Allies by blocking the Axis from reaching the strategic oil reserves of the Middle East. But the early victories of the Axis up to the end of 1942 caused Roosevelt and Churchill to re-evaluate a possible Turkish participation in the war on the side of the Allies. Turkey had maintained a decently sized army and air force throughout the war, and Churchill wanted the Turks to open a new front in the Balkans. Roosevelt, on the other hand, still believed that a Turkish attack would be too risky, and that an eventual Turkish failure would have disastrous effects for the Allies.
İnönü knew very well the hardships which his country had suffered during decades of incessant war between 1908 and 1922 and was determined to keep Turkey out of another war as long as he could. The young Turkish Republic was still re-building, recovering from the losses of earlier wars, and lacked any modern weapons and the infrastructure to enter a war to be fought along and possibly within its borders. İnönü based his neutrality policy during the Second World War on the premise that the Western Allies and the Soviet Union would sooner or later have a falling out after the war. Thus, İnönü wanted assurances of financial and military aid for Turkey, as well as a guarantee that the United States and the United Kingdom would stand beside Turkey in the event of a Soviet invasion of the Turkish Straits after the war. In August 1944 İnönü broke off diplomatic relations with Germany, and on 5 January 1945 he severed diplomatic relations with Japan. Shortly afterwards, İnönü allowed Allied shipping to use the Turkish straits to send supplies to the Soviet Union, and on 25 February 1945 he declared war on Germany and Japan.
The post-war tensions and arguments surrounding the Turkish Straits would come to be known as the Turkish Straits crisis. The fear of Soviet invasion and Joseph Stalin's unconcealed desire for Soviet military bases in the Turkish Straits eventually caused Turkey to give up its principle of neutrality in foreign relations and join NATO in February 1952.
Under international pressure to transform the country to a democratic state, İnönü presided over the infamous 1946 elections, where voting was carried out under the gaze of onlookers who could determine which voters had voted for which parties, and where secrecy prevailed as to the subsequent counting of votes. Free and fair national elections had to wait till 1950, and on that occasion İnönü's government was defeated.
In the 1950 campaign, the leading figures of the opposition Democrat Party used the following slogan: "Geldi İsmet, kesildi kısmet" ("İsmet arrived, [our] fortune left"). İnönü presided over the peaceful transfer of power to the Democratic Party of Celâl Bayar and Adnan Menderes. For ten years he served as the leader of the opposition before returning to power as Prime Minister after the 1961 election, held after the military coup d'état of 1960.
Even though the pro-Menderes opposition was forbidden to contest the 1961 election (most of its surviving leaders were in prison), İnönü's forces still did not gain enough seats in the legislature to win a majority. They therefore had to form coalition governments until 1965. İnönü lost both the 1965 and 1969 general elections to a much younger man, Süleyman Demirel, but remained leader of the party until 1972, when he was defeated by leadership rival Bülent Ecevit. At a meeting in Bursa during the 1969 general election campaign, a young opposition supporter yelled at him, "You left us without food!", alluding to the privations of staying out of World War II. İnönü replied, "Yes, I left you without food, but I did not leave you fatherless", alluding to the millions of deaths on both sides of World War II.
A highly educated man, İnönü spoke fluent Arabic, English, French and German in addition to his native Turkish. He died of a heart attack on 25 December 1973, at the age of 89, and was interred opposite Atatürk's mausoleum at Anıtkabir in Ankara.
İnönü University and Malatya İnönü Stadium in Malatya are named after him, as is the İnönü Stadium in Istanbul, home of the Beşiktaş football club.
Inorganic chemistry
Inorganic chemistry deals with synthesis and behavior of inorganic and organometallic compounds. This field covers all chemical compounds except the myriad of organic compounds (carbon-based compounds, usually containing C-H bonds), which are the subjects of organic chemistry. The distinction between the two disciplines is far from absolute, as there is much overlap in the subdiscipline of organometallic chemistry. It has applications in every aspect of the chemical industry, including catalysis, materials science, pigments, surfactants, coatings, medications, fuels, and agriculture.
Many inorganic compounds are ionic compounds, consisting of cations and anions joined by ionic bonding. Examples of salts (which are ionic compounds) are magnesium chloride MgCl2, which consists of magnesium cations Mg2+ and chloride anions Cl−; or sodium oxide Na2O, which consists of sodium cations Na+ and oxide anions O2−. In any salt, the proportions of the ions are such that the electric charges cancel out, so that the bulk compound is electrically neutral. The ions are described by their oxidation state and their ease of formation can be inferred from the ionization potential (for cations) or from the electron affinity (anions) of the parent elements.
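The charge-balance rule above determines the formula of a salt arithmetically: the smallest neutral ratio of ions follows from the least common multiple of their charges. As a minimal sketch (the helper function is hypothetical, not from any chemistry library):

```python
from math import gcd

def neutral_ratio(cation_charge: int, anion_charge: int) -> tuple[int, int]:
    """Smallest (n_cation, n_anion) counts whose total charge is zero."""
    a, b = abs(cation_charge), abs(anion_charge)
    lcm = a * b // gcd(a, b)      # least common multiple of the two charges
    return lcm // a, lcm // b

print(neutral_ratio(2, -1))  # Mg2+ with Cl-  -> (1, 2), i.e. MgCl2
print(neutral_ratio(1, -2))  # Na+  with O2-  -> (2, 1), i.e. Na2O
```

Both examples from the text check out: one Mg2+ balances two Cl−, and two Na+ balance one O2−.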
Important classes of inorganic compounds are the oxides, the carbonates, the sulfates, and the halides. Many inorganic compounds are characterized by high melting points and ease of crystallization. Inorganic salts typically are poor conductors in the solid state. Whereas some salts (e.g., NaCl) are very soluble in water, others (e.g., FeS) are not.
The simplest inorganic reaction is double displacement, in which, on mixing two salts, the ions are swapped without a change in oxidation state. In redox reactions one reactant, the "oxidant", lowers its oxidation state and another reactant, the "reductant", has its oxidation state increased. The net result is an exchange of electrons. Electron exchange can also occur indirectly, e.g., in batteries, a key concept in electrochemistry.
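The electron bookkeeping behind a redox reaction can be sketched as follows: the electrons released by the reductant (whose oxidation state rises) must equal those accepted by the oxidant (whose oxidation state falls). This is an illustrative helper, not standard chemistry software:

```python
def electrons_transferred(changes: dict[str, tuple[int, int]]) -> int:
    """Given {species: (initial_ox_state, final_ox_state)}, verify that
    electrons lost equal electrons gained, and return that number."""
    lost = sum(f - i for i, f in changes.values() if f > i)    # reductants
    gained = sum(i - f for i, f in changes.values() if f < i)  # oxidants
    assert lost == gained, "redox reaction is not electron-balanced"
    return lost

# Fe2+ reduces Ce4+: Fe goes +2 -> +3, Ce goes +4 -> +3
print(electrons_transferred({"Fe": (2, 3), "Ce": (4, 3)}))  # 1
```

Note that the assignment of "oxidant" and "reductant" falls out of the sign of the oxidation-state change alone.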
When one reactant contains hydrogen atoms, a reaction can take place by exchanging protons in acid-base chemistry. In a more general definition, any chemical species capable of binding to electron pairs is called a Lewis acid; conversely any molecule that tends to donate an electron pair is referred to as a Lewis base. As a refinement of acid-base interactions, the HSAB theory takes into account polarizability and size of ions.
Inorganic compounds are found in nature as minerals. Soil may contain iron sulfide as pyrite or calcium sulfate as gypsum. Inorganic compounds are also found multitasking as biomolecules: as electrolytes (sodium chloride), in energy storage (ATP) or in construction (the polyphosphate backbone in DNA).
The first important man-made inorganic compound was ammonium nitrate for soil fertilization through the Haber process. Inorganic compounds are synthesized for use as catalysts such as vanadium(V) oxide and titanium(III) chloride, or as reagents in organic chemistry such as lithium aluminium hydride.
Subdivisions of inorganic chemistry are organometallic chemistry, cluster chemistry and bioinorganic chemistry. These fields are active areas of research in inorganic chemistry, aimed toward new catalysts, superconductors, and therapies.
Inorganic chemistry is a highly practical area of science. Traditionally, the scale of a nation's economy could be evaluated by its production of sulfuric acid. The top 20 inorganic chemicals manufactured in Canada, China, Europe, India, Japan, and the US (2005 data):
Aluminium sulfate, ammonia, ammonium nitrate, ammonium sulfate, carbon black, chlorine, hydrochloric acid, hydrogen, hydrogen peroxide, nitric acid, nitrogen, oxygen, phosphoric acid, sodium carbonate, sodium chlorate, sodium hydroxide, sodium silicate, sodium sulfate, sulfuric acid, and titanium dioxide.
The manufacturing of fertilizers is another practical application of industrial inorganic chemistry.
Descriptive inorganic chemistry focuses on the classification of compounds based on their properties. Partly the classification focuses on the position in the periodic table of the heaviest element (the element with the highest atomic weight) in the compound, partly by grouping compounds by their structural similarities.
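The heaviest-element classification described above is mechanical enough to sketch in code: parse the element symbols out of a formula and pick the one with the highest atomic weight. The function and the small weight table are hypothetical illustrations (a real program would use a full periodic-table library):

```python
import re

# Illustrative subset of standard atomic weights
ATOMIC_WEIGHT = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999,
                 "Na": 22.990, "Si": 28.085, "S": 32.06, "Cl": 35.45,
                 "Ti": 47.867, "Fe": 55.845, "Sn": 118.71}

def heaviest_element(formula: str) -> str:
    """Return the element with the highest atomic weight in a formula."""
    symbols = re.findall(r"[A-Z][a-z]?", formula)   # e.g. "SnCl4" -> Sn, Cl
    return max(symbols, key=ATOMIC_WEIGHT.__getitem__)

print(heaviest_element("SnCl4"))  # Sn -> classed with the tin compounds
print(heaviest_element("SiO2"))   # Si -> classed with the silicon compounds
```

Grouping by structural similarity, the other criterion mentioned, is far less mechanical and is not attempted here.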
Classifications of inorganic chemistry:
Classical coordination compounds feature metals bound to "lone pairs" of electrons residing on the main group atoms of ligands such as H2O, NH3, Cl−, and CN−. In modern coordination compounds almost all organic and inorganic compounds can be used as ligands. The "metal" usually is a metal from the groups 3-13, as well as the "trans"-lanthanides and "trans"-actinides, but from a certain perspective, all chemical compounds can be described as coordination complexes.
The stereochemistry of coordination complexes can be quite rich, as hinted at by Werner's separation of two enantiomers of [Co((OH)2Co(NH3)4)3]6+, an early demonstration that chirality is not inherent to organic compounds. A topical theme within this specialization is supramolecular coordination chemistry.
These species feature elements from groups I, II, III, IV, V, VI, VII, and 0 (excluding hydrogen) of the periodic table. Due to their often similar reactivity, the elements in group 3 (Sc, Y, and La) and group 12 (Zn, Cd, and Hg) are also generally included, and the lanthanides and actinides are sometimes included as well.
Main group compounds have been known since the beginnings of chemistry, e.g., elemental sulfur and the distillable white phosphorus. Experiments on oxygen, O2, by Lavoisier and Priestley not only identified an important diatomic gas, but opened the way for describing compounds and reactions according to stoichiometric ratios. The discovery of a practical synthesis of ammonia using iron catalysts by Carl Bosch and Fritz Haber in the early 1900s deeply impacted mankind, demonstrating the significance of inorganic chemical synthesis.
Typical main group compounds are SiO2, SnCl4, and N2O. Many main group compounds can also be classed as "organometallic", as they contain organic groups, e.g., B(CH3)3. Main group compounds also occur in nature, e.g., phosphate in DNA, and therefore may be classed as bioinorganic. Conversely, organic compounds lacking (many) hydrogen ligands can be classed as "inorganic", such as the fullerenes, buckytubes and binary carbon oxides.
Compounds containing metals from group 4 to 11 are considered transition metal compounds. Compounds with a metal from group 3 or 12 are sometimes also incorporated into this group, but also often classified as main group compounds.
Transition metal compounds show a rich coordination chemistry, varying from tetrahedral for titanium (e.g., TiCl4) to square planar for some nickel complexes to octahedral for coordination complexes of cobalt. A range of transition metals can be found in biologically important compounds, such as iron in hemoglobin.
Usually, organometallic compounds are considered to contain the M-C-H group. The metal (M) in these species can either be a main group element or a transition metal. Operationally, the definition of an organometallic compound is more relaxed to include also highly lipophilic complexes such as metal carbonyls and even metal alkoxides.
Organometallic compounds are mainly considered a special category because organic ligands are often sensitive to hydrolysis or oxidation, necessitating that organometallic chemistry employs more specialized preparative methods than was traditional in Werner-type complexes. Synthetic methodology, especially the ability to manipulate complexes in solvents of low coordinating power, enabled the exploration of very weakly coordinating ligands such as hydrocarbons, H2, and N2. Because the ligands are petrochemicals in some sense, the area of organometallic chemistry has greatly benefited from its relevance to industry.
Clusters can be found in all classes of chemical compounds. According to the commonly accepted definition, a cluster consists minimally of a triangular set of atoms that are directly bonded to each other, but metal-metal bonded dimetallic complexes are highly relevant to the area as well. Clusters occur in "pure" inorganic systems, organometallic chemistry, main group chemistry, and bioinorganic chemistry. The distinction between very large clusters and bulk solids is increasingly blurred. This interface is the chemical basis of nanoscience or nanotechnology, and arose specifically from the study of quantum size effects in cadmium selenide clusters. Thus, large clusters can be described as an array of bound atoms intermediate in character between a molecule and a solid.
By definition, these compounds occur in nature, but the subfield includes anthropogenic species, such as pollutants (e.g., methylmercury) and drugs (e.g., Cisplatin). The field, which incorporates many aspects of biochemistry, includes many kinds of compounds, e.g., the phosphates in DNA, and also metal complexes containing ligands that range from biological macromolecules, commonly peptides, to ill-defined species such as humic acid, and to water (e.g., coordinated to gadolinium complexes employed for MRI). Traditionally bioinorganic chemistry focuses on electron- and energy-transfer in proteins relevant to respiration. Medicinal inorganic chemistry includes the study of both non-essential and essential elements with applications to diagnosis and therapies.
This important area focuses on structure, bonding, and the physical properties of materials. In practice, solid state inorganic chemistry uses techniques such as crystallography to gain an understanding of the properties that result from collective interactions between the subunits of the solid. Included in solid state chemistry are metals and their alloys or intermetallic derivatives. Related fields are condensed matter physics, mineralogy, and materials science.
An alternative perspective on the area of inorganic chemistry begins with the Bohr model of the atom and, using the tools and models of theoretical chemistry and computational chemistry, expands into bonding in simple and then more complicated molecules. Precise quantum mechanical descriptions for multielectron species, the province of inorganic chemistry, are difficult. This challenge has spawned many semi-quantitative or semi-empirical approaches, including molecular orbital theory and ligand field theory. In parallel with these theoretical descriptions, approximate methodologies are employed, including density functional theory.
Exceptions to theories, qualitative and quantitative, are extremely important in the development of the field. For example, CuII2(OAc)4(H2O)2 is almost diamagnetic below room temperature whereas Crystal Field Theory predicts that the molecule would have two unpaired electrons. The disagreement between qualitative theory (paramagnetic) and observation (diamagnetic) led to the development of models for "magnetic coupling." These improved models led to the development of new magnetic materials and new technologies.
Inorganic chemistry has greatly benefited from qualitative theories. Such theories are easier to learn as they require little background in quantum theory. Within main group compounds, VSEPR theory powerfully predicts, or at least rationalizes, the structures of main group compounds, such as an explanation for why NH3 is pyramidal whereas ClF3 is T-shaped. For the transition metals, crystal field theory allows one to understand the magnetism of many simple complexes, such as why [FeIII(CN)6]3− has only one unpaired electron, whereas [FeIII(H2O)6]3+ has five. A particularly powerful qualitative approach to assessing the structure and reactivity begins with classifying molecules according to electron counting, focusing on the numbers of valence electrons, usually at the central atom in a molecule.
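The crystal field reasoning above can be sketched programmatically. The snippet below is an illustrative toy, not a general predictor: it only fills the t2g and eg levels of an idealized octahedral d^n ion, and the caller must supply whether the ligand field is strong (low spin, as for CN−) or weak (high spin, as for H2O); real spin states also depend on geometry and ligand-field strength.

```python
# Crystal-field sketch: unpaired electrons for an octahedral d^n ion.
# Illustrative only; the low_spin flag stands in for the ligand-field strength.

def unpaired_electrons_octahedral(d_count: int, low_spin: bool) -> int:
    """Fill t2g (3 orbitals) and eg (2 orbitals) for an octahedral d^n ion."""
    if not 0 <= d_count <= 10:
        raise ValueError("d electron count must be between 0 and 10")
    if low_spin:
        # Strong field: fill and pair the t2g set (up to 6 electrons) before eg.
        t2g = min(d_count, 6)
        eg = d_count - t2g
        unpaired_t2g = t2g if t2g <= 3 else 6 - t2g
        unpaired_eg = eg if eg <= 2 else 4 - eg
        return unpaired_t2g + unpaired_eg
    # Weak field (high spin): one electron in each of the 5 orbitals first,
    # then pairing begins (Hund's rule).
    singles = min(d_count, 5)
    paired = d_count - singles
    return singles - paired

# Fe(III) is d5. CN- gives a strong field (low spin); H2O gives a weak field.
print(unpaired_electrons_octahedral(5, low_spin=True))   # [FeIII(CN)6]3-  -> 1
print(unpaired_electrons_octahedral(5, low_spin=False))  # [FeIII(H2O)6]3+ -> 5
```

The two printed values reproduce the contrast cited in the text: one unpaired electron for low-spin [FeIII(CN)6]3− versus five for high-spin [FeIII(H2O)6]3+.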
A central construct in inorganic chemistry is the theory of molecular symmetry. Mathematical group theory provides the language to describe the shapes of molecules according to their point group symmetry. Group theory also enables factoring and simplification of theoretical calculations.
Spectroscopic features are analyzed and described with respect to the symmetry properties of, "inter alia", the vibrational or electronic states. Knowledge of the symmetry properties of the ground and excited states allows one to predict the numbers and intensities of absorptions in vibrational and electronic spectra. A classic application of group theory is the prediction of the number of C-O vibrations in substituted metal carbonyl complexes. The most common applications of symmetry to spectroscopy involve vibrational and electronic spectra.
Group Theory highlights commonalities and differences in the bonding of otherwise disparate species. For example, the metal-based orbitals transform identically for WF6 and W(CO)6, but the energies and populations of these orbitals differ significantly. A similar relationship exists between CO2 and molecular beryllium difluoride.
An alternative quantitative approach to inorganic chemistry focuses on energies of reactions. This approach is highly traditional and empirical, but it is also useful. Broad concepts that are couched in thermodynamic terms include redox potential, acidity, phase changes. A classic concept in inorganic thermodynamics is the Born-Haber cycle, which is used for assessing the energies of elementary processes such as electron affinity, some of which cannot be observed directly.
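The Born-Haber bookkeeping described above is just Hess's law arithmetic. As an illustrative sketch with common textbook values for NaCl (approximate, in kJ/mol; not taken from this article), the unobservable lattice enthalpy falls out as the difference between the overall enthalpy of formation and the sum of the observable steps:

```python
# Born-Haber cycle sketch for NaCl (textbook values, kJ/mol; treat as approximate).
# Hess's law: dHf = sublimation + ionization + 1/2 dissociation
#                   + electron affinity + lattice enthalpy,
# so the lattice enthalpy is obtained by difference.

steps = {
    "sublimation_Na":        +107,  # Na(s) -> Na(g)
    "ionization_Na":         +496,  # Na(g) -> Na+(g) + e-
    "half_dissociation_Cl2": +122,  # 1/2 Cl2(g) -> Cl(g)
    "electron_affinity_Cl":  -349,  # Cl(g) + e- -> Cl-(g)
}
dHf_NaCl = -411                     # Na(s) + 1/2 Cl2(g) -> NaCl(s)

lattice_enthalpy = dHf_NaCl - sum(steps.values())
print(lattice_enthalpy)  # -> -787 (kJ/mol)
```

The computed value, about -787 kJ/mol, matches the commonly quoted lattice enthalpy of NaCl, illustrating how the cycle yields a quantity (here related to the electron affinity step in reverse) that cannot be measured directly.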
An important aspect of inorganic chemistry focuses on reaction pathways, i.e. reaction mechanisms.
The mechanisms of main group compounds of groups 13-18 are usually discussed in the context of organic chemistry (organic compounds are main group compounds, after all). Elements heavier than C, N, O, and F often form compounds with more electrons than predicted by the octet rule, as explained in the article on hypervalent molecules. The mechanisms of their reactions differ from organic compounds for this reason. Elements lighter than carbon (B, Be, Li) as well as Al and Mg often form electron-deficient structures that are electronically akin to carbocations. Such electron-deficient species tend to react via associative pathways. The chemistry of the lanthanides mirrors many aspects of chemistry seen for aluminium.
Transition metal and main group compounds often react differently. The important role of d-orbitals in bonding strongly influences the pathways and rates of ligand substitution and dissociation. These themes are covered in articles on coordination chemistry and ligand. Both associative and dissociative pathways are observed.
An overarching aspect of mechanistic transition metal chemistry is the kinetic lability of the complex, illustrated by the exchange of free and bound water in the prototypical complexes [M(H2O)6]n+:

[M(H2O)6]n+ + H2O* → [M(H2O)5(H2O*)]n+ + H2O
The rates of water exchange vary by 20 orders of magnitude across the periodic table, with labile lanthanide complexes at one extreme and inert Ir(III) species being the slowest.
Redox reactions are prevalent for the transition elements. Two classes of redox reaction are considered: atom-transfer reactions, such as oxidative addition/reductive elimination, and electron-transfer. A fundamental redox reaction is "self-exchange", which involves the degenerate reaction between an oxidant and a reductant. For example, permanganate and its one-electron reduced relative manganate exchange one electron:

[MnO4]− + [Mn*O4]2− → [MnO4]2− + [Mn*O4]−
Coordinated ligands display reactivity distinct from the free ligands. For example, the acidity of the ammonia ligands in [Co(NH3)6]3+ is elevated relative to NH3 itself. Alkenes bound to metal cations are reactive toward nucleophiles whereas alkenes normally are not. The large and industrially important area of catalysis hinges on the ability of metals to modify the reactivity of organic ligands. Homogeneous catalysis occurs in solution and heterogeneous catalysis occurs when gaseous or dissolved substrates interact with surfaces of solids. Traditionally homogeneous catalysis is considered part of organometallic chemistry and heterogeneous catalysis is discussed in the context of surface science, a subfield of solid state chemistry. But the basic inorganic chemical principles are the same. Transition metals, almost uniquely, react with small molecules such as CO, H2, O2, and C2H4. The industrial significance of these feedstocks drives the active area of catalysis. Ligands can also undergo ligand transfer reactions such as transmetalation.
Because of the diverse range of elements and the correspondingly diverse properties of the resulting derivatives, inorganic chemistry is closely associated with many methods of analysis. Older methods tended to examine bulk properties such as the electrical conductivity of solutions, melting points, solubility, and acidity. With the advent of quantum theory and the corresponding expansion of electronic apparatus, new tools have been introduced to probe the electronic properties of inorganic molecules and solids. Often these measurements provide insights relevant to theoretical models. For example, measurements on the photoelectron spectrum of methane demonstrated that describing the bonding by the two-center, two-electron bonds predicted between the carbon and hydrogen using Valence Bond Theory is not appropriate for describing ionisation processes in a simple way. Such insights led to the popularization of molecular orbital theory as fully delocalised orbitals are a more appropriate simple description of electron removal and electron excitation.
Commonly encountered techniques are:
Although some inorganic species can be obtained in pure form from nature, most are synthesized in chemical plants and in the laboratory.
Inorganic synthetic methods can be classified roughly according to the volatility or solubility of the component reactants. Soluble inorganic compounds are prepared using methods of organic synthesis. For metal-containing compounds that are reactive toward air, Schlenk line and glove box techniques are followed. Volatile compounds and gases are manipulated in “vacuum manifolds” consisting of glass piping interconnected through valves, the entirety of which can be evacuated to 0.001 mm Hg or less. Compounds are condensed using liquid nitrogen (b.p. 77 K) or other cryogens. Solids are typically prepared using tube furnaces, the reactants and products being sealed in containers, often made of fused silica (amorphous SiO2) but sometimes more specialized materials such as welded Ta tubes or Pt “boats”. Products and reactants are transported between temperature zones to drive reactions.
Ingmar Bergman
Ernst Ingmar Bergman (14 July 1918 – 30 July 2007) was a Swedish director, writer, and producer who worked in film, television, theatre, and radio. Considered to be among the most accomplished and influential filmmakers of all time, Bergman's films include "Smiles of a Summer Night" (1955), "The Seventh Seal" (1957), "Wild Strawberries" (1957), "Persona" (1966), "Cries and Whispers" (1972), "Scenes from a Marriage" (1973), and "Fanny and Alexander" (1982); the last two exist in extended television versions.
Bergman directed over sixty films and documentaries for cinematic release and for television screenings, most of which he also wrote. He also directed over 170 plays. He eventually forged a creative partnership with his cinematographers Gunnar Fischer and Sven Nykvist. Among his company of actors were Harriet and Bibi Andersson, Liv Ullmann, Gunnar Björnstrand, Erland Josephson, Ingrid Thulin, and Max von Sydow. Most of his films were set in Sweden, and many films from "Through a Glass Darkly" (1961) onward were filmed on the island of Fårö.
Philip French referred to Bergman as "one of the greatest artists of the 20th century ... he found in literature and the performing arts a way of both recreating and questioning the human condition." Director Martin Scorsese commented: "If you were alive in the 50s and the 60s and of a certain age, a teenager on your way to becoming an adult, and you wanted to make movies, I don't see how you couldn't be influenced by Bergman ...It's impossible to overestimate the effect that those films had on people."
Ernst Ingmar Bergman was born on 14 July 1918 in Uppsala, Sweden, the son of Erik Bergman, a Lutheran minister and later chaplain to the King of Sweden, and Karin ("née" Åkerblom), a nurse who also had Walloon ancestors. He grew up with his older brother Dag and sister Margareta surrounded by religious imagery and discussion. His father was a conservative parish minister with strict ideas of parenting. Ingmar was locked up in dark closets for infractions such as wetting himself. "While father preached away in the pulpit and the congregation prayed, sang, or listened", Ingmar wrote in his autobiography "Laterna Magica",
I devoted my interest to the church's mysterious world of low arches, thick walls, the smell of eternity, the coloured sunlight quivering above the strangest vegetation of medieval paintings and carved figures on ceilings and walls. There was everything that one's imagination could desire—angels, saints, dragons, prophets, devils, humans ...
Although raised in a devout Lutheran household, Bergman later stated that he lost his faith when aged eight, and only came to terms with this fact while making "Winter Light" in 1962. His interest in theatre and film began early: "At the age of nine, he traded a set of tin soldiers for a magic lantern, a possession that altered the course of his life. Within a year, he had created, by playing with this toy, a private world in which he felt completely at home, he recalled. He fashioned his own scenery, marionettes, and lighting effects and gave puppet productions of Strindberg plays in which he spoke all the parts."
Bergman attended Palmgren's School as a teenager. His school years were unhappy, and he remembered them unfavourably in later years. In a 1944 letter concerning the film "Torment" (sometimes known as "Frenzy"), which sparked debate on the condition of Swedish high schools (and which Bergman had written), the school's principal Henning Håkanson wrote, among other things, that Bergman had been a "problem child". Bergman wrote in a response that he had strongly disliked the emphasis on homework and testing in his formal schooling.
In 1934, aged 16, he was sent to Germany to spend the summer holidays with family friends. He attended a Nazi rally in Weimar at which he saw Adolf Hitler. He later wrote in "Laterna Magica" ("The Magic Lantern") about the visit to Germany, describing how the German family had put a portrait of Hitler on the wall by his bed, and that "for many years, I was on Hitler's side, delighted by his success and saddened by his defeats". Bergman commented that "Hitler was unbelievably charismatic. He electrified the crowd. ... The Nazism I had seen seemed fun and youthful". Bergman did two five-month stretches in Sweden of mandatory military service.
Bergman enrolled at Stockholm University College (later renamed Stockholm University) in 1937, to study art and literature. He spent most of his time involved in student theatre and became a "genuine movie addict". At the same time, a romantic involvement led to a physical confrontation with his father which resulted in a break in their relationship which lasted for many years. Although he did not graduate from the university, he wrote a number of plays and an opera, and became an assistant director at a local theatre. In 1942, he was given the opportunity to direct one of his own scripts, "Caspar's Death". The play was seen by members of Svensk Filmindustri, which then offered Bergman a position working on scripts. He married Else Fisher in 1943.
Bergman's film career began in 1941 with his work rewriting scripts, but his first major accomplishment was in 1944 when he wrote the screenplay for "Torment" (a.k.a. "Frenzy") ("Hets"), a film directed by Alf Sjöberg. Along with writing the screenplay, he was also appointed assistant director of the film. In his second autobiographical book, "Images: My Life in Film", Bergman describes the filming of the exteriors as his actual film directorial debut. The film sparked debate on Swedish formal education. When Henning Håkanson (the principal of the high school Bergman had attended) wrote a letter following the film's release, Bergman, according to scholar Frank Gado, disparaged in a response what he viewed as Håkanson's implication that students "who did not fit some arbitrary prescription of worthiness deserved the system's cruel neglect". Bergman also stated in the letter that he "hated school as a principle, as a system and as an institution. And as such I have definitely not wanted to criticize my own school, but all schools." The international success of this film led to Bergman's first opportunity to direct a year later. During the next ten years he wrote and directed more than a dozen films, including "Prison" ("Fängelse") in 1949, as well as "Sawdust and Tinsel" ("Gycklarnas afton") and "Summer with Monika" ("Sommaren med Monika"), both released in 1953.
Bergman first achieved worldwide success with "Smiles of a Summer Night" ("Sommarnattens leende", 1955), which won for "Best poetic humour" and was nominated for the Palme d'Or at Cannes the following year. This was followed by "The Seventh Seal" ("Det sjunde inseglet") and "Wild Strawberries" ("Smultronstället"), released in Sweden ten months apart in 1957. "The Seventh Seal" won a special jury prize and was nominated for the Palme d'Or at Cannes, and "Wild Strawberries" won numerous awards for Bergman and its star, Victor Sjöström. Bergman continued to be productive for the next two decades. From the early 1960s, he spent much of his life on the island of Fårö, where he made several films.
In the early 1960s he directed three films that explored the theme of faith and doubt in God, "Through a Glass Darkly" ("Såsom i en Spegel", 1961), "Winter Light" ("Nattvardsgästerna", 1962), and "The Silence" ("Tystnaden", 1963). Critics created the notion that the common themes in these three films made them a trilogy or cinematic triptych. Bergman initially responded that he did not plan these three films as a trilogy and that he could not see any common motifs in them, but he later seemed to adopt the notion, with some equivocation. His parody of the films of Federico Fellini, "All These Women" ("För att inte tala om alla dessa kvinnor") was released in 1964.
Largely a two-hander with Bibi Andersson and Liv Ullmann, "Persona" (1966) is a film that Bergman himself considered one of his most important works. While the highly experimental film won few awards, it has been considered his masterpiece. Other films of the period include "The Virgin Spring" ("Jungfrukällan", 1960), "Hour of the Wolf" ("Vargtimmen", 1968), "Shame" ("Skammen", 1968) and "The Passion of Anna" ("En Passion", 1969). With his cinematographer Sven Nykvist, Bergman made use of a crimson color scheme for "Cries and Whispers" (1972), which received a nomination for the Academy Award for Best Picture. He also produced extensively for Swedish television at this time. Two works of note were "Scenes from a Marriage" ("Scener ur ett äktenskap", 1973) and "The Magic Flute" ("Trollflöjten", 1975).
On 30 January 1976, while rehearsing August Strindberg's "The Dance of Death" at the Royal Dramatic Theatre in Stockholm, he was arrested by two plainclothes police officers and charged with income tax evasion. The impact of the event on Bergman was devastating. He suffered a nervous breakdown as a result of the humiliation, and was hospitalised in a state of deep depression.
The investigation was focused on an alleged 1970 transaction of 500,000 Swedish kronor (SEK) between Bergman's Swedish company "Cinematograf" and its Swiss subsidiary "Persona", an entity that was mainly used for the paying of salaries to foreign actors. Bergman dissolved "Persona" in 1974 after having been notified by the Swedish Central Bank and subsequently reported the income. On 23 March 1976, the special prosecutor Anders Nordenadler dropped the charges against Bergman, saying that the alleged crime had no legal basis, saying it would be like bringing "charges against a person who has stolen his own car, thinking it was someone else's". Director General Gösta Ekman, chief of the Swedish Internal Revenue Service, defended the failed investigation, saying that the investigation was dealing with important legal material and that Bergman was treated just like any other suspect. He expressed regret that Bergman had left the country, hoping that Bergman was a "stronger" person now when the investigation had shown that he had not done any wrong.
Although the charges were dropped, Bergman became disconsolate, fearing he would never again return to directing. Despite pleas by the Swedish prime minister Olof Palme, high public figures, and leaders of the film industry, he vowed never to work in Sweden again. He closed down his studio on the island of Fårö, suspended two announced film projects, and went into self-imposed exile in Munich, Germany. Harry Schein, director of the Swedish Film Institute, estimated the immediate damage as ten million SEK (kronor) and hundreds of jobs lost.
Bergman then briefly considered the possibility of working in America; his next film, "The Serpent's Egg" (1977) was a German-U.S. production and his second English-language film (the first being "The Touch", 1971). This was followed by a British-Norwegian co-production, "Autumn Sonata" ("Höstsonaten", 1978) starring Ingrid Bergman (no relation), and "From the Life of the Marionettes" ("Aus dem Leben der Marionetten", 1980) which was a British-German co-production.
He temporarily returned to his homeland to direct "Fanny and Alexander" ("Fanny och Alexander", 1982). Bergman stated that the film would be his last, and that afterwards he would focus on directing theatre. After that he wrote several film scripts and directed a number of television specials. As with previous work for television, some of these productions were later theatrically released. The last such work was "Saraband" (2003), a sequel to "Scenes from a Marriage" and directed by Bergman when he was 84 years old.
Although he continued to operate from Munich, by mid-1978 Bergman had overcome some of his bitterness toward the Swedish government. In July of that year he visited Sweden, celebrating his sixtieth birthday on the island of Fårö, and partly resumed his work as a director at Royal Dramatic Theatre. To honour his return, the Swedish Film Institute launched a new Ingmar Bergman Prize to be awarded annually for excellence in filmmaking. Still, he remained in Munich until 1984. In one of the last major interviews with Bergman, conducted in 2005 on the island of Fårö, Bergman said that despite being active during the exile, he had effectively lost eight years of his professional life.
Bergman retired from filmmaking in December 2003. He had hip surgery in October 2006 and was making a difficult recovery. He died in his sleep at age 89; his body was found at his home on the island of Fårö, on 30 July 2007, sixteen days after his 89th birthday. (It was the same day another renowned existentialist film director, Michelangelo Antonioni, died.) The interment was private, at the Fårö Church on 18 August 2007. A place in the Fårö churchyard was prepared for him under heavy secrecy. Although he was buried on the island of Fårö, his name and date of birth were inscribed under his wife's name on a tomb at Roslagsbro churchyard, Norrtälje Municipality, several years before his death.
Bergman developed a personal "repertory company" of Swedish actors whom he repeatedly cast in his films, including Max von Sydow, Bibi Andersson, Harriet Andersson, Erland Josephson, Ingrid Thulin, Gunnel Lindblom, and Gunnar Björnstrand, each of whom appeared in at least five Bergman features. Norwegian actress Liv Ullmann, who appeared in nine of Bergman's films and one televisual film ("Saraband"), was the last to join this group (in the film "Persona"), and ultimately became the most closely associated with Bergman, both artistically and personally. They had a daughter together, Linn Ullmann (born 1966).
In Bergman's working arrangement with Sven Nykvist, his best-known cinematographer, the two men developed sufficient rapport to allow Bergman not to worry about the composition of a shot until the day before it was filmed. On the morning of the shoot, he would briefly speak to Nykvist about the mood and composition he hoped for, and then leave Nykvist to work without interruption or comment until the post-production discussion of the next day's work.
By Bergman's own account, he never had a problem with funding. He cited two reasons for this: one, that he did not live in the United States, which he viewed as obsessed with box-office earnings; and two, that his films tended to be low-budget affairs. ("Cries and Whispers", for instance, was finished for about $450,000, while "Scenes from a Marriage", a six-episode television feature, cost only $200,000.)
Bergman usually wrote his films' screenplays, thinking about them for months or years before starting the actual process of writing, which he viewed as somewhat tedious. His earlier films are carefully constructed and are either based on his plays or written in collaboration with other authors. Bergman stated that in his later works, when on occasion his actors would want to do things differently from his own intention, he would let them, noting that the results were often "disastrous" when he did not do so. As his career progressed, Bergman increasingly let his actors improvise their dialogue. In his later films, he wrote just the ideas informing the scene and allowed his actors to determine the exact dialogue. When viewing daily rushes, Bergman stressed the importance of being critical but unemotive, claiming that he asked himself not if the work was great or terrible, but rather if it was sufficient or needed to be reshot.
Bergman's films usually deal with existential questions of mortality, loneliness, and religious faith. In addition to these cerebral topics, however, sexual desire features in the foreground of most of his films, whether the central event is medieval plague ("The Seventh Seal"), upper-class family activity in early twentieth century Uppsala ("Fanny and Alexander"), or contemporary alienation ("The Silence"). His female characters are usually more in touch with their sexuality than their male equivalents, and unafraid to proclaim it, sometimes with breathtaking overtness (as in "Cries and Whispers") as would define the work of "the conjurer," as Bergman called himself in a 1960 "TIME" cover story. In an interview with "Playboy" in 1964, he said: "The manifestation of sex is very important, and particularly to me, for above all, I don't want to make merely intellectual films. I want audiences to feel, to sense my films. This to me is much more important than their understanding them." Film, Bergman said, was his demanding mistress. While he was a social democrat as an adult, Bergman stated that "as an artist I'm not politically involved ... I don't make propaganda for either one attitude or the other."
When asked in the series of interviews later titled "Ingmar Bergman – 3 dokumentärer om film, teater, Fårö och livet" conducted by Marie Nyreröd for Swedish TV and released in 2004, Bergman said that of his works, he held "Winter Light", "Persona", and "Cries and Whispers" in the highest regard. There he also states that he managed to push the envelope of film making in the films "Persona" and "Cries and Whispers". Bergman stated on numerous occasions (for example in the interview book "Bergman on Bergman") that "The Silence" meant the end of the era in which religious questions were a major concern of his films. Bergman said that he would get depressed by his own films: "jittery and ready to cry... and miserable." In the same interview he also stated: "If there is one thing I miss about working with films, it is working with Sven" (Nykvist), the third cinematographer with whom he had collaborated.
Although Bergman was universally famous for his contribution to cinema, he was also an active and productive stage director all his life. During his studies at what was then Stockholm University College, he became active in its student theatre, where he made a name for himself early on. His first work after graduation was as a trainee-director at a Stockholm theatre. At twenty-six years, he became the youngest theatrical manager in Europe at the Helsingborg City Theatre. He stayed at Helsingborg for three years and then became the director at Gothenburg city theatre from 1946 to 1949.
He became director of the Malmö city theatre in 1953, and remained for seven years. Many of his star actors were people with whom he began working on stage. He was the director of the Royal Dramatic Theatre in Stockholm from 1960 to 1966, and manager from 1963 to 1966, where he began a long-time collaboration with choreographer Donya Feuer.
After Bergman left Sweden because of the tax evasion incident, he became director of the "Residenz Theatre" of Munich, Germany (1977–84). He remained active in theatre throughout the 1990s and made his final production on stage with Henrik Ibsen's "The Wild Duck" at the Royal Dramatic Theatre in 2002.
Bergman was married five times:
The first four marriages ended in divorce, while the last ended when his wife Ingrid died of stomach cancer in 1995, aged 65. Aside from his marriages, Bergman had romantic relationships with actresses Harriet Andersson (1952–55), Bibi Andersson (1955–59), and Liv Ullmann (1965–70). He was the father of writer Linn Ullmann with Ullmann. In all, Bergman had nine children, one of whom predeceased him. Bergman eventually married all the mothers of his children, with the exception of Liv Ullmann. His daughter with his last wife, Ingrid von Rosen, was born twelve years before their marriage.
Although Bergman described himself as one who had lost his faith in an afterlife, Max von Sydow stated in an interview that he had had many discussions with him about religion, and indicated that Bergman's belief in the afterlife was restored.
In 1958, he won the Best Director award for "Brink of Life" at the Cannes Film Festival, and won the Golden Bear for "Wild Strawberries" at the Berlin International Film Festival. In 1971, Bergman received the Irving G. Thalberg Memorial Award at the Academy Awards ceremony. Three of his films ("Through a Glass Darkly", "The Virgin Spring", and "Fanny and Alexander") won the Academy Award for Best Foreign Language Film. In 1997, he was awarded the Palme des Palmes (Palm of the Palms) at the 50th anniversary of the Cannes Film Festival. He won many other awards and has been nominated for numerous other awards.
Academy Awards
Bergman's work was a point of reference and inspiration for director Woody Allen. Bergman's films are mentioned and praised in "Annie Hall" and other Allen films. Allen also admired Bergman's longtime director of photography Sven Nykvist and invited him to be his DP on "Crimes and Misdemeanors".
After Bergman died, a large archive of notes was donated to the Swedish Film Institute. Among the notes are several unpublished and unfinished scripts both for stage and films, and many more ideas for works in different stages of development. A never-performed play has the title "Kärlek utan älskare" ("Love without lovers"), and has the note "Complete disaster!" written on the envelope; the play is about a director who disappears and an editor who tries to complete a work he has left unfinished. Other canceled projects include the script for a pornographic film which Bergman abandoned since he did not think it was alive enough, a play about a cannibal, some loose scenes set inside a womb, a film about the life of Jesus, a film about "The Merry Widow", and a play with the title "Från sperm till spöke" ("From sperm to spook"). The Swedish director Marcus Lindeen went through the material, and inspired by "Kärlek utan älskare" he took samples from many of the works and turned them into a play, titled "Arkivet för orealiserbara drömmar och visioner" ("The archive for unrealisable dreams and visions"). Lindeen's play premiered on 28 May 2012 at the Stockholm City Theatre.
Terrence Rafferty of "The New York Times" wrote that throughout the 1960s, when Bergman "was considered pretty much the last word in cinematic profundity, his every tic was scrupulously pored over, analyzed, elaborated in ingenious arguments about identity, the nature of film, the fate of the artist in the modern world and so on."
Writer and director Richard Ayoade counts Bergman as one of his inspirations. In 2017, the British Film Institute (BFI) hosted an Ingmar Bergman season and Ayoade said in a "Guardian" interview that he saw everything in it, "which was one of the best two months ever." The BFI's programme included a discussion with Ayoade on Bergman's 1966 film, "Persona", before a screening.
Isaac Newton
Sir Isaac Newton (25 December 1642 – 20 March 1726/27) was an English mathematician, physicist, astronomer, theologian, and author (described in his own day as a "natural philosopher") who is widely recognised as one of the most influential scientists of all time and as a key figure in the scientific revolution. His book "Philosophiæ Naturalis Principia Mathematica" ("Mathematical Principles of Natural Philosophy"), first published in 1687, laid the foundations of classical mechanics. Newton also made seminal contributions to optics, and shares credit with Gottfried Wilhelm Leibniz for developing the infinitesimal calculus.
In "Principia", Newton formulated the laws of motion and universal gravitation that formed the dominant scientific viewpoint until it was superseded by the theory of relativity. Newton used his mathematical description of gravity to prove Kepler's laws of planetary motion, account for tides, the trajectories of comets, the precession of the equinoxes and other phenomena, eradicating doubt about the Solar System's heliocentricity. He demonstrated that the motion of objects on Earth and celestial bodies could be accounted for by the same principles. Newton's inference that the Earth is an oblate spheroid was later confirmed by the geodetic measurements of Maupertuis, La Condamine, and others, convincing most European scientists of the superiority of Newtonian mechanics over earlier systems.
Newton built the first practical reflecting telescope and developed a sophisticated theory of colour based on the observation that a prism separates white light into the colours of the visible spectrum. His work on light was collected in his highly influential book "Opticks", published in 1704. He also formulated an empirical law of cooling, made the first theoretical calculation of the speed of sound, and introduced the notion of a Newtonian fluid. In addition to his work on calculus, as a mathematician Newton contributed to the study of power series, generalised the binomial theorem to non-integer exponents, developed a method for approximating the roots of a function, and classified most of the cubic plane curves.
Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge. He was a devout but unorthodox Christian who privately rejected the doctrine of the Trinity. Unusually for a member of the Cambridge faculty of the day, he refused to take holy orders in the Church of England. Beyond his work on the mathematical sciences, Newton dedicated much of his time to the study of alchemy and biblical chronology, but most of his work in those areas remained unpublished until long after his death. Politically and personally tied to the Whig party, Newton served two brief terms as Member of Parliament for the University of Cambridge, in 1689–90 and 1701–02. He was knighted by Queen Anne in 1705 and spent the last three decades of his life in London, serving as Warden (1696–1700) and Master (1700–1727) of the Royal Mint, as well as president of the Royal Society (1703–1727).
Isaac Newton was born (according to the Julian calendar, in use in England at the time) on Christmas Day, 25 December 1642 (NS 4 January 1643) "an hour or two after midnight", at Woolsthorpe Manor in Woolsthorpe-by-Colsterworth, a hamlet in the county of Lincolnshire. His father, also named Isaac Newton, had died three months before. Born prematurely, Newton was a small child; his mother Hannah Ayscough reportedly said that he could have fit inside a quart mug. When Newton was three, his mother remarried and went to live with her new husband, the Reverend Barnabas Smith, leaving her son in the care of his maternal grandmother, Margery Ayscough (née Blythe). Newton disliked his stepfather and maintained some enmity towards his mother for marrying him, as revealed by this entry in a list of sins committed up to the age of 19: "Threatening my father and mother Smith to burn them and the house over them." Newton's mother had three children (Mary, Benjamin and Hannah) from her second marriage.
From the age of about twelve until he was seventeen, Newton was educated at The King's School, Grantham, which taught Latin and Greek and probably imparted a significant foundation of mathematics. He was removed from school and returned to Woolsthorpe-by-Colsterworth by October 1659. His mother, widowed for the second time, attempted to make him a farmer, an occupation he hated. Henry Stokes, master at The King's School, persuaded his mother to send him back to school. Motivated partly by a desire for revenge against a schoolyard bully, he became the top-ranked student, distinguishing himself mainly by building sundials and models of windmills.
In June 1661, he was admitted to Trinity College, Cambridge, on the recommendation of his uncle Rev William Ayscough, who had studied there. He started as a subsizar—paying his way by performing valet's duties—until he was awarded a scholarship in 1664, guaranteeing him four more years until he could get his MA. At that time, the college's teachings were based on those of Aristotle, whom Newton supplemented with modern philosophers such as Descartes, and astronomers such as Galileo and Thomas Street, through whom he learned of Kepler's work. He set down in his notebook a series of "Quaestiones" about mechanical philosophy as he found it. In 1665, he discovered the generalised binomial theorem and began to develop a mathematical theory that later became calculus. Soon after Newton had obtained his BA degree in August 1665, the university temporarily closed as a precaution against the Great Plague. Although he had been undistinguished as a Cambridge student, Newton's private studies at his home in Woolsthorpe over the subsequent two years saw the development of his theories on calculus, optics, and the law of gravitation.
In April 1667, he returned to Cambridge and in October was elected as a fellow of Trinity. Fellows were required to become ordained priests, although this was not enforced in the restoration years and an assertion of conformity to the Church of England was sufficient. However, by 1675 the issue could not be avoided and by then his unconventional views stood in the way. Nevertheless, Newton managed to avoid it by means of special permission from Charles II.
His studies had impressed the Lucasian professor Isaac Barrow, who was more anxious to develop his own religious and administrative potential (he became master of Trinity two years later); in 1669 Newton succeeded him, only one year after receiving his MA. He was elected a Fellow of the Royal Society (FRS) in 1672.
Newton's work has been said "to distinctly advance every branch of mathematics then studied." His work on the subject usually referred to as fluxions or calculus, seen in a manuscript of October 1666, is now published among Newton's mathematical papers. The author of the manuscript "De analysi per aequationes numero terminorum infinitas", sent by Isaac Barrow to John Collins in June 1669, was identified by Barrow in a letter sent to Collins in August of that year as "[...] of an extraordinary genius and proficiency in these things."
Newton later became involved in a dispute with Leibniz over priority in the development of calculus (the Leibniz–Newton calculus controversy). Most modern historians believe that Newton and Leibniz developed calculus independently, although with very different mathematical notations. Occasionally it has been suggested that Newton published almost nothing about it until 1693, and did not give a full account until 1704, while Leibniz began publishing a full account of his methods in 1684. Leibniz's notation and "differential Method", nowadays recognised as much more convenient, were adopted by continental European mathematicians, and after 1820 or so, also by British mathematicians.
Such a suggestion fails to account for the calculus in Book 1 of Newton's "Principia" itself and in its forerunner manuscripts, such as "De motu corporum in gyrum" of 1684; this content has been pointed out by critics of both Newton's time and modern times.
His work extensively uses calculus in geometric form based on limiting values of the ratios of vanishingly small quantities: in the "Principia" itself, Newton gave demonstration of this under the name of "the method of first and last ratios" and explained why he put his expositions in this form, remarking also that "hereby the same thing is performed as by the method of indivisibles."
Because of this, the "Principia" has been called "a book dense with the theory and application of the infinitesimal calculus" in modern times and in Newton's time "nearly all of it is of this calculus." His use of methods involving "one or more orders of the infinitesimally small" is present in his "De motu corporum in gyrum" of 1684 and in his papers on motion "during the two decades preceding 1684".
Newton had been reluctant to publish his calculus because he feared controversy and criticism. He was close to the Swiss mathematician Nicolas Fatio de Duillier. In 1691, Duillier started to write a new version of Newton's "Principia", and corresponded with Leibniz. In 1693, the relationship between Duillier and Newton deteriorated and the book was never completed.
Starting in 1699, other members of the Royal Society accused Leibniz of plagiarism. The dispute then broke out in full force in 1711 when the Royal Society proclaimed in a study that it was Newton who was the true discoverer and labelled Leibniz a fraud; it was later found that Newton wrote the study's concluding remarks on Leibniz. Thus began the bitter controversy which marred the lives of both Newton and Leibniz until the latter's death in 1716.
Newton is generally credited with the generalised binomial theorem, valid for any exponent. He discovered Newton's identities, Newton's method, classified cubic plane curves (polynomials of degree three in two variables), made substantial contributions to the theory of finite differences, and was the first to use fractional indices and to employ coordinate geometry to derive solutions to Diophantine equations. He approximated partial sums of the harmonic series by logarithms (a precursor to Euler's summation formula) and was the first to use power series with confidence and to revert power series. Newton's work on infinite series was inspired by Simon Stevin's decimals.
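The root-approximation technique mentioned above, now known as Newton's method (or the Newton–Raphson method), repeatedly improves a guess by following the tangent line of the function to its zero. The following minimal Python sketch (an illustration in modern notation, not Newton's own formulation) shows the iteration applied to finding the square root of 2:

```python
def newton_root(f, df, x0, tol=1e-12, max_iter=50):
    """Approximate a root of f using Newton's iteration:
    x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)  # tangent-line correction
        x -= step
        if abs(step) < tol:  # stop once updates become negligible
            break
    return x

# Example: sqrt(2) as the positive root of x^2 - 2, starting from x0 = 1
root = newton_root(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

Starting from 1.0, the iteration converges to 1.41421356… in only a handful of steps, illustrating the method's characteristic quadratic convergence near a simple root.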
When Newton received his MA and became a Fellow of the "College of the Holy and Undivided Trinity" in 1667, he made the commitment that "I will either set Theology as the object of my studies and will take holy orders when the time prescribed by these statutes [7 years] arrives, or I will resign from the college." Up until this point he had not thought much about religion and had twice signed his agreement to the thirty-nine articles, the basis of Church of England doctrine.
He was appointed Lucasian Professor of Mathematics in 1669, on Barrow's recommendation. During that time, any Fellow of a college at Cambridge or Oxford was required to take holy orders and become an ordained Anglican priest. However, the terms of the Lucasian professorship required that the holder be active in the church – presumably, so as to have more time for science. Newton argued that this should exempt him from the ordination requirement, and Charles II, whose permission was needed, accepted this argument. Thus a conflict between Newton's religious views and Anglican orthodoxy was averted.
In 1666, Newton observed that the spectrum of colours exiting a prism in the position of minimum deviation is oblong, even when the light ray entering the prism is circular, which is to say, the prism refracts different colours by different angles. This led him to conclude that colour is a property intrinsic to light – a point which had, until then, been a matter of debate.
From 1670 to 1672, Newton lectured on optics. During this period he investigated the refraction of light, demonstrating that the multicoloured spectrum produced by a prism could be recomposed into white light by a lens and a second prism. Modern scholarship has revealed that Newton's analysis and resynthesis of white light owes a debt to corpuscular alchemy.
He showed that coloured light does not change its properties by separating out a coloured beam and shining it on various objects and that regardless of whether reflected, scattered, or transmitted, the light remains the same colour. Thus, he observed that colour is the result of objects interacting with already-coloured light rather than objects generating the colour themselves. This is known as Newton's theory of colour.
From this work, he concluded that the lens of any refracting telescope would suffer from the dispersion of light into colours (chromatic aberration). As a proof of the concept, he constructed a telescope using reflective mirrors instead of lenses as the objective to bypass that problem. Building the design, the first known functional reflecting telescope, today known as a Newtonian telescope, involved solving the problem of a suitable mirror material and shaping technique. Newton ground his own mirrors out of a custom composition of highly reflective speculum metal, using Newton's rings to judge the quality of the optics for his telescopes. In late 1668, he was able to produce this first reflecting telescope. It was about eight inches long and it gave a clearer and larger image. In 1671, the Royal Society asked for a demonstration of his reflecting telescope. Their interest encouraged him to publish his notes, "Of Colours", which he later expanded into the work "Opticks". When Robert Hooke criticised some of Newton's ideas, Newton was so offended that he withdrew from public debate. Newton and Hooke had brief exchanges in 1679–80, when Hooke, appointed to manage the Royal Society's correspondence, opened up a correspondence intended to elicit contributions from Newton to Royal Society transactions, which had the effect of stimulating Newton to work out a proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector. But the two men remained generally on poor terms until Hooke's death.
Newton argued that light is composed of particles or corpuscles, which were refracted by accelerating into a denser medium. He verged on soundlike waves to explain the repeated pattern of reflection and transmission by thin films (Opticks Bk.II, Props. 12), but still retained his theory of 'fits' that disposed corpuscles to be reflected or transmitted (Props.13). However, later physicists favoured a purely wavelike explanation of light to account for the interference patterns and the general phenomenon of diffraction. Today's quantum mechanics, photons, and the idea of wave–particle duality bear only a minor resemblance to Newton's understanding of light.
In his "Hypothesis of Light" of 1675, Newton posited the existence of the ether to transmit forces between particles. The contact with the Cambridge Platonist philosopher Henry More revived his interest in alchemy. He replaced the ether with occult forces based on Hermetic ideas of attraction and repulsion between particles. John Maynard Keynes, who acquired many of Newton's writings on alchemy, stated that "Newton was not the first of the age of reason: He was the last of the magicians." Newton's interest in alchemy cannot be isolated from his contributions to science. This was at a time when there was no clear distinction between alchemy and science. Had he not relied on the occult idea of action at a distance, across a vacuum, he might not have developed his theory of gravity.
In 1704, Newton published "Opticks", in which he expounded his corpuscular theory of light. He considered light to be made up of extremely subtle corpuscles, that ordinary matter was made of grosser corpuscles and speculated that through a kind of alchemical transmutation "Are not gross Bodies and Light convertible into one another, ... and may not Bodies receive much of their Activity from the Particles of Light which enter their Composition?" Newton also constructed a primitive form of a frictional electrostatic generator, using a glass globe.
In his book "Opticks", Newton was the first to show a diagram using a prism as a beam expander, and also the use of multiple-prism arrays. Some 278 years after Newton's discussion, multiple-prism beam expanders became central to the development of narrow-linewidth tunable lasers. Also, the use of these prismatic beam expanders led to the multiple-prism dispersion theory.
Subsequent to Newton, much has been amended. Young and Fresnel combined Newton's particle theory with Huygens' wave theory to show that colour is the visible manifestation of light's wavelength. Science also slowly came to realise the difference between perception of colour and mathematisable optics. The German poet and scientist, Goethe, could not shake the Newtonian foundation but "one hole Goethe did find in Newton's armour, ... Newton had committed himself to the doctrine that refraction without colour was impossible. He, therefore, thought that the object-glasses of telescopes must forever remain imperfect, achromatism and refraction being incompatible. This inference was proved by Dollond to be wrong."
In 1679, Newton returned to his work on celestial mechanics by considering gravitation and its effect on the orbits of planets with reference to Kepler's laws of planetary motion. This followed stimulation by a brief exchange of letters in 1679–80 with Hooke, who had been appointed to manage the Royal Society's correspondence, and who opened a correspondence intended to elicit contributions from Newton to Royal Society transactions. Newton's reawakening interest in astronomical matters received further stimulus by the appearance of a comet in the winter of 1680–1681, on which he corresponded with John Flamsteed. After the exchanges with Hooke, Newton worked out proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector. Newton communicated his results to Edmond Halley and to the Royal Society in "De motu corporum in gyrum", a tract written on about nine sheets which was copied into the Royal Society's Register Book in December 1684. This tract contained the nucleus that Newton developed and expanded to form the "Principia".
The "Principia" was published on 5 July 1687 with encouragement and financial help from Edmond Halley. In this work, Newton stated the three universal laws of motion. Together, these laws describe the relationship between any object, the forces acting upon it and the resulting motion, laying the foundation for classical mechanics. They contributed to many advances during the Industrial Revolution which soon followed and were not improved upon for more than 200 years. Many of these advancements continue to be the underpinnings of non-relativistic technologies in the modern world. He used the Latin word "gravitas" (weight) for the effect that would become known as gravity, and defined the law of universal gravitation.
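In modern symbolic notation (a later convention; Newton himself expressed the law geometrically and in proportions, and the constant was only measured long after his death), the law of universal gravitation is written as

$$F = G \, \frac{m_1 m_2}{r^2}$$

where $F$ is the attractive force between two bodies of masses $m_1$ and $m_2$, $r$ is the distance between their centres, and $G$ is the gravitational constant.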
In the same work, Newton presented a calculus-like method of geometrical analysis using 'first and last ratios', gave the first analytical determination (based on Boyle's law) of the speed of sound in air, inferred the oblateness of Earth's spheroidal figure, accounted for the precession of the equinoxes as a result of the Moon's gravitational attraction on the Earth's oblateness, initiated the gravitational study of the irregularities in the motion of the Moon, provided a theory for the determination of the orbits of comets, and much more.
Newton made clear his heliocentric view of the Solar System—developed in a somewhat modern way because already in the mid-1680s he recognised the "deviation of the Sun" from the centre of gravity of the Solar System. For Newton, it was not precisely the centre of the Sun or any other body that could be considered at rest, but rather "the common centre of gravity of the Earth, the Sun and all the Planets is to be esteem'd the Centre of the World", and this centre of gravity "either is at rest or moves uniformly forward in a right line" (Newton adopted the "at rest" alternative in view of common consent that the centre, wherever it was, was at rest).
Newton's postulate of an invisible force able to act over vast distances led to him being criticised for introducing "occult agencies" into science. Later, in the second edition of the "Principia" (1713), Newton firmly rejected such criticisms in a concluding General Scholium, writing that it was enough that the phenomena implied a gravitational attraction, as they did; but they did not so far indicate its cause, and it was both unnecessary and improper to frame hypotheses of things that were not implied by the phenomena. (Here Newton used what became his famous expression "hypotheses non fingo").
With the "Principia", Newton became internationally recognised. He acquired a circle of admirers, including the Swiss-born mathematician Nicolas Fatio de Duillier.
Newton found 72 of the 78 "species" of cubic curves and categorised them into four types. In 1717, and probably with Newton's help, James Stirling proved that every cubic was one of these four types. Newton also claimed that the four types could be obtained by plane projection from one of them, and this was proved in 1731, four years after his death.
In the 1690s, Newton wrote a number of religious tracts dealing with the literal and symbolic interpretation of the Bible. A manuscript Newton sent to John Locke, in which he disputed the fidelity of 1 John 5:7—the Johannine Comma—to the original manuscripts of the New Testament, remained unpublished until 1785.
Newton was also a member of the Parliament of England for Cambridge University in 1689 and 1701, but according to some accounts his only comments were to complain about a cold draught in the chamber and request that the window be closed. He was, however, noted by Cambridge diarist Abraham de la Pryme to have rebuked students who were frightening locals by claiming that a house was haunted.
Newton moved to London to take up the post of warden of the Royal Mint in 1696, a position that he had obtained through the patronage of Charles Montagu, 1st Earl of Halifax, then Chancellor of the Exchequer. He took charge of England's great recoining, trod on the toes of Lord Lucas, Governor of the Tower, and secured the job of deputy comptroller of the temporary Chester branch for Edmond Halley. Newton became perhaps the best-known Master of the Mint upon the death of Thomas Neale in 1699, a position Newton held for the last 30 years of his life. These appointments were intended as sinecures, but Newton took them seriously. He retired from his Cambridge duties in 1701, and exercised his authority to reform the currency and punish clippers and counterfeiters.
As Warden, and afterwards as Master, of the Royal Mint, Newton estimated that 20 percent of the coins taken in during the Great Recoinage of 1696 were counterfeit. Counterfeiting was high treason, punishable by the felon being hanged, drawn and quartered. Despite this, convicting even the most flagrant criminals could be extremely difficult; however, Newton proved equal to the task.
Disguised as a habitué of bars and taverns, he gathered much of that evidence himself. For all the barriers placed to prosecution, and separating the branches of government, English law still had ancient and formidable customs of authority. Newton had himself made a justice of the peace in all the home counties. A draft letter regarding the matter is included in Newton's personal first edition of "Philosophiæ Naturalis Principia Mathematica", which he must have been amending at the time. Then he conducted more than 100 cross-examinations of witnesses, informers, and suspects between June 1698 and Christmas 1699. Newton successfully prosecuted 28 coiners.
Newton was made President of the Royal Society in 1703 and an associate of the French Académie des Sciences. In his position at the Royal Society, Newton made an enemy of John Flamsteed, the Astronomer Royal, by prematurely publishing Flamsteed's "Historia Coelestis Britannica", which Newton had used in his studies.
In April 1705, Queen Anne knighted Newton during a royal visit to Trinity College, Cambridge. The knighthood is likely to have been motivated by political considerations connected with the parliamentary election in May 1705, rather than any recognition of Newton's scientific work or services as Master of the Mint. Newton was the second scientist to be knighted, after Sir Francis Bacon.
As a result of a report written by Newton on 21 September 1717 to the Lords Commissioners of His Majesty's Treasury, the bimetallic relationship between gold coins and silver coins was changed by Royal proclamation on 22 December 1717, forbidding the exchange of gold guineas for more than 21 silver shillings. This inadvertently resulted in a silver shortage as silver coins were used to pay for imports, while exports were paid for in gold, effectively moving Britain from the silver standard to its first gold standard. It is a matter of debate as to whether he intended to do this or not. It has been argued that Newton conceived of his work at the Mint as a continuation of his alchemical work.
Newton was invested in the South Sea Company and lost some £20,000 (US$3 million in 2003) when it collapsed in around 1720.
Toward the end of his life, Newton took up residence at Cranbury Park, near Winchester with his niece and her husband, until his death in 1727. His half-niece, Catherine Barton Conduitt, served as his hostess in social affairs at his house on Jermyn Street in London; he was her "very loving Uncle", according to his letter to her when she was recovering from smallpox.
Newton died in his sleep in London on 20 March 1727 (OS 20 March 1726; NS 31 March 1727). His body was buried in Westminster Abbey. Voltaire may have been present at his funeral. A bachelor, he had divested much of his estate to relatives during his last years, and died intestate. His papers went to John Conduitt and Catherine Barton. After his death, Newton's hair was examined and found to contain mercury, probably resulting from his alchemical pursuits. Mercury poisoning could explain Newton's eccentricity in late life.
Although it was claimed that he was once engaged, Newton never married. The French writer and philosopher Voltaire, who was in London at the time of Newton's funeral, said that he "was never sensible to any passion, was not subject to the common frailties of mankind, nor had any commerce with women—a circumstance which was assured me by the physician and surgeon who attended him in his last moments". The widespread belief that he died a virgin has been commented on by writers such as mathematician Charles Hutton, economist John Maynard Keynes, and physicist Carl Sagan.
Newton had a close friendship with the Swiss mathematician Nicolas Fatio de Duillier, whom he met in London around 1689—some of their correspondence has survived. Their relationship came to an abrupt and unexplained end in 1693, and at the same time Newton suffered a nervous breakdown which included sending wild accusatory letters to his friends Samuel Pepys and John Locke—his note to the latter included the charge that Locke "endeavoured to embroil me with woemen".
The mathematician Joseph-Louis Lagrange said that Newton was the greatest genius who ever lived, and once added that Newton was also "the most fortunate, for we cannot find more than once a system of the world to establish." English poet Alexander Pope wrote the famous epitaph:
Nature and nature's laws lay hid in night;
God said "Let Newton be" and all was light.
Newton was relatively modest about his achievements, writing in a letter to Robert Hooke in February 1676:
If I have seen further it is by standing on the shoulders of giants.
Two writers think that the above quotation, written at a time when Newton and Hooke were in dispute over optical discoveries, was an oblique attack on Hooke (said to have been short and hunchbacked), rather than—or in addition to—a statement of modesty. On the other hand, the widely known proverb about standing on the shoulders of giants, published among others by seventeenth-century poet George Herbert (a former orator of the University of Cambridge and fellow of Trinity College) in his "Jacula Prudentum" (1651), had as its main point that "a dwarf on a giant's shoulders sees farther of the two", and so its effect as an analogy would place Newton himself rather than Hooke as the 'dwarf'.
In a later memoir, Newton wrote:
I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.
In 1816, a tooth said to have belonged to Newton was sold for £730 ($3,633) in London to an aristocrat who had it set in a ring. The "Guinness World Records 2002" classified it as the most valuable tooth, valuing it at approximately £25,000 ($35,700) in late 2001. Who bought it and who currently has it has not been disclosed.
Albert Einstein kept a picture of Newton on his study wall alongside ones of Michael Faraday and James Clerk Maxwell. In a 2005 survey of members of Britain's Royal Society (formerly headed by Newton) asking who had the greater effect on the history of science, Newton or Einstein, the members deemed Newton to have made the greater overall contribution. In 1999, an opinion poll of 100 of today's leading physicists voted Einstein the "greatest physicist ever," with Newton the runner-up, while a parallel survey of rank-and-file physicists by the site PhysicsWeb gave the top spot to Newton.
Newton's monument (1731) can be seen in Westminster Abbey, at the north of the entrance to the choir against the choir screen, near his tomb. It was executed by the sculptor Michael Rysbrack (1694–1770) in white and grey marble with design by the architect William Kent. The monument features a figure of Newton reclining on top of a sarcophagus, his right elbow resting on several of his great books and his left hand pointing to a scroll with a mathematical design. Above him is a pyramid and a celestial globe showing the signs of the Zodiac and the path of the comet of 1680. A relief panel depicts putti using instruments such as a telescope and prism. The Latin inscription on the base translates as:

Here is buried Isaac Newton, Knight, who by a strength of mind almost divine, and mathematical principles peculiarly his own, explored the course and figures of the planets, the paths of comets, the tides of the sea, the dissimilarities in rays of light, and, what no other scholar has previously imagined, the properties of the colours thus produced. Diligent, sagacious and faithful, in his expositions of nature, antiquity and the holy Scriptures, he vindicated by his philosophy the majesty of God mighty and good, and expressed the simplicity of the Gospel in his manners. Mortals rejoice that there has existed such and so great an ornament of the human race! He was born on 25 December 1642, and died on 20 March 1726/7.

—Translation from G.L. Smyth, "The Monuments and Genii of St. Paul's Cathedral, and of Westminster Abbey" (1826), ii, 703–704.
From 1978 until 1988, an image of Newton designed by Harry Ecclestone appeared on Series D £1 banknotes issued by the Bank of England (the last £1 notes to be issued by the Bank of England). Newton was shown on the reverse of the notes holding a book and accompanied by a telescope, a prism and a map of the Solar System.
A statue of Isaac Newton, looking at an apple at his feet, can be seen at the Oxford University Museum of Natural History. A large bronze statue, "Newton, after William Blake", by Eduardo Paolozzi, dated 1995 and inspired by Blake's etching, dominates the piazza of the British Library in London.
Although born into an Anglican family, by his thirties Newton held a Christian faith that, had it been made public, would not have been considered orthodox by mainstream Christianity, with one historian labelling him a heretic.
By 1672, he had started to record his theological researches in notebooks which he showed to no one and which have only recently been examined. They demonstrate an extensive knowledge of early Church writings and show that in the conflict between Athanasius and Arius which defined the Creed, he took the side of Arius, the loser, who rejected the conventional view of the Trinity. Newton "recognized Christ as a divine mediator between God and man, who was subordinate to the Father who created him." He was especially interested in prophecy, but for him, "the great apostasy was trinitarianism."
Newton tried unsuccessfully to obtain one of the two fellowships that exempted the holder from the ordination requirement. At the last moment in 1675 he received a dispensation from the government that excused him and all future holders of the Lucasian chair.
In Newton's eyes, worshipping Christ as God was idolatry, to him the fundamental sin. In 1999, historian Stephen D. Snobelen wrote, "Isaac Newton was a heretic. But ... he never made a public declaration of his private faith—which the orthodox would have deemed extremely radical. He hid his faith so well that scholars are still unraveling his personal beliefs." Snobelen concludes that Newton was at least a Socinian sympathiser (he owned and had thoroughly read at least eight Socinian books), possibly an Arian and almost certainly an anti-trinitarian.
In a minority position, T.C. Pfizenmaier offers a more nuanced view, arguing that Newton held closer to the Semi-Arian view of the Trinity, that Jesus Christ was of a "similar substance" (homoiousios) with the Father, rather than the orthodox view that Jesus Christ is of the "same substance" (homoousios) as the Father, endorsed by modern Eastern Orthodox, Roman Catholics and Protestants. However, this type of view 'has lost support of late with the availability of Newton's theological papers', and most scholars now identify Newton as an Antitrinitarian monotheist.
Although the laws of motion and universal gravitation became Newton's best-known discoveries, he warned against using them to view the Universe as a mere machine, as if akin to a great clock. He said, "So then gravity may put the planets into motion, but without the Divine Power it could never put them into such a circulating motion, as they have about the sun".
Along with his scientific fame, Newton's studies of the Bible and of the early Church Fathers were also noteworthy. Newton wrote works on textual criticism, most notably "An Historical Account of Two Notable Corruptions of Scripture" and "Observations upon the Prophecies of Daniel, and the Apocalypse of St. John". He placed the crucifixion of Jesus Christ at 3 April, AD 33, which agrees with one traditionally accepted date.
He believed in a rationally immanent world, but he rejected the hylozoism implicit in Leibniz and Baruch Spinoza. The ordered and dynamically informed Universe could be understood, and must be understood, by an active reason. In his correspondence, Newton claimed that in writing the "Principia" "I had an eye upon such Principles as might work with considering men for the belief of a Deity". He saw evidence of design in the system of the world: "Such a wonderful uniformity in the planetary system must be allowed the effect of choice". But Newton insisted that divine intervention would eventually be required to reform the system, due to the slow growth of instabilities. For this, Leibniz lampooned him: "God Almighty wants to wind up his watch from time to time: otherwise it would cease to move. He had not, it seems, sufficient foresight to make it a perpetual motion."
Newton's position was vigorously defended by his follower Samuel Clarke in a famous correspondence. A century later, Pierre-Simon Laplace's work "Celestial Mechanics" offered a natural explanation for why the planetary orbits do not require periodic divine intervention. The contrast between Laplace's mechanistic worldview and Newton's is most strikingly illustrated by the famous answer the French scientist gave Napoleon, who had criticised him for the absence of the Creator in the "Mécanique céleste": "Sire, j'ai pu me passer de cette hypothèse" ("Sire, I was able to do without that hypothesis").
Scholars long debated whether Newton disputed the doctrine of the Trinity. His first biographer, Sir David Brewster, who compiled his manuscripts, interpreted Newton as questioning the veracity of some passages used to support the Trinity, but never denying the doctrine of the Trinity as such. In the twentieth century, encrypted manuscripts written by Newton and bought by John Maynard Keynes (among others) were deciphered and it became known that Newton did indeed reject Trinitarianism.
Newton and Robert Boyle's approach to the mechanical philosophy was promoted by rationalist pamphleteers as a viable alternative to the pantheists and enthusiasts, and was accepted hesitantly by orthodox preachers as well as dissident preachers like the latitudinarians. The clarity and simplicity of science was seen as a way to combat the emotional and metaphysical superlatives of both superstitious enthusiasm and the threat of atheism, and at the same time, the second wave of English deists used Newton's discoveries to demonstrate the possibility of a "Natural Religion".
The attacks made against pre-Enlightenment "magical thinking", and the mystical elements of Christianity, were given their foundation with Boyle's mechanical conception of the universe. Newton gave Boyle's ideas their completion through mathematical proofs and, perhaps more importantly, was very successful in popularising them.
In a manuscript he wrote in 1704 (never intended to be published), he mentions the date of 2060, but it is not given as a date for the end of days. It has been falsely reported as a prediction. The passage is clear when the date is read in context. He was against date setting for the end of days, concerned that this would put Christianity into disrepute.
"So then the time times & half a time are 42 months or 1260 days or three years & an half, recconing twelve months to a year & 30 days to a month as was done in the Calender of the primitive year. And the days of short lived Beasts being put for the years of [long-]lived kingdoms the period of 1260 days, if dated from the complete conquest of the three kings A.C. 800, will end 2060. It may end later, but I see no reason for its ending sooner."
"This I mention not to assert when the time of the end shall be, but to put a stop to the rash conjectures of fanciful men who are frequently predicting the time of the end, and by doing so bring the sacred prophesies into discredit as often as their predictions fail. Christ comes as a thief in the night, and it is not for us to know the times and seasons which God hath put into his own breast."
In the character of Morton Opperly in "Poor Superman" (1951), speculative fiction author Fritz Leiber says of Newton, "Everyone knows Newton as the great scientist. Few remember that he spent half his life muddling with alchemy, looking for the philosopher's stone. That was the pebble by the seashore he really wanted to find."
Of an estimated ten million words of writing in Newton's papers, about one million deal with alchemy. Many of Newton's writings on alchemy are copies of other manuscripts, with his own annotations. Alchemical texts mix artisanal knowledge with philosophical speculation, often hidden behind layers of wordplay, allegory, and imagery to protect craft secrets. Some of the content contained in Newton's papers could have been considered heretical by the church.
In 1888, after spending sixteen years cataloguing Newton's papers, Cambridge University kept a small number and returned the rest to the Earl of Portsmouth. In 1936, a descendant offered the papers for sale at Sotheby's. The collection was broken up and sold for a total of about £9,000. John Maynard Keynes was one of about three dozen bidders who obtained part of the collection at auction. Keynes went on to reassemble an estimated half of Newton's collection of papers on alchemy before donating his collection to Cambridge University in 1946.
All of Newton's known writings on alchemy are currently being put online in a project undertaken by Indiana University, "The Chymistry of Isaac Newton", and summarised in a book.
Charles Coulston Gillispie disputes that Newton ever practised alchemy, saying that "his chemistry was in the spirit of Boyle's corpuscular philosophy."
In June 2020, two unpublished pages of Newton's notes on Jan Baptist van Helmont's book on plague, "De Peste", were being auctioned online by Bonham's. Newton's analysis of this book, which he made in Cambridge while protecting himself from London's 1665-1666 infection, is the most substantial written statement he is known to have made about the plague, according to Bonham's. As far as the therapy is concerned, Newton writes that "the best is a toad suspended by the legs in a chimney for three days, which at last vomited up earth with various insects in it, on to a dish of yellow wax, and shortly after died. Combining powdered toad with the excretions and serum made into lozenges and worn about the affected area drove away the contagion and drew out the poison".
Enlightenment philosophers chose a short history of scientific predecessors—Galileo, Boyle, and Newton principally—as the guides and guarantors of their applications of the singular concept of nature and natural law to every physical and social field of the day. In this respect, the lessons of history and the social structures built upon it could be discarded.
It was Newton's conception of the universe based upon natural and rationally understandable laws that became one of the seeds for Enlightenment ideology. Locke and Voltaire applied concepts of natural law to political systems advocating intrinsic rights; the physiocrats and Adam Smith applied natural conceptions of psychology and self-interest to economic systems; and sociologists criticised the current social order for trying to fit history into natural models of progress. Monboddo and Samuel Clarke resisted elements of Newton's work, but eventually rationalised it to conform with their strong religious views of nature.
Newton himself often told the story that he was inspired to formulate his theory of gravitation by watching the fall of an apple from a tree. The story is believed to have passed into popular knowledge after being related by Catherine Barton, Newton's niece, to Voltaire. Voltaire then wrote in his "Essay on Epic Poetry" (1727), "Sir Isaac Newton walking in his gardens, had the first thought of his system of gravitation, upon seeing an apple falling from a tree."
Although it has been said that the apple story is a myth and that he did not arrive at his theory of gravity at any single moment, acquaintances of Newton (such as William Stukeley, whose manuscript account of 1752 has been made available by the Royal Society) do in fact confirm the incident, though not the apocryphal version that the apple actually hit Newton's head. Stukeley recorded in his "Memoirs of Sir Isaac Newton's Life" a conversation with Newton in Kensington on 15 April 1726:
John Conduitt, Newton's assistant at the Royal Mint and husband of Newton's niece, also described the event when he wrote about Newton's life:
It is known from his notebooks that Newton was grappling in the late 1660s with the idea that terrestrial gravity extends, in an inverse-square proportion, to the Moon; however, it took him two decades to develop the full-fledged theory. The question was not whether gravity existed, but whether it extended so far from Earth that it could also be the force holding the Moon to its orbit. Newton showed that if the force decreased as the inverse square of the distance, one could indeed calculate the Moon's orbital period, and get good agreement. He guessed the same force was responsible for other orbital motions, and hence named it "universal gravitation".
Various trees are claimed to be "the" apple tree which Newton describes. The King's School, Grantham claims that the tree was purchased by the school, uprooted and transported to the headmaster's garden some years later. The staff of the (now) National Trust-owned Woolsthorpe Manor dispute this, and claim that a tree present in their gardens is the one described by Newton. A descendant of the original tree can be seen growing outside the main gate of Trinity College, Cambridge, below the room Newton lived in when he studied there. The National Fruit Collection at Brogdale in Kent can supply grafts from their tree, which appears identical to Flower of Kent, a coarse-fleshed cooking variety.
ed. Joannes Nichols, "Isaaci Newtoni Opera quae exstant omnia", vol. 4 (1782), 403–407.
Mark P. Silverman, "A Universe of Atoms, An Atom in the Universe", Springer, 2002, p. 49.
Inventor
An inventor is a person who creates or discovers a new method, form, device or other useful means that becomes known as an invention. The word "inventor" comes from the Latin verb "invenire", "invent-", to find. The system of patents was established to encourage inventors by granting a limited-term, limited monopoly on inventions determined to be sufficiently novel, non-obvious, and useful. Although inventing is closely associated with science and engineering, inventors are not necessarily engineers or scientists.
Immanuel Kant
Immanuel Kant (22 April 1724 – 12 February 1804) was an influential German philosopher in the Age of Enlightenment. In his doctrine of transcendental idealism, he argued that space, time, and causation are mere sensibilities; "things-in-themselves" exist, but their nature is unknowable. In his view, the mind shapes and structures experience, with all human experience sharing certain structural features. In one of his major works, the "Critique of Pure Reason" (1781; second edition 1787), he drew a parallel to the Copernican revolution in his proposition that worldly objects can be intuited "a priori" ('beforehand'), and that intuition is therefore independent from objective reality.
Kant believed that reason is also the source of morality, and that aesthetics arise from a faculty of disinterested judgment. Kant's views continue to have a major influence on contemporary philosophy, especially the fields of epistemology, ethics, political theory, and post-modern aesthetics. He attempted to explain the relationship between reason and human experience and to move beyond the failures of traditional philosophy and metaphysics. He wanted to put an end to what he saw as an era of futile and speculative theories of human experience, while resisting the skepticism of thinkers such as David Hume. He regarded himself as showing the way past the impasse between rationalists and empiricists, and is widely held to have synthesized both traditions in his thought.
Kant was an exponent of the idea that perpetual peace could be secured through universal democracy and international cooperation. He believed that this would be the eventual outcome of universal history, although it is not rationally planned. The nature of Kant's religious ideas continues to be the subject of philosophical dispute, with viewpoints ranging from the impression that he was an initial advocate of atheism who at some point developed an ontological argument for God, to more critical treatments epitomized by Schopenhauer, who criticized the imperative form of Kantian ethics as "theological morals" and the "Mosaic Decalogue in disguise", and Nietzsche, who claimed that Kant had "theologian blood" and was merely a sophisticated apologist for traditional Christian faith.
Kant was also an important Enlightenment figure in the history of racism. He sketched out a formal racial hierarchy, writing in his work "On the Different Races of Man" that "humanity exists in its greatest perfection in the white race... The yellow Indians have a smaller amount of Talent. The Negroes are lower and the lowest are a part of the American peoples." Robert Bernasconi stated that Kant "supplied the first scientific definition of race."
Kant published other important works on ethics, religion, law, aesthetics, astronomy, and history. These include the "Universal Natural History" (1755), the "Critique of Practical Reason" (1788), the "Metaphysics of Morals" (1797), the "Critique of Judgment" (1790), which looks at aesthetics and teleology, and "Religion within the Bounds of Bare Reason" (1793).
Kant's mother, Anna Regina Reuter (1697–1737), was born in Königsberg (since 1946 the city of Kaliningrad, Kaliningrad Oblast, Russia) to a father from Nuremberg. Her surname is sometimes erroneously given as Porter. Kant's father, Johann Georg Kant (1682–1746), was a German harness maker from Memel, at the time Prussia's most northeastern city (now Klaipėda, Lithuania). Kant believed that his paternal grandfather Hans Kant was of Scottish origin. While scholars of Kant's life long accepted the claim, there is no evidence that Kant's paternal line was Scottish and it is more likely that the Kants got their name from the village of Kantwaggen (today part of Priekulė) and were of Curonian origin. Kant was the fourth of nine children (four of whom reached adulthood).
Kant was born on 22 April 1724 into a Prussian German family of Lutheran Protestant faith in Königsberg, East Prussia. Baptized Emanuel, he later changed his name to Immanuel after learning Hebrew. He was brought up in a Pietist household that stressed religious devotion, humility, and a literal interpretation of the Bible. His education was strict, punitive and disciplinary, and focused on Latin and religious instruction over mathematics and science. Kant maintained Christian ideals for some time, but struggled to reconcile the faith with his belief in science. In his "Groundwork of the Metaphysic of Morals", he reveals a belief in immortality as the necessary condition of humanity's approach to the highest morality possible. However, as Kant was skeptical about some of the arguments used prior to him in defence of theism and maintained that human understanding is limited and can never attain knowledge about God or the soul, various commentators have labelled him a philosophical agnostic.
Common myths about Kant's personal mannerisms are listed, explained, and refuted in Goldthwait's introduction to his translation of "Observations on the Feeling of the Beautiful and Sublime". It is often held that Kant lived a very strict and disciplined life, leading to an oft-repeated story that neighbors would set their clocks by his daily walks. He never married, but seemed to have a rewarding social life — he was a popular teacher and a modestly successful author even before starting on his major philosophical works. He had a circle of friends with whom he frequently met, among them Joseph Green, an English merchant in Königsberg.
A common myth is that Kant never traveled far from Königsberg in his whole life. In fact, between 1750 and 1754 he worked as a tutor ("Hauslehrer") in Judtschen (now Veselovka, Russia, approximately 20 km away) and in Groß-Arnsdorf (now Jarnołtowo near Morąg (German: Mohrungen), Poland, approximately 145 km away).
Kant showed a great aptitude for study at an early age. He first attended the Collegium Fridericianum from which he graduated at the end of the summer of 1740. In 1740, aged 16, he enrolled at the University of Königsberg, where he spent his whole career. He studied the philosophy of Gottfried Leibniz and Christian Wolff under Martin Knutzen (Associate Professor of Logic and Metaphysics from 1734 until his death in 1751), a rationalist who was also familiar with developments in British philosophy and science and introduced Kant to the new mathematical physics of Isaac Newton. Knutzen dissuaded Kant from the theory of pre-established harmony, which he regarded as "the pillow for the lazy mind". He also dissuaded Kant from idealism, the idea that reality is purely mental, which most philosophers in the 18th century regarded in a negative light. The theory of transcendental idealism that Kant later included in the "Critique of Pure Reason" was developed partially in opposition to traditional idealism.
His father's stroke and subsequent death in 1746 interrupted his studies. Kant left Königsberg shortly after August 1748—he would return there in August 1754. He became a private tutor in the towns surrounding Königsberg, but continued his scholarly research. In 1749, he published his first philosophical work, "Thoughts on the True Estimation of Living Forces" (written in 1745–47).
Kant is best known for his work in the philosophy of ethics and metaphysics, but he made significant contributions to other disciplines. In 1754, while contemplating a prize question posed by the Berlin Academy about the problem of Earth's rotation, he argued that the Moon's gravity would slow down Earth's spin, and that this tidal braking would eventually lock Earth's rotation to the Moon's orbit. The next year, he expanded this reasoning to the formation and evolution of the Solar System in his "Universal Natural History and Theory of the Heavens". In 1755, Kant received a license to lecture at the University of Königsberg and began lecturing on a variety of topics including mathematics, physics, logic and metaphysics. In his 1756 essay on the theory of winds, Kant laid out an original insight into the Coriolis force. In 1757, Kant began lecturing on geography, being one of the first people to explicitly teach geography as its own subject. Geography was one of Kant's most popular lecturing topics, and in 1802 a compilation by Friedrich Theodor Rink of Kant's lecturing notes, "Physical Geography", was released. After Kant became a professor in 1770, he expanded the topics of his lectures to include natural law, ethics and anthropology, among other topics.
In the "Universal Natural History", Kant laid out the Nebular hypothesis, in which he deduced that the Solar System had formed from a large cloud of gas, a nebula. Kant also correctly deduced (though, according to Bertrand Russell, through partly false premises and fallacious reasoning) that the Milky Way was a large disk of stars, which he theorized formed from a much larger spinning gas cloud. He further suggested that other distant "nebulae" might be other galaxies. These postulations opened new horizons for astronomy, for the first time extending it beyond the Solar System to galactic and intergalactic realms. According to Thomas Huxley (1867), Kant also made contributions to geology in his "Universal Natural History".
From then on, Kant turned increasingly to philosophical issues, although he continued to write on the sciences throughout his life. In the early 1760s, Kant produced a series of important works in philosophy. "The False Subtlety of the Four Syllogistic Figures", a work in logic, was published in 1762. Two more works appeared the following year: "Attempt to Introduce the Concept of Negative Magnitudes into Philosophy" and "The Only Possible Argument in Support of a Demonstration of the Existence of God". In 1764, Kant wrote "Observations on the Feeling of the Beautiful and Sublime" and then finished second to Moses Mendelssohn in a Berlin Academy prize competition with his "Inquiry Concerning the Distinctness of the Principles of Natural Theology and Morality" (often referred to as "The Prize Essay"). In 1766 Kant wrote "Dreams of a Spirit-Seer", which dealt with the writings of Emanuel Swedenborg. The exact influence of Swedenborg on Kant, as well as the extent of Kant's belief in mysticism according to "Dreams of a Spirit-Seer", remain controversial. On 31 March 1770, aged 45, Kant was finally appointed Full Professor of Logic and Metaphysics ("Professor Ordinarius der Logic und Metaphysic") at the University of Königsberg. In defense of this appointment, Kant wrote his inaugural dissertation ("Inaugural-Dissertation") "De Mundi Sensibilis atque Intelligibilis Forma et Principiis" ("On the Form and Principles of the Sensible and the Intelligible World"). This work saw the emergence of several central themes of his mature work, including the distinction between the faculties of intellectual thought and sensible receptivity. To miss this distinction would mean to commit the error of subreption, and, as he says in the last chapter of the dissertation, only in avoiding this error does metaphysics flourish.
The issue that vexed Kant was central to what 20th-century scholars called "the philosophy of mind". The flowering of the natural sciences had led to an understanding of how data reaches the brain. Sunlight falling on an object is reflected from its surface in a way that maps the surface features (color, texture, etc.). The reflected light reaches the human eye, passes through the cornea, and is focused by the lens onto the retina, where it forms an image similar to that formed by light passing through a pinhole into a camera obscura. The retinal cells send impulses through the optic nerve, which form a mapping in the brain of the visual features of the object. The interior mapping is not the exterior object, and our belief that there is a meaningful relationship between the object and the mapping in the brain depends on a chain of reasoning that is not fully grounded. But the uncertainties aroused by these considerations, by optical illusions, misperceptions, delusions, etc., are not the end of the problems.
Kant saw that the mind could not function as an empty container that simply receives data from outside. Something must be giving order to the incoming data. Images of external objects must be kept in the same sequence in which they were received. This ordering occurs through the mind's intuition of time. The same considerations apply to the mind's function of constituting space for ordering mappings of visual and tactile signals arriving via the already described chains of physical causation.
It is often claimed that Kant was a late developer, that he only became an important philosopher in his mid-50s after rejecting his earlier views. While it is true that Kant wrote his greatest works relatively late in life, there is a tendency to underestimate the value of his earlier works. Recent Kant scholarship has devoted more attention to these "pre-critical" writings and has recognized a degree of continuity with his mature work.
At age 46, Kant was an established scholar and an increasingly influential philosopher, and much was expected of him.
In correspondence with his ex-student and friend Markus Herz, Kant admitted that, in the inaugural dissertation, he had failed to account for the relation between our sensible and intellectual faculties. He needed to explain how we combine what is known as sensory knowledge with the other type of knowledge, i.e. reasoned knowledge, these two being related but having very different processes.
Kant also credited David Hume with awakening him from a "dogmatic slumber". Hume had stated that experience consists only of sequences of feelings, images or sounds. Ideas such as "cause", goodness, or objects were not evident in experience, so why do we believe in the reality of these? Kant felt that reason could remove this skepticism, and he set himself to solving these problems. He did not publish any work in philosophy for the next 11 years.
Although fond of company and conversation with others, Kant isolated himself, and resisted friends' attempts to bring him out of his isolation. It has been noted that in 1778, in response to one of these offers by a former pupil, Kant wrote:
When Kant emerged from his silence in 1781, the result was the "Critique of Pure Reason". Although now uniformly recognized as one of the greatest works in the history of philosophy, this "Critique" was largely ignored upon its initial publication. The book was long, over 800 pages in the original German edition, and written in a convoluted style. It received few reviews, and these granted it no significance. Kant's former student, Johann Gottfried Herder, criticized it for placing reason as an entity worthy of criticism instead of considering the process of reasoning within the context of language and one's entire personality. Similar to Christian Garve and Johann Georg Heinrich Feder, he rejected Kant's position that space and time possessed a form that could be analyzed. Additionally, Garve and Feder also faulted Kant's Critique for not explaining differences in perception of sensations. Its density made it, as Herder said in a letter to Johann Georg Hamann, a "tough nut to crack", obscured by "all this heavy gossamer". Its reception stood in stark contrast to the praise Kant had received for earlier works, such as his "Prize Essay" and shorter works that preceded the first Critique. These well-received and readable tracts include one on the earthquake in Lisbon that was so popular that it was sold by the page. Prior to the change in course documented in the first Critique, his books sold well, and by the time he published "Observations on the Feeling of the Beautiful and Sublime" in 1764 he had become a notable popular author. Kant was disappointed with the first Critique's reception. Recognizing the need to clarify the original treatise, Kant wrote the "Prolegomena to any Future Metaphysics" in 1783 as a summary of its main views.
Shortly thereafter, Kant's friend Johann Friedrich Schultz (1739–1805) (professor of mathematics) published "Erläuterungen über des Herrn Professor Kant Critik der reinen Vernunft" (Königsberg, 1784), which was a brief but very accurate commentary on Kant's "Critique of Pure Reason".
Kant's reputation gradually rose through the latter portion of the 1780s, sparked by a series of important works: the 1784 essay, "Answer to the Question: What is Enlightenment?"; 1785's "Groundwork of the Metaphysics of Morals" (his first work on moral philosophy); and, from 1786, "Metaphysical Foundations of Natural Science." But Kant's fame ultimately arrived from an unexpected source. In 1786, Karl Leonhard Reinhold published a series of public letters on Kantian philosophy. In these letters, Reinhold framed Kant's philosophy as a response to the central intellectual controversy of the era: the Pantheism Dispute. Friedrich Jacobi had accused the recently deceased Gotthold Ephraim Lessing (a distinguished dramatist and philosophical essayist) of Spinozism. Such a charge, tantamount to atheism, was vigorously denied by Lessing's friend Moses Mendelssohn, leading to a bitter public dispute among partisans. The controversy gradually escalated into a debate about the values of the Enlightenment and the value of reason.
Reinhold maintained in his letters that Kant's "Critique of Pure Reason" could settle this dispute by defending the authority and bounds of reason. Reinhold's letters were widely read and made Kant the most famous philosopher of his era.
Kant published a second edition of the "Critique of Pure Reason" in 1787, heavily revising the first parts of the book. Most of his subsequent work focused on other areas of philosophy. He continued to develop his moral philosophy, notably in 1788's "Critique of Practical Reason" (known as the second "Critique") and 1797's "Metaphysics of Morals". The 1790 "Critique of Judgment" (the third "Critique") applied the Kantian system to aesthetics and teleology. It was in this critique that Kant wrote one of his most popular statements: "it is absurd to hope that another Newton will arise in the future who will make comprehensible to us the production of a blade of grass according to natural laws".
In 1792, Kant's attempt to publish the second of the four pieces of "Religion within the Bounds of Bare Reason" in the journal "Berlinische Monatsschrift" met with opposition from the King's censorship commission, which had been established that same year in the context of the French Revolution. Kant then arranged to have all four pieces published as a book, routing it through the philosophy department at the University of Jena to avoid the need for theological censorship. This insubordination earned him a now famous reprimand from the King. When he nevertheless published a second edition in 1794, the censor was so irate that he arranged for a royal order that required Kant never to publish or even speak publicly about religion. Kant then published his response to the King's reprimand and explained himself in the preface of "The Conflict of the Faculties".
He also wrote a number of semi-popular essays on history, religion, politics and other topics. These works were well received by Kant's contemporaries and confirmed his preeminent status in 18th-century philosophy. There were several journals devoted solely to defending and criticizing Kantian philosophy. Despite his success, philosophical trends were moving in another direction. Many of Kant's most important disciples and followers (including Reinhold, Beck and Fichte) transformed the Kantian position into increasingly radical forms of idealism. The progressive stages of revision of Kant's teachings marked the emergence of German Idealism. Kant opposed these developments and publicly denounced Fichte in an open letter in 1799. It was one of his final acts expounding a stance on philosophical questions. In 1800, a student of Kant named Gottlob Benjamin Jäsche (1762–1842) published a manual of logic for teachers called "Logik", which he had prepared at Kant's request. Jäsche prepared the "Logik" using a copy of a textbook in logic by Georg Friedrich Meier entitled "Auszug aus der Vernunftlehre", in which Kant had written copious notes and annotations. The "Logik" has been considered of fundamental importance to Kant's philosophy and to the understanding of it. The great 19th-century logician Charles Sanders Peirce remarked, in an incomplete review of Thomas Kingsmill Abbott's English translation of the introduction to "Logik", that "Kant's whole philosophy turns upon his logic." Robert Schirokauer Hartman and Wolfgang Schwarz wrote in the translators' introduction to their English translation of the "Logik", "Its importance lies not only in its significance for the "Critique of Pure Reason", the second part of which is a restatement of fundamental tenets of the "Logic", but in its position within the whole of Kant's work."
Kant's health, long poor, worsened and he died at Königsberg on 12 February 1804, uttering "Es ist gut" ("It is good") before expiring. His unfinished final work was published as "Opus Postumum". Kant always cut a curious figure in his lifetime for his modest, rigorously scheduled habits, which have been referred to as clocklike. However, Heinrich Heine noted the magnitude of "his destructive, world-crushing thoughts" and considered him a sort of philosophical "executioner", comparing him to Robespierre with the observation that both men "represented in the highest the type of provincial bourgeois. Nature had destined them to weigh coffee and sugar, but Fate determined that they should weigh other things and placed on the scales of the one a king, on the scales of the other a god."
When his body was transferred to a new burial spot, his skull was measured during the exhumation and found to be larger than the average German male's with a "high and broad" forehead. His forehead has been an object of interest ever since it became well-known through his portraits: "In Döbler's portrait and in Kiefer's faithful if expressionistic reproduction of it — as well as in many of the other late eighteenth- and early nineteenth-century portraits of Kant — the forehead is remarkably large and decidedly retreating. Was Kant's forehead shaped this way in these images because he was a philosopher, or, to follow the implications of Lavater's system, was he a philosopher because of the intellectual acuity manifested by his forehead?" Kant and Johann Kaspar Lavater were correspondents on theological matters, and Lavater refers to Kant in his work "Physiognomic Fragments, for the Education of Human Knowledge and Love of People" (Leipzig & Winterthur, 1775–1778).
Kant's mausoleum adjoins the northeast corner of Königsberg Cathedral in Kaliningrad, Russia. The mausoleum was constructed by the architect Friedrich Lahrs and was finished in 1924 in time for the bicentenary of Kant's birth. Originally, Kant was buried inside the cathedral, but in 1880 his remains were moved to a neo-Gothic chapel adjoining the northeast corner of the cathedral. Over the years, the chapel became dilapidated and was demolished to make way for the mausoleum, which was built on the same location.
The tomb and its mausoleum are among the few artifacts of German times preserved by the Soviets after they conquered and annexed the city. Today, many newlyweds bring flowers to the mausoleum. Artifacts previously owned by Kant, known as "Kantiana", were included in the Königsberg City Museum. However, the museum was destroyed during World War II. A replica of the statue of Kant that stood in German times in front of the main University of Königsberg building was donated by a German entity in the early 1990s and placed in the same grounds.
After the expulsion of Königsberg's German population at the end of World War II, the University of Königsberg where Kant taught was replaced by the Russian-language Kaliningrad State University, which appropriated the campus and surviving buildings. In 2005, the university was renamed Immanuel Kant State University of Russia. The name change was announced at a ceremony attended by President Vladimir Putin of Russia and Chancellor Gerhard Schröder of Germany, and the university formed a Kant Society, dedicated to the study of Kantianism.
In late November 2018, his tomb and statue were vandalized with paint by unknown assailants, who also scattered leaflets glorifying Rus' and denouncing Kant as a "traitor". The incident was apparently connected with a recent vote to rename Khrabrovo Airport, in which Kant was in the lead for a while, prompting Russian nationalist resentment.
In Kant's essay "Answering the Question: What is Enlightenment?", he defined the Enlightenment as an age shaped by the Latin motto "Sapere aude" ("Dare to be wise"). Kant maintained that one ought to think autonomously, free of the dictates of external authority. His work reconciled many of the differences between the rationalist and empiricist traditions of the 18th century. He had a decisive impact on the Romantic and German Idealist philosophies of the 19th century. His work has also been a starting point for many 20th century philosophers.
Kant asserted that, because of the limitations of argumentation in the absence of irrefutable evidence, no one could really know whether there is a God and an afterlife or not. For the sake of morality and as a ground for reason, Kant asserted, people are justified in believing in God, even though they could never know God's presence empirically.
The sense of an enlightened approach and the critical method required that "If one cannot prove that a thing "is," he may try to prove that it is "not." If he fails to do either (as often occurs), he may still ask whether it is in his "interest" to "accept" one or the other of the alternatives hypothetically, from the theoretical or the practical point of view. Hence the question no longer is as to whether perpetual peace is a real thing or not a real thing, or as to whether we may not be deceiving ourselves when we adopt the former alternative, but we must "act" on the supposition of its being real." The presupposition of God, soul, and freedom was then a practical concern, for
Kant drew a parallel between the Copernican revolution and the epistemology of his new transcendental philosophy.
Kant's Copernican revolution involved two interconnected foundations of his "critical philosophy":
These teachings placed the active, rational human subject at the center of the cognitive and moral worlds. Kant argued that the rational order of the world as known by science was not just the accidental accumulation of sense perceptions.
Conceptual unification and integration is carried out by the mind through concepts or the "categories of the understanding" operating on the perceptual manifold within space and time. The latter are not concepts, but are forms of sensibility that are "a priori" necessary conditions for any possible experience. Thus the objective order of nature and the causal necessity that operates within it depend on the mind's processes, the product of the rule-based activity that Kant called "synthesis." There is much discussion among Kant scholars about the correct interpretation of this train of thought.
The 'two-world' interpretation regards Kant's position as a statement of epistemological limitation, that we are not able to transcend the bounds of our own mind, meaning that we cannot access the "thing-in-itself". However, Kant also speaks of the thing in itself or "transcendental object" as a product of the (human) understanding as it attempts to conceive of objects in abstraction from the conditions of sensibility. Following this line of thought, some interpreters have argued that the thing in itself does not represent a separate ontological domain but simply a way of considering objects by means of the understanding alone; this is known as the two-aspect view.
The notion of the "thing in itself" was much discussed by philosophers after Kant. It was argued that because the "thing in itself" was unknowable, its existence must not be assumed. Rather than arbitrarily switching to an account that was ungrounded in anything supposed to be the "real," as did the German Idealists, another group arose to ask how our (presumably reliable) accounts of a coherent and rule-abiding universe were actually grounded. This new kind of philosophy became known as Phenomenology, and its founder was Edmund Husserl.
With regard to morality, Kant argued that the source of the good lies not in anything outside the human subject, either in nature or given by God, but rather is only the good will itself. A good will is one that acts from duty in accordance with the universal moral law that the autonomous human being freely gives itself. This law obliges one to treat humanity (understood as rational agency, and represented through oneself as well as others) as an end in itself rather than (merely) as a means to other ends the individual might hold. This necessitates practical self-reflection in which we universalize our reasons.
These ideas have largely framed or influenced all subsequent philosophical discussion and analysis. The specifics of Kant's account generated immediate and lasting controversy. Nevertheless, his theses (that the mind itself necessarily makes a constitutive contribution to its knowledge, that this contribution is transcendental rather than psychological, that philosophy involves self-critical activity, that morality is rooted in human freedom, and that to act autonomously is to act according to rational moral principles) have all had a lasting effect on subsequent philosophy.
Kant defines his theory of perception in his influential 1781 work the "Critique of Pure Reason", which has often been cited as the most significant volume of metaphysics and epistemology in modern philosophy. Kant maintains that our understanding of the external world had its foundations not merely in experience, but in both experience and "a priori" concepts, thus offering a "non-empiricist critique of rationalist philosophy", which is what has been referred to as his Copernican revolution.
Firstly, Kant distinguishes between analytic and synthetic propositions:
An analytic proposition is true by nature of the meaning of the words in the sentence — we require no further knowledge than a grasp of the language to understand this proposition. On the other hand, a synthetic statement is one that tells us something about the world. The truth or falsehood of synthetic statements derives from something outside their linguistic content. In this instance, weight is not a necessary predicate of the body; until we are told the heaviness of the body we do not know that it has weight. In this case, experience of the body is required before its heaviness becomes clear. Before Kant's first Critique, empiricists (cf. Hume) and rationalists (cf. Leibniz) assumed that all synthetic statements required experience to be known.
Kant contests this assumption by claiming that elementary mathematics, like arithmetic, is synthetic "a priori", in that its statements provide new knowledge not derived from experience. This becomes part of his over-all argument for transcendental idealism. That is, he argues that the possibility of experience depends on certain necessary conditions — which he calls "a priori" forms — and that these conditions structure and hold true of the world of experience. His main claims in the "Transcendental Aesthetic" are that mathematical judgments are synthetic "a priori" and that space and time are not derived from experience but rather are its preconditions.
Once we have grasped the functions of basic arithmetic, we do not need empirical experience to know that 100 + 100 = 200, and so it appears that arithmetic is analytic. However, that it is analytic can be disproved by considering the calculation 5 + 7 = 12: there is nothing in the numbers 5 and 7 by which the number 12 can be inferred. Thus "5 + 7" and "the cube root of 1,728" are not analytically identical to "12": their reference is the same, but their sense is not, and the statement "5 + 7 = 12" tells us something new about the world. It is self-evident, and undeniably "a priori", but at the same time it is synthetic. Thus Kant argued that a proposition can be synthetic and "a priori".
Kant asserts that experience is based on the perception of external objects and "a priori" knowledge. The external world, he writes, provides those things that we sense. But our mind processes this information and gives it order, allowing us to comprehend it. Our mind supplies the conditions of space and time to experience objects. According to the "transcendental unity of apperception", the concepts of the mind (Understanding) and perceptions or intuitions that garner information from phenomena (Sensibility) are synthesized by comprehension. Without concepts, perceptions are nondescript; without perceptions, concepts are meaningless. Thus the famous statement: "Thoughts without content are empty, intuitions [perceptions] without concepts are blind."
Kant also claims that an external environment is necessary for the establishment of the self. Although Kant would want to argue that there is no empirical way of observing the self, we can see the logical necessity of the self when we observe that we can have different perceptions of the external environment over time. By uniting these general representations into one global representation, we can see how a transcendental self emerges. "I am therefore conscious of the identical self in regard to the manifold of the representations that are given to me in an intuition because I call them all together my representations, which constitute one."
Kant deemed it obvious that we have some objective knowledge of the world, such as, say, Newtonian physics. But this knowledge relies on synthetic, "a priori" laws of nature, like causality and substance. How is this possible? Kant's solution was that the subject must supply laws that make experience of objects possible, and that these laws are synthetic, "a priori" laws of nature that apply to all objects before we experience them. To deduce all these laws, Kant examined experience in general, dissecting in it what is supplied by the mind from what is supplied by the given intuitions. This is commonly called a transcendental deduction.
To begin with, Kant's distinction between the "a posteriori" being contingent and particular knowledge, and the "a priori" being universal and necessary knowledge, must be kept in mind. If we merely connect two intuitions together in a perceiving subject, the knowledge is always subjective because it is derived "a posteriori," when what is desired is for the knowledge to be objective, that is, for the two intuitions to refer to the object and hold good of it for anyone at any time, not just the perceiving subject in its current condition. What else is equivalent to objective knowledge besides the "a priori" (universal and necessary knowledge)? Before knowledge can be objective, it must be incorporated under an "a priori" category of "understanding".
For example, if a subject says, "The sun shines on the stone; the stone grows warm," all he perceives are phenomena. His judgment is contingent and holds no necessity. But if he says, "The sunshine causes the stone to warm," he subsumes the perception under the category of causality, which is not found in the perception, and necessarily synthesizes the concept sunshine with the concept heat, producing a necessarily universally true judgment.
To explain the categories in more detail, they are the preconditions of the construction of objects in the mind. Indeed, to even think of the sun and stone presupposes the category of subsistence, that is, substance. For the categories synthesize the random data of the sensory manifold into intelligible objects. This means that the categories are also the most abstract things one can say of any object whatsoever, and hence one can have an "a priori" cognition of the totality of all objects of experience if one can list all of them. To do so, Kant formulates another transcendental deduction.
Judgments are, for Kant, the preconditions of any thought. Man thinks via judgments, so all possible judgments must be listed and the perceptions connected within them put aside, so as to make it possible to examine the moments when "the understanding" is engaged in constructing judgments. For the categories are equivalent to these moments, in that they are concepts of intuitions in general, so far as they are determined by these moments universally and necessarily. Thus by listing all the moments, one can deduce from them all of the categories.
One may now ask: How many possible judgments are there? Kant believed that all the possible propositions within Aristotle's syllogistic logic are equivalent to all possible judgments, and that all the logical operators within the propositions are equivalent to the moments of the understanding within judgments. Thus he listed Aristotle's system in four groups of three: quantity (universal, particular, singular), quality (affirmative, negative, infinite), relation (categorical, hypothetical, disjunctive) and modality (problematic, assertoric, apodeictic). The parallelism with Kant's categories is obvious: quantity (unity, plurality, totality), quality (reality, negation, limitation), relation (substance, cause, community) and modality (possibility, existence, necessity).
The fundamental building blocks of experience, i.e. objective knowledge, are now in place. First there is the sensibility, which supplies the mind with intuitions, and then there is the understanding, which produces judgments of these intuitions and can subsume them under categories. These categories lift the intuitions up out of the subject's current state of consciousness and place them within consciousness in general, producing universally necessary knowledge. For the categories are innate in any rational being, so any intuition thought within a category in one mind is necessarily subsumed and understood identically in any mind. In other words, we filter what we see and hear.
Kant ran into a problem with his theory that the mind plays a part in producing objective knowledge. Intuitions and categories are entirely disparate, so how can they interact? Kant's solution is the (transcendental) schema: "a priori" principles by which the transcendental imagination connects concepts with intuitions through time. All the principles are temporally bound, for if a concept is purely "a priori", as the categories are, then it must apply for all times. Hence there are principles such as "substance is that which endures through time" and "the cause must always be prior to the effect". In the context of the transcendental schema, the concept of transcendental reflection is of great importance.
Kant developed his moral philosophy in three works: "Groundwork of the Metaphysic of Morals" (1785), "Critique of Practical Reason" (1788), and "Metaphysics of Morals" (1797).
In "Groundwork", Kant tries to convert our everyday, obvious, rational knowledge of morality into philosophical knowledge. The latter two works employed "practical reason", which is based only on what reason can tell us, deriving no principles from experience, to reach conclusions that can be applied to the world of experience (in the second part of "The Metaphysics of Morals").
Kant is known for his theory that there is a single moral obligation, which he called the "Categorical Imperative", and which is derived from the concept of duty. Kant defines the demands of moral law as "categorical imperatives". Categorical imperatives are principles that are intrinsically valid; they are good in and of themselves; they must be obeyed in all situations and circumstances, if our behavior is to observe the moral law. The Categorical Imperative provides a test against which moral statements can be assessed. Kant also stated that the moral means and ends can be applied to the categorical imperative: rational beings can pursue certain "ends" using the appropriate "means". Ends based on physical needs or wants create hypothetical imperatives. The categorical imperative can only be based on something that is an "end in itself", that is, an end that is not a means to some other need, desire, or purpose. Kant believed that the moral law is a principle of reason itself, and is not based on contingent facts about the world, such as what would make us happy; it commands us to act on the moral law, which has no other motive than "worthiness to be happy". Accordingly, he believed that moral obligation applies only to rational agents.
Unlike a hypothetical imperative, a categorical imperative is an unconditional obligation; it has the force of an obligation regardless of our will or desires. In "Groundwork of the Metaphysic of Morals" (1785), Kant enumerated three formulations of the categorical imperative that he believed to be roughly equivalent. In the same book, Kant stated:
According to Kant, one cannot make exceptions for oneself. The philosophical maxim on which one acts should always be considered to be a universal law without exception. One cannot allow oneself to do a particular action unless one thinks it appropriate that the reason for the action should become a universal law. For example, one should not steal, however dire the circumstances, because by permitting oneself to steal, one makes stealing a universally acceptable act. This is the first formulation of the categorical imperative, often known as the universalizability principle.
Kant believed that, if an action is not done with the motive of duty, then it is without moral value. He thought that every action should have pure intention behind it; otherwise, it is meaningless. The final result is not the most important aspect of an action; rather, the motive from which the person acts is what gives the action its moral value.
In "Groundwork of the Metaphysic of Morals", Kant also posited the "counter-utilitarian idea that there is a difference between preferences and values, and that considerations of individual rights temper calculations of aggregate utility", a concept that is an axiom in economics:
Everything has either a "price" or a "dignity". Whatever has a price can be replaced by something else as its equivalent; on the other hand, whatever is above all price, and therefore admits of no equivalent, has a dignity. But that which constitutes the condition under which alone something can be an end in itself does not have mere relative worth, i.e., price, but an intrinsic worth, i.e., a dignity. (p. 53, italics in original).
A phrase quoted by Kant, which is used to summarize the counter-utilitarian nature of his moral philosophy, is "Fiat justitia, pereat mundus" ("Let justice be done, though the world perish"), which he translates loosely as "Let justice reign even if all the rascals in the world should perish from it". This appears in his 1795 essay "Perpetual Peace: A Philosophical Sketch" ("Zum ewigen Frieden. Ein philosophischer Entwurf"), Appendix 1.
The first formulation (Formula of Universal Law) of the moral imperative "requires that the maxims be chosen as though they should hold as universal laws of nature". This formulation in principle has as its supreme law the creed "Always act according to that maxim whose universality as a law you can at the same time will" and is the "only condition under which a will can never come into conflict with itself [...]"
One interpretation of the first formulation is called the "universalizability test". An agent's maxim, according to Kant, is his "subjective principle of human actions": that is, what the agent believes is his reason to act. The universalizability test has five steps.
The second formulation (or Formula of the End in Itself) holds that "the rational being, as by its nature an end and thus as an end in itself, must serve in every maxim as the condition restricting all merely relative and arbitrary ends". The principle dictates that you "[a]ct with reference to every rational being (whether yourself or another) so that it is an end in itself in your maxim", meaning that the rational being is "the basis of all maxims of action" and "must be treated never as a mere means but as the supreme limiting condition in the use of all means, i.e., as an end at the same time".
The third formulation (i.e. Formula of Autonomy) is a synthesis of the first two and is the basis for the "complete determination of all maxims". It states "that all maxims which stem from autonomous legislation ought to harmonize with a possible realm of ends as with a realm of nature".
In principle, "So act as if your maxims should serve at the same time as the universal law (of all rational beings)", meaning that we should so act that we may think of ourselves as "a member in the universal realm of ends", legislating universal laws through our maxims (that is, a universal code of conduct), in a "possible realm of ends". No one may elevate themselves above the universal law, therefore it is one's duty to follow the maxim(s).
Commentators, starting in the 20th century, have tended to see Kant as having a strained relationship with religion, though this was not the prevalent view in the 19th century. Karl Leonhard Reinhold, whose letters first made Kant famous, wrote "I believe that I may infer without reservation that the interest of religion, and of Christianity in particular, accords completely with the result of the Critique of Reason." Johann Schultz, who wrote one of the first Kant commentaries, wrote "And does not this system itself cohere most splendidly with the Christian religion? Do not the divinity and beneficence of the latter become all the more evident?" This view continued throughout the 19th century, as noted by Friedrich Nietzsche, who said "Kant's success is merely a theologian's success." The reason for these views was Kant's moral theology, and the widespread belief that his philosophy was the great antithesis to Spinozism, which had been convulsing the European academy for much of the 18th century. Spinozism was widely seen as the cause of the Pantheism controversy, and as a form of sophisticated pantheism or even atheism. As Kant's philosophy disregarded the possibility of arguing for God through pure reason alone, for the same reasons it also disregarded the possibility of arguing against God through pure reason alone. This, coupled with his moral philosophy (his argument that the existence of morality is a rational reason why God and an afterlife do and must exist), was the reason he was seen by many, at least through the end of the 19th century, as a great defender of religion in general and Christianity in particular.
Kant directs his strongest criticisms at the organization and practices of religious bodies that encourage what he sees as a religion of counterfeit service to God. Among the major targets of his criticism are external ritual, superstition and a hierarchical church order. He sees these as efforts to make oneself pleasing to God in ways other than conscientious adherence to the principle of moral rightness in choosing and acting upon one's maxims. Kant's criticisms on these matters, along with his rejection of certain theoretical proofs grounded in pure reason (particularly the ontological argument) for the existence of God and his philosophical commentary on some Christian doctrines, have resulted in interpretations that see Kant as hostile to religion in general and Christianity in particular (e.g., Walsh 1967). Nevertheless, other interpreters consider that Kant was trying to mark off defensible from indefensible Christian belief. Kant sees in Jesus Christ the affirmation of a "pure moral disposition of the heart" that "can make man well-pleasing to God". Regarding Kant's conception of religion, some critics have argued that he was sympathetic to deism. Other critics have argued that Kant's moral conception moves from deism to theism (as moral theism), for example Allen W. Wood and Merold Westphal. As for Kant's book "Religion within the Bounds of Bare Reason", it was emphasized that Kant reduced religiosity to rationality, religion to morality and Christianity to ethics.
In the "Critique of Pure Reason", Kant distinguishes between the transcendental idea of freedom, which as a psychological concept is "mainly empirical" and refers to "whether a faculty of beginning a series of successive things or states from itself is to be assumed" and the practical concept of freedom as the independence of our will from the "coercion" or "necessitation through sensuous impulses". Kant finds it a source of difficulty that the practical idea of freedom is founded on the transcendental idea of freedom, but for the sake of practical interests uses the practical meaning, taking "no account of... its transcendental meaning," which he feels was properly "disposed of" in the Third Antinomy, and as an element in the question of the freedom of the will is for philosophy "a real stumbling block" that has embarrassed speculative reason.
Kant calls practical "everything that is possible through freedom", and the pure practical laws that are never given through sensuous conditions but are held analogously with the universal law of causality are moral laws. Reason can give us only the "pragmatic laws of free action through the senses", but pure practical laws given by reason "a priori" dictate "what is to be done". (The same distinction of transcendental and practical meaning can be applied to the idea of God, with the "proviso" that the practical concept of freedom can be experienced.)
In the "Critique of Practical Reason", at the end of the second Main Part of the "Analytics", Kant introduces the categories of freedom, in analogy with the categories of the understanding, as their practical counterparts. Kant's categories of freedom apparently function primarily as conditions for the possibility of actions (i) being free, (ii) being understood as free and (iii) being morally evaluated. For Kant, although actions as theoretical objects are constituted by means of the theoretical categories, actions as practical objects (objects of practical use of reason, which can be good or bad) are constituted by means of the categories of freedom. Only in this way can actions, as phenomena, be a consequence of freedom, and be understood and evaluated as such.
Kant discusses the subjective nature of aesthetic qualities and experiences in "Observations on the Feeling of the Beautiful and Sublime" (1764). Kant's contribution to aesthetic theory is developed in the "Critique of Judgment" (1790) where he investigates the possibility and logical status of "judgments of taste." In the "Critique of Aesthetic Judgment," the first major division of the "Critique of Judgment", Kant used the term "aesthetic" in a manner that, according to Kant scholar W.H. Walsh, differs from its modern sense. In the "Critique of Pure Reason", to note essential differences between judgments of taste, moral judgments, and scientific judgments, Kant abandoned the term "aesthetic" as "designating the critique of taste," noting that judgments of taste could never be "directed" by "laws "a priori"." After A. G. Baumgarten, who wrote "Aesthetica" (1750–58), Kant was one of the first philosophers to develop and integrate aesthetic theory into a unified and comprehensive philosophical system, utilizing ideas that played an integral role throughout his philosophy.
In the chapter "Analytic of the Beautiful" in the "Critique of Judgment", Kant states that beauty is not a property of an artwork or natural phenomenon, but is instead consciousness of the pleasure that attends the 'free play' of the imagination and the understanding. Even though it appears that we are using reason to decide what is beautiful, the judgment is not a cognitive judgment, "and is consequently not logical, but aesthetical" (§ 1). A pure judgement of taste is subjective since it refers to the emotional response of the subject and is based upon nothing but esteem for an object itself: it is a disinterested pleasure, and we feel that pure judgements of taste (i.e. judgements of beauty) lay claim to universal validity (§§ 20–22). It is important to note that this universal validity is not derived from a determinate concept of beauty but from "common sense" (§40). Kant also believed that a judgement of taste shares characteristics engaged in a moral judgement: both are disinterested, and we hold them to be universal. In the chapter "Analytic of the Sublime" Kant identifies the sublime as an aesthetic quality that, like beauty, is subjective, but unlike beauty refers to an indeterminate relationship between the faculties of the imagination and of reason, and shares the character of moral judgments in the use of reason. The feeling of the sublime, divided into two distinct modes (the mathematical and the dynamical sublime), describes two subjective moments that concern the relationship of the faculty of the imagination to reason. Some commentators argue that Kant's critical philosophy contains a third kind of the sublime, the moral sublime, which is the aesthetic response to the moral law or a representation of it, and a development of the "noble" sublime in Kant's theory of 1764. The mathematical sublime results from the failure of the imagination to comprehend natural objects that appear boundless and formless, or appear "absolutely great" (§§ 23–25).
This imaginative failure is then recuperated through the pleasure taken in reason's assertion of the concept of infinity. In this move the faculty of reason proves itself superior to our fallible sensible self (§§ 25–26). In the dynamical sublime there is the sense of annihilation of the sensible self as the imagination tries to comprehend a vast might. This power of nature threatens us but through the resistance of reason to such sensible annihilation, the subject feels a pleasure and a sense of the human moral vocation. This appreciation of moral feeling through exposure to the sublime helps to develop moral character.
Kant developed a distinction between an object of art as a material value subject to the conventions of society and the transcendental condition of the judgment of taste as a "refined" value in the propositions of his "Idea of A Universal History" (1784). In the Fourth and Fifth Theses of that work he identified all art as the "fruits of unsociableness" due to men's "antagonism in society" and, in the Seventh Thesis, asserted that while such material property is indicative of a civilized state, only the ideal of morality and the universalization of refined value through the improvement of the mind "belongs to culture".
In "Perpetual Peace: A Philosophical Sketch", Kant listed several conditions that he thought necessary for ending wars and creating a lasting peace. They included a world of constitutional republics. His classical republican theory was extended in the "Science of Right", the first part of the "Metaphysics of Morals" (1797). Kant believed that universal history leads to the ultimate world of republican states at peace, but his theory was not pragmatic. The process was described in "Perpetual Peace" as natural rather than rational:
Kant's political thought can be summarized as republican government and international organization. "In more characteristically Kantian terms, it is a doctrine of the state based upon the law ("Rechtsstaat") and of eternal peace. Indeed, in each of these formulations, both terms express the same idea: that of legal constitution or of 'peace through law'. Kant's political philosophy, being essentially a legal doctrine, rejects by definition the opposition between moral education and the play of passions as alternate foundations for social life. The state is defined as the union of men under law. The state is constituted by laws which are necessary a priori because they flow from the very concept of law. "A regime can be judged by no other criteria nor be assigned any other functions, than those proper to the lawful order as such."
He opposed "democracy," which at his time meant direct democracy, believing that majority rule posed a threat to individual liberty. He stated, "...democracy is, properly speaking, necessarily a despotism, because it establishes an executive power in which 'all' decide for or even against one who does not agree; that is, 'all,' who are not quite all, decide, and this is a contradiction of the general will with itself and with freedom." As with most writers at the time, he distinguished three forms of government, i.e. democracy, aristocracy, and monarchy, with mixed government as the most ideal form of the three.
Kant lectured on anthropology, the study of human nature, for twenty-three and a half years. His "Anthropology from a Pragmatic Point of View" was published in 1798. (This was the subject of Michel Foucault's secondary dissertation for his State doctorate, "Introduction to Kant's Anthropology".) Kant's Lectures on Anthropology were published for the first time in 1997 in German. "Introduction to Kant's Anthropology" was translated into English and published by the Cambridge Texts in the History of Philosophy series in 2006.
Kant was among the first people of his time to introduce anthropology as an intellectual area of study, long before the field gained popularity, and his texts are considered to have advanced the field. His point of view was to influence the works of later philosophers such as Martin Heidegger and Paul Ricoeur.
Kant was also the first to suggest using a dimensionality approach to human diversity. He analyzed the nature of the Hippocrates-Galen four temperaments and plotted them in two dimensions: (1) "activation", or energetic aspect of behaviour, and (2) "orientation on emotionality". Cholerics were described as emotional and energetic; Phlegmatics as balanced and weak; Sanguines as balanced and energetic, and Melancholics as emotional and weak. These two dimensions reappeared in all subsequent models of temperament and personality traits.
Kant viewed anthropology in two broad categories: (1) the physiological approach, which he referred to as "what nature makes of the human being"; and (2) the pragmatic approach, which explored the things that a human "can and should make of himself."
Kant is a major Enlightenment thinker in the history of racism and is one of the central figures in the birth of modern "scientific" racism. Where previous figures such as Carl Linnaeus and Johann Friedrich Blumenbach had offered only "empirical" observation in support of racism, Kant produced a full-blown theory of race. Using the Four Temperaments of ancient Greece, he proposed a hierarchy of four racial categories: white Europeans, yellow Asians, black Africans, and red Amerindians.
He wrote that "[Whites] contain all the impulses of nature in affects and passions, all talents, all dispositions to culture and civilization and can as readily obey as govern. They are the only ones who always advance to perfection." He describes South Asians as "educated to the highest degree but only in the arts and not in the sciences". He goes on to say that Hindustanis can never reach the level of abstract concepts and that a "great hindustani man" is one who has "gone far in the art of deception and has much money". He stated that the Hindus always stay the way they are and can never advance. About black Africans, Kant wrote that "they can be educated but only as servants, that is they allow themselves to be trained". He quotes David Hume as challenging anyone to "cite a [single] example in which a Negro has shown talents" and asserts that, among the "hundreds of thousands" of blacks transported during the Atlantic slave trade, even among the freed "still not a single one was ever found who presented anything great in art or science or any other praiseworthy quality". To Kant, "the Negro can be disciplined and cultivated, but is never genuinely civilized. He falls of his own accord into savagery." Native Americans, Kant opined, "cannot be educated". He calls them unmotivated, lacking affect, passion and love, describing them as too weak for labor, unfit for any culture, and too phlegmatic for diligence. He said the Native Americans are "far below the Negro, who undoubtedly holds the lowest of all remaining levels by which we designate the different races". Kant stated that "Americans and Blacks cannot govern themselves. They thus serve only for slaves."
Kant was an opponent of miscegenation, believing that whites would be "degraded" and that the "fusing of races" is undesirable, for "not every race adopts the morals and customs of the Europeans". He stated that "instead of assimilation, which was intended by the melting together of the various races, Nature has here made a law of just the opposite". He believed that in the future all races would be extinguished, except that of the whites.
Charles W. Mills wrote that Kant has been "sanitized for public consumption", his racist works conveniently ignored. Robert Bernasconi stated that Kant "supplied the first scientific definition of race". Emmanuel Chukwudi Eze is credited with bringing Kant's contributions to racism to light in the 1990s among Western philosophers, who often gloss over this part of his life and works. He wrote about Kant's ideas of race:
Kant's influence on Western thought has been profound. Over and above his influence on specific thinkers, Kant changed the framework within which philosophical inquiry has been carried out. He accomplished a paradigm shift; very little philosophy is now carried out in the style of pre-Kantian philosophy. This shift consists in several closely related innovations that have become foundational in philosophy itself and in the social sciences and humanities generally:
Kant's ideas have been incorporated into a variety of schools of thought. These include German Idealism, Marxism, positivism, phenomenology, existentialism, critical theory, linguistic philosophy, structuralism, post-structuralism, and deconstructionism.
During his own life, much critical attention was paid to his thought. He influenced Reinhold, Fichte, Schelling, Hegel, and Novalis during the 1780s and 1790s. The school of thinking known as German Idealism developed from his writings. The German Idealists Fichte and Schelling, for example, tried to bring traditional "metaphysically" laden notions like "the Absolute", "God", and "Being" into the scope of Kant's critical thought. In so doing, the German Idealists tried to reverse Kant's view that we cannot know what we cannot observe.
Hegel was one of Kant's first major critics. In response to what he saw as Kant's abstract and formal account, Hegel brought about an ethic focused on the "ethical life" of the community. But Hegel's notion of "ethical life" is meant to subsume, rather than replace, Kantian ethics. And Hegel can be seen as trying to defend Kant's idea of freedom as going beyond finite "desires", by means of reason. Thus, in contrast to later critics like Nietzsche or Russell, Hegel shares some of Kant's most basic concerns.
Kant's thinking on religion was used in Britain to challenge the decline in religious faith in the nineteenth century. British Catholic writers, notably G.K. Chesterton and Hilaire Belloc, followed this approach. Ronald Englefield debated this movement, and Kant's use of language. Criticisms of Kant were common in the realist views of the new positivism at that time.
Arthur Schopenhauer was strongly influenced by Kant's transcendental idealism. He, like G.E. Schulze, Jacobi, and Fichte before him, was critical of Kant's theory of the thing in itself. Things in themselves, they argued, are neither the cause of what we observe nor are they completely beyond our access. Ever since the first "Critique of Pure Reason" philosophers have been critical of Kant's theory of the thing in itself. Many have argued that, if such a thing exists beyond experience, then one cannot posit that it affects us causally, since that would entail stretching the category 'causality' beyond the realm of experience. For Schopenhauer things in themselves do not exist outside the non-rational will. The world, as Schopenhauer would have it, is the striving and largely unconscious will. Michael Kelly, in the preface to his 1910 book "Kant's Ethics and Schopenhauer's Criticism", stated: "Of Kant it may be said that what is good and true in his philosophy would have been buried with him, were it not for Schopenhauer..."
With the success and wide influence of Hegel's writings, Kant's influence began to wane, though there was in Germany a movement that hailed a return to Kant in the 1860s, beginning with the publication of "Kant und die Epigonen" in 1865 by Otto Liebmann. His motto was "Back to Kant", and a re-examination of his ideas began (see Neo-Kantianism). Around the turn of the 20th century there was an important revival of Kant's theoretical philosophy, known as the Marburg School, represented in the work of Hermann Cohen, Paul Natorp, Ernst Cassirer, and anti-Neo-Kantian Nicolai Hartmann.
Kant's notion of "Critique" has been quite influential. The Early German Romantics, especially Friedrich Schlegel in his "Athenaeum Fragments", used Kant's self-reflexive conception of criticism in their Romantic theory of poetry. Also in Aesthetics, Clement Greenberg, in his classic essay "Modernist Painting", uses Kantian criticism, what Greenberg refers to as "immanent criticism", to justify the aims of Abstract painting, a movement Greenberg saw as aware of the key limitation (flatness) that makes up the medium of painting. French philosopher Michel Foucault was also greatly influenced by Kant's notion of "Critique" and wrote several pieces on Kant for a re-thinking of the Enlightenment as a form of "critical thought". He went so far as to classify his own philosophy as a "critical history of modernity, rooted in Kant".
Kant believed that mathematical truths were forms of synthetic "a priori" knowledge, which means they are necessary and universal, yet known through intuition. Kant's often brief remarks about mathematics influenced the mathematical school known as intuitionism, a movement in philosophy of mathematics opposed to Hilbert's formalism, and Frege and Bertrand Russell's logicism.
With his "Perpetual Peace", Kant is considered to have foreshadowed many of the ideas that have come to form the democratic peace theory, one of the main controversies in political science.
Prominent recent Kantians include the British philosophers P.F. Strawson, Onora O'Neill, and Quassim Cassam and the American philosophers Wilfrid Sellars and Christine Korsgaard. Due to the influence of Strawson and Sellars, among others, there has been a renewed interest in Kant's view of the mind. Central to many debates in philosophy of psychology and cognitive science is Kant's conception of the unity of consciousness.
Jürgen Habermas and John Rawls are two significant political and moral philosophers whose work is strongly influenced by Kant's moral philosophy. They argued against relativism, supporting the Kantian view that universality is essential to any viable moral philosophy. Jean-Francois Lyotard, however, emphasized the indeterminacy in the nature of thought and language and has engaged in debates with Habermas based on the effects this indeterminacy has on philosophical and political debates.
Mou Zongsan's study of Kant has been cited as a highly crucial part of the development of Mou's personal philosophy, namely New Confucianism. Widely regarded as the most influential Kant scholar in China, Mou translated all three of Kant's critiques, and his rigorous critique of Kant's philosophy served as an ardent attempt to reconcile Chinese and Western philosophy amid increasing pressure to westernize in China.
Kant's influence also has extended to the social, behavioral, and physical sciences, as in the sociology of Max Weber, the psychology of Jean Piaget and Carl Gustav Jung, and the linguistics of Noam Chomsky. Kant's work on mathematics and synthetic "a priori" knowledge is also cited by theoretical physicist Albert Einstein as an early influence on his intellectual development, which he later criticised heavily and rejected. Because of the thoroughness of the Kantian paradigm shift, his influence extends to thinkers who neither specifically refer to his work nor use his terminology.
Wilhelm Dilthey inaugurated the Academy edition (the "Akademie-Ausgabe" abbreviated as "AA" or "Ak") of Kant's writings ("Gesammelte Schriften", Königlich-Preußische Akademie der Wissenschaften, Berlin, 1902–38) in 1895, and served as its first editor. The volumes are grouped into four sections:
In Germany, one important contemporary interpreter of Kant and the movement of German Idealism he began is Dieter Henrich, who has some work available in English. P.F. Strawson's "The Bounds of Sense" (1966) played a significant role in determining the contemporary reception of Kant in England and America. More recent interpreters of note in the English-speaking world include Lewis White Beck, Jonathan Bennett, Henry Allison, Paul Guyer, Christine Korsgaard, Stephen Palmquist, Robert B. Pippin, Roger Scruton, Rudolf Makkreel, and Béatrice Longuenesse.
General introductions to his thought
Biography and historical context
Collections of essays
Theoretical philosophy
Practical philosophy
Aesthetics
Philosophy of religion
Perpetual peace and international relations
Other works
Contemporary philosophy with a Kantian influence
History of Indonesia
The history of Indonesia has been shaped by its geographic position, its natural resources, a series of human migrations and contacts, wars and conquests, as well as by trade, economics and politics. Indonesia is an archipelagic country of 17,000 to 18,000 islands (8,844 named and 922 permanently inhabited) stretching along the equator in South East Asia. The country's strategic sea-lane position fostered inter-island and international trade; trade has since fundamentally shaped Indonesian history. The area of Indonesia is populated by peoples of various migrations, creating a diversity of cultures, ethnicities, and languages. The archipelago's landforms and climate significantly influenced agriculture and trade, and the formation of states. The boundaries of the state of Indonesia represent the 20th century borders of the Dutch East Indies.
Fossilised remains of "Homo erectus" and his tools, popularly known as the "Java Man", suggest the Indonesian archipelago was inhabited by at least 1.5 million years ago. Austronesian people, who form the majority of the modern population, are thought to have originally been from Taiwan and arrived in Indonesia around 2000 BCE. From the 7th century CE, the powerful Srivijaya naval kingdom flourished, bringing Hindu and Buddhist influences with it. The agricultural Buddhist Sailendra and Hindu Mataram dynasties subsequently thrived and declined in inland Java. The last significant non-Muslim kingdom, the Hindu Majapahit kingdom, flourished from the late 13th century, and its influence stretched over much of Indonesia. The earliest evidence of Islamised populations in Indonesia dates to the 13th century in northern Sumatra; other Indonesian areas gradually adopted Islam, which became the dominant religion in Java and Sumatra by the end of the 16th century. For the most part, Islam overlaid and mixed with existing cultural and religious influences.
Europeans such as the Portuguese arrived in Indonesia from the 16th century seeking to monopolise the sources of valuable nutmeg, cloves, and cubeb pepper in Maluku. In 1602 the Dutch established the Dutch East India Company (VOC) and became the dominant European power by 1610. Following bankruptcy, the VOC was formally dissolved in 1800, and the government of the Netherlands established the Dutch East Indies under government control. By the early 20th century, Dutch dominance extended to the current boundaries. The Japanese invasion and subsequent occupation in 1942–45 during WWII ended Dutch rule, and encouraged the previously suppressed Indonesian independence movement. Two days after the surrender of Japan in August 1945, nationalist leader, Sukarno, declared independence and became president. The Netherlands tried to reestablish its rule, but a bitter armed and diplomatic struggle ended in December 1949, when in the face of international pressure, the Dutch formally recognised Indonesian independence.
An attempted coup in 1965 led to a violent army-led anti-communist purge in which over half a million people were killed. General Suharto politically outmanoeuvred President Sukarno, and became president in March 1968. His New Order administration garnered the favour of the West, whose investment in Indonesia was a major factor in the subsequent three decades of substantial economic growth. In the late 1990s, however, Indonesia was the country hardest hit by the East Asian Financial Crisis, which led to popular protests and Suharto's resignation on 21 May 1998. The "Reformasi" era following Suharto's resignation, has led to a strengthening of democratic processes, including a regional autonomy program, the secession of East Timor, and the first direct presidential election in 2004. Political and economic instability, social unrest, corruption, natural disasters, and terrorism have slowed progress. Although relations among different religious and ethnic groups are largely harmonious, acute sectarian discontent and violence remain problems in some areas.
In 2007, an analysis of cut marks on two bovid bones found in Sangiran, showed them to have been made 1.5 to 1.6 million years ago by clamshell tools. This is the oldest evidence for the presence of early humans in Indonesia. Fossilised remains of "Homo erectus" in Indonesia, popularly known as the "Java Man" were first discovered by the Dutch anatomist Eugène Dubois at Trinil in 1891, and are at least 700,000 years old. Other "H. erectus" fossils of a similar age were found at Sangiran in the 1930s by the anthropologist Gustav Heinrich Ralph von Koenigswald, who in the same time period also uncovered fossils at Ngandong alongside more advanced tools, re-dated in 2011 to between 550,000 and 143,000 years old. In 1977 another "H. erectus" skull was discovered at Sambungmacan. The earliest evidence of artistic activity ever found, in the form of diagonal etchings made with the use of a shark's tooth, was detected in 2014 on a 500,000-year-old fossil of a clam found in Java in the 1890s, associated with "H. erectus".
In 2003, on the island of Flores, fossils of a new small hominid dated between 74,000 and 13,000 years old were discovered, much to the surprise of the scientific community. This newly discovered hominid was named the "Flores Man", or "Homo floresiensis". This hominid, standing about 3 feet tall, is thought to be a species descended from "Homo erectus" that reduced in size over thousands of years, through a well-known process called island dwarfism. Flores Man seems to have shared the island with modern "Homo sapiens" until only 12,000 years ago, when it became extinct. In 2010, stone tools were discovered on Flores, dating from 1 million years ago. These are the earliest remains implying human seafaring technology.
The Indonesian archipelago was formed during the thaw after the Last Glacial Maximum. Early humans travelled by sea and spread from mainland Asia eastward to New Guinea and Australia. "Homo sapiens" reached the region by around 45,000 years ago. In 2011, evidence was uncovered in neighbouring East Timor, showing that 42,000 years ago, these early settlers had high-level maritime skills, and by implication the technology needed to make ocean crossings to reach Australia and other islands, as they were catching and consuming large numbers of big deep sea fish such as tuna.
Austronesian people form the majority of the modern population. They may have arrived in Indonesia around 2000 BCE and are thought to have originated in Taiwan. Dong Son culture spread to Indonesia, bringing with it techniques of wet-field rice cultivation, ritual buffalo sacrifice, bronze casting, megalithic practices, and ikat weaving methods. Some of these practices remain in areas including the Batak areas of Sumatra, Toraja in Sulawesi, and several islands in Nusa Tenggara. Early Indonesians were animists who honoured the spirits of the dead, believing their souls or life force could still help the living.
Ideal agricultural conditions, and the mastering of wet-field rice cultivation as early as the 8th century BCE, allowed villages, towns, and small kingdoms to flourish by the 1st century CE. These kingdoms (little more than collections of villages subservient to petty chieftains) evolved with their own ethnic and tribal religions. Java's hot and even temperature, abundant rain and volcanic soil, was perfect for wet rice cultivation. Such agriculture required a well-organized society, in contrast to the society based on dry-field rice, which is a much simpler form of cultivation that does not require an elaborate social structure to support it.
Buni culture clay pottery flourished in coastal northern West Java and Banten around 400 BCE to 100 CE. The Buni culture was probably the predecessor of the Tarumanagara kingdom, one of the earliest Hindu kingdoms in Indonesia, producing numerous inscriptions and marking the beginning of the historical period in Java.
On 11 December 2019, a team of researchers led by Dr. Maxime Aubert announced the discovery of the world's oldest known hunting scenes in prehistoric art, more than 44,000 years old, in the limestone cave of Leang Bulu' Sipong 4. Archaeologists determined the age of the depiction of hunting a pig and a buffalo by measuring the different isotope levels of radioactive uranium and thorium in the calcite 'popcorn' deposits that had formed over it.
Indonesia, like much of Southeast Asia, was influenced by Indian culture. From the 2nd century up to the 12th century, Indian culture spread across all of Southeast Asia through Indian dynasties such as the Pallava, Gupta, Pala and Chola.
References to the Dvipantara or Yawadvipa, a Hindu kingdom in Java and Sumatra, appear in Sanskrit writings from 200 BCE. In India's earliest epic, the Ramayana, Sugriva, the chief of Rama's army, dispatched his men to Yawadvipa, the island of Java, in search of Sita. According to the ancient Tamil text Manimekalai, Java had a kingdom with a capital called Nagapuram. The earliest archaeological relic discovered in Indonesia is from the Ujung Kulon National Park, West Java, where an early Hindu statue of Ganesha, estimated to date from the 1st century CE, was found on the summit of Mount Raksa on Panaitan island. There is also archaeological evidence of the Sunda Kingdom in West Java dating from the 2nd century, and Jiwa Temple in Batujaya, Karawang, West Java was probably built around this time. South Indian culture was spread to Southeast Asia by the south Indian Pallava dynasty in the 4th and 5th centuries, and by the 5th century, stone inscriptions written in Pallava scripts appeared in Java and Borneo.
A number of Hindu and Buddhist states flourished and then declined across Indonesia. Three rough plinths dating from the beginning of the 4th century have been found in Kutai, East Kalimantan, near the Mahakam River. The plinths bear an inscription in the Pallava script of India reading "A gift to the Brahmin priests".
One such early kingdom was Tarumanagara, which flourished between 358 and 669 CE. Located in West Java close to modern-day Jakarta, its 5th-century king, Purnawarman, established the earliest known inscriptions in Java: the Ciaruteun inscription located near Bogor, as well as other inscriptions known as the Pasar Awi inscription and the Muncul inscription. On this monument, King Purnawarman inscribed his name and made an imprint of his footprints, as well as his elephant's footprints. The accompanying inscription reads, "Here are the footprints of King Purnavarman, the heroic conqueror of the world". This inscription is written in Pallava script and in Sanskrit and is still clear after 1500 years. Purnawarman apparently built a canal that changed the course of the Cakung River, and drained a coastal area for agriculture and settlement purposes. In his stone inscriptions, Purnawarman associated himself with Vishnu, and Brahmins ritually secured the hydraulic project.
Around the same period, in the 6th to 7th centuries, the Kalingga Kingdom was established on the northern coast of Central Java, as mentioned in Chinese accounts. The kingdom's name was derived from the ancient Indian kingdom of Kalinga, suggesting an ancient link between India and Indonesia.
The political history of the Indonesian archipelago during the 7th to 11th centuries was dominated by Srivijaya, based in Sumatra, and by the Sailendra dynasty, based in Java, which dominated Southeast Asia and constructed Borobudur, the largest Buddhist monument in the world. History prior to the 14th and 15th centuries is not well known due to the scarcity of evidence. By the 15th century, two major states dominated the region: Majapahit in East Java, the greatest of the pre-Islamic Indonesian states, and Malacca on the west coast of the Malay Peninsula, arguably one of the greatest of the Muslim trading empires; its rise marked the emergence of Muslim states in the Indonesian archipelago.
The Medang Empire, sometimes referred to as Mataram, was an Indianized kingdom based in Central Java around modern-day Yogyakarta between the 8th and 10th centuries. The kingdom was ruled by the Sailendra dynasty, and later by the Sanjaya dynasty. The centre of the kingdom was moved from Central Java to East Java by Mpu Sindok. An eruption of the volcano Mount Merapi in 929, and political pressure from the Sailendras based in the Srivijaya Empire, may have caused the move.
The first king of Mataram, Sri Sanjaya, left inscriptions in stone. The monumental Hindu temple of Prambanan in the vicinity of Yogyakarta was built by Pikatan. Dharmawangsa ordered the translation of the Mahabharata into Old Javanese in 996.
In the period 750–850 CE, the kingdom saw the blossoming of classical Javanese art and architecture, with a rapid increase in temple construction across the landscape of its heartland in Mataram (the Kedu and Kewu Plains). The most notable temples constructed in Medang Mataram are Kalasan, Sewu, Borobudur, and Prambanan. The empire had become the dominant power (mandala) not only in Java but also over the Srivijayan Empire, Bali, southern Thailand, the Indianized kingdoms of the Philippines, and the Khmer in Cambodia.
Later in its history, the dynasty split along religious lines into a Buddhist and a Shivaist branch. Civil war followed, and the Medang Empire divided into two powerful kingdoms along regional and religious lines: the Shivaist dynasty of the Medang kingdom in Java, led by Rakai Pikatan, and the Buddhist dynasty of the Srivijaya kingdom in Sumatra, led by Balaputradewa. Hostility between them lasted until 1006, when the Sailendras based in Srivijaya incited a rebellion by Wurawari, a vassal of Medang, and sacked the Shivaist dynasty's capital at Watugaluh, Java. As a result, Srivijaya rose to undisputed hegemony in the era. Yet the Shivaist dynasty survived, reclaiming East Java in 1019 and continuing as the Kahuripan kingdom led by Airlangga, son of Udayana of Bali.
Srivijaya was an ethnic Malay kingdom on Sumatra which influenced much of Maritime Southeast Asia. From the 7th century, the powerful Srivijayan naval kingdom flourished as a result of trade and the influences of Hinduism and Buddhism that were imported with it.
Srivijaya was centred in the coastal trading centre of present-day Palembang. It was not a "state" in the modern sense, with defined boundaries and a centralised government to which the citizens owe allegiance. Rather, Srivijaya was a confederacy centred on a royal heartland. It was a thalassocracy and did not extend its influence far beyond the coastal areas of the islands of Southeast Asia. Trade was the driving force of Srivijaya, just as it has been for most societies throughout history. The Srivijayan navy controlled the trade that made its way through the Strait of Malacca.
By the 7th century, the harbours of various vassal states of Srivijaya lined both coasts of the Strait of Malacca. Around this time, Srivijaya had established suzerainty over large areas of Sumatra, western Java, and much of the Malay Peninsula. Dominating the Malacca and Sunda straits, the empire controlled both the Spice Route traffic and local trade. It remained a formidable sea power until the 13th century. This spread the ethnic Malay culture throughout Sumatra, the Malay Peninsula, and western Borneo. A stronghold of Vajrayana Buddhism, Srivijaya attracted pilgrims and scholars from other parts of Asia.
Relations between Srivijaya and the Chola Empire of south India were friendly during the reign of Raja Raja Chola I, but during the reign of Rajendra Chola I the Chola Empire attacked Srivijayan cities. A series of Chola raids in the 11th century weakened the Srivijayan hegemony and enabled the formation of regional kingdoms based, like Kediri, on intensive agriculture rather than coastal and long-distance trade. Srivijayan influence waned by the 11th century. The kingdom was in frequent conflict with the Javanese kingdoms, first Singhasari and then Majapahit. Islam eventually made its way to the Aceh region of Sumatra, spreading its influence through contacts with Arab and Indian traders. By the late 13th century, the kingdom of Pasai in northern Sumatra had converted to Islam. The last inscription dates to 1374, in which a crown prince, Ananggavarman, is mentioned. Srivijaya ceased to exist by 1414, when Parameswara, the kingdom's last prince, fled to Temasik, then to Malacca. His son later converted to Islam and founded the Sultanate of Malacca on the Malay Peninsula.
Despite limited historical evidence, Majapahit is known to have been the most dominant of Indonesia's pre-Islamic states. The Hindu Majapahit kingdom was founded in eastern Java in the late 13th century, and under Gajah Mada it experienced what is often referred to as a "Golden Age" in Indonesian history, when its influence extended to much of the southern Malay Peninsula, Borneo, Sumatra, and Bali from about 1293 to around 1500.
The founder of the Majapahit Empire, Kertarajasa, was the son-in-law of the ruler of the Singhasari kingdom, also based in Java. After Singhasari drove Srivijaya out of Java in 1290, the rising power of Singhasari came to the attention of Kublai Khan in China and he sent emissaries demanding tribute. Kertanagara, ruler of the Singhasari kingdom, refused to pay tribute and the Khan sent a punitive expedition which arrived off the coast of Java in 1293. By that time, a rebel from Kediri, Jayakatwang, had killed Kertanagara. The Majapahit founder allied himself with the Mongols against Jayakatwang and, once the Singhasari kingdom was destroyed, turned and forced his Mongol allies to withdraw in confusion.
Gajah Mada, an ambitious Majapahit prime minister and regent from 1331 to 1364, extended the empire's rule to the surrounding islands. A few years after Gajah Mada's death, the Majapahit navy captured Palembang, putting an end to the Srivijayan kingdom. Although the Majapahit rulers extended their power over other islands and destroyed neighbouring kingdoms, their focus seems to have been on controlling and gaining a larger share of the commercial trade that passed through the archipelago. About the time Majapahit was founded, Muslim traders and proselytisers began entering the area. After its peak in the 14th century, Majapahit power began to decline and was unable to control the rising power of the Sultanate of Malacca. Dates for the end of the Majapahit Empire range from 1478 to 1520. A large number of courtiers, artisans, priests, and members of the royal family moved east to the island of Bali at the end of Majapahit power.
The earliest accounts of the Indonesian archipelago date from the Abbasid Caliphate. According to those early accounts, the archipelago was famous among early Muslim sailors mainly for its abundance of precious spice-trade commodities such as nutmeg, cloves, galangal, and many other spices.
Although Muslim traders first travelled through South East Asia early in the Islamic era, the spread of Islam among the inhabitants of the Indonesian archipelago dates to the 13th century in northern Sumatra. Although it is known that the spread of Islam began in the west of the archipelago, the fragmentary evidence does not suggest a rolling wave of conversion through adjacent areas; rather, it suggests the process was complicated and slow. The spread of Islam was driven by increasing trade links outside of the archipelago; in general, traders and the royalty of major kingdoms were the first to adopt the new religion.
Other Indonesian areas gradually adopted Islam, making it the dominant religion in Java and Sumatra by the end of the 16th century. For the most part, Islam overlaid and mixed with existing cultural and religious influences, which shaped the predominant form of Islam in Indonesia, particularly in Java. Only Bali retained a Hindu majority. In the eastern archipelago, both Christian and Islamic missionaries were active in the 16th and 17th centuries, and, currently, there are large communities of both religions on these islands.
The Sultanate of Mataram was the third Sultanate in Java, after the Sultanate of Demak Bintoro and the Sultanate of Pajang.
According to Javanese records, Kyai Gedhe Pamanahan became the ruler of the Mataram area in the 1570s with the support of the kingdom of Pajang to the east, near the current site of Surakarta (Solo). Pamanahan was often referred to as Kyai Gedhe Mataram after his ascension.
Pamanahan's son, Panembahan Senapati Ingalaga, replaced his father on the throne around 1584. Under Senapati the kingdom grew substantially through regular military campaigns against Mataram's neighbours. Shortly after his accession, for example, he conquered his father's patrons in Pajang.
The reign of Panembahan Seda ing Krapyak (c. 1601–1613), the son of Senapati, was dominated by further warfare, especially against powerful Surabaya, already a major centre in East Java. The first contact between Mataram and the Dutch East India Company (VOC) occurred under Krapyak. Dutch activities at the time were limited to trading from a few coastal settlements, so their interactions with the inland Mataram kingdom were limited, although they did form an alliance against Surabaya in 1613. Krapyak died that year.
Krapyak was succeeded by his son, who is known simply as Sultan Agung ("Great Sultan") in Javanese records. Agung was responsible for the great expansion and lasting historical legacy of Mataram due to the extensive military conquests of his long reign from 1613 to 1646.
After years of war, Agung finally conquered Surabaya. The city was surrounded by land and sea and starved into submission. With Surabaya brought into the empire, the Mataram kingdom encompassed all of central and eastern Java, and Madura; only in the west did Banten and the Dutch settlement in Batavia remain outside Agung's control. He tried repeatedly in the 1620s and 1630s to drive the Dutch from Batavia, but his armies met their match, and he was forced to share control over Java.
In 1645 he began building Imogiri, his burial place, about fifteen kilometres south of Yogyakarta. Imogiri remains the resting place of most of the royalty of Yogyakarta and Surakarta to this day. Agung died in the spring of 1646, with his image of royal invincibility shattered by his losses to the Dutch, but he did leave behind an empire that covered most of Java and its neighbouring islands.
Upon taking the throne, Agung's son Susuhunan Amangkurat I tried to bring long-term stability to Mataram's realm, murdering local leaders who were insufficiently deferential to him and closing ports so that he alone controlled trade with the Dutch.
By the mid-1670s dissatisfaction with the king had fanned into open revolt. Raden Trunajaya, a prince from Madura, led a revolt fortified by itinerant mercenaries from Makassar that captured the king's court at Mataram in mid-1677. The king escaped to the north coast with his eldest son, the future king Amangkurat II, leaving his younger son Pangeran Puger in Mataram. Apparently more interested in profit and revenge than in running a struggling empire, the rebel Trunajaya looted the court and withdrew to his stronghold in East Java, leaving Puger in control of a weak court.
Amangkurat I died just after his expulsion, making Amangkurat II king in 1677. He too was nearly helpless, though, having fled without an army or treasury to build one. In an attempt to regain his kingdom, he made substantial concessions to the Dutch, who then went to war to reinstate him. For the Dutch, a stable Mataram empire that was deeply indebted to them would help ensure continued trade on favourable terms. They were willing to lend their military might to keep the kingdom together. Dutch forces first captured Trunajaya, then forced Puger to recognise the sovereignty of his elder brother Amangkurat II. The kingdom collapsed after a two-year war, in which power plays crippled the Sunan.
In 1524–25, Sunan Gunung Jati from Cirebon, together with the armies of the Demak Sultanate, seized the port of Banten from the Sunda kingdom and established the Sultanate of Banten. This was accompanied by Muslim preachers and the adoption of Islam amongst the local population. The Sultanate lasted from 1526 to 1813 AD, reaching its peak in the first half of the 17th century, and left many archaeological remains and historical records.
Beginning in the 16th century, successive waves of Europeans—the Portuguese, Spanish, Dutch, and British—sought to dominate the spice trade at its sources in India and the 'Spice Islands' (Maluku) of Indonesia. This meant finding a way to Asia that cut out the Muslim merchants who, with their Venetian outlet in the Mediterranean, monopolised spice imports to Europe. Astronomically priced at the time, spices were highly coveted not only to preserve meat and make poorly preserved meat palatable, but also as medicines and magic potions.
The arrival of Europeans in South East Asia is often regarded as the watershed moment in its history. Other scholars consider this view untenable, arguing that European influence during the times of the early arrivals of the 16th and 17th centuries was limited in both area and depth. This is in part due to Europe not being the most advanced or dynamic area of the world in the early 15th century. Rather, the major expansionist force of this time was Islam; in 1453, for example, the Ottoman Turks conquered Constantinople, while Islam continued to spread through Indonesia and the Philippines. European influence, particularly that of the Dutch, would not have its greatest impact on Indonesia until the 18th and 19th centuries.
Newfound Portuguese expertise in navigation, shipbuilding, and weaponry allowed them to make daring expeditions of exploration and expansion. Starting with the first exploratory expeditions sent from newly conquered Malacca in 1512, the Portuguese were the first Europeans to arrive in Indonesia, seeking to dominate the sources of valuable spices and to extend the Catholic Church's missionary efforts. The Portuguese turned east to Maluku and, through both military conquest and alliance with local rulers, established trading posts, forts, and missions on the islands of Ternate, Ambon, and Solor, among others. The height of Portuguese missionary activity, however, came in the latter half of the 16th century. Ultimately, the Portuguese presence in Indonesia was reduced to Solor, Flores, and Timor in modern-day Nusa Tenggara, following defeat at the hands of indigenous Ternateans and the Dutch in Maluku, and a general failure to maintain control of trade in the region. In comparison with the original Portuguese ambition to dominate Asian trade, their influence on Indonesian culture was small: the romantic "keroncong" guitar ballads; a number of Indonesian words reflecting the role of Portuguese as the "lingua franca" of the archipelago alongside Malay; and many family names in eastern Indonesia such as da Costa, Dias, de Fretes, and Gonsalves. The most significant impacts of the Portuguese arrival were the disruption and disorganisation of the trade network, mostly as a result of their conquest of Malacca, and the first significant planting of Christianity in Indonesia. Christian communities have persisted in eastern Indonesia to the present, which has contributed to a sense of shared interest with Europeans, particularly among the Ambonese.
In 1602, the Dutch parliament awarded the VOC a monopoly on trade and colonial activities in the region at a time before the company controlled any territory in Java. In 1619, the VOC conquered the West Javan city of Jayakarta, where they founded the city of Batavia (present-day Jakarta). The VOC became deeply involved in the internal politics of Java in this period, and fought in a number of wars involving the leaders of Mataram and Banten.
The Dutch followed the Portuguese aspirations, courage, brutality, and strategies but brought better organisation, weapons, ships, and superior financial backing. Although they failed to gain complete control of the Indonesian spice trade, they had much more success than the previous Portuguese efforts. They exploited the factionalisation of the small kingdoms in Java that had replaced Majapahit, establishing a permanent foothold in Java, from which grew a land-based colonial empire which became one of the richest colonial possessions on earth.
By the mid-17th century, Batavia, the headquarters of the VOC in Asia, had become an important trade centre in the region. It had repelled attacks from the Javanese Mataram kingdom. In 1641 the Dutch captured Malacca from the Portuguese, thus weakening the Portuguese position in Asia. The Dutch defeated the Sulawesi city of Makassar in 1667, thus bringing its trade under VOC control. Sumatran ports were also brought under VOC control, and the last of the Portuguese were expelled in 1660. In return for monopoly control over the pepper trade and the expulsion of the British, the Dutch helped the son of the ruler of Banten overthrow his father in 1680. By the 18th century, the VOC had established itself firmly in the Indonesian archipelago, controlling inter-island trade as part of an Asian business that included India, Ceylon, Formosa, and Japan. The VOC also established important bases in ports in Java, Maluku, and parts of Sulawesi, Sumatra, and the Malay Peninsula.
After the fall of the Netherlands to the First French Empire and the dissolution of the Dutch East India Company in 1800, there were profound changes in the European colonial administration of the East Indies. The Company's assets in the East Indies were nationalised as the Dutch colony of the Dutch East Indies. Meanwhile, Europe was devastated by the Napoleonic Wars. In the Netherlands, Napoleon Bonaparte in 1806 oversaw the dissolution of the Batavian Republic, which was replaced by the Kingdom of Holland, a French puppet kingdom ruled by Napoleon's third brother, Louis Bonaparte (Lodewijk Napoleon). The East Indies were treated as a proxy French colony, administered through a Dutch intermediary.
In 1806, King Lodewijk of the Netherlands sent one of his generals, Herman Willem Daendels, to serve as governor-general of the East Indies, based in Java. Daendels was sent to strengthen Javanese defences against a predicted British invasion. Since 1685, the British had had a presence in Bencoolen on the western coast of Sumatra, as well as several posts north of the Malaccan straits. Daendels was responsible for the construction of the Great Post Road across northern Java from Anjer to Panaroecan. The thousand-kilometre road was meant to ease logistics across Java and was completed in only one year, during which thousands of Javanese forced labourers died.
In 1811, Java fell to a British East India Company force under Baron Minto, the governor-general of India. Lord Minto appointed Sir Thomas Stamford Raffles as lieutenant governor of Java. Raffles carried further the administrative centralisation previously initiated by Daendels. He launched military expeditions against local princes to bring them under British rule, such as the assault on the Yogyakarta kraton on 21 June 1812 and the expedition against Sultan Mahmud Badaruddin II of Palembang, during which nearby Bangka Island was also seized. During his administration, a number of ancient monuments in Java were rediscovered, excavated, and systematically catalogued for the first time, the most important being the rediscovery of the Borobudur Buddhist temple in Central Java. An enthusiast of the island's history, Raffles wrote the book History of Java, published in 1817. In 1815, the island of Java was returned to Dutch control following the end of the Napoleonic Wars, under the terms of the Anglo-Dutch Treaty of 1814.
After the VOC was dissolved in 1800 following bankruptcy, and after a short British rule under Thomas Stamford Raffles, the Dutch state took over the VOC possessions in 1816. A Javanese uprising was crushed in the Java War of 1825–1830. After 1830 a system of forced cultivations and indentured labour was introduced on Java, the Cultivation System (in Dutch: "cultuurstelsel"). This system brought the Dutch and their Indonesian allies enormous wealth. The cultivation system tied peasants to their land, forcing them to work in government-owned plantations for 60 days of the year. The system was abolished in a more liberal period after 1870. In 1901 the Dutch adopted what they called the Ethical Policy, which included somewhat increased investment in indigenous education, and modest political reforms.
The Dutch colonialists formed a privileged upper social class of soldiers, administrators, managers, teachers, and pioneers. They lived together with the "natives", but at the top of a rigid social and racial caste system. The Dutch East Indies had two legal classes of citizens; European and indigenous. A third class, Foreign Easterners, was added in 1920.
Upgrading the infrastructure of ports and roads was a high priority for the Dutch, with the goal of modernising the economy, pumping wages into local areas, facilitating commerce, and speeding up military movements. By 1950 Dutch engineers had built and upgraded a road network with 12,000 km of asphalted surface, 41,000 km of metalled road area and 16,000 km of gravel surfaces. In addition, the Dutch built railways, bridges, irrigation systems covering 1.4 million hectares (5,400 sq mi) of rice fields, several harbours, and 140 public drinking-water systems. These Dutch-constructed public works became the economic base of the colonial state; after independence, they became the basis of the Indonesian infrastructure.
For most of the colonial period, Dutch control over its territories in the Indonesian archipelago was tenuous. In some cases, Dutch police and military actions in parts of Indonesia were quite cruel. Recent discussions, for example, of Dutch cruelty in Aceh have encouraged renewed research on these aspects of Dutch rule. It was only in the early 20th century, three centuries after the first Dutch trading post, that the full extent of the colonial territory was established and direct colonial rule exerted across what would become the boundaries of the modern Indonesian state. Portuguese Timor, now East Timor, remained under Portuguese rule until 1975 when it was invaded by Indonesia. The Indonesian government declared the territory an Indonesian province but relinquished it in 1999.
In October 1908, the first nationalist movement was formed, Budi Utomo. On 10 September 1912, the first nationalist mass movement was formed: Sarekat Islam. By December 1912, Sarekat Islam had 93,000 members. The Dutch responded after the First World War with repressive measures. The nationalist leaders came from a small group of young professionals and students, some of whom had been educated in the Netherlands. In the post–World War I era, the Indonesian communists who were associated with the Third International started to usurp the nationalist movement. The repression of the nationalist movement led to many arrests, including Indonesia's first president, Sukarno (1901–70), who was imprisoned for political activities on 29 December 1929. Also arrested was Mohammad Hatta, first Vice-President of Indonesia. Additionally, Sutan Sjahrir, who later became the first Prime Minister of Indonesia, was arrested on this date.
In 1914 the exiled Dutch socialist Henk Sneevliet founded the Indies Social Democratic Association. Initially a small forum of Dutch socialists, it would later evolve into the Communist Party of Indonesia (PKI) in 1924. In the post–World War I era, the Dutch strongly repressed all attempts at change. This repression led to a growth of the PKI. By December 1924, the PKI had a membership of 1,140. One year later, in 1925, the PKI had grown to 3,000 members. From 1926 to 1927, there was a PKI-led revolt against Dutch colonialism and the harsh repression of strikes by urban workers. However, the strikes and the revolt were put down by the Dutch, and some 13,000 nationalist and communist leaders were arrested. Some 4,500 were given prison sentences.
Sukarno was released from prison in December 1931 but was re-arrested on 1 August 1933.
The Japanese invasion and subsequent occupation during World War II ended Dutch rule and encouraged the previously suppressed Indonesian independence movement. In May 1940, early in World War II, the Netherlands was occupied by Nazi Germany. The Dutch East Indies declared a state of siege and in July redirected exports for Japan to the US and Britain. Negotiations with the Japanese aimed at securing supplies of aviation fuel collapsed in June 1941, and the Japanese started their conquest of Southeast Asia in December of that year. That same month, factions from Sumatra sought Japanese assistance for a revolt against the Dutch wartime government. The last Dutch forces were defeated by Japan in March 1942.
In July 1942, Sukarno accepted Japan's offer to rally the public in support of the Japanese war effort. Sukarno and Mohammad Hatta were decorated by the Emperor of Japan in 1943. However, experience of the Japanese occupation of Dutch East Indies varied considerably, depending upon where one lived and one's social position. Many who lived in areas considered important to the war effort experienced torture, sex slavery, arbitrary arrest and execution, and other war crimes. Thousands taken away from Indonesia as war labourers (romusha) suffered or died as a result of ill-treatment and starvation. People of Dutch and mixed Dutch-Indonesian descent were particular targets of the Japanese occupation.
In March 1945, the Japanese established the Investigating Committee for Preparatory Work for Independence (BPUPK) as the initial stage of the establishment of independence for the area under the control of the Japanese 16th Army. At its first meeting in May, Soepomo spoke of national integration and against personal individualism, while Muhammad Yamin suggested that the new nation should claim British Borneo, British Malaya, Portuguese Timor, and all the pre-war territories of the Dutch East Indies. The committee drafted the 1945 Constitution, which remains in force, though now much amended. On 9 August 1945 Sukarno, Hatta, and Radjiman Wediodiningrat were flown to meet Marshal Hisaichi Terauchi in Vietnam. They were told that Japan intended to announce Indonesian independence on 24 August. After the Japanese surrender, however, Sukarno unilaterally proclaimed Indonesian independence on 17 August. A later UN report stated that four million people died in Indonesia as a result of the Japanese occupation.
Under pressure from radical and politicised "pemuda" ('youth') groups, Sukarno and Hatta proclaimed Indonesian independence on 17 August 1945, two days after the Japanese Emperor's surrender in the Pacific. The following day, the Central Indonesian National Committee (KNIP) declared Sukarno President and Hatta Vice-President. Word of the proclamation spread by shortwave and fliers, while the Indonesian wartime military (PETA), youths, and others rallied in support of the new republic, often moving to take over government offices from the Japanese. In December 1946 the United Nations acknowledged that the Netherlands had advised it that the "Netherlands Indies" was a non-self-governing territory (colony), for which the Netherlands had a legal duty to make yearly reports and to assist towards "a full measure of self-government" as required by Article 73 of the Charter of the United Nations.
The Dutch, initially backed by the British, tried to re-establish their rule, and a bitter armed and diplomatic struggle ended in December 1949, when in the face of international pressure, the Dutch formally recognised Indonesian independence.
Dutch efforts to re-establish complete control met resistance. At the end of World War II, a power vacuum arose, and the nationalists often succeeded in seizing the arms of the demoralised Japanese. A period of unrest marked by urban guerrilla warfare, known as the Bersiap, ensued. Groups of Indonesian nationalists armed with improvised weapons (such as bamboo spears) and firearms attacked returning Allied troops. Some 3,500 Europeans were killed and 20,000 went missing, meaning there were more European deaths in Indonesia after the war than during it. After returning to Java, Dutch forces quickly re-occupied the colonial capital of Batavia (now Jakarta), so the city of Yogyakarta in central Java became the capital of the nationalist forces. Negotiations with the nationalists led to two major truce agreements, but disputes about their implementation, and much mutual provocation, led each time to renewed conflict. Within four years the Dutch had recaptured almost the whole of Indonesia, but guerrilla resistance persisted, led on Java by commander Nasution. On 27 December 1949, after four years of sporadic warfare and fierce criticism of the Dutch by the UN, the Netherlands officially recognised Indonesian sovereignty under the federal structure of the United States of Indonesia (RUSI). On 17 August 1950, exactly five years after the proclamation of independence, the last of the federal states were dissolved and Sukarno proclaimed a single unitary Republic of Indonesia.
With the unifying struggle to secure Indonesia's independence over, divisions in Indonesian society began to appear. These included regional differences in customs, religion, the impact of Christianity and Marxism, and fears of Javanese political domination. Following colonial rule, Japanese occupation, and war against the Dutch, the new country suffered from severe poverty, a ruinous economy, low educational and skills levels, and authoritarian traditions. Challenges to the authority of the Republic included the militant "Darul Islam" who waged a guerrilla struggle against the Republic from 1948 to 1962; the declaration of an independent Republic of South Maluku by Ambonese formerly of the Royal Dutch Indies Army; and rebellions in Sumatra and Sulawesi between 1955 and 1961.
In contrast to the 1945 Constitution, the 1950 constitution mandated a parliamentary system of government with an executive responsible to parliament, and stipulated at length constitutional guarantees for human rights, drawing heavily on the 1948 United Nations Universal Declaration of Human Rights. A proliferation of political parties competing for shares of cabinet seats resulted in a rapid turnover of coalition governments, including 17 cabinets between 1945 and 1958. The long-postponed parliamentary elections were held in 1955; although the Indonesian National Party (PNI)—considered Sukarno's party—topped the poll, and the Communist Party of Indonesia (PKI) received strong support, no party garnered more than a quarter of the votes, which resulted in short-lived coalitions.
By 1956, Sukarno was openly criticising parliamentary democracy, stating that it was "based upon inherent conflict" which ran counter to Indonesian notions of harmony as being the natural state of human relationships. Instead, he sought a system based on the traditional village system of discussion and consensus, under the guidance of village elders. He proposed a threefold blend of "nasionalisme" ('nationalism'), "agama" ('religion'), and "komunisme" ('communism') into a co-operative 'Nas-A-Kom' government. This was intended to appease the three main factions in Indonesian politics — the army, Islamic groups, and the communists. With the support of the military, he proclaimed in February 1957 a system of 'Guided Democracy', and proposed a cabinet representing all the political parties of importance (including the PKI). The US tried and failed to secretly overthrow the President, even though Secretary of State Dulles declared before Congress that "we are not interested in the internal affairs of this country."
Sukarno abrogated the 1950 Constitution on 9 July 1959 by a decree dissolving the Constitutional Assembly and restoring the 1945 Constitution. The elected parliament was replaced by one appointed by, and subject to the will of, the President. Another non-elected body, the Supreme Advisory Council, was the main policy development body, while the National Front was set up in September 1960 and presided over by the president to "mobilise the revolutionary forces of the people". Western-style parliamentary democracy was thus finished in Indonesia until the 1999 elections of the "Reformasi" era.
Charismatic Sukarno spoke as a romantic revolutionary, and under his increasingly authoritarian rule, Indonesia moved on a course of stormy nationalism. Sukarno was popularly referred to as "bung" ("older brother"), and he painted himself as a man of the people carrying the aspirations of Indonesia and one who dared take on the West. He instigated a number of large, ideologically driven infrastructure projects and monuments celebrating Indonesia's identity, which were criticised as substitutes for real development in a deteriorating economy.
Western New Guinea had been part of the Dutch East Indies, and Indonesian nationalists had thus claimed it on this basis. Indonesia was able to instigate a diplomatic and military confrontation with the Dutch over the territory following an Indonesian-Soviet arms agreement in 1960. It was, however, United States pressure on the Netherlands that led to an Indonesian takeover in 1963. Also in 1963, Indonesia commenced "Konfrontasi" with the new state of Malaysia. The northern states of Borneo, formerly British Sarawak and Sabah, had wavered in joining Malaysia, whilst Indonesia saw itself as the rightful ruler of Austronesian peoples and supported an unsuccessful revolution attempt in Brunei. Reviving the glories of the Indonesian National Revolution, Sukarno rallied against notions of British imperialism and mounted military offensives along the Indonesia-Malaysia border in Borneo. As the PKI rallied in Jakarta streets in support, the West became increasingly alarmed at Indonesian foreign policy and the United States withdrew its aid to Indonesia.
In social policy, Sukarno's time in office witnessed substantial reforms in health and education, together with the passage of various pro-labour measures. However, Indonesia's economic position deteriorated under Sukarno; by the mid-1960s, the cash-strapped government had to scrap critical public sector subsidies, inflation was at 1,000%, export revenues were shrinking, infrastructure crumbling, and factories were operating at minimal capacity with negligible investment. Severe poverty and hunger were widespread.
Described as the great "dalang" ("puppet master"), Sukarno's position depended on balancing the opposing and increasingly hostile forces of the army and the PKI. Sukarno's anti-imperialist ideology saw Indonesia grow increasingly dependent on the Soviet Union and then on communist China. By 1965, the PKI was the largest communist party in the world outside the Soviet Union or China. Penetrating all levels of government, the party increasingly gained influence at the expense of the army.
On 30 September 1965, six of the most senior generals within the military and other officers were executed in an attempted coup. The insurgents, known later as the 30 September Movement, backed a rival faction of the army and took up positions in the capital, later seizing control of the national radio station. They claimed they were acting against a plot organised by the generals to overthrow Sukarno. Within a few hours, Major General Suharto, commander of the Army Strategic Reserve (Kostrad), mobilised counteraction, and by the evening of 1 October, it was clear that the coup, which had little co-ordination and was largely limited to Jakarta, had failed. Complicated and partisan theories continue to this day over the identity of the attempted coup's organisers and their aims. According to the Indonesian army, the PKI were behind the coup and used disgruntled army officers to carry it out, and this became the official account of Suharto's subsequent New Order administration. Most historians agree that the coup and the surrounding events were not led by a single mastermind controlling all events, and that the full truth will never likely be known.
The PKI was blamed for the coup, and anti-communists, initially following the army's lead, went on a violent anti-communist purge across much of the country. The PKI was effectively destroyed, and the most widely accepted estimates are that between 500,000 and 1 million were killed. The violence was especially brutal in Java and Bali. The PKI was outlawed and possibly more than 1 million of its leaders and affiliates were imprisoned.
Throughout the 1965–66 period, President Sukarno attempted to restore his political position and shift the country back to its pre-October 1965 position but his Guided Democracy balancing act was destroyed with the PKI's demise. Although he remained president, the weakened Sukarno was forced to transfer key political and military powers to General Suharto, who by that time had become head of the armed forces. In March 1967, the Provisional People's Consultative Assembly (MPRS) named General Suharto acting president. Suharto was formally appointed president in March 1968. Sukarno lived under virtual house arrest until his death in 1970.
In the aftermath of Suharto's rise, hundreds of thousands of people were killed or imprisoned by the military and religious groups in a backlash against alleged communist supporters, with direct support from the United States. Suharto's administration is commonly called the "New Order" era. Suharto invited major foreign investment, which produced substantial, if uneven, economic growth. However, Suharto enriched himself and his family through business dealings and widespread corruption.
At the time of independence, the Dutch retained control over the western half of New Guinea (also known as West Irian), and permitted steps towards self-government and a declaration of independence on 1 December 1961. After negotiations with the Dutch on the incorporation of the territory into Indonesia failed, an Indonesian paratroop invasion on 18 December preceded armed clashes between Indonesian and Dutch troops in 1961 and 1962. In 1962 the United States pressured the Netherlands into secret talks with Indonesia which in August 1962 produced the New York Agreement, and Indonesia assumed administrative responsibility for West Irian on 1 May 1963.
Rejecting UN supervision, the Indonesian government under Suharto decided to settle the question of West Irian, the former Dutch New Guinea, in its favour. Rather than a referendum of all residents of West Irian as had been agreed under Sukarno, an 'Act of Free Choice' was conducted in 1969 in which 1,025 Papuan representatives of local councils were selected by the Indonesians. Warned to vote in favour of Indonesian integration, the group voted unanimously for integration with Indonesia. A subsequent UN General Assembly resolution confirmed the transfer of sovereignty to Indonesia.
West Irian was renamed Irian Jaya ('glorious Irian') in 1973. Opposition to Indonesian administration of Irian Jaya (later known as Papua) gave rise to guerrilla activity in the years following Jakarta's assumption of control.
In 1975, the Carnation Revolution in Portugal caused authorities there to announce plans for decolonisation of Portuguese Timor, the eastern half of the island of Timor whose western half was a part of the Indonesian province of East Nusa Tenggara. In the East Timorese elections held in 1975, Fretilin, a left-leaning party, and UDT, aligned with the local elite, emerged as the largest parties, having previously formed an alliance to campaign for independence from Portugal. Apodeti, a party advocating integration with Indonesia, enjoyed little popular support.
Indonesia alleged that Fretilin was communist, and feared that an independent East Timor would influence separatism in the archipelago. Indonesian military intelligence influenced the break-up of the alliance between Fretilin and UDT, which led to a coup by the UDT on 11 August 1975 and the start of a month-long civil war. During this time, the Portuguese government effectively abandoned the territory and did not resume the decolonisation process. On 28 November, Fretilin unilaterally declared independence, and proclaimed the 'Democratic Republic of East Timor'. Nine days later, on 7 December, Indonesia invaded East Timor, eventually annexing the tiny country of (then) 680,000 people. Indonesia was supported materially and diplomatically by the United States, Australia, and the United Kingdom, who regarded Indonesia as an anti-communist ally.
Following the 1998 resignation of Suharto, the people of East Timor voted overwhelmingly for independence in a UN-sponsored referendum held on 30 August 1999. About 99% of the eligible population participated; more than three quarters chose independence despite months of attacks by the Indonesian military and its militia. After the result was announced, elements of the Indonesian military and its militia retaliated by killing approximately 2,000 East Timorese, displacing two-thirds of the population, raping hundreds of women and girls, and destroying much of the country's infrastructure. In October 1999, the Indonesian parliament (MPR) revoked the decree that annexed East Timor, and the United Nations Transitional Administration in East Timor (UNTAET) assumed responsibility for governing East Timor until it officially became an independent state in May 2002.
The Transmigration program ("Transmigrasi") was a National Government initiative to move landless people from densely populated areas of Indonesia (such as Java and Bali) to less populous areas of the country including Papua, Kalimantan, Sumatra, and Sulawesi. The stated purpose of this program was to reduce the considerable poverty and overpopulation on Java, to provide opportunities for hard-working poor people, and to provide a workforce to better utilise the resources of the outer islands. The program, however, has been controversial, with critics accusing the Indonesian Government of trying to use these migrants to reduce the proportion of native populations in destination areas to weaken separatist movements. The program has often been cited as a major and ongoing factor in controversies and even conflict and violence between settlers and indigenous populations.
In 1996 Suharto undertook efforts to pre-empt a challenge to the New Order government. The Indonesian Democratic Party (PDI), a legal party that had traditionally propped up the regime, had changed direction and began to assert its independence. Suharto fostered a split over the leadership of PDI, backing a co-opted faction loyal to deputy speaker of the People's Representative Council Suryadi against a faction loyal to Megawati Sukarnoputri, the daughter of Sukarno and the PDI's chairperson.
After the Suryadi faction announced a party congress to sack Megawati would be held in Medan on 20–22 June, Megawati proclaimed that her supporters would hold demonstrations in protest. The Suryadi faction went through with its sacking of Megawati, and demonstrations broke out across Indonesia. This led to several confrontations on the streets between protesters and security forces, and recriminations over the violence. The protests culminated in the military allowing Megawati's supporters to take over PDI headquarters in Jakarta, with a pledge of no further demonstrations.
Suharto allowed the occupation of PDI headquarters to go on for almost a month, as attention was also focused on Jakarta due to a set of high-profile ASEAN meetings scheduled to take place there. Capitalising on this, Megawati supporters organised "democracy forums" with several speakers at the site. On 26 July, officers of the military, Suryadi, and Suharto openly aired their disgust with the forums.
On 27 July, police, soldiers, and persons claiming to be Suryadi supporters stormed the headquarters. Several Megawati supporters were killed, and over two hundred people were arrested and tried under the Anti-Subversion and Hate-Spreading laws. The day would become known as "Black Saturday" and mark the beginning of a renewed crackdown by the New Order government against supporters of democracy, now called the "Reformasi" or Reform movement.
In 1997 and 1998, Indonesia was the country hardest hit by the 1997 Asian financial crisis, which had dire consequences for the Indonesian economy and society, as well as Suharto's presidency. At the same time, the country suffered a severe drought and some of the largest forest fires in history burned in Kalimantan and Sumatra. The rupiah, the Indonesian currency, took a sharp dive in value. Suharto came under scrutiny from international lending institutions, chiefly the World Bank, International Monetary Fund (IMF) and the United States, over longtime embezzlement of funds and some protectionist policies. In December, Suharto's government signed a letter of intent to the IMF, pledging to enact austerity measures, including cuts to public services and removal of subsidies, in return for aid from the IMF and other donors. Prices for goods such as kerosene and rice, as well as fees for public services including education, rose dramatically. The effects were exacerbated by widespread corruption. The austerity measures approved by Suharto had started to erode domestic confidence with the New Order and led to popular protests.
Suharto stood for re-election by parliament for the seventh time in March 1998, justifying it on the grounds of the necessity of his leadership during the crisis. The parliament approved a new term. This sparked protests and riots throughout the country, now termed the Indonesian 1998 Revolution. Dissent within the ranks of his own Golkar party and the military finally weakened Suharto, and on 21 May he stood down from power. He was replaced by his deputy, Vice President B.J. Habibie.
President Habibie quickly assembled a cabinet. One of its main tasks was to re-establish International Monetary Fund and donor community support for an economic stabilisation program. He moved quickly to release political prisoners and lift some controls on freedom of speech and association. Elections for the national, provincial, and sub-provincial parliaments were held on 7 June 1999. In the elections for the national parliament, the Indonesian Democratic Party of Struggle (PDI-P, led by Sukarno's daughter Megawati Sukarnoputri) won 34% of the vote; Golkar (Suharto's party, formerly the only legal party of government) 22%; United Development Party (PPP, led by Hamzah Haz) 12%; and National Awakening Party (PKB, led by Abdurrahman Wahid) 10%.
The May 1998 riots of Indonesia, also known as the 1998 tragedy or simply the 1998 event, were incidents of mass violence, demonstrations, and civil unrest of a racial nature that occurred throughout Indonesia.
In October 1999, the People's Consultative Assembly (MPR), which consists of the 500-member Parliament plus 200 appointed members, elected Abdurrahman Wahid, commonly referred to as "Gus Dur", as President, and Megawati Sukarnoputri as Vice-President, both for five-year terms. Wahid named his first Cabinet in early November 1999 and a reshuffled, second Cabinet in August 2000. President Wahid's government continued to pursue democratisation and to encourage renewed economic growth under challenging conditions. In addition to continuing economic malaise, his government faced regional, interethnic, and interreligious conflict, particularly in Aceh, the Maluku Islands, and Irian Jaya. In West Timor, the problems of displaced East Timorese and violence by pro-Indonesian East Timorese militias caused considerable humanitarian and social problems. An increasingly assertive Parliament frequently challenged President Wahid's policies and prerogatives, contributing to a lively and sometimes rancorous national political debate.
During the People's Consultative Assembly's first annual session in August 2000, President Wahid gave an account of his government's performance. On 29 January 2001 thousands of student protesters stormed parliament grounds and demanded that President Abdurrahman Wahid resign due to alleged involvement in corruption scandals. Under pressure from the Assembly to improve management and co-ordination within the government, he issued a presidential decree giving Vice-President Megawati control over the day-to-day administration of government. Soon after, Megawati Sukarnoputri assumed the presidency on 23 July. In 2004, Susilo Bambang Yudhoyono won Indonesia's first direct presidential election and in 2009 he was elected to a second term.
In the 2014 presidential election Joko Widodo was elected president; he is from the PDI-P. Formerly Governor of Jakarta, he is the first Indonesian president without a high-ranking political or military background. However, his opponent Prabowo Subianto disputed the outcome and withdrew from the race before the count was completed.
As a multi-ethnic, multi-cultural democratic country with a majority moderate Muslim population, Indonesia faces the challenge of dealing with terrorism linked to the global militant Islamic movement. Jemaah Islamiyah (JI), a militant Islamic organisation that aspired to establish a "Daulah Islamiyah" encompassing the whole of Southeast Asia, including Indonesia, has been responsible for a series of terrorist attacks in the country. This organisation, linked to Al-Qaeda, carried out the Bali bombings in 2002 and 2005, as well as the Jakarta bombings in 2003, 2004, and 2009. The Indonesian government and authorities have since tried to crack down on terrorist cells in Indonesia.
On 14 January 2016, Indonesia suffered a terrorist attack in Jakarta. Suicide bombers and gunmen initiated the attack, which resulted in the deaths of seven people: an Indonesian, a Canadian, and the attackers themselves. Twenty people were wounded in the attack. The Islamic State claimed responsibility for the assault.
On 26 December 2004, a massive earthquake and tsunami devastated parts of northern Sumatra, particularly Aceh. Partly as a result of the need for co-operation and peace during the recovery from the tsunami in Aceh, peace talks between the Indonesian government and the Free Aceh Movement (GAM) were restarted. Accords signed in Helsinki created a framework for military de-escalation in which the government has reduced its military presence, as members of GAM's armed wing decommission their weapons and apply for amnesty. The agreement also allows for Acehnese nationalist forces to form their own party, and other autonomy measures.
Since 1997, Indonesia has been struggling to contain forest fires, especially on the islands of Sumatra and Kalimantan. Haze occurs annually during the dry season and is largely caused by illegal agricultural fires from slash-and-burn practices, especially in the provinces of South Sumatra and Riau on Sumatra, and in Kalimantan on Indonesian Borneo. The haze of 1997 was one of the most severe; dense hazes occurred again in 2005, 2006, 2009, and 2013, and the worst was in 2015, when dozens of Indonesians died of respiratory illnesses and in road accidents caused by poor visibility, and another 10 people were killed by smog from forest and land fires.
In September 2014, Indonesia ratified the ASEAN Agreement on Transboundary Haze Pollution, becoming the last ASEAN country to do so.
Geography of Indonesia
Indonesia is an archipelagic country located in Southeast Asia, lying between the Indian Ocean and the Pacific Ocean. It occupies a strategic location astride major sea lanes connecting East Asia, South Asia, and Oceania. Indonesia is the largest archipelago in the world. Indonesia's various regional cultures have been shaped—although not specifically determined—by centuries of complex interactions with its physical environment.
Indonesia is an archipelagic country stretching great distances from east to west and from north to south. According to a geospatial survey conducted between 2007 and 2010 by the National Coordinating Agency for Surveys and Mapping (Bakosurtanal), Indonesia has 13,466 islands, while an earlier survey conducted in 2002 by the National Institute of Aeronautics and Space (LAPAN) counted 18,307 islands, and the CIA World Factbook lists 17,508. The discrepancy between the surveys is likely caused by differences in survey method, with the earlier counts including tidal islands, sandy cays, and rocky reefs that surface during low tide and submerge during high tide. According to estimates made by the government of Indonesia, there are 8,844 named islands, of which 922 are permanently inhabited. The country comprises five main islands: Sumatra, Java, Borneo (known as "Kalimantan" in Indonesia), Sulawesi, and New Guinea; two major island groups (Nusa Tenggara and the Maluku Islands); and sixty smaller island groups. Four of the islands are shared with other countries: Borneo is shared with Malaysia and Brunei; Sebatik, located off the northeastern coast of Kalimantan, is shared with Malaysia; Timor is shared with East Timor; and New Guinea is shared with Papua New Guinea.
Indonesia's total land area, including inland seas (straits, bays, and other bodies of water), makes it the largest island country in the world. The additional surrounding sea areas bring Indonesia's generally recognised territory (land and sea) to about 5 million km2, and the exclusive economic zone claimed by the government brings the total area to about 7.9 million km2.
Indonesia is a transcontinental country whose territory consists of islands geologically considered part of either Asia or Australia. During the Pleistocene, the Greater Sunda Islands were connected to the Asian mainland, while New Guinea was connected to Australia. The Karimata Strait, the Java Sea, and the Arafura Sea were formed as sea levels rose at the end of the Pleistocene.
The main islands of Sumatra, Java, Madura, and Kalimantan lie on the Sunda Plate, and geographers have conventionally grouped them (along with Sulawesi) as the Greater Sunda Islands. At Indonesia's eastern extremity is western New Guinea, which lies on the Australian Plate. Sea depths over the Sunda and Sahul shelves are relatively shallow. Between these two shelves lie Sulawesi, Nusa Tenggara (also known as the Lesser Sunda Islands), and the Maluku Islands (or the Moluccas), which form a second island group surrounded by deep seas. The term "Outer Islands" is used inconsistently by various writers, but it is usually taken to mean those islands other than Java and Madura.
Sulawesi is an island lying on three separate plates: the Banda Sea Plate, the Molucca Sea Plate, and the Sunda Plate. Seismic and volcanic activity is high in its northeastern part, evidenced by the formation of volcanoes in North Sulawesi and island arcs such as the Sangihe and Talaud Islands, southwest of the Philippine Trench.
Nusa Tenggara or Lesser Sunda Islands consists of two strings of islands stretching eastward from Bali toward southern Maluku. The inner arc of Nusa Tenggara is a continuation of the Alpide belt chain of mountains and volcanoes extending from Sumatra through Java, Bali, and Flores, and trailing off in the volcanic Banda Islands, which along with the Kai Islands and the Tanimbar Islands and other small islands in the Banda Sea are typical examples of the Wallacea mixture of Asian and Australasian plant and animal life. The outer arc of Nusa Tenggara is a geological extension of the chain of islands west of Sumatra that includes Nias, Mentawai, and Enggano. This chain resurfaces in Nusa Tenggara in the ruggedly mountainous islands of Sumba and Timor.
The Maluku Islands (or Moluccas) are geologically among the most complex of the Indonesian islands, lying at the meeting point of four different tectonic plates. They are located in the northeast sector of the archipelago, bounded by the Philippine Sea to the north, Papua to the east, and Nusa Tenggara to the southwest. The largest of these islands include Halmahera, Seram, and Buru, all of which rise steeply out of very deep seas and have unique Wallacea vegetation. This abrupt relief pattern from sea to high mountains means that there are very few level coastal plains. To the south lies the Banda Sea. The convergence between the Banda Sea Plate and the Australian Plate created a chain of volcanic islands called the Banda Arc. The sea also contains the Weber Deep, one of the deepest points in Indonesia.
Geomorphologists believe that the island of New Guinea is part of the Australian continent: both lie on the Sahul Shelf and were once joined via a land bridge during the Last Glacial Period. The tectonic movement of the Australian Plate created towering, snowcapped mountain peaks lining the island's central east-west spine and hot, humid alluvial plains along the coasts. The New Guinea Highlands run east to west along the island, forming a mountainous spine between its northern and southern portions. Because of this tectonic movement, New Guinea experiences many earthquakes and tsunamis, especially in its northern and western parts.
Most of the larger islands are mountainous, with high peaks in Sumatra, Java, Bali, Lombok, Sulawesi, and Seram. The country's tallest mountains are located in the Jayawijaya Mountains and the Sudirman Range in Papua; the highest peak, Puncak Jaya, is located in the Sudirman Range. A string of volcanoes stretches from Sumatra to Nusa Tenggara, then loops around through the Banda Islands of Maluku to northeastern Sulawesi. Of the 400 volcanoes, approximately 150 are active. Two of the most violent volcanic eruptions in modern times occurred in Indonesia: in 1815 Mount Tambora in Sumbawa erupted, killing 92,000, and in 1883 Krakatau erupted, killing 36,000. While the volcanic ash produced by eruptions benefits the fertility of the surrounding soils, it also makes agricultural conditions unpredictable in some areas.

Indonesia has relatively high tectonic and volcanic activity. It lies on the convergence of the Eurasian, Indo-Australian, Pacific, and Philippine Sea Plates. The Sunda megathrust is a 5,500 km long fault located off the southern coasts of Sumatra, Java, and the Lesser Sunda Islands, where the Indo-Australian Plate is thrusting northeastward and subducting beneath the Sunda Plate. Tectonic movement along this fault is responsible for the creation of the Sunda Trench and the mountain ranges across Sumatra, Java, and the Lesser Sunda Islands. Many great earthquakes have occurred in the vicinity of the fault, such as the 2004 Indian Ocean earthquake. Mount Merapi, located in the Java portion of the megathrust, is the most active volcano in Indonesia and is designated as one of the world's Decade Volcanoes due to the hazard it poses to the surrounding populated areas.
The northern part of Sulawesi and the Maluku Islands lie on the convergence of the Sunda Plate and the Molucca Sea Plate, making the area an active tectonic region with volcanic chains such as the Sangihe and Talaud Islands. Northern Maluku and western New Guinea are located on the convergence of the Bird's Head, Philippine Sea, and Caroline Plates. This is also a seismically active region, with the 7.6 Mw 2009 Papua earthquakes being the most recent great earthquakes in the region to date.
Borneo is the third largest island in the world and the native vegetation was mostly Borneo lowland rain forests although much of this has been cleared with wildlife retreating to the Borneo montane rain forests inland. The islands of North Maluku are the original Spice Islands, a distinct rainforest ecoregion. A number of islands off the coast of New Guinea have their own distinctive biogeographic features, including the limestone islands of Biak, in the entrance to the large Cenderawasih Bay at the northwest end of the island.
Indonesia is divided into three time zones: Western Indonesian Time (UTC+7), Central Indonesian Time (UTC+8), and Eastern Indonesian Time (UTC+9).
Lying along the equator, Indonesia's climate tends to be relatively even year-round. Indonesia has two seasons—a wet season and a dry season—with no extremes of summer or winter. For most of Indonesia, the dry season falls between May and October, while the wet season falls between November and April.
Some regions, such as Kalimantan and Sumatra, experience only slight differences in rainfall and temperature between the seasons, whereas others, such as Nusa Tenggara, experience far more pronounced differences with droughts in the dry season, and floods in the wet. Rainfall in Indonesia is plentiful, particularly in west Sumatra, northwest Kalimantan, west Java, and western New Guinea.
Parts of Sulawesi and some islands closer to Australia, such as Sumba and Timor, are drier, but these are exceptions. The almost uniformly warm waters that make up 81% of Indonesia's area ensure that temperatures on land remain fairly constant: warmest on the coastal plains, cooler in the inland and mountain areas, and cooler still in the higher mountain regions. The area's relative humidity ranges between 70 and 90%.
Winds are moderate and generally predictable, with monsoons usually blowing in from the south and east from June through October and from the northwest from November through March. Typhoons and large-scale storms pose little hazard to mariners in Indonesian waters; the major danger comes from swift currents in channels, such as the Lombok and Sape straits.
Indonesia's climate is almost entirely tropical, dominated by the tropical rainforest climate found in every major island of Indonesia, followed by the tropical monsoon climate that predominantly lies along Java's coastal north, Sulawesi's coastal south and east, and Bali, and finally the tropical savanna climate, found in isolated locations of Central Java, lowland East Java, coastal southern Papua and smaller islands to the east of Lombok.
However, cooler climate types do exist in mountainous regions of Indonesia 1,300–1,500 metres above sea level. The oceanic climate (Köppen "Cfb") prevails in highland areas with fairly uniform precipitation year-round, adjacent to rainforest climates, while the subtropical highland climate (Köppen "Cwb") exists in highland areas with a more pronounced dry season, adjacent to tropical monsoon and savanna climates.
Above 3,000 metres, cold subpolar climates dominate, and frost and occasional snow become more commonplace. The subpolar oceanic climate (Köppen "Cfc"), existing between 3,000 and 3,500 metres, can be found on the mountain slopes of Indonesia's highest peaks and serves as a transition between oceanic and tundra climates. Tundra climates (Köppen "ET") are found anywhere above 3,500 metres on the highest peaks of Indonesia, including the permanently snow-capped peaks in Papua. In this climate regime, average monthly temperatures are all below 10 °C and monthly precipitation is uniform.
Indonesia's high population and rapid industrialisation present serious environmental issues, which are often given a lower priority due to high poverty levels and weak, under-resourced governance. Issues include large-scale deforestation (much of it illegal) and related wildfires causing heavy smog over parts of western Indonesia, Malaysia and Singapore; over-exploitation of marine resources; and environmental problems associated with rapid urbanisation and economic development, including air pollution, traffic congestion, garbage management, and reliable water and waste water services.
Deforestation and the destruction of peatlands make Indonesia the world's third largest emitter of greenhouse gases. Habitat destruction threatens the survival of indigenous and endemic species, including 140 species of mammals identified by the World Conservation Union (IUCN) as threatened, and 15 identified as critically endangered, including the Sumatran Orangutan.
In 1970, 15% of Indonesians lived in cities, compared to over 30% today, and this increases pressure on the urban environment. Industrial pollution is increasing, particularly in Java, and the increasing affluence of the growing middle class drives a rapid increase in the number of motor vehicles and associated emissions. Garbage and waste water services are being placed under increasing pressure. Reliance on septic systems or effluent disposal in open canals and river systems remains the norm and is a major polluter of water resources. Very few Indonesians have access to safe drinking water, so most must boil water before use.
The geographical resources of the Indonesian archipelago have been exploited in ways that fall into consistent social and historical patterns. One cultural pattern consists of the formerly Indianized, rice-growing peasants in the valleys and plains of Sumatra, Java, and Bali; another cultural complex is composed of the largely Islamic coastal commercial sector; a third, more marginal sector consists of the upland forest farming communities which exist by means of subsistence swidden agriculture. To some degree, these patterns can be linked to the geographical resources themselves, with abundant shoreline, generally calm seas, and steady winds favouring the use of sailing vessels, and fertile valleys and plains—at least in the Greater Sunda Islands—permitting irrigated rice farming. The heavily forested, mountainous interior hinders overland communication by road or river, but fosters slash-and-burn agriculture.
Area:
"total land area:" 1,904,569 km2 ("land:" 1,811,569 km2 (699450 mi2),
"inland water:" 93,000 km2) (35,907 mi2)
Area - comparative:
Land boundaries:
Coastline:
Maritime claims: measured from claimed archipelagic baselines
"territorial sea:"
"exclusive economic zone:" with
Elevation extremes:
"lowest point:" Sea level at 0 m (sea surface level); southern portion of the Philippine Trench, east of Miangas at "highest point:" Puncak Jaya (also known as "Carstensz Pyramid") 4,884 m
Land use:
"arable land:" 12.97%
"permanent crops:" 12.14%
"other:" 74.88% (2013)
Irrigated land: 67,220 km2 (25,953 mi2) (2005)
Total renewable water resources: 2,019 km3 (484 mi3) (2011)
Freshwater withdrawal (domestic/industrial/agricultural):
"total:" 113.3 km3/yr (11%/19%/71%)
"per capita:" 517.3 m3/yr (2005)
Natural resources: coal, petroleum, natural gas, tin, nickel, timber, bauxite, copper, fertile soils, gold, silver | https://en.wikipedia.org/wiki?curid=14644 |
Demographics of Indonesia
The population of Indonesia was 237.64 million according to the 2010 national census, and it was estimated to reach 255.4 million in 2015. Fifty-eight per cent live on the island of Java, the world's most populous island.
Despite a fairly effective family planning program that has been in place since 1967, Indonesia's population growth was 1.49% for the decade ending in 2010. At that rate, Indonesia's population is projected to surpass the present population of the United States. Some argue that the 1967 family planning program should be revitalised to avoid Indonesia becoming the world's third most populous country, but the aim has been criticised by religious groups who believe that family planning goes against religious teachings.
Indonesia has a relatively young population compared to Western nations, though it is ageing as the country's birth rate has slowed and its life expectancy has increased. The median age was 30.2 years in 2017.
Indonesia includes numerous ethnic, cultural and linguistic groups, some of which are related to each other. Since independence, Indonesian (a form of Malay and the official national language) has been the language of most written communication, education, government, and business. Local ethnic languages, however, remain the first language of most Indonesians and are still important.
Total Fertility Rate (TFR) (Wanted Fertility Rate) and Crude Birth Rate (CBR):
Total fertility rate (TFR) and population over age 60 by region as of 2010:
Source: "UN World Population Prospects"
There are over 300 ethnic groups in Indonesia; 95% of those are of Native Indonesian ancestry. Javanese is the largest group with 100 million people (42%), followed by Sundanese, who number nearly 40 million (15%).
Indonesia is the world's most populous Muslim-majority nation; 87.18% of Indonesians declared themselves Muslim in the 2010 census. 9.87% of the population adhered to Christianity (of which more than 70% were Protestant), 1.69% were Hindu, 0.72% Buddhist, and 0.56% followed other faiths. Most Indonesian Hindus are Balinese, and most Buddhists in modern-day Indonesia are Chinese.
Indonesian is the official language, but there are many different languages native to Indonesia. According to Ethnologue, there are currently 737 living languages spoken in Indonesia, the most widely spoken being Javanese.
Some Chinese varieties, most prominently Min Nan, are also spoken. The public use of Chinese, especially Chinese characters, was officially discouraged between 1966 and 1998.
"definition:" age 15 and over can read and write
"total population:" 92.81%
"male:" 95.5%
"female:" 90.4% (2011 est.)
Education is free in state schools and is compulsory for children through grade 12. Although about 92% of eligible children are enrolled in primary school, a much smaller percentage attends full-time. About 44% of secondary school-age children attend junior high school; some others of this age group attend vocational schools.
The following demographic statistics are from the CIA World Factbook, unless otherwise indicated.
Age structure
Median age
Birth rate
Death rate
Population growth rate
Urbanization
Sex ratio
Infant mortality rate
Life expectancy at birth
HIV/AIDS
Obesity - adult prevalence rate
Children under the age of 5 years underweight
Nationality
Religions
Languages
School life expectancy (primary to tertiary education)
Education expenditures | https://en.wikipedia.org/wiki?curid=14645 |
Politics of Indonesia
The politics of Indonesia take place in the framework of a presidential representative democratic republic with a multi-party system, in which the President of Indonesia is both head of state and head of government. Executive power is exercised by the government. Legislative power is vested in both the government and the two People's Representative Councils. The judiciary is independent of the executive and the legislature.
The 1945 constitution provided for a limited separation of executive, legislative and judicial power. The governmental system has been described as "presidential with parliamentary characteristics". Following the Indonesian riots of May 1998 and the resignation of President Suharto, several political reforms were set in motion via amendments to the Constitution of Indonesia, which resulted in changes to all branches of government.
An era of Liberal Democracy in Indonesia began on 17 August 1950, following the dissolution of the federal United States of Indonesia less than a year after its formation, and ended with the imposition of martial law and President Sukarno's decree introducing Guided Democracy on 5 July 1959. It saw a number of important events, including the 1955 Bandung Conference, Indonesia's first general and Constitutional Assembly elections, and an extended period of political instability, with no cabinet lasting as long as two years.
From 1957, Guided Democracy was the political system in place until the New Order began in 1966. It was the brainchild of President Sukarno, and was an attempt to bring about political stability. He believed that Western-style democracy was inappropriate for Indonesia's situation. Instead, he sought a system based on the traditional village system of discussion and consensus, which occurred under the guidance of village elders.
The transition to the "New Order" in the mid-1960s ousted Sukarno after 22 years in the position. One of the most tumultuous periods in the country's modern history, it was the commencement of Suharto's three-decade presidency. Described as the great "dhalang" ("puppet master"), Sukarno drew power from balancing the opposing and increasingly antagonistic forces of the army and the Communist Party of Indonesia (PKI).
By 1965, the PKI had extensively penetrated all levels of government and gained influence at the expense of the army. On 30 September 1965, six of the military's most senior officers were killed in an action (generally labelled an "attempted coup") by the so-called 30 September Movement, a group from within the armed forces. Within a few hours, Major General Suharto mobilised forces under his command and took control of Jakarta. Anti-communists, initially following the army's lead, went on a violent purge of communists throughout the country, killing an estimated half million people and destroying the PKI, which was officially blamed for the crisis.
The politically weakened Sukarno was forced to transfer key political and military powers to General Suharto, who had become head of the armed forces. In March 1967, the Provisional People's Consultative Assembly (MPRS) named General Suharto acting president. He was formally appointed president one year later. Sukarno lived under virtual house arrest until his death in 1970. In contrast to the stormy nationalism, revolutionary rhetoric, and economic failure that characterised the early 1960s under the left-leaning Sukarno, Suharto's pro-Western "New Order" stabilised the economy but continued with the official state philosophy of "Pancasila".
The New Order is the term coined by President Suharto to characterise his regime as he came to power in 1966. He used this term to contrast his rule with that of his predecessor, Sukarno (dubbed the "Old Order", or "Orde Lama"). The term "New Order" has in more recent times become synonymous with the Suharto years (1966–1998).
Immediately following the attempted coup in 1965, the political situation was uncertain, but the New Order found much popular support from groups wanting a separation from Indonesia's problems since its independence. The 'generation of 66' ("Angkatan 66") epitomised talk of a new group of young leaders and new intellectual thought. Following communal and political conflicts, and economic collapse and social breakdown of the late 1950s through to the mid-1960s, the New Order was committed to achieving and maintaining political order, economic development, and the removal of mass participation in the political process. The features of the New Order established from the late 1960s were thus a strong political role for the military, the bureaucratisation and corporatisation of political and societal organisations, and selective but effective repression of opponents. Strident anti-communism remained a hallmark of the regime for its subsequent 32 years.
Within a few years, however, many of its original allies had become indifferent or averse to the New Order, which comprised a military faction supported by a narrow civilian group. Among much of the pro-democracy movement which forced Suharto to resign in 1998 and then gained power, the term "New Order" has come to be used pejoratively. It is frequently employed to describe figures who were either tied to the New Order or who upheld the practices of his authoritarian regime, such as corruption, collusion and nepotism (widely known by the acronym KKN: "korupsi", "kolusi", "nepotisme").
The Post-Suharto era began with the fall of Suharto in 1998, since which Indonesia has been in a period of transition known as "Reformasi" (English: "Reform"). This period has seen a more open and liberal political-social environment.
A process of constitutional reform lasted from 1999 to 2002, with four amendments producing major changes. Among these are term limits of up to two five-year terms for the President and Vice-President, and measures to institute checks and balances. The highest state institution is the People's Consultative Assembly (MPR), whose functions previously included electing the president and vice-president (since 2004 the president has been elected directly by the people), establishing broad guidelines of state policy, and amending the constitution. The 695-member MPR includes all 550 members of the People's Representative Council (DPR) plus 130 members of the Regional Representative Council (DPD) elected by the 26 provincial parliaments, and 65 appointed members from societal groups.
The DPR, which is the premier legislative institution, originally included 462 members elected through a mixed proportional/district representational system and thirty-eight appointed members of the Indonesian Armed Forces (TNI) and police (POLRI). TNI/POLRI representation in the DPR and MPR ended in 2004. Societal group representation in the MPR was eliminated in 2004 through further constitutional change. Having served as rubber-stamp bodies in the past, the DPR and MPR have gained considerable power and are increasingly assertive in oversight of the executive branch. Under constitutional changes in 2004, the MPR became a bicameral legislature with the creation of the DPD, in which each province is represented by four members, although its legislative powers are more limited than those of the DPR. Through his or her appointed cabinet, the president retains the authority to conduct the administration of the government.
A general election in June 1999 produced the first freely elected national, provincial and regional parliaments in over 40 years. In October 1999, the MPR elected a compromise candidate, Abdurrahman Wahid, as the country's fourth president, and Megawati Sukarnoputri—a daughter of Sukarno—as the vice-president. Megawati's PDI-P party had won the largest share of the vote (34%) in the general election, while Golkar, the dominant party during the New Order, came in second (22%). Several other, mostly Islamic parties won shares large enough to be seated in the DPR. Further democratic elections took place in 2004, 2009 and 2014.
The executive branch of Indonesia is headed by a president, who is head of government and head of state. The president is elected by general election and can serve up to two five-year terms if reelected. The executive branch also includes a vice-president and a cabinet. All bills need joint approval between the executive and the legislature to become law, meaning the president has veto power over all legislation. The president also has the power to issue presidential decrees that have policy effects, and they are also in charge of Indonesia's international relationships, although they require legislative approval for treaties. Prior to 2004, the president was chosen by the MPR, but the president is currently selected through national election. The last election was held in April 2019, and incumbent Joko Widodo emerged as the winner.
The MPR is the legislative branch of Indonesia's political system. The MPR is composed of two houses: the DPR, commonly called the People's Representative Council, and the DPD, the Regional Representative Council. The 575 DPR parliamentarians are elected through multi-member electoral districts, whereas four DPD parliamentarians are elected in each of Indonesia's 34 provinces. The DPR holds most of the legislative power, because it has the sole power to pass laws. The DPD acts as a supplementary body to the DPR; it can propose bills, offer its opinion and participate in discussions, but it has no legal power. The MPR itself has powers beyond those given to the individual houses. It can amend the constitution, inaugurate the president and conduct impeachment procedures. When the MPR acts in this function, it does so by simply combining the members of the two houses.
The General Elections Commission ("KPU") is the body responsible for running both parliamentary and presidential elections. Article 22E(5) of the Constitution rules that the KPU is national, permanent, and independent. Prior to the 2004 elections, the KPU was made up of members who were also members of political parties. However, members of the KPU must now be non-partisan.
Both the Supreme Court of Indonesia and the Constitutional Court ("Mahkamah Konstitusi") sit at the highest level of the judicial branch. The Constitutional Court hears disputes concerning the legality of laws, general elections, the dissolution of political parties, and the scope of authority of state institutions. It has 9 judges, appointed by the DPR, the President and the Supreme Court. The Supreme Court of Indonesia hears final cassation appeals and conducts case reviews. It has 51 judges divided into 8 chambers. Its judges are nominated by the Judicial Commission of Indonesia and appointed by the President. Most civil disputes appear before the State Court ("Pengadilan Negeri"); appeals are heard before the High Court ("Pengadilan Tinggi"). Other courts include the Commercial Court, which handles bankruptcy and insolvency; the State Administrative Court ("Pengadilan Tata Usaha Negara"), which hears administrative law cases against the government; and the Religious Court ("Pengadilan Agama"), which deals with codified Islamic Law ("sharia") cases. Additionally, the Judicial Commission ("Komisi Yudisial") monitors the performance of judges.
During the regime of President Suharto, Indonesia built strong relations with the United States and had difficult relations with the People's Republic of China, owing to Indonesia's anti-communist policies and domestic tensions with the Chinese community. It received international denunciation for its annexation of East Timor in 1976. Indonesia is a founding member of the Association of Southeast Asian Nations (ASEAN), and thereby a member of both ASEAN+3 and the East Asia Summit.
Since the 1980s, Indonesia has worked to develop close political and economic ties between Southeast Asian countries, and is also influential in the Organisation of Islamic Cooperation. Indonesia was heavily criticised between 1975 and 1999 for allegedly suppressing human rights in East Timor, and for supporting violence against the East Timorese following the latter's secession and independence in 1999. Since 2001, the government of Indonesia has co-operated with the US in cracking down on Islamic fundamentalism and terrorist groups. | https://en.wikipedia.org/wiki?curid=14646 |
Economy of Indonesia
The economy of Indonesia is the largest in Southeast Asia and is one of the emerging market economies of the world. Indonesia is a member of the G20 and is classified as a newly industrialised country. It is the 16th largest economy in the world by nominal GDP and the 7th largest in terms of GDP (PPP). Estimated at US$40 billion in 2019, Indonesia's Internet economy is expected to cross the US$130 billion mark by 2025. Indonesia still depends on its domestic market and government budget spending, and on its ownership of state-owned enterprises (the central government owns 141 enterprises). The administration of prices of a range of basic goods (including rice and electricity) also plays a significant role in Indonesia's market economy. However, since the 1990s, the majority of the economy has been controlled by individual Indonesians and foreign companies.
In the aftermath of the 1997 Asian financial crisis, the government took custody of a significant portion of private sector assets through the acquisition of nonperforming bank loans and corporate assets in the debt restructuring process; the companies in custody were sold for privatisation several years later. Since 1999, the economy has recovered, and growth has accelerated to 4–6% in recent years.
In 2012, Indonesia replaced India as the second-fastest-growing G-20 economy, behind China. Since then, the annual growth rate has slowed and has fluctuated around 5%.
In the years immediately following the proclamation of Indonesian independence, both the Japanese occupation and the conflict between Dutch and Republican forces had crippled the country's production, with exports of commodities such as rubber and oil reduced to 12% and 5% of their pre-WW2 levels, respectively. The first Republican government-controlled bank, the Indonesian State Bank ("Bank Negara Indonesia", BNI), was founded on 5 July 1946. It initially acted as the manufacturer and distributor of ORI ("Oeang Republik Indonesia"/Money of the Republic of Indonesia), a currency issued by the Republican Government and the predecessor of the rupiah. Despite this, currency issued during the Japanese occupation and by Dutch authorities was still in circulation, and the simplicity of the ORI made it relatively easy to counterfeit, worsening matters. Between 1949 and 1960, Indonesia experienced several economic disruptions: the recognition of the country's independence by the Netherlands, the dissolution of the United States of Indonesia in 1950, the subsequent liberal democracy period, the nationalisation of "De Javasche Bank" into the modern Bank Indonesia, and the takeover of Dutch corporate assets following the West New Guinea dispute, all of which resulted in the devaluation of Dutch banknotes to half their value.
During the guided democracy era in the 1960s, the economy deteriorated drastically as a result of political instability. The government was inexperienced in implementing macro-economic policies, which resulted in severe poverty and hunger. By the time of Sukarno's downfall in the mid-1960s, the economy was in chaos with 1,000% annual inflation, shrinking export revenues, crumbling infrastructure, factories operating at minimal capacity, and negligible investment. Nevertheless, Indonesia's post-1960s economic improvement was considered remarkable given how few indigenous Indonesians in the 1950s had received a formal education under Dutch colonial policies.
Following President Sukarno's downfall, the New Order administration brought a degree of discipline to economic policy that quickly lowered inflation, stabilised the currency, rescheduled foreign debt, and attracted foreign aid and investment (see Inter-Governmental Group on Indonesia and Berkeley Mafia). Indonesia was until recently Southeast Asia's only member of OPEC, and the 1970s oil price rises provided an export revenue windfall that contributed to sustained high economic growth rates, averaging over 7% from 1968 to 1981.
With high levels of regulation and a dependence on declining oil prices, growth slowed to an average of 4.5% per annum between 1981 and 1988. A range of economic reforms was introduced in the late 1980s, including a managed devaluation of the rupiah to improve export competitiveness and deregulation of the financial sector. Foreign investment flowed into Indonesia, particularly into the rapidly developing export-oriented manufacturing sector, and from 1989 to 1997, the Indonesian economy grew by an average of over 7%. GDP per capita had already grown 545% from 1970 to 1980 as a result of the sudden increase in oil export revenues from 1973 to 1979. High levels of economic growth masked several structural weaknesses in the economy. Growth came at a high cost in terms of weak and corrupt governmental institutions, severe public indebtedness through mismanagement of the financial sector, the rapid depletion of natural resources, and a culture of favours and corruption in the business elite.
Corruption particularly gained momentum in the 1990s, reaching the highest levels of the political hierarchy, as Suharto was ranked the most corrupt leader by Transparency International. As a result, the legal system was weak, and there was no effective way to enforce contracts, collect debts, or sue for bankruptcy. Banking practices were very unsophisticated, with collateral-based lending the norm and widespread violation of prudential regulations, including limits on connected lending. Non-tariff barriers, rent-seeking by state-owned enterprises, domestic subsidies, barriers to domestic trade, and export restrictions all created economic distortions.
The 1997 Asian financial crisis that began to affect Indonesia became an economic and political crisis. The initial response was to float the rupiah, raise key domestic interest rates, and tighten fiscal policy. In October 1997, Indonesia and the International Monetary Fund (IMF) reached agreement on an economic reform program aimed at macroeconomic stabilisation and elimination of some of the country's most damaging economic policies, such as the National Car Program and the clove monopoly, both involving family members of Suharto. The rupiah remained weak, however, and Suharto was forced to resign in May 1998 after massive riots erupted. In August 1998, Indonesia and the IMF agreed on an Extended Fund Facility (EFF) under President B. J. Habibie that included significant structural reform targets. President Abdurrahman Wahid took office in October 1999, and Indonesia and the IMF signed another EFF in January 2000. The new program also has a range of economic, structural reform, and governance targets.
The effects of the crisis were severe. By November 1997, rapid currency depreciation had seen public debt reach US$60 billion, imposing severe strains on the government's budget. In 1998, real GDP contracted by 13.1%, and the economy reached its low point in mid-1999 with 0.8% real GDP growth. Inflation reached 72% in 1998 but slowed to 2% in 1999. The rupiah, which had been in the Rp 2,600/USD1 range at the start of August 1997, fell to Rp 11,000/USD1 by January 1998, with spot rates around Rp 15,000 for brief periods during the first half of 1998. It returned to the Rp 8,000/USD1 range at the end of 1998 and has generally traded in the Rp 8,000–10,000/USD1 range ever since, with fluctuations that are relatively predictable and gradual. However, the rupiah began devaluing past Rp 11,000 in 2013, and as of November 2016 was around Rp 13,000/USD1.
Since an inflation target was introduced in 2000, the GDP deflator and the CPI have grown at an average annual pace of 10¾% and 9%, respectively, similar to the pace recorded in the two decades prior to the 1997 crisis, but well below the pace in the 1960s and 1970s. Inflation has also generally trended lower through the 2000s, with some of the fluctuations in inflation reflecting government policy initiatives such as the changes in fiscal subsidies in 2005 and 2008, which caused large temporary spikes in CPI growth.
In late 2004, Indonesia faced a 'mini-crisis' due to rising international oil prices and imports. The currency exchange rate reached Rp 12,000/USD1 before stabilising. Under President Susilo Bambang Yudhoyono (SBY), the government was forced to cut its massive fuel subsidies in October 2005, which had been planned to cost $14 billion for the year. This led to a more than doubling in the price of consumer fuels, resulting in double-digit inflation. The situation had stabilised, but the economy continued to struggle with inflation at 17% in late 2005. The economic outlook became more positive as the 2000s progressed. Growth accelerated to 5.1% in 2004 and reached 5.6% in 2005. Real per capita income returned to 1996–1997 levels. Growth was driven primarily by domestic consumption, which accounts for roughly three-fourths of Indonesia's gross domestic product (GDP). The Jakarta Stock Exchange was the best-performing market in Asia in 2004, up by 42%. Problems that continue to put a drag on growth include low foreign investment levels, bureaucratic red tape, and widespread corruption, which costs Rp 51.4 trillion (US$5.6 billion), or approximately 1.4% of GDP, annually. However, there was robust economic optimism following the conclusion of the peaceful 2004 elections.
As of February 2007, the unemployment rate was 9.75%. Despite a slowing global economy, Indonesia's economic growth accelerated to a ten-year high of 6.3% in 2007. This growth rate was sufficient to reduce poverty from 17.8% to 16.6%, based on the government's poverty line, and reversed the recent trend towards jobless growth, with unemployment falling to 8.46% in February 2008. Unlike many of its more export-dependent neighbours, Indonesia managed to skirt the recession, helped by strong domestic demand (which makes up about two-thirds of the economy) and a government fiscal stimulus package of about 1.4% of GDP. After India and China, Indonesia was the third-fastest-growing economy in the G20. After the $512 billion economy expanded 4.4% in the first quarter of 2009 from a year earlier, the IMF revised its 2009 forecast for Indonesia to 3–4% from 2.5%. Indonesia enjoyed stronger fundamentals as the authorities implemented wide-ranging economic and financial reforms, including a rapid reduction in public and external debt, strengthening of corporate and banking sector balance sheets, and reduction of bank vulnerabilities through higher capitalisation and better supervision.
In 2012, Indonesia's real GDP growth reached 6%, but it then steadily decreased to below 5% until 2015. After Joko Widodo succeeded SBY, the government took measures to ease regulations on foreign direct investment to stimulate the economy. Indonesia managed to increase its GDP growth to slightly above 5% in 2016–2017. However, the government still faces problems such as a weakening currency, decreasing exports, and stagnating consumer spending. The unemployment rate for 2017 stands at 5.5%.
Source: Indonesian Statistics Bureau (Biro Pusat Statistik), annual production data.
Agriculture is a key sector, contributing 14.43% of GDP. Around 30% of the land area is used for agriculture, and the sector employs about 49 million people (41% of the total workforce). Primary agricultural commodities include rice, cassava (tapioca), peanuts, natural rubber, cocoa, coffee, palm oil, and copra, as well as poultry, beef, pork, and eggs. Palm oil production is vital to the economy, as Indonesia is the world's biggest producer and consumer of the commodity, providing about half of the world's supply. Plantations in the country stretched across 6 million hectares as of 2007, with a replanting plan set for an additional 4.7 million hectares to boost productivity in 2017. There are a number of negative social and environmental impacts of palm oil production in Southeast Asia.
Indonesia was the only Asian member of the Organization of Petroleum Exporting Countries (OPEC) until 2008 and is currently a net oil importer. In 1999, crude and condensate output averaged per day, and in 1998, the oil and gas sector, including refining, contributed approximately 9% to GDP. As of 2005, crude oil and condensate output was per day, a substantial decline from the 1990s due primarily to ageing oil fields and a lack of investment in oil production equipment. This decline in production has been accompanied by a substantial increase in domestic consumption, about 5.4% per year, leading to an estimated US$1.2 billion cost for importing oil in 2005. The state owns all petroleum and mineral rights. Foreign firms participate through production-sharing and work contracts. Oil and gas contractors are required to finance all exploration, production, and development costs in their contract areas and are entitled to recover operating, exploration, and development costs out of the oil and gas produced. Indonesia had previously subsidised fuel prices to keep prices low, costing US$7 billion in 2004. SBY mandated a significant reduction of the government subsidy of fuel prices in several stages. The government stated that the cuts in subsidies were aimed at reducing the budget deficit to 1% of GDP in 2005, down from around 1.6% the previous year. At the same time, the government offered one-time subsidies to qualified citizens to alleviate hardships.
Indonesia is the world's largest tin market. Although mineral production traditionally centred on bauxite, silver, and tin, the country is expanding its copper, nickel, gold, and coal output for export markets. In mid-1993, the Department of Mines and Energy reopened the coal sector to foreign investment, resulting in a joint venture between an Indonesian coal producer, BP, and the Rio Tinto Group. Total coal production reached 74 million metric tons in 1999, including exports of 55 million tons, and in 2011, production was 353 million tons. As of 2014, Indonesia is the third-largest producer, with a total output of 458 Mt and exports of 382 Mt. At this rate, the reserves would be exhausted in 61 years, around 2075. Not all production can be exported, because the Domestic Market Obligation (DMO) regulation requires producers to supply the domestic market; in 2012, the DMO was 24.72%. Starting in 2014, no low-grade coal exports are allowed, so upgraded brown coal plants that raise the calorific value of coal from 4,500 to 6,100 kcal/kg will be built in South Kalimantan and South Sumatra. Indonesia is also the world's largest producer of nickel.
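The reserve-lifetime and export figures above are simple ratios, and can be checked back-of-envelope. In the sketch below, the implied reserve total is derived from the article's own production and lifetime numbers (458 Mt/year for 61 years), not from any official reserve estimate:

```python
# Back-of-envelope check of the coal figures quoted above.
# The implied reserve total is back-calculated from the article's own
# numbers (458 Mt/year for 61 years), not an official reserve estimate.
annual_production_mt = 458        # 2014 output, million tonnes
reserve_lifetime_years = 61       # claimed lifetime at this production rate

implied_reserves_mt = annual_production_mt * reserve_lifetime_years
print(f"Implied reserves: about {implied_reserves_mt:,} Mt")  # 27,938 Mt

# Domestic Market Obligation: the share of output that must stay onshore.
dmo_share = 0.2472                # 2012 DMO of 24.72%
exportable_mt = annual_production_mt * (1 - dmo_share)
print(f"Exportable under the 2012 DMO: about {exportable_mt:.0f} Mt")
```

Note that the 2014 export figure of 382 Mt exceeds the roughly 345 Mt ceiling implied by the 2012 DMO percentage, consistent with the DMO share being set year by year.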
Two US firms operate three copper/gold mines in Indonesia, with Canadian and British firms holding significant other investments in nickel and gold, respectively. Indian conglomerates such as Vedanta Resources and the Tata Group also have substantial mining operations in Indonesia. In 1998, the value of Indonesian gold and copper production was $1 billion and $843 million, respectively. Receipts from gold, copper, and coal comprised 84% of the $3 billion earned in 1998 by the mineral mining sector. With the addition of an alumina project producing 5% of the world's alumina output, Indonesia would become the world's second-largest alumina producer. The project will not process the ore into aluminium, as there are around 100 alumina derivatives that can be developed further by other companies in Indonesia.
Joko Widodo's administration continued the resource nationalism policy of SBY, nationalising some assets controlled by multinational companies such as Freeport McMoRan, Total SA and Chevron. In 2018, in a move aimed to cut imports, oil companies operating in Indonesia were ordered to sell their crude oil to state-owned Pertamina.
In 2010, Indonesia sold 7.6 million motorcycles, most of which were produced in the country with almost 100% local components. Honda led the market with a 50.95% share, followed by Yamaha with 41.37%. In 2011, retail car sales totalled 888,335 units, a 19.26% increase from the previous year. Toyota dominated the domestic car market (35.34%), followed by Daihatsu and Mitsubishi with 15.44% and 14.56%, respectively. Since 2011, some local carmakers have introduced Indonesian national cars that can be categorised as Low-Cost Green Cars (LCGC). In 2012, sales increased significantly by 24%, marking the first time that annual automobile sales exceeded one million units.
In August 2014, Indonesia exported 126,935 Completely Built Up (CBU) vehicle units and 71,000 Completely Knocked Down (CKD) vehicle units, while total production reached 878,000 vehicle units, of which exports comprised 22.5%. Automotive exports are more than double imports. By 2020, automotive exports were predicted to become the country's third largest, after crude palm oil (CPO) and footwear. In August 2015, Indonesia exported 123,790 motorcycles. The dominant manufacturer, which exported 83,641 units, announced that it would make Indonesia an export base for its products.
In 2017, the country produced almost 1.2 million motor vehicles, ranking it as the 18th largest producer in the world. Nowadays, Indonesian automotive companies can produce cars with a high ratio of local content (80%–90%).
In 2018, the country produced 1.34 million cars, including 346,000 units exported, mainly to the Philippines and Vietnam.
There are 50 million small businesses in Indonesia, with online usage growing 48% in 2010. Google announced that it would open a local office in Indonesia before 2012. According to Deloitte, in 2011 Internet-related activities generated 1.6% of GDP, more than exports of electronic and electrical equipment (1.51%) and liquefied natural gas (1.45%).
As of the end of June 2011, fixed state assets were Rp 1,265 trillion ($128 billion). The value of state stocks was Rp 50 trillion ($5 billion), while other state assets were Rp 24 trillion ($2.4 billion).
In 2015, financial services covered Rp 7.2 trillion. Fifty domestic and foreign conglomerates held around 70.5% of it: fourteen were vertical conglomerates, 28 horizontal, and eight mixed. Thirty-five entities operated mainly in banking, 13 in non-bank industries, and one each in specialised financial industries and capital markets.
The Indonesian Textile Association has reported that in 2013, the textile sector is likely to attract investment of around $175 million. In 2012, the investment in this sector was $247 million, of which only $51 million was for new textile machinery. Exports from the textile sector in 2012 were $13.7 billion.
In 2011, Indonesia issued 55,010 working visas to foreigners, an increase of 10% over 2010, while the number of foreign residents in Indonesia, excluding tourists and foreign emissaries, was 111,752, a rise of 6% from the previous year. Those who received visas for six months to one year were mostly Chinese, Japanese, South Koreans, Indians, Americans, and Australians; a few were entrepreneurs starting new businesses. Malaysia is the most common destination of Indonesian migrant workers (including illegal workers). In 2010, according to a World Bank report, Indonesia was among the world's top ten remittance-receiving countries, with a value totalling $7 billion. In May 2011, six million Indonesian citizens were working overseas, 2.2 million of them in Malaysia and another 1.5 million in Saudi Arabia.
Indonesia and Japan signed the Indonesia–Japan Economic Partnership Agreement (IJEPA), which had come into effect on 1 July 2008. The agreement was Indonesia's first bilateral free-trade agreement to ease the cross-border flow of goods and people as well as investment between both countries. Trade with China has increased since the 1990s, and in 2014, China became Indonesia's second-largest export destination after Japan. With China's economic rise, Indonesia has been intensifying its trade relationship with China to counterbalance its ties with the West. By 2020, China had become Indonesia's largest export destination.
At the beginning of the post-Suharto era, US exports to Indonesia in 1999 totalled $2 billion, down significantly from $4.5 billion in 1997. The main exports were construction equipment, machinery, aviation parts, chemicals, and agricultural products. US imports from Indonesia in 1999 totalled $9.5 billion and consisted primarily of clothing, machinery and transportation equipment, petroleum, natural rubber, and footwear. Financial assistance to Indonesia is coordinated through the Consultative Group on Indonesia (CGI), formed in 1989, which includes 19 donor countries and 13 international organisations that meet annually to coordinate donor assistance. In 2019, as Indonesia's share of global trade exceeded 0.5 percent, the United States Trade Representative decided to no longer classify Indonesia as a "developing country." Despite the revocation of this status, the Indonesian government has said that this would not change the Generalized System of Preferences facilities that Indonesia enjoys from the United States.
This chart shows the trend of Indonesia's GDP at market prices, as estimated by the IMF, with figures in millions of rupiah.
For purchasing power parity comparisons, the exchange rate for 1 US dollar is set at 3,094.57 rupiah.
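That conversion factor can be applied directly: dividing a nominal rupiah amount by the stated rate yields its PPP-dollar equivalent. A minimal sketch, in which the helper name and the sample amount are illustrative and not from the source:

```python
# Convert nominal rupiah amounts to PPP dollars using the stated
# conversion factor of 3,094.57 rupiah per US dollar.
PPP_RATE_IDR_PER_USD = 3094.57

def idr_to_ppp_usd(amount_idr: float) -> float:
    """Rupiah -> PPP-adjusted US dollars (illustrative helper)."""
    return amount_idr / PPP_RATE_IDR_PER_USD

# Example: one trillion rupiah in PPP terms (illustrative amount).
print(f"Rp 1 trillion = about {idr_to_ppp_usd(1e12):,.0f} PPP dollars")
```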
Average net wages in Indonesia vary by sector. In February 2017, the electricity, gas, and water sector had the highest average net wage, while agriculture had the lowest.
Since the late 1980s, Indonesia has made significant changes to its regulatory framework to encourage economic growth. This growth was financed mostly from private investment, both foreign and domestic. US investors dominated the oil and gas sector and undertook some of Indonesia's largest mining projects. In addition, the presence of US banks, manufacturers, and service providers expanded, especially after the industrial and financial sector reforms of the 1980s. Other major foreign investors included India, Japan, the UK, Singapore, the Netherlands, Qatar, Hong Kong, Taiwan and South Korea.
The 1997 crisis made continued private financing imperative but problematic. New foreign investment approvals fell by almost two-thirds between 1997 and 1999. The crisis further highlighted areas where additional reform was needed. Frequently cited areas for improving the investment climate were the establishment of a functioning legal and judicial system, adherence to competitive processes, and adoption of internationally acceptable accounting and disclosure standards. Despite improvements of laws in recent years, Indonesia's intellectual property rights regime remains weak, and lack of effective enforcement is a significant concern. Under Suharto, Indonesia had moved towards the private provision of public infrastructure, including electric power, toll roads, and telecommunications. The 1997 crisis brought to light a severe weakness in the process of dispute resolution, however, particularly in the area of private infrastructure projects. Although Indonesia continued to have the advantages of a large labour force, abundant natural resources and modern infrastructure, private investment in new projects largely ceased during the crisis.
As of 28 June 2010, the Indonesia Stock Exchange had 341 listed companies with a combined market capitalisation of $269.9 billion. As of November 2010, two-thirds of the market capitalisation was held by foreign funds, and only around 1% of the population held stock investments. Efforts are being made to further improve the business and investment environment: in the World Bank's Doing Business Survey, Indonesia rose to 122nd out of 178 countries in 2010, from 129th the previous year. Despite these efforts, the rank is still below regional peers, and an unfavourable investment climate persists. For example, potential foreign investors and their executive staff cannot maintain their own bank accounts in Indonesia unless they are tax-paying local residents (paying tax in Indonesia on their worldwide income).
From 1990 to 2010, Indonesian companies were involved in 3,757 mergers and acquisitions as either acquirer or target, with a total known value of $137 billion. In 2010, a record 609 transactions were announced, a 19% increase over 2009. The value of deals in 2010 was US$17 billion, the second-highest ever. In 2012, Indonesia realised total investments of $32.5 billion, surpassing its annual target of $25 billion, as reported by the Investment Coordinating Board (BKPM) on 22 January. The primary investments were in the mining, transport, and chemicals sectors. In 2011, the Indonesian government announced a new "Masterplan" (known as the MP3EI, or "Masterplan Percepatan dan Perluasan Pembangunan Ekonomi Indonesia", the "Masterplan to Accelerate and Expand Economic Development in Indonesia"). The aim was to encourage increased investment, particularly in infrastructure projects across Indonesia.
Indonesia regained its investment-grade rating from Fitch Ratings in late 2011, and from Moody's in early 2012, after losing it in the 1997 crisis, during which Indonesia spent more than Rp 450 trillion ($50 billion) to bail out lenders from banks. Fitch raised Indonesia's long-term foreign and local currency debt ratings to BBB- from BB+, both with a stable outlook, and predicted that the economy would grow at least 6% on average per year through 2013, despite a less conducive global economic climate. Moody's raised Indonesia's foreign and local currency bond ratings to Baa3 from Ba1 with a stable outlook. In May 2017, S&P Global raised Indonesia's rating from BB+ to the investment grade BBB- with a stable outlook, citing a rebound in exports and strong consumer spending in early 2017.
In 2015, total public spending was Rp 1,806 trillion (US$130.88 billion, 15.7% of GDP). Government revenues, including those from state-owned enterprises (BUMN), totalled Rp 1,508 trillion (US$109.28 billion, 13.1% of GDP), resulting in a deficit of 2.6%. Since the 1997 crisis, which caused an increase in debt and public subsidies and a decrease in development spending, Indonesia's public finances have undergone a major transformation. As a result of a series of macroeconomic policies, including a low budget deficit, Indonesia is considered to have sufficient financial resources to address its development needs. Decentralisation, enacted during the Habibie administration, has changed the manner of government spending, with around 40% of public funds being transferred to regional governments by 2006.
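The quoted deficit follows arithmetically from the spending and revenue figures. A quick consistency check, in which GDP is back-calculated from the stated 15.7% spending share rather than quoted directly in the source:

```python
# Consistency check of the 2015 budget figures quoted above.
spending_trn = 1806    # Rp trillion, stated as 15.7% of GDP
revenue_trn = 1508     # Rp trillion, stated as 13.1% of GDP

gdp_trn = spending_trn / 0.157            # implied GDP, ~11,503 Rp trillion
deficit_trn = spending_trn - revenue_trn  # 298 Rp trillion
deficit_pct = 100 * deficit_trn / gdp_trn
print(f"Deficit: Rp {deficit_trn} trillion = {deficit_pct:.1f}% of GDP")
# prints: Deficit: Rp 298 trillion = 2.6% of GDP
```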
In 2005, rising international oil prices led to the government's decision to slash fuel subsidies, freeing an extra US$10 billion for development spending; by 2006, there was an additional US$5 billion due to steady growth and declining debt-service payments. This was the country's first "fiscal space" since the revenue windfall of the 1970s oil boom. Thanks to decentralisation and this fiscal space, Indonesia has the potential to improve the quality of its public services and to focus on further reforms, such as the provision of targeted infrastructure. Careful management of allocated funds has been described as Indonesia's main issue in public expenditure.
In 2018, President Joko Widodo substantially increased the amount of debt by taking foreign loans; Indonesia's debt rose by Rp 1,815 trillion compared with the level under his predecessor, SBY. He has insisted that the loans are used for productive long-term projects such as building roads, bridges, and airports. Finance Minister Sri Mulyani also stated that despite the increase in foreign loans and debt, the government has also increased the budgets for infrastructure development, healthcare, education, and transfers to regencies and villages. The government insists that the foreign debt remains under control and complies with the relevant laws, which limit debt to under 60% of GDP.
Based on the 2009 evaluation of regional administration performance, the best performers, in order, were:
The JBIC Fiscal Year 2010 survey (22nd Annual Survey Report) found that in 2009, Indonesia had the highest satisfaction level in net sales and profits among Japanese companies.
According to the Asia Wealth Report, Indonesia is predicted to have the highest growth rate of high-net-worth individuals (HNWI) among the ten major Asian economies. The Wealth Report 2015 by Knight Frank reported that in 2014 there were 24 individuals with a net worth above US$1 billion, 18 of whom lived in Jakarta, with the rest spread among other large cities in Indonesia. A further 192 people could be categorised as centamillionaires, with over US$100 million of wealth, and 650 as high-net-worth individuals with wealth exceeding US$30 million.
As of 2011, labour militancy was increasing, with a major strike at the Grasberg mine and numerous strikes elsewhere. A common issue was attempts by foreign-owned enterprises to evade Indonesia's strict labour laws by calling their employees "contract workers". "The New York Times" expressed concern that Indonesia's cheap labour advantage might be lost; however, a large pool of unemployed people willing to accept substandard wages and conditions remains available. One factor in the increased militancy is greater awareness, via the internet, of prevailing wages in other countries and of the generous profits foreign companies are making in Indonesia.
On 1 September 2015, thousands of workers in Indonesia staged demonstrations across the country in pursuit of higher wages and improved labour laws. Approximately 35,000 people rallied in several parts of the country. They demanded a 22% to 25% increase in the minimum wage by 2016 and lower prices on essential goods, including fuel. The unions also want the government to ensure job security and provide the fundamental rights of the workers.
Economic disparity and the flow of natural-resource profits to Jakarta have led to discontent and even contributed to separatist movements in areas such as Aceh and Papua. Geographically, the poorest fifth of regions account for just 8% of consumption, while the wealthiest fifth account for 45%. While new laws on decentralisation may partially address the problem of uneven growth, there are many hindrances to putting the policy into practice. At a 2011 Indonesian Chamber of Commerce and Industry (Kadin) meeting in Makassar, the Minister for Disadvantaged Regions said that 184 regencies were classified as disadvantaged areas, around 120 of them in eastern Indonesia. The richest 1% of Indonesia's population holds 49.3% of the country's $1.8 trillion wealth, down from 53.5%; this nonetheless ranks fourth after Russia (74.5%), India (58.4%), and Thailand (58%).
Inflation has long been a problem in Indonesia. Because of political turmoil, the country once suffered hyperinflation, with annual inflation of 1,000% between 1964 and 1967, enough to create severe poverty and hunger. Even though the economy recovered quickly during the first decade of the New Order administration (1970–1981), inflation never fell below 10% annually. Inflation slowed during the mid-1980s, but the economy was also languid, as a falling oil price dramatically reduced export revenue. The economy again experienced rapid growth between 1989 and 1997 thanks to the improving export-oriented manufacturing sector, but inflation still outpaced economic growth, widening the gap between Indonesians. Inflation peaked in 1998 during the 1997 crisis at over 58%, pushing poverty back to 1960s levels. During the economic recovery and growth of recent years, the government has tried to lower the inflation rate, although inflation has been affected by global fluctuations and domestic market competition. As of 2010, the inflation rate was approximately 7%, while economic growth was 6%. To date, inflation affects the Indonesian lower middle class, especially those unable to afford food after price hikes. At the end of 2017, Indonesia's inflation rate was 3.61%, above the government-set forecast of 3.0–3.5%.
Communications in Indonesia
Communications in Indonesia have a complex history, shaped by the need to reach an extended archipelago of over 17,500 islands. The once-important non-electronic communication methods of the past have given way to a considerable telecommunications infrastructure in contemporary Indonesia.
Indonesia has long used traditional forms of communication between its various islands and villages. It was not until the Dutch colonised Indonesia, beginning in the sixteenth century, that a more elaborate communication system was constructed, both within Indonesia and to other countries. The first connection to Australia was an undersea telegraph cable completed on 18 November 1871, connecting Java to Darwin, and eventually to the Australian Overland Telegraph Line across Australia.
After gaining independence, Indonesia started to develop its own communication systems, generally following the rest of the world. The construction of communication towers and the launch of the Palapa series of communication satellites took place during the New Order period.
A number of lines connect Indonesia to international communication routes. For example, the SEA-ME-WE 3 optical submarine telecommunications cable lands at both Medan and Jakarta, connecting Europe with Southeast Asia (several countries up to Japan) and Australia (Perth).
Domestically, Indonesia has good media coverage across most major islands, although smaller and less populated islands do not always receive attention from media companies and rely on satellite communication.
Indonesia has a long list of print media, in the form of newspapers and magazines. Some, such as Kompas and Koran Tempo, are circulated daily and are relatively easy to obtain. Others are island- or city-specific and are usually not distributed to other regions.
All sub-districts in Indonesia were expected to be connected to the Internet by June 2011.
The media in Indonesia is regulated by the Ministry of Communications and Informatics.
LIRNEasia's Telecommunications Regulatory Environment (TRE) index, which summarises stakeholders' perceptions of certain TRE dimensions, provides insight into how conducive the environment is for further development and progress. The most recent survey was conducted in July 2008 in eight Asian countries: Bangladesh, India, Indonesia, Sri Lanka, the Maldives, Pakistan, Thailand, and the Philippines. The tool measured seven dimensions: i) market entry; ii) access to scarce resources; iii) interconnection; iv) tariff regulation; v) anti-competitive practices; vi) universal services; and vii) quality of service, for the fixed, mobile, and broadband sectors.
Below-average scores in all sectors and across dimensions reflect general dissatisfaction with the TRE in Indonesia. However, this does not mean that respondents have ignored recent developments. The relatively healthy growth of the mobile sector is reflected in the higher TRE scores it received for most dimensions compared with the fixed sector. On average, the mobile sector scores best, followed by fixed and broadband.
Transport in Indonesia
Indonesia's transport system has been shaped over time by the economic resource base of an archipelago with thousands of islands, and the distribution of its more than 200 million people highly concentrated on a single island, Java.
All transport modes play a role in the country’s transport system and are generally complementary rather than competitive.
Road transport is predominant, with a total system length of est. in 2008.
The railway system has four unconnected networks in Java and Sumatra, primarily dedicated to transporting bulk commodities and long-distance passenger traffic.
Sea transport is extremely important for economic integration and for domestic and foreign trade. It is well developed, with each of the major islands having at least one significant port city.
The role of inland waterways is relatively minor and is limited to certain areas of Eastern Sumatra and Kalimantan.
The function of air transport is significant, particularly where land or water transport is deficient or non-existent. It is based on an extensive domestic airline network where all major cities can be reached by passenger plane.
Because Indonesia encompasses a sprawling archipelago, maritime shipping provides essential links between different parts of the country. Boats in common use include large container ships, a variety of ferries, passenger ships, sailing ships, and smaller motorised vessels. Traditional wooden pinisi vessels are still widely used for inter-island freight within the Indonesian archipelago; the main traditional pinisi harbours are Sunda Kelapa in Jakarta and Paotere harbour in Makassar.
Frequent ferry services cross the straits between nearby islands, especially in the chain of islands stretching from Sumatra through Java to the Lesser Sunda Islands. On the busy crossings between Sumatra, Java, and Bali, multiple car ferries run frequently, twenty-four hours a day. There are also international ferry services across the Straits of Malacca between Sumatra and Malaysia, and between Singapore and nearby Indonesian islands such as Batam.
A network of passenger ships makes longer connections to more remote islands, especially in the eastern part of the archipelago. The national shipping line, Pelni, provides passenger service to ports throughout the country on a two- to four-week schedule. These ships generally provide the least expensive way to cover long distances between islands; smaller, privately run boats also provide service between islands.
On some islands, major rivers provide a key transportation link in the absence of good roads. On Kalimantan, longboats running on the rivers are the only way to reach many inland areas.
Indonesia has 21,579 km of navigable waterways, of which about one half are on Kalimantan and a quarter each on Sumatra and Papua. Waterways are much needed because the rivers on these islands are not wide enough to hold medium-sized ships, while roads and railways are not good options, since Kalimantan and Papua, unlike Java, are not highly developed. By length of waterways, Indonesia ranks seventh among countries with the longest waterways.
Major ports and harbours include Bitung, Cilacap, Cirebon, Jakarta, Kupang, Palembang, Semarang, Surabaya, and Makassar. Ports are managed by the four Indonesia Port Corporations, numbered I through IV, each with jurisdiction over a region of the country, with I in the west and IV in the east. The Port of Tanjung Priok in Jakarta is Indonesia's busiest, handling over 5.20 million TEUs. A two-phase "New Tanjung Priok" extension project is underway, which will triple the existing annual capacity when fully operational in 2023. In 2015, groundbreaking for the strategic Kuala Tanjung Port in North Sumatra was completed. It is expected to accommodate 500,000 TEUs per year, overtaking Johor's Tanjung Pelepas Port, and could even compete with the port of Singapore.
A wide variety of vehicles are used for transportation on Indonesia's roads. Bus services are available in most areas connected to the road network. Between major cities, especially on Sumatra, Java, and Bali, services are frequent and direct; many services are available with no stops until the final destination. In more remote areas, and between smaller towns, most services are provided with minibuses or minivans ("angkut"). Buses and vans are also the primary form of transportation within cities. Often, these are operated as share taxis, running semi-fixed routes.
Many cities and towns also have some form of transportation for hire, such as taxis. There are usually also bus services of various kinds, such as the Kopaja buses and the more sophisticated Transjakarta bus rapid transit (BRT) system in Jakarta, the longest BRT system in the world, with 13 corridors and 10 cross-corridor routes carrying 430,000 passengers daily in 2016. Other cities such as Yogyakarta, Palembang, Bandung, Denpasar, Pekanbaru, Semarang, Makassar, and Padang also have BRT systems, though without segregated lanes. Many cities also have motorised autorickshaws ("bajaj") of various kinds. Cycle rickshaws, called "becak" in Indonesia, are a regular sight on city roads and provide inexpensive transportation; they have been blamed for causing traffic congestion and were consequently banned from most parts of Jakarta in 1972. Horse-drawn carts are found in some cities and towns.
Due to the increasing purchasing power of Indonesians, private cars are becoming more common, especially in major cities. However, growth in the number of cars increasingly outpaces the construction of new roads, resulting in frequently crippling traffic jams in large parts of major cities, especially Jakarta, which often extend onto the highways. Jakarta has some of the worst traffic jams in the world.
Indonesia has about of paved highways and of unpaved highways ( estimate). The AH2 highway is one of Indonesia's main highways; another is the AH25 in Sumatra. Some of Indonesia's highways are National Routes (25 of them, currently only in Java and, partially, Sumatra), and some are expressways, known locally as (lit. toll roads). The first expressway in Indonesia was the Jagorawi Toll Road, opened in 1978. Over of expressways opened during the first term of President Joko Widodo, surpassing previous administrations. Since 2018, expressways no longer accept cash; all tolls must be paid with certain contactless bank cards.
Indonesia has also been gradually introducing an Intelligent Transportation System (ITS) since 2012. ITS Indonesia was formed on 26 April 2011.
National routes of Indonesia pass through the hearts of most main cities and are designed to connect city centres. They act as the main inter-city routes outside the tollways. A national route must be passable by logistics trucks while simultaneously handling ordinary traffic. National routes in Java are numbered, while those outside Java are not. In some cities, even in crowded districts, national routes often form bypasses or ring roads (Indonesian: "jalan lingkar") around the city to prevent inter-city traffic from entering the city centre.
The Ministry of Public Works and Public Housing is responsible for these networks, except for the DKI Jakarta section from the Jakarta Inner Ring Road to the Jakarta Outer Ring Road. A national route can be revised if it proves unable to handle the traffic; such a revision is usually handled by the provincial or regional government.
The overwhelming majority of highways in Indonesia are tolled; the high cost of building and maintaining a national highway system means that Indonesia has to outsource construction and maintenance to private and state-owned companies. Indonesia has an extensive system of highways consisting of:
The toll road from Tanjung Benoa to the airport and from the airport to Serangan, running in a direct line rather than a curve, is 12.7 kilometres long and is also equipped with motorcycle lanes. It was formally opened on 23 September 2013, about a week before the opening of the APEC Summit in Bali.
"Planned:"
The majority of Indonesia's railways are located on Java and are used for both passenger and freight transport. The railway is operated by Kereta Api Indonesia. The inter-city rail network on Java is complemented by local commuter rail services in the Jakarta metropolitan area and in Surabaya. In Jakarta, the commuter rail service (Kereta Commuter Indonesia) carries 885,000 passengers a day. In addition, mass rapid transit and light rail transit systems are under construction in Jakarta. There are four separate railway networks on Sumatra: one in Aceh, one in North Sumatra (a connection to Aceh is proposed to be finished in the 2020s), another in West Sumatra, and the final one in South Sumatra and Lampung. South Sulawesi has a railway network in Barru Regency, built as part of the Trans-Sulawesi Railway, although it has not yet been used. There are no railways in other parts of Indonesia, although new networks are being developed on Kalimantan and Sulawesi. The government's plan to build a high-speed rail (HSR) line, the first in Indonesia and Southeast Asia, was announced in 2015. It is expected to connect the capital Jakarta with Bandung, covering a distance of around . Plans were also mentioned for a possible extension to Surabaya, the country's second largest city.
As of 2013, Indonesia has pipelines for condensate , condensate/gas , gas , liquid petroleum gas , oil , oil/gas/water , refined products , and water .
Air transport in Indonesia serves as a critical means of connecting the thousands of islands throughout the archipelago. Indonesia is the largest archipelagic country in the world, extending from east to west and from north to south, and comprising 13,466 islands, 922 of them permanently inhabited. With an estimated population of over 255 million people (making it the world's fourth-most-populous country), and owing to the growth of the middle class, the boom of low-cost carriers in the last decade, and overall economic growth, many domestic travellers have shifted from land and sea transport to faster and more comfortable air travel. Indonesia is widely regarded as an emerging market for air travel in the region. Between 2009 and 2014, the number of Indonesian air passengers increased from 27,421,235 to 94,504,086, more than a threefold increase.
However, safety remains a persistent problem in Indonesian aviation. Several accidents have given Indonesia's air transport system the reputation of being the least safe in the world. Indonesian aviation faces numerous challenges, including poorly maintained, outdated, and often overwhelmed infrastructure, human error, bad weather, haze caused by plantation fires, and ash from the country's numerous volcanoes that disrupts air transportation.
The Indonesian Air Force has 34,930 personnel equipped with 224 aircraft, among them 110 combat aircraft. The Indonesian Air Force possesses and operates numerous military air bases and military airstrips across the archipelago.
The International Air Transport Association (IATA) has predicted that Indonesia will become the world's sixth largest air travel market by 2034. Around 270 million passengers are predicted to fly from and within Indonesia by 2034.
As of 2013, there are 673 airports in Indonesia, 186 of which have paved runways and 487 unpaved, as well as 76 heliports. Jakarta's Soekarno–Hatta International Airport serves as the country's main air transportation hub and its busiest airport. Since 2010, it has been the busiest airport in Southeast Asia, surpassing Suvarnabhumi and Changi airports. In 2017, it was the 17th busiest airport in the world, with 62.1 million passengers. Today the airport is running over capacity. After an expansion adding a third terminal was completed in 2016, the total capacity of the three terminals increased to 43 million passengers a year. The first and second terminals will be revitalised to accommodate 67 million passengers a year.
In Indonesia, there are 22 scheduled commercial airlines that carry more than 30 passengers and 32 that carry 30 or fewer, as well as chartered airlines. Notable Indonesian airlines include Garuda Indonesia, the government-owned flag carrier; Lion Air, currently the largest private low-cost carrier in Indonesia; Sriwijaya Air, the largest medium-service regional carrier and the country's third largest carrier overall; and Indonesia AirAsia, the Indonesian branch of the Malaysian-based AirAsia.
"Mudik", or "Pulang Kampung", is an Indonesian term for the activity where migrants or migrant workers return to their hometown or village during or before major holidays, especially Lebaran (Eid al-Fitr). Although the mudik homecoming travel before Lebaran takes place in most Indonesian urban centers, the highlight is on the nation's largest urban agglomeration; Greater Jakarta, as millions of Jakartans exit the city by various means of transportation, overwhelming train stations and airports and also clogging highways, especially the Trans-Java toll road and Java's Northern Coast Road. In 2017 it was estimated that the people that took annual "mudik" travel reached 33 million people.
The demand for train and airplane tickets usually spikes a month or two before Lebaran, driving unusually high prices for highly sought departure dates. Some airlines add extra flights or operate larger aircraft to deal with the surge in demand. Indonesian train operator Kereta Api Indonesia usually offers additional trips or runs longer trains with more cars to meet the demand. Private operators of intercity and interprovincial buses usually charge higher fares during this period. The impact is tremendous, as millions of buses, cars, and motorcycles jam the roads and highways, causing kilometres of traffic jams each year.
Indonesian National Armed Forces
The Indonesian National Armed Forces (, literally "Indonesian National Military"; abbreviated TNI) are the military forces of the Republic of Indonesia. They consist of the Army (TNI-AD), the Navy (TNI-AL), and the Air Force (TNI-AU). The President of Indonesia is the commander-in-chief of the Armed Forces. In 2016, they comprised approximately 395,500 military personnel, including the Indonesian Marine Corps (), which is a branch of the Navy.
The force was initially formed under the name People's Security Army (TKR), later renamed the Republic of Indonesia Army (TRI), before taking its present name, the Indonesian National Armed Forces (TNI). The Indonesian Armed Forces were formed during the Indonesian National Revolution, when they undertook a guerrilla war alongside informal militias. As a result of this, and of the need to maintain internal security, the armed forces, including the Army, Navy, and Air Force, have been organised along territorial lines, aimed at defeating internal enemies of the state and potential external invaders.
Under the 1945 Constitution, all citizens are legally entitled and obliged to defend the nation. Conscription is provided for by law, yet the Forces have been able to maintain mandated strength levels without resorting to a draft. Most enlisted personnel are recruited in their own home regions and generally train and serve most of their time in units nearby.
Indonesian armed forces personnel do not include members of law enforcement and paramilitary bodies such as the Indonesian National Police (Polri), with approximately 590,000 personnel; the Mobile Brigade Corps (Brimob), with around 42,000 armed personnel; the Civil Service Police Unit (municipal police), or "Satpol PP"; the Indonesian College Students' Regiment (Menwa), a collegiate military service with 26,000 trained personnel; and civil defence personnel (Linmas, the Public Protection Service Corps, which replaced the old Hansip in 2014).
Before the formation of the Indonesian Republic, military authority in the Dutch East Indies was held by the Royal Dutch East Indies Army (KNIL) and the naval forces of the Royal Netherlands Navy (KM). Although neither the KNIL nor the KM was directly responsible for the formation of the future Indonesian armed forces, and both mainly took the role of foe during the Indonesian National Revolution of 1945 to 1949, the KNIL provided military training and infrastructure for some future TNI officers and other ranks. There were military training centres, military schools, and academies in the Dutch East Indies. Alongside Dutch volunteers and European mercenaries, the KNIL also recruited indigenous soldiers, especially Ambonese, Kai Islanders, Timorese, and Minahasan people. In 1940, with the Netherlands under German occupation and the Japanese pressing for access to Dutch East Indies oil supplies, the Dutch opened the KNIL to large intakes of previously excluded Javanese. Some of the indigenous soldiers who had received a Dutch KNIL military academy education would later become important TNI officers, such as Soeharto and Nasution.
Indonesian nationalism and militarism began to gain momentum and support during the Japanese occupation of Indonesia in World War II. To win the support of the Indonesian people in its war against the Western Allied forces, Japan encouraged and backed Indonesian nationalist movements by providing Indonesian youth with military training and weapons. On 3 October 1943, the Japanese military formed an Indonesian volunteer army called PETA ( – Defenders of the Homeland). The Japanese intended PETA to assist their forces in opposing a possible Allied invasion. The military training of Indonesian youth was originally meant to rally local support for the Japanese Empire, but it later became a significant resource for the Republic of Indonesia during the Indonesian National Revolution of 1945 to 1949. Many of the men who served in PETA, officers and NCOs alike, such as Soedirman, formed the majority of the personnel of the future armed forces.
The Indonesian Armed Forces began as the BKR ( – People's Security Agency), formed at the third PPKI meeting on 29 August 1945. It united local militias into a nationwide force to maintain security across the newly declared independent Indonesia, and was conceived more as a civil defence force than an armed force. The decision to create a "security agency" rather than an army was taken to lessen the probability of the Allied forces viewing it as an armed revolution and invading in full force. Under the terms of its capitulation, Japan was obliged to return the Asian territories it had conquered to the Allied powers, certainly not to liberate them as independent states.
When confrontations between Indonesia and the Allied forces became sharp and hostile, the TKR ("Tentara Keamanan Rakyat" – People's Security Armed Forces) was formed on 5 October 1945 on the basis of existing BKR units. This was a move to formalise, unite, and organise the splintered pockets of independent fighters ("laskar") across Indonesia, ensuring a more professional military approach to contend with the Dutch and the invading Allied forces.
The Indonesian armed forces have seen significant action since their establishment in 1945. Their first conflict was the 1945–1949 Indonesian National Revolution, in which the 1945 Battle of Surabaya was especially important as the baptism of fire of the young armed forces.
In January 1946, the TKR was renamed the "Tentara Keselamatan Rakyat" (People's Safety Military Forces), then succeeded by the TRI ("Tentara Republik Indonesia" – Republic of Indonesia Armed Forces), a further step to professionalise the armed forces and increase their ability to engage systematically.
In June 1947, the TRI was renamed, per a government decision, the TNI ("Tentara Nasional Indonesia" – Indonesian National Armed Forces), a merger of the TRI with the independent paramilitary organizations ("laskar") across Indonesia. By 1950 it had become the APRIS, or "National Military Forces of the Republic of the United States of Indonesia" ("Angkatan Perang Republik Indonesia Serikat"), and by mid-year the APRI, or "Military Forces of the Republic of Indonesia" ("Angkatan Perang Republik Indonesia"), also absorbing native personnel from both the former KNIL and KM into the expanded republic.
On 21 June 1962, the name "Tentara Nasional Indonesia" (TNI) was changed to "Angkatan Bersenjata Republik Indonesia" (Republic of Indonesia Armed Forces, ABRI). The POLRI (Indonesian National Police) was integrated under the Armed Forces and renamed "Angkatan Kepolisian" (Police Force), and its commander maintained the concurrent status of Minister of Defence and Security, reporting to the President, who is commander-in-chief. The commanding generals (later chiefs of staff) and the Chief of the National Police all held ministerial status as members of the cabinet of the republic, while a number of higher-ranking officers were appointed to other cabinet posts. On 1 July 1969, the Police Force's name reverted to "POLRI".
After the fall of Suharto in 1998, a democratic and civil movement grew against the military's entrenched role and involvement in Indonesian politics. As a result, the post-Suharto Indonesian military has undergone reforms, such as the revocation of the Dwifungsi doctrine and the termination of military-controlled businesses. The reforms also addressed law enforcement in civil society, calling into question the position of the Indonesian police under the military umbrella. These reforms led to the separation of the police force from the military: in April 1999, the Indonesian National Police officially regained its independence and is now a separate entity from the armed forces proper. The official name of the Indonesian military also changed from "Angkatan Bersenjata Republik Indonesia" (ABRI) back to "Tentara Nasional Indonesia" (TNI).
Beginning in 2010, the Indonesian government has sought to strengthen the TNI to achieve a Minimum Essential Force (MEF). The MEF is divided into three strategic plan stages: 2010–2014, 2015–2019, and 2020–2024. The government initially budgeted Rp156 trillion for the provision of the TNI's main weapon systems (alutsista) in the MEF period 2010–2014.
The overriding Indonesian military philosophy for defence of the archipelago is a civilian-military defence called "Total People's Defence", consisting of a three-stage war: a short initial period in which an invader would defeat the conventional Indonesian military, a long period of territorial guerrilla war, and a final stage of expulsion, with the military acting as a rallying point for defence from the grass-roots village level upwards. The doctrine relies on a close bond between villager and soldier to encourage the support of the entire population and enable the military to manage all war-related resources.
The civilian population would provide logistical support, intelligence, and upkeep, with some trained to join the guerrilla struggle. The armed forces regularly engage in large-scale community and rural development. The "Armed Forces Enters the Village" (AMD/TMMD) program, begun in 1983, is held three times annually to organise and assist the construction and development of civilian village projects.
The current developments in Indonesia's defence policy are framed within the concept of achieving a "Minimum Essential Force" (MEF) by 2024. The MEF concept was first articulated in Presidential Decree No. 7/2008 on General Policy Guidelines on State Defence Policy, which came into effect on 26 January 2008. The MEF is defined as a capability-based defence and force level that can guarantee the attainment of immediate strategic defence interests, with procurement priority given to the improvement of minimum defence strength and/or the replacement of outdated main weapon systems and equipment. To achieve this aim, the MEF was restructured into a series of three strategic programmes, with timeframes of 2010–2014, 2015–2019, and 2020–2024, as well as spending of up to 1.5–2% of GDP.
Under Article 2 of the TNI Law, the identity of the Indonesian National Armed Forces is that the TNI must aim to become:
The Indonesian armed forces have long been organised around territorial commands. Following independence, seven were established by 1958. No central reserve formation was formed until 1961 (when the 1st Army Corps of the Army General Reserve, "CADUAD", the precursor of today's Kostrad was established). It was only after the attempted coup d'état of 1 October 1965 and General Suharto's rise to the presidency that it became possible to integrate the armed forces and begin to develop a joint operations structure.
Following a decision in 1985, a major reorganisation separated the Ministry of Defense and Security ("MoDS") from the "ABRI" (the name of the Indonesian Armed Forces during Suharto's presidency) headquarters and staff. The MoDS was made responsible for planning, acquisition, and management tasks but had no command or control of troop units. The "ABRI" commander-in-chief retained command and control of all armed forces and continued by tradition to be the senior military officer in the country, while remaining part of the cabinet.
The administrative structure of Ministry of Defense and Security consisted of a minister, deputy minister, secretary general, inspector general, three directorates-general and a number of functional centers and institutes. The minister, deputy minister, inspector general, and three directors general were retired senior military officers; the secretary general (who acted as deputy minister) and most functional center chiefs were, as is the case today, active-duty military officers, while employees and staff were personnel of the armed forces and of the civil service.
The 1985 reorganisation also made significant changes in the armed forces chain of command. The four multi-service Regional Defense Commands ("Kowilhans") and the National Strategic Command ("Kostranas") were eliminated from the defense structure, establishing the Military Regional Command ("Kodam"), or area command, as the key organisation for strategic, tactical, and territorial operations for all services. The chain of command flowed directly from the "ABRI" commander in chief to the ten "Kodam" commanders, and then to subordinate army territorial commands. The former territorial commands of the air force and navy were eliminated from the structure altogether, with each of those services represented on the "Kodam" staff by a senior liaison officer. The navy and air force territorial commands were replaced by operational commands. The air force formed two Operational Commands ("Ko-Ops") while the navy had its two Fleet Commands, the Western and Eastern Armadas. The air force's National Air Defense Command ("Kohanudnas") remained under the "ABRI" commander in chief. It had an essentially defensive function that included responsibility for the early warning system.
After Suharto's presidency collapsed in 1998, the Indonesian National Police was separated from the Armed Forces, placing the Armed Forces under the direct auspices of the Ministry of Defense and the police force under the direct auspices of the President of Indonesia. Before 1998, the Armed Forces of Indonesia (then named "ABRI") comprised four service branches: the Indonesian Army, the Indonesian Navy, the Indonesian Air Force, and the Indonesian National Police. In 1999, after the post-Suharto reforms, the Armed Forces' name was changed to TNI ("Tentara Nasional Indonesia"), literally "The National Military of Indonesia", and the now independent police force was renamed POLRI ("Kepolisian Negara Republik Indonesia"), literally "The National Police Force of Indonesia". Although the Armed Forces and the National Police have been separated, they still cooperate and conduct special duties and tasks together for the sake of Indonesia's national security and integrity.
On 13 May 2018, Commander Hadi Tjahjanto reorganized the armed forces once more by inaugurating 4 new military units: Kostrad 3rd Infantry Division, 3rd Fleet Command, 3rd Air Force Operational Command and Marine Force III. The new military units are intended to reduce response time against any threats and problems in Eastern Indonesia. He also officially renamed the Western and Eastern Fleet Commands to 1st and 2nd Fleet Commands.
In accordance with Article 9 of Presidential Decree No. 66/2019, the Indonesian National Armed Forces organization consists of the following:
The Commander of the Indonesian National Armed Forces (Panglima TNI) and the Deputy Commander serve as the leadership element of the Indonesian National Armed Forces; both positions are held by four-star Generals/Admirals/Air Chief Marshals appointed by and reporting directly to the President of Indonesia. As of November 2019, the position of Deputy Commander remained vacant.
Indonesian Military Special Forces
In the immediate aftermath of the 2018 Surabaya bombings, President Widodo agreed to revive the TNI Joint Special Operations Command ("Koopsusgab") to assist the National Police in antiterrorism operations under certain conditions. This joint force is composed of the special forces of the National Armed Forces mentioned above and is under the direct control of the Commander of the National Armed Forces. In July 2019, President Widodo officially formed the Armed Forces Special Operations Command ("Koopsus TNI"), comprising 400 personnel each from Sat-81 Gultor of Kopassus, Denjaka, and Den Bravo of Paskhas, to conduct special operations protecting national interests within or outside Indonesian territory.
Military spending in the national budget was widely estimated at 3% of GDP in 2005, but is supplemented by revenue from many military-run businesses and foundations. The defence budget for 2017 was $8.17 billion.
Beeson and Bellamy wrote in 2002 that: 'By some estimates 60–65% of the military's actual operating expenses come from "off-budget sources" rather than the government (Cochrane 2002). This is a euphemism for a host of legal and illegal practices that include legitimate involvement in state-owned and private businesses, as well as a range of activities in the "black economy". An estimated 30% of government funding of the military "is lost through corruption in the process of buying military equipment and supplies" (International Crisis Group 2001: 13).'
In addition, the territorial commands (KODAM) are responsible for 'the bulk of their operational fund-raising.'
The Indonesian armed forces are all-volunteer. The active military strength is 395,500; available manpower fit for military service (males aged 16 to 49) is 75,000,000, with a further 4,500,000 becoming fit for service annually.
In the Indonesian Army, Navy (including the Marine Corps), Air Force, and the police force, the ranks consist of officers, known in Indonesian as "Perwira"; NCOs, "Bintara"; and enlisted personnel, "Tamtama". The rank titles of the Marine Corps are the same as those of the Army, but it still uses the Navy's style of insignia (for lower-ranking enlisted men, blue replaces the red colour).
The Armed Forces Pledge is a pledge of loyalty and fidelity of military personnel to the government and people of Indonesia and to the principles of nationhood.
Foreign relations of Indonesia
Since independence, Indonesian foreign relations have adhered to a "free and active" foreign policy, seeking to play a role in regional affairs commensurate with its size and location but avoiding involvement in conflicts among major powers. Indonesian foreign policy under the "New Order" government of President Suharto moved away from the stridently anti-Western, anti-American posturing that characterised the latter part of the Sukarno era. Following Suharto's ouster in 1998, Indonesia's government has preserved the broad outlines of Suharto's independent, moderate foreign policy. Preoccupation with domestic problems has not prevented successive presidents from travelling abroad.
Indonesia's relations with the international community were strained as a result of its invasion of neighbouring East Timor in December 1975, the subsequent annexation and occupation, the independence referendum in 1999 and the resulting violence afterwards. As one of the founding members of Association of Southeast Asian Nations (ASEAN), established in 1967, and also as the largest country in Southeast Asia, Indonesia has put ASEAN as the cornerstone of its foreign policy and outlook. After the transformation from Suharto's regime to a relatively open and democratic country in the 21st century, Indonesia today exercises its influence to promote co-operation, development, democracy, security, peace and stability in the region through its leadership in ASEAN.
Indonesia managed to play a role as a peacemaker in the Cambodia-Thailand conflict over the Preah Vihear temple. Indonesia and other ASEAN member countries collectively have also played a role in encouraging the government of Myanmar to open up its political system and introduce other reforms more quickly.
Given its geographic and demographic size, rising capabilities and diplomatic initiatives, scholars have classified Indonesia as one of Asia-Pacific's middle powers.
A cornerstone of Indonesia's contemporary foreign policy is its participation in the Association of Southeast Asian Nations (ASEAN), of which it was a founding member in 1967 with Thailand, Malaysia, Singapore, and the Philippines. Since then, Brunei, Vietnam, Laos, Burma, and Cambodia also have joined ASEAN. While organised to promote shared economic, social, and cultural goals, ASEAN acquired a security dimension after Vietnam's invasion of Cambodia in 1979; this aspect of ASEAN expanded with the establishment of the ASEAN Regional Forum in 1994, which comprises 22 countries, including the US.
The Indonesian national capital, Jakarta, is also the seat of the ASEAN Secretariat, located at Jalan Sisingamangaraja No. 70A, Kebayoran Baru, South Jakarta. In addition to serving their diplomatic missions to Indonesia, a number of foreign embassies and diplomatic missions in Jakarta are also accredited to ASEAN. The ASEAN headquarters has contributed to Jakarta's prominence as a diplomatic hub in Southeast Asia.
In the late 1990s and early 2000s, Indonesia's continued domestic troubles distracted it from ASEAN matters and consequently lessened its influence within the organisation. However, after its political and economic transformation, from the turmoil of the 1998 "Reformasi" to a relatively open and democratic civil society with rapid economic growth in the 2010s, Indonesia returned to the region's diplomatic stage by assuming a leadership role in ASEAN in 2011. Indonesia is viewed as having the weight, international legitimacy, and global appeal to draw support and attention from around the world to ASEAN. Indonesia believes that ASEAN can contribute positively to the international community by promoting economic development and co-operation, improving the security, peace, and stability of ASEAN, and keeping the Southeast Asian region free from conflict.
Indonesia's bilateral relations with three neighbouring ASEAN members, Malaysia, Singapore, and Vietnam, are not without challenges. If not appropriately managed, these could result in mutual mistrust and suspicion, hindering bilateral and regional co-operation. In an era of a rising Indonesia that might assert its leadership role within ASEAN, the problem could become more significant. Nevertheless, the rise of Indonesia should be regarded with optimism. First, although Indonesia is likely to become more assertive, the general tone of its foreign policy is mainly liberal and accommodating, and the consolidation of Indonesia's democratic government has played a key role and exerted influence in ASEAN. Second, ASEAN's institutional web will sustain engagements and regular meetings between regional elites, deepening their mutual understanding and personal connections.
Indonesia was also one of the founders of the Non-Aligned Movement (NAM) and has taken moderate positions in its councils. As NAM Chairman in 1992–95, it led NAM positions away from the rhetoric of North–South confrontation, advocating instead the broadening of North–South co-operation in the area of development. Indonesia continues to be a prominent, and generally helpful, leader of the Non-Aligned Movement.
Indonesia has the world's largest Muslim population and is a member of OIC. It carefully considers the interests of Islamic solidarity in its foreign policy decisions but generally has been an influence for moderation in the OIC.
Indonesia has been a strong supporter of the Asia-Pacific Economic Cooperation (APEC) forum. Mainly through the efforts of President Suharto at the 1994 meeting in Indonesia, APEC members agreed to implement free trade in the region by 2010 for industrialised economies and 2020 for developing economies. As the largest economy in Southeast Asia, Indonesia also belongs to other economic groupings such as G20 and Developing 8 Countries (D-8).
In 2008, Indonesia was admitted as a member of the G20, the only ASEAN member state in the group. Through its membership in this global economic grouping, which accounts for around 85% of the global economy, Indonesia is keen to position itself as a mouthpiece for ASEAN countries and as a representative of the developing world within the G20.
After 1966, Indonesia welcomed and maintained close relations with the donor community, particularly the United States, western Europe, Australia, and Japan, through the Inter-Governmental Group on Indonesia (IGGI) and its successor, the Consultative Group on Indonesia (CGI), which provided substantial foreign economic assistance. Problems in Timor and Indonesia's reluctance to implement economic reform have complicated Indonesia's relationship with donors.
Indonesia has numerous outlying and remote islands, some of which harbour pirate groups that regularly attack ships in the Strait of Malacca to the north, as well as illegal fishing crews known for penetrating Australian and Philippine waters. Indonesian waters themselves are the target of many illegal fishing activities by numerous foreign vessels.
Indonesia has some present and historic territorial disputes with neighbouring nations, such as:
List of islands of Indonesia
The islands of Indonesia, also known as the Indonesian Archipelago, may refer either to the islands comprising the country of Indonesia or to the geographical groups which include its islands. According to the Indonesian Coordinating Ministry for Maritime Affairs, of 17,508 officially listed islands within the territory of the Republic of Indonesia, 16,671 island names have been verified by the United Nations Group of Experts on Geographical Names (UNGEGN) as of 2018. This makes Indonesia the world's largest island country.
The exact number of islands comprising Indonesia varies among definitions and sources. According to a geospatial survey conducted between 2007 and 2010 by Badan Koordinasi Survei dan Pemetaan Nasional (Bakorsurtanal), the National Coordinating Agency for Survey and Mapping, Indonesia has 13,466 islands. However, according to an earlier survey in 2002 by the National Institute of Aeronautics and Space (LAPAN), the Indonesian archipelago has 18,307 islands, and according to the CIA "World Factbook", there are 17,508 islands. The discrepancy in the numbers of Indonesian islands arises because the earlier surveys included "tidal islands": sandy cays and rocky reefs that appear during low tide and are submerged during high tide. According to estimates made by the government of Indonesia, 8,844 islands have been named, with 922 of those permanently inhabited.
The following islands are listed by province:
"199 islands"
"479 islands"
"about 3,200 islands"
"Islands near the Indonesian half of New Guinea island."
"610 islands, 35 inhabited" | https://en.wikipedia.org/wiki?curid=14652 |
Iran
Iran ( ), also called Persia, and officially the Islamic Republic of Iran ( ), is a country in Western Asia. It is bordered to the northwest by Armenia and Azerbaijan, to the north by the Caspian Sea, to the northeast by Turkmenistan, to the east by Afghanistan and Pakistan, to the south by the Persian Gulf and the Gulf of Oman, and to the west by Turkey and Iraq. Its central location in Eurasia and proximity to the Strait of Hormuz give it significant geostrategic importance. Tehran is the capital and largest city, as well as the leading economic and cultural hub; it is also the most populous city in Western Asia, with more than 8.8 million residents, and up to 15 million including the metropolitan area. With 83 million inhabitants, Iran is the world's 17th most populous country. Spanning , it is the second largest country in the Middle East and the 17th largest in the world.
Iran is home to one of the world's oldest civilizations, beginning with the formation of the Elamite kingdoms in the fourth millennium BC. It was first unified by the Iranian Medes in the seventh century BC, and reached its territorial height in the sixth century BC, when Cyrus the Great founded the Achaemenid Empire, which stretched from Eastern Europe to the Indus Valley, making it one of the largest empires in history. The empire fell to Alexander the Great in the fourth century BC and was divided into several Hellenistic states. An Iranian rebellion established the Parthian Empire in the third century BC, which was succeeded in the third century AD by the Sasanian Empire, a major world power for the next four centuries.
Arab Muslims conquered the empire in the seventh century AD, and the subsequent Islamization of Iran led to the decline of the once dominant Zoroastrian religion. Iran subsequently became a major center of Islamic culture and learning, with its art, literature, philosophy, and architecture spreading across the Muslim world and beyond during the Islamic Golden Age. Over the next two centuries, a series of native Muslim dynasties emerged before the Seljuq Turks and the Ilkhanate Mongols conquered the region. In the 15th century, the native Safavids reestablished a unified Iranian state and national identity, with the country's conversion to Shia Islam marking a turning point in Iranian and Muslim history.
Under the reign of Nader Shah in the 18th century, Iran once again became a major world power, though by the 19th century a series of conflicts with the Russian Empire led to significant territorial losses. However, Iran would remain one of the few non-European states to avoid colonization by Europe. The early 20th century saw the Persian Constitutional Revolution, which created the country's first constitutional monarchy and legislature, and a gradual move towards greater democracy. Efforts to nationalize its fossil fuel supply from Western companies led to an Anglo-American coup in 1953, which resulted in greater autocratic rule under Mohammad Reza Pahlavi and growing Western political influence. He went on to launch a far-reaching series of reforms in 1963, which included industrial growth, infrastructure expansion, land reforms, and increased women's rights. However, widespread dissatisfaction with the monarchy culminated in the Iranian Revolution, which established the current Islamic Republic in 1979. Iran was invaded by Iraq in 1980, leading to a bloody and protracted war that lasted for almost eight years, and ended in a stalemate with devastating losses for both sides.
Iran's political system combines elements of a presidential democracy and an Islamic theocracy, with the ultimate authority vested in an autocratic "Supreme Leader". The Iranian government is widely considered to be authoritarian, with significant constraints and abuses against human rights and civil liberties, including the violent suppression of mass protests, unfair elections, and unequal rights for women and children.
Iran is a founding member of the UN, ECO, OIC, and OPEC. It is a major regional and middle power, and its large reserves of fossil fuels—including the world's largest natural gas supply and the third largest proven oil reserves—exert considerable influence in international energy security and the world economy. The country's rich cultural legacy is reflected in part by its 22 UNESCO World Heritage Sites, the third largest number in Asia and 10th largest in the world. Historically a multi-ethnic country, Iran remains a pluralistic society comprising numerous ethnic, linguistic, and religious groups, the largest being Persians, Azeris, Kurds, Mazandaranis and Lurs.
The term "Iran" derives directly from Middle Persian , first attested in a third-century inscription at Rustam Relief, with the accompanying Parthian inscription using the term , in reference to the Iranians. The Middle Iranian "ērān" and "aryān" are oblique plural forms of gentilic nouns "ēr-" (Middle Persian) and "ary-" (Parthian), both deriving from Proto-Iranian "*arya-" (meaning "Aryan", i.e. "of the Iranians"), recognized as a derivative of Proto-Indo-European ', meaning "one who assembles (skilfully)". In the Iranian languages, the gentilic is attested as a self-identifier, included in ancient inscriptions and the literature of the Avesta, and remains also in other Iranian ethnic names "Alan" ( ) and "Iron" (). According to the Iranian mythology, the country's name comes from name of Iraj, a legendary prince and shah who was killed by his brothers.
Historically, Iran has been referred to as "Persia" by the West, due mainly to the writings of Greek historians who referred to all of Iran as "Persis" (from Old Persian "Pārsa"), meaning "land of the Persians", while Persis itself was only one of the provinces of ancient Iran, today defined as Fars. As the most extensive interaction the Ancient Greeks had with any outsider was with the Persians, the term persisted, even long after the Greco-Persian Wars (499–449 BC).
In 1935, Reza Shah requested the international community to refer to the country by its native name, "Iran", effective 22 March that year. Opposition to the name change led to a reversal of the decision in 1959, and Professor Ehsan Yarshater, editor of "Encyclopædia Iranica", promoted a move to use "Persia" and "Iran" interchangeably. Today, both "Iran" and "Persia" are used in cultural contexts, while "Iran" remains irreplaceable in official state contexts.
Historical and cultural usage of the word "Iran" is not restricted to the modern state proper. "Greater Iran" ("Irānzamīn" or "Irān e Bozorg") refers to territories of the Iranian cultural and linguistic zones. In addition to modern Iran, it includes portions of the Caucasus, Anatolia, Mesopotamia, Afghanistan, and Central Asia.
The Persian pronunciation of "Iran" is [ʔiːˈɾɒːn]. Common English pronunciations of "Iran" are /ɪˈrɑːn/ and /ɪˈræn/, both listed in the "Oxford English Dictionary" and in American English dictionaries such as Merriam-Webster's and "Random House Webster's Unabridged Dictionary". The "Cambridge Dictionary" lists /ɪˈrɑːn/ as the British pronunciation, and the Glasgow-based "Collins English Dictionary" provides both British and American English pronunciations, as does the pronunciation guide from Voice of America.
The American English pronunciation /aɪˈræn/ ("eye-RAN") may be heard in U.S. media. Max Fisher in "The Washington Post" prescribed /ɪˈrɑːn/ for "Iran", while proscribing /aɪˈræn/. "The American Heritage Dictionary of the English Language", in its 2014 Usage Ballot, addressed the topic of the pronunciations of Iran and Iraq. According to this survey, the pronunciations /ɪˈrɑːn/ and /ɪˈræn/ were deemed almost equally acceptable, with /ɪˈrɑːn/ preferred by most panelists participating in the ballot. With regard to the /aɪˈræn/ pronunciation, more than 70% of the panelists deemed it unacceptable. Among the reasons given were that /aɪˈræn/ has "hawkish connotations" and sounds "angrier", "xenophobic", "ignorant", and "not... cosmopolitan". The /ɪˈrɑːn/ pronunciation remains standard and acceptable, reflected in the entry for "Iran" in the American Heritage Dictionary itself, as well as in each of the other major dictionaries of American English.
The earliest attested archaeological artifacts in Iran, such as those excavated at Kashafrud and Ganj Par in the north of the country, confirm a human presence since the Lower Paleolithic. Neanderthal artifacts from the Middle Paleolithic have been found mainly in the Zagros region, at sites such as Warwasi and Yafteh. From the 10th to the seventh millennium BC, early agricultural communities began to flourish in and around the Zagros region in western Iran, including Chogha Golan, Chogha Bonut, and Chogha Mish.
The occupation of grouped hamlets in the area of Susa, as determined by radiocarbon dating, ranges from 4395–3955 to 3680–3490 BC. There are dozens of prehistoric sites across the Iranian Plateau, pointing to the existence of ancient cultures and urban settlements in the fourth millennium BC. During the Bronze Age, the territory of present-day Iran was home to several civilizations, including Elam, Jiroft, and Zayanderud. Elam, the most prominent of these, developed in the southwest alongside the civilizations of Mesopotamia and continued to exist until the emergence of the Iranian empires. The advent of writing in Elam paralleled that of Sumer, and Elamite cuneiform developed from the third millennium BC.
From the 34th to the 20th century BC, northwestern Iran was part of the Kura-Araxes culture, which stretched into the neighboring Caucasus and Anatolia. From the early second millennium BC, Assyrians settled in swaths of western Iran and incorporated the region into their territories.
By the second millennium BC, the ancient Iranian peoples arrived in what is now Iran from the Eurasian Steppe, rivaling the native settlers of the region. As the Iranians dispersed into the wider area of Greater Iran and beyond, the boundaries of modern-day Iran were dominated by Median, Persian, and Parthian tribes.
From the late 10th to the late seventh century BC, the Iranian peoples, together with the "pre-Iranian" kingdoms, fell under the domination of the Assyrian Empire, based in northern Mesopotamia. Under king Cyaxares, the Medes and Persians entered into an alliance with the Babylonian ruler Nabopolassar, as well as with the fellow Iranian Scythians and Cimmerians, and together they attacked the Assyrian Empire. The ensuing war ravaged the Assyrian Empire between 616 and 605 BC, freeing the allied peoples from three centuries of Assyrian rule. The earlier unification of the Median tribes under king Deioces in 728 BC had led to the foundation of the Median Empire which, by 612 BC, controlled almost the entire territory of present-day Iran and eastern Anatolia. This also marked the end of the Kingdom of Urartu, which was subsequently conquered and dissolved.
In 550 BC, Cyrus the Great, the son of Mandane and Cambyses I, took over the Median Empire and founded the Achaemenid Empire by unifying other city-states. The conquest of Media was a result of what is called the Persian Revolt, initially triggered by the actions of the Median ruler Astyages; the revolt quickly spread to other provinces as they allied with the Persians. Later conquests under Cyrus and his successors expanded the empire to include Lydia, Babylon, Egypt, parts of the Balkans and Eastern Europe proper, as well as the lands to the west of the Indus and Oxus rivers.
In 539 BC, Persian forces defeated the Babylonian army at Opis, conquering the Neo-Babylonian Empire and ending around four centuries of Mesopotamian domination of the region. Cyrus entered Babylon and presented himself as a traditional Mesopotamian monarch. Subsequent Achaemenid art and iconography reflect the influence of the new political reality in Mesopotamia.
At its greatest extent, the Achaemenid Empire included territories of modern-day Iran, Republic of Azerbaijan (Arran and Shirvan), Armenia, Georgia, Turkey (Anatolia), much of the Black Sea coastal regions, northeastern Greece and southern Bulgaria (Thrace), northern Greece and North Macedonia (Paeonia and Macedon), Iraq, Syria, Lebanon, Jordan, Israel and the Palestinian territories, all significant population centers of ancient Egypt as far west as Libya, Kuwait, northern Saudi Arabia, parts of the United Arab Emirates and Oman, Pakistan, Afghanistan, and much of Central Asia, making it the first world government and the largest empire the world had yet seen.
It is estimated that in 480 BC, 50 million people lived in the Achaemenid Empire. The empire at its peak ruled over 44% of the world's population, the highest such figure for any empire in history.
The Achaemenid Empire is noted for the release of the Jewish exiles from Babylon, for building infrastructure such as the Royal Road and the Chapar (postal service), and for the use of an official language, Imperial Aramaic, throughout its territories. The empire had a centralized, bureaucratic administration under the emperor, a large professional army, and civil services, inspiring similar developments in later empires.
Eventual conflict on the western borders began with the Ionian Revolt, which erupted into the Greco-Persian Wars; the wars continued through the first half of the fifth century BC and ended with the withdrawal of the Achaemenids from all of their territories in the Balkans and Eastern Europe proper.
In 334 BC, Alexander the Great invaded the Achaemenid Empire, defeating the last Achaemenid emperor, Darius III, at the Battle of Issus. Following Alexander's premature death, Iran came under the control of the Hellenistic Seleucid Empire. In the middle of the second century BC, the Parthian Empire rose to become the main power in Iran, and the centuries-long geopolitical arch-rivalry between the Romans and the Parthians began, culminating in the Roman–Parthian Wars. The Parthian Empire continued as a feudal monarchy for nearly five centuries, until AD 224, when it was succeeded by the Sasanian Empire. Together with their neighboring arch-rivals, the Roman-Byzantines, they made up the world's two most dominant powers for over four centuries.
The Sasanians established an empire within the frontiers achieved by the Achaemenids, with their capital at Ctesiphon. Late antiquity is considered one of Iran's most influential periods, as under the Sasanians their influence reached the culture of ancient Rome (and through that as far as Western Europe), Africa, China, and India, and played a prominent role in the formation of the medieval art of both Europe and Asia.
Most of the era of the Sasanian Empire was overshadowed by the Roman–Persian Wars, which raged on the western borders at Anatolia, the Western Caucasus, Mesopotamia, and the Levant, for over 700 years. These wars ultimately exhausted both the Romans and the Sasanians and led to the defeat of both by the Muslim invasion.
Throughout the Achaemenid, Parthian, and Sasanian eras, several offshoots of the Iranian dynasties established eponymous branches in Anatolia and the Caucasus, including the Pontic Kingdom, the Mihranids, and the Arsacid dynasties of Armenia, Iberia (Georgia), and Caucasian Albania (present-day Republic of Azerbaijan and southern Dagestan).
The prolonged Byzantine–Sasanian wars, most importantly the climactic war of 602–628, as well as the social conflict within the Sasanian Empire, opened the way for an Arab invasion of Iran in the seventh century. The empire was initially defeated by the Rashidun Caliphate, which was succeeded by the Umayyad Caliphate, followed by the Abbasid Caliphate. A prolonged and gradual process of state-imposed Islamization followed, which targeted Iran's then Zoroastrian majority and included religious persecution, demolition of libraries and fire temples, a special tax penalty ("jizya"), and language shift.
In 750, the Abbasids overthrew the Umayyads, notably with the support of the "mawali" (converted Iranians). The mawali formed the majority of the rebel army, which was led by the converted Iranian general Abu Muslim. The arrival of the Abbasid caliphs saw a relative revival of Iranian culture and influence, as the role of the old Arab aristocracy was partially replaced by a Muslim Iranian bureaucracy.
After two centuries of Arab rule, semi-independent and independent Iranian kingdoms—including the Tahirids, Saffarids, Samanids, and Buyids—began to appear on the fringes of the declining Abbasid Caliphate. By the Samanid era in the ninth and 10th centuries, the efforts of Iranians to regain their independence had been well solidified.
The blossoming literature, philosophy, mathematics, medicine, astronomy, and art of Iran became major elements in the formation of a new age for the Iranian civilization, during a period known as the "Islamic Golden Age". The Islamic Golden Age reached its peak in the 10th and 11th centuries, during which Iran was the main theater of scientific activity. After the 10th century, Persian was used alongside Arabic for scientific, medical, philosophical, arithmetical, historical, and musical works, and renowned Iranian writers, such as Tusi, Avicenna, Qotb-od-Din Shirazi, and Biruni, made major contributions to scientific writing. Among Iran's famous medieval scientists, Al-Khwarizmi (whose name was Latinized as "Algoritmi") played a significant role in the development of algebra and of the Hindu–Arabic numerals; his ninth-century work "On the Calculation with Hindu Numerals" helped spread the numeral system that has since been adopted worldwide.
The cultural revival that began in the Abbasid period led to a resurfacing of the Iranian national identity; thus, attempts at Arabization never succeeded in Iran. The Shu'ubiyya movement became a catalyst for Iranians to regain independence in their relations with the Arab invaders. The most notable effect of this movement was the continuation of the Persian language, attested in the works of the epic poet Ferdowsi, now considered the most prominent figure in Iranian literature.
The 10th century saw a mass migration of Turkic tribes from Central Asia into the Iranian Plateau. Turkic tribesmen were first used in the Abbasid army as mamluks (slave-warriors), replacing Iranian and Arab elements within the army, and as a result the mamluks gained significant political power. In 999, large portions of Iran came briefly under the rule of the Ghaznavids, whose rulers were of mamluk Turkic origin, and subsequently, for a longer period, under the Seljuk and Khwarezmian empires. These dynasties were Persianized, adopting Persian models of administration and rulership. The Seljuks later gave rise to the Sultanate of Rum in Anatolia, taking their thoroughly Persianized identity with them. The adoption and patronage of Iranian culture by Turkish rulers resulted in the development of a distinct Turko-Persian tradition.
From 1219 to 1221, under the Khwarazmian Empire, Iran suffered a devastating invasion by the Mongol army of Genghis Khan. According to Steven R. Ward, "Mongol violence and depredations killed up to three-fourths of the population of the Iranian Plateau, possibly 10 to 15 million people. Some historians have estimated that Iran's population did not again reach its pre-Mongol levels until the mid-20th century."
Following the fracture of the Mongol Empire in 1256, Hulagu Khan, grandson of Genghis Khan, established the Ilkhanate in Iran. In 1370, yet another conqueror, Timur, followed the example of Hulagu, establishing the Timurid Empire which lasted for another 156 years. In 1387, Timur ordered the complete massacre of Isfahan, reportedly killing 70,000 citizens. The Ilkhans and the Timurids soon came to adopt the ways and customs of the Iranians, surrounding themselves with a culture that was distinctively Iranian.
In 1501, Ismail I of Ardabil established the Safavid Empire, with his capital at Tabriz. Beginning with Azerbaijan, he subsequently extended his authority over all of the Iranian territories, establishing an intermittent Iranian hegemony over the surrounding regions and reasserting the Iranian identity within large parts of Greater Iran. Iran was predominantly Sunni, but Ismail instigated a forced conversion to the Shia branch of Islam, which spread throughout the Safavid territories in the Caucasus, Iran, Anatolia, and Mesopotamia. As a result, modern-day Iran is the only official Shia nation in the world, and Shia Muslims hold an absolute majority in both Iran and the Republic of Azerbaijan, the two countries with the highest percentages of Shia inhabitants in the world. Meanwhile, the centuries-long geopolitical and ideological rivalry between Safavid Iran and the neighboring Ottoman Empire led to numerous Ottoman–Iranian wars.
The Safavid era peaked in the reign of Abbas I (1587–1629), when Iran surpassed its Turkish archrivals in strength and became a leading hub of science and art in western Eurasia. The era also saw the start of mass integration of Caucasian populations into new layers of Iranian society, as well as their mass resettlement within the heartlands of Iran, which played a pivotal role in the history of Iran for centuries onwards. Following a gradual decline in the late 1600s and early 1700s, caused by internal conflicts, continuous wars with the Ottomans, and foreign (most notably Russian) interference, Safavid rule was ended by Pashtun rebels who besieged Isfahan and defeated Sultan Husayn in 1722.
In 1729, Nader Shah, a chieftain and military genius from Khorasan, successfully drove out and conquered the Pashtun invaders. He subsequently took back the annexed Caucasian territories, which had been divided between the Ottoman and Russian authorities amid the ongoing chaos in Iran. Under Nader Shah, Iran reached its greatest extent since the Sasanian Empire, reestablishing Iranian hegemony over the entire Caucasus as well as other major parts of west and central Asia, and briefly possessing what was arguably the most powerful empire of the time.
Nader Shah invaded India and sacked distant Delhi by the late 1730s. His territorial expansion and military successes went into decline following his final campaigns in the Northern Caucasus against the then-revolting Lezgins. The assassination of Nader Shah sparked a brief period of civil war and turmoil, after which Karim Khan of the Zand dynasty came to power in 1750, bringing a period of relative peace and prosperity.
Compared to its preceding dynasties, the geopolitical reach of the Zand dynasty was limited. Many of the Iranian territories in the Caucasus gained "de facto" autonomy and were locally ruled through various Caucasian khanates; despite their self-rule, however, they all remained subjects and vassals of the Zand king. Another civil war ensued after the death of Karim Khan in 1779, out of which Agha Mohammad Khan emerged, founding the Qajar dynasty in 1794.
In 1795, following the disobedience of their Georgian subjects and the Georgians' alliance with the Russians, the Qajars captured Tbilisi in the Battle of Krtsanisi and drove the Russians out of the entire Caucasus, reestablishing Iranian suzerainty over the region.
The Russo-Iranian wars of 1804–1813 and 1826–1828 resulted in large, irrevocable territorial losses for Iran in the Caucasus and substantial gains for the neighboring Russian Empire. Under the treaties of Gulistan and Turkmenchay, Iran lost control over its integral territories in the region, comprising modern-day Dagestan, Georgia, Armenia, and the Republic of Azerbaijan, lands that had formed part of the very concept of Iran for centuries. The area north of the Aras River, within which the contemporary Republic of Azerbaijan, eastern Georgia, Dagestan, and Armenia are located, remained Iranian territory until it was occupied by Russia in the course of the 19th century.
As Iran shrank, many Transcaucasian and North Caucasian Muslims moved towards Iran, especially in the aftermath of the Circassian Genocide and in the decades that followed, while Iran's Armenians were encouraged to settle in the newly incorporated Russian territories, causing significant demographic shifts.
Around 1.5 million people—20 to 25% of the population of Iran—died as a result of the Great Famine of 1870–1871.
Between 1872 and 1905, a series of protests took place in response to the sale of concessions to foreigners by the Qajar monarchs Naser-ed-Din and Mozaffar-ed-Din, leading to the Constitutional Revolution in 1905. The first Iranian constitution and the first national parliament of Iran were founded in 1906, through the ongoing revolution. The constitution included the official recognition of Iran's three religious minorities, namely Christians, Jews, and Zoroastrians, which has remained a basis of Iranian legislation ever since. The struggle over the constitutional movement was followed by the Triumph of Tehran in 1909, when Mohammad Ali Shah was defeated and forced to abdicate. On the pretext of restoring order, the Russians occupied northern Iran in 1911 and maintained a military presence in the region for years to come. This did not put an end to the civil uprisings, however, and was soon followed by Mirza Kuchik Khan's Jungle Movement against both the Qajar monarchy and the foreign invaders.
Despite Iran's neutrality during World War I, the Ottoman, Russian, and British empires occupied western Iran and fought the Persian Campaign before fully withdrawing their forces in 1921. At least 2 million Persian civilians died, whether directly in the fighting, in the Ottoman-perpetrated anti-Christian genocides, or in the war-induced famine of 1917–1919. A large number of Iranian Assyrian and Iranian Armenian Christians, as well as those Muslims who tried to protect them, were victims of mass murders committed by the invading Ottoman troops, notably in and around Khoy, Maku, Salmas, and Urmia.
Apart from the rule of Agha Mohammad Khan, the Qajar era is characterized as a century of misrule. The inability of Qajar Iran's government to maintain the country's sovereignty during and immediately after World War I led to the British-directed 1921 Persian coup d'état and the establishment of the Pahlavi dynasty: Reza Shah became the new Prime Minister of Iran and was declared monarch in 1925.
In the midst of World War II, in June 1941, Nazi Germany broke the Molotov–Ribbentrop Pact and invaded the Soviet Union, Iran's northern neighbor. The Soviets quickly allied themselves with the Allied countries, and in July and August 1941 the British demanded that the Iranian government expel all Germans from Iran. When Reza Shah refused, the British and Soviets launched a surprise invasion on 25 August 1941, and Reza Shah's government quickly surrendered. The invasion's strategic purpose was to secure a supply line to the USSR (later named the Persian Corridor), secure the oil fields and Abadan Refinery (of the UK-owned Anglo-Iranian Oil Company), and limit German influence in Iran. Following the invasion, on 16 September 1941, Reza Shah abdicated and was replaced by his 21-year-old son, Mohammad Reza Pahlavi.
During the rest of World War II, Iran became a major conduit for British and American aid to the Soviet Union and an avenue through which over 120,000 Polish refugees and Polish Armed Forces fled the Axis advance. At the 1943 Tehran Conference, the Allied "Big Three"—Joseph Stalin, Franklin D. Roosevelt, and Winston Churchill—issued the Tehran Declaration to guarantee the post-war independence and boundaries of Iran. However, at the end of the war, Soviet troops remained in Iran and established two puppet states in north-western Iran, namely the People's Government of Azerbaijan and the Republic of Mahabad. This led to the Iran crisis of 1946, one of the first confrontations of the Cold War, which ended after oil concessions were promised to the USSR and Soviet forces withdrew from Iran proper in May 1946. The two puppet states were soon overthrown and the oil concessions were later revoked.
In 1951, Mohammad Mosaddegh was appointed as the Prime Minister. He became enormously popular in Iran after he nationalized Iran's petroleum industry and oil reserves. He was deposed in the 1953 Iranian coup d'état, an Anglo-American covert operation that marked the first time the United States had participated in the overthrow of a foreign government during the Cold War.
After the coup, the Shah became increasingly autocratic and sultanistic, and Iran entered a phase of decades-long controversial close relations with the United States and some other foreign governments. While the Shah increasingly modernized Iran and claimed to retain it as a fully secular state, arbitrary arrests and torture by his secret police, the SAVAK, were used to crush all forms of political opposition.
Ruhollah Khomeini, a radical Muslim cleric, became an active critic of the Shah's far-reaching series of reforms known as the "White Revolution". Khomeini publicly denounced the government, and was arrested and imprisoned for 18 months. After his release in 1964, he refused to apologize, and was eventually sent into exile.
The 1973 spike in oil prices flooded the economy of Iran with foreign currency, causing inflation. By 1974, the economy was experiencing double-digit inflation, and despite the many large projects to modernize the country, corruption was rampant and caused large amounts of waste. By 1975 and 1976, an economic recession led to increased unemployment, especially among the millions of youths who had migrated to the cities of Iran looking for construction jobs during the boom years of the early 1970s. By the late 1970s, many of these people opposed the Shah's regime and began to organize and join the protests against it.
The 1979 Revolution, later known as the "Islamic Revolution", began in January 1978 with the first major demonstrations against the Shah. After a year of strikes and demonstrations paralyzing the country and its economy, Mohammad Reza Pahlavi fled to the United States, and Ruhollah Khomeini returned from exile to Tehran in February 1979, forming a new government. After holding a referendum, Iran officially became an Islamic republic in April 1979. A second referendum in December 1979 approved a theocratic constitution.
Immediate nationwide uprisings against the new government began with the 1979 Kurdish rebellion and the Khuzestan uprisings, along with uprisings in Sistan and Baluchestan and other areas. Over the next several years, these uprisings were violently subdued by the new Islamic government, which began purging itself of the non-Islamist political opposition, as well as of those Islamists who were not considered radical enough. Although both nationalists and Marxists had initially joined with Islamists to overthrow the Shah, tens of thousands were executed by the new regime afterwards. Many former ministers and officials in the Shah's government, including former prime minister Amir-Abbas Hoveyda, were executed following Khomeini's order to purge the new government of any remaining officials still loyal to the exiled Shah.
On 4 November 1979, a group of Muslim students seized the United States Embassy and took its 52 personnel and citizens hostage, after the United States refused to extradite Mohammad Reza Pahlavi to Iran, where his execution was all but assured. Attempts by the Jimmy Carter administration to negotiate the release of the hostages, and a failed rescue attempt, helped force Carter out of office and bring Ronald Reagan to power. On Carter's final day in office, the last hostages were finally set free as a result of the Algiers Accords. Mohammad Reza Pahlavi had left the United States for Egypt, where he died of complications from cancer on 27 July 1980.
The Cultural Revolution began in 1980, with an initial closure of universities for three years in order to inspect and overhaul the cultural policy of the education and training system.
On 22 September 1980, the Iraqi army invaded Iran, launching the Iran–Iraq War. Although the forces of Saddam Hussein made several early advances, by mid-1982 the Iranian forces had successfully pushed the Iraqi army back into Iraq. In July 1982, with Iraq thrown on the defensive, the regime of Iran took the decision to invade Iraq, conducting countless offensives in a bid to conquer Iraqi territory and capture cities such as Basra. The war continued until 1988, when the Iraqi army defeated the Iranian forces inside Iraq and pushed the remaining Iranian troops back across the border. Subsequently, Khomeini accepted a truce mediated by the United Nations. Total Iranian casualties in the war were estimated at 123,220–160,000 killed in action, 60,711 missing in action, and 11,000–16,000 civilians killed.
Following the Iran–Iraq War, in 1989, Akbar Hashemi Rafsanjani and his administration concentrated on a pragmatic pro-business policy of rebuilding and strengthening the economy without making any dramatic break with the ideology of the revolution. In 1997, Rafsanjani was succeeded by moderate reformist Mohammad Khatami, whose government attempted, unsuccessfully, to make the country more free and democratic.
The 2005 presidential election brought conservative populist candidate, Mahmoud Ahmadinejad, to power. By the time of the 2009 Iranian presidential election, the Interior Ministry announced incumbent President Ahmadinejad had won 62.63% of the vote, while Mir-Hossein Mousavi had come in second place with 33.75%. The election results were widely disputed, and resulted in widespread protests, both within Iran and in major cities outside the country, and the creation of the Iranian Green Movement.
Hassan Rouhani was elected president on 15 June 2013, defeating Mohammad Bagher Ghalibaf and four other candidates. Rouhani's electoral victory relatively improved Iran's relations with other countries.
The 2017–18 Iranian protests swept across the country against the government and its longtime Supreme Leader in response to the economic and political situation. The scale of the protests and the number of participants were significant, and it was formally confirmed that thousands of protesters were arrested. The 2019–20 Iranian protests started on 15 November in Ahvaz, spreading across the country within hours, after the government announced fuel price increases of up to 300%. A week-long total Internet shutdown throughout the country marked one of the most severe Internet blackouts in any country, and according to international observers, tens of thousands were arrested and hundreds were killed within a few days.
On 3 January 2020, the Revolutionary Guard general Qasem Soleimani was assassinated by the United States in Iraq, which considerably heightened the existing tensions between the two countries. Three days later, Iran's Islamic Revolutionary Guard Corps launched a retaliatory attack on US forces in Iraq and shot down Ukraine International Airlines Flight 752, killing 176 civilians and leading to nationwide protests. After three days of denial, an international investigation led the government to admit to shooting down the plane with a surface-to-air missile, calling it a "human error".
Iran has an area of . It lies between latitudes 24° and 40° N, and longitudes 44° and 64° E. It is bordered to the northwest by Armenia, the Azeri exclave of Nakhchivan, and the Republic of Azerbaijan; to the north by the Caspian Sea; to the northeast by Turkmenistan; to the east by Afghanistan and Pakistan; to the south by the Persian Gulf and the Gulf of Oman; and to the west by Iraq and Turkey.
Iran consists of the Iranian Plateau, with the exception of the coasts of the Caspian Sea and Khuzestan. It is one of the world's most mountainous countries, its landscape dominated by rugged mountain ranges that separate various basins or plateaux from one another. The populous western part is the most mountainous, with ranges such as the Caucasus, Zagros, and Alborz, the last containing Mount Damavand, Iran's highest point at , which is also the highest mountain in Asia west of the Hindu Kush.
The northern part of Iran is covered by the lush lowland Caspian Hyrcanian mixed forests, located near the southern shores of the Caspian Sea. The eastern part consists mostly of desert basins, such as the Kavir Desert, which is the country's largest desert, and the Lut Desert, as well as some salt lakes.
The only large plains are found along the coast of the Caspian Sea and at the northern end of the Persian Gulf, where the country borders the mouth of the Arvand river. Smaller, discontinuous plains are found along the remaining coast of the Persian Gulf, the Strait of Hormuz, and the Gulf of Oman.
With 11 of the world's 13 climate types, Iran's climate is diverse, ranging from arid and semi-arid to subtropical along the Caspian coast and in the northern forests. On the northern edge of the country (the Caspian coastal plain), temperatures rarely fall below freezing and the area remains humid for the rest of the year. Summer temperatures rarely exceed . Annual precipitation is in the eastern part of the plain and more than in the western part. Gary Lewis, the United Nations Resident Coordinator for Iran, has said that "Water scarcity poses the most severe human security challenge in Iran today".
To the west, settlements in the Zagros basin experience lower temperatures, severe winters with below zero average daily temperatures and heavy snowfall. The eastern and central basins are arid, with less than of rain, and have occasional deserts. Average summer temperatures rarely exceed . The coastal plains of the Persian Gulf and Gulf of Oman in southern Iran have mild winters, and very humid and hot summers. The annual precipitation ranges from .
The wildlife of Iran includes bears, the Eurasian lynx, foxes, gazelles, gray wolves, jackals, panthers, and wild pigs. Domestic animals of Iran include Asian water buffaloes, camels, cattle, donkeys, goats, horses, and sheep. Eagles, falcons, partridges, pheasants, and storks are also native to Iran.
One of the most famous members of the Iranian wildlife is the critically endangered Asiatic cheetah, also known as the "Iranian cheetah", whose numbers were greatly reduced after the 1979 Revolution. The Persian leopard, which is the world's largest leopard subspecies living primarily in northern Iran, is also listed as an endangered species. Iran lost all its Asiatic lions and the now extinct Caspian tigers by the earlier part of the 20th century.
At least 74 species of the Iranian wildlife are on the red list of the International Union for Conservation of Nature, a sign of serious threats against the country's biodiversity. The Iranian Parliament has been showing disregard for wildlife by passing laws and regulations such as the act that lets the Ministry of Industries and Mines exploit mines without the involvement of the Department of Environment, and by approving large national development projects without demanding comprehensive study of their impact on wildlife habitats.
Iran is divided into five regions with thirty-one provinces ("ostān"), each governed by an appointed governor ("ostāndār"). The provinces are divided into counties ("šahrestān"), and subdivided into districts ("baxš") and sub-districts ("dehestān").
The country has one of the highest urban growth rates in the world. From 1950 to 2002, the urban proportion of the population increased from 27% to 60%. The United Nations predicts that by 2030, 80% of the population will be urban. Most internal migrants have settled around the cities of Tehran, Isfahan, Ahvaz, and Qom. The listed populations are from the 2006/07 (1385 AP) census.
Tehran, with a population of around 8.8 million (2016 census), is the capital and largest city of Iran. It is an economic and cultural center, and the hub of the country's communication and transport network.
The country's second most populous city, Mashhad, has a population of around 3.3 million (2016 census), and is capital of the province of Razavi Khorasan. Being the site of the Imam Reza Shrine, it is a holy city in Shia Islam. About 15 to 20 million pilgrims visit the shrine every year.
Isfahan has a population of around 2.2 million (2016 census), and is Iran's third most populous city. It is the capital of the province of Isfahan, and was also the third capital of the Safavid Empire. It is home to a wide variety of historical sites, including the famous Shah Square, Siosepol, and the churches at the Armenian district of New Julfa. It is also home to the world's seventh largest shopping mall, Isfahan City Center.
The fourth most populous city of Iran, Karaj, has a population of around 1.9 million (2016 census). It is the capital of the province of Alborz, and is situated 20 km west of Tehran, at the foot of the Alborz mountain range. It is a major industrial city in Iran, with large factories producing sugar, textiles, wire, and alcohol.
With a population of around 1.7 million (2016 census), Tabriz is the fifth most populous city of Iran, and had been the second most populous until the late 1960s. It was the first capital of the Safavid Empire, and is now the capital of the province of East Azerbaijan. It is also considered the country's second major industrial city (after Tehran).
Shiraz, with a population of around 1.8 million (2016 census), is Iran's sixth most populous city. It is the capital of the province of Fars, and was also the capital of Iran under the reign of the Zand dynasty. It is located near the ruins of Persepolis and Pasargadae, two of the four capitals of the Achaemenid Empire.
The political system of the Islamic Republic is based on the 1979 Constitution. According to international reports, Iran's human rights record is exceptionally poor. The regime in Iran is undemocratic, has frequently persecuted and arrested critics of the government and its Supreme Leader, and severely restricts the participation of candidates in popular elections as well as other forms of political activity. Women's rights in Iran are described as seriously inadequate, and children's rights have been severely violated, with more child offenders executed in Iran than in any other country in the world. Sexual activity between members of the same sex is illegal and is punishable by penalties up to and including death. Since the 2000s, Iran's controversial nuclear program has raised concerns, which is part of the basis of the international sanctions against the country. The Joint Comprehensive Plan of Action, an agreement reached between Iran and the P5+1 on 14 July 2015, aimed to lift the nuclear-related sanctions in exchange for restrictions on Iran's production of enriched uranium.
Over the past decade, a number of anti-government protests have broken out throughout Iran (such as the 2019–20 Iranian protests), demanding reforms or an end to the Islamic Republic. However, the IRGC and police have often suppressed mass protests by violent means, resulting in thousands of protesters being killed.
The Leader of the Revolution ("Supreme Leader") is responsible for the delineation and supervision of the policies of the Islamic Republic of Iran. The Iranian president has limited power compared with the Supreme Leader. The current longtime Supreme Leader, Ali Khamenei, issues decrees and makes the final decisions on the economy, the environment, foreign policy, education, national planning, and everything else in the country. Khamenei also outlines election guidelines, urges transparency, and has fired and reinstated presidential cabinet appointments. Key ministers are selected with the Supreme Leader's agreement, and he has the ultimate say on Iran's foreign policy. The president-elect is required to gain the Leader's official approval before being sworn in before the Parliament (Majlis); through this process, known as Tanfiz (validation), the Leader agrees to the outcome of the presidential election. The Supreme Leader is directly involved in ministerial appointments for Defense, Intelligence, and Foreign Affairs, as well as other top ministries, after the submission of candidates by the president. Iran's regional policy is directly controlled by the office of the Supreme Leader, with the Ministry of Foreign Affairs' task limited to protocol and ceremonial occasions. All of Iran's ambassadors to Arab countries, for example, are chosen by the Quds Corps, which reports directly to the Supreme Leader. The annual budget bill, as well as withdrawals from the National Development Fund of Iran, require the Supreme Leader's approval and permission, and he can and has ordered laws to be amended. Setad, estimated by Reuters at $95 billion in 2013, whose accounts are secret even to the Iranian parliament, is controlled only by the Supreme Leader.
The Supreme Leader is the commander-in-chief of the armed forces, controls the military intelligence and security operations, and has sole power to declare war or peace. The heads of the judiciary, the state radio and television networks, the commanders of the police and military forces, and six of the twelve members of the Guardian Council are directly appointed by the Supreme Leader.
The Assembly of Experts is responsible for electing the Supreme Leader, and has the power to dismiss him on the basis of qualifications and popular esteem. To date, the Assembly of Experts has not challenged any of the Supreme Leader's decisions, nor has it attempted to dismiss him. The previous head of the judicial system, Sadeq Larijani, appointed by the Supreme Leader, said that it is illegal for the Assembly of Experts to supervise the Supreme Leader. Due to Khamenei's long unchallenged rule, many believe the Assembly of Experts has become a ceremonial body without any real power. There have been instances when the current Supreme Leader publicly criticized members of the Assembly of Experts, resulting in their arrest and dismissal. For example, Khamenei publicly called then-member of the Assembly of Experts Ahmad Azari Qomi a traitor, resulting in Qomi's arrest and eventual dismissal from the Assembly of Experts. Another instance is when Khamenei indirectly called Akbar Hashemi Rafsanjani a traitor for a statement he made, causing Rafsanjani to retract it.
Presidential candidates and parliamentary candidates must be approved by the Guardian Council (all members of which are directly or indirectly appointed by the Leader) or by the Leader before running, in order to ensure their allegiance to the Supreme Leader. The Leader very rarely does the vetting himself directly, but has the power to do so, in which case additional approval of the Guardian Council is not needed. The Leader can also reverse the decisions of the Guardian Council. The Guardian Council can, and has, dismissed elected members of the Iranian parliament in the past. For example, Minoo Khaleghi was disqualified by the Guardian Council even after winning her election, because she had been photographed at a meeting without wearing a headscarf.
After the Supreme Leader, the Constitution defines the President of Iran as the highest state authority. The President is elected by universal suffrage for a term of four years; however, the president is still required to gain the Leader's official approval before being sworn in before the Parliament (Majlis). The Leader also has the power to dismiss the elected president at any time. The President can be re-elected for only one consecutive term.
The President is responsible for the implementation of the constitution, and for the exercise of executive powers in implementing the decrees and general policies outlined by the Supreme Leader, except for matters directly related to the Supreme Leader, who has the final say in all matters. Unlike the executive in other countries, the President of Iran does not have full control over any area of government, as all are ultimately under the control of the Supreme Leader. Chapter IX of the Constitution of the Islamic Republic of Iran sets forth the qualifications for presidential candidates. The procedures for presidential elections and all other elections in Iran are outlined by the Supreme Leader. The President functions as the executive in affairs such as signing treaties and other international agreements, and administering national planning, the budget, and state employment affairs, all as approved by the Supreme Leader.
The President appoints the ministers, subject to the approval of the Parliament, as well as the approval of the Supreme Leader, who can dismiss or reinstate any of the ministers at any time, regardless of the decisions made by the President or the Parliament. The President supervises the Council of Ministers, coordinates government decisions, and selects government policies to be placed before the legislature. The current Supreme Leader, Ali Khamenei, has fired as well as reinstated Council of Ministers members. Eight Vice Presidents serve under the President, as well as a cabinet of twenty-two ministers, who must all be approved by the legislature.
The legislature of Iran, known as the "Islamic Consultative Assembly", is a unicameral body comprising 290 members elected for four-year terms. It drafts legislation, ratifies international treaties, and approves the national budget. All parliamentary candidates and all legislation from the assembly must be approved by the Guardian Council.
The Guardian Council comprises twelve jurists, including six appointed by the Supreme Leader. The others are elected by the Parliament from among the jurists nominated by the Head of the Judiciary. The Council interprets the constitution and may veto bills passed by the Parliament. If a law is deemed incompatible with the constitution or Sharia (Islamic law), it is referred back to the Parliament for revision. The Expediency Council has the authority to mediate disputes between the Parliament and the Guardian Council, and serves as an advisory body to the Supreme Leader, making it one of the most powerful governing bodies in the country. Local city councils are elected by public vote to four-year terms in all cities and villages of Iran.
The Supreme Leader appoints the head of the country's judiciary, who in turn appoints the head of the Supreme Court and the chief public prosecutor. There are several types of courts, including public courts that deal with civil and criminal cases, and revolutionary courts which deal with certain categories of offenses, such as crimes against national security. The decisions of the revolutionary courts are final and cannot be appealed.
The Special Clerical Court handles crimes allegedly committed by clerics, although it has also taken on cases involving laypeople. The Special Clerical Court functions independently of the regular judicial framework, and is accountable only to the Supreme Leader. The Court's rulings are final and cannot be appealed. The Assembly of Experts, which meets for one week annually, comprises 86 "virtuous and learned" clerics elected by adult suffrage for eight-year terms.
The officially stated goal of the government of Iran is to establish a new world order based on world peace, global collective security, and justice. Since the time of the 1979 Revolution, Iran's foreign relations have often been portrayed as being based on two strategic principles; eliminating outside influences in the region, and pursuing extensive diplomatic contacts with developing and non-aligned countries.
Since 2005, Iran's nuclear program has become the subject of contention with the international community, mainly the United States. Many countries have expressed concern that Iran's nuclear program could divert civilian nuclear technology into a weapons program. This has led the United Nations Security Council to impose sanctions against Iran, which have further isolated Iran politically and economically from the rest of the global community. In 2009, the U.S. Director of National Intelligence said that Iran, even if it chose to, would not be able to develop a nuclear weapon until 2013.
The government of Iran maintains diplomatic relations with 99 members of the United Nations, but not with the United States, nor with Israel—a state which Iran's government has derecognized since the 1979 Revolution. Among Muslim nations, Iran has an adversarial relationship with Saudi Arabia due to different political and Islamic ideologies. While Iran is a Shia Islamic republic, Saudi Arabia is a conservative Sunni monarchy. Regarding the Israeli–Palestinian conflict, the government of Iran recognized Jerusalem as the capital of the State of Palestine after Trump recognized Jerusalem as the capital of Israel.
On 14 July 2015, Tehran and the P5+1 came to a historic agreement (the Joint Comprehensive Plan of Action) to end economic sanctions in exchange for Iran demonstrating that its nuclear research program is peaceful and meets International Atomic Energy Agency standards.
Iran is a member of dozens of international organizations, including the G-15, G-24, G-77, IAEA, IBRD, IDA, IDB, IFC, ILO, IMF, IMO, Interpol, OIC, OPEC, WHO, and the United Nations, and currently has observer status at the World Trade Organization.
In September 2018, the Iranian ambassador to the United Nations asked the UN to condemn Israeli threats against Tehran and also bring Israel's nuclear program under the International Atomic Energy Agency's supervision.
In April 2019 the U.S. threatened to sanction countries continuing to buy oil from Iran after an initial six-month waiver announced in November expired. According to the BBC, U.S. sanctions against Iran "have led to a sharp downturn in Iran's economy, pushing the value of its currency to record lows, quadrupling its annual inflation rate, driving away foreign investors, and triggering protests."
On 1 September 2019, the Iranian authorities took a step to enhance relations with Qatar, deciding to grant Qatari passport holders tourist visas upon arrival at Iranian airports. In addition, Qatari nationals were permitted to obtain a single- or multiple-entry visa from Iran's embassy in Doha.
The Islamic Republic of Iran has two types of armed forces: the regular forces of the Army, the Air Force, and the Navy, and the Revolutionary Guards, totaling about 545,000 active troops. Iran also has a reserve force of around 350,000, for a total of around 900,000 trained troops.
The government of Iran has a paramilitary, volunteer militia force within the Islamic Revolutionary Guard Corps, called the "Basij", which includes about 90,000 full-time, active-duty uniformed members. Up to 11 million men and women are members of the Basij who could potentially be called up for service. GlobalSecurity.org estimates Iran could mobilize "up to one million men", which would be among the largest troop mobilizations in the world. In 2007, Iran's military spending represented 2.6% of GDP, or $102 per capita, the lowest figure among the Persian Gulf nations. Iran's military doctrine is based on deterrence. In 2014, the country spent $15 billion on arms, while the states of the Gulf Cooperation Council spent eight times more. The United States under President Donald Trump officially labeled the Revolutionary Guard a foreign terrorist organization, the first time that an element of a foreign state's armed forces had been so designated.
The government of Iran supports the military activities of its allies in Syria, Iraq, and Lebanon (Hezbollah) with military and financial aid. Iran and Syria are close strategic allies, and Iran has provided significant support for the Syrian Government in the Syrian Civil War. According to some estimates, Iran controlled over 80,000 pro-Assad Shi'ite fighters in Syria.
Since the 1979 Revolution, to overcome foreign embargoes, the government of Iran has developed its own military industry, producing its own tanks, armored personnel carriers, missiles, submarines, military vessels, missile destroyers, radar systems, helicopters, and fighter planes. In recent years, official announcements have highlighted the development of weapons such as the Hoot, Kowsar, Zelzal, Fateh-110, Shahab-3, Sejjil, and a variety of unmanned aerial vehicles (UAVs). Iran has the largest and most diverse ballistic missile arsenal in the Middle East. The Fajr-3, a liquid-fuel missile with an undisclosed range which was developed and produced domestically, is currently the country's most advanced ballistic missile.
In June 1925, Reza Shah introduced a conscription law in the National Consultative Majlis, requiring every man who had reached the age of 21 to serve in the military for two years. After the 1979 Revolution, women were exempted from military service. The Iranian constitution obliges all men aged 18 and older to serve in the military or police; they cannot leave the country or be employed until they complete the service period, which varies from 18 to 24 months. Poor conditions for Iranian soldiers have caused violent incidents in recent years. Many Iranian soldiers suffer from depression, and some studies have reported a high rate of suicide among Iranian conscripts.
Iran's economy is a mixture of central planning, state ownership of oil and other large enterprises, village agriculture, and small-scale private trading and service ventures. In 2017, GDP was $427.7 billion ($1.631 trillion at PPP), or $20,000 at PPP per capita. Iran is ranked as an upper-middle income economy by the World Bank. In the early 21st century, the service sector contributed the largest percentage of the GDP, followed by industry (mining and manufacturing) and agriculture.
The Central Bank of the Islamic Republic of Iran is responsible for developing and maintaining the Iranian rial, which serves as the country's currency. The government does not recognize trade unions other than the Islamic labour councils, which are subject to the approval of employers and the security services. The minimum wage in June 2013 was about 4.87 million rials a month ($134). Unemployment has remained above 10% since 1997, and the unemployment rate for women is almost double that of men.
In 2006, about 45% of the government's budget came from oil and natural gas revenues, and 31% came from taxes and fees. Iran had earned $70 billion in foreign-exchange reserves, mostly (80%) from crude oil exports. Iranian budget deficits have been a chronic problem, mostly due to large-scale state subsidies, which include foodstuffs and especially gasoline, totaling more than $84 billion in 2008 for the energy sector alone. In 2010, an economic reform plan was approved by parliament to cut subsidies gradually and replace them with targeted social assistance. The objective is to move towards free-market prices over a five-year period and to increase productivity and social justice.
The administration continues to follow the market reform plans of the previous one, and indicates that it will diversify Iran's oil-reliant economy. Iran has also developed biotechnology, nanotechnology, and pharmaceutical industries. However, nationalized industries such as the bonyads have often been badly managed, making them ineffective and uncompetitive over the years. Currently, the government is trying to privatize these industries, and, despite some successes, several problems remain, such as lingering corruption in the public sector and a lack of competitiveness.
Iran has some of the Middle East's leading manufacturing industries in the fields of automobile manufacture, transportation, construction materials, home appliances, food and agricultural goods, armaments, pharmaceuticals, information technology, and petrochemicals. According to 2012 data from the Food and Agriculture Organization, Iran has been among the world's top five producers of apricots, cherries, sour cherries, cucumbers and gherkins, dates, eggplants, figs, pistachios, quinces, walnuts, and watermelons.
Economic sanctions against Iran, such as the embargo against Iranian crude oil, have affected the economy. Sanctions have led to a steep fall in the value of the rial, and one US dollar is worth 36,000 rials, compared with 16,000 in early 2012. In 2018, after the withdrawal of the US from the JCPOA, the price of the dollar hit an all-time high at just over 190,000 rials, which halted market trading and led stores to stop selling goods, particularly in the consumer electronics sector, until prices stabilized. In 2015, Iran and the P5+1 reached a deal on the nuclear program that removed the main sanctions pertaining to Iran's nuclear program by 2016.
Although tourism declined significantly during the war with Iraq, it has subsequently recovered. About 1,659,000 foreign tourists visited Iran in 2004, and 2.3 million in 2009, mostly from Asian countries, including the republics of Central Asia, while about 10% came from the European Union and North America. Since the removal of some sanctions against Iran in 2015, tourism has re-surged in the country. Over five million tourists visited Iran in the fiscal year 2014–2015, four percent more than in the previous year.
Alongside the capital, the most popular tourist destinations are Isfahan, Mashhad, and Shiraz. In the early 2000s, the industry faced serious limitations in infrastructure, communications, industry standards, and personnel training. The majority of the 300,000 travel visas granted in 2003 were obtained by Asian Muslims, who presumably intended to visit pilgrimage sites in Mashhad and Qom. Several organized tours from Germany, France, and other European countries come to Iran annually to visit archaeological sites and monuments. In 2003, Iran ranked 68th in tourism revenues worldwide. According to UNESCO and the deputy head of research for Iran's Tourism Organization, Iran is rated fourth among the top 10 destinations in the Middle East. Domestic tourism in Iran is one of the largest in the world. Weak advertising, unstable regional conditions, a poor public image in some parts of the world, and the absence of efficient planning schemes in the tourism sector have all hindered the growth of tourism.
Iran has the world's second largest proved gas reserves after Russia, with 33.6 trillion cubic metres, and the third largest natural gas production after Indonesia and Russia. It also ranks fourth in oil reserves, with an estimated 153.6 billion barrels. It is OPEC's second largest oil exporter, and is an energy superpower.
In 2005, Iran spent US$4 billion on fuel imports, because of contraband and inefficient domestic use. Oil industry output averaged in 2005, compared with the peak of six million barrels per day reached in 1974. In the early 2000s, industry infrastructure was increasingly inefficient because of technological lags. Few exploratory wells were drilled in 2005.
In 2004, a large share of Iran's natural gas reserves were untapped. The addition of new hydroelectric stations and the streamlining of conventional coal and oil-fired stations increased installed capacity to 33,000 megawatts. Of that amount, about 75% was based on natural gas, 18% on oil, and 7% on hydroelectric power. In 2004, Iran opened its first wind-powered and geothermal plants, and the first solar thermal plant was to come online in 2009. Iran is the world's third country to have developed GTL technology.
Demographic trends and intensified industrialization have caused electric power demand to grow by 8% per year. The government's goal of 53,000 megawatts of installed capacity by 2010 was to be reached by bringing new gas-fired plants online and adding hydropower and nuclear power generation capacity. Iran's first nuclear power plant, at Bushehr, went online in 2011. It is the second nuclear power plant ever built in the Middle East, after the Metsamor Nuclear Power Plant in Armenia.
In 2020, Fatih Birol, the head of the International Energy Agency, said that fossil fuel subsidies should be redirected, for example to the health system.
Education in Iran is highly centralized. K–12 education is supervised by the Ministry of Education, and higher education is under the supervision of the Ministry of Science and Technology. The adult literacy rate was 93.0% in September 2015, up from 85.0% in 2008 and 36.5% in 1976.
The requirement to enter into higher education is to have a high school diploma and pass the Iranian University Entrance Exam (officially known as "konkur" (کنکور)), which is the equivalent of the SAT and ACT exams of the United States. Many students do a 1–2-year course of pre-university ("piš-dānešgāh"), which is the equivalent of the GCE A-levels and the International Baccalaureate. The completion of the pre-university course earns students the Pre-University Certificate.
Iran's higher education is sanctioned by different levels of diplomas, including an associate degree ("kārdāni"; also known as "fowq e diplom") delivered in two years, a bachelor's degree ("kāršenāsi"; also known as "lisāns") delivered in four years, and a master's degree ("kāršenāsi e aršad") delivered in two years, after which another exam allows the candidate to pursue a doctoral program (PhD; known as "doktorā").
According to the Webometrics Ranking of World Universities, Iran's top five universities are Tehran University of Medical Sciences (478th worldwide), the University of Tehran (514th worldwide), Sharif University of Technology (605th worldwide), Amirkabir University of Technology (726th worldwide), and Tarbiat Modares University (789th worldwide).
Iran increased its publication output nearly tenfold from 1996 through 2004, and was ranked first in terms of output growth rate, followed by China. According to a 2012 study by SCImago, Iran would rank fourth in the world in terms of research output by 2018 if the trend at the time persisted.
In 2009, a SUSE Linux-based HPC system made by the Aerospace Research Institute of Iran (ARI) was launched with 32 cores and now runs 96 cores. Its performance was pegged at 192 GFLOPS. The Iranian humanoid robot Surena 2, designed by engineers at the University of Tehran, was unveiled in 2010. The Institute of Electrical and Electronics Engineers (IEEE) placed Surena among the five most prominent robots in the world after analyzing its performance.
In the biomedical sciences, Iran's Institute of Biochemistry and Biophysics has a UNESCO chair in biology. In late 2006, Iranian scientists successfully cloned a sheep by somatic cell nuclear transfer, at the Royan Research Center in Tehran.
According to a study by David Morrison and Ali Khadem Hosseini (Harvard-MIT and Cambridge), stem cell research in Iran is amongst the top 10 in the world. Iran ranks 15th in the world in nanotechnologies.
Iran placed its domestically built satellite Omid into orbit on the 30th anniversary of the 1979 Revolution, on 2 February 2009, through its first expendable launch vehicle Safir, becoming the ninth country in the world capable of both producing a satellite and sending it into space from a domestically made launcher.
The Iranian nuclear program was launched in the 1950s. Iran is the seventh country to produce uranium hexafluoride, and controls the entire nuclear fuel cycle.
Iranian scientists outside Iran have also made major contributions to science. In 1960, Ali Javan co-invented the first gas laser, and fuzzy set theory was introduced by Lotfi A. Zadeh. Iranian cardiologist Tofigh Mussivand invented and developed the first artificial cardiac pump, the precursor of the artificial heart. Furthering research and treatment of diabetes, HbA1c was discovered by Samuel Rahbar. Iranian physics is especially strong in string theory, with many papers being published in Iran. Iranian-American string theorist Cumrun Vafa proposed the Vafa–Witten theorem together with Edward Witten. In August 2014, Iranian mathematician Maryam Mirzakhani became the first woman, as well as the first Iranian, to receive the Fields Medal, the highest prize in mathematics.
Iran is a diverse country, consisting of numerous ethnic and linguistic groups that are unified through a shared Iranian nationality.
Iran's population grew rapidly during the latter half of the 20th century, increasing from about 19 million in 1956 to around 75 million by 2009. However, Iran's birth rate has dropped significantly in recent years, leading to a population growth rate of about 1.20% as of 2018. Because of Iran's young population, studies project that growth will continue, slowing until the population stabilizes at around 105 million by 2050.
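To put those figures in perspective, the long-run growth rate implied by the 1956 and 2009 population figures can be computed as a compound annual rate. The sketch below is a back-of-the-envelope illustration using only the numbers quoted above, not a calculation from the source:

```python
# Illustrative sketch: compound annual growth rate (CAGR) implied by two
# population figures. The function name and figures are for illustration only.

def annual_growth_rate(start_pop: float, end_pop: float, years: int) -> float:
    """Compound annual growth rate implied by two population figures."""
    return (end_pop / start_pop) ** (1 / years) - 1

# About 19 million in 1956 to around 75 million by 2009 (figures from the text).
rate = annual_growth_rate(19e6, 75e6, 2009 - 1956)
print(f"{rate:.2%}")  # roughly 2.6% per year, far above the 1.20% rate of 2018
```

The contrast between the implied historical average (~2.6% per year) and the 1.20% rate cited for 2018 illustrates how sharply growth has slowed.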
Iran hosts one of the largest refugee populations in the world, with more than one million refugees, mostly from Afghanistan and Iraq. Since 2006, Iranian officials have been working with the UNHCR and Afghan officials for their repatriation. According to estimates, about five million Iranian citizens have emigrated to other countries, mostly since the 1979 Revolution.
According to the Iranian Constitution, the government is required to provide every citizen of the country with access to social security, covering retirement, unemployment, old age, disability, accidents, calamities, health and medical treatment and care services. This is covered by tax revenues and income derived from public contributions.
The majority of the population speak Persian, which is also the official language of the country. Others include speakers of a number of other Iranian languages within the greater Indo-European family, and languages belonging to some other ethnicities living in Iran.
In northern Iran, mostly confined to Gilan and Mazenderan, the Gilaki and Mazenderani languages are widely spoken, both having affinities to the neighboring Caucasian languages. In parts of Gilan, the Talysh language is also widely spoken, which stretches up to the neighboring Republic of Azerbaijan. Varieties of Kurdish are widely spoken in the province of Kurdistan and nearby areas. In Khuzestan, several distinct varieties of Persian are spoken. Luri and Lari are also spoken in southern Iran.
Azerbaijani, which is by far the most spoken language in the country after Persian, as well as a number of other Turkic languages and dialects, is spoken in various regions of Iran, especially in the region of Azerbaijan.
Notable minority languages in Iran include Armenian, Georgian, Neo-Aramaic, and Arabic. Khuzi Arabic is spoken by the Arabs in Khuzestan, as well as the wider group of Iranian Arabs. Circassian was also once widely spoken by the large Circassian minority, but, due to assimilation over the many years, no sizable number of Circassians speak the language anymore. | https://en.wikipedia.org/wiki?curid=14653 |
Game Boy Color
The Game Boy Color (commonly abbreviated as GBC) is a handheld game console manufactured by Nintendo, released in Japan on October 21, 1998, and in international markets that November. It is the successor to the original Game Boy and is part of the Game Boy family.
The GBC features a color screen rather than monochrome, but it is not backlit. It is slightly thicker and taller and features a slightly smaller screen than the Game Boy Pocket, its immediate predecessor in the Game Boy line. As with the original Game Boy, it has a custom 8-bit processor made by Sharp that is considered a hybrid between the Intel 8080 and the Zilog Z80. The American English spelling of the system's name, "Game Boy Color", remains consistent throughout the world.
The Game Boy Color is part of the fifth generation of video game consoles. The GBC's primary competitors in Japan were the grayscale 16-bit handhelds, SNK's Neo Geo Pocket and Bandai's WonderSwan, though the Game Boy Color outsold them by a wide margin. SNK and Bandai countered with the Neo Geo Pocket Color and the WonderSwan Color, respectively, but this did little to change Nintendo's sales dominance. With Sega discontinuing the Game Gear in 1997, the Game Boy Color's only competitor in the United States was its predecessor, the Game Boy, until the short-lived Neo Geo Pocket Color was released in North America in August 1999. The Game Boy and the Game Boy Color combined have sold 118.69 million units worldwide, making them the third-best-selling system of all time.
It was discontinued on March 23, 2003, shortly after the release of the Game Boy Advance SP. Its best-selling game is "Pokémon Gold" and "Silver", which shipped 23 million units worldwide.
The release of the Game Boy Color was a response to pressure from game developers, who said that even the latest iteration of the original system, the Game Boy Pocket, was insufficient and called for a more sophisticated handheld platform. The resultant product was backward compatible with all existing Game Boy software, a first for a handheld system, allowing each new Game Boy family launch to begin with a significantly larger game library than any of its competitors.
Game Paks manufactured by Nintendo are subject to one notable constraint: without additional mapper hardware, the maximum ROM size is 32 KiB (256 Kibit).
The processor, which is a Zilog Z80 workalike made by Sharp with a few extra (bit manipulation) instructions, has a clock speed of approximately 8 MHz, twice as fast as that of the original Game Boy. The Game Boy Color has three times as much memory as the original (32 kilobytes system RAM, 16 kilobytes video RAM). The screen resolution is the same as the original Game Boy at 160×144 pixels.
The Game Boy Color features an infrared communications port for wireless linking. The feature is only supported in a small number of games, so the infrared port was dropped from the Game Boy Advance line, to be later reintroduced with the Nintendo 3DS, though wireless linking would return in the Nintendo DS line using Wi-Fi. The console is capable of displaying up to 56 different colors simultaneously on screen from its palette of 32,768 (8×4-color background palettes, 8×3-color-plus-transparent sprite palettes), and can add basic four-, seven- or ten-color shading to games that had been developed for the original 4-shades-of-grey Game Boy. In the 7-color modes, the sprites and backgrounds are given separate color schemes, and in the 10-color modes the sprites are further split into two differently-colored groups; however, as flat black (or white) was a shared fourth color in all but one (7-color) palette, the overall effect is that of 4, 6, or 8 colors. This method of upgrading the color count results in graphic artifacts in certain games; for example, a sprite that is supposed to meld into the background is sometimes colored separately, making it easily noticeable. Manipulation of palette registers during display allows for a rarely used high-color mode, capable of displaying more than 2,000 colors on the screen simultaneously.
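The palette arithmetic above can be sketched in code. The following is an illustrative sketch based on the commonly documented GBC palette format (each palette entry is a 15-bit little-endian color word with 5 bits per channel, red in the low bits); the function name is an assumption for illustration, not official Nintendo API:

```python
# Illustrative sketch of the GBC palette format: 15 bits per color entry
# gives the 2**15 = 32,768-color master palette mentioned above.

def decode_bgr555(lo: int, hi: int) -> tuple[int, int, int]:
    """Decode a 2-byte GBC palette entry into 5-bit R, G, B components."""
    value = lo | (hi << 8)       # little-endian 16-bit word
    r = value & 0x1F             # bits 0-4: red
    g = (value >> 5) & 0x1F      # bits 5-9: green
    b = (value >> 10) & 0x1F     # bits 10-14: blue
    return r, g, b

# 8 background palettes of 4 colors, plus 8 sprite palettes of 3 opaque
# colors (color 0 of each sprite palette is transparent) = 56 on screen.
simultaneous = 8 * 4 + 8 * 3
print(simultaneous)                # 56
print(decode_bgr555(0xFF, 0x7F))   # (31, 31, 31) -> white
```

Note how the 56-color figure quoted above falls directly out of the palette layout rather than any per-pixel limit.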
For dozens of select Game Boy games, the Game Boy Color has a built-in enhanced palette featuring up to 16 colors: four colors for each of the Game Boy's four layers. If the system does not have a palette stored for a game, it defaults to a palette of green, blue, salmon, black, and white. However, at power-up, one of 12 built-in color palettes can be selected by pressing a directional button, and optionally A or B, while the Game Boy logo is present on the screen.
These palettes each contain up to ten colors. In most games, the four shades displayed on the original Game Boy translate to different subsets of this 10-color palette, such as by displaying movable sprites in one subset and backgrounds in another. The grayscale (Left + B) palette produces an appearance similar to that experienced on the original Game Boy, Game Boy Pocket or Game Boy Light.
A few games used a programming technique to increase the number of colors available on-screen to more than 2,000. This "Hi-Color mode" was used by licensed developers including 7th Sense. Some examples of games using this technique are "The Fish Files", "The New Addams Family Series", and "Alone in the Dark: The New Nightmare". "Cannon Fodder" uses this technique to render full motion video segments in the introduction sequence, ending, and main menu screen.
Game Boy Color exclusive games are housed in clear-colored Game Pak cartridges. They are shaped differently than original Game Boy Game Paks. Notably, these cartridges lack a notch that prevented the original Game Paks from being removed while the original Game Boy was powered on (although some special cartridges like "Kirby Tilt 'n' Tumble" do include this notch). The lack of this notch keeps original Game Boy systems loaded with Game Boy Color cartridges from powering on. Similarly, the Game Boy Pocket, Super Game Boy, Super Game Boy 2, and Game Boy Light will power on when loaded with a Game Boy Color cartridge, but will refuse to load the game and will display a warning message stating that a Game Boy Color system is required. Some Game Boy Color cartridges, such as "Chee-Chai Alien" and "Pocket Music", cannot be played on Game Boy Advance and Game Boy Advance SP systems. When inserted and powered on, these systems will exhibit a similar error message and will not load the game.
The logo for Game Boy Color spells out the word "COLOR" in the five original colors in which the unit was manufactured: Berry (C), Grape (O), Kiwi (L), Dandelion (O), and Teal (R).
Another color released at the same time was "Atomic Purple", made of a translucent purple plastic similar to the color available for the Nintendo 64 controller. Other colors were sold as limited editions or in specific countries.
Due to its backward compatibility with Game Boy games, the Game Boy Color's launch period had a large playable library. The system amassed a library of 576 Game Boy Color games over a four-year period. While the majority of the games are Game Boy Color exclusive, approximately 30% of the games released are compatible with the original Game Boy.
"Tetris" for the original Game Boy is the best-selling game compatible with Game Boy Color, and "Pokémon Gold" and "Silver" are the best-selling games developed primarily for it. The best-selling Game Boy Color exclusive game is "Pokémon Crystal".
The last Game Boy Color game ever released is the Japanese exclusive "Doraemon no Study Boy: Kanji Yomikaki Master", on July 18, 2003. The last game released in North America is "Harry Potter and the Chamber of Secrets", released on November 15, 2002. In Europe the last game released for the system is "Hamtaro: Ham-Hams Unite!", on January 10, 2003.
The Game Boy and Game Boy Color were both commercially successful, selling a combined 32.47 million units in Japan, 44.06 million in the Americas, and 42.16 million in other regions.
In 2003, when the Game Boy Color was discontinued, the pair was the best-selling game console line of all time. Both the Nintendo DS and the PlayStation 2 have since outsold them, making the Game Boy and Game Boy Color the third-best-selling console and the second-best-selling handheld of all time.
Genosha
Genosha is a fictional country appearing in American comic books published by Marvel Comics. It is an island nation in Marvel's main shared universe, known as Earth-616, and a prominent place in the X-Men chronology. The fictional nation served as an allegory for slavery and later for South African apartheid before becoming a mutant homeland and subsequently a disaster zone. The island is located off the southeastern African coast, northwest of the Seychelles and northeast of Madagascar. Its capital city was Hammer Bay.
Genosha first appeared in "Uncanny X-Men" #235 (October 1988), and was created by Chris Claremont and Rick Leonardi.
Genosha received an entry in the "Official Handbook of the Marvel Universe Update '89" #3.
The island is located off the east coast of Africa, to the north of Madagascar, and boasted a high standard of living, an excellent economy, and freedom from the political and racial turmoil that characterized neighboring nations. However, Genosha's prosperity was built upon the enslavement of its mutant population. Mutants in Genosha were the property of the state, and children who were positively identified with the mutant gene were put through a process developed by David Moreau, commonly known as the Genegineer, stripped of free will and made into "mutates" (a Marvel term for genetically modified individuals, as opposed to those who developed mutant powers naturally). The Genegineer was also capable of modifying certain mutant abilities in order to fulfill specific labor shortages. Citizenship in Genosha is permanent, and the government does not recognize any emigration. Citizens who attempt to leave the country are tracked down and forcibly brought back to the island by the special force known as the Press Gang, which consisted of Hawkshaw, Pipeline, and Punchout, aided in their task by Wipeout. Mutant problems are handled by a special group known as the Magistrates. The foundations of Genoshan society have been upset in recent years due to the efforts of outside mutant interests. In the first storyline to feature the nation, some members of the X-Men (Wolverine, Rogue, and their ally Madelyne Pryor) were kidnapped by Genoshan Magistrates, under the order of the Genegineer. Later, in the multi-issue, multi-title "X-Tinction Agenda" storyline, the X-Men and their allies rescued their teammates Storm, Meltdown, Rictor, and Wolfsbane from Genoshan brainwashing, toppling the government after discovering its alliance with former X-Factor ally turned mutant hater Cameron Hodge, and that Havok had served as one of the Magistrates since having his memory wiped by the Siege Perilous.
Havok himself, woken from his conditioning by his brother Cyclops, dealt the killing blow to Cameron Hodge in the process.
A new Genoshan regime that promised better treatment of mutants was put in place after Hodge's destruction. A period of general turmoil and a number of attacks by superhumans, including Magneto's Acolytes who were unwilling to forgive the former Genoshan government for its crimes against mutants, followed.
A different version of X-Factor, including Wolfsbane, later returned to the island to help restore peace between its government and a rogue group of super-powered beings that had fled the island. The Genoshan government was shown with peaceful intentions, even trying to undo the ill effects visited upon Wolfsbane. Genosha was also shown to have typical suburban tract housing, like many small towns in America, Australia, New Zealand and South Africa.
After the "Age of Apocalypse" story arc, it was revealed and retconned that the mutate process formula was given to the Genegineer by Sugar Man, a refugee of the "Age of Apocalypse" timeline.
The United Nations ceded the island nation to the powerful mutant Magneto, after he demanded an entire mutants-only nation. Magneto and his Acolytes managed to reestablish a modicum of peace and stability only briefly until civil war broke out between him and the remaining human population on the island led by the Magistrates. Magneto eventually defeated the Magistrates and restored order to most of the island, with hold-outs briefly remaining at Carrion Cove before being obliterated.
The elimination of the Legacy Virus gave Magneto a freshly healthy army, leading him to declare a campaign of global conquest. A small team of X-Men stopped this plan, badly injuring Magneto in the process (the original issue presented him as being killed, but this was retconned in the "New X-Men" comic book series).
Genosha had a population of sixteen million mutants and a stable, advanced society. However, the entire island was reduced to rubble and its mutant population was slain by Cassandra Nova's Wild Sentinels. There were few survivors, many evacuated, and the Brotherhood of Mutants turned one of the Sentinels into a memorial statue.
Magneto and Xavier have since then joined forces to rebuild the island nation as detailed in the series "Excalibur" vol. 3 (2004). This goes badly as foreign military forces have thrown up a cordon around the island; no one is allowed to enter, and those trying to leave are fired upon.
A few survivors and newly arriving mutants who wish to help with the rebuilding process remain on the island. Members of this volunteer 'army' include Callisto, Freakshow and Wicked. More are found in the surrounding countryside, some join with Xavier. There is a conflict with Magistrates, the island's former law enforcement. Though they are assisted by humanoid creatures they refer to as 'trolls', the Magistrates' forces are driven off. Some of the Magistrates are captured and kept in the island's makeshift jail.
Some of the captured Magistrates agree to work with Xavier to rebuild the island. Throughout the entire series, Unus the Untouchable and his squadron of mutants remain a problem; they do not wish to be part of Xavier's group.
Later, Magneto learned of his daughter the Scarlet Witch's nervous breakdown and her ever-growing power to alter reality. Magneto snatched Wanda from her battle with her fellow Avengers and brought her to Genosha, where he asked Xavier to restore the Scarlet Witch's sanity, but to no avail. The telepath couldn't help her and, concerned about the threat to reality that Wanda posed, Xavier consulted the Avengers and the X-Men about what to do with her. Their decision was rendered moot, though, as by the time they reached Genosha, reality altered around the heroes, changing into the world ruled by the "House of M".
While conventional reality was eventually restored, it came at a high price, as thousands if not millions of Earth’s mutant population lost their powers or died in the process, leaving only a few hundred mutants alive and powered. Just like most of his new Genoshan allies and enemies, Magneto was among the depowered people, remaining trapped on the island.
In the limited series "Son of M," there is a battle between some of the remaining mutants and the Inhumans.
In "New Avengers" #19–20, it was stated that energy can neither be created nor destroyed, and the same held true for the energies of the numerous depowered mutants. Eventually, these energies gathered in the form of an unsuspecting energy-absorbing mutant named Michael Pointer. Dubbed "the Collective" by the Avengers, against whom he then fought, the Collective traveled to Genosha and reached out to the startled Magneto. The Collective, controlled by Xorn, attempted to restore Magneto's powers and convince him to lead the remaining mutants into taking over the planet. To the Collective's surprise, Magneto resisted and allowed the Avengers to separate the energy from his body and send it into the sun. The comatose Magneto was also taken into S.H.I.E.L.D. custody, but the helicopter that was supposed to transport him off Genosha exploded once it lifted off. Magneto's body was not found. It has since been revealed that he survived the explosion and remained depowered until the High Evolutionary's dangerous experiment returned his magnetic abilities.
Genosha is now completely dead. Already in ruins, the once-proud island nation was further destroyed by the battle between the Inhumans and the O*N*E.
Since Magneto was the last person on Genosha, it seems the island is now totally uninhabited, which was corroborated by Wiccan and Speed when they began their search for their mother, the Scarlet Witch: they found Genosha an empty land of destroyed towers and deserted streets.
Selene is seen traveling to the ruined island of Genosha with her followers, who were resurrected by the Technarch transmode virus. She is led there by Blink and Caliban, who tells Selene he senses millions of dead mutants. They enter the ruins, and Selene proclaims Hammer Bay, the devastated capital of the island nation, to be Necrosha, the place where she will become a god.
With Eli Bard, Selene resurrects the massacred residents of Genosha, with Cerebro and Bastion's computers detecting the rise of mutant numbers into the millions. A problem presents itself in that many of the newly resurrected mutants have been de-powered, despite having been killed "before" M-Day. Wither and Mortis explain what happened and the Coven begins to set up base at Necrosha.
Selene is eventually defeated and killed, thus ending the effect of the corrupted Techno-organic virus in the bodies she revived and returning Genosha to an empty land. According to writer Chris Yost, Elixir is still on Necrosha.
During a visit to Genosha by students of Jean Grey's School for Gifted Youngsters, organized by Beast and Kitty Pryde, Sabretooth and AOA Blob assaulted Kid Gladiator and Kid Omega in order to kidnap Genesis. During this time, there was no mention of Elixir living there.
During the "AXIS" storyline, Magneto enters the island of Genosha to find that it had turned into a concentration camp for mutants. He frees two mutant girls who tell him that Red Skull is responsible and possesses Professor X's brain. Magneto attacks Red Skull, but is quickly stopped by the Skull's S-Men. Magneto is captured and telepathically tortured by Red Skull. He is given visions of those closest to him suffering while being unable to do anything to stop it. After being freed by Scarlet Witch, Rogue, and Havok, he bites down on a vial beneath his skin of Mutant Growth Hormone, giving himself enough power to fight. Havok, Rogue, and Scarlet Witch are captured by the Red Skull's S-Men and sent to his concentration camp in Genosha. Rogue (who still has Wonder Man inside her) is able to break the group free. They discover Magneto has been captured, and free him, as well. The three want to leave the island and alert the rest of the Avengers and X-Men of what Red Skull is doing, but Magneto says he's going to stay and fight. Before they can do anything, Red Skull appears. Magneto, Rogue, and the Scarlet Witch fought Red Onslaught in Genosha and are later joined by the Avengers and the X-Men. Iron Man used a telepathic tamperer to stop the Red Skull's influence. When more heroes arrived to help, Red Onslaught revealed that he influenced Stark to create a model of Sentinels, based on the knowledge of different super heroes he acquired after the Civil War before erasing the latter's memories of constructing them. Red Onslaught then deployed his Stark Sentinels to fight the heroes.
As part of the "All-New, All-Different Marvel," Magneto and his team use Genosha as a staging ground for an ambush on the Dark Riders, who have been targeting mutants with healing powers. After defeating the Dark Riders, Magneto ties them up and sets off a bomb that kills them and also levels the entire island, a sign that Magneto had "no intention of 'Laying Low'."
In this reality, there is a prison called Genosha Bay Prison, which is somewhat similar to Guantánamo Bay. It was originally settled by Quaker missionaries, who built a penitentiary to isolate prisoners from each other so they could contemplate the gravity of their sins. By the 1930s, Genosha Bay had become a United States extraterritorial prison holding the worst criminal cases from around the world, and it was notorious for practicing inhumane punishments on its prisoners, ranging from sleep deprivation to water torture. Genosha Bay Prison caught the notice of the public, culminating in a Senate Judiciary meeting to consider closing the prison. Even if the prison were to be closed down, lawmakers were unwilling to allow its more severe criminal sociopaths into America's prisons. In reality, Genosha Bay Prison was used as a proving ground for recruiting the prisoners as a next generation of government operatives.
In the "Ultimate Marvel" reality, Genosha has made an appearance as an island south of Madagascar. Its main export seems to be television programs, notably "Hunt for Justice", under the control of Mojo Adams and his crew. Mutants were recently reduced to second-class citizens after the murder of a government minister, Lord Joseph Scheele, by a mutant called Arthur Centino, aka Longshot, after an affair was revealed between Scheele and Longshot's girlfriend Spiral. Centino is sentenced by Adams and Major Domo to the neighboring island of Krakoa to battle Arcade, but is saved by the X-Men. The island returns in an arc of "Ultimate Spider-Man", where the mutant killer Deadpool and his squad are hired by Adams.
Genosha appears in both "X-Men" and "Dark Phoenix". In "X-Men", it is an uncharted island serving as Magneto's Brotherhood base. In "Dark Phoenix", it is Magneto's safe haven for mutants (such as Red Lotus/Ariki and Selene Gaillo) with no home to return to gifted to him by the U.S. Government. | https://en.wikipedia.org/wiki?curid=13098 |
Grinnell College
Grinnell College is a private liberal arts college in Grinnell, Iowa. It was founded in 1846 when a group of New England Congregationalists established the Trustees of Iowa College. Grinnell is known for its rigorous academics, innovative pedagogy, and commitment to social justice.
Grinnell has the sixth highest endowment-to-student ratio of liberal arts colleges, enabling need-blind admissions and substantial academic merit scholarships to boost socioeconomic diversity. Students receive funding for unpaid or underpaid summer internships and professional development (including international conferences and professional attire). Grinnell participates in a 3–2 engineering dual degree program with Columbia University, Washington University in St. Louis, and California Institute of Technology, a 2–1–1–1 engineering program with Dartmouth College and a Master of Public Health cooperative degree program with University of Iowa.
Nearly half of enrolled Grinnellians self-identify as international students or students of color. Among Grinnell alumni are 14 Rhodes Scholars, 5 Marshall Scholars, 119 Fulbright Scholars (Since 2005), 79 Watson Fellows, 13 Goldwater Scholars, and one Nobel laureate. Its alumni include actor Gary Cooper, Nobel chemist Thomas Cech, Intel co-founder Robert Noyce, jazz musician Herbie Hancock, government administrator Harry Hopkins, and comedian Kumail Nanjiani.
The 120-acre campus includes several listings on the National Register of Historic Places as well as a César Pelli designed ultra-modern student center, integrated academic complexes, and state-of-the-art athletics facilities. Grinnell College also manages significant real estate adjacent to the campus and in the historic downtown, a free-access golf course, and the 365-acre Conard Environmental Research Area.
"U.S. News & World Report" in 2020 ranked Grinnell tied for 14th best overall and 3rd best undergraduate teaching among liberal arts colleges in the U.S.
In 1843, eleven Congregational ministers, all of whom trained at Andover Theological Seminary in Massachusetts, set out to proselytize on the frontier. Each man pledged to gather a church and together the group or band would seek to establish a college. When the group arrived in Iowa later that year, each selected a different town in which to establish a congregation. In 1846, they collectively established Iowa College in Davenport. A few months later, Iowa joined the Union.
The first 25 years of Grinnell's history saw a change in name and location. Iowa College moved farther west from Davenport, Iowa, to the town of Grinnell and unofficially adopted the name of its new home, which itself had been named for one of its founders: an abolitionist minister, Josiah Bushnell Grinnell, to whom journalist Horace Greeley supposedly wrote "Go West, young man, go West." However, Greeley vehemently denied ever saying this to Grinnell, or to anyone. The name of the corporation, "The Trustees of Iowa College," remained, but in 1909 the name "Grinnell College" was adopted by the trustees for the institution.
In its early years, the College experienced setbacks. Although two students received bachelor of arts degrees in 1854 (the first to be granted by a college west of the Mississippi River), within 10 years the Civil War had claimed most of Grinnell's students and professors. In the decade following the war, growth resumed: women were officially admitted as candidates for degrees, and the curriculum was enlarged to include then-new areas of academic studies, such as natural sciences with laboratory work.
In 1882, Grinnell College was struck by a tornado (then called a cyclone), after which the college yearbook was named. The storm devastated the campus and destroyed both College buildings. Rebuilding began immediately, and the determination to expand wasn't limited to architecture: the curriculum was again extended to include departments in political science (the first in the United States) and modern languages.
Grinnell became known as the center of the Social Gospel reform movement, as Robert Handy writes, "The movement centered on the campus of Iowa (now Grinnell) College. Its leading figures were Professor George D. Herron and President George A. Gates". Other firsts pointed to the lighter side of college life: the first intercollegiate football and baseball games west of the Mississippi were played in Grinnell, and the home teams won.
As the 20th century began, Grinnell established a Phi Beta Kappa chapter, introduced the departmental "major" system of study, began Grinnell-in-China (an educational mission that lasted until the Japanese invasion and resumed in 1987), and built a women's residence hall system that became a national model. The social consciousness fostered at Grinnell during these years became evident during Franklin D. Roosevelt's presidency, when Grinnell graduates Harry Hopkins '12, Chester Davis '11, Paul Appleby '13, Hallie Flanagan '11, and Florence Kerr '12 became influential New Deal administrators.
Concern with social issues, educational innovation, and individual expression continue to shape Grinnell. As an example, the school's "5th year travel-service program" preceded the establishment of the Peace Corps by many years. Other recent innovations include first-year tutorials, cooperative pre-professional programs, and programs in quantitative studies and the societal impacts of technology. Every year, the college awards the $100,000 Grinnell College Innovator for Social Justice Prize, which is split between the recipient and their organization.
Grinnell College is located in the town of Grinnell, Iowa, about halfway between Des Moines and Iowa City. The main campus is bounded by 6th Avenue (which is also US Highway 6) on the south, 10th Avenue on the north, East Street on the east and Park Street on the west. The campus contains sixty-three buildings ranging in style from Collegiate Gothic to Bauhaus. Goodnow Hall and Mears Cottage (1889) are listed on the National Register of Historic Places. Immediately west of the college is the North Grinnell Historic District, which contains over 200 National Register of Historic Places contributing buildings.
The residential part of campus is divided into three sections: North Campus, East Campus, and South Campus. Dormitories on North and South Campus are modeled explicitly after the residential colleges of Oxford and Cambridge. The four East Campus dormitories were designed by William Rawn Associates and feature a modern, LEED-certified design constructed from Iowa limestone.
All three campuses feature dormitory buildings connected by loggia, an architectural signature of the college. The loggia on South Campus is the only entirely closed loggia, featuring walls on all sides, while the loggias on East and North Campus are only partially closed. From the time that the first dorm opened in 1915 until the fall of 1968, the nine North Campus dorms were used exclusively for male students, and the six South Campus dorms were reserved for female students. The dorm halls house significantly fewer students than halls at other colleges.
Most academic buildings are located on the southwestern quarter of campus. The athletic facilities are mostly located on the northeastern quarter, and some facilities are located north of 10th Avenue.
In addition to the main campus, the college owns much of the adjacent property. Many administrative offices are located in converted houses across Park Street near the older academic buildings, and several residences are used for college-owned off-campus student housing.
The college maintains an environmental research area called the Conard Environmental Research Area (CERA). The U.S. Green Building Council awarded CERA's Environmental Education Center a gold certification. The building is the first in Iowa to receive the designation.
During the 2000s, the College completed the Charles Benson Bear '39 Recreation and Athletic Center, the Bucksbaum Center for the Arts, the renovation of the Robert Noyce '49 Science Center and the Joe Rosenfield '25 Student Center. Internationally renowned architect César Pelli designed the athletics center, the Joe Rosenfield '25 Student Center, and the Bucksbaum Center for the Arts.
The college has recently embarked on a significant period of new construction, which is expected to last until 2034. The first phase of this construction process includes a comprehensive landscaping update, a new Admissions and Financial Aid building, and the Humanities and Social Sciences Complex (HSSC). This first phase will cost $140 million and is projected for completion in mid-2020.
Grinnell College is considered one of the 30 Hidden Ivies.
The 2020 annual ranking of "U.S. News & World Report" rates it tied for the 14th best liberal arts college overall in the U.S., 3rd for "Best Undergraduate Teaching", 10th for "Best Value", and tied for 11th for "Most Innovative". The College has been consistently ranked in the top 25 liberal arts colleges in the nation since the publication began in 1983. "Kiplinger's Personal Finance" ranks Grinnell 9th in its 2016 ranking of "best value" liberal arts colleges in the United States. Grinnell was ranked 15th in the 2015 "Washington Monthly" rankings, which focus on key outputs such as research, dollar value of scientific grants won, the number of graduates going on to earn Ph.D. degrees, and certain types of public service. In "Forbes" magazine's 2018 rankings of academic institutions, "America's Top Colleges" (which uses a non-traditional ranking system based on RateMyProfessors.com evaluations, notable alumni, student debt, percentage of students graduating in four years, and the number of students or faculty receiving prestigious awards), Grinnell College was ranked 57th among all colleges and universities, 28th among liberal arts colleges, and 8th in the Midwest.
Grinnell College graduates enjoy a high acceptance rate to law school; over 46% of all applications submitted by students have been accepted by law schools.
Grinnell had 175 full-time faculty in Fall 2019, 173 of whom possess a doctorate or the terminal degree in their field.
Grinnell's open curriculum encourages students to take initiative and to assume responsibility for their own courses of study. The sole core, or general education, requirement is the completion of the First-Year Tutorial, a one-semester, four-credit special topics seminar that stresses methods of inquiry, critical analysis, and writing skills. All other classes are chosen by the student with the direct guidance of a faculty member in the student's major department.
The academic program at Grinnell College emphasizes active learning and one-on-one interactions between faculty members and students. There are few large lecture classes. In sharp contrast to all public universities and many private universities in the United States, no classes, labs or other courses are taught by graduate students.
Grinnell College expects all students to possess significant academic achievements. For example, the math department does not offer any basic-level classes such as college algebra, trigonometry, or pre-calculus, and remedial classes are not offered in any subject. However, several independent, non-credit programs assist students who need help in a specific subject. Among these programs are the Library Lab, Math Lab, Reading Lab, Science Learning Center, and the Writing Lab. While private tutors can be hired, participation in these programs is free for any enrolled student.
Grinnell has twenty-six major departments and ten interdisciplinary concentrations. Popular majors include psychology, economics, biology, history, English, and political science. The minimum requirements in a major area of study are typically limited to 32 credits in a single department, with some departments additionally requiring a small number of classes in related fields that are deemed critical for all students in that field. For example, the biology program requires 32 credits in the biology department plus two classes in chemistry and one in math. Many students exceed the minimum requirements.
To graduate, students are normally expected to complete at least 32 credits in a major field and a total of 124 credits of academic work. To encourage students to explore courses outside of their primary interest area, no more than 48 credits in one department and no more than 92 credits in one division are counted towards this requirement.
Grinnell's commitment to the importance of off-campus study reflects the school's emphasis on social and political awareness and the international nature of its campus. Approximately 60 percent of all Grinnell students participate in at least one of more than seventy off-campus programs, including the Grinnell-in-London program and study tours of China, France, Greece, and Russia. These study programs in Europe (including Russia), Africa, the Near East, and Asia, as well as nine programs in Central and South America, provide the opportunity for research in many disciplines, from archaeology to education to mathematics. In addition to off-campus programs, Grinnell offers internship programs in such areas as urban studies, art, and marine biology for students interested in field-based learning and experience in professional settings. Second- and third-year students may apply for summer internship grants and receive credit for the experience. Semester programs in the United States include those at Oak Ridge National Laboratory, Newberry Library, National Theatre Institute, and Grinnell-in-Washington, D.C.
Grinnell also has invested in several interdisciplinary programs: the Center for Prairie Studies, Center for the Humanities, Center for International Studies, Noun Program in Women's Studies, Peace Studies Program, Rosenfield Public Affairs Program, and the Donald L. Wilson Program in Enterprise and Leadership.
"U.S. News & World Report" classifies Grinnell's selectivity as "most selective." For Fall 2019, Grinnell received 8,004 freshmen applications; 1,847 were admitted (23.1%). The middle 50% range of SAT scores for the enrolled freshmen was 670–740 for critical reading and 700–790 for math, while the ACT Composite range was 31–34.
Grinnell College's admission selectivity rating, according to The Princeton Review in 2018, is a 95 out of 99. This rating is determined by several institutionally reported factors, including: the class rank, average standardized test scores, and average high school GPA of entering freshmen; the percentage of students who hail from out-of-state; and the percentage of applicants accepted.
The primary factor in evaluating applicants is the quality of the education they have received, as shown by their transcript. Additional factors include standardized test scores, student writing skills, recommendations, and extracurricular activities.
Early decision rounds are offered to students in the fall; most students apply in January of their final year in high school. Admission decisions are released by April 1 of each year. All students begin classes in August.
The students' expectation of needing financial assistance does not affect the admission process.
Despite the growing trend of U.S. students taking five or more years to finish an undergraduate degree, Grinnell College is strongly oriented towards students being enrolled full-time in exactly eight consecutive semesters at the college, although exceptions are available for medical issues and other emergencies. To avoid being suspended from the college, students must make "normal progress towards graduation." This generally means that the student must pass at least 12 credits of classes in each individual semester, with grades C or higher, and have accumulated enough credits to make graduation possible at the end of four years, which requires an average of 15.5 credits each semester. A student who is not making normal progress towards graduation is placed on academic probation and may be dismissed from the college.
Nationwide, only 20% of college students complete a four-year undergraduate degree within four years, and only 57% of college students graduate within six years. However, at Grinnell College, 84% of students graduate within four years. This is the highest graduation rate of any college in Iowa.
Grinnell's combined tuition, room, board, and fees for the 2019–2020 academic year is $67,646. Tuition and fees are $54,354 and room and board are $13,292.
Grinnell College is one of a few dozen US colleges that maintain need-blind admissions and meets the full demonstrated financial need of all U.S. residents who are admitted to the college. Grinnell offers a large amount of need-based and merit-based aid in comparison with peer institutions. About 90% of students receive some form of financial aid. In 2013–2014, 24% of students enrolled at Grinnell College were receiving federal Pell Grants, which are generally reserved for students from low-income families. The average financial aid package is over $26,000.
With the first-year students enrolled in the 2006–2007 school year, Grinnell ended its need-blind admissions policy for international applicants. Under the old policy, students from countries outside the U.S. were admitted without any consideration of their ability to afford four years of study at the college. However, financial aid offers to these students were limited to half the cost of tuition. International students frequently carried very high workloads in an effort to pay the bills, and their academic performance often suffered. Under the new "need-sensitive" or "need-aware" policy, international students whose demonstrated financial needs can be met are given a slight admissions edge over applicants whose needs cannot be met. The twin hopes are that the enrolled international students will be able to dedicate more energy to their schoolwork, and also that this will ultimately allow the college to provide higher tuition grants to international students.
Additionally, several extremely competitive "special scholarships" were set up to meet the full demonstrated financial needs for students from the following countries or regions: Africa, Eastern and Central Europe, Latin America, Middle East and Asia, Nepal, the People's Republic of China, as well as for native speakers of Russian regardless of citizenship, available every other year.
According to data for students enrolled approximately in 2008, the median family income for students was US$119,700 (74th percentile). This is somewhat lower than typical for other highly selective schools. Compared to other schools in the Midwest Conference and to other highly selective schools, Grinnell College enrolled more students whose family income was in the lowest quintile (6.3% of enrolled students).
The school's varsity sports teams are named the Pioneers. They participate in eighteen intercollegiate sports at the NCAA Division III level and in the Midwest Conference. In addition, Grinnell has several club sports teams that compete in non-varsity sports such as volleyball, sailing, water polo, ultimate and rugby union.
Nearly one-third of recent Grinnell graduates participated in at least one varsity sport while attending the college, and the college has led the Midwest Conference in the total number of Academic All-Conference honorees in the last six years.
The Grinnell Pioneers won the first game of intercollegiate football west of the Mississippi when they beat the University of Iowa 24–0 on November 16, 1889. A stone marker still stands in Grinnell Field marking the event.
The men's water polo team, known as the Wild Turkeys, was runner-up in the 2007 College Water Polo Association (CWPA) Division III Collegiate National Club Championships hosted by Lindenwood University in St. Charles, Missouri. The team also qualified for the tournament in 2008, 2009, 2011, 2013, and 2014. The Men's Ultimate team, nicknamed the Grinnellephants, qualified in 2008 for its first Division III National Championship in Versailles, Ohio. The Women's Ultimate team, nicknamed The Sticky Tongue Frogs, tied for third place in the 2010 Division III National Championship in Appleton, Wisconsin. The success was repeated in 2011, when the men's team placed third in the Division III National Championship in Buffalo.
In February 2005, Grinnell became the first Division III school featured in a regular season basketball game by the ESPN network family in 30 years when it faced off against the Beloit Buccaneers on ESPN2. Grinnell lost 86–85. Grinnell College's basketball team attracted ESPN due to the team's run and gun style of playing basketball, known in Grinnell simply as "The System." Coach Dave Arseneault originated the Grinnell System, which incorporates a continual full-court press, a fast-paced offense, an emphasis on offensive rebounding, a barrage of three-point shots, and substitutions of five players at a time every 35 to 40 seconds. This allows a higher average playing time for more players than the "starters" and suits the Division III goals of scholar-athletes. "The System" has been criticized for not teaching the principles of defense. However, under "The System," Grinnell has won three conference championships over the past ten years and has regularly placed in the top half of the conference. Coach Arseneault's teams have set numerous NCAA scoring records, and several individuals on the Grinnell team have led the nation in scoring or assists.
On November 19, 2011, Grinnell player Griffin Lentsch set a new Division III individual scoring record in a game against Principia College. The guard scored 89 points, besting the old record of 77, also set by a Pioneers player—Jeff Clement—in 1998. Lentsch made 27 of his 55 shots, including 15 three-pointers, as Grinnell won the high-scoring game 145 to 97. On November 20, 2012, Grinnell's Jack Taylor broke Lentsch's scoring record, as well as the records for NCAA and collegiate scoring, in a 179–104 victory over Faith Baptist Bible. Taylor scored 138 points on 108 shots, along with 3 rebounds, 6 turnovers, and 3 steals. Taylor went 27–71 from behind the arc. Taylor scored 109 points in a November 2013 game against Crossroads College to become the first player in NCAA history to have two 100-point games.
Students at Grinnell adhere to an honor system known as "self-governance" wherein they are expected to govern their own choices and behavior with minimal direct intervention by the college administration. By cultivating a community based on freedom of choice, self-governance aims to encourage students to become responsible, respectful, and accountable members of the campus, town, and global community.
The organizational structure of the Student Government Association, wielding a yearly budget of over $450,000 and unusually strong administrative influence, covers almost all aspects of student activity and campus life.
Founded in November 2000, the student-run Student Endowment Investing Group (SEIG) actively invests over $100,000 of Grinnell College's endowment in the stock market. The group's mission is to provide interested students with valuable experience for future careers in finance.
Service organizations are popular. The Alternative Break ("AltBreak") program takes students to pursue service initiatives during school holidays, and as of 2005, Grinnell had more alumni per capita serving in the Peace Corps than any other college in the nation. The college also runs its own post-graduation service program known as Grinnell Corps in Grinnell, China, Namibia, New Orleans, and Thailand, and has previously operated programs in Greece, Lesotho, Macau, and Nepal.
The "Scarlet and Black" is the campus newspaper and KDIC (88.5 FM) is the student-run radio station. The "Scarlet and Black", or the "S&B", was the first college newspaper west of the Mississippi River and is currently in its 130th year of publication. The newspaper, typically 16 tabloid pages in length, is published in print most Fridays of the school year and online. Students primarily write the newspaper, although occasional letters from alumni or faculty are included. Funding comes from student fees and advertisers.
The school also has a bi-weekly satirical newspaper, "The B&S", which features articles about current events both on and off campus. "The B&S" satirizes social and political issues in articles, graphics, and crosswords.
In April 2007, Grinnell College students founded the Social Entrepreneurs of Grinnell, a student-operated microfinance lending institution. The group collects donations for the purpose of making small loans at zero interest to business owners and artisans around the world. It is affiliated with kiva.org.
Grinnell also has an entirely student-run textbook lending library on campus. Aimed at the economically disadvantaged yet open to all, it allows students to check out books for the semester for free, defraying the high cost of college textbooks. The library has no funding, relying solely on donated books. Since its founding in 2005, the collection has grown to thousands of books due to the generosity of the campus community. The library has expanded to include caps and gowns, which are lent out to graduating seniors every spring.
Grinnell hosts the Titular Head student film festival.
In 2016, Grinnell students founded the Union of Grinnell Student Dining Workers, or UGSDW, to represent student workers in the college's dining hall. It was the first undergraduate student union at a private college in the United States. In September 2017, UGSDW announced their intention to expand the union to all student workers and create "the most unionized campus in the country", which, if successful, would be another nationwide first. The college administration said that complete unionization "would negatively impact Grinnell's mission and culture — shifting away from an individually advised, experiential, residential, liberal arts education in which work on campus plays a major educational role." The UGSDW withdrew their petition for expansion from NLRB consideration on December 14, 2018, after the College filed an appeal which UGSDW feared would set legal precedent against the rights of graduate student organizers at other universities.
Global warming controversy
The global warming controversy concerns the public debate over whether global warming is occurring, how much has occurred in modern times, what has caused it, what its effects will be, whether any action can or should be taken to curb it, and if so what that action should be. In the scientific literature, there is a strong consensus that global surface temperatures have increased in recent decades and that the trend is caused by human-induced emissions of greenhouse gases. No scientific body of national or international standing disagrees with this view, though a few organizations with members in extractive industries hold non-committal positions, and some have attempted to convince the public that climate change is not happening, or that, if the climate is changing, it is not because of human influence, in an attempt to sow doubt about the scientific consensus.
The controversy is, by now, political rather than scientific: there is a scientific consensus that global warming is happening and is caused by human activity. Disputes over the key scientific facts of global warming are more prevalent in the media than in the scientific literature, where such issues are treated as resolved, and such disputes are more prevalent in the United States than globally.
Political and popular debate concerning the existence and cause of global warming includes the reasons for the increase seen in the instrumental temperature record, whether the warming trend exceeds normal climatic variations, and whether human activities have contributed significantly to it. Scientists have resolved these questions decisively in favour of the view that the current warming trend exists and is ongoing, that human activity is the cause, and that it is without precedent in at least 2000 years. Public disputes that also reflect scientific debate include estimates of how responsive the climate system might be to any given level of greenhouse gases (climate sensitivity), how the climate will change at local and regional scales, and what the consequences of global warming will be.
Global warming remains an issue of widespread political debate, often split along party political lines, especially in the United States. Many of the issues that are settled within the scientific community, such as human responsibility for global warming, remain the subject of politically or economically motivated attempts to downplay, dismiss or deny them—an ideological phenomenon categorised by academics and scientists as climate change denial. The sources of funding for those involved with climate science opposing mainstream scientific positions have been questioned. There are debates about the best policy responses to the science, their cost-effectiveness and their urgency. Climate scientists, especially in the United States, have reported government and oil-industry pressure to censor or suppress their work and hide scientific data, with directives not to discuss the subject in public communications. Legal cases regarding global warming, its effects, and measures to reduce it have reached American courts. The fossil fuels lobby has been identified as overtly or covertly supporting efforts to undermine or discredit the scientific consensus on global warming.
In the United States, the mass media devoted little coverage to global warming until the drought of 1988, and James E. Hansen's testimony to the Senate, which explicitly attributed "the abnormally hot weather plaguing our nation" to global warming.
The British press also changed its coverage at the end of 1988, following a speech by Margaret Thatcher to the Royal Society advocating action against human-induced climate change. According to Anabela Carvalho, an academic analyst, Thatcher's "appropriation" of the risks of climate change to promote nuclear power, in the context of the dismantling of the coal industry following the 1984–1985 miners' strike was one reason for the change in public discourse. At the same time environmental organizations and the political opposition were demanding "solutions that contrasted with the government's". In May 2013 Charles, Prince of Wales took a strong stance criticising both climate change deniers and corporate lobbyists by likening the Earth to a dying patient. "A scientific hypothesis is tested to absolute destruction, but medicine can't wait. If a doctor sees a child with a fever, he can't wait for [endless] tests. He has to act on what is there."
Many European countries took action to reduce greenhouse gas emissions before 1990. West Germany started to take action after the Green Party took seats in Parliament in the 1980s. All countries of the European Union ratified the 1997 Kyoto Protocol. Substantial activity by NGOs took place as well. The United States Energy Information Administration reports that, in the United States, "The 2012 downturn means that emissions are at their lowest level since 1994 and over 12% below the recent 2007 peak."
The theory that increases in greenhouse gases would lead to an increase in temperature was first proposed by the Swedish chemist Svante Arrhenius in 1896, but climate change did not arise as a political issue until the 1990s, nearly a century later.
In Europe, the notion of human influence on climate gained wide acceptance more rapidly than in the United States and other countries. A 2009 survey found that Europeans rated climate change as the second most serious problem facing the world, after "poverty, the lack of food and drinking water" and ahead of "a major global economic downturn". 87% of Europeans considered climate change to be a very serious or serious problem, while ten per cent did not consider it a serious problem.
In 2007, the BBC announced the cancellation of a planned television special Planet Relief, which would have highlighted the global warming issue and included a mass electrical switch-off. The editor of BBC's Newsnight current affairs show said: "It is absolutely not the BBC's job to save the planet. I think there are a lot of people who think that, but it must be stopped." Author Mark Lynas said "The only reason why this became an issue is that there is a small but vociferous group of extreme right-wing climate 'sceptics' lobbying against taking action, so the BBC is behaving like a coward and refusing to take a more consistent stance."
The authors of the 2010 book "Merchants of Doubt" provide documentation for the assertion that professional deniers have tried to sow seeds of doubt in public opinion in order to halt any meaningful social or political progress to reduce the impact of human carbon emissions. The fact that only half of the American population believes global warming is caused by human activity could be seen as a victory for these deniers. One of the authors' main arguments is that most prominent scientists who have been voicing opposition to the near-universal consensus are being funded by industries, such as automotive and oil, that stand to lose money by government actions to regulate greenhouse gases.
A compendium of poll results on public perceptions about global warming is below.
In 2007, Ipsos MORI published a report on public perceptions of climate change in the United Kingdom.
The Canadian science broadcaster and environmental activist David Suzuki reports that focus groups organized by the David Suzuki Foundation in 2006 showed that the public has a poor understanding of the science behind global warming. This is despite publicity through different means, including the films "An Inconvenient Truth" and "The 11th Hour".
An example of the poor understanding is public confusion between global warming and ozone depletion or other environmental problems.
A 15-nation poll conducted in 2006, by Pew Global found that there "is a substantial gap in concern over global warming—roughly two-thirds of Japanese (66%) and Indians (65%) say they personally worry a great deal about global warming. Roughly half of the populations of Spain (51%) and France (46%) also express great concern over global warming, based on those who have heard about the issue. But there is no evidence of alarm over global warming in either the United States or China—the two largest producers of greenhouse gases. Just 19% of Americans and 20% of the Chinese who have heard of the issue say they worry a lot about global warming—the lowest percentages in the 15 countries surveyed. Moreover, nearly half of Americans (47%) and somewhat fewer Chinese (37%) express little or no concern about the problem."
A 47-nation poll by Pew Global Attitudes conducted in 2007 found, "Substantial majorities in 25 of 37 countries say global warming is a 'very serious' problem."
There are differences between the opinion of scientists and that of the general public. A 2009 poll, in the US by Pew Research Center found "[w]hile 84% of scientists say the earth is getting warmer because of human activity such as burning fossil fuels, just 49% of the public agrees". A 2010 poll in the UK for the BBC showed "Climate scepticism on the rise". Robert Watson found this "very disappointing" and said "We need the public to understand that climate change is serious so they will change their habits and help us move towards a low carbon economy."
A 2012 Canadian poll found that 32% of Canadians said they believe climate change is happening because of human activity, while 54% said they believe it is happening partly because of human activity and partly because of natural climate variation. 9% believe climate change is occurring due to natural climate variation, and only 2% said they do not believe climate change is occurring at all.
Many of the critics of the consensus view on global warming have disagreed, in whole or part, with the scientific consensus regarding other issues, particularly those relating to environmental risks, such as ozone depletion, DDT, and passive smoking. Chris Mooney, author of "The Republican War on Science", has argued that the appearance of overlapping groups of skeptical scientists, commentators and think tanks in seemingly unrelated controversies results from an organized attempt to replace scientific analysis with political ideology. Mooney says that the promotion of doubt regarding issues that are politically, but not scientifically, controversial became increasingly prevalent under the George W. Bush administration, which, he says, regularly distorted and/or suppressed scientific research to further its own political aims. This is also the subject of a 2004 book by environmental lawyer Robert F. Kennedy, Jr. titled "Crimes Against Nature: How George W. Bush and Corporate Pals are Plundering the Country and Hijacking Our Democracy". Another book on this topic is "The Assault on Reason" by former Vice President of the United States Al Gore. Earlier instances of this trend are also covered in the book "The Heat Is On" by Ross Gelbspan.
Some critics of the scientific consensus on global warming have argued that these issues should not be linked and that reference to them constitutes an unjustified ad hominem attack. Political scientist Roger Pielke, Jr., responding to Mooney, has argued that science is inevitably intertwined with politics.
In 2015, according to "The New York Times" and others, oil companies had known since the 1970s that burning oil and gas could cause global warming, but nonetheless funded deniers for years.
The findings that the climate has warmed in recent decades and that human activities are producing global climate change have been endorsed by every national science academy that has issued a statement on climate change, including the science academies of all of the major industrialized countries.
Attribution of recent climate change discusses how global warming is attributed to anthropogenic greenhouse gases (GHGs).
Scientific consensus is normally achieved through communication at conferences, publication in the scientific literature, replication (reproducible results by others), and peer review. In the case of global warming, many governmental reports, the media in many countries, and environmental groups, have stated that there is virtually unanimous scientific agreement that human-caused global warming is real and poses a serious concern.
As noted above, climate models are able to simulate the temperature record of the past century only when GHG forcing is included, consistent with the findings of the IPCC, which has stated: "Greenhouse gas forcing, largely the result of human activities, has very likely caused most of the observed global warming over the last 50 years".
The "standard" set of scenarios for future atmospheric greenhouse gases is the IPCC SRES scenarios. The purpose of the range of scenarios is not to predict what exact course the future of emissions will take, but what it may take under a range of possible population, economic and societal trends. Climate models can be run using any of the scenarios as inputs to illustrate the different outcomes for climate change. No one scenario is officially preferred, but in practice the "A1b" scenario, roughly corresponding to 1%/year growth in atmospheric CO2, is often used for modelling studies.
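As a back-of-the-envelope check on that figure, compound growth of 1% per year implies a CO2 doubling time of roughly 70 years (the familiar "rule of 70"). A minimal sketch of the arithmetic:

```python
import math

def doubling_time(rate):
    """Years for a quantity growing at `rate` per year to double:
    solve (1 + rate)**t = 2  =>  t = ln(2) / ln(1 + rate)."""
    return math.log(2) / math.log(1 + rate)

# The ~1%/year growth of an A1b-style scenario
t = doubling_time(0.01)
print(f"Doubling time at 1%/year: {t:.1f} years")  # → 69.7 years
```

So an A1b-style emissions path reaches a doubling of atmospheric CO2 on a timescale of about seven decades, which is why it serves as a convenient benchmark scenario.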
There is debate about the various scenarios for fossil fuel consumption. Global warming skeptic Fred Singer stated "some good experts believe" that atmospheric CO2 concentration will not double since economies are becoming less reliant on carbon.
However, the Stern report, like many other reports, notes the past correlation between CO2 emissions and economic growth and then extrapolates using a "business as usual" scenario to predict GDP growth and hence CO2 levels, concluding that:
Increasing scarcity of fossil fuels alone will not stop emissions growth in time. The stocks of hydrocarbons that are profitable to extract are more than enough to take the world to CO2 levels well beyond 750 ppm, with very dangerous consequences for climate change impacts.
According to a 2006 paper from Lawrence Livermore National Laboratory, "the earth would warm by 8 degrees Celsius (14.4 degrees Fahrenheit) if humans use the entire planet's available fossil fuels by the year 2300."
On 12 November 2015, NASA scientists reported that human-made carbon dioxide (CO2) has risen to levels not seen in hundreds of thousands of years: currently, about half of the carbon dioxide released from the burning of fossil fuels remains in the atmosphere and is not absorbed by vegetation and the oceans.
Scientists opposing the mainstream scientific assessment of global warming express varied opinions concerning the cause of global warming. Some say only that it has not yet been ascertained whether humans are the primary cause of global warming; others attribute global warming to natural variation; ocean currents; increased solar activity or cosmic rays. The consensus position is that solar radiation may have increased by 0.12 W/m2 since 1750, compared to 1.6 W/m2 for the net anthropogenic forcing. The TAR said, "The combined change in radiative forcing of the two major natural factors (solar variation and volcanic aerosols) is estimated to be negative for the past two, and possibly the past four, decades." The AR4 makes no direct assertions on the recent role of solar forcing, but the previous statement is consistent with the AR4's figure 4.
A few studies say that the present level of solar activity is historically high as determined by sunspot activity and other factors. Solar activity could affect climate either by variation in the Sun's output or, more speculatively, by an indirect effect on the amount of cloud formation. Solanki and co-workers suggest that solar activity for the last 60 to 70 years may be at its highest level in 8,000 years; however, they said "that solar variability is unlikely to have been the dominant cause of the strong warming during the past three decades", and concluded that "at the most 30% of the strong warming since [1970] can be of solar origin". Muscheler "et al." disagreed with the study, suggesting that other comparably high levels of activity have occurred several times in the last few thousand years.
George Dantzig
George Bernard Dantzig (November 8, 1914 – May 13, 2005) was an American mathematical scientist who made contributions to industrial engineering, operations research, computer science, economics, and statistics.
Dantzig is known for his development of the simplex algorithm, an algorithm for solving linear programming problems, and for his other work with linear programming. In statistics, Dantzig solved two open problems in statistical theory, which he had mistaken for homework after arriving late to a lecture by Jerzy Neyman.
At his death, Dantzig was the Professor Emeritus of Transportation Sciences and Professor of Operations Research and of Computer Science at Stanford University.
Born in Portland, Oregon, George Bernard Dantzig was named after George Bernard Shaw, the Irish writer. He was born to Jewish parents; his father, Tobias Dantzig, was a mathematician and linguist, and his mother, Anja Dantzig (née Ourisson), was a linguist of French-Jewish origin. Dantzig's parents met during their study at the University of Paris, where Tobias studied mathematics under Henri Poincaré, after whom Dantzig's brother was named. The Dantzigs immigrated to the United States, where they settled in Portland, Oregon.
Early in the 1920s the Dantzig family moved from Baltimore to Washington, D.C. His mother became a linguist at the Library of Congress, and his father became a math tutor at the University of Maryland, College Park. Dantzig attended Powell Junior High School and Central High School; one of his friends there was Abraham Seidenberg, who also became a mathematician. By the time he reached high school, he was already fascinated by geometry, and this interest was further nurtured by his father, who challenged him with complicated problems, particularly in projective geometry.
George Dantzig received his B.S. in mathematics and physics from the University of Maryland in 1936, in what is now part of the University of Maryland College of Computer, Mathematical, and Natural Sciences. He earned his master's degree in mathematics from the University of Michigan in 1938. After a two-year period at the Bureau of Labor Statistics, he enrolled in the doctoral program in mathematics at the University of California, Berkeley, where he studied statistics under Jerzy Neyman.
With the outbreak of World War II, Dantzig took a leave of absence from the doctoral program at Berkeley to work as a civilian for the United States Army Air Forces. From 1941 to 1946, he was head of the combat analysis branch of the Headquarters Statistical Control for the Army Air Forces. In 1946, he returned to Berkeley to complete the requirements of his program and received his Ph.D. that year. Although he had a faculty offer from Berkeley, he returned to the Air Force as mathematical advisor to the comptroller.
In 1952, Dantzig joined the mathematics division of the RAND Corporation. In 1960, he became a professor in the Department of Industrial Engineering at UC Berkeley, where he founded and directed the Operations Research Center. In 1966 he joined the Stanford faculty as Professor of Operations Research and of Computer Science. A year later, the Program in Operations Research became a full-fledged department. In 1973 he founded the Systems Optimization Laboratory (SOL) there. On a sabbatical leave that year, he managed the Methodology Group at the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria. Later he became the C. A. Criley Professor of Transportation Sciences at Stanford University.
He was a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences. Dantzig was the recipient of many honors, including the first John von Neumann Theory Prize in 1974, the National Medal of Science in 1975, and an honorary doctorate from the University of Maryland, College Park, in 1976. The Mathematical Programming Society honored Dantzig by creating the George B. Dantzig Prize, bestowed every three years since 1982 on one or two people who have made a significant impact in the field of mathematical programming. He was elected to the 2002 class of Fellows of the Institute for Operations Research and the Management Sciences.
Freund wrote that "through his research in mathematical theory, computation, economic analysis, and applications to industrial problems, Dantzig contributed more than any other researcher to the remarkable development of linear programming".
Dantzig's work allows the airline industry, for example, to schedule crews and make fleet assignments. Based on his work, tools were developed "that shipping companies use to determine how many planes they need and where their delivery trucks should be deployed. The oil industry long has used linear programming in refinery planning, as it determines how much of its raw product should become different grades of gasoline and how much should be used for petroleum-based byproducts. It is used in manufacturing, revenue management, telecommunications, advertising, architecture, circuit design and countless other areas".
An event in Dantzig's life became the origin of a famous story in 1939, while he was a graduate student at UC Berkeley. Near the beginning of a class for which Dantzig was late, professor Jerzy Neyman wrote two examples of famously unsolved statistics problems on the blackboard. When Dantzig arrived, he assumed that the two problems were a homework assignment and wrote them down. According to Dantzig, the problems "seemed to be a little harder than usual", but a few days later he handed in completed solutions for the two problems, still believing that they were an assignment that was overdue.
Six weeks later, Dantzig received a visit from an excited professor Neyman, who was eager to tell him that the homework problems he had solved were two of the most famous unsolved problems in statistics. He had prepared one of Dantzig's solutions for publication in a mathematical journal. As Dantzig told it in a 1986 interview in the "College Mathematics Journal":
A year later, when I began to worry about a thesis topic, Neyman just shrugged and told me to wrap the two problems in a binder and he would accept them as my thesis.
Years later another researcher, Abraham Wald, was preparing to publish an article that arrived at a conclusion for the second problem, and included Dantzig as its co-author when he learned of the earlier solution.
This story began to spread and was used as a motivational lesson demonstrating the power of positive thinking. Over time Dantzig's name was removed, and facts were altered, but the basic story persisted in the form of an urban legend and as an introductory scene in the movie "Good Will Hunting".
Linear programming is a mathematical method for determining a way to achieve the best outcome (such as maximum profit or lowest cost) in a given mathematical model for some list of requirements represented as linear relationships. Linear programming arose as a mathematical model developed during World War II to plan expenditures and returns in order to reduce costs to the army and increase losses to the enemy. It was kept secret until 1947. Postwar, many industries found its use in their daily planning.
The founders of this subject are Leonid Kantorovich, a Russian mathematician who developed linear programming problems in 1939, Dantzig, who published the simplex method in 1947, and John von Neumann, who developed the theory of the duality in the same year.
Dantzig's original example of finding the best assignment of 70 people to 70 jobs exemplifies the usefulness of linear programming. The computing power required to test all the permutations to select the best assignment is vast; the number of possible configurations exceeds the number of particles in the universe. However, it takes only a moment to find the optimum solution by posing the problem as a linear program and applying the Simplex algorithm. The theory behind linear programming drastically reduces the number of possible optimal solutions that must be checked.
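The scale of that claim is easy to verify: 70! is about 1.2 × 10^100, far more than the roughly 10^80 particles estimated in the observable universe. The brute-force search that the simplex method sidesteps can be sketched on a toy instance (the 3×3 cost matrix below is an invented example, not Dantzig's):

```python
import math
from itertools import permutations

# 70 workers assigned to 70 jobs: 70! possible assignments,
# which exceeds the ~1e80 particles in the observable universe.
assert math.factorial(70) > 10**80

# Brute force on a tiny 3x3 instance: cost[i][j] = cost of worker i doing job j.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]

# Enumerate all 3! = 6 assignments; at 70 workers this approach is hopeless.
best_cost, best_perm = min(
    (sum(cost[i][j] for i, j in enumerate(perm)), perm)
    for perm in permutations(range(3))
)
print(best_cost, best_perm)  # → 5 (1, 0, 2): worker 0 -> job 1, 1 -> job 0, 2 -> job 2
```

Formulated instead as a linear program, the assignment problem is solved in polynomial time by the simplex method in practice, which is exactly the reduction in search effort the paragraph above describes.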
In 1963, Dantzig's "Linear Programming and Extensions" was published by Princeton University Press. Rich in insight and coverage of significant topics, the book quickly became "the bible" of linear programming.
Dantzig married Anne S. Shmuner in 1936, the year he received his bachelor's degree. He died on May 13, 2005, at his home in Stanford, California, of complications from diabetes and cardiovascular disease. He was 90 years old.
Books by George Dantzig:
Book chapters:
Articles, a selection:
Geoff Ryman
Geoffrey Charles Ryman (born 1951) is a Canadian writer of science fiction, fantasy, slipstream and historical fiction.
Ryman was born in Canada and moved to the United States at age 11. He earned degrees in History and English at UCLA, then moved to England in 1973, where he has lived most of his life. He is gay.
In addition to being an author, Ryman started a web design team for the UK government at the Central Office of Information in 1994. He also led the teams that designed the first official British Monarchy and 10 Downing Street websites, and worked on the UK government's flagship website www.direct.gov.uk.
Ryman says he knew he was a writer "before [he] could talk", with his first work published in his mother's newspaper column at six years of age.
He is best known for his science fiction; however, his first novel was the fantasy "The Warrior Who Carried Life", and his revisionist fantasy of "The Wizard of Oz", "Was...", has been called "his most accomplished work".
Much of Ryman's work is based on travels to Cambodia. The first of these, "The Unconquered Country" (1986) was winner of the World Fantasy Award and British Science Fiction Association Award. His novel "The King's Last Song" (2006) was set both in the Angkor Wat era and the time after Pol Pot and the Khmer Rouge.
Ryman has written, directed and performed in several plays based on works by other writers.
He was guest of honour at Novacon in 1989 and has twice been a guest speaker at Microcon, in 1994 and in 2004. He was also the guest of honour at the national Swedish science fiction convention Swecon in 2006, at Gaylaxicon 2008, at Wiscon 2009, and at Åcon 2010.
Mundane science fiction is a subgenre of science fiction focusing on stories set on or near the Earth, with a believable use of technology and science as it exists at the time the story is written. The Mundane SF movement was founded in 2002 during the Clarion workshop by Ryman and others. In 2008 a Mundane SF issue of "Interzone" magazine was published, guest-edited by Ryman, Julian Todd and Trent Walters.
Ryman currently lectures in Creative Writing for University of Manchester's English Department.
As of 2008 he was at work on a new historical novel set in the United States before the Civil War.
Gametophyte
A gametophyte () is one of the two alternating phases in the life cycle of plants and algae. It is a haploid multicellular organism that develops from a haploid spore that has one set of chromosomes. The gametophyte is the sexual phase in the life cycle of plants and algae. It develops sex organs that produce gametes, haploid sex cells that participate in fertilization to form a diploid zygote which has a double set of chromosomes. Cell division of the zygote results in a new diploid multicellular organism, the second stage in the life cycle known as the sporophyte. The sporophyte can produce haploid spores by meiosis.
In some multicellular green algae ("Ulva lactuca" is one example), red algae and brown algae, sporophytes and gametophytes may be externally indistinguishable (isomorphic). In "Ulva" the gametes are isogamous, all of one size, shape and general morphology.
In land plants, anisogamy is universal. As in animals, female and male gametes are called, respectively, "eggs" and "sperm." In extant land plants, either the sporophyte or the gametophyte may be reduced (heteromorphic).
In bryophytes (mosses, liverworts, and hornworts), the gametophyte is the most visible stage of the life cycle. The bryophyte gametophyte is longer-lived and nutritionally independent, and the sporophytes are typically attached to the gametophytes and dependent on them. When a moss spore germinates it grows to produce a filament of cells (called the protonema). The mature gametophyte of mosses develops into leafy shoots that produce sex organs (gametangia) that produce gametes. Eggs develop in archegonia and sperm in antheridia.
In some bryophyte groups such as many liverworts of the order Marchantiales, the gametes are produced on specialized structures called gametophores (or gametangiophores).
All vascular plants are sporophyte dominant, and a trend toward smaller and more sporophyte-dependent female gametophytes is evident as land plants evolved reproduction by seeds.
Vascular plants such as ferns that produce only one type of spore are said to be homosporous. They have exosporic gametophytes—that is, the gametophyte is free-living and develops outside of the spore wall. Exosporic gametophytes can either be bisexual, capable of producing both sperm and eggs in the same thallus (monoicous), or specialized into separate male and female organisms (dioicous).
In heterosporous vascular plants (plants that produce both microspores and megaspores), the gametophyte develops endosporically (within the spore wall). These gametophytes are dioicous, producing either sperm or eggs but not both.
In most ferns, for example, in the leptosporangiate fern "Dryopteris", the gametophyte is a photosynthetic free living autotrophic organism called a prothallus that produces gametes and maintains the sporophyte during its early multicellular development. However, in some groups, notably the clade that includes Ophioglossaceae and Psilotaceae, the gametophytes are subterranean and subsist by forming mycotrophic relationships with fungi.
Extant lycophytes produce two different types of gametophytes. In the homosporous families Lycopodiaceae and Huperziaceae, spores germinate into bisexual free-living, subterranean and mycotrophic gametophytes that derive nutrients from symbiosis with fungi. In "Isoetes" and "Selaginella", which are heterosporous, microspores and megaspores are dispersed from sporangia either passively or by active ejection. Microspores produce microgametophytes, which then produce sperm. Megaspores produce reduced megagametophytes inside the spore wall. At maturity, the megaspore cracks open at the trilete suture to allow the male gametes to access the egg cells in the archegonia inside. The gametophytes of "Isoetes" appear to be similar in this respect to those of the extinct Carboniferous arborescent lycophytes "Lepidodendron" and "Lepidostrobus".
The seed plants (gymnosperms and angiosperms) are endosporic and heterosporous. The gametophytes develop into multicellular organisms while still enclosed within the spore wall, and the megaspores are retained within the sporangium.
In plants with heteromorphic gametophytes, there are two distinct kinds of gametophytes. Because the two gametophytes differ in form and function, they are termed "heteromorphic", from "hetero"- "different" and "morph" "form". The egg producing gametophyte is known as a megagametophyte, because it is typically larger, and the sperm producing gametophyte is known as a microgametophyte. Gametophytes which produce egg and sperm on separate plants are termed dioicous.
In heterosporous plants (water ferns, some lycophytes, as well as all gymnosperms and angiosperms), there are two distinct sporangia, each of which produces a single kind of spore and single kind of gametophyte. However, not all heteromorphic gametophytes come from heterosporous plants. That is, some plants have distinct egg-producing and sperm-producing gametophytes, but these gametophytes develop from the same kind of spore inside the same sporangium; "Sphaerocarpos" is an example of such a plant.
In seed plants, the microgametophyte is called pollen. Seed plant microgametophytes consist of several (typically two to five) cells when the pollen grains exit the sporangium. The megagametophyte develops within the megaspore of extant seedless vascular plants and within the megasporangium in a cone or flower in seed plants. In seed plants, the microgametophyte (pollen) travels to the vicinity of the egg cell (carried by a physical or animal vector), and produces two sperm by mitosis.
In gymnosperms the megagametophyte consists of several thousand cells and produces one to several archegonia, each with a single egg cell. The gametophyte becomes a food storage tissue in the seed.
In angiosperms, the megagametophyte is reduced to only a few nuclei and cells, and is sometimes called the embryo sac. A typical embryo sac contains seven cells and eight nuclei, one of which is the egg cell. Two nuclei fuse with a sperm nucleus to form the endosperm, which becomes the food storage tissue in the seed.
Gavoi
Gavoi is a "comune" in central Sardinia (Italy), part of the province of Nuoro, in the natural region of Barbagia. It overlooks the Lake of Gusana.
The territory of Gavoi has been inhabited since the pre-Nuragic period. During the Middle Ages it was cited various times in the lists of villages and towns that paid taxes to the Roman curia.
Gavoi was hit by the plague in the 18th century.
The Roman church of San Gavino is Gavoi's foremost sacred spot, though there are eight other ancient churches in the village. The village's center contains rock houses with balconies, and a village fountain is known as "Antana 'e Cartzonna".
Near the lake are the archaeological areas of Orrui and San Michele di Fonni. A Roman bridge is submerged beneath the lake.
Mountain tourism is among the sources of income. Agricultural production includes potatoes and cheese (the town is famous for its Fiore Sardo).
The "tumbarinu" is a traditional drum made of lamb skin, and more rarely, dog or donkey skin. The tumbarinu is often accompanied by the "pipiolu", the traditional shepherd's fife. The "ballu tundu" is a traditional dance in the round, as in the Balkan area. Poetry is esteemed, including extemporaneous rhyme competitions on given topics.
The nearby Sanctuary of Madonna d'Itria hosts a palio, in this case a horse competition very similar to that of Siena.
Gusana
Gusana is the name of an artificial lake and of the surrounding area, in the territory of Gavoi, Sardinia, Italy.
The lake was created in the 1930s to store water for hydroelectric generation (the Coghinadordza power plant), and it submerged an ancient Roman bridge as well as an ancient archaeological site of the Nuragic people.
It is now a tourist destination.
Glenn T. Seaborg
Glenn Theodore Seaborg (April 19, 1912 – February 25, 1999) was an American chemist whose involvement in the synthesis, discovery and investigation of ten transuranium elements earned him a share of the 1951 Nobel Prize in Chemistry. His work in this area also led to his development of the actinide concept and the arrangement of the actinide series in the periodic table of the elements.
Seaborg spent most of his career as an educator and research scientist at the University of California, Berkeley, serving as a professor, and, between 1958 and 1961, as the university's second chancellor. He advised ten US Presidents—from Harry S. Truman to Bill Clinton—on nuclear policy and was Chairman of the United States Atomic Energy Commission from 1961 to 1971, where he pushed for commercial nuclear energy and the peaceful applications of nuclear science. Throughout his career, Seaborg worked for arms control. He was a signatory to the Franck Report and contributed to the Limited Test Ban Treaty, the Nuclear Non-Proliferation Treaty and the Comprehensive Test Ban Treaty. He was a well-known advocate of science education and federal funding for pure research. Toward the end of the Eisenhower administration, he was the principal author of the Seaborg Report on academic science, and, as a member of President Ronald Reagan's National Commission on Excellence in Education, he was a key contributor to its 1983 report "A Nation at Risk".
Seaborg was the principal or co-discoverer of ten elements: plutonium, americium, curium, berkelium, californium, einsteinium, fermium, mendelevium, nobelium and element 106, which, while he was still living, was named seaborgium in his honor. He also discovered more than 100 isotopes of transuranium elements and is credited with important contributions to the chemistry of plutonium, originally as part of the Manhattan Project where he developed the extraction process used to isolate the plutonium fuel for the second atomic bomb. Early in his career, he was a pioneer in nuclear medicine and discovered isotopes of elements with important applications in the diagnosis and treatment of diseases, including iodine-131, which is used in the treatment of thyroid disease. In addition to his theoretical work in the development of the actinide concept, which placed the actinide series beneath the lanthanide series on the periodic table, he postulated the existence of super-heavy elements in the transactinide and superactinide series.
After sharing the 1951 Nobel Prize in Chemistry with Edwin McMillan, he received approximately 50 honorary doctorates and numerous other awards and honors. The list of things named after Seaborg ranges from the chemical element seaborgium to the asteroid 4856 Seaborg. He was a prolific author, penning numerous books and 500 journal articles, often in collaboration with others. He was once listed in the Guinness Book of World Records as the person with the longest entry in "Who's Who in America".
Glenn Theodore Seaborg was born in Ishpeming, Michigan, on April 19, 1912, the son of Herman Theodore (Ted) and Selma Olivia Erickson Seaborg. He had one sister, Jeanette, who was two years younger. His family spoke Swedish at home. When Glenn Seaborg was a boy, the family moved to Los Angeles County, California, settling in a subdivision called Home Gardens, later annexed to the City of South Gate, California. About this time he changed the spelling of his first name from Glen to Glenn.
Seaborg kept a daily journal from 1927 until he suffered a stroke in 1998. As a youth, Seaborg was both a devoted sports fan and an avid movie buff. His mother encouraged him to become a bookkeeper as she felt his literary interests were impractical. He did not take an interest in science until his junior year when he was inspired by Dwight Logan Reid, a chemistry and physics teacher at David Starr Jordan High School in Watts.
Seaborg graduated from Jordan in 1929 at the top of his class and received a bachelor of arts (AB) degree in chemistry at the University of California, Los Angeles, in 1933. He worked his way through school as a stevedore and a laboratory assistant at Firestone. Seaborg received his PhD in chemistry at the University of California, Berkeley, in 1937 with a doctoral thesis on the "Interaction of Fast Neutrons with Lead", in which he coined the term "nuclear spallation".
Seaborg was a member of the professional chemistry fraternity Alpha Chi Sigma. As a graduate student in the 1930s Seaborg performed wet chemistry research for his advisor Gilbert Newton Lewis, and published three papers with him on the theory of acids and bases. Seaborg studied the text "Applied Radiochemistry" by Otto Hahn, of the Kaiser Wilhelm Institute for Chemistry in Berlin, and it had a major impact on his developing interests as a research scientist. For several years, Seaborg conducted important research in artificial radioactivity using the Lawrence cyclotron at UC Berkeley. He was excited to learn from others that nuclear fission was possible—but also chagrined, as his own research might have led him to the same discovery.
Seaborg also became an adept interlocutor of Berkeley physicist Robert Oppenheimer. Oppenheimer had a daunting reputation and often answered a junior colleague's question before it had even been stated. Often the question answered was more profound than the one asked, but of little practical help. Seaborg learned to state his questions to Oppenheimer quickly and succinctly.
Seaborg remained at the University of California, Berkeley, for post-doctoral research. He followed Frederick Soddy's work investigating isotopes and contributed to the discovery of more than 100 isotopes of elements. Using one of Lawrence's advanced cyclotrons, John Livingood, Fred Fairbrother, and Seaborg created a new isotope of iron, iron-59 in 1937. Iron-59 was useful in the studies of the hemoglobin in human blood. In 1938, Livingood and Seaborg collaborated (as they did for five years) to create an important isotope of iodine, iodine-131, which is still used to treat thyroid disease. (Many years later, it was credited with prolonging the life of Seaborg's mother.) As a result of these and other contributions, Seaborg is regarded as a pioneer in nuclear medicine and is one of its most prolific discoverers of isotopes.
In 1939 he became an instructor in chemistry at Berkeley, was promoted to assistant professor in 1941 and professor in 1945. University of California, Berkeley, physicist Edwin McMillan led a team that discovered element 93, which he named neptunium in 1940. In November 1940, McMillan was persuaded to leave Berkeley temporarily to assist with urgent research in radar technology. Since Seaborg and his colleagues had perfected McMillan's oxidation-reduction technique for isolating neptunium, he asked McMillan for permission to continue the research and search for element 94. McMillan agreed to the collaboration. Seaborg first reported alpha decay proportionate to only a fraction of the element 93 under observation. The first hypothesis for this alpha particle accumulation was contamination by uranium, which produces alpha-decay particles; analysis of alpha-decay particles ruled this out. Seaborg then postulated that a distinct alpha-producing element was being formed from element 93.
In February 1941, Seaborg and his collaborators produced plutonium-239 through the bombardment of uranium. In their experiments bombarding uranium with deuterons, they observed the creation of neptunium, element 93. But it then underwent beta-decay, forming a new element, plutonium, with 94 protons. Plutonium is fairly stable, but undergoes alpha-decay, which explained the presence of alpha particles coming from neptunium. Thus, on March 28, 1941, Seaborg, physicist Emilio Segrè and Berkeley chemist Joseph W. Kennedy were able to show that plutonium (then known only as element 94) was fissile, an important distinction that was crucial to the decisions made in directing Manhattan Project research. In 1966, Room 307 of Gilman Hall on the campus at the Berkeley, where Seaborg did his work, was declared a U.S. National Historic Landmark.
In addition to plutonium, he is credited as a lead discoverer of americium, curium, and berkelium, and as a co-discoverer of californium, einsteinium, fermium, mendelevium, nobelium and seaborgium. He shared the Nobel Prize in Chemistry in 1951 with Edwin McMillan for "their discoveries in the chemistry of the first transuranium elements."
On April 19, 1942, Seaborg reached Chicago and joined the chemistry group at the Metallurgical Laboratory of the Manhattan Project at the University of Chicago, where Enrico Fermi and his group would later convert uranium-238 to plutonium-239 in a controlled nuclear chain reaction. Seaborg's role was to figure out how to extract the tiny bit of plutonium from the mass of uranium. Plutonium-239 was isolated in visible amounts using a transmutation reaction on August 20, 1942, and weighed on September 10, 1942, in Seaborg's Chicago laboratory. He was responsible for the multi-stage chemical process that separated, concentrated and isolated plutonium. This process was further developed at the Clinton Engineering Works in Oak Ridge, Tennessee, and then entered full-scale production at the Hanford Engineer Works, in Richland, Washington.
Seaborg's theoretical development of the actinide concept resulted in a redrawing of the Periodic Table of the Elements into its current configuration with the actinide series appearing below the lanthanide series. Seaborg developed the chemical elements americium and curium while in Chicago. He managed to secure patents for both elements. His patent on curium never proved commercially viable because of the element's short half-life, but americium is commonly used in household smoke detectors and thus provided a good source of royalty income to Seaborg in later years. Prior to the test of the first nuclear weapon, Seaborg joined with several other leading scientists in a written statement known as the Franck Report (secret at the time but since published) unsuccessfully calling on President Truman to conduct a public demonstration of the atomic bomb witnessed by the Japanese.
After the conclusion of World War II and the Manhattan Project, Seaborg was eager to return to academic life and university research free from the restrictions of wartime secrecy. In 1946, he added to his responsibilities as a professor by heading the nuclear chemistry research at the Lawrence Radiation Laboratory operated by the University of California on behalf of the United States Atomic Energy Commission. Seaborg was named one of the "Ten Outstanding Young Men in America" by the U.S. Junior Chamber of Commerce in 1947 (along with Richard Nixon and others). Seaborg was elected a Member of the National Academy of Sciences in 1948. From 1954 to 1961 he served as associate director of the radiation laboratory. He was appointed by President Truman to serve as a member of the General Advisory Committee of the Atomic Energy Commission, an assignment he retained until 1960.
Seaborg served as chancellor at the University of California, Berkeley, from 1958 to 1961. His term coincided with a relaxation of McCarthy-era restrictions on students' freedom of expression that had begun under his predecessor, Clark Kerr. In October 1958, Seaborg announced that the University had relaxed its prior prohibitions on political activity on a trial basis, and the ban on communists speaking on campus was lifted. This paved the way for the Free Speech Movement of 1964–65.
Seaborg was an enthusiastic supporter of Cal's sports teams. San Francisco columnist Herb Caen was fond of pointing out that Seaborg's surname is an anagram of "Go Bears", a popular cheer at UC Berkeley. Seaborg was proud of the fact that the Cal Bears won their first and only National Collegiate Athletic Association (NCAA) basketball championship in 1959, while he was chancellor. The football team also won the conference title and played in the Rose Bowl that year. He served on the Faculty Athletic Committee for several years and was the co-author of a book, "Roses from the Ashes: Breakup and Rebirth in Pacific Coast Intercollegiate Athletics" (2000), concerning the Pacific Coast Conference recruiting scandal, and the founding of what is now the Pac-12, in which he played a role in restoring confidence in the integrity of collegiate sports.
Seaborg served on the President's Science Advisory Committee (PSAC) during the Eisenhower administration. PSAC produced a report on "Scientific Progress, the Universities, and the Federal Government", also known as the "Seaborg Report", in November 1960, that urged greater federal funding of science. In 1959, he helped found the Berkeley Space Sciences Laboratory with Clark Kerr.
After appointment by President John F. Kennedy and confirmation by the United States Senate, Seaborg was chairman of the Atomic Energy Commission (AEC) from 1961 to 1971. His pending appointment by President-elect Kennedy was nearly derailed in late 1960 when members of the Kennedy transition team learned that Seaborg had been listed in a "U.S. News & World Report" article as a member of "Nixon's Idea Men". Seaborg said that as a lifetime Democrat he was baffled when the article appeared associating him with outgoing Vice President Richard Nixon, a Republican whom Seaborg considered a casual acquaintance.
During the early 1960s, Seaborg became concerned with the ecological and biological effects of nuclear weapons, especially those that would impact human life significantly. In response, he commissioned the Technical Analysis Branch of the Atomic Energy Commission to study these matters further. Seaborg's provision for these innovative studies led the U.S. Government to more seriously pursue the development and possible use of "clean" nuclear weapons.
While chairman of the AEC, Seaborg participated on the negotiating team for the Limited Test Ban Treaty (LTBT), in which the US, UK, and USSR agreed to ban all above-ground test detonations of nuclear weapons. Seaborg considered his contributions to the achievement of the LTBT as one of his greatest accomplishments. Despite strict rules from the Soviets about photography at the signing ceremony, Seaborg used a tiny camera to take a close-up photograph of Soviet Premier Nikita Khrushchev as he signed the treaty.
Seaborg enjoyed a close relationship with President Lyndon Johnson and influenced the administration to pursue the Nuclear Non-Proliferation Treaty. Seaborg was called to the White House in the first week of the Nixon Administration in January 1969 to advise President Richard Nixon on his first diplomatic crisis involving the Soviets and nuclear testing. He clashed with Nixon presidential adviser John Ehrlichman over the treatment of a Jewish scientist, Zalman Shapiro, whom the Nixon administration suspected of leaking nuclear secrets to Israel.
Seaborg published several books and journal articles during his tenure at the Atomic Energy Commission. He predicted the existence of elements beyond those on the periodic table, the transactinide series and the superactinide series of undiscovered synthetic elements. While most of these theoretical future elements have extremely short half-lives and thus no expected practical applications, he also hypothesized the existence of stable super-heavy isotopes of certain elements in an island of stability. | https://en.wikipedia.org/wiki?curid=13120 |
Gaspard-Gustave de Coriolis
Gaspard-Gustave de Coriolis (; 21 May 1792 – 19 September 1843) was a French mathematician, mechanical engineer and scientist. He is best known for his work on the supplementary forces that are detected in a rotating frame of reference, leading to the Coriolis effect. He was the first to apply the term "travail" (translated as "work") for the transfer of energy by a force acting through a distance.
Coriolis was born in Paris in 1792. In 1808 he sat the entrance exam for the École Polytechnique and was placed second of all the students entering that year. In 1816, he became a tutor at the École Polytechnique, where he carried out experiments on friction and hydraulics.
In 1829, Coriolis published a textbook, "Calcul de l'Effet des Machines" ("Calculation of the Effect of Machines"), which presented mechanics in a way that could readily be applied by industry. In this period, the correct expression for kinetic energy, ½mv², and its relation to mechanical work became established.
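In modern notation (not the notation of Coriolis's own era), the relation between kinetic energy and mechanical work that became established in this period is the work–energy theorem:

```latex
W \;=\; \int_{s_1}^{s_2} \mathbf{F}\cdot d\mathbf{s}
  \;=\; \tfrac{1}{2}mv_2^{2} \;-\; \tfrac{1}{2}mv_1^{2}
```

that is, the work done by the net force on a body equals the change in its kinetic energy ½mv².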
During the following years, Coriolis worked to extend the notions of kinetic energy and work to rotating systems. The first of his papers, "Sur le principe des forces vives dans les mouvements relatifs des machines" (On the principle of kinetic energy in the relative motion in machines), was read to the Académie des Sciences (Coriolis 1832). Three years later came the paper that would make his name famous, "Sur les équations du mouvement relatif des systèmes de corps" (On the equations of relative motion of a system of bodies). Coriolis's papers do not deal with the atmosphere or even the rotation of the Earth, but with the transfer of energy in rotating systems like waterwheels. Coriolis discussed the supplementary forces that are detected in a rotating frame of reference and he divided these forces into two categories. The second category contained the force that would eventually bear his name. A detailed discussion may be found in Dugas.
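In modern vector notation (again, not Coriolis's own), the two categories of supplementary forces appear as the fictitious terms in the equation of motion written in a frame rotating with constant angular velocity Ω:

```latex
m\,\mathbf{a}_{\mathrm{rot}}
  \;=\; \mathbf{F}
  \;-\; m\,\boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{r})
  \;-\; 2m\,\boldsymbol{\Omega}\times\mathbf{v}_{\mathrm{rot}}
```

The first supplementary term is the centrifugal force; the second, which depends on the body's velocity in the rotating frame, is the force that would eventually bear Coriolis's name.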
In 1835, he published a mathematical work on collisions of spheres: "Théorie Mathématique des Effets du Jeu de Billard", considered a classic on the subject.
Coriolis's name began to appear in the meteorological literature at the end of the 19th century, although the term "Coriolis force" was not used until the beginning of the 20th century. Today, the name Coriolis has become strongly associated with meteorology, but all major discoveries about the general circulation and the relation between the pressure and wind fields were made without knowledge of Gaspard-Gustave de Coriolis.
Coriolis became professor of mechanics at the École Centrale des Arts et Manufactures in 1829. Upon the death of Claude-Louis Navier in 1836, Coriolis succeeded him in the chair of applied mechanics at the École Nationale des Ponts et Chaussées and to Navier's place in the Académie des Sciences. In 1838, he succeeded Dulong as "Directeur des études" (director of studies) in the École Polytechnique.
He died in 1843 at the age of 51 in Paris. His name is one of the 72 names inscribed on the Eiffel Tower. | https://en.wikipedia.org/wiki?curid=13126 |
Gecko
Geckos are small lizards belonging to the infraorder Gekkota, found in warm climates throughout the world. They range from 1.6 to 60 cm (0.64 to 24 inches). Most geckos cannot blink, but they often lick their eyes to keep them clean and moist. They have a fixed lens within each iris that enlarges in darkness to let in more light.
Geckos are unique among lizards for their vocalizations, which differ from species to species. Most geckos in the family "Gekkonidae" use chirping or clicking sounds in their social interactions, tokay geckos ("Gekko gecko") are known for their loud mating calls, and some other species are capable of making hissing noises when alarmed or threatened. They are the most species-rich group of lizards, with about 1,500 different species worldwide. The New Latin "gekko" and English "gecko" stem from the Indonesian-Malay "gēkoq", which is imitative of sounds that some species make.
All geckos except species in the family Eublepharidae lack eyelids; instead, the outer surface of the eyeball has a transparent membrane, the cornea. Species without eyelids generally lick their own corneas when they need to clear them of dust and dirt.
Nocturnal species have excellent night vision; their color vision in low light is 350 times more sensitive than human color vision. Nocturnal geckos evolved from diurnal species that had lost the rod cells of the eye. The gecko eye therefore modified its cones, which increased in size, into different types, both single and double. Three different photopigments have been retained and are sensitive to UV, blue, and green. Geckos also use a multifocal optical system that allows them to generate a sharp image for at least two different depths.
Like most lizards, geckos can lose their tails in defense, a process called autotomy. Many species are well known for their specialised toe pads that enable them to climb smooth and vertical surfaces, and even cross indoor ceilings with ease. Geckos are well known to people who live in warm regions of the world, where several species make their home inside human habitations. These (for example the house gecko) become part of the indoor menagerie and are often welcomed, as they feed on insects, including moths and mosquitoes. Unlike most lizards, geckos are usually nocturnal.
The largest species, the kawekaweau, is only known from a single, stuffed specimen found in the basement of a museum in Marseille, France. This gecko was 60 cm (24 in) long and it was likely endemic to New Zealand, where it lived in native forests. It was probably wiped out along with much of the native fauna of these islands in the late 19th century, when new invasive species such as rats and stoats were introduced to the country during European colonization. The smallest gecko, the Jaragua sphaero, is a mere 1.6 cm (about half an inch) long and was discovered in 2001 on a small island off the coast of the Dominican Republic.
Like other reptiles, geckos are ectothermic, producing very little metabolic heat. Essentially, a gecko's body temperature is dependent on its environment. Also, to accomplish their main functions—such as locomotion, feeding, reproduction, etc.—geckos must have a relatively elevated temperature.
All geckos shed their skin at fairly regular intervals, with species differing in timing and method. Leopard geckos shed at about two- to four-week intervals. The presence of moisture aids in the shedding. When shedding begins, the gecko speeds the process by detaching the loose skin from its body and eating it.
For young geckos, shedding occurs more frequently, once a week, but when they are fully grown, they shed once every one to two months.
About 60% of gecko species have adhesive toe pads that allow them to adhere to most surfaces without the use of liquids or surface tension. Such pads have been gained and lost repeatedly over the course of gecko evolution. Adhesive toepads evolved independently in about 11 different gecko lineages and were lost in at least 9 lineages.
The spatula-shaped setae arranged in lamellae on gecko footpads enable attractive van der Waals' forces (the weakest of the weak chemical forces) between the β-keratin lamellae/setae/spatulae structures and the surface. These van der Waals interactions involve no fluids; in theory, a boot made of synthetic setae would adhere as easily to the surface of the International Space Station as it would to a living-room wall, although adhesion varies with humidity.
A recent study has however shown that gecko adhesion is in fact mainly determined by electrostatic interaction (caused by contact electrification), not van der Waals or capillary forces.
The setae on the feet of geckos are also self-cleaning and usually remove any clogging dirt within a few steps.
Teflon, which has very low surface energy, is more difficult for geckos to adhere to than many other surfaces.
Gecko adhesion is typically improved by higher humidity, even on hydrophobic surfaces, yet is reduced under conditions of complete immersion in water. The role of water in that system is still under discussion, but recent experiments agree that the presence of molecular water layers (water molecules carry a large dipole moment) on both the setae and the surface increases the surface energy of each; the energy gained in bringing these surfaces into contact is therefore enlarged, which results in an increased gecko adhesion force. Moreover, the elastic properties of the β-keratin change with water uptake.
Gecko toes seem to be "double jointed", but this is a misnomer and is properly called digital hyperextension. Gecko toes can hyperextend in the opposite direction from human fingers and toes. This allows them to overcome the van der Waals force by peeling their toes off surfaces from the tips inward. In essence, by this peeling action, the gecko separates spatula by spatula from the surface, so for each spatula separation, only a small force is necessary. (The process is similar to removing Scotch Tape from a surface.)
Geckos' toes operate well below their full attractive capabilities most of the time because the margin for error is great depending upon the surface roughness, and therefore the number of setae in contact with that surface.
Use of small van der Waals force requires very large surface areas; every square millimeter of a gecko's footpad contains about 14,000 hair-like setae. Each seta has a diameter of 5 μm. Human hair varies from 18 to 180 μm, so the cross-sectional area of a human hair is equivalent to 12 to 1300 setae. Each seta is in turn tipped with between 100 and 1,000 spatulae. Each spatula is 0.2 μm long (one five-millionth of a meter), or just below the wavelength of visible light.
The setae of a typical mature gecko would collectively be capable of supporting far more than the animal's own weight: each spatula can exert an adhesive force of 5 to 25 nN. The exact value of the adhesion force of a spatula varies with the surface energy of the substrate to which it adheres. Recent studies have moreover shown that the component of the surface energy derived from long-range forces, such as van der Waals forces, depends on the material's structure below the outermost atomic layers (up to 100 nm beneath the surface); taking that into account, the adhesive strength can be inferred.
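The quoted figures (about 14,000 setae per square millimetre, 100 to 1,000 spatulae per seta, 5 to 25 nN per spatula) allow a rough back-of-the-envelope estimate of total adhesive force. A minimal sketch, in which the total footpad area is a hypothetical value chosen purely for illustration:

```python
# Back-of-the-envelope estimate of total gecko adhesive force.
# Densities and per-spatula forces are taken from the text above;
# PAD_AREA_MM2 is an assumed, illustrative total pad area, not a measured value.
SETAE_PER_MM2 = 14_000            # setae per square millimetre of footpad
SPATULAE_PER_SETA = (100, 1_000)  # low and high counts per seta
FORCE_PER_SPATULA_NN = (5, 25)    # adhesive force per spatula, in nanonewtons
PAD_AREA_MM2 = 100                # hypothetical total pad area (all feet)

def force_range_newtons(area_mm2: float = PAD_AREA_MM2) -> tuple[float, float]:
    """Return the (low, high) total adhesive force in newtons."""
    setae = SETAE_PER_MM2 * area_mm2
    low = setae * SPATULAE_PER_SETA[0] * FORCE_PER_SPATULA_NN[0] * 1e-9
    high = setae * SPATULAE_PER_SETA[1] * FORCE_PER_SPATULA_NN[1] * 1e-9
    return low, high

low, high = force_range_newtons()
print(f"Estimated total adhesive force: {low:.1f} N to {high:.1f} N")
```

Even with the conservative low-end figures, the estimate comfortably exceeds the body weight of any gecko, which is consistent with the observation that geckos normally operate well below their full attractive capability.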
Apart from the setae, phospholipids—fatty substances produced naturally in their bodies—also come into play. These lipids lubricate the setae and allow the gecko to detach its foot before the next step.
The origin of gecko adhesion likely started as simple modifications to the epidermis on the underside of the toes. This was recently discovered in the genus "Gonatodes" from South America. Simple elaborations of the epidermal spinules into setae have enabled "Gonatodes humeralis" to climb smooth surfaces and sleep on smooth leaves.
Biomimetic technologies designed to mimic gecko adhesion could produce reusable self-cleaning dry adhesives with many applications. Development effort is being put into these technologies, but manufacturing synthetic setae is not a trivial material design task.
Gecko skin does not generally bear scales, but appears at a macro scale as a papillose surface which is made from hair-like protuberances developed across the entire body. These confer superhydrophobicity, and the unique design of the hair confers a profound antimicrobial action. These protuberances are very small, up to 4 microns in length, and tapering to a point. Gecko skin has been observed to have an anti-bacterial property, killing gram-negative bacteria when it comes in contact with the skin.
Geckos are polyphyodonts and able to replace each of their 100 teeth every 3 to 4 months. Next to the full grown tooth there is a small replacement tooth developing from the odontogenic stem cell in the dental lamina. The formation of the teeth is pleurodont; they are fused (ankylosed) by their sides to the inner surface of the jaw bones.
This formation is common in all species in the order Squamata.
The infraorder Gekkota is divided into seven families, containing about 125 genera of geckos, including the snake-like (legless) pygopods.
More than 1,850 species of geckos occur worldwide, including these familiar species:
Most geckos lay a small clutch of eggs, a few are live-bearing and a few can reproduce asexually. | https://en.wikipedia.org/wiki?curid=13134 |
Game show
A game show is a type of radio, television, or stage show in which contestants, individually or as teams, play a game which involves answering questions or solving puzzles, usually for money or prizes. Alternatively, a "gameshow" can be a demonstrative program about a game (while usually retaining the spirit of an awards ceremony). In the former, contestants may be invited from a pool of public applicants. Game shows often reward players with prizes such as cash, trips and goods and services provided by the show's sponsor.
Game shows began to appear on radio and television in the late 1930s. The very first television game show, "Spelling Bee", as well as the first radio game show, "Information Please", were both broadcast in 1938; the first major success in the game show genre was "Dr. I.Q.", a radio quiz show that began in 1939. "Truth or Consequences" was the first game show to air on commercially licensed television; the "CBS Television Quiz" followed shortly thereafter. Its first episode aired in 1941 as an experimental broadcast. Over the course of the 1950s, as television began to pervade the popular culture, game shows quickly became a fixture. Daytime game shows would be played for lower stakes to target stay-at-home housewives. Higher-stakes programs would air in primetime. During the late 1950s, high-stakes games such as "Twenty-One" and "The $64,000 Question" began a rapid rise in popularity. However, the rise of quiz shows proved to be short-lived. In 1959, many of the higher stakes game shows were discovered to be rigged and ratings declines led to most of the primetime games being canceled.
An early variant of the game show, the panel game, survived the quiz show scandals. On shows like "What's My Line?", "I've Got A Secret", and "To Tell the Truth", panels of celebrities would interview a guest in an effort to determine some fact about them; in others, celebrities would answer questions. Panel games had success in primetime until the late 1960s, when they were collectively dropped from television because of their perceived low budget nature. Panel games made a comeback in American daytime television (where the lower budgets were tolerated) in the 1970s through comedy-driven shows such as "Match Game" and "Hollywood Squares". In the UK, commercial demographic pressures were not as prominent, and restrictions on game shows made in the wake of the scandals limited the style of games that could be played and the amount of money that could be awarded. Panel shows there were kept in primetime and have continued to thrive; they have transformed into showcases for the nation's top stand-up comedians on shows such as "Have I Got News for You", "Would I Lie to You?", "Mock the Week", "QI", and "8 Out of 10 Cats", all of which put a heavy emphasis on comedy, leaving the points as mere formalities. The focus on quick-witted comedians has resulted in strong ratings, which, combined with low costs of production, have only spurred growth in the UK panel show phenomenon.
Game shows remained a fixture of US daytime television through the 1960s after the quiz show scandals. Lower-stakes games made a slight comeback in daytime in the early 1960s; examples include "Jeopardy!" which began in 1964 and the original version of "The Match Game" first aired in 1962. "Let's Make a Deal" began in 1963 and the 1960s also marked the debut of "Hollywood Squares", "Password", "The Dating Game", and "The Newlywed Game".
Though CBS gave up on daytime game shows in 1968, the other networks did not follow suit. Color television was introduced to the game show genre in the late 1960s on all three networks. The 1970s saw a renaissance of the game show as new games and massive upgrades to existing games made debuts on the major networks. "The New Price Is Right", an update of the 1950s-era game show "The Price Is Right", debuted in 1972 and marked CBS's return to the game show format in its effort to draw wealthier, suburban viewers. "The Match Game" became "Big Money" "Match Game 73", which proved popular enough to prompt a spin-off, "Family Feud", on ABC in 1976. "The $10,000 Pyramid" and its numerous higher-stakes derivatives also debuted in 1973, while the 1970s also saw the return of formerly disgraced producer and host Jack Barry, who debuted "The Joker's Wild" and a clean version of the previously rigged "Tic-Tac-Dough" in the 1970s. "Wheel of Fortune" debuted on NBC in 1975. The Prime Time Access Rule, which took effect in 1971, barred networks from broadcasting in the 7–8 p.m. time slot immediately preceding prime time, opening up time slots for syndicated programming. Most of the syndicated programs were "nighttime" adaptations of network daytime game shows. These game shows originally aired once a week, but by the late 1970s and early 1980s most of the games had transitioned to five days a week.
Game shows were the lowest priority of television networks and were rotated out every thirteen weeks if unsuccessful. Most tapes were destroyed until the early 1980s. Over the course of the 1980s and early 1990s, as fewer new hits (e.g. "Press Your Luck", "Sale of the Century", and "Card Sharks") were produced, game shows lost their permanent place in the daytime lineup. ABC transitioned out of the daytime game show format in the mid-1980s (briefly returning to the format for one season in 1990 with a "Match Game" revival). NBC's game block also lasted until 1991, but the network attempted to bring them back in 1993 before cancelling its game show block again in 1994. CBS phased out most of its game shows, except for "The Price Is Right", by 1993. To the benefit of the genre, the moves of "Wheel of Fortune" and a modernized revival of "Jeopardy!" to syndication in 1983 and 1984, respectively, was and remains highly successful; the two are, to this day, fixtures in the prime time "access period".
Cable television also allowed for the debut of game shows such as "Supermarket Sweep" and "Debt" (Lifetime), "Trivial Pursuit" and "Family Challenge" (Family Channel), and "Double Dare" (Nickelodeon). It also opened up a previously underdeveloped market for game show reruns. General interest networks such as CBN Cable Network and USA Network had popular blocks for game show reruns from the mid-1980s to the mid-'90s before that niche was overtaken by Game Show Network in 1994.
In the United Kingdom, game shows have had a more steady and permanent place in the television lineup and never lost popularity in the 1990s as they did in the United States, due in part to the fact that game shows were highly regulated by the Independent Broadcasting Authority in the 1980s and that those restrictions were lifted in the 1990s, allowing for higher-stakes games to be played.
After the popularity of game shows hit a nadir in the mid-1990s United States (at which point "The Price Is Right" was the only game show still on daytime network television and numerous game shows designed for cable television were canceled), the British game show "Who Wants to Be a Millionaire?" began distribution around the globe. Upon the show's American debut in 1999, it was a hit and became a regular part of ABC's primetime lineup until 2002; that show would eventually air in syndication for seventeen years afterward. Several shorter-lived high-stakes games were attempted around the time of the millennium, both in the United States and the United Kingdom, such as "Winning Lines", "The Chair", "Greed", "Paranoia", and "Shafted", leading to some dubbing this period as "The Million-Dollar Game Show Craze". The boom quickly went bust, as by July 2000, almost all of the imitator million-dollar shows were canceled (one of those exceptions was "Winning Lines", which continued to air in the United Kingdom until 2004 even though it was canceled in the United States in early 2000); these higher stakes contests nevertheless opened the door to reality television contests such as "Survivor" and "Big Brother", in which contestants win large sums of money for outlasting their peers in a given environment. Several game shows returned to daytime in syndication during this time as well, such as "Family Feud", "Hollywood Squares", and "Millionaire".
Until the end of the 2019–20 season, CBS was the last American network to air game shows nationally on a daily basis. It aired "The Price Is Right" and, beginning in 2009, a revival of "Let's Make a Deal". "Deal" aired on weekdays at a time chosen by each CBS affiliate, while "Price" aired weekdays at 10 am or 11 am in most markets. The oldest continuously aired radio quiz show in the United States was "Simply Trivia", which aired on public radio station WYSO in Yellow Springs, Ohio. It ran from 1972 through 2014.
"Wheel of Fortune", "Jeopardy!" and "Family Feud" have continued in syndication. To keep pace with the prime-time quiz shows, "Jeopardy!" doubled its question values in 2001 and lifted its winnings limit in 2003, which one year later allowed Ken Jennings to become the show's first multi-million dollar winner; it has also increased the stakes of its tournaments and put a larger focus on contestants with strong personalities. The show has since produced two more millionaires, tournament winner Brad Rutter and recent champion James Holzhauer. "Family Feud" revived in popularity with a change in tone under host Steve Harvey to include more suggestive humor.
In 2009, actress and comedienne Kim Coles became the first black woman to host a prime time game show, "Pay It Off".
The rise of digital television in the United States opened up a large market for rerun programs. Buzzr was established by FremantleMedia North America, owners of numerous classic U.S. game shows, as a broadcast outlet for its archived holdings in June 2015. There was also a rise of live game shows at festivals and public venues, where the general audience could participate in the show, such as the science-inspired "Geek Out Game Show" or the "Yuck Show".
Since the early 2000s, several game shows were conducted in a tournament format; examples included "History IQ", "Grand Slam", "PokerFace" (which never aired in North America), "Duel", "The Million Second Quiz", "500 Questions", "The American Bible Challenge", and "Mental Samurai". Most game shows conducted in this manner only lasted for one season.
A boom in prime time revivals of classic daytime game shows began to emerge in the mid-2010s. In 2016, ABC packaged the existing "Celebrity Family Feud", which had returned in 2015, with new versions of "To Tell the Truth," "The $100,000 Pyramid", and "Match Game" in 2016; new versions of "Press Your Luck" and "Card Sharks" would follow in 2019. TBS launched a marijuana-themed revival of "The Joker's Wild", hosted by Snoop Dogg, in October 2017. This is in addition to a number of original game concepts that appeared near the same time, including "Awake", "Deal or No Deal" (which originally aired in 2005), "Child Support", "Hollywood Game Night", "1 vs. 100", "Minute to Win It" (which originally aired in 2010), and "The Wall", and a string of music-themed games such as "Don't Forget the Lyrics!", "The Singing Bee", and "Beat Shazam".
In March 2020, production on the four longest-running game shows in North America ("Jeopardy!", "Wheel of Fortune", "Family Feud", and "The Price Is Right") was suspended as a result of the COVID-19 pandemic. Reruns of all four of those shows would continue to air until the end of the 2019–20 season.
The popularity of game shows in the United States was closely paralleled around the world. Reg Grundy Organisation, for instance, would buy the international rights for American game shows and reproduce them in other countries, especially in Grundy's native Australia. Most game show formats that are popular in one country are franchised to others.
Game shows have had an inconsistent place in Canadian television, with most homegrown game shows there being made for the French-speaking Quebecois market and the majority of English-language game shows in the country being rebroadcast from, or made with the express intent of export to, the United States. There have been exceptions to this (see, for instance, the long-running "Definition"). Unlike reality television franchises, international game show franchises generally only see Canadian adaptations in a series of specials, based heavily on the American versions but usually with a Canadian host to allow for Canadian content credits (one of those exceptions was "Le Banquier", a Quebec French-language version of "Deal or No Deal" which aired on TVA from 2008 to 2015). The smaller markets and lower revenue opportunities for Canadian shows in general also affect game shows there, with Canadian games (especially Quebecois ones) often having very low budgets for prizes, unless the series is made for export. Canadian contestants are generally allowed to participate on American game shows, and there have been at least two Canadian game show hosts – Monty Hall and Alex Trebek – who have gone on to long careers hosting American series, while Jim Perry, an American host, was prominent as a host of Canadian shows.
American game shows have a tendency to hire stronger contestants than their British or Australian counterparts. Many of the most successful game show contestants in America would likely never be cast in a British or Australian game show for fear of having them dominate the game, according to Mark Labbett, who appeared in all three countries on the game show "The Chase".
Many of the prizes awarded on game shows are provided through product placement, but in some cases they are provided by private organizations or purchased at either the full price or at a discount by the show. There is the widespread use of "promotional consideration", in which a game show receives a subsidy from an advertiser in return for awarding that manufacturer's product as a prize or consolation prize. Some products supplied by manufacturers may not be intended to be awarded at all and are instead just used as part of the gameplay (such as the low-priced items used in several pricing games of "The Price Is Right").
For high-stakes games, a network may purchase prize indemnity insurance to avoid paying the cost of a rare but expensive prize out of pocket. If such a prize is won too often, the insurance company may refuse to insure a show; this was a factor in the discontinuation of "The Price Is Right $1,000,000 Spectacular" series of prime-time specials. In April 2008, three contestants on "The Price Is Right $1,000,000 Spectacular" won the top prize within a five-episode span, after fifteen episodes without a winner, due in large part to a change in the rules. These wins made it extremely difficult to obtain insurance for the remaining episodes. A network or syndicator may also opt to distribute large cash prizes in the form of an annuity, spreading the cost of the prize out over several years or decades.
From about 1960 through the rest of the 20th century, American networks placed restrictions on the amount of money that could be given away on a game show, in an effort to avoid a repeat of the scandals of the 1950s. This usually took the form of an earnings cap that forced a player to retire once they had won a certain amount of money or a limit on how many episodes, usually five, on which a player could appear on a show. The introduction of syndicated games, particularly in the 1980s, eventually allowed for more valuable prizes and extended runs on a particular show. British television was under even stricter regulations on prizes until the 1990s, seriously restricting the value of prizes that could be given and disallowing games of chance to have an influence on the results of the game. (Thus, the British version of "The Price Is Right" at first did not include the American version's "Showcase Showdown", in which contestants spun a large wheel to determine who would advance to the Showcase bonus round.) In Canada, prizes were limited not by bureaucracy but necessity, as the much smaller population limited the audience of shows marketed toward that country. The lifting of these restrictions in the 1990s was a major factor in the explosion of high-stakes game shows in the later part of that decade in both the U.S. and Britain and, subsequently, around the world.
A bonus round (also known as a bonus game or an end game) usually follows a main game as a bonus to the winner of that game. In the bonus round, the stakes are higher and the game is considered to be tougher.
The game play of a bonus round usually varies from the standard game play of the front game, and there are often borrowed or related elements of the main game in the bonus round to ensure the entire show has a unified premise. Though some end games are referred to as "bonus rounds", many are not specifically referred to as such in games but fit the same general role.
There is no one formula for the format of a bonus round. There are differences in almost every bonus round, though there are many recurring elements from show to show. The bonus round is often played for the show's top prize.
Until the 1960s, most game shows did not offer a bonus round. In traditional two-player formats, the winner – if a game show's rules provided for this – became the champion and simply played a new challenger either on the next show or after the commercial break.
One of the earliest forms of bonus rounds was the Jackpot Round of the original series "Beat the Clock". After two rounds of performing stunts, the wife of the contestant couple would perform at a jackpot board for a prize. The contestant was shown a famous quotation or common phrase, and the words were scrambled. To win the announced bonus, the contestant had to unscramble the words within 20 seconds. The contestant received a consolation gift worth over $200 if she was unsuccessful.
Another early bonus round ended each episode of "You Bet Your Life" with the team who won the most money answering one final question for a jackpot which started at $1,000 and increased $500 each week until won.
Another early example was the Lightning Round on the word game "Password", starting in 1961. The contestant who won the front game played a quick-fire series of passwords within 60 seconds, netting $50 per correctly guessed word, for a maximum bonus prize of $250.
The bonus round came about after "Password" was first presented to game show producer Mark Goodson, who contended that merely guessing passwords during the show was not enough. "We needed something more, and that's how the Lightning Round was invented," said Howard Felsher, who produced "Password" and "Family Feud". "From that point on every game show had to have an end round. You'd bring a show to a network and they'd say, 'What's the endgame?' as if they had thought of it themselves."
The end game of "Match Game", hosted for most of its run by Gene Rayburn, served as the impetus for a completely new game show. The first part of the show's "Super-Match" bonus round, called the "Audience Match", asked contestants to guess how a studio audience had responded to a question. In 1975, with then-regular panelist Richard Dawson becoming restless and progressively less cooperative, Goodson decided that this line of questioning would make a good game show of its own; the concept eventually became "Family Feud", with Dawson hired as its inaugural host.
Grindcore
Grindcore is an extreme fusion genre of heavy metal and hardcore punk that originated in the mid-1980s, drawing inspiration from abrasive-sounding musical styles such as thrashcore, crust punk, hardcore punk, extreme metal, and industrial. Grindcore is characterized by a noise-filled sound that uses heavily distorted, down-tuned guitars, grinding overdriven bass, high-speed tempos, blast beats, and vocals that consist of growls and high-pitched shrieks. Early groups like Napalm Death are credited with laying the groundwork for the style. It is most prevalent today in North America and Europe, with popular contributors such as Brutal Truth and Nasum. Lyrical themes range from a primary focus on social and political concerns to gory subject matter and black humor.
A trait of grindcore is the "microsong". Several bands have produced songs that are only seconds in length. British band Napalm Death holds the Guinness World Record for shortest song ever recorded with the one-second "You Suffer" (1987). Many bands, such as Agoraphobic Nosebleed, record simple phrases that may be rhythmically sprawled out across an instrumental lasting only a couple of bars in length.
A variety of microgenres have subsequently emerged, often labeling bands according to traits that deviate from regular grindcore, including goregrind, focused on themes of gore (e.g. mutilation and pathology), and pornogrind, fixated on pornographic lyrical themes. Another offshoot is electrogrind (or cybergrind) which incorporates electronic music elements such as sampling and programmed drums. Although influential within hardcore and extreme metal, grindcore remains an underground form of music.
Grindcore evolved as a blend of thrash metal, thrashcore and hardcore punk. The name derives from the fact that "grind" is a British term for "thrash"; that term was appended to "-core" from "hardcore". Grindcore relies on standard hardcore punk instrumentation: electric guitar, bass and drums. However, grindcore alters the usual practices of metal or rock music in regard to song structure and tone. The vocal style ranges "from high-pitched shrieks to low, throat-shredding growls and barks." In some cases, no clear lyrics exist. Vocals may be used as merely an added sound effect, a common practice with bands such as the experimental Naked City.
A characteristic of some grindcore songs is the "microsong," lasting only a few seconds. In 2001, the "Guinness Book of World Records" awarded Brutal Truth the record for "Shortest Music Video" for 1994's "Collateral Damage" (the song lasts four seconds). In 2007, the video for the Napalm Death song "You Suffer" set a new "Shortest Music Video" record: 1.3 seconds. Beyond the microsong, it is characteristic of grindcore to have short songs in general; for example, Carcass' debut album "Reek of Putrefaction" (1988) consists of 22 tracks with an average length of 1 minute and 48 seconds. It is also not uncommon for grindcore albums to be very short when compared to other genres, usually consisting of a large track list but having a total length of only 15 to 20 minutes.
Many grindcore groups experiment with down-tuned guitars and play mostly with down picking, power chords and heavy distortion. While the vinyl A-side of Napalm Death's debut, 1987's "Scum", is set to E♭ tuning, on side B the guitars are tuned down to C. Their second album, "From Enslavement to Obliteration", and the "Mentally Murdered" EP were tuned to C♯. "Harmony Corruption", their third full-length album, was tuned up to D. Bolt Thrower went further, dropping 3½ steps down to A. Bass is tuned low as well, and is often distorted.
The blast beat is a drum beat characteristic of grindcore in all its forms, although its usage predates the genre itself, as it is native to jazz. In Adam MacGregor's definition, "the blast-beat generally comprises a repeated, sixteenth-note figure played at a very fast tempo, and divided uniformly among the kick drum, snare and ride, crash, or hi-hat cymbal." Blast beats have been described as "maniacal percussive explosions, less about rhythm per second than sheer sonic violence." Napalm Death coined the term, though this style of drumming had previously been practiced by others. Daniel Ekeroth argues that the blast beat was first performed by the Swedish group Asocial on their 1982 demo. Dirty Rotten Imbeciles ("No Sense"), Stormtroopers of Death ("Milk"), Sarcófago ("Satanas"), Sepultura ("Antichrist"), and Repulsion also included the technique prior to Napalm Death's emergence.
Grindcore lyrics are typically provocative. A number of grindcore musicians are committed to political and ethical causes, generally leaning towards the far left in connection to grindcore's punk roots. For example, Napalm Death's songs address a variety of anarchist concerns, in the tradition of anarcho-punk. These themes include anti-racism, feminism, anti-militarism, and anti-capitalism. Early grindcore bands including Napalm Death, Agathocles and Carcass made animal rights one of their primary lyrical themes. Some of them, such as Cattle Decapitation and Carcass, have expressed disgust with human behavior, animal abuse, and are, in some cases, vegetarians or vegans. Carcass' work in particular is often identified as the origin of the goregrind style, which is devoted to "bodily" themes. Groups that shift their bodily focus to sexual matters, such as Gut and the Meat Shits, are sometimes referred to as pornogrind. Seth Putnam's lyrics are notorious for their black comedy, while The Locust tend toward satirical collage, indebted to William S. Burroughs' cut-up method.
The early grindcore scene relied on an international network of tape trading and DIY production. The most widely acknowledged precursors of the grindcore sound are Siege, a hardcore punk group, and Repulsion, an early death metal outfit.
Siege, from Weymouth, Massachusetts, were influenced by classic American hardcore (Minor Threat, Black Flag, Void) and by British groups like Discharge, Venom, and Motörhead. Siege's goal was maximum velocity: "We would listen to the fastest punk and hardcore bands we could find and say, 'Okay, we're gonna deliberately write something that is faster than them,'" drummer Robert Williams recalled.
Repulsion is often credited with inventing the classic grind blast beat (played at 190 bpm), as well as its distinctive bass tone. Shane Embury, in particular, advocates the band as the origin of Napalm Death's later innovations. Kevin Sharp of Brutal Truth declares that "'Horrified' was and still is the defining core of what grind became; a perfect mix of hardcore punk with metallic gore, speed and distortion."
Other groups in the British grindcore scene, such as Heresy and Unseen Terror, have emphasized the influence of American hardcore punk, including Septic Death, as well as Swedish D-beat. Sore Throat cites Discharge, Disorder, and a variety of European D-beat and thrash metal groups, including Hellhammer, and American hardcore groups, such as Poison Idea and D.R.I. Japanese hardcore, particularly GISM, is also mentioned by a number of originators of the style. Other key groups cited by current and former members of Napalm Death as formative influences include Discharge, Amebix, Throbbing Gristle, and the aforementioned Dirty Rotten Imbeciles. Post-punk, such as Killing Joke and Joy Division, was also cited as an influence on early Napalm Death.
Grindcore, as such, was developed during the mid-1980s in the United Kingdom by Napalm Death, a group who emerged from the anarcho-punk scene in Birmingham, England. While their first recordings were in the vein of Crass, they eventually became associated with crust punk. The group began to take on increasing elements of thrashcore, post-punk, and power electronics. The group also went through many changes in personnel. A major shift in style took place after Mick Harris became the group's drummer. Punk historian Ian Glasper indicates that "For several months gob-smacked audiences weren't sure whether Napalm Death were actually a serious band any longer, such was the undeniable novelty of their hyper-speed new drummer." Albert Mudrian's research suggests that the name "grindcore" was coined by Harris. When asked about coming up with the term, Harris said:
Other sources contradict Harris' claim. In a "Spin" magazine article written about the genre, Steven Blush declares that "the man often credited" for dubbing the style grindcore was Shane Embury, Napalm Death's bassist since 1987. Embury offers his own account of how the grindcore "sound" came to be:
Earache Records founder Digby Pearson concurs with Embury, saying that Napalm Death "put hardcore and metal through an accelerator." Pearson, however, said that grindcore "wasn't just about the speed of [the] drums, blast beats, etc." He claimed that "it actually was coined to describe the guitars - heavy, downtuned, bleak, harsh riffing guitars [that] 'grind', so that's what the genre was described as, by the musicians who were its innovators [and] proponents."
While abrasive, grindcore achieved a measure of mainstream visibility. "New Musical Express" featured Napalm Death on their cover in 1988, declaring them "the fastest band in the world." As James Hoare, deputy editor of "Terrorizer", writes:
Napalm Death's seismic impact inspired other British grindcore groups in the 1980s, among them Extreme Noise Terror, Carcass and Sore Throat. Extreme Noise Terror, from Ipswich, formed in 1984. With the goal of becoming "the most extreme hardcore punk band of all time," the group took Mick Harris from Napalm Death in 1987. Ian Glasper describes the group as "pissed-off hateful noise with its roots somewhere between early Discharge and Disorder, with [vocalists] Dean [Jones] and Phil [Vane] pushing their trademark vocal extremity to its absolute limit." In 1991, the group collaborated with the acid house group The KLF, appearing onstage with the group at the Brit Awards in 1992. Carcass released "Reek of Putrefaction" in 1988, which John Peel declared his favorite album of the year despite its very poor production. The band's focus on gore and anatomical decay, lyrically and in sleeve artwork, inspired the goregrind subgenre. Sore Throat, said by Ian Glasper to have taken "perhaps the most uncompromisingly anti-music stance" were inspired by crust punk as well as industrial music. Some listeners, such as Digby Pearson, considered them to be simply an in-joke or parody of grindcore.
In the subsequent decade, two pioneers of the style became increasingly commercially viable. According to Nielsen SoundScan, Napalm Death sold 367,654 units between May 1991 and November 2003, while Carcass sold 220,374 units in the same period. The inclusion of Napalm Death's "Twist the Knife (Slowly)" on the "Mortal Kombat" soundtrack brought the band much greater visibility, as the compilation scored a Top 10 position on the "Billboard" 200 chart and went platinum in less than a year. The originators of the style have expressed some ambivalence regarding the subsequent popularity of grindcore. Pete Hurley, the guitarist of Extreme Noise Terror, declared that he had no interest in being remembered as a pioneer of the style: "'Grindcore' was a legendarily stupid term coined by a hyperactive kid from the West Midlands, and it had nothing to do with us whatsoever. ENT were, are, and - I suspect - always will be a hardcore punk band... not a grindcore band, a stenchcore band, a trampcore band, or any other sub-sub-sub-core genre-defining term you can come up with." Lee Dorrian of Napalm Death indicated that "Unfortunately, I think the same thing happened to grindcore, if you want to call it that, as happened to punk rock - all the great original bands were just plagiarised by a billion other bands who just copied their style identically, making it no longer original and no longer extreme."
Journalist Kevin Stewart-Panko argues that the American grindcore of the 1990s borrowed from three sources: British grindcore, the American precursors, and death metal. As early Napalm Death albums were not widely distributed in the United States, American groups tended to take inspiration from later works, such as "Harmony Corruption". American groups also often employ riffs taken from crossover thrash or thrash metal. Early American grind practitioners included Terrorizer and Assück. Anal Cunt, a particularly dissonant group who lacked a bass player, were also particularly influential. Their style was sometimes referred to as "noisecore" or "noisegrind", described by Giulio of Cripple Bastards as "the most anti-musical and nihilistic face of extreme music at that time." Brutal Truth was a groundbreaking group in the American scene at the beginning of the 1990s.
However, Sharp indicates that they were more inspired by the thrash metal of Dark Angel than the British groups. Discordance Axis had a more technical style of playing than many of the predecessors, and had a much more ornate visual and production style. Scott Hull is prominent in the contemporary grindcore scene, through his participation in Pig Destroyer and Agoraphobic Nosebleed. ANb's "Frozen Corpse Stuffed with Dope" has been described as "the "Paul's Boutique" of grindcore", by "Village Voice" critic Phil Freeman, for its "hyper-referential, impossibly dense barrage of samples, blast beats, answering machine messages, and incomprehensibly bellowed rants." Pig Destroyer is inspired by thrash metal, such as Dark Angel and Slayer, the sludge metal of The Melvins, and grindcore practiced by Brutal Truth, while Agoraphobic Nosebleed takes cues from thrashcore and powerviolence, like D.R.I. and Crossed Out. Pig Destroyer's style is sometimes referred to as "deathgrind", because of the prevalence of death metal influences, as are Cattle Decapitation.
The Locust, from San Diego, also take inspiration from powerviolence (Crossed Out, Dropdead), first-wave screamo (Angel Hair), obscure experimental rock (Art Bears, Renaldo and the Loaf), and death metal. The Locust were sometimes described as "hipster grind" because of their fan base and fashion choices. In Los Angeles, Hole also initially drew influence from grindcore in their early releases, particularly on their singles "Dicknail" and "Teenage Whore", as well as on their debut album, "Pretty on the Inside" (1991), all of which featured sexually provocative and violent lyrics, as well as the heavy distortion and fluctuating tempo that distinguished the genre. Frontwoman Courtney Love stated that she wanted to capture the distinguishing elements of grindcore while incorporating more pop-based melodic structure, although the band distanced themselves from the style in their later releases.
Other later prominent grindcore groups of North America include Brujeria, Soilent Green, Cephalic Carnage, Impetigo, and Circle of Dead Children. Fuck the Facts, a Canadian group, practice classic grindcore, characterized by the "metronome-precision drumming and riffing [that] abound, as well as vocal screams and growls" by "AllMusic" reviewer Greg Prato.
European groups, such as Agathocles, from Belgium, Patareni, from Croatia, and Fear of God, from Switzerland, were important early practitioners of the style. Filthy Christians, who signed to Earache Records in 1989, introduced the style in Sweden; D.D.T. and Fear of Dog pioneered grind and noise in Serbia from the late 1980s, and Extreme Smoke 57 did so in Slovenia in the early 1990s, while Cripple Bastards established Italian grindcore. Giulio of Cripple Bastards asserts that the name itself took some time to migrate from Britain, with the style being referred to as "death-thrashcore" for a time in Europe.
Nasum, who emerged from the Swedish death metal scene, became a popular group, addressing political topics from a personal perspective.
Anders Jakobson, their drummer, reported that "It was all these different types of people who enjoyed what we were doing. [...] We made grindcore a bit easier to listen to at the expense of the diehard grindcore fans who thought that we were, well, not sellouts, but not really true to the original essence of grindcore." Other Swedish groups, such as General Surgery and Regurgitate, practiced goregrind. Inhume, from the Netherlands, Rotten Sound, from Finland, and Leng Tch'e, from Belgium, were subsequent European groups who practiced grindcore with death metal inflections. In the 2000s, the Belgium-based Aborted "had grown into the role of key contributors to the death-grind genres".
In 2010, Singaporean band Wormrot signed a recording contract with Earache Records.
Japanese noise rock group Boredoms have borrowed elements of grind, and toured with Brutal Truth in 1993. The Japanese grindcore group Gore Beyond Necropsy formed in 1989, and later collaborated with noise music artist Merzbow. Naked City, led by avant-garde jazz saxophonist John Zorn, performed an avant-garde form of polystylistic, grindcore-influenced punk jazz. Zorn later formed the Painkiller project with ambient dub producer Bill Laswell on bass guitar and Mick Harris on drums, which also collaborated with Justin Broadrick on some work. In addition, grindcore was one influence on the powerviolence movement within American hardcore punk, and has affected some strains of metalcore. Some musicians have also produced hybrids between grind and electronic music.
Powerviolence is a raw and dissonant subgenre of hardcore punk. The style is closely related to thrashcore and similar to grindcore. While powerviolence took inspiration from Napalm Death and other early grind bands, powerviolence groups avoided elements of heavy metal. Its nascent form was pioneered in the late 1980s in the music of hardcore punk band Infest, who mixed youth crew hardcore elements with noisier, sludgier qualities of Lärm and Siege. The microgenre solidified into its most commonly recognized form in the early 1990s, with the sounds of bands such as Man Is the Bastard, Crossed Out, No Comment, Capitalist Casualties, and Manpig.
Powerviolence bands focus on speed, brevity, bizarre timing breakdowns, and constant tempo changes. Powerviolence songs are often very short; it is not uncommon for some to last less than 30 seconds. Some groups, particularly Man Is the Bastard, took influence from sludge metal and noise music. Lyrically and conceptually, powerviolence groups were very raw and underproduced, both sonically and in their packaging. Some groups (Man Is the Bastard, Azucares and Dropdead) took influence from anarcho-punk and crust punk, emphasizing animal rights and anti-militarism. The Locust and Agoraphobic Nosebleed later reincorporated elements of powerviolence into grindcore.
Among other influences, Napalm Death took impetus from the industrial music scene. Subsequently, Napalm Death's former guitarist, Justin Broadrick, went on to a career in industrial metal with Godflesh. Mick Harris, in his post-Napalm Death project, Scorn, briefly experimented with the style. Scorn also worked in the industrial hip hop and isolationist styles. Fear Factory have also cited debts to the genre. Digital hardcore is an initially German hybrid of hardcore punk and hardcore techno. Agoraphobic Nosebleed and the Locust have solicited remixes from digital hardcore producers and noise musicians. James Plotkin, Dave Witte, and Speedranch participated in the Phantomsmasher project, which melds grindcore and digital hardcore. Alec Empire collaborated with Justin Broadrick, on the first Curse of the Golden Vampire album, and with Gabe Serbian, of the Locust, live in Japan. Japanoise icon Merzbow also participated in the Empire/Serbian show.
The 21st century also saw the development of "electrogrind" (or "cybergrind"), practiced by The Berzerker, Body Hammer, Gigantic Brain and Genghis Tron which borrows from electronic music. These groups built on the work of Agoraphobic Nosebleed, Enemy Soil and The Locust, as well as industrial metal. The Berzerker also appropriated the distorted Roland TR-909 kick drums of gabber producers. Many later electrogrind groups were caricatured for their hipster connections.
In the mid-1990s, mathcore groups such as The Dillinger Escape Plan, Some Girls, and Daughters began to take inspiration from developments in grindcore. These groups also include elements of post-hardcore. In addition to mathcore some early screamo groups, like Circle Takes the Square and Orchid, have been associated with grindcore by some commentators.
Crust punk had a major impact on grindcore's emergence. The first grindcore, practiced by British bands such as Napalm Death and Extreme Noise Terror, emerged from the crust punk scene. This early style is sometimes dubbed "crustgrind".
Blackened grindcore is a fusion genre that combines elements of black metal and grindcore. Notable bands include Vomit Fist, Dendritic Arbor, Sunlight's Bane, Scumpulse, Malevich, Absvrdist, and early Rotting Christ.
Noisegrind is a microgenre that combines elements of grindcore and harsh noise. Notable bands include Holy Grinder, Sete Star Sept, Full of Hell, Fear of God, Insufferable, and early Knelt Rote.
George, Margrave of Brandenburg-Ansbach
George of Brandenburg-Ansbach (4 March 1484 – 27 December 1543), known as George the Pious, was a Margrave of Brandenburg-Ansbach from the House of Hohenzollern.
He was born in Ansbach, the third of eight sons of Margrave Frederick the Elder and his wife Sophia of Poland, daughter of Casimir IV of Poland and Elisabeth of Habsburg. Through his mother, he was related to the royal court in Buda. He entered the service of his uncle, King Vladislaus II of Bohemia and Hungary, living at his court from 1506. The king received him as an adopted son, entrusted him in 1515 with the Duchy of Oppeln, and in 1516 made him member of the tutelary government instituted for Hungary, and tutor of his son Louis II of Hungary and Bohemia. In 1521 he made an arrangement with Petar Keglević and pulled back from Hungary and Croatia; this arrangement, accepted by Louis II in 1526, was not accepted by Holy Roman Emperor Ferdinand I until 1559.
At the court of Hungary there were two parties arrayed against each other: the Magyar party under the leadership of the Zápolyas and the German party under the leadership of George of Brandenburg, whose authority was increased by the acquisition of the duchies of Ratibor and Oppeln through hereditary treaties with their respective dukes, and of the territories of Oderberg, Beuthen, and Tarnowitz as pledges from the king of Bohemia, who could not redeem his debts.
By the further appropriation of the Duchy of Jägerndorf, George came into possession of all Upper Silesia. As the owner and mortgagee of these territories he prepared the way for the introduction of the Protestant Reformation, here as well as in his native Franconia. Earlier than any other German prince or any other member of the Hohenzollern line including even his younger brother Albert, the Grand Master of the Teutonic Order, he turned his eyes and heart to the new faith proceeding from Wittenberg.
The first reformatory writings began the work of winning him over to the evangelical cause. Martin Luther's powerful testimony of faith at the Diet of Worms in 1521 made an indelible impression upon his mind, and the vigorous sermons of evangelical preachers in the pulpits of St. Lawrence and St. Sebald in Nuremberg, during the diet there in 1522, deepened the impression. The study of Luther's translation of the New Testament, which appeared in 1522, established his faith on personal conviction. Moreover, he entered into correspondence with Luther, discussing with him the most important problems of faith, and in 1524 he met him personally during the negotiations concerning his brother Albert's secularization of the Teutonic Order's state of Prussia into the secular Duchy of Prussia.
After the accession of King Louis II, George was aided in his reforming efforts by Queen Maria, a sister of Charles V and Ferdinand I, who was favorably inclined toward the new doctrine. As the adviser of the young king, George firmly advocated the cause of the new gospel against the influences and intrigues of his clerical opponents and successfully prevented their violent measures. His relationship with Duke Frederick II of Liegnitz, Brieg, and Wohlau, and with Duke Charles I of Münsterberg-Oels, who had both admitted the Reformation into their territories, contributed not a little to the expansion of the gospel in his own lands. But it was his own personal influence, energy, and practical spirit that introduced the new doctrine and founded a new evangelical and churchly life. He made efforts to secure preachers of the new gospel from Hungary, Silesia, and Franconia, and tried to introduce the church order of Brandenburg-Nuremberg, which had already found acceptance in the Franconian territories.
In the hereditary lands Brandenburg-Ansbach in Franconia, where with his older brother Casimir of Brandenburg-Kulmbach he had assumed the regency in place of their father, he encountered greater difficulties, although the popular spirit was inclined toward the Reformation. Owing to his marriage with a Bavarian princess and to his military command in the imperial service, his brother was allied more closely with the old church and resisted the new reforming efforts. But the pressure of the estates of the land soon compelled him to allow preaching according to Luther's doctrine, although he ensured retention of the old church ceremonies, even of those that were contrary to the new faith.
George protested against such half-measures and showed his dissatisfaction with the half-hearted resolutions of the state assembly of October 1526. It was only after the death of his brother that as sole ruler he could successfully undertake and carry out reformation in the Franconian territories, with the assistance of councillors such as Johann von Schwarzenberg and through the new resolutions of the state assembly of Brandenburg-Ansbach (1528). At the same time George maintained his correspondence with Luther and Philipp Melanchthon, discussing such questions as the evangelization of monasteries, the use of monastic property for evangelical purposes, and especially the foundation of lower schools for the people and of higher schools for the education of talented young men for the service of church and state. He tried to gain, by his continued correspondence with Luther and other reformers such as Urbanus Rhegius, efficient men for the preaching of the gospel and for the organization of the evangelical church. Hand in hand with the Council of Nuremberg he worked for the institution of a church visitation on the model of that of the Electorate of Saxony, from which after repeated revisions and emendations the excellent church order of Brandenburg-Nuremberg of 1533 was developed. After its introduction in Nuremberg and his territories in Franconia, it was also introduced in his dominions in Upper Silesia.
George's influence manifested itself also in the development of the German Reformation as a whole. When a union of the evangelicals in upper and lower Germany was contemplated as a means of improved defense against the retaliatory measures of the Roman Catholic Church, George had a meeting with Elector John of Saxony at Schleitz in 1529, where they agreed on certain articles of faith and confession to be drawn up by Luther; the commission was executed in the seventeen articles of Schwabach on the basis of the fifteen theses of the Marburg Colloquy.
But neither at the Convention of Schwabach nor at that of Schmalkalden did George approve armed resistance against the emperor and his party, even in self-defense. He opposed the emperor energetically at the Diet of Augsburg in 1530, when the emperor demanded the prohibition of evangelical preaching. King Ferdinand made George the most alluring offers of Silesian possessions if he would support the emperor, but he strongly rejected them. Next to the elector of Saxony, he stands foremost among the princes who defended the reformed faith. After the death of his cousin, Joachim I, who was a strict Romanist, he assisted his sons in the introduction of the Reformation in the territories of the Electorate of Brandenburg. He took part in the religious colloquy of Regensburg in 1541 where Elector Joachim II made a last attempt to bridge the differences between the Romanists and evangelicals and with his nephew requested Luther's cooperation. The Diet of Regensburg was the last religious meeting which he attended.
He is one of the figures on the "Prussian Homage" painting by Jan Matejko.
George married three times. His first wife was Beatrice de Frangepan (1480 – c. 1510); the marriage produced no children.
George's second wife was Hedwig of Münsterberg-Oels (1508–1531), daughter of Charles I of Münsterberg-Oels; their marriage produced two daughters:
On August 25, 1533, he married his third wife, Emilie of Saxony (July 27, 1516 – March 9, 1591), daughter of Henry IV, Duke of Saxony, and Catherine of Mecklenburg. | https://en.wikipedia.org/wiki?curid=13141 |
Generalized mean
In mathematics, the generalized means (also called power means or Hölder means) are a family of functions for aggregating sets of numbers that include as special cases the Pythagorean means (arithmetic, geometric, and harmonic means).
If "p" is a non-zero real number, and "x"1, ..., "x""n" are positive real numbers, then the generalized mean or power mean with exponent "p" of these positive real numbers is: $M_p(x_1,\dots,x_n) = \left(\frac{1}{n}\sum_{i=1}^{n} x_i^p\right)^{1/p}$.
(See "p"-norm.) For "p" = 0 we set it equal to the geometric mean (which is the limit of means with exponents approaching zero, as proved below): $M_0(x_1,\dots,x_n) = \left(\prod_{i=1}^{n} x_i\right)^{1/n}$.
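As a concrete illustration (a minimal Python sketch, not part of the original article), the definition above and the "p" → 0 limit can be checked numerically:

```python
import math

def power_mean(p, xs):
    """Generalized (power) mean of the positive numbers xs with exponent p.

    For p == 0 the geometric mean is used, which is the limit of the
    power means as p -> 0.
    """
    n = len(xs)
    if p == 0:
        return math.prod(xs) ** (1.0 / n)
    return (sum(x ** p for x in xs) / n) ** (1.0 / p)

print(power_mean(1, [1, 4, 4]))    # arithmetic mean: 3.0
print(power_mean(-1, [1, 4, 4]))   # harmonic mean: 2.0
print(power_mean(0, [1, 4, 4]))    # geometric mean: ~2.5198
print(power_mean(1e-9, [2, 8]))    # tiny p is already close to M_0 = 4.0
```

Note how an exponent of 1e-9 already reproduces the geometric mean to many digits, which is the limiting behaviour the text appeals to.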
Furthermore, for a sequence of positive weights "w""i" with sum $\sum_{i=1}^{n} w_i = 1$ we define the weighted power mean as: $M_p(x_1,\dots,x_n) = \left(\sum_{i=1}^{n} w_i x_i^p\right)^{1/p}$, and for "p" = 0 the weighted geometric mean $M_0(x_1,\dots,x_n) = \prod_{i=1}^{n} x_i^{w_i}$.
The unweighted means correspond to setting all "w""i" = 1/"n".
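A short Python sketch of the weighted power mean (illustrative only), including the check that equal weights "w""i" = 1/"n" recover the unweighted means:

```python
import math

def weighted_power_mean(p, ws, xs):
    """Weighted power mean; the positive weights ws are assumed to sum to 1."""
    assert abs(sum(ws) - 1.0) < 1e-9
    if p == 0:
        # weighted geometric mean
        return math.prod(x ** w for w, x in zip(ws, xs))
    return sum(w * x ** p for w, x in zip(ws, xs)) ** (1.0 / p)

# Equal weights w_i = 1/n give back the plain arithmetic mean for p = 1:
xs = [2.0, 8.0, 18.0]
equal = [1 / 3] * 3
print(weighted_power_mean(1, equal, xs))               # (2 + 8 + 18) / 3
print(weighted_power_mean(0, [0.5, 0.5], [4.0, 9.0]))  # sqrt(4 * 9) = 6.0
```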
A few particular values of "p" yield special cases with their own names: the minimum ("p" → −∞), the harmonic mean ("p" = −1), the geometric mean ("p" → 0), the arithmetic mean ("p" = 1), the quadratic mean ("p" = 2), and the maximum ("p" → +∞).
Let "x"1, ..., "x""n" be a sequence of positive real numbers and "σ" a permutation operator, then the following properties hold:
In general, if "p" < "q", then $M_p(x_1,\dots,x_n) \le M_q(x_1,\dots,x_n)$, and the two means are equal if and only if "x"1 = "x"2 = ... = "x""n".
The inequality is true for real values of "p" and "q", as well as positive and negative infinity values.
It follows from the fact that, for all real "p", $\frac{\partial}{\partial p} M_p(x_1,\dots,x_n) \ge 0$,
which can be proved using Jensen's inequality.
In particular, for "p" in {−1, 0, 1}, the generalized mean inequality implies the Pythagorean means inequality as well as the inequality of arithmetic and geometric means.
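The chain $M_{-1} \le M_0 \le M_1$ (harmonic ≤ geometric ≤ arithmetic) is easy to spot-check numerically; the following Python sketch (illustrative, not from the article) tests it on random positive inputs:

```python
import math
import random

def power_mean(p, xs):
    n = len(xs)
    if p == 0:
        return math.prod(xs) ** (1.0 / n)
    return (sum(x ** p for x in xs) / n) ** (1.0 / p)

random.seed(0)
for _ in range(1000):
    xs = [random.uniform(0.1, 10.0) for _ in range(5)]
    h, g, a = (power_mean(p, xs) for p in (-1, 0, 1))
    assert h <= g + 1e-9 and g <= a + 1e-9  # harmonic <= geometric <= arithmetic

# Equality holds exactly when all the numbers coincide:
same = [3.0] * 5
assert math.isclose(power_mean(-1, same), power_mean(1, same))
```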
We will prove the weighted power means inequality; for the purpose of the proof we will assume the following without loss of generality: $w_i \in [0, 1]$ and $\sum_{i=1}^{n} w_i = 1$.
The proof for unweighted power means is easily obtained by substituting "w""i" = 1/"n".
Suppose an inequality between power means with exponents "p" and "q" holds: $\left(\sum_{i=1}^{n} w_i x_i^p\right)^{1/p} \ge \left(\sum_{i=1}^{n} w_i x_i^q\right)^{1/q}$.
Applying this to the reciprocals 1/"x""i" of the numbers, then: $\left(\sum_{i=1}^{n} \frac{w_i}{x_i^p}\right)^{1/p} \ge \left(\sum_{i=1}^{n} \frac{w_i}{x_i^q}\right)^{1/q}$.
We raise both sides to the power of −1 (a strictly decreasing function in positive reals): $\left(\sum_{i=1}^{n} w_i x_i^{-p}\right)^{-1/p} \le \left(\sum_{i=1}^{n} w_i x_i^{-q}\right)^{-1/q}$.
We get the inequality for means with exponents −"p" and −"q", and we can use the same reasoning backwards, thus proving the inequalities to be equivalent, which will be used in some of the later proofs.
For any "q" > 0 and non-negative weights summing to 1, the following inequality holds: $\left(\sum_{i=1}^{n} w_i x_i^{-q}\right)^{-1/q} \le \prod_{i=1}^{n} x_i^{w_i} \le \left(\sum_{i=1}^{n} w_i x_i^{q}\right)^{1/q}$.
The proof follows from Jensen's inequality, making use of the fact that the logarithm is concave: $\log \prod_{i=1}^{n} x_i^{w_i} = \sum_{i=1}^{n} w_i \log x_i \le \log \sum_{i=1}^{n} w_i x_i$.
By applying the exponential function to both sides and observing that as a strictly increasing function it preserves the sign of the inequality, we get: $\prod_{i=1}^{n} x_i^{w_i} \le \sum_{i=1}^{n} w_i x_i$.
Taking "q"th powers of the "x""i", we are done for the inequality with positive "q"; the case for negatives is identical.
We are to prove that for any "p" < "q" the following inequality holds: $\left(\sum_{i=1}^{n} w_i x_i^{p}\right)^{1/p} \le \left(\sum_{i=1}^{n} w_i x_i^{q}\right)^{1/q}$.
If "p" is negative and "q" is positive, the inequality is equivalent to the one proved above: $\left(\sum_{i=1}^{n} w_i x_i^{p}\right)^{1/p} \le \prod_{i=1}^{n} x_i^{w_i} \le \left(\sum_{i=1}^{n} w_i x_i^{q}\right)^{1/q}$.
The proof for positive "p" and "q" is as follows: define the function "f" : R+ → R+, $f(x) = x^{q/p}$. "f" is a power function, so it does have a second derivative: $f''(x) = \frac{q}{p}\left(\frac{q}{p} - 1\right) x^{\frac{q}{p} - 2}$,
which is strictly positive within the domain of "f", since "q" > "p", so we know "f" is convex.
Using this, and Jensen's inequality, we get: $\left(\sum_{i=1}^{n} w_i x_i^{p}\right)^{q/p} \le \sum_{i=1}^{n} w_i x_i^{q}$;
after raising both sides to the power of 1/"q" (an increasing function, since 1/"q" is positive) we get the inequality which was to be proven: $\left(\sum_{i=1}^{n} w_i x_i^{p}\right)^{1/p} \le \left(\sum_{i=1}^{n} w_i x_i^{q}\right)^{1/q}$.
Using the previously shown equivalence we can prove the inequality for negative "p" and "q" by substituting them with, respectively, −"q" and −"p", QED.
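The full monotonicity in the exponent that these proofs establish can likewise be spot-checked; this Python sketch draws random weights and random exponents "p" < "q" (illustrative code; the tolerance is an assumption to absorb rounding):

```python
import math
import random

def weighted_power_mean(p, ws, xs):
    """Weighted power mean of positive xs with weights ws summing to 1."""
    if p == 0:
        return math.prod(x ** w for w, x in zip(ws, xs))
    return sum(w * x ** p for w, x in zip(ws, xs)) ** (1.0 / p)

random.seed(1)
for _ in range(500):
    xs = [random.uniform(0.5, 5.0) for _ in range(4)]
    raw = [random.random() + 0.01 for _ in range(4)]
    ws = [r / sum(raw) for r in raw]          # normalize to sum 1
    p, q = sorted(random.uniform(-4.0, 4.0) for _ in range(2))
    assert weighted_power_mean(p, ws, xs) <= weighted_power_mean(q, ws, xs) + 1e-9
```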
The power mean could be generalized further to the generalized "f"-mean: $M_f(x_1,\dots,x_n) = f^{-1}\!\left(\frac{1}{n} \sum_{i=1}^{n} f(x_i)\right)$.
This covers the geometric mean without using a limit with "f"("x") = log("x"). The power mean is obtained for "f"("x") = "x""p".
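A sketch of the generalized "f"-mean in Python (the helper names are illustrative): with "f" = log the geometric mean falls out directly, with no limiting argument needed:

```python
import math

def f_mean(f, f_inv, xs):
    """Generalized f-mean: f_inv applied to the arithmetic mean of the f(x_i)."""
    return f_inv(sum(f(x) for x in xs) / len(xs))

# f(x) = log x gives the geometric mean directly:
print(f_mean(math.log, math.exp, [2.0, 8.0]))  # ~4.0

# f(x) = x**p recovers the power mean, here for p = 3:
print(f_mean(lambda x: x ** 3, lambda y: y ** (1 / 3), [1.0, 2.0, 3.0]))
```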
A power mean serves as a non-linear moving average which is shifted towards small signal values for small "p" and emphasizes big signal values for big "p". Given an efficient implementation of a moving arithmetic mean, a moving power mean can be implemented by raising the input samples to the "p"-th power, applying the moving arithmetic mean, and raising each result to the power 1/"p". | https://en.wikipedia.org/wiki?curid=13143 |
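A minimal sketch of this moving power mean construction in Python (the original source pointed to a Haskell implementation; the naive moving average here stands in for an efficient one, and all names are illustrative):

```python
def moving_average(xs, window):
    """Plain moving arithmetic mean; stands in for an efficient implementation."""
    return [sum(xs[i:i + window]) / window
            for i in range(len(xs) - window + 1)]

def moving_power_mean(p, xs, window):
    """Moving power mean: average the p-th powers, then take the 1/p-th root."""
    return [m ** (1.0 / p) for m in moving_average([x ** p for x in xs], window)]

signal = [1.0, 9.0, 1.0, 1.0]
print(moving_power_mean(1, signal, 2))  # ordinary moving average: [5.0, 5.0, 1.0]
print(moving_power_mean(3, signal, 2))  # larger p emphasizes the spike at 9.0
```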
Gerolamo Cardano
Gerolamo (or Girolamo, or Geronimo) Cardano (24 September 1501 – 21 September 1576) was an Italian polymath whose interests and proficiencies spanned mathematics, medicine, biology, physics, chemistry, astrology, astronomy, philosophy, writing, and gambling. He was one of the most influential mathematicians of the Renaissance, one of the key figures in the foundation of probability, and the earliest introducer of the binomial coefficients and the binomial theorem in the Western world. He wrote more than 200 works on science.
Cardano partially invented and described several mechanical devices including the combination lock, the gimbal consisting of three concentric rings allowing a supported compass or gyroscope to rotate freely, and the Cardan shaft with universal joints, which allows the transmission of rotary motion at various angles and is used in vehicles to this day. He made significant contributions to hypocycloids, published in "De proportionibus", in 1570. The generating circles of these hypocycloids were later named Cardano circles or cardanic circles and were used for the construction of the first high-speed printing presses.
Today, he is well known for his achievements in algebra. He made the first systematic use of negative numbers in Europe, published with attribution the solutions of other mathematicians for the cubic and quartic equations, and acknowledged the existence of imaginary numbers.
He was born in Pavia, Lombardy, the illegitimate child of Fazio Cardano, a mathematically gifted jurist, lawyer, and close personal friend of Leonardo da Vinci. In his autobiography, Cardano wrote that his mother, Chiara Micheri, had taken "various abortive medicines" to terminate the pregnancy; he was "taken by violent means from my mother; I was almost dead." She was in labour for three days. Shortly before his birth, his mother had to move from Milan to Pavia to escape the Plague; her three other children died from the disease.
After a depressing childhood, with frequent illnesses, including impotence, and the rough upbringing by his overbearing father, in 1520, Cardano entered the University of Pavia against the wish of his father, who wanted his son to study law; Girolamo felt more attracted to philosophy and science. During the Italian War of 1521–1526, however, the authorities in Pavia were forced to close the university in 1524. Cardano resumed his studies at the University of Padua, where he graduated with a doctorate in medicine in 1525. His eccentric and confrontational style did not earn him many friends, and he had a difficult time finding work after his studies had ended. In 1525, Cardano repeatedly applied to the College of Physicians in Milan, but was not admitted owing to his combative reputation and illegitimate birth. However, he was consulted by many members of the College of Physicians because of his undeniable intelligence.
Cardano wanted to practice medicine in a large, rich city like Milan, but he was denied a licence to practise, so he settled for the town of Saccolongo, where he practised without a license. There, he married Lucia Banderini in 1531. Before her death in 1546, they had three children, Giovanni Battista (1534), Chiara (1537) and Aldo Urbano (1543). Cardano later wrote that those were the happiest days of his life.
With the help of a few noblemen, Cardano obtained a teaching position in mathematics in Milan. Having finally received his medical licence, he practised mathematics and medicine simultaneously, treating a few influential patients in the process. Because of this, he became one of the most sought-after doctors in Milan. In fact, by 1536, he was able to quit his teaching position, although he was still interested in mathematics. His notability in the medical field was such that the aristocracy tried to lure him out of Milan. Cardano later wrote that he turned down offers from the kings of Denmark and France, and the Queen of Scotland.
Cardano was the first mathematician to make systematic use of negative numbers. He published with attribution the solution of Scipione del Ferro to the cubic equation and the solution of Cardano's student Lodovico Ferrari to the quartic equation in his 1545 book "Ars Magna". The solution to one particular case of the cubic equation, $x^3 + px = q$ (in modern notation), had been communicated to him in 1539 by Niccolò Fontana Tartaglia (who later claimed that Cardano had sworn not to reveal it, and engaged Cardano in a decade-long dispute) in the form of a poem, but Ferro's solution predated Fontana's. In his exposition, he acknowledged the existence of what are now called imaginary numbers, although he did not understand their properties, described for the first time by his Italian contemporary Rafael Bombelli. In "Opus novum de proportionibus" he introduced the binomial coefficients and the binomial theorem.
Cardano was notoriously short of money and kept himself solvent by being an accomplished gambler and chess player. His book about games of chance, "Liber de ludo aleae" ("Book on Games of Chance"), written around 1564, but not published until 1663, contains the first systematic treatment of probability, as well as a section on effective cheating methods. He used the game of throwing dice to understand the basic concepts of probability. He demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes). He was also aware of the multiplication rule for independent events but was not certain about what values should be multiplied.
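Cardano's way of reckoning with dice can be illustrated with a short modern sketch (illustrative Python, not Cardano's own notation): count the favourable outcomes against the unfavourable ones, here for throwing a total of 7 with two dice, and note that the probability is the ratio of favourable outcomes to all outcomes:

```python
from itertools import product

# Cardano-style counting for two six-sided dice: favourable outcomes
# against unfavourable ones, here for throwing a total of 7.
outcomes = list(product(range(1, 7), repeat=2))
favourable = [o for o in outcomes if sum(o) == 7]
unfavourable = len(outcomes) - len(favourable)

print(len(favourable), unfavourable)    # odds of 6 to 30
print(len(favourable) / len(outcomes))  # probability 6/36 = 1/6
```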
Cardano's work with hypocycloids led him to the Cardan's Movement or Cardan Gear mechanism, in which a pair of gears, the smaller being one-half the size of the larger, is used to convert rotational motion to linear motion with greater efficiency and precision than a Scotch yoke, for example. He is also credited with the invention of the Cardan suspension or gimbal.
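The geometric fact behind the Cardan gear can be verified numerically: a point on the rim of a circle of radius "r" rolling inside a circle of radius 2"r" traces a degenerate hypocycloid, a straight diameter. A Python sketch using the standard hypocycloid parametrization (illustrative code, not from the source):

```python
import math

def hypocycloid(R, r, t):
    """Point traced by a circle of radius r rolling inside a circle of radius R."""
    k = (R - r) / r
    x = (R - r) * math.cos(t) + r * math.cos(k * t)
    y = (R - r) * math.sin(t) - r * math.sin(k * t)
    return x, y

# Cardan gear case R = 2r: the curve collapses to a straight diameter,
# so pure rotation is converted into exact linear motion.
for i in range(100):
    t = 2.0 * math.pi * i / 100.0
    x, y = hypocycloid(2.0, 1.0, t)
    assert abs(y) < 1e-12               # the point never leaves the line y = 0
    assert -2.0 - 1e-12 <= x <= 2.0 + 1e-12
```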
Cardano made several contributions to hydrodynamics and held that perpetual motion is impossible, except in celestial bodies. He published two encyclopedias of natural science which contain a wide variety of inventions, facts, and occult superstitions. He also introduced the Cardan grille, a cryptographic writing tool, in 1550.
Cardano has also been credited with the invention of the so-called "Cardano's Rings", also called Chinese Rings, but it is very probable that they predate him.
Significantly, in the history of education of the deaf, he said that deaf people were capable of using their minds, argued for the importance of teaching them, and was one of the first to state that deaf people could learn to read and write without learning how to speak first. He was familiar with a report by Rudolph Agricola about a deaf mute who had learned to write.
As quoted from Charles Lyell's "Principles of Geology":
The title of a work of Cardano's, published in 1552, "De Subtilitate" (corresponding to what would now be called transcendental philosophy), would lead us to expect, in the chapter on minerals, many far fetched theories characteristic of that age; but when treating of petrified shells, he decided that they clearly indicated the former sojourn of the sea upon the mountains.
Two of Cardano's children — Giovanni Battista and Aldo Urbano — came to ignoble ends. Giovanni Battista, Cardano's eldest and favorite son, was tried and beheaded in 1560 for poisoning his wife, after he discovered that their three children were not his. Aldo Urbano was a gambler, who stole money from his father, and so Gerolamo disinherited him in 1569.
Cardano moved from Pavia to Bologna, in part because he believed that the decision to execute Giovanni was influenced by Gerolamo's battles with the academic establishment in Pavia, and his colleagues' jealousy at his scientific achievements and also because he was beset with allegations of sexual impropriety with his students. Cardano was arrested by the Inquisition in 1570 for unknown reasons, and forced to spend several months in prison and abjure his professorship. He moved to Rome, and received a lifetime annuity from Pope Gregory XIII (after first having been rejected by Pope Pius V) and finished his autobiography. He was accepted in the Royal College of Physicians, and as well as practising medicine he continued his philosophical studies until his death in 1576.
The seventeenth-century English physician and philosopher Sir Thomas Browne possessed the ten volumes of the Leyden 1663 edition of the complete works of Cardan in his library.
Browne critically viewed Cardan as:
"that famous Physician of Milan, a great Enquirer of Truth, but too greedy a Receiver of it. He hath left many excellent Discourses, Medical, Natural, and Astrological; the most suspicious are those two he wrote by admonition in a dream, that is "De Subtilitate & Varietate Rerum". Assuredly this learned man hath taken many things upon trust, and although examined some, hath let slip many others. He is of singular use unto a prudent Reader; but unto him that only desireth Hoties, or to replenish his head with varieties; like many others before related, either in the Original or confirmation, he may become no small occasion of Error."
Richard Hinckley Allen tells of an amusing reference made by Samuel Butler in his book "Hudibras":
Cardan believ'd great states depend
Upon the tip o'th' Bear's tail's end;
That, as she wisk'd it t'wards the Sun,
Strew'd mighty empires up and down;
Which others say must needs be false,
Because your true bears have no tails.
Alessandro Manzoni's novel "I Promessi Sposi" portrays a pedantic scholar of the obsolete, Don Ferrante, as a great admirer of Cardano. Significantly, he values him only for his superstitious and astrological writings; his scientific writings are dismissed because they contradict Aristotle, but excused on the ground that the author of the astrological works deserves to be listened to even when he is wrong.
English novelist E. M. Forster's "Abinger Harvest", a 1936 volume of essays, authorial reviews and a play, provides a sympathetic treatment of Cardano in the section titled 'The Past'. Forster believes Cardano was so absorbed in "self-analysis that he often forgot to repent of his bad temper, his stupidity, his licentiousness, and love of revenge" (212).
In honor of Gerolamo the Cardano cryptocurrency platform was named after him. | https://en.wikipedia.org/wiki?curid=13145 |
Gene Roddenberry
Eugene Wesley Roddenberry (August 19, 1921 – October 24, 1991) was an American television screenwriter, producer and creator of the science fiction series "Star Trek" and its first spin-off "Star Trek: The Next Generation". Born in El Paso, Texas, Roddenberry grew up in Los Angeles, where his father was a police officer. Roddenberry flew 89 combat missions in the Army Air Forces during World War II, and worked as a commercial pilot after the war. Later, he followed in his father's footsteps and joined the Los Angeles Police Department, where he also began to write scripts for television.
As a freelance writer, Roddenberry wrote scripts for "Highway Patrol", "Have Gun – Will Travel", and other series, before creating and producing his own television series, "The Lieutenant". In 1964, Roddenberry created "Star Trek", which premiered in 1966 and ran for three seasons before being canceled. He then worked on other projects, including a string of failed television pilots. The syndication of "Star Trek" led to its growing popularity; this, in turn, resulted in the "Star Trek" feature films, on which Roddenberry continued to produce and consult. In 1987, the sequel series "Star Trek: The Next Generation" began airing on television in first-run syndication; Roddenberry was heavily involved in the initial development of the series, but took a less active role after the first season due to ill health. He continued to consult on the series until his death in 1991.
In 1985, he became the first TV writer with a star on the Hollywood Walk of Fame, and he was later inducted by both the Science Fiction Hall of Fame and the Academy of Television Arts & Sciences Hall of Fame. Years after his death, Roddenberry was one of the first humans to have his ashes carried into Earth orbit. The popularity of the "Star Trek" universe has inspired films, books, comic books, video games, and fan films.
Roddenberry was born on August 19, 1921, in his parents' rented home in El Paso, Texas, the first child of Eugene Edward Roddenberry and Caroline "Glen" (née Golemon) Roddenberry. The family moved to Los Angeles in 1923 after Gene's father passed the Civil Service test and was given a police commission there. During his childhood, Roddenberry was interested in reading, especially pulp magazines, and was a fan of stories such as "John Carter of Mars", "Tarzan", and the "Skylark" series by E. E. Smith.
Roddenberry majored in police science at Los Angeles City College, where he began dating Eileen-Anita Rexroat and became interested in aeronautical engineering. He obtained a pilot's license through the United States Army Air Corps-sponsored Civilian Pilot Training Program. He enlisted with the USAAC on December 18, 1941, and married Eileen on June 13, 1942. He graduated from the USAAC on August 5, 1942, when he was commissioned as a second lieutenant.
He was posted to Bellows Field, Oahu, to join the 394th Bomb Squadron, 5th Bombardment Group, of the Thirteenth Air Force, which flew the Boeing B-17 Flying Fortress.
On August 2, 1943, while Roddenberry was piloting B-17E-BO "41-2463", "Yankee Doodle", out of Espiritu Santo, the plane overshot the runway and crashed into trees, crushing the nose and starting a fire, killing two men: bombardier Sgt. John P. Kruger and navigator Lt. Talbert H. Woolam. The official report absolved Roddenberry of any responsibility. Roddenberry spent the remainder of his military career in the United States, and flew all over the country as a plane crash investigator. He was involved in a further plane crash, this time as a passenger. He was awarded the Distinguished Flying Cross and the Air Medal.
In 1945, Roddenberry began flying for Pan American World Airways, including routes from New York to Johannesburg or Calcutta, the two longest Pan Am routes at the time. Listed as a resident of River Edge, New Jersey, he experienced his third crash while on the Clipper "Eclipse" on June 18, 1947. The plane came down in the Syrian Desert, and Roddenberry, who took control as the ranking flight officer, suffered two broken ribs but was able to drag injured passengers out of the burning plane and led the group to get help. Fourteen (or 15) people died in the crash; 11 passengers needed hospital treatment (including Bishnu Charan Ghosh), and eight were unharmed. He resigned from Pan Am on May 15, 1948, and decided to pursue his dream of writing, particularly for the new medium of television.
Roddenberry applied for a position with the Los Angeles Police Department on January 10, 1949, and spent his first 16 months in the traffic division before being transferred to the newspaper unit. This became the Public Information Division and Roddenberry became the Chief of Police's speech writer. He became technical advisor for a new television version of "Mr. District Attorney", which led to him writing for the show under his pseudonym "Robert Wesley". He began to collaborate with Ziv Television Programs, and continued to sell scripts to "Mr. District Attorney", in addition to Ziv's "Highway Patrol". In early 1956, he sold two story ideas for "I Led Three Lives", and he found that it was becoming increasingly difficult to be a writer and a policeman. On June 7, 1956, he resigned from the force to concentrate on his writing career.
Roddenberry was promoted to head writer for "The West Point Story", and wrote 10 scripts for the first season, about a third of the total episodes. While working for Ziv, in 1956, he pitched a series to CBS set aboard a cruise ship, "Hawaii Passage", but they did not buy it, as he wanted to become a producer and have full creative control. He wrote another script for Ziv's series "Harbourmaster" titled "Coastal Security", and signed a contract with the company to develop a show called "Junior Executive" with Quinn Martin. Nothing came of the series.
He wrote scripts for a number of other series in his early years as a professional writer, including "Bat Masterson" and "Jefferson Drum". Roddenberry's episode of the series "Have Gun – Will Travel", "Helen of Abajinian", won the Writer's Guild of America award for Best Teleplay in 1958. He also continued to create series of his own, including a series based on an agent for Lloyd's of London called "The Man from Lloyds". He pitched a police-based series called "Footbeat" to CBS, Hollis Productions, and Screen Gems. It nearly made it into ABC's Sunday-night lineup, but they opted to show only Western series that night.
Roddenberry was asked to write a series called "Riverboat", set in 1860s Mississippi. When he discovered that the producers wanted no black people on the show, he argued so much with them that he lost the job. He also considered moving to England around this time, as Lew Grade wanted Roddenberry to develop series and set up his own production company. Though he did not move, he leveraged the deal to land a contract with Screen Gems that included a guaranteed $100,000, and became a producer for the first time on a summer replacement for "The Tennessee Ernie Ford Show" titled "Wrangler".
Screen Gems backed Roddenberry's first attempt at creating a pilot. His series, "The Wild Blue", went to pilot, but was not picked up. The three main characters had names that later appeared in the "Star Trek" franchise: Philip Pike, Edward Jellicoe, and James T. Irvine. While working at Screen Gems, an actress, new to Hollywood, wrote to him asking for a meeting. They quickly became friends and met every few months; the woman was Majel Leigh Hudec, later known as Majel Barrett. He created a second pilot called "333 Montgomery" about a lawyer, played by DeForest Kelley. It was not picked up by the network, but was later rewritten as a new series called "Defiance County". His career with Screen Gems ended in late 1961, and shortly afterward, he had issues with his old friend Erle Stanley Gardner. The "Perry Mason" creator claimed that "Defiance County" had infringed his character Doug Selby. The two writers fell out via correspondence and stopped contacting one another, though "Defiance County" never proceeded past the pilot stage.
In 1961, he agreed to appear in an advertisement for MONY (Mutual of New York), as long as he had final approval. With the money from Screen Gems and other works, Eileen and he moved to 539 South Beverly Glen, near Beverly Hills. He discussed an idea about a multiethnic crew on an airship travelling the world, based on the film "Master of the World" (1961), with fellow writer Christopher Knopf at MGM. As the time was not right for science fiction, he began work on "The Lieutenant" for Arena Productions. This made it to the NBC Saturday night lineup at 7:30 pm, and premiered on September 14, 1963. The show set a new ratings record for the time slot. Roddenberry worked with several cast and crew who would later join him on "Star Trek", including: Gene L. Coon, star Gary Lockwood, Joe D'Agosta, Leonard Nimoy, Nichelle Nichols, and Majel Barrett.
"The Lieutenant" was produced with the co-operation of the Pentagon, which allowed them to film at an actual Marine base. During the production of the series, Roddenberry clashed regularly with the Department of Defense over potential plots. The department withdrew its support after Roddenberry pressed ahead with a plot titled "To Set It Right" in which a white and a black man find a common cause in their roles as Marines. "To Set It Right" was the first time he worked with Nichols, and it was her first television role. The episode has been preserved at the Museum of Television and Radio in New York City. The show was not renewed after its first season. Roddenberry was already working on a new series idea. This included his ship location from "Hawaii Passage" and added a Horatio Hornblower character, plus the multiracial crew from his airship idea. He decided to write it as science fiction, and by March 11, 1964, he brought together a 16-page pitch. On April 24, he sent three copies and two dollars to the Writers Guild of America to register his series. He called it "Star Trek".
When Roddenberry pitched "Star Trek" to MGM, it was warmly received, but no offer was made. He then went to Desilu Productions, but rather than being offered a one-script deal, he was hired as a producer and allowed to work on his own projects. His first was a half-hour pilot called "Police Story" (not to be confused with the anthology series created by Joseph Wambaugh), which was not picked up by the networks. Having not sold a pilot in five years, Desilu was having financial difficulties; its only success was "I Love Lucy". Roddenberry took the "Star Trek" idea to Oscar Katz, head of programming, and the duo immediately started work on a plan to sell the series to the networks. They took it to CBS, which ultimately passed on it. The duo later learned that CBS had been eager to find out about "Star Trek" because it had a science fiction series in development—"Lost in Space". Roddenberry and Katz next took the idea to Mort Werner at NBC, this time downplaying the science fiction elements and highlighting the links to "Gunsmoke" and "Wagon Train". The network funded three story ideas, and selected "The Menagerie", which was later known as "The Cage", to be made into a pilot. (The other two later became episodes of the series.) While most of the money for the pilot came from NBC, the remaining costs were covered by Desilu. Roddenberry hired Dorothy Fontana, better known as D. C. Fontana, as his assistant. They had worked together previously on "The Lieutenant", and she had eight script credits to her name.
Roddenberry and Barrett had begun an affair by the early days of "Star Trek", and he specifically wrote the part of the character Number One in the pilot with her in mind; no other actresses were considered for the role. Barrett suggested Nimoy for the part of Spock. He had worked with both Roddenberry and Barrett on "The Lieutenant", and once Roddenberry remembered the thin features of the actor, he did not consider anyone else for the part. The remaining cast came together; filming began on November 27, 1964, and was completed on December 11. After post-production, the episode was shown to NBC executives and it was rumored that "Star Trek" would be broadcast at 8:00 pm on Friday nights. The episode failed to impress test audiences, and after the executives became hesitant, Katz offered to make a second pilot. On March 26, 1965, NBC ordered a new episode.
Roddenberry developed several possible scripts, including "Mudd's Women", "The Omega Glory", and with the help of Samuel A. Peeples, "Where No Man Has Gone Before". NBC selected the last one, leading to later rumors that Peeples created "Star Trek", something he always denied. Roddenberry was determined to make the crew racially diverse, which impressed actor George Takei when he came for his audition. The episode went into production on July 15, 1965, and was completed at around half the cost of "The Cage", since the sets were already built. Roddenberry worked on several projects for the rest of the year. In December, he decided to write lyrics to the "Star Trek" theme; this angered the theme's composer, Alexander Courage, as it meant that royalties would be split between them. In February 1966, NBC informed Desilu that they were buying "Star Trek" and that it would be included in the fall 1966 television schedule.
On May 24, the first episode of the "Star Trek" series went into production; Desilu was contracted to deliver 13 episodes. Five days before the first broadcast, Roddenberry appeared at the 24th World Science Fiction Convention and previewed "Where No Man Has Gone Before". After the episode was shown, he received a standing ovation. The first episode to air on NBC was "The Man Trap", on September 8, 1966, at 8:00 pm. Roddenberry was immediately concerned about the series' low ratings and wrote to Harlan Ellison to ask if he could use his name in letters to the network to save the show. Not wanting to lose a potential source of income, Ellison agreed and sought the help of other writers who likewise wanted to avoid losing a source of income. Roddenberry corresponded with science fiction writer Isaac Asimov about how to address the issue of Spock's growing popularity and the possibility that his character would overshadow Kirk. Asimov suggested having Kirk and Spock work together as a team "to get people to think of Kirk when they think of Spock." The series was renewed by NBC, first for a full season's order, and then for a second season. An article in the "Chicago Tribune" quoted studio executives as stating that the letter-writing campaign had been wasted because they had already been planning to renew "Star Trek".
Roddenberry often rewrote submitted scripts, although he did not always take credit for these. Roddenberry and Ellison fell out over "The City on the Edge of Forever" after Roddenberry rewrote Ellison's script to make it both financially feasible to film and usable for the series context. Even his close friend Don Ingalls had his script for "A Private Little War" altered drastically, and as a result, Ingalls declared that he would only be credited under the pseudonym "Jud Crucis" (a play on "Jesus Christ"), claiming he had been crucified by the process. Roddenberry's work rewriting "The Menagerie", based on footage originally shot for "The Cage", resulted in a Writers' Guild arbitration board hearing. The Guild ruled in his favor over John D. F. Black, the complainant. The script won a Hugo Award, but the awards board neglected to inform Roddenberry, who found out through correspondence with Asimov.
As the second season was drawing to a close, Roddenberry once again faced the threat of cancellation. He enlisted the help of Asimov, and even encouraged a student-led protest march on NBC. On January 8, 1968, a thousand students from 20 different schools across the country marched on the studio. Roddenberry began to communicate with "Star Trek" fan Bjo Trimble, who led a fan writing campaign to save the series. Trimble later noted that this campaign of writing to fans who had written to Desilu about the show, urging them to write NBC, had created an organized "Star Trek" fandom. The network received around 6,000 letters a week from fans petitioning it to renew the series. On March 1, 1968, NBC announced on air, at the end of "The Omega Glory", that "Star Trek" would return for a third season.
The network had initially planned to place "Star Trek" in the 7:30 pm Monday-night time slot freed up by "The Man from U.N.C.L.E." completing its run. Instead, the enraged George Schlatter forced the network to insert "Rowan and Martin's Laugh-In" into the slot, and Roddenberry's series was moved to 10:00 pm on Fridays. Realizing the show could not survive in that time slot and burned out from arguments with the network, Roddenberry resigned from the day-to-day running of "Star Trek", although he continued to be credited as executive producer. Roddenberry cooperated with Stephen Edward Poe, writing as Stephen Whitfield, on the 1968 nonfiction book "The Making of Star Trek" for Ballantine Books, splitting the royalties evenly. Roddenberry explained to Whitfield: "I had to get some money somewhere. I'm sure not going to get it from the profits of "Star Trek"." Herbert Solow and Robert H. Justman observed that Whitfield never regretted his 50-50 deal with Roddenberry, since it gave him "the opportunity to become the first chronicler of television's successful unsuccessful series." Whitfield had previously been the national advertising and promotion director for model makers Aluminum Model Toys, better known as "AMT", which then held the "Star Trek" license, and moved to run Lincoln Enterprises, Roddenberry's company set up to sell the series' merchandise.
Having stepped aside from the majority of his "Star Trek" duties, Roddenberry sought instead to create a film based on Asimov's "I, Robot" and also began work on a "Tarzan" script for National General Pictures. After initially requesting a budget of $2 million and being refused, Roddenberry made cuts to reduce costs to $1.2 million. When he learned they were being offered only $700,000 to shoot the film, which by now was being called a TV movie, he canceled the deal. Meanwhile, NBC announced the cancellation of "Star Trek" in February 1969. A similar but much smaller letter-writing campaign followed news of the cancellation. Because of the manner in which the series was sold to NBC, it left the production company $4.7 million in debt. The last episode of "Star Trek" aired 47 days before Neil Armstrong stepped onto the moon as part of the Apollo 11 mission, and Roddenberry declared that he would never write for television again.
Following the cancellation of "Star Trek," Roddenberry felt typecast as a producer of science fiction, despite his background in Westerns and police stories. He later described the period, saying, "My dreams were going downhill because I could not get work after the original series was cancelled." He felt that he was "perceived as the guy who made the show that was an expensive flop." Roddenberry had sold his interest in "Star Trek" to Paramount Studios in return for a third of the ongoing profits. However, this did not result in any quick financial gain; the studio was still claiming that the series was $500,000 in the red in 1982. He wrote and produced "Pretty Maids All in a Row" (1971), a sexploitation film directed by Roger Vadim, for MGM. The cast included Rock Hudson, Angie Dickinson, Telly Savalas, and Roddy McDowall alongside "Star Trek" regular James Doohan, and William J. Campbell, who had appeared as a guest in two "Star Trek" episodes, "The Squire of Gothos" and "The Trouble with Tribbles". "Variety" was unimpressed: "Whatever substance was in the original [novel by Francis Pollini] or screen concept has been plowed under, leaving only superficial, one-joke results." Herbert Solow had given Roddenberry the work as a favor, paying him $100,000 for the script.
Faced with a mortgage and a $2,000-per-month alimony obligation as a result of his 1969 divorce, he began to support himself largely by giving college lectures and appearances at science fiction conventions. These presentations included screenings of "The Cage" and blooper reels from the production of "Star Trek." The conventions began to build the fan support to bring back "Star Trek," leading "TV Guide" to describe it, in 1972, as "the show that won't die."
In 1972 and 1973, Roddenberry made a comeback to science fiction, selling ideas for four new series to a variety of networks. Roddenberry's "Genesis II" was set in a postapocalyptic Earth. He had hoped to recreate the success of "Star Trek" without "doing another space-hopping show." He created a 45-page writing guide, and proposed several story ideas based on the concept that pockets of civilisation had regressed to past eras or changed altogether. The pilot aired as a TV movie in March 1973, setting new records for the "Thursday Night Movie of the Week". Roddenberry was asked to produce four more scripts for episodes, but before production could begin again, CBS aired the film "Planet of the Apes." It was watched by an even greater audience than "Genesis II." CBS scrapped "Genesis II" and replaced it with a television series based on the film; the results were disastrous from a ratings standpoint, and "Planet of the Apes" was quickly canceled.
"The Questor Tapes" project reunited him with his "Star Trek" collaborator, Gene L. Coon, who was in failing health at the time. NBC ordered 16 episodes, and tentatively scheduled the series to follow "The Rockford Files" on Friday nights; the pilot launched on January 23, 1974, to positive critical response, but Roddenberry balked at the substantial changes requested by the network and left the project, leading to its immediate cancellation. During 1974, Roddenberry reworked the "Genesis II" concept as a second pilot, "Planet Earth," for rival network ABC, with similar less-than-successful results. The pilot was aired on April 23, 1974. While Roddenberry wanted to create something that could feasibly exist in the future, the network wanted stereotypical science-fiction women and were unhappy when that was not delivered. Roddenberry was not involved in a third reworking of the material by ABC that produced "Strange New World." He began developing "MAGNA I," an underwater science-fiction series, for 20th Century Fox Television. By the time the work on the script was complete, though, those who had approved the project had left Fox and their replacements were not interested in the project. A similar fate was faced by "Tribunes," a science-fiction police series, which Roddenberry attempted to get off the ground between 1973 and 1977. He gave up after four years; the series never even reached the pilot stage.
In 1974, Roddenberry was paid $25,000 by John Whitmore to write a script called "The Nine". Intended to be about Andrija Puharich's parapsychological research, it evolved into a frank exploration of Roddenberry's own experiences attempting to earn a living attending science fiction conventions. At the time, he was again close to losing his house because of a lack of income.
The pilot "Spectre," Roddenberry's 1977 attempt to create an occult detective duo similar to Sherlock Holmes and Dr. Watson, was released as a television movie within the United States and received a limited theatrical release in the United Kingdom.
Lacking funds in the early 1970s, Roddenberry was unable to buy the full rights to "Star Trek" for $150,000 from Paramount. Lou Scheimer approached Paramount in 1973 about creating an animated "Star Trek" series. Credited as "executive consultant" and paid $2,500 per episode, Roddenberry was granted full creative control of "Star Trek: The Animated Series". Although he read all the scripts and "sometimes [added] touches of his own", he relinquished most of his authority to "de facto" showrunner/associate producer D. C. Fontana.
Roddenberry had some difficulties with the cast. To save money, he sought not to hire George Takei and Nichelle Nichols. He neglected to inform Leonard Nimoy of this and instead, in an effort to get him to sign on, told him that he was the only member of the main cast not returning. After Nimoy discovered the deception, he demanded that Takei and Nichols play Sulu and Uhura when their characters appeared on screen; Roddenberry acquiesced. He had been promised five full seasons of the new show, but ultimately, only one and a half were produced.
However, the groundswell of vociferous fan support (6,000 attended the second New York "Star Trek" convention in 1973 and 15,000 attended in 1974, eclipsing the 4,500 attendees at the 32nd World Science Fiction Convention in 1974) led Paramount to hire Roddenberry to create and produce a feature film based on the franchise in May 1975. The studio was unimpressed with the ideas being put forward; John D. F. Black's opinion was that their ideas were never "big enough" for the studio, even when one scenario involved the end of the universe. At the time, several ideas were partly developed including "" and "". Following the commercial reception of "Star Wars", in June 1977, Paramount instead green-lit a new series set in the franchise titled "Star Trek: Phase II", with Roddenberry producing and most of the original cast, except Nimoy, set to reprise their roles. It was to be the anchor show of a proposed Paramount-owned "fourth network", but plans for the network were scrapped and the project was reworked into a feature film. The result, "Star Trek: The Motion Picture", troubled the studio because of budgetary concerns, but was a box-office hit. Adjusted for inflation, it was the third-highest grossing "Star Trek" movie, with the 2009 film coming in first and the second.
In 1980, Roddenberry submitted a treatment for a proposed sequel about the crew preventing the alien Klingons from thwarting the assassination of John F. Kennedy. Mindful of the tumult that suffused the production of "Star Trek: The Motion Picture", Paramount rejected the proposal. After he was replaced on the project by television producer Harve Bennett, Roddenberry was named "executive consultant" for the project, a position he retained for all subsequent Star Trek franchise films produced during his lifetime. Under this arrangement, he was compensated with a producer's fee and a percentage of the net profits of the film in exchange for proffering non-binding story notes and corresponding with the fan community; much to his ongoing chagrin, these memos were largely disregarded by Bennett and other producers. An initial script for "Star Trek II: The Wrath of Khan" was circulated to eight people; Bennett attributed the subsequent plot leak of the death of Spock to Roddenberry. About 20% of the plot was based on Roddenberry's ideas.
Roddenberry was involved in creating the television series "Star Trek: The Next Generation", which premiered with "Encounter at Farpoint" on September 28, 1987. He was given a bonus of $1 million in addition to an ongoing salary to produce the series, and celebrated by purchasing a new Rolls-Royce for $100,000. The arrangement did not entitle him to be executive producer of the series. However, Paramount was already concerned about the original cast not returning, and fearing fan reaction if Roddenberry was not involved, agreed to his demand for control of the show. Roddenberry rewrote the series bible from an original version by David Gerrold, who had previously written "The Original Series" episode "The Trouble with Tribbles", and "The Animated Series" follow-up, "More Tribbles, More Troubles".
According to producer Rick Berman, Roddenberry's involvement in "The Next Generation" "diminished greatly" after the first season, but the nature of his increasingly peripheral role was not disclosed because of the value of his name to fans. While Berman said that Roddenberry had "all but stopped writing and rewriting" by the end of the third season, his final writing credit on the show (a co-teleplay credit) actually occurred considerably earlier, appearing on "Datalore", the 13th episode of the first season.
Although commercially successful from its inception, the series was initially marred by Writers Guild of America grievances from Fontana and Gerrold, both of whom left the series under acrimonious circumstances; frequent turnover among the writing staff (24 staff writers left the show during its first three seasons, triple the average attrition rate for such series); and allegations that longtime Roddenberry attorney Leonard Maizlish had become the producer's "point man and proxy", ghostwriting memos, sitting in on meetings, and contributing to scripts despite not being on staff. Writer Tracy Tormé described the first few seasons of "The Next Generation" under Roddenberry as an "insane asylum".
In 1990, Nicholas Meyer was brought in to direct the sixth film in the series: "Star Trek VI: The Undiscovered Country". Creatively, Meyer clashed with Roddenberry, who felt that having the "Enterprise" crew hold prejudices against the Klingons did not fit with his view of the universe. Meyer described a meeting with Roddenberry he later regretted, saying: "His guys were lined up on one side of the room, and my guys were lined up on the other side of the room, and this was not a meeting in which I felt I'd behaved very well, very diplomatically. I came out of it feeling not very good, and I've not felt good about it ever since. He was not well, and maybe there were more tactful ways of dealing with it, because at the end of the day, I was going to go out and make the movie. I didn't have to take him on. Not my finest hour." In Joel Engel's biography, "Gene Roddenberry: The Myth and the Man Behind Star Trek", he states that Roddenberry watched "The Undiscovered Country" alongside the producers of the film at a private screening two days before his death, and told them they had done a "good job". In contrast, Nimoy and Shatner's memoirs report that after the screening, Roddenberry called his lawyer and demanded a quarter of the scenes be cut; the producers refused.
In addition to his film and television work, Roddenberry wrote the novelization of "Star Trek: The Motion Picture". Although it has been incorrectly attributed to several other authors (most notably Alan Dean Foster), it was the first in a series of hundreds of "Star Trek"-based novels to be published by the Pocket Books imprint of Simon & Schuster, whose parent company also owned Paramount Pictures Corporation. Previously, throughout 1976, Roddenberry worked intermittently on "The God Thing", a proposed novel based upon his rejected 1975 screenplay for a low-budget ($3 to $5 million) "Star Trek" film that preceded the development of "Phase II". Attempts to complete the project by Walter Koenig, Susan Sackett, Fred Bronson, and Michael Jan Friedman have proven to be unfeasible for a variety of legal and structural reasons.
While at Los Angeles City College, Roddenberry began dating Eileen-Anita Rexroat. They became engaged before Roddenberry left Los Angeles during his military service, and married in June 1942 at the chapel at Kelly Field. They had two children together, Darleen Anita and Dawn Allison. During his time in the LAPD, Roddenberry was known to have had affairs with secretarial staff. Before his work on "Star Trek", he began relationships with both Nichelle Nichols and Majel Barrett. Nichols only wrote about their relationship in her autobiography "Beyond Uhura" after Roddenberry's death. At the time, Roddenberry wanted to remain in an open relationship with both women, but Nichols, recognising Barrett's devotion to him, ended the affair as she did not want to be "the other woman to the other woman".
Barrett and he had an apartment together by the opening weeks of "Star Trek". He had planned to divorce Eileen after the first season of the show, but when it was renewed, he delayed doing so, fearing that he would not have enough time to deal with both the divorce and "Star Trek". He moved out of the family home on August 9, 1968, two weeks after the marriage of his daughter Darleen. In 1969, while scouting locations in Japan for MGM for "Pretty Maids All in a Row", he proposed to Barrett by telephone. They were married in a Shinto ceremony, as Roddenberry had considered it "sacrilegious" to use an American minister in Japan. Roddenberry and Barrett had a son together, Eugene Jr., commonly and professionally known as Rod Roddenberry, in February 1974. From 1975 until his death, Roddenberry maintained an extramarital relationship with his executive assistant, Susan Sackett.
Roddenberry was raised a Southern Baptist; however, as an adult, he rejected religion, and considered himself a humanist. He began questioning religion around the age of 14, and came to the conclusion that it was "nonsense". As a child, he served in the choir at his local church, but often substituted lyrics as he sang hymns. Early in his writing career, he received an award from the American Baptist Convention for "skillfully writing Christian truth and the application of Christian principles into commercial, dramatic TV scripts". For several years, he corresponded with John M. Gunn of the National Council of Churches regarding the application of Christian teachings in television series. However, Gunn stopped replying after Roddenberry wrote in a letter: "But you must understand that I am a complete pagan, and consume enormous amounts of bread, having found the Word more spice than nourishment, so I am interested in a statement couched in dollars and cents of what this means to the Roddenberry treasury."
Roddenberry said of Christianity, "How can I take seriously a God-image that requires that I prostrate myself every seven days and praise it? That sounds to me like a very insecure personality." At one point, he worked a similar opinion, which was to have been stated by a Vulcan, into the plot for "Star Trek: The God Thing". Before his death, Roddenberry became close friends with philosopher Charles Musès, who said that Roddenberry's views were "a far cry from atheism". Roddenberry explained his position thus: "It's not true that I don't believe in God. I believe in a kind of God. It's just not other people's God. I reject religion. I accept the notion of God." He had an ongoing interest in other people's experiences with religion, and called Catholicism "a very beautiful religion. An art form." However, he said that he dismissed all organized religions, saying that for the most part, they acted like a "substitute brain... and a very malfunctioning one". Roddenberry was also critical of how the public looked at certain religions, noting that when the King David Hotel bombing took place in 1946, the American public accepted it as the action of freedom fighters, whereas a car bombing by a Muslim in Beirut was condemned as a terrorist act. While he agreed that both parties were wrong in their use of violence, he said that the actions of both were undertaken because of their strong religious beliefs.
According to Ronald D. Moore, Roddenberry "felt very strongly that contemporary Earth religions would be gone by the 23rd century". Brannon Braga said that Roddenberry made it known to the writers of "Star Trek" and "Star Trek: The Next Generation" that religion, superstition, and mystical thinking were not to be included. Even a mention of marriage in a script for an early episode of "The Next Generation" resulted in Roddenberry's chastising the writers. Nicholas Meyer said that "Star Trek" had evolved "into sort of a secular parallel to the Catholic Mass". Roddenberry compared the franchise to his own philosophy by saying: "Understand that "Star Trek" is more than just my political philosophy, my racial philosophy, my overview on life and the human condition." He was awarded the 1991 Humanist Arts Award from the American Humanist Association.
In the late 1980s, Roddenberry was likely afflicted by the first manifestations of cerebral vascular disease and encephalopathy as a result of his longstanding recreational use of legal and illicit drugs, including alcohol, methaqualone, methylphenidate, Dexamyl, and cocaine (which he had used regularly since the production of "Star Trek: The Motion Picture"). Throughout much of his career, he had routinely used stimulants, especially amphetamines, to work through the night on scripts. The effects of these substances were compounded by deleterious interactions with diabetes, high blood pressure, and antidepressant prescriptions.
Following a stroke at a family reunion in Tallahassee, Florida, in September 1989, Roddenberry's health declined further, ultimately requiring him to use a wheelchair. His right arm was paralyzed after another stroke in early October 1991, causing him ongoing pain as the muscles began to atrophy. It also caused problems with the sight in his right eye and he found communicating in full sentences difficult. At 2:00 pm, on October 24, he attended an appointment with his doctor, Dr. Ronald Rich. He arrived in the building with his staff, and began to travel up to the ninth floor in the elevator. As they reached the fifth floor, he began struggling for breath, and was wheeled into the doctor's office, where he was reclined and a nurse administered oxygen. Barrett was sent for. Upon her arrival, she held Roddenberry while encouraging him to breathe. He went into cardiopulmonary arrest and died in the doctor's office shortly afterwards. Resuscitation was attempted without effect, and paramedics arrived to take him across the road to the Santa Monica Medical Center, where he was pronounced dead.
The funeral was arranged for November 1, with the public invited to the memorial service at the Hall of Liberty, within the Forest Lawn Memorial Park, in Hollywood Hills. It was a secular service; Roddenberry had been cremated before the event. More than 300 "Star Trek" fans attended, and stood in the balcony section of the hall, while the invited guests were on the floor level. Nichelle Nichols sang twice during the ceremony, first "Yesterday" and then a song she wrote herself titled "Gene". Both songs had been requested by Barrett. Several people spoke at the memorial, including Ray Bradbury, Whoopi Goldberg, Christopher Knopf, E. Jack Neuman, and Patrick Stewart. The ceremony was closed by two kilted pipers playing "Amazing Grace" as a recorded message by Roddenberry was broadcast. A four-plane flypast, in the missing man formation, followed some 30 minutes later. After his death, "Star Trek: The Next Generation" aired a two-part episode of season five, called "Unification", which featured a dedication to Roddenberry.
Roddenberry's will left the majority of his $30 million estate to Barrett, in a trust. He also left money to his children and his first wife Eileen. However, his daughter Dawn contested the will on the grounds that Barrett had undue influence on her father. In a hearing held in 1993, the Los Angeles Superior Court ruled that improprieties existed in the management of the trust and removed Barrett as executor. In another decision, the court found that Roddenberry had hidden assets from "Star Trek" in the Norway Corporation to keep funds away from his first wife, and ordered the payment of 50% of those assets to Eileen, as well as punitive damages. In 1996, the California Court of Appeals ruled that the original will, which stated that anyone who contested it would be disinherited, would stand. As a result, Dawn lost $500,000 from the estate, as well as a share of the trust upon Barrett's death. The appellate court also overturned the earlier decision to award Roddenberry's first wife, Eileen, 50% of his assets. The judge called that case one "that should never have been".
In 1992, some of Roddenberry's ashes were flown into space, and returned to Earth, on the Space Shuttle "Columbia" mission STS-52. On April 21, 1997, a Celestis spacecraft with 7 grams (a quarter of an ounce) of the cremated remains of Roddenberry, along with those of Timothy Leary, Gerard K. O'Neill and 21 other people, was launched into Earth orbit aboard a Pegasus XL rocket from a site near the Canary Islands. On May 20, 2002, the spacecraft's orbit deteriorated and it disintegrated in the atmosphere. Another flight to launch more of his ashes into deep space, along with those of Barrett, who died in 2008, was initially planned to take place in 2009. Unlike previous flights, the intention was that this flight would not return to burn up in the Earth's atmosphere. The payload was to include the ashes of James Doohan in addition to the Roddenberrys' and several others and was scheduled to fly in 2016 on the Sunjammer solar sail experiment, but the project was canceled in 2014. At this time, it is not known if there is another mission being planned.
In 1985, Gene Roddenberry was the first television writer to receive a star on the Hollywood Walk of Fame. When the Sci-Fi Channel was launched, the first broadcast was a dedication to two "science fiction pioneers": Isaac Asimov and Roddenberry. The Roddenberry crater on Mars is named after him, as is the asteroid 4659 Roddenberry. Roddenberry and "Star Trek" have been cited as inspiration for other science fiction franchises, with George Lucas crediting the series for enabling "Star Wars" to be produced. J. Michael Straczynski, creator of the "Babylon 5" franchise, appreciated "Star Trek" amongst other science fiction series and "what they had to say about who we are, and where we are going."
David Alexander collaborated with Roddenberry on a biography over two decades. Titled "Star Trek Creator", it was published in 1995. Yvonne Fern's book, "Gene Roddenberry: The Last Conversation", detailed a series of conversations she had with Roddenberry over the last months of his life. In October 2002, a plaque was placed at Roddenberry's birthplace in El Paso, Texas. The Science Fiction Hall of Fame inducted Roddenberry in 2007, and the Television Academy Hall of Fame in January 2010.
"Star Trek: Deep Space Nine" was already in development when Roddenberry died. Berman said that while he never discussed the ideas for the series with Roddenberry, he was given his blessing to pursue it. Berman later stated, "I don't believe the 24th century is going to be like Gene Roddenberry believed it to be, that people will be free from poverty and greed. But if you're going to write and produce for "Star Trek", you've got to buy into that." In early 1996, Majel Barrett-Roddenberry uncovered scripts for a series called "Battleground Earth". The project was sent to distributors by the Creative Artists Agency, and it was picked up by Tribune Entertainment, which set the budget at over $1 million per episode. The series was renamed "Earth: Final Conflict" before launch, and premiered in 1997, six years after Gene's death; it ran for 5 seasons and 110 episodes until 2002.
Two further series ideas were developed from Roddenberry's notes, "Genesis" and "Andromeda". After an initial order for two seasons, 110 episodes of "Andromeda" were aired over five seasons from 2000 to 2005. Tribune also worked on another Roddenberry series, titled "Starship", which they aimed to launch via the network route rather than in syndication. Rod Roddenberry, president of Roddenberry Productions, announced in 2010, at his father's posthumous induction into the Academy of Television Arts and Sciences Hall of Fame, that he was aiming to take "The Questor Tapes" to television. Rod was developing the series alongside Imagine Television. Rod would go on to create the two-hour television movie "Trek Nation" regarding the impact of his father's work.
The majority of the awards and nominations received by Roddenberry throughout his career were related to "Star Trek". He was credited for "Star Trek" during the nominations for two Emmy Awards, and won two Hugo Awards. One Hugo was a special award for the series, while another was for "The Menagerie", the episode which used footage from the original unaired pilot for "Star Trek", "The Cage". In addition, he was awarded the Brotherhood Award by the National Association for the Advancement of Colored People for his work in the advancement of African American characters on television. Following the end of "Star Trek", he was nominated for Hugo Awards for "Genesis II" and "The Questor Tapes". Following his death in 1991, he was posthumously awarded the Robert A. Heinlein Memorial Award by the National Space Society and The George Pal Memorial Award at the Saturn Awards, as well as the Exceptional Public Service Medal by NASA.
Galaxy Quest
Galaxy Quest is a 1999 American science fiction comedy film directed by Dean Parisot and written by David Howard and Robert Gordon. A parody of and homage to science-fiction films and series, especially "Star Trek" and its fandom, the film stars Tim Allen, Sigourney Weaver, Alan Rickman, Tony Shalhoub, Sam Rockwell and Daryl Mitchell. It depicts the cast of a fictional defunct cult television series, "Galaxy Quest", who are visited by actual aliens who think the series is an accurate documentary, and become involved in a very real intergalactic conflict.
The film was a modest box office success and positively received by critics: it won the Hugo Award for Best Dramatic Presentation (an award won by the original "Star Trek" series in the 1960s) and the Nebula Award for Best Script. It was also nominated for 10 Saturn Awards, including Best Science Fiction Film and Best Director for Parisot, Best Actress for Weaver, and Best Supporting Actor for Rickman, with Allen winning Best Actor.
"Galaxy Quest" eventually achieved cult status, especially from "Star Trek" fans for its affectionate parody, but also from more mainstream audiences as a comedy film in its own right.
Several "Star Trek" cast and crew members praised the film. It was included in "Reader's Digest"s list of The Top 100+ Funniest Movies of All Time in 2012, and "Star Trek" fans voted it the seventh best "Star Trek" film of all time in 2013.
The cast members of the canceled 1980s space-adventure television series "Galaxy Quest" spend most of their days attending fan conventions and promotional appearances. Though the series' former lead star Jason Nesmith (Tim Allen) thrives on the attention, the other cast members resent him and, to varying degrees, the states of their careers.
At a convention, Jason is approached by a group calling themselves Thermians, led by Mathesar, who request his help. Believing he is being asked for a promotional appearance, he agrees to be picked up the next morning. Jason is hung over when he is picked up and does not grasp that he has been transported to a working re-creation of the bridge of the NSEA "Protector", the starship from "Galaxy Quest". Believing he has been called on to perform, he gives half-hearted orders as captain, directing them to attack his enemy General Sarris, temporarily defeating him. When the Thermians transport him back to Earth, he realizes the experience was real. He attempts to relate his adventure to the other cast members, but is rebuffed. When the Thermian Laliari appears and requests Jason's help again, he convinces the cast to join him.
Once aboard the "Protector", the group learns that the Thermians received the broadcasts of "Galaxy Quest" and, having no understanding of fiction, mistook them for historical documentaries. Inspired by the crew's adventures, they restructured their society to reflect the show's virtues, and manufactured a functioning replica of the "Protector". When the evil warlord Sarris attacks the ship, the group flees through a field of magnetic mines. Though they escape Sarris, the ship's power source, its beryllium sphere, is damaged. They detect beryllium on a nearby planet, and the humans travel to the surface to retrieve a new sphere. After a series of mishaps, they are successful, but in their absence Sarris seizes the "Protector". Jason confesses to Sarris that he is not the ship's commander. When he shows Sarris the "historical documents" of "Galaxy Quest", Sarris realizes they are fiction and forces Jason to explain them to a heartbroken Mathesar. Sarris orders the "Protector's" self-destruct mechanism to be activated and returns to his ship, leaving the others to die.
The humans formulate a plan to abort the self-destruct and defeat Sarris' men left on the ship. With the aid of a "Galaxy Quest" fan on Earth named Brandon – using a genuine Thermian communicator Jason had accidentally swapped for Brandon's prop – and his network of friends with intimate knowledge of the show, Jason and Gwen make their way to the ship's core and shut down the self-destruct sequence, while Alexander leads the Thermians in fighting back against Sarris' forces. The humans take back command of the "Protector" and fly to confront Sarris. With their renewed confidence in their abilities, the crew flies through the minefield again but evade the mines, causing them to drag behind the ship. They fly the "Protector" straight at Sarris' ship, then veer away at the last moment, so that Sarris flies into the mines and obliterates his own ship.
The "Protector" travels to Earth to return the humans home, but Sarris, who escaped his vessel's destruction, ambushes them and fatally wounds several crew members. Jason activates the "Omega 13", a secret superweapon on the "Protector" that had never been used and never had its capabilities explained. It causes a 13-second time warp to the past, giving Jason and Mathesar the chance to disarm Sarris before he attacks. The "Protector"s bridge splits from the main vessel to fly to Earth with the humans, while the main section departs with Mathesar leading the Thermians.
With Brandon acting as a beacon, the "Protector" bridge lands at the "Galaxy Quest" convention, crashing through walls and coming to a rest in the center of the main stage. As the humans emerge, Sarris revives to threaten them, but Jason shoots him, blasting him into atoms. The crowd naturally assumes this was all a massive display of special effects, and the cast basks in the adoration of Brandon, his pals, and their fans.
Some time later, "Galaxy Quest" is revived as a sequel series, "Galaxy Quest: The Journey Continues", with the crew reprising their roles.
"Galaxy Quest" is the film debut of both Long and Rainn Wilson (as the Thermian Lahnk).
The original spec script by David Howard was titled "Captain Starshine". Howard said he got the idea at an IMAX presentation: while waiting for the show to start, he saw a trailer for an upcoming "Americans In Space" film that featured the voice of Leonard Nimoy, a leading actor from "Star Trek". The trailer got Howard thinking about how the other "Star Trek" actors had become pigeonholed in these types of roles since the show's cancellation, which led to the idea of what would happen if real aliens were involved. From there, he felt the rest of his script was one "that, in a lot of ways, just wrote itself, because it just seemed so self-evident once the idea was there".
Producer Mark Johnson, who had a first look deal with DreamWorks, did not like Howard's script, but was still fascinated with its concept of space aliens who misconstrue old episodes of a television series. Johnson purchased the script and had Bob Gordon use its concept to create "Galaxy Quest". A fan of "Star Trek", Gordon was hesitant, believing "Galaxy Quest" "could be a great idea or it could be a terrible idea", and initially turned it down. Gordon did not read "Captain Starshine" until after the film's completion, instead starting from the premise of washed-up actors from a sci-fi series involved in a real alien situation. Gordon's initial scripts added elements of humor to the premise, such as the scraping of the "Protector" when leaving space dock. Gordon became more confident in his script when he completed the scene of Nesmith admitting to the Thermians the truth of the situation, which he felt he nailed. He submitted his first draft to DreamWorks in 1998, and it was immediately greenlit.
Since early in the production, Mark Johnson wanted Dean Parisot, who had directed "Home Fries", another film he produced, to direct "Galaxy Quest"; however, DreamWorks favored Harold Ramis because of his prior experience. Ramis was hired in November 1998, but departed in February 1999 because of casting difficulties. He wanted Alec Baldwin for the lead role, but Baldwin turned it down. Steve Martin and Kevin Kline were considered, though Kline turned it down for family reasons. Ramis did not agree with the casting of Tim Allen as Jason Nesmith, and Parisot took over as director within three weeks. Allen said that the version of the film pitched to him by Ramis and Katzenberg felt more like "Spaceballs", in which they wanted an actor known for action to be doing comedy, rather than a comedian doing an action film. Sigourney Weaver, who had previously worked with Ramis on "Ghostbusters", said that he wanted to cast actors that had not done any science fiction roles before, a choice she thought odd since veteran actors in this category would know what was humorous. After seeing the film, Ramis said he was ultimately impressed with Allen's performance.
Following Parisot's assignment as director, Allen was quickly cast as Nesmith. After he had been cast, Allen had to choose between "Galaxy Quest" and "Bicentennial Man" and chose the former, with his "Bicentennial Man" role going to Robin Williams. Allen said he was a big sci-fi fan and had hoped the role would launch a second phase of his career as a sci-fi actor. Some of Allen's sci-fi knowledge influenced production: for example, when the crew is about to land on an alien planet, Allen raised the issue of a potentially breathable atmosphere with Johnson and Parisot, which became lines for Fleegman and Kwan in the movie. About his role, Allen said he based his performance more on Yul Brynner's Ramesses II from the 1956 "The Ten Commandments" than on William Shatner's Captain James Kirk from "Star Trek".
Alan Rickman was cast as Alexander Dane, the actor who plays the alien Dr. Lazarus. Rickman had been interested in the part not so much for the sci-fi elements as for its comedy. He said, "I love comedy almost more than anything. This really is one of the funniest scripts I've read," and that "actors are probably the only professionals who send themselves up. We actually have a sense of humor about ourselves." While the original script had made Dane a ceremonial knight, Rickman suggested the title would be too much for the character, and it was dropped, though he remained listed as "Sir Alex Dane" in the credits. Rickman also provided input into the prosthetic piece that Dane would use to play Lazarus, saying "it was important for it to be good enough to convince the aliens who believe we're the real thing, but also cheesy enough to imagine that it was something he applied himself". Rickman's sense of drama came into play during initial reads and script revisions. Rockwell said that Rickman "was very instrumental in making sure the script hit the dramatic notes, and everything had a strong logic and reason behind it". According to Rockwell, the scene where Dane, as Dr. Lazarus, gives a final speech to Quellek, played by Patrick Breen, drew heavily on Rickman's sense of drama to be powerfully emotional. Rickman's knowledge of drama played well off Allen's love of sci-fi, and while Rickman was initially annoyed with Allen's excitement over his role, eventually the whole cast bonded over the film. Dr. Lazarus' catchphrase "By Grabthar's Hammer" was written as a temp line in Gordon's script when filming started, with Gordon planning to find a less funny word than "Grabthar", but the line stuck as the production crew started using it around their offices and printed t-shirts with the saying.
Weaver had loved the script since her first read while Ramis was director, citing "that great sort of "Wizard of Oz" story of these people feeling so incomplete in the beginning, and then during the course of this adventure, they come out almost like the heroes they pretended to be in the first place". She particularly loved the part of Madison: "to me she was what a lot of women feel like, including myself, in a Hollywood situation." In addition, she had long wanted to work with both Allen and Rickman. Once Parisot replaced Ramis, Weaver pressed Parisot to cast her, insisting that Madison needed to be blonde and have large breasts to capture the humor of a sci-fi production, and was surprised to discover she actually got the role. Weaver said that her role in "Galaxy Quest" was closer to "telling the truth about myself and science fiction" than her performance as Ripley in the "Alien" films, given some of her personal insecurities. During filming, she wore a blonde wig (which she kept after production) and an enhanced bosom, which many of the crew said made Weaver a totally new personality from what they had expected. Weaver often left shooting in the uniform and returned to her hotel to admire herself, saying that she "loved being a starlet".
Tony Shalhoub originally auditioned for Guy Fleegman, but Sam Rockwell won the role, and Shalhoub was cast as Fred Kwan instead. Shalhoub and Parisot worked together to develop the Kwan character, loosely basing him on David Carradine as a non-Asian actor in an Asian role in "Kung Fu". There had been an urban legend that Carradine frequently worked on that show while under the influence of drugs, and while they could not directly carry that "stoner" implication into a PG-13 film, Shalhoub performed the role in that direction, playing a character Rockwell described as a "failed Scientologist". Shalhoub insisted that Kwan always be shown eating to play towards the stoner stereotype.
Rockwell had initially considered declining the role after he was cast: at the time he was looking at doing an independent film co-starring Marisa Tomei and developing a more serious acting career, and a comedic sci-fi role would not help towards that. Rockwell eventually recognized that several successful drama actors had done comedy roles early on, and his friend Kevin Spacey convinced him to take the "Galaxy Quest" role. As such, he was the last of the main actors confirmed for the cast. Rockwell fashioned Fleegman after cowardly characters from other films, such as John Turturro's Bernie in "Miller's Crossing", Bill Paxton's Private Hudson in "Aliens", and Michael Keaton's "Blaze" in "Night Shift". In some cases, Rockwell drank a lot of coffee before shots to help create the overexcitement and jitters associated with the character. The name of Rockwell's character, Guy Fleegman, is a homage to Guy Vardaman, a little-known actor who worked extensively on "Star Trek" as a stand-in or in minor roles. Rockwell and Shalhoub brought some improvisation to their dialog, with Rockwell's Fleegman being the worrisome sort while Shalhoub's Kwan tackled the issue without concern.
Daryl Mitchell was approached by Parisot to audition for the role of Webber, since they both had worked together before on "Home Fries", and Parisot felt the part was perfect for Mitchell. Mitchell said he took this assurance from Parisot as a safe bet, completed the audition, and won the part. David Alan Grier was the second choice for Webber.
Justin Long was cast as Brandon in what was Long's first feature-film role. Long had just completed work on a pilot for a television show under casting director Bonnie Zane, and she had suggested Long to her sister Debra Zane, the casting director for "Galaxy Quest". Long said he was nervous auditioning as an unknown actor at the time, competing against Kieran Culkin, Eddie Kaye Thomas and Tom Everett Scott. Parisot had given Long a copy of "Trekkies", a film about the "Star Trek" fandom, to help him prepare for the character. Long based his character on a combination of Philip Seymour Hoffman's Scotty J. from "Boogie Nights" and the Comic Book Guy from "The Simpsons". Paul Rudd auditioned for a role.
While casting for the Thermians, one of the first to audition was Enrico Colantoni. Colantoni loved the script and spent time before his audition developing the behavior he thought the Thermians should have. Parisot said that at the end of Colantoni's read, the actor offered a possible voice for what he thought the Thermians would sound like. Parisot immediately loved the voice and used it to establish the nature of the Thermians for the rest of the casting process. Colantoni set the template for how the Thermians would act, which he described as "happy Jehovah's Witnesses" taking everything in with "love and acceptance". Other actors cast as Thermians included Jed Rees and Rainn Wilson, in his feature-film debut. According to Debra Zane, finding an actress to play the role of Laliari was very hard, as they had "a difficult time finding a woman who could be Thermian in the same way as actors Enrico Colantoni, Rainn Wilson and Jed Rees". Ultimately, when she auditioned Missi Pyle, she was so impressed that she sent the audition tape directly to Parisot, with a note stating "If this is not Laliari, I will resign from the CSA." Steven Spielberg, also impressed by Pyle's performance, later asked for Laliari's role to be expanded, which developed into the romantic association with Kwan. Jennifer Coolidge was the second choice for the role.
As they cast the other Thermians, the production started an "alien school" to help the actors learn how the Thermians would act and talk, conveying the idea that the Thermians were "basically giant calamari hiding in human shape", according to Parisot. The walk was inspired by how the marionettes were articulated in the series "Fireball XL5". Other idiosyncrasies of the Thermians were developed by the actors during this school, and several of their lines came from improvisation. Wilson's role as Lahnk was to have had a larger presence in the film, but the actor was double-booked with the filming of an NBC pilot in New York City. He got a crash course on how to act like a Thermian from Colantoni, Rees, and Pyle, but was still nervous around the A-list actors leading the cast. Wilson said a deleted scene involving Lahnk, released with the film's home media, was wisely cut given how nervous he was; he flubbed his lines several times.
Linda DeScenna, the film's production designer, was interested in the project because it would not have the same aesthetics as other 1990s science fiction films, and "it didn't have to be real, hi-tech and vacuformed". DeScenna drew inspiration for the sets not only from "Star Trek", but also from "Buck Rogers", "Battlestar Galactica", and "Lost in Space". DeScenna had hoped to incorporate more of the reuse of props and set elements from these shows within the film, but there was not enough room for this. She used color theming to help distinguish the key elements of the film, with steam blue for the Thermians and the "Protector", while Sarris and his species were given a green tone that stood out against it. The design of the Thermian station was influenced by the works of artist Roger Dean, especially his cover art for the Yes live album "Yessongs" (1973).
The bulk of the film was shot in studios in Los Angeles. Scenes of the alien planet were filmed at Goblin Valley State Park in Utah. At the time, access to the park was partly by dirt road; fees paid by the production company were used to upgrade the entire access road to asphalt pavement. Other locations used in the film included the Stahl House as Nesmith's home, and the Hollywood Palladium for the fan conventions.
According to Weaver, Allen hectored her to sign a piece of the Nostromo, the spaceship from "Alien", in which she had starred; she ultimately did, writing "Stolen by Tim Allen; Love, Sigourney Weaver," which she claims upset him greatly. During the period of filming, the entire cast attended a 20th-anniversary screening of "Alien". After filming wrapped, Weaver kept the wig she wore for the role.
The film's visual effects were created by Industrial Light & Magic led by Bill George. A challenge in the CGI was making distinctions between scenes that were to be from the 1980s "Galaxy Quest" show which would have been done normally through practical effects, and the more realistic scenes for the contemporary actors. Various practical effects were also used, such as the "piglizard" creature that the crew transports onto the "Protector".
After most production was done, Johnson said that DreamWorks were confused by the film, as it was not what they had expected from the script they greenlit, but pushed on with post-production as they needed a film to go up against Columbia Pictures' "Stuart Little". The major cuts requested by DreamWorks were aimed at bringing the movie to a more family-friendly audience. The film originally received an "R" rating, according to "Galaxy Quest" producer Lindsey Collins and Weaver, before being recut. Shalhoub did not remember any darker version of the film. Gordon had not planned to write a "family friendly" film, and his initial script included mature scenes, such as DeMarco attempting to seduce aliens, and the crash of the escape pod into the convention hall decapitating several attendees.
During post-production, "The Rugrats Movie" from Paramount Pictures came out and was a box-office success. DreamWorks then pushed the production to target a younger age group to compete with "Rugrats". The film was edited to bring the rating to a "PG", which, according to the cast and crew, required cutting some of the better and funnier scenes that could have survived had a "PG-13" rating been targeted instead. In the "chompers" scene, DeMarco's line "Well, screw that!" was dubbed over her original "Well, fuck that!" Weaver stated she purposely made her dubbed line stand out as a form of protest against the change. Another scene cut to achieve the rating showed the crew's quarters on the "Protector", including Dr. Lazarus' quarters, which Allen called a "proctologist's dream and nightmare". Several other scenes involving Dr. Lazarus were cut, as DreamWorks felt they were too kinky for the desired rating. Other scenes were added to provide what the studio felt was necessary continuity for the intended younger audience, such as showing the limo in which Nesmith is taken to the "Protector" lifting off from Earth.
In theaters, the first 20 minutes of the film were presented in a 1.85:1 aspect ratio, before changing to a wider 2.35:1 ratio when Nesmith looks out upon space as the "Protector" arrives at Thermia to maximize the effect on viewers. However, this caused some problems with projectionists at movie theaters when showing the film as they had not opened up the screen curtains far enough for the wider aspect ratio. Projectionists had to be told at later showings to prepare for this transition. David Newman composed the music score.
Before the release of the movie, a promotional mockumentary video titled "Galaxy Quest: 20th Anniversary, The Journey Continues", aired on E!, presenting the "Galaxy Quest" television series as an actual cult series, and the upcoming film as a documentary about the making of the series, presenting it in a similar way to "Star Trek"; it featured fake interviews of the series' cast (portrayed by the actors of the actual film), "Questerians", and critics.
While these additional materials were made, DreamWorks devoted very little advertising to the film despite its placement near the Christmas season, which the cast and crew felt hurt the potential for the film. Unlike most films where the second and ongoing weekend box office takes decline, "Galaxy Quest" saw rising numbers over the first several weekends, and DreamWorks' Jeffrey Katzenberg apologized directly to Parisot for failing to market the film properly. Additionally, the primary trailer used for the film used a cut of the film before all the specific effects were complete, and Johnson felt that if the trailer had used the completed versions, it would have helped draw a larger audience.
"Galaxy Quest" is an acknowledged homage to "Star Trek"; Perisot said "Part of the mission for me was to make a great "Star Trek" episode." Gordon's original script was titled "Galaxy Quest: The Motion Picture" as a reference to the , and elements like departing the space dock and the malfunctioning transporters were further nods to the film. The prefix of the "Protector"s registration number NTE-3120 ostensibly alludes to some sort of similar space federation, but in reality stands for "Not The Enterprise", according to visual effects co-supervisor Bill George Perisot refuted claims that the rock monster that Nesmith was based on the rock monster that had been scripted for "", but instead was more inspired by the Gorn that Kirk faces in the "Star Trek" episode "".
This homage also extended to the original marketing of the movie, including a promotional website intentionally designed to look like a poorly constructed fan website, with "screen captures" and poor HTML coding. The homage even parodied the effect that "Star Trek" had on the social lives of its cast members, such as how Alexander Dane (played by Alan Rickman) has been typecast after his success on the "Galaxy Quest" television series; this reflects the lamentations of Leonard Nimoy, who had felt typecast after his performance as Spock.
Additionally, the time between the original "Galaxy Quest" series and its sequel, "Galaxy Quest: The Journey Continues" is 17 years, the same amount of time that elapsed between the original "Star Trek" series and "".
Other aspects of the film were homages to other seminal science fiction works. The Thermians' native planet, Klaatu Nebula, is a reference to the name of the alien visitor in the classic "The Day the Earth Stood Still" (1951). Quellek's line "I'm shot" was directly influenced by the same line from James Brolin's character in "Westworld". The blue creatures on the alien planet were based on similar creatures in "Barbarella". The "chompers" scene with Nesmith and DeMarco trying to reach the self-destruct abort button was inspired by a scene from the 1997 film "Event Horizon" involving whirring blades. The effects for the Omega 13 activation were inspired by the ending scene from "Beneath the Planet of the Apes".
The film was financially successful. It earned US$7,012,630 in its opening weekend, and its total U.S. domestic tally stands at US$71,583,916; in total it has grossed US$90,683,916 worldwide.
"Galaxy Quest" received positive reviews from critics, both as a parody of "Star Trek", and as a comedy film of its own. "The New York Times"s Lawrence Van Gelder called it "an amiable comedy that simultaneously manages to spoof these popular futuristic space adventures and replicate the very elements that have made them so durable". Roger Ebert praised the ability of the film to spoof the "illogic of the TV show". "The Village Voice" offered a lukewarm review, noting that "the many eight- to 11-year-olds in the audience seemed completely enthralled". Joe Leydon of "Variety" said that "Galaxy Quest" "remains light and bright as it races along, and never turns nasty or mean-spirited as it satirizes the cliches and cults of "Star Trek"".
Retrospective reviews for "Galaxy Quest" have been positive, as the film is considered to have held up over time. On Rotten Tomatoes, it received an approval rating of 89% based on 121 reviews and an average rating of 7.2/10. The site's critical consensus reads, "Intelligent and humorous satire with an excellent cast; no previous Trekkie knowledge needed to enjoy this one." On Metacritic, the film has a score of 70 out of 100, based on 28 critics, indicating "generally favorable reviews". "Esquire"s Matt Miller said in 2019 that "the film absolutely holds up as one of the best sci-fi satires ever made—one that challenges our obsession with massive Hollywood franchises, the nature of fandom, and some of the more problematic cliches of the genre. But it does so with a self-aware empathy that makes it an enduring and lasting entry in not only science-fiction, but American film as a whole".
Acclaimed writer-director David Mamet, in his book "Bambi vs. Godzilla: On the Nature, Purpose, and Practice of the Movie Business", included "Galaxy Quest" in a list of four "perfect" films, along with "The Godfather", "A Place in the Sun" and "Dodsworth".
The film proved quite popular with "Star Trek" fans. At the 2013 Star Trek Convention in Las Vegas, "Galaxy Quest" received enough support in a "Star Trek" Film Ranking to be included with the twelve Star Trek films that had been released at the time on the voting ballot. The fans at the convention ranked it the seventh best "Star Trek" film.
Harold Ramis, who was originally supposed to direct the film but left following disagreements over the casting choices, notably Allen as the lead, was ultimately impressed with Allen's performance. Tim Allen later said he and William Shatner were "now friends because of this movie".
"Galaxy Quest" predicted the growth and influence of media fandom in the years after its release. While fandoms like that for "Star Trek" existed at the time of the film, the size and scope presented by the fan conventions in the film had not been seen as much in 1999; since then, major fan conventions such as the San Diego Comic Con have become significant events that draw mainstream attention.The film also depicted fandoms using their numbers to influence production companies to revive cancelled works, such as with "The Expanse", "Veronica Mars", "Arrested Development", and "Twin Peaks". The film also captured some negative elements of modern fandom, such as leading actors continuously pestered by fans for intricate details of the work's fiction and other elements of the potentially toxic culture of online fan groups.
The novella "Rabbit Remembered" (2000) by John Updike mentions the character of Laliari from the film.
Several actors who have had roles on various "Star Trek" television series and films have commented on "Galaxy Quest" in light of their own experiences with the franchise and its fandom.
The film was released by DreamWorks Home Entertainment on VHS and DVD on May 2, 2000. The DVD version included a 10-minute behind-the-scenes feature, cast and crew biographies and interviews, and deleted scenes. A special 10th anniversary deluxe edition was released on both DVD and Blu-ray by Paramount Home Entertainment on May 12, 2009; though they lacked the same features as the original DVD release, they included several new featurettes on the film's history, the cast, and the special effects used in the film's making, alongside the deleted scenes. For the film's 20th anniversary, a "Never Give Up, Never Surrender Edition" Blu-ray was released on November 5, 2019 with the same extras as the 10th anniversary edition; a special steelbox Best Buy exclusive was released on September 17, 2019.
In November 1999, "Galaxy Quest" was novelized by science fiction writer Terry Bisson, who stayed very close to the plot of the film.
In 2008, IDW Publishing released a comic book sequel to the movie entitled "Galaxy Quest: Global Warning". In January 2015, IDW launched an ongoing series set several years after the events of the film.
Talks of a sequel have been going on since the film's release in 1999, but only began gaining traction in 2014 when Allen mentioned that there was a script. Stars Weaver and Rockwell mentioned they were interested in returning. However, Colantoni has said he would prefer for there not to be a sequel, lest it tarnish the characters from the first film. He said, "to make something up, just because we love those characters, and turn it into a sequel—then it becomes the awful sequel".
In April 2015, Paramount Television, along with the movie's co-writer Gordon, director Parisot, and executive producers Johnson and Bernstein, announced they were looking to develop a television series based on "Galaxy Quest". The move was considered in a similar vein as Paramount's revivals of "Minority Report" and "School of Rock" as television series. In August 2015, it was announced that Amazon Studios would be developing it.
In January 2016, after the unexpected death of Alan Rickman from pancreatic cancer, Tim Allen commented in "The Hollywood Reporter" on the franchise's chances of a revival.
Speaking to the Nerdist podcast in April 2016, Sam Rockwell revealed that the cast had been about ready to sign on for a follow up with Amazon, but Rickman's death, together with Allen's television schedule, had proved to be obstacles. He also said he believed Rickman's death meant the project would never happen.
However, the plans were revived in August 2017, with the announcement that Paul Scheer would be writing the series. Speaking to "/Film", Scheer said that in his first drafts submitted to Amazon in November 2017 he wanted to create a serialized adventure that starts where the film ends, but leads into the cultural shift in "Star Trek" that has occurred since 1999; he said "I really wanted to capture the difference between the original cast of "Star Trek" and the J. J. Abrams cast of "Star Trek"." To that end, Scheer's initial scripts called for two separate cast sets that would come together by the end of the first season of the show, though he did not confirm if this included any of the original film's cast.
Following the dismissal of Amy Powell as president of Paramount Television in July 2018, Scheer said the "Galaxy Quest" series had been put on hold while Paramount's management was being re-established, but anticipated the show would continue forward after that. He also said they were making the series to allow the introduction of new characters while extending the setting, similar to what "" did for "".
"Never Surrender: A Galaxy Quest Documentary" was produced by the web site Fandom in 2019 to celebrate the film's 20th anniversary. Titled after Captain Nesmith's catchphrase "Never give up, never surrender!", it features interviews with the movie's cast and crew, including Allen, Weaver, Rockwell, Shalhoub, Long, Pyle, Wilson, and Mitchell, along with director Parisot and writer Gordon, as well as celebrities including Wil Wheaton, Brent Spiner, Greg Berlanti, Paul Scheer, and Damon Lindelof, who have spoken of their love for the film. Initially premiering to a limited audience at the October 2019 New York Comic Con, it subsequently had a limited theatrical showing at about 600 screens through Fathom Events on November 26, 2019, which included a screening of deleted scenes as well as the debut of Screen Junkies' "Honest Trailer" for "Galaxy Quest". The film was made available on various digital media services for purchase in December 2019. | https://en.wikipedia.org/wiki?curid=13149 |
Gilgamesh
Gilgamesh was a major hero in ancient Mesopotamian mythology, and is the protagonist of the "Epic of Gilgamesh", an epic poem written in Akkadian during the late 2nd millennium BC. He was also most likely a historical king of the Sumerian city-state of Uruk, who was posthumously deified. His rule probably would have taken place sometime between 2800 and 2500 BC, though he became a major figure in Sumerian legend during the Third Dynasty of Ur ().
Tales of Gilgamesh's legendary exploits are narrated in five surviving Sumerian poems. The earliest of these is most likely "Gilgamesh, Enkidu, and the Netherworld", in which Gilgamesh comes to the aid of the goddess Inanna and drives away the creatures infesting her "huluppu" tree. She gives him two unknown objects, a "mikku" and a "pikku", which he loses. After Enkidu's death, his shade tells Gilgamesh about the bleak conditions in the Underworld. The poem "Gilgamesh and Agga" describes Gilgamesh's revolt against his overlord King Agga. Other Sumerian poems relate Gilgamesh's defeat of the ogre Huwawa and the Bull of Heaven, while a fifth, poorly preserved poem apparently describes his death and funeral.
In later Babylonian times, these stories began to be woven into a connected narrative. The standard Akkadian "Epic of Gilgamesh" was composed by a scribe named Sîn-lēqi-unninni, probably during the Middle Babylonian Period, based on much older source material. In the epic, Gilgamesh is a demigod of superhuman strength who befriends the wildman Enkidu. Together, they go on adventures, defeating Humbaba (Sumerian: Huwawa) and the Bull of Heaven, who is sent to attack them by Ishtar (Sumerian: Inanna) after Gilgamesh rejects her offer for him to become her consort. After Enkidu dies of a disease sent as punishment from the gods, Gilgamesh becomes afraid of his own death, and visits the sage Utnapishtim, the survivor of the Great Flood, hoping to find immortality. Gilgamesh repeatedly fails the trials set before him and returns home to Uruk, realizing that immortality is beyond his reach.
Most classical historians agree that the "Epic of Gilgamesh" exerted substantial influence on both the "Iliad" and the "Odyssey", two epic poems written in ancient Greek during the 8th century BC. The story of Gilgamesh's birth is described in an anecdote from "On the Nature of Animals" by the Greek writer Aelian (2nd century AD). Aelian relates that Gilgamesh's grandfather kept his mother under guard to prevent her from becoming pregnant, because he had been told by an oracle that his grandson would overthrow him. She became pregnant and the guards threw the child off a tower, but an eagle rescued him mid-fall and delivered him safely to an orchard, where he was raised by the gardener.
The "Epic of Gilgamesh" was rediscovered in the Library of Ashurbanipal in 1849. After being translated in the early 1870s, it caused widespread controversy due to similarities between portions of it and the Hebrew Bible. Gilgamesh remained mostly obscure until the mid-20th century, but, since the late 20th century, he has become an increasingly prominent figure in modern culture.
Historians generally agree that Gilgamesh was a historical king of the Sumerian city-state of Uruk, who probably ruled sometime during the early part of the Early Dynastic Period (2900–2350 BC). Stephanie Dalley, a scholar of the ancient Near East, states that "precise dates cannot be given for the lifetime of Gilgamesh, but they are generally agreed to lie between 2800 and 2500 BC." No contemporary mention of Gilgamesh has yet been discovered, but the 1955 discovery of the Tummal Inscription, a thirty-four-line historiographic text written during the reign of Ishbi-Erra, has cast considerable light on his reign. The inscription credits Gilgamesh with building the walls of Uruk. Lines eleven through fifteen of the inscription read:
For a second time, the Tummal fell into ruin,
Gilgamesh built the Numunburra of the House of Enlil.
Ur-lugal, the son of Gilgamesh,
Made the Tummal pre-eminent,
Brought Ninlil to the Tummal.
Gilgamesh is also referred to as a king by King Enmebaragesi of Kish, a known historical figure who may have lived near Gilgamesh's lifetime. Furthermore, Gilgamesh is listed as one of the kings of Uruk by the "Sumerian King List". Fragments of an epic text found in Mê-Turan (modern Tell Haddad) relate that at the end of his life Gilgamesh was buried under the river bed. The people of Uruk diverted the flow of the Euphrates past Uruk so that the dead king could be buried within the river bed.
It is certain that, during the later Early Dynastic Period, Gilgamesh was worshipped as a god at various locations across Sumer. In the 21st century BC, King Utu-hengal of Uruk adopted Gilgamesh as his patron deity. The kings of the Third Dynasty of Ur were especially fond of Gilgamesh, calling him their "divine brother" and "friend." King Shulgi of Ur (2029–1982 BC) declared himself the son of Lugalbanda and Ninsun and the brother of Gilgamesh. Over the centuries, there may have been a gradual accretion of stories about Gilgamesh, some possibly derived from the real lives of other historical figures, such as Gudea, the Second Dynasty ruler of Lagash (2144–2124 BC). Prayers inscribed in clay tablets address Gilgamesh as a judge of the dead in the Underworld.
During this period, a large number of myths and legends developed surrounding Gilgamesh. Five independent Sumerian poems narrating various exploits of Gilgamesh have survived to the present. Gilgamesh's first appearance in literature is probably in the Sumerian poem "Gilgamesh, Enkidu, and the Netherworld". The narrative begins with a "huluppu" tree—perhaps, according to the Sumerologist Samuel Noah Kramer, a willow, growing on the banks of the river Euphrates. The goddess Inanna moves the tree to her garden in Uruk with the intention to carve it into a throne once it is fully grown. The tree grows and matures, but the serpent "who knows no charm," the "Anzû"-bird, and "Lilitu", a Mesopotamian demon, all take up residence within the tree, causing Inanna to cry with sorrow.
Gilgamesh, who in this story is portrayed as Inanna's brother, comes along and slays the serpent, causing the "Anzû"-bird and Lilitu to flee. Gilgamesh's companions chop down the tree and carve its wood into a bed and a throne, which they give to Inanna. Inanna responds by fashioning a "pikku" and a "mikku" (probably a drum and drumsticks respectively, although the exact identifications are uncertain), which she gives to Gilgamesh as a reward for his heroism. Gilgamesh loses the "pikku" and "mikku" and asks who will retrieve them. Enkidu descends to the Underworld to find them, but disobeys the strict laws of the Underworld and is therefore required to remain there forever. The remaining portion of the poem is a dialogue in which Gilgamesh asks the shade of Enkidu questions about the Underworld.
"Gilgamesh and Agga" describes Gilgamesh's successful revolt against his overlord Agga, the king of the city-state of Kish. "Gilgamesh and Huwawa" describes how Gilgamesh and his servant Enkidu, aided by fifty volunteers from Uruk, defeat the monster Huwawa, an ogre appointed by the god Enlil, the ruler of the gods, as the guardian of the Cedar Forest. In "Gilgamesh and the Bull of Heaven", Gilgamesh and Enkidu slay the Bull of Heaven, who has been sent to attack them by the goddess Inanna. The plot of this poem differs substantially from the corresponding scene in the later Akkadian "Epic of Gilgamesh". In the Sumerian poem, Inanna does not seem to ask Gilgamesh to become her consort as she does in the later Akkadian epic. Furthermore, while she is coercing her father An to give her the Bull of Heaven, rather than threatening to raise the dead to eat the living as she does in the later epic, she merely threatens to let out a "cry" that will reach the earth. A poem known as the "Death of Gilgamesh" is very poorly preserved, but appears to describe a major state funeral followed by the arrival of the deceased in the Underworld. It is possible that the modern scholars who gave the poem its title may have misinterpreted it, and the poem may actually be about the death of Enkidu.
Eventually, according to Kramer (1963), Gilgamesh became "the hero par excellence of the ancient world—an adventurous, brave, but tragic figure symbolizing man's vain but endless drive for fame, glory, and immortality." By the Old Babylonian Period, stories of Gilgamesh's legendary exploits had been woven into one or several long epics. The "Epic of Gilgamesh", the most complete account of Gilgamesh's adventures, was composed in Akkadian during the Middle Babylonian Period (1600–1155 BC) by a scribe named Sîn-lēqi-unninni. The most complete surviving version of the "Epic of Gilgamesh" is recorded on a set of twelve clay tablets dating to the seventh century BC, found in the Library of Ashurbanipal in the Assyrian capital of Nineveh. The epic survives only in a fragmentary form, with many pieces of it missing or damaged. Some scholars and translators choose to supplement the missing parts of the epic with material from the earlier Sumerian poems or from other versions of the "Epic of Gilgamesh" found at other sites throughout the Near East.
In the epic, Gilgamesh is introduced as "two thirds divine and one third mortal." At the beginning of the poem, Gilgamesh is described as a brutal, oppressive ruler. This is usually interpreted to mean either that he compels all his subjects to engage in forced labor or that he sexually oppresses all his subjects. As punishment for Gilgamesh's cruelty, the god Anu creates the wildman Enkidu. After being tamed by a prostitute named Shamhat, Enkidu travels to Uruk to confront Gilgamesh. In the second tablet, the two men wrestle and, although Gilgamesh wins the match in the end, he is so impressed by his opponent's strength and tenacity that they become close friends. In the earlier Sumerian texts, Enkidu is Gilgamesh's servant, but, in the "Epic of Gilgamesh", they are companions of equal standing.
In tablets III through IV, Gilgamesh and Enkidu travel to the Cedar Forest, which is guarded by Humbaba (the Akkadian name for Huwawa). The heroes cross the seven mountains to the Cedar Forest, where they begin chopping down trees. Confronted by Humbaba, Gilgamesh panics and prays to Shamash (the East Semitic name for Utu), who blows eight winds in Humbaba's eyes, blinding him. Humbaba begs for mercy, but the heroes decapitate him regardless. Tablet VI begins with Gilgamesh returning to Uruk, where Ishtar (the Akkadian name for Inanna) comes to him and demands that he become her consort. Gilgamesh repudiates her, insisting that she has mistreated all her former lovers.
In revenge, Ishtar goes to her father Anu and demands that he give her the Bull of Heaven, which she sends to attack Gilgamesh. Gilgamesh and Enkidu kill the Bull and offer its heart to Shamash. While Gilgamesh and Enkidu are resting, Ishtar stands up on the walls of Uruk and curses Gilgamesh. Enkidu tears off the Bull's right thigh and throws it in Ishtar's face, saying, "If I could lay my hands on you, it is this I should do to you, and lash your entrails to your side." Ishtar calls together "the crimped courtesans, prostitutes and harlots" and orders them to mourn for the Bull of Heaven. Meanwhile, Gilgamesh holds a celebration over the Bull of Heaven's defeat.
Tablet VII begins with Enkidu recounting a dream in which he saw Anu, Ea, and Shamash declare that either Gilgamesh or Enkidu must die as punishment for having slain the Bull of Heaven. They choose Enkidu, who soon grows sick. He has a dream of the Underworld and then he dies. Tablet VIII describes Gilgamesh's inconsolable grief over his friend's death and the details of Enkidu's funeral. Tablets IX through XI relate how Gilgamesh, driven by grief and fear of his own mortality, travels a great distance and overcomes many obstacles to find the home of Utnapishtim, the sole survivor of the Great Flood, who was rewarded with immortality by the gods.
The journey to Utnapishtim involves a series of episodic challenges, which probably originated as major independent adventures, but, in the epic, they are reduced to what Joseph Eddy Fontenrose calls "fairly harmless incidents." First, Gilgamesh encounters and slays lions in the mountain pass. Upon reaching the mountain of Mashu, Gilgamesh encounters a scorpion man and his wife; their bodies flash with terrifying radiance, but, once Gilgamesh tells them his purpose, they allow him to pass. Gilgamesh wanders through darkness for twelve days before he finally comes into the light. He finds a beautiful garden by the sea in which he meets Siduri, the divine Alewife. At first she tries to prevent Gilgamesh from entering the garden, but later she instead attempts to persuade him to accept death as inevitable and not journey beyond the waters. When Gilgamesh refuses to do this, she directs him to Urshanabi, the ferryman of the gods, who ferries Gilgamesh across the sea to Utnapishtim's homeland. When Gilgamesh finally arrives at Utnapishtim's home, Utnapishtim tells Gilgamesh that, to become immortal, he must defy sleep. Gilgamesh fails to do this and falls asleep for seven days without waking.
Next, Utnapishtim tells him that, even if he cannot obtain immortality, he can restore his youth using a plant with the power of rejuvenation. Gilgamesh takes the plant, but leaves it on the shore while swimming and a snake steals it, explaining why snakes are able to shed their skins. Despondent at this loss, Gilgamesh returns to Uruk, and shows his city to the ferryman Urshanabi. It is at this point that the epic stops being a coherent narrative. Tablet XII is an appendix corresponding to the Sumerian poem of "Gilgamesh, Enkidu and the Netherworld" describing the loss of the "pikku" and "mikku".
Numerous elements within this narrative reveal a lack of continuity with the earlier portions of the epic. At the beginning of Tablet XII, Enkidu is still alive, despite having previously died in Tablet VII, and Gilgamesh is kind to Ishtar, despite the violent rivalry between them displayed in Tablet VI. Also, while most of the parts of the epic are free adaptations of their respective Sumerian predecessors, Tablet XII is a literal, word-for-word translation of the last part of "Gilgamesh, Enkidu, and the Netherworld". For these reasons, scholars conclude this narrative was probably relegated to the end of the epic because it did not fit the larger narrative. In it, Gilgamesh sees a vision of Enkidu's ghost, who promises to recover the lost items and describes to his friend the abysmal condition of the Underworld.
Although stories about Gilgamesh were wildly popular throughout ancient Mesopotamia, authentic representations of him in ancient art are extremely rare. Popular works often identify depictions of a hero with long hair worn in four or six curls as representations of Gilgamesh, but this identification is known to be incorrect. A few genuine ancient Mesopotamian representations of Gilgamesh do exist, however. These representations are mostly found on clay plaques and cylinder seals. Generally, it is only possible to identify a figure shown in art as Gilgamesh if the artistic work in question clearly depicts a scene from the "Epic of Gilgamesh" itself. One set of representations of Gilgamesh is found in scenes of two heroes fighting a demonic giant, certainly Humbaba. Another set is found in scenes showing a similar pair of heroes confronting a giant, winged bull, certainly the Bull of Heaven.
The "Epic of Gilgamesh" exerted substantial influence on the "Iliad" and the "Odyssey", two epic poems written in ancient Greek during the eighth century BC. According to Barry B. Powell, an American classical scholar, early Greeks were probably exposed to Mesopotamian oral traditions through their extensive connections to the civilizations of the ancient Near East and this exposure resulted in the similarities that are seen between the "Epic of Gilgamesh" and the Homeric epics. Walter Burkert, a German classicist, observes that the scene in Tablet VI of the "Epic of Gilgamesh" in which Gilgamesh rejects Ishtar's advances and she complains before her mother Antu, but is mildly rebuked by her father Anu, is directly paralleled in Book V of the "Iliad". In this scene, Aphrodite, the later Greek adaptation of Ishtar, is wounded by the hero Diomedes and flees to Mount Olympus, where she cries to her mother Dione and is mildly rebuked by her father Zeus.
Powell observes that the opening lines of the "Odyssey" seem to echo the opening lines of the "Epic of Gilgamesh". The storyline of the "Odyssey" likewise bears numerous similarities to that of the "Epic of Gilgamesh". Both Gilgamesh and Odysseus encounter a woman who can turn men into animals: Ishtar (for Gilgamesh) and Circe (for Odysseus). In the "Odyssey", Odysseus blinds a giant Cyclops named Polyphemus, an incident which bears similarities to Gilgamesh's slaying of Humbaba in the "Epic of Gilgamesh". Both Gilgamesh and Odysseus visit the Underworld and both find themselves unhappy whilst living in an otherworldly paradise in the presence of an attractive woman: Siduri (for Gilgamesh) and Calypso (for Odysseus). Finally, both heroes have an opportunity for immortality but miss it (Gilgamesh when he loses the plant, and Odysseus when he leaves Calypso's island).
In the Qumran scroll known as "Book of Giants" (c. 100 BC) the names of Gilgamesh and Humbaba appear as two of the antediluvian giants, rendered (in consonantal form) as "glgmš" and "ḩwbbyš". This same text was later used in the Middle East by the Manichaean sects, and the Arabic form "Gilgamish"/"Jiljamish" survives as the name of a demon according to the Egyptian cleric Al-Suyuti (c. 1500).
The story of Gilgamesh's birth is not recorded in any extant Sumerian or Akkadian text, but a version of it is described in "De Natura Animalium" ("On the Nature of Animals") 12.21, a commonplace book which was written in Greek sometime around 200 AD by the Hellenized Roman orator Aelian. According to Aelian's story, an oracle told King Seuechoros of the Babylonians that his grandson Gilgamos would overthrow him. To prevent this, Seuechoros kept his only daughter under close guard at the Acropolis of the city of Babylon, but she became pregnant nonetheless. Fearing the king's wrath, the guards hurled the infant off the top of a tall tower. An eagle rescued the boy in midflight and carried him to an orchard, where it carefully set him down. The caretaker of the orchard found the boy and raised him, naming him "Gilgamos" (Γίλγαμος). Eventually, Gilgamos returned to Babylon and overthrew his grandfather, proclaiming himself king. The birth narrative described by Aelian is in the same tradition as other Near Eastern birth legends, such as those of Sargon, Moses, and Cyrus. Theodore Bar Konai (c. AD 600), writing in Syriac, also mentions a king "Gligmos", "Gmigmos" or "Gamigos" as last of a line of twelve kings who were contemporaneous with the patriarchs from Peleg to Abraham; this occurrence is also considered a vestige of Gilgamesh's former memory.
The Akkadian text of the "Epic of Gilgamesh" was first discovered in 1849 AD by the English archaeologist Austen Henry Layard in the Library of Ashurbanipal at Nineveh. Layard was seeking evidence to confirm the historicity of the events described in the Christian Old Testament, which, at the time, was believed to contain the oldest texts in the world. Instead, his excavations and those of others after him revealed the existence of much older Mesopotamian texts and showed that many of the stories in the Old Testament may actually be derived from earlier myths told throughout the ancient Near East. The first translation of the "Epic of Gilgamesh" was produced in the early 1870s by George Smith, a scholar at the British Museum, who published the Flood story from Tablet XI in 1880 under the title "The Chaldean Account of Genesis". Gilgamesh's name was originally misread as "Izdubar".
Early interest in the "Epic of Gilgamesh" was almost exclusively on account of the flood story from Tablet XI. The flood story attracted enormous public attention and drew widespread scholarly controversy, while the rest of the epic was largely ignored. Most attention towards the "Epic of Gilgamesh" in the late nineteenth and early twentieth centuries came from German-speaking countries, where controversy raged over the relationship between "Babel und Bibel" ("Babylon and Bible").
In January 1902, the German Assyriologist Friedrich Delitzsch gave a lecture at the Sing-Akademie zu Berlin in front of the Kaiser and his wife, in which he argued that the Flood story in the Book of Genesis was directly copied from the one in the "Epic of Gilgamesh". Delitzsch's lecture was so controversial that, by September 1903, he had managed to collect 1,350 short articles from newspapers and journals, over 300 longer ones, and twenty-eight pamphlets, all written in response to this lecture, as well as another lecture about the relationship between the Code of Hammurabi and the Law of Moses in the Torah. These articles were overwhelmingly critical of Delitzsch. The Kaiser distanced himself from Delitzsch and his radical views and, in the fall of 1904, Delitzsch was forced to give his third lecture in Cologne and Frankfurt am Main rather than in Berlin. The putative relationship between the "Epic of Gilgamesh" and the Hebrew Bible later became a major part of Delitzsch's argument in his 1920–21 book "Die große Täuschung" ("The Great Deception") that the Hebrew Bible was irredeemably "contaminated" by Babylonian influence and that only by eliminating the human Old Testament entirely could Christians finally believe in the true, Aryan message of the New Testament.
The first modern literary adaptation of the "Epic of Gilgamesh" was "Ishtar and Izdubar" (1884) by Leonidas Le Cenci Hamilton, an American lawyer and businessman. Hamilton had rudimentary knowledge of Akkadian, which he had learned from Archibald Sayce's 1872 "Assyrian Grammar for Comparative Purposes". Hamilton's book relied heavily on Smith's translation of the "Epic of Gilgamesh", but also made major changes. For instance, Hamilton omitted the famous flood story entirely and instead focused on the romantic relationship between Ishtar and Gilgamesh. "Ishtar and Izdubar" expanded the original roughly 3,000 lines of the "Epic of Gilgamesh" to roughly 6,000 lines of rhyming couplets grouped into forty-eight cantos. Significantly influenced by Edward FitzGerald's "Rubaiyat of Omar Khayyam" and Edwin Arnold's "The Light of Asia", Hamilton significantly altered most of the characters, introduced entirely new episodes not found in the original epic, and dressed his characters more like nineteenth-century Turks than ancient Babylonians. Hamilton also changed the tone of the epic from the "grim realism" and "ironic tragedy" of the original to a "cheery optimism" filled with "the sweet strains of love and harmony".
In his 1904 book "Das Alte Testament im Lichte des alten Orients", the German Assyriologist Alfred Jeremias equated Gilgamesh with the king Nimrod from the Book of Genesis and argued that Gilgamesh's strength must come from his hair, like the hero Samson in the Book of Judges, and that he must have performed Twelve Labors like the hero Heracles in Greek mythology. In his 1906 book "Das Gilgamesch-Epos in der Weltliteratur", the Orientalist Peter Jensen declared that the "Epic of Gilgamesh" was the source behind nearly all the stories in the Old Testament, arguing that Moses is "the Gilgamesh of Exodus who saves the children of Israel from precisely the same situation faced by the inhabitants of Erech at the beginning of the Babylonian epic." He then proceeded to argue that Abraham, Isaac, Samson, David, and various other biblical figures are all nothing more than exact copies of Gilgamesh. Finally, he declared that even Jesus is "nothing but an Israelite Gilgamesh. Nothing but an adjunct to Abraham, Moses, and countless other figures in the saga." This ideology became known as Panbabylonianism and was almost immediately rejected by mainstream scholars. The most stalwart critics of Panbabylonianism were those associated with the emerging "Religionsgeschichtliche Schule". Hermann Gunkel dismissed most of Jensen's purported parallels between Gilgamesh and biblical figures as mere baseless sensationalism. He concluded that Jensen and other Assyriologists like him had failed to understand the complexities of Old Testament scholarship and had confused scholars with "conspicuous mistakes and remarkable aberrations".
In English-speaking countries, the prevailing scholarly interpretation during the early twentieth century was one originally proposed by Sir Henry Rawlinson, 1st Baronet, which held that Gilgamesh is a "solar hero", whose actions represent the movements of the sun, and that the twelve tablets of his epic represent the twelve signs of the Babylonian zodiac. The Austrian psychoanalyst Sigmund Freud, drawing on the theories of James George Frazer and Paul Ehrenreich, interpreted Gilgamesh and Eabani (the earlier misreading for "Enkidu") as representing "man" and "crude sensuality" respectively. He compared them to other brother-figures in world mythology, remarking, "One is always weaker than the other and dies sooner. In Gilgamesh this ages-old motif of the unequal pair of brothers served to represent the relationship between a man and his libido." He also saw Enkidu as representing the placenta, the "weaker twin" who dies shortly after birth. Freud's friend and pupil Carl Jung frequently discusses Gilgamesh in his early work "Symbole der Wandlung" (1911–1912). He, for instance, cites Ishtar's sexual attraction to Gilgamesh as an example of the mother's incestuous desire for her son, Humbaba as an example of an oppressive father-figure whom Gilgamesh must overcome, and Gilgamesh himself as an example of a man who forgets his dependence on the unconscious and is punished by the "gods", who represent it.
In the years following World War II, Gilgamesh, formerly an obscure figure known only by a few scholars, gradually became increasingly popular with modern audiences. The existential themes of the "Epic of Gilgamesh" made it particularly appealing to German authors in the years following the war. In his 1947 existentialist novel "Die Stadt hinter dem Strom", the German novelist Hermann Kasack adapted elements of the epic into a metaphor for the aftermath of the destruction of World War II in Germany, portraying the bombed-out city of Hamburg as resembling the frightening Underworld seen by Enkidu in his dream. In Hans Henny Jahnn's "magnum opus" "River Without Shores" (1949–1950), the middle section of the trilogy centers around a composer whose twenty-year-long homoerotic relationship with a friend mirrors that of Gilgamesh with Enkidu and whose masterpiece turns out to be a symphony about Gilgamesh.
"The Quest of Gilgamesh", a 1953 radio play by Douglas Geoffrey Bridson, helped popularize the epic in Britain. In the United States, Charles Olson praised the epic in his poems and essays and Gregory Corso believed that it contained ancient virtues capable of curing what he viewed as modern moral degeneracy. The 1966 postfigurative novel "Gilgamesch" by Guido Bachmann became a classic of German "queer literature" and set a decades-long international literary trend of portraying Gilgamesh and Enkidu as homosexual lovers. This trend proved so popular that the "Epic of Gilgamesh" itself is included in "The Columbia Anthology of Gay Literature" (1998) as a major early work of that genre. In the 1970s and 1980s, feminist literary critics analyzed the "Epic of Gilgamesh" as showing evidence for a transition from the original matriarchy of all humanity to modern patriarchy. As the Green Movement expanded in Europe, Gilgamesh's story began to be seen through an environmentalist lens, with Enkidu's death symbolizing man's separation from nature.
Theodore Ziolkowski, a scholar of modern literature, states that "unlike most other figures from myth, literature, and history, Gilgamesh has established himself as an autonomous entity or simply a name, often independent of the epic context in which he originally became known. (As analogous examples one might think, for instance, of the Minotaur or Frankenstein's monster.)" The "Epic of Gilgamesh" has been translated into many major world languages and has become a staple of American world literature classes. Many contemporary authors and novelists have drawn inspiration from it, including an American avant-garde theater collective called "The Gilgamesh Group" and Joan London in her novel "Gilgamesh" (2001). "The Great American Novel" (1973) by Philip Roth features a character named "Gil Gamesh", who is the star pitcher of a fictional 1930s baseball team called the "Patriot League".
Starting in the late twentieth century, the "Epic of Gilgamesh" began to be read again in Iraq. Saddam Hussein, the former President of Iraq, had a lifelong fascination with Gilgamesh. Hussein's first novel "Zabibah and the King" (2000) is an allegory for the Gulf War set in ancient Assyria that blends elements of the "Epic of Gilgamesh" and the "One Thousand and One Nights". Like Gilgamesh, the king at the beginning of the novel is a brutal tyrant who misuses his power and oppresses his people, but, through the aid of a commoner woman named Zabibah, he grows into a more just ruler. When the United States pressured Hussein to step down in February 2003, Hussein gave a speech to a group of his generals framing the idea in a positive light by comparing himself to the epic hero.
Scholars like Susan Ackerman and Wayne R. Dynes have noted that the language used to describe Gilgamesh's relationship with Enkidu seems to have homoerotic implications. Ackerman notes that, when Gilgamesh veils Enkidu's body, Enkidu is compared to a "bride". Ackerman states, "that Gilgamesh, according to both versions, will love Enkidu 'like a wife' may further imply sexual intercourse."
In 2000, a modern statue of Gilgamesh by the Assyrian sculptor Lewis Batros was unveiled at the University of Sydney in Australia.
Gluten
Gluten (from Latin "gluten", "glue") is a group of proteins, called prolamins and glutelins, which occur with starch in the endosperm of various cereal grains. This protein complex supplies 75–85% of the total protein in bread wheat. It is found in related wheat species and hybrids (such as spelt, khorasan, emmer, einkorn, and triticale), barley, rye, and oats, as well as products derived from these grains, such as breads and malts.
Glutens, especially Triticeae glutens, have unique viscoelastic and adhesive properties, which give dough its elasticity, helping it rise and keep its shape and often leaving the final product with a chewy texture. These properties and its relatively low cost are the reasons why gluten is so widely demanded by the food industry and for non-food uses.
Prolamins are, by convention, referred to by different names depending on the grain from which they are sourced. When found in wheat, prolamins are referred to as gliadins; in barley, they are referred to as hordeins; in rye, secalins; and in oats, avenins. These protein classes are collectively referred to as "gluten". Similarly, glutelins found in wheat are typically called glutenins. True gluten is limited to these four grains. (The storage proteins in maize and rice are sometimes called glutens, but they differ from true gluten.)
Gluten can trigger adverse inflammatory, immunological and autoimmune reactions in some people. Gluten can produce a broad spectrum of gluten-related disorders, including celiac disease in 1–2% of the general population, non-celiac gluten sensitivity in 6–10% of the general population, dermatitis herpetiformis, gluten ataxia and other neurological disorders. These disorders are treated by a gluten-free diet.
Gluten forms when glutenin molecules cross-link via disulfide bonds to form a submicroscopic network attached to gliadin, which contributes viscosity (thickness) and extensibility to the mix. If this dough is leavened with yeast, fermentation produces carbon dioxide bubbles, which, trapped by the gluten network, cause the dough to rise. Baking coagulates the gluten, which, along with starch, stabilizes the shape of the final product. Gluten content has been implicated as a factor in the staling of bread, possibly because it binds water through hydration.
The formation of gluten affects the texture of the baked goods. Gluten's attainable elasticity is proportional to its content of glutenins with low molecular weights, as this portion contains the preponderance of the sulfur atoms responsible for the cross-linking in the gluten network.
Further refining of the gluten leads to chewier doughs such as those found in pizza and bagels, while less refining yields tender baked goods such as pastry products.
Generally, bread flours are high in gluten (hard wheat); pastry flours have a lower gluten content. Kneading promotes the formation of gluten strands and cross-links, creating baked products that are chewier (as opposed to more brittle or crumbly). The "chewiness" increases as the dough is kneaded for longer times. An increased moisture content in the dough enhances gluten development, and very wet doughs left to rise for a long time require no kneading (see no-knead bread). Shortening inhibits formation of cross-links and is used, along with diminished water and less kneading, when a tender and flaky product, such as a pie crust, is desired.
The strength and elasticity of gluten in flour is measured in the baking industry using a farinograph. This gives the baker a measurement of quality for different varieties of flours when developing recipes for various baked goods.
In industrial production, a slurry of wheat flour is kneaded vigorously by machinery until the gluten agglomerates into a mass. This mass is collected by centrifugation, then transported through several stages integrated in a continuous process. About 65% of the water in the wet gluten is removed by means of a screw press; the remainder is sprayed through an atomizer nozzle into a drying chamber, where it remains at an elevated temperature for a short time to allow the water to evaporate without denaturing the gluten. The process yields a flour-like powder with a 7% moisture content, which is air cooled and pneumatically transported to a receiving vessel. In the final step, the processed gluten is sifted and milled to produce a uniform product.
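The two-stage water removal described above is, in effect, a simple mass balance. The sketch below is illustrative only: the 65% press removal and 7% final moisture figures come from the passage, while the initial moisture content of wet gluten (taken here as 70%) and the function name are assumptions for the example.

```python
def drying_mass_balance(wet_mass_kg, initial_moisture=0.70,
                        press_removal=0.65, final_moisture=0.07):
    """Rough two-stage mass balance for industrial gluten drying.

    initial_moisture is an assumed illustrative value; press_removal
    (fraction of water taken out by the screw press) and final_moisture
    (moisture fraction of the finished powder) follow the text.
    """
    water = wet_mass_kg * initial_moisture
    solids = wet_mass_kg - water
    water_after_press = water * (1 - press_removal)
    # Spray drying brings the product down to the final moisture fraction:
    # the final water w satisfies w / (solids + w) = final_moisture.
    final_water = solids * final_moisture / (1 - final_moisture)
    return {
        "water_removed_by_press_kg": round(water - water_after_press, 2),
        "water_evaporated_in_dryer_kg": round(water_after_press - final_water, 2),
        "final_product_kg": round(solids + final_water, 2),
    }
```

For 100 kg of wet gluten at the assumed 70% moisture, the press would remove about 45.5 kg of water and the dryer roughly a further 22 kg, leaving about 32 kg of powder.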
This flour-like powder, when added to ordinary flour dough, may help improve the dough's ability to increase in volume. The added gluten also increases the bread's structural stability and chewiness. Gluten-added dough must be worked vigorously to induce it to rise to its full capacity; an automatic bread machine or food processor may be required for high-gluten kneading. Generally, higher gluten levels are associated with higher overall protein content.
Gluten, especially wheat gluten, is often the basis for imitation meats resembling beef, chicken, duck (see mock duck), fish and pork. When cooked in broth, gluten absorbs some of the surrounding liquid (including the flavor) and becomes firm to the bite. This use of gluten is a popular means of adding supplemental protein to many vegetarian diets. In home or restaurant cooking, wheat gluten is prepared from flour by kneading the flour under water, agglomerating the gluten into an elastic network known as a dough, and then washing out the starch.
Gluten is often present in beer and soy sauce, and can be used as a stabilizing agent in more unexpected food products, such as ice cream and ketchup. Foods of this kind may therefore present problems for a small number of consumers, because the hidden gluten constitutes a hazard for people with celiac disease and gluten sensitivities. The protein content of some pet foods may also be enhanced by adding gluten.
Gluten is also used in cosmetics, hair products and other dermatological preparations.
"Gluten-related disorders" is the umbrella term for all diseases triggered by gluten, which include celiac disease (CD), non-celiac gluten sensitivity (NCGS), wheat allergy, gluten ataxia and dermatitis herpetiformis (DH).
The gluten peptides are responsible for triggering gluten-related disorders. In people who have celiac disease, the peptides cause injury of the intestines, ranging from inflammation to partial or total destruction of the intestinal villi. To study mechanisms of this damage, laboratory experiments are done "in vitro" and "in vivo". Among the gluten peptides, gliadin has been studied extensively.
In the context of celiac disease, gliadin peptides are classified in basic and clinical research as toxic or immunogenic, depending on their mechanism of action:
At least 50 epitopes of gluten may produce cytotoxic, immunomodulatory, and gut-permeating activities.
The effect of oat peptides (avenins) in people with celiac disease depends on the oat cultivar consumed, because the prolamin genes, protein amino acid sequences, and immunotoxicities of prolamins vary among oat varieties. In addition, oat products may be cross-contaminated with other gluten-containing cereals.
Gluten-related disorders have been increasing in frequency in different geographic areas. This can possibly be explained by one or more of the following: the growing westernization of diets, the increasing use of wheat-based foods included in the Mediterranean diet, the progressive replacement of rice by wheat in many countries in Asia, the Middle East, and North Africa, the development in recent years of new types of wheat with a higher amount of cytotoxic gluten peptides, and the higher content of gluten in bread and bakery products due to the reduction of dough fermentation time.
Celiac disease (CD) is a chronic, multiple-organ autoimmune disorder, primarily affecting the small intestine, that is caused by the ingestion of wheat, barley, rye, oats, and their derivatives and appears in genetically predisposed people of all ages. CD is not only a gastrointestinal disease: it may involve several organs, cause an extensive variety of non-gastrointestinal symptoms, and, most importantly, may be apparently asymptomatic. Many apparently asymptomatic people have in fact become accustomed to living with chronic poor health as if it were normal; only after starting the gluten-free diet, when improvement becomes evident, are they able to recognize that they had symptoms related to celiac disease. Diagnosis is further complicated by the facts that serological markers (anti-tissue transglutaminase [TG2]) are not always present and that many people have only minor mucosal lesions, without atrophy of the intestinal villi.
CD affects approximately 1–2% of the general population, but most cases remain unrecognized, undiagnosed and untreated, and at risk for serious long-term health complications. People may suffer severe disease symptoms and be subjected to extensive investigations for many years, before a proper diagnosis is achieved. Untreated CD may cause malabsorption, reduced quality of life, iron deficiency, osteoporosis, an increased risk of intestinal lymphomas, and greater mortality. CD is associated with some other autoimmune diseases, such as diabetes mellitus type 1, thyroiditis, gluten ataxia, psoriasis, vitiligo, autoimmune hepatitis, dermatitis herpetiformis, primary sclerosing cholangitis, and more.
CD with "classic symptoms", which include gastrointestinal manifestations such as chronic diarrhea and abdominal distention, malabsorption, loss of appetite, and impaired growth, is currently the least common presentation form of the disease and affects predominantly small children generally younger than two years of age.
CD with "non-classic symptoms" is the most common clinical type and occurs in older children (over 2 years old), adolescents, and adults. It is characterized by milder or even absent gastrointestinal symptoms and a wide spectrum of non-intestinal manifestations that can involve any organ of the body, and very frequently may be completely asymptomatic both in children (at least in 43% of the cases) and adults.
Non-celiac gluten sensitivity (NCGS) is described as a condition of multiple symptoms that improves when switching to a gluten-free diet, after celiac disease and wheat allergy are excluded. Recognized since 2010, it is included among gluten-related disorders. Its pathogenesis is not yet well understood, but the activation of the innate immune system and the direct negative effects of gluten, and probably of other wheat components, are implicated.
NCGS is the most common syndrome of gluten intolerance, with an estimated prevalence of 6–10%. NCGS is becoming a more common diagnosis, but its true prevalence is difficult to determine because many people self-diagnose and start a gluten-free diet without first being tested for celiac disease or receiving a dietary prescription from a physician. People with NCGS and gastrointestinal symptoms habitually remain in a "no man's land", unrecognized by specialists and lacking adequate medical care and treatment. Most of them have a long history of health complaints and unsuccessful consultations with numerous physicians while trying to obtain a diagnosis of celiac disease, but they are labeled only as having irritable bowel syndrome. A substantial although undefined number of people eliminate gluten because they identify it as responsible for their symptoms, and, as these improve on the gluten-free diet, they self-diagnose as NCGS.
People with NCGS may develop gastrointestinal symptoms, which resemble those of irritable bowel syndrome or wheat allergy, or a wide variety of non-gastrointestinal symptoms, such as headache, chronic fatigue, fibromyalgia, atopic diseases, allergies, neurological diseases, or psychiatric disorders, among others. The results of a 2017 study suggest that NCGS may be a chronic disorder, as is the case with celiac disease.
Besides gluten, additional components present in wheat, rye, barley, oats, and their derivatives, including other proteins called amylase-trypsin inhibitors (ATIs) and short-chain carbohydrates known as FODMAPs, may cause NCGS symptoms. As of 2019, reviews concluded that although FODMAPs present in wheat and related grains may play a role in non-celiac gluten sensitivity, they explain only certain gastrointestinal symptoms, such as bloating, and not the extra-digestive symptoms that people with non-celiac gluten sensitivity may develop, such as neurological disorders, fibromyalgia, psychological disturbances, and dermatitis. ATIs may cause toll-like receptor 4 (TLR4)-mediated intestinal inflammation in humans.
People can also experience adverse effects of wheat as a result of a wheat allergy. As with most allergies, a wheat allergy causes the immune system to abnormally respond to a component of wheat that it treats as a threatening foreign body. This immune response is often time-limited and does not cause lasting harm to body tissues. Wheat allergy and celiac disease are different disorders. Gastrointestinal symptoms of wheat allergy are similar to those of celiac disease and non-celiac gluten sensitivity, but there is a different interval between exposure to wheat and onset of symptoms. An allergic reaction to wheat has a fast onset (from minutes to hours) after the consumption of food containing wheat and could include anaphylaxis.
Gluten ataxia is an autoimmune disease triggered by the ingestion of gluten. With gluten ataxia, damage takes place in the cerebellum, the balance center of the brain that controls coordination and complex movements like walking, speaking and swallowing, with loss of Purkinje cells. People with gluten ataxia usually present gait abnormality or incoordination and tremor of the upper limbs. Gaze-evoked nystagmus and other ocular signs of cerebellar dysfunction are common. Myoclonus, palatal tremor, and opsoclonus-myoclonus may also appear.
Early diagnosis and treatment with a gluten-free diet can improve ataxia and prevent its progression. The effectiveness of the treatment depends on the elapsed time from the onset of the ataxia until diagnosis, because the death of neurons in the cerebellum as a result of gluten exposure is irreversible.
Gluten ataxia accounts for 40% of ataxias of unknown origin and 15% of all ataxias. Less than 10% of people with gluten ataxia present any gastrointestinal symptom, yet about 40% have intestinal damage.
In addition to gluten ataxia, gluten sensitivity can cause a wide spectrum of neurological disorders, which develop with or without the presence of digestive symptoms or intestinal damage. These include peripheral neuropathy, epilepsy, headache, encephalopathy, vascular dementia, and various movement disorders (restless legs syndrome, chorea, parkinsonism, Tourette syndrome, palatal tremor, myoclonus, dystonia, opsoclonus myoclonus syndrome, paroxysms, dyskinesia, myorhythmia, myokymia).
The diagnosis of underlying gluten sensitivity is complicated and delayed when there are no digestive symptoms. People who do experience gastrointestinal problems are more likely to receive a correct diagnosis and treatment. A strict gluten-free diet is the first-line treatment, which should be started as soon as possible, and it is effective in most of these disorders. When dementia has progressed to an advanced degree, the diet has no beneficial effect. Cortical myoclonus appears to be resistant to both a gluten-free diet and immunosuppression.
People with gluten-related disorders must remove gluten from their diet strictly, so they need clear labeling rules. The term "gluten-free" is generally used to indicate a supposedly harmless level of gluten rather than a complete absence. The exact level at which gluten is harmless is uncertain and controversial. A 2008 systematic review tentatively concluded that consumption of less than 10 mg of gluten per day is unlikely to cause intestinal damage in people with celiac disease, although it noted that few reliable studies had been done. Regulation of the label "gluten-free" varies.
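The connection between a contamination level expressed in parts per million and the 10 mg/day figure above is simple arithmetic, since ppm is equivalent to mg of gluten per kg of food. In this sketch the function name and the daily food quantities are illustrative assumptions:

```python
def daily_gluten_mg(food_grams, gluten_ppm):
    # ppm = mg of gluten per kg of food, so intake scales linearly
    # with the amount of that food eaten per day.
    return food_grams / 1000 * gluten_ppm

print(daily_gluten_mg(500, 20))  # 500 g/day at 20 ppm -> 10.0 mg, at the review's threshold
print(daily_gluten_mg(300, 20))  # 300 g/day at 20 ppm -> 6.0 mg, below it
```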
The "Codex Alimentarius" international food-labeling standards include a standard relating to the labeling of products as "gluten-free". It applies only to foods that would normally contain gluten.
By law in Brazil, all food products must display labels clearly indicating whether or not they contain gluten.
Labels for all food products sold in Canada must clearly identify the presence of gluten if it is present at a level greater than 20 parts per million.
In the European Union, all prepackaged foods and non-prepacked foods from a restaurant, take-out food wrapped just before sale, or unpackaged food served in institutions must be identified if gluten-free. "Gluten-free" is defined as 20 parts per million of gluten or less and "very low gluten" is 100 parts per million of gluten or less; only foods with cereal ingredients processed to remove gluten can claim "very low gluten" on labels.
All foods containing gluten as an ingredient must be labelled accordingly as gluten is defined as one of the 14 recognised EU allergens.
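The EU thresholds described above can be summarized as a small classifier. This is a sketch under the assumption that the "or less" wording means inclusive cutoffs; the function name and return labels are illustrative, and the "very low gluten" claim additionally requires cereal ingredients processed to remove gluten:

```python
def eu_gluten_label(ppm):
    """Map a measured gluten level (ppm) to the EU label categories
    described in the text; cutoffs are inclusive per the "or less" wording."""
    if ppm <= 20:
        return "gluten-free"
    if ppm <= 100:
        # Only permitted for foods whose cereal ingredients have been
        # processed to remove gluten.
        return "very low gluten"
    return "no gluten claim permitted"
```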
In the United States, gluten is not listed on labels unless added as a standalone ingredient. Wheat or other allergens are listed after the ingredient line. The US Food and Drug Administration (FDA) has historically classified gluten as "generally recognized as safe" (GRAS). In August 2013, the FDA issued a final ruling, effective August 2014, that defined the term "gluten-free" for voluntary use in the labeling of foods as meaning that the amount of gluten contained in the food is below 20 parts per million.
Glen or Glenda
Glen or Glenda is a 1953 American exploitation film written, directed by and starring Ed Wood (credited in his starring role as "Daniel Davis"), and featuring Bela Lugosi and Wood's then-girlfriend Dolores Fuller. It was produced by George Weiss, who also made the exploitation film "Test Tube Babies" that same year.
The film is a docudrama about cross-dressing and transsexuality, and is semi-autobiographical in nature. Wood himself was a cross-dresser, and the film is a plea for tolerance. It is widely considered one of the worst films ever made. However, it has become a cult film due to its low-budget production values and idiosyncratic style.
The film begins with a narrator, called The Scientist (Bela Lugosi), making cryptic comments about humanity. He first comments that humanity's constant search for the unknown results in startling things coming to light. But most of these "new" discoveries are actually quite old, which he refers to as "the signs of the ages". Later, the scene turns to the streets of a city, with the narrator commenting that each human has his or her own thoughts, ideas, and personality. He makes further comments on human life, with sounds accompanying some of them: the cries of a newborn baby are followed by the sirens of an ambulance, one a sign that a new life has begun, the other that a life has ended.
This last comment starts the narrative of the film. The life which has ended is that of a transvestite named Patrick/Patricia, who has committed suicide. A scene opens with their corpse in a small room. Within the room is an unidentified man who opens the door to a physician, a photographer, and the police. A suicide note explains the reasons behind the suicide. Patrick/Patricia had been arrested four times for cross-dressing in public, and had spent time in prison. Since they would continue wearing women's clothing, subsequent arrests and imprisonment were only a matter of time. So they ended their own life and wished to be buried with their women's clothing. "Let my body rest in death forever, in the things I cannot wear in life."
Inspector Warren is puzzled and wants to know more about cross-dressing. So he seeks the office of Dr. Alton, who narrates for him the story of Glen/Glenda. Glen is shown studying women's clothes in a shop window. Dr. Alton points out that men's clothes are dull and restrictive, whereas women can adorn themselves with attractive and comfortable clothing. A flashback scene reveals that a young Glen started out by asking to wear his sister's dress for a Halloween party. And he did, despite his father's protests. But he then continued wearing his sister's clothing, and Sheila (the sister) eventually caught him in the act. She shuns him afterward.
The narrative explains that Glen is a transvestite, but not a homosexual. He hides his cross-dressing from his fiancée, Barbara, fearing that she will reject him. She has no idea that certain of her clothes are fetish objects for him. When Barbara notices that something is bothering him, Glen does not have the courage to explain his secret to her. She voices her suspicion that there is another woman in his life, unaware that the woman is his feminine alter ego, Glenda. The scene shifts from a speechless Glen to footage of a stampeding herd of bison, while the Scientist calls for Glen to "Pull the string". The meaning of the call is unclear, though it could be a call for opening the proverbial curtain and revealing the truth.
Alton narrates that Glen is torn between the idea of being honest with Barbara before their wedding, or waiting until after. The narrative shifts briefly from Glen's story to how society reacts to sex change operations. A conversation between two "average joes" concludes that society should be more "lenient" when it comes to people with transvestite tendencies. The story returns to Glen, who confides in a transvestite friend of his, John, whose wife left him after catching him wearing her clothes.
Later, a scene opens with Glen/Glenda walking the city streets at night. They return home in obvious anguish, when the sound of thunder causes them to collapse to the floor. The Scientist cryptically comments "Beware! Beware! Beware of the big, green dragon that sits on your doorstep! He eats little boys, puppy dog tails, and big, fat snails! Take care! Beware!" This serves as the introduction to an extended dream sequence. The dream opens with Barbara anguished at seeing Glenda. Then Barbara is depicted trapped under a tree, while the room around her is in a chaotic state. Glenda fails to lift the tree and rescue Barbara. Glenda is replaced by Glen, who completes the task with ease. The dream then depicts Glen and Barbara getting married. The priest seems normal but the best man is a stereotypical devil, smiling ominously, suggesting that this marriage is damned. The dream shifts to the Scientist, who seems to speak to the unseen dragon, asking it what it eats. The voice of a little girl provides the answers in an apparently mocking tone.
The dream continues with a strange series of vignettes. A woman is whipped by a shirtless man in a BDSM-themed vignette. Several women "flirt and partially disrobe" for an unseen audience. A woman tears apart her dress in a dramatic manner, then starts a coy striptease. The whipped woman from an earlier vignette appears alone in an autoerotic session. Her pleasure is interrupted by another woman who forcibly binds and gags her. Another woman has a similar autoerotic session and then falls asleep. As she sleeps, a predatory male approaches and rapes her, with the victim seeming partially willing by the end of it. Throughout these vignettes, the faces of Glen and the Scientist appear, seeming to react silently to the various images.
The dream returns to Glen, who is haunted by sounds of mocking voices and howling winds. He is soon confronted by two spectral figures. A blackboard appears, with messages recording what the Scientist or the mocking voices said in previous scenes. A large number of spectres appear, all gazing at him with disapproval, as if serving as the jury of public opinion on his perceived deviance. The mocking voices return. The Devil and the various spectres menacingly approach Glen. Then the Devil departs, Glen turns into Glenda, and the spectres retreat. A victorious Glenda sees Barbara and approaches her, but she turns into a mocking Devil. Barbara starts appearing and disappearing, always evading Glenda's embrace. Then she starts mocking her lover. The Devil and spectres also shift to mocking Glenda. The dream sequence ends.
Glen/Glenda wakes and stares at their mirror reflection. They decide to tell Barbara the truth. She initially reacts with distress, but ultimately decides to stay with him. She offers them an angora sweater as a sign of acceptance. The scene effectively concludes their story.
Back in Dr. Alton's office, he starts another narrative. This one concerns another transvestite, called Alan/Anne. Alan was born a boy, but his mother wanted a girl and raised him as one; their father did not care either way. Alan/Anne was an outsider as a child, trying to be one of the girls and consequently rejected by schoolmates of both sexes. As a teenager, they self-identified as a woman. They were conscripted during World War II, maintaining a secret life throughout their military service. They first heard of sex change operations during the war, while recovering from combat wounds in a hospital. They eventually did have a sex change operation, enduring the associated pains to fulfill their dreams. The World War II veteran becomes a "lovely young lady". Following a brief epilogue, the film ends.
Shot in four days, the film was loosely inspired by the sex reassignment surgery of Christine Jorgensen, which made national headlines in the U.S. in 1952. Posters for the film publicize the movie as being based on Jorgensen. George Weiss, a Hollywood producer of low-budget films, commissioned a movie to exploit it. Originally Weiss made Jorgensen several offers to appear in the film, but these were turned down. Wood convinced Weiss that his own transvestism made him the perfect director despite his modest resume. Wood was given the job, but instead made a movie about transvestism.
Wood persuaded Lugosi, at the time poor and drug-addicted, to appear in the movie. Lugosi's scenes were shot at the Jack Miles Studios in Los Angeles. He was reportedly paid $5000 for the role, although some stories state the actual amount was only $1000. Lugosi is credited as "The Scientist", a character whose purpose is unclear. He acts as a sort of narrator but gives no narration relevant to the plot; that job is reserved for the film's primary narrator, Timothy Farrell.
This was the only film Wood directed but did not also produce. Wood played the eponymous character, but under the pseudonym "Daniel Davis". His then-girlfriend, Dolores Fuller, played Glen's girlfriend/fiancée Barbara. Wood later returned to "Glen or Glenda" in his pulp novel "Killer in Drag" (1963). The plot features a transvestite called Glen whose alter-ego is called Glenda. He is executed in the sequel "Death of a Transvestite" (1967) after a struggle for the right to go to the electric chair dressed as Glenda.
The erotic-themed vignettes were not created by Wood. They were reportedly added by producer George Weiss, who needed extra scenes to pad what he felt was an overly short film. While not organic parts of the narrative, they seem to tell their own tales of gender dynamics and so fit the general themes of the film. The whipping scene suggests a Master/slave relationship; that the man is dominant and the woman submissive seems to reflect male chauvinism. The flirtatious and striptease-themed vignettes were typical of 1950s exploitation films and grindhouse films, as was the rape scene.
The film has deleted scenes. In the theatrical trailer, included in laserdisc and DVD editions, the scene in which Fuller hands over her angora sweater, is a different take than the one in the release version — in the trailer, she tosses it to Wood in a huff, while the release version shows her handing it over more acceptingly. There is also a shot of Wood in drag, mouthing the word "Cut!"
The second part of the film, titled "Alan or Anne", is much shorter, told largely through stock footage, and was made to meet the distributor's demand for a sex change film. Alan is a pseudo-hermaphrodite who fights in World War II wearing women's underwear. After his return, Alan undergoes surgery to become a woman.
Domestically, the film had a limited release, having been pre-sold to some theaters under alternative titles including "I Led Two Lives", "He or She?" and "I Changed My Sex". Internationally, the release was also limited: in France and Belgium the title was translated as "Louis ou Louise", and in Argentina as "Yo Cambié Mi Sexo"; the film had a brief screening in the Republic of China. It was re-released to theaters in 1981 by Paramount.
According to Tim Dirks, the film was one of a wave of "cheap teen movies" released for the drive-in market. They consisted of "exploitative, cheap fare created especially for them [teens] in a newly-established teen/drive-in genre."
It was denied classification by the British Board of Film Classification upon submission on February 26, 1958.
In 1980, Wood was posthumously given the accolade of 'Worst Director of All Time' at the Golden Turkey Awards, and a revival of interest in his work followed. This led to "Glen or Glenda" being reissued in 1982. This cut included six minutes of additional footage; one of the restored scenes features Glen rejecting a pass made at him by a man. At this point, the film was reviewed seriously, and reclaimed as a radical work, by Steve Jenkins in the "Monthly Film Bulletin".
The critic Leonard Maltin names "Glen or Glenda" as "possibly the worst movie ever made".
In 1994, Tim Burton chronicled the troubled production of "Glen or Glenda" in "Ed Wood". The film includes re-creations of several key scenes, including Lugosi's narration and Glen's plea for his girlfriend's understanding at the end of the film.
A remake of the film, entitled "Glen & Glenda", was released the same year as "Ed Wood" and featured much the same script as the original film, as well as explicit scenes.
In 2006, the distributed operating system Plan 9 from Bell Labs had a mascot, Glenda, the Plan 9 Bunny, named after "Glen or Glenda" and "Plan 9 from Outer Space".
In 2011, the film "Jack and Jill" was nominated for every award at the 32nd Golden Raspberry Awards, including Worst Prequel, Remake, Ripoff or Sequel, where it was declared a Remake/Ripoff of "Glen or Glenda", despite having nothing to do with the film other than the lead actor dressing as a woman.
The Golden Turkey Awards
The Golden Turkey Awards is a 1980 book by film critic Michael Medved and his brother Harry.
The book awards "Golden Turkey Awards" to films judged by the authors as poor in quality, and to directors and actors judged to have created a chronically inept body of work. The book features many low-budget obscurities and exploitation films such as "Rat Pfink a Boo Boo", "Attack of the 50 Foot Woman", and the apparently lost "Him". Other categories include expensive, big studio failures like "The Swarm" and popular films such as "Jesus Christ Superstar".
In the introduction the authors admit that "we know our choices will not please everyone—least of all the actors, producers, writers and directors who are honored in the pages that follow. We further recognize that the number of bad films is so enormous and the competition for the very worst is so intense, that all decisions reached here are subject to considerable second-guessing. Nevertheless, we have researched the subject thoroughly—sitting through more than 2,000 wretched films in the last few years—and we believe that our nominees and award winners can stand the test of time."
The Medveds had previously celebrated bad cinema in "The Fifty Worst Films of All Time", many of which were also featured in the various "Golden Turkey Awards" categories. Subsequently, they turned their attention to box office bombs in "The Hollywood Hall of Shame". They also published a sequel to "The Golden Turkey Awards", "Son of Golden Turkey Awards," in 1986. They declared that "Son of Golden Turkey Awards" "is our last word...we hereby solemnly pledge that the years ahead will produce no further Golden Turkey publications by the Medved Brothers...we now pass the torch to whichever brave souls feel ready to take up the challenge." "Son of Golden Turkey Awards" also listed a "Who's Who in the World of Bad Movies" at the end of its awards presentations.
The Golden Turkey Awards formed the basis of a 1983 television series "The Worst of Hollywood" hosted by Michael Medved.
In the book "The Fifty Worst Films of All Time" the authors invited readers to write in nominating their favorite "worst films". More than 3,000 ballots were received. Based on these votes, the Worst Film of All Time award was given to "Plan 9 from Outer Space" by Ed Wood.
Wood is also awarded the title of Worst Director of All Time, judged by the authors.
Raquel Welch is judged the Worst Actress of All Time over nominees including Candice Bergen and Mamie Van Doren.
Richard Burton is judged as the Worst Actor of All Time over nominees John Agar, Tony Curtis and Victor Mature. While conceding he is sometimes brilliant, the authors claim Burton's "occasional triumphs only serve to highlight the pathetic waste in most of his films; for every "Equus" in which he appears there are at least a half-dozen "Cleopatra"s or "Boom!"s". The authors state that "when he is bad ... well, he's just the pits" and list several "bad" films in which he has appeared: "The Sandpiper", "Hammersmith Is Out", "The Voyage", "The Medusa Touch" and "The Assassination of Trotsky". Another Burton film, "", is the book's first runner-up in the Worst Film of All Time award based on reader response.
In addition, the Golden Turkey Awards had a reader's choice category for Worst Film of All Time, voted upon by readers of "The Fifty Worst Films of All Time".
One of the films nominated in the book was in fact an invention of the authors, and readers were challenged to figure out which film was actually fake. The fake film was "Dog of Norway" featuring "Muki the Wonder Hound". This film was illustrated using a photo of a co-author's dog. The giveaway was that the same dog was in the photo of the authors in the back of the book. Another film in the book, the now-lost 1974 porn film "Him", has also been cited as the hoax, though it is known to have existed.
No formal clarification of the hoax film was provided by the subsequent release, "The Hollywood Hall of Shame". That book again features the same dog pictured with the authors (as did the subsequent "Son of Golden Turkey Awards"). In "The Hollywood Hall of Shame", in reference to the dish barbecued dog, the authors explain that it was "a snack which produced a mixed reaction among the representatives of an industry that had given the world Lassie, Rin Tin Tin, Benji, Phyllis Diller, and Muki the Wonder Hound."
The "Acknowledgements" page of "The Fifty Worst Films of All Time" ends with:
George Fox
George Fox (July 1624 – 23 January 1691 (O.S. 13 January 1690)) was an English Dissenter, who was a founder of the Religious Society of Friends, commonly known as the Quakers or Friends. The son of a Leicestershire weaver, he lived in times of social upheaval and war. He rebelled against the religious and political authorities by proposing an unusual, uncompromising approach to the Christian faith. He travelled throughout Britain as a dissenting preacher, performing hundreds of healings, and often being persecuted by the disapproving authorities. In 1669, he married Margaret Fell, widow of a wealthy supporter, Thomas Fell; she was a leading Friend. His ministry expanded and he made tours of North America and the Low Countries. He was arrested and jailed numerous times for his beliefs. He spent his final decade working in London to organise the expanding Quaker movement. Despite disdain from some Anglicans and Puritans, he was viewed with respect by the Quaker convert William Penn and the Lord Protector, Oliver Cromwell.
George Fox was born in the strongly Puritan village of Drayton-in-the-Clay, Leicestershire, England (now known as Fenny Drayton), 15 miles (24 km) west-south-west of Leicester. He was the eldest of four children of Christopher Fox, a successful weaver, called "Righteous Christer" by his neighbours, and his wife, Mary "née" Lago. Christopher Fox was a churchwarden and was relatively wealthy; when he died in the late 1650s he left his son a substantial legacy. From childhood Fox was of a serious, religious disposition. There is no record of any formal schooling but he learned to read and write. "When I came to eleven years of age", he said, "I knew pureness and righteousness; for, while I was a child, I was taught how to walk to be kept pure. The Lord taught me to be faithful, in all things, and to act faithfully two ways; viz., inwardly to God, and outwardly to man." Known as an honest person, he also proclaimed, "The Lord taught me to be faithful in all things... and to keep to Yea and Nay in all things."
As he grew up, his relatives "thought to have made me a priest" but he was instead apprenticed to a local shoemaker and grazier, George Gee of Mancetter. This suited his contemplative temperament and he became well known for his diligence among the wool traders who had dealings with his master. A constant obsession for Fox was the pursuit of "simplicity" in life, meaning humility and the abandonment of luxury, and the short time he spent as a shepherd was important to the formation of this view. Toward the end of his life he wrote a letter for general circulation pointing out that Abel, Noah, Abraham, Jacob, Moses and David were all keepers of sheep or cattle and therefore that a learned education should not be seen as a necessary qualification for ministry.
George Fox knew people who were "professors" (followers of the standard religion), but by the age of 19 he had begun to look down on their behaviour, in particular drinking alcohol. He records that, in prayer one night after leaving two acquaintances at a drinking session, he heard an inner voice saying, "Thou seest how young people go together into vanity, and old people into the earth; thou must forsake all, young and old, keep out of all, and be as a stranger unto all."
Driven by his "inner voice", Fox left Drayton-in-the-Clay in September 1643, moving toward London in a state of mental torment and confusion. The English Civil War had begun and troops were stationed in many towns through which he passed. In Barnet, he was torn by depression (perhaps from the temptations of the resort town near London). He alternately shut himself in his room for days at a time or went out alone into the countryside. After almost a year he returned to Drayton, where he engaged Nathaniel Stephens, the clergyman of his hometown, in long discussions on religious matters. Stephens considered Fox a gifted young man but the two disagreed on so many issues that he later called Fox mad and spoke against him.
Over the next few years Fox continued to travel around the country as his particular religious beliefs took shape. At times he actively sought the company of clergy but found no comfort from them as they seemed unable to help with the matters troubling him. One, in Warwickshire, advised him to take tobacco (which Fox disliked) and sing psalms; another, in Coventry, lost his temper when Fox accidentally stood on a flower in his garden; a third suggested bloodletting. He became fascinated by the Bible, which he studied assiduously. He hoped to find among the "English Dissenters" a spiritual understanding absent from the established church but fell out with one group, for example, because he maintained that women had souls:
He thought intensely about the Temptation of Christ, which he compared to his own spiritual condition, but drew strength from his conviction that God would support and preserve him. In prayer and meditation he came to a greater understanding of the nature of his faith and what it required from him; this process he called "opening". He also came to what he deemed a deep inner understanding of standard Christian beliefs. Among his ideas were:
In 1647 Fox began to preach publicly: in market-places, fields, appointed meetings of various kinds or even sometimes "steeple-houses" after the service. His powerful preaching began to attract a small following. It is not clear at what point the Society of Friends was formed, but there was certainly a group of people who often travelled together. At first, they called themselves "Children of the Light" or "Friends of the Truth", and later simply "Friends". Fox seems to have initially had no desire to found a sect but only to proclaim what he saw as the pure and genuine principles of Christianity in their original simplicity, though he afterward showed great prowess as a religious organiser in the structure he gave to the new society.
There were a great many rival Christian denominations holding very diverse opinions; the atmosphere of dispute and confusion gave Fox an opportunity to put forward his own beliefs through his personal sermons. Fox's preaching was grounded in scripture but was mainly effective because of the intense personal experience he was able to project. He was scathing about immorality, deceit and the exacting of tithes, and urged his listeners to lead lives without sin, avoiding the Ranters' antinomian view that a believer becomes automatically sinless. By 1651 he had gathered other talented preachers around him and continued to roam the country despite a harsh reception from some listeners, who would whip and beat them to drive them away. As his reputation spread, his words were not welcomed by all. As an uncompromising preacher, he hurled disputation and contradiction in the faces of his opponents. The worship of Friends in the form of silent waiting punctuated by individuals speaking as the Spirit moved them seems to have been well-established by this time, though it is not recorded how this came to be; Richard Bauman asserts that "speaking was an important feature of the meeting for worship from the earliest days of Quakerism."
Fox complained to judges about decisions he considered morally wrong, as in his letter on the case of a woman due to be executed for theft. He campaigned against the paying of tithes, which funded the established church and often went into the pockets of absentee landlords or religious colleges far away from the paying parishioners. In his view, as God was everywhere and anyone could preach, the established church was unnecessary and a university qualification irrelevant for a preacher. Conflict with civil authority was inevitable. Fox was imprisoned several times, the first at Nottingham in 1649. At Derby in 1650 he was imprisoned for blasphemy; a judge mocked Fox's exhortation to "tremble at the word of the Lord", calling him and his followers "Quakers". Following his refusal to fight against the return of the monarchy (or to take up arms for any reason), his sentence was doubled. The refusal to swear oaths or take up arms came to be a much more important part of his public statements. Refusal to take oaths meant that Quakers could be prosecuted under laws compelling subjects to pledge allegiance, as well as making testifying in court problematic. In a letter of 1652 ("That which is set up by the sword"), he urged Friends not to use "carnal weapons" but "spiritual weapons", saying "let the waves [the power of nations] break over your heads".
In 1652, Fox preached for several hours under a walnut tree at Balby, where his disciple Thomas Aldham was instrumental in setting up the first meeting in the Doncaster area. In the same year Fox felt that God led him to ascend Pendle Hill where he had a vision of many souls coming to Christ. From there he travelled to Sedbergh, where he had heard a group of Seekers were meeting, and preached to over a thousand people on Firbank Fell, convincing many, including Francis Howgill, to accept that Christ might speak to people directly. At the end of the month he stayed at Swarthmoor Hall, near Ulverston, the home of Thomas Fell, vice-chancellor of the Duchy of Lancaster, and his wife, Margaret. At around this time the "ad hoc" meetings of Friends began to be formalised and a monthly meeting was set up in County Durham. Margaret became a Quaker, and although Thomas did not convert, his familiarity with the Friends proved influential when Fox was arrested for blasphemy in October. Fell was one of three presiding judges, and the charges were dismissed on a technicality.
Fox remained at Swarthmoor until summer 1653 then left for Carlisle where he was arrested again for blasphemy. It was even proposed to put him to death but Parliament requested his release rather than have "a young man... die for religion". Further imprisonments came at London in 1654, Launceston in 1656, Lancaster in 1660, Leicester in 1662, Lancaster again and Scarborough in 1664–1666 and Worcester in 1673–1675. Charges usually included causing a disturbance and travelling without a pass. Quakers fell foul of irregularly enforced laws forbidding unauthorised worship, while actions motivated by belief in social equality—refusing to use or acknowledge titles, take hats off in court or bow to those who considered themselves socially superior—were seen as disrespectful. While imprisoned at Launceston Fox wrote, "Christ our Lord and master saith 'Swear not at all, but let your communications be yea, yea, and nay, nay, for whatsoever is more than these cometh of evil.' ...the Apostle James saith, 'My brethren, above all things swear not, neither by heaven, nor by earth, nor by any other oath. Lest ye fall into condemnation.'"
In prison George Fox continued writing and preaching, feeling that imprisonment brought him into contact with people who needed his help—the jailers as well as his fellow prisoners. In his journal, he told his magistrate, "God dwells not in temples made with hands." He also sought to set an example by his actions there, turning the other cheek when being beaten and refusing to show his captors any dejected feelings.
Parliamentarians grew suspicious of monarchist plots and fearful that the group travelling with Fox aimed to overthrow the government: by this time his meetings were regularly attracting crowds of over a thousand. In early 1655 he was arrested at Whetstone, Leicestershire and taken to London under armed guard. In March he was brought before the Lord Protector, Oliver Cromwell. After affirming that he had no intention of taking up arms Fox was able to speak with Cromwell for most of the morning about the Friends and advised him to listen to God's voice and obey it so that, as Fox left, Cromwell "with tears in his eyes said, 'Come again to my house; for if thou and I were but an hour of a day together, we should be nearer one to the other'; adding that he wished [Fox] no more ill than he did to his own soul."
This episode was later recalled as an example of "speaking truth to power", a preaching technique by which subsequent Quakers hoped to influence the powerful. Although not used until the 20th century, the phrase is related to the ideas of plain speech and simplicity which Fox practised, but motivated by the more worldly goal of eradicating war, injustice and oppression.
Fox petitioned Cromwell over the course of 1656, asking him to alleviate the persecution of Quakers. Later that year, they met for a second time at Whitehall. On a personal level, the meeting went well; despite disagreements between the two men, they had a certain rapport. Fox invited Cromwell to "lay down his crown at the feet of Jesus"—which Cromwell declined to do. Fox met Cromwell again twice in March 1657. Their last meeting was in 1658 at Hampton Court, though they could not speak for long or meet again because of the Protector's worsening illness—Fox even wrote that "he looked like a dead man". Cromwell died in September of that year.
One early Quaker convert, the Yorkshireman James Nayler, arose as a prominent preacher in London around 1655. A breach began to form between Fox's and Nayler's followers. As Fox was held prisoner at Launceston, Nayler moved south-westwards towards Launceston intending to meet Fox and heal any rift. On the way he was arrested himself and held at Exeter. After Fox was released from Launceston gaol in 1656, he preached throughout the West Country. Arriving at Exeter late in September, Fox was reunited with Nayler. Nayler and his followers refused to remove their hats while Fox prayed, which Fox took as both a personal slight and a bad example. When Nayler refused to kiss Fox's hand, Fox told Nayler to kiss his foot instead. Nayler was offended and the two parted acrimoniously. Fox wrote, "there was now a wicked spirit risen amongst Friends".
After Nayler's own release later the same year he rode into Bristol triumphantly playing the part of Jesus Christ in a re-enactment of Palm Sunday. He was arrested and taken to London, where Parliament defeated a motion to execute him by 96–82. Instead, they ordered that he be pilloried and whipped through both London and Bristol, branded on his forehead with the letter B (for blasphemer), bored through the tongue with a red-hot iron and imprisoned in solitary confinement with hard labour. Nayler was released in 1659, but he was a broken man. On meeting Fox in London, he fell to his knees and begged Fox's forgiveness. Shortly afterward, Nayler was attacked by thieves while travelling home to his family, and died.
The persecutions of these years – with about a thousand Friends in prison by 1657 – hardened Fox's opinions of traditional religious and social practices. In his preaching, he often emphasised the Quaker rejection of baptism by water; this was a useful way of highlighting how the focus of Friends on inward transformation differed from what he saw as the superstition of outward ritual. It was also deliberately provocative to adherents of those practices, providing opportunities for Fox to argue with them on matters of scripture. This pattern was also found in his court appearances: when a judge challenged him to remove his hat, Fox riposted by asking where in the Bible such an injunction could be found.
The Society of Friends became increasingly organised towards the end of the decade. Large meetings were held, including a three-day event in Bedfordshire, the precursor of the present Britain Yearly Meeting system. Fox commissioned two Friends to travel around the country collecting the testimonies of imprisoned Quakers, as evidence of their persecution; this led to the establishment in 1675 of Meeting for Sufferings, which has continued to the present day.
The 1650s, when the Friends were most confrontational, was one of the most creative periods of their history. During the Commonwealth, Fox had hoped that the movement would become the major church in England. Disagreements, persecution and increasing social turmoil, however, led Fox to suffer from a severe depression, which left him deeply troubled at Reading, Berkshire, for ten weeks in 1658 or 1659. In 1659, he sent parliament his most politically radical pamphlet, "Fifty nine Particulars laid down for the Regulating things", but the year was so chaotic that it never considered them; the document was not reprinted until the 21st century.
With the restoration of the monarchy, Fox's dreams of establishing the Friends as the dominant religion seemed at an end. He was again accused of conspiracy, this time against Charles II, and fanaticism—a charge he resented. He was imprisoned in Lancaster for five months, during which he wrote to the king offering advice on governance: Charles should refrain from war and domestic religious persecution, and discourage oath-taking, plays, and maypole games. These last suggestions reveal Fox's Puritan leanings, which continued to influence Quakers for centuries after his death. Once again, Fox was released after demonstrating that he had no military ambitions.
At least on one point, Charles listened to Fox. The seven hundred Quakers who had been imprisoned under Richard Cromwell were released, though the government remained uncertain about the group's links with other, more violent, movements. A revolt by the Fifth Monarchists in January 1661 led to the suppression of that sect and the repression of other Nonconformists, including Quakers. In the aftermath of this attempted coup, Fox and eleven other Quakers issued a broadside proclaiming what became known among Friends in the 20th century as the "peace testimony": they committed themselves to oppose all outward wars and strife as contrary to the will of God. Not all his followers accepted this statement; Isaac Penington, for example, dissented for a time arguing that the state had a duty to protect the innocent from evil, if necessary by using military force. Despite the testimony, persecution against Quakers and other dissenters continued.
Penington and others, such as John Perrot and John Pennyman, were uneasy at Fox's increasing power within the movement. Like Nayler before them, they saw no reason why men should remove their hats for prayer, arguing that men and women should be treated as equals and if, as according to the apostle Paul, women should cover their heads, then so could men. Perrot and Penington lost the argument. Perrot emigrated to the New World, and Fox retained leadership of the movement.
Parliament enacted laws which forbade non-Anglican religious meetings of more than five people, essentially making Quaker meetings illegal. Fox counselled his followers to openly violate laws that attempted to suppress the movement, and many Friends, including women and children, were jailed over the next two-and-a-half decades. Meanwhile, Quakers in New England had been banished (and some executed), and Charles was advised by his councillors to issue a mandamus condemning this practice and allowing them to return. Fox was able to meet some of the New England Friends when they came to London, stimulating his interest in the colonies. Fox was unable to travel there immediately: he was imprisoned again in 1664 for his refusal to swear the oath of allegiance, and on his release in 1666 was preoccupied with organizational matters — he normalised the system of monthly and quarterly meetings throughout the country, and extended it to Ireland.
Visiting Ireland also gave him the opportunity to preach against what he saw as the excesses of the Roman Catholic Church, in particular the use of ritual. More recent Quaker commentators have noted points of contact between the denominations: both claim the actual presence of God in their meetings, and both allow the collective opinion of the church to augment Biblical teaching. Fox, however, did not perceive this, brought up as he was in a wholly Protestant environment hostile to "Popery".
Fox married Margaret Fell of Swarthmoor Hall, a lady of high social position and one of his early converts, on 27 October 1669 at a meeting in Bristol. She was ten years his senior and had eight children (all but one of them Quakers) by her first husband, Thomas Fell, who had died in 1658. She was herself very active in the movement, and had campaigned for equality and the acceptance of women as preachers. As there were no priests at Quaker weddings to perform the ceremony, the union took the form of a civil marriage approved by the principals and the witnesses at a meeting. Ten days after the marriage, Margaret returned to Swarthmoor to continue her work there, while George went back to London. Their shared religious work was at the heart of their life together, and they later collaborated on a great deal of the administration the Society required. Shortly after the marriage, Margaret was imprisoned at Lancaster; George remained in the south-east of England, becoming so ill and depressed that for a time he lost his sight.
By 1671 Fox had recovered and Margaret had been released by order of the King. Fox resolved to visit the English settlements in America and the West Indies, remaining there for two years, possibly to counter any remnants of Perrot's teaching there. After a voyage of seven weeks, during which dolphins were caught and eaten, the party arrived in Barbados on 3 October 1671. From there, Fox sent an epistle to Friends spelling out the role of women's meetings in the Quaker marriage ceremony, a point of controversy when he returned home. One of his proposals suggested that the prospective couple should be interviewed by an all-female meeting prior to the marriage to determine whether there were any financial or other impediments. Though women's meetings had been held in London for the last ten years, this was an innovation in Bristol and the north-west of England, which many there felt went too far.
Fox wrote a letter to the governor and assembly of the island in which he refuted charges that Quakers were stirring up the slaves to revolt and tried to affirm the orthodoxy of Quaker beliefs. After a stay in Jamaica, Fox's first landfall on the North American continent was at Maryland, where he participated in a four-day meeting of local Quakers. He remained there while various of his English companions travelled to the other colonies, because he wished to meet some Native Americans who were interested in Quaker ways—though he relates that they had "a great dispute" among themselves about whether to participate in the meeting. Fox was impressed by their general demeanour, which he said was "courteous and loving". He resented the suggestion (from a man in North Carolina) that "the Light and Spirit of God ... was not in the Indians", a proposition which Fox refuted. Fox left no record of encountering slaves on the mainland.
Elsewhere in the colonies, Fox helped to establish organizational systems for the Friends, along the same lines as he had done in Britain. He also preached to many non-Quakers, some but not all of whom were converted.
Following extensive travels around the various American colonies, George Fox returned to England in June 1673 confident that his movement was firmly established there. Back in England, however, he found his movement sharply divided among provincial Friends (such as William Rogers, John Wilkinson and John Story) who resisted establishment of women's meetings and the power of those who resided in or near London. With William Penn and Robert Barclay as allies of Fox, the challenge to Fox's leadership was eventually put down. But in the midst of the dispute, Fox was imprisoned again for refusing to swear oaths after being captured at Armscote, Worcestershire. His mother died shortly after hearing of his arrest and Fox's health began to suffer. Margaret Fell petitioned the king for his release, which was granted, but Fox felt too weak to take up his travels immediately. Recuperating at Swarthmoor, he began dictating what would be published after his death as his journal and devoted his time to his written output: letters, both public and private, as well as books and essays. He devoted much of his energy to the topic of oaths, having become convinced of their importance to Quaker ideas. By refusing to swear, he felt that he could bear witness to the value of truth in everyday life, as well as to God, whom he associated with truth and the inner light.
For three months in 1677 and a month in 1684, Fox visited the Friends in the Netherlands, and organised their meetings for discipline. The first trip was the more extensive, taking him into what is now Germany, proceeding along the coast to Friedrichstadt and back again over several days. Meanwhile, Fox was participating in a dispute among Friends in Britain over the role of women in meetings, a struggle which took much of his energy and left him exhausted. Returning to England, he stayed in the south to try to end the dispute. He followed with interest the foundation of the colony of Pennsylvania, where Penn had granted him land. Persecution continued, with Fox arrested briefly in October 1683. Fox's health was becoming worse, but he continued his activities – writing to leaders in Poland, Denmark, Germany, and elsewhere about his beliefs, and their treatment of Quakers.
In the last years of his life, Fox continued to participate in the London Meetings, and still made representations to Parliament about the sufferings of Friends. The new King, James II, pardoned religious dissenters jailed for failure to attend the established church, leading to the release of about 1,500 Friends. Though the Quakers lost influence after the Glorious Revolution, which deposed James II, the Act of Toleration 1689 put an end to the uniformity laws under which Quakers had been persecuted, permitting them to assemble freely.
Two days after preaching, as usual, at the Gracechurch Street Meeting House in London, George Fox died between 9 and 10 p.m. on 13 January 1690 (23 January 1691 N.S.). He was interred in the Quaker Burying Ground, Bunhill Fields, three days later in the presence of thousands of mourners.
George Fox performed hundreds of healings throughout his preaching ministry, the records of which were collected in a notable but now lost book entitled "Book of Miracles". This book was listed in the catalogue of George Fox's work maintained by the Friends Library in Friends House, London. In 1932, Henry Cadbury found a reference to "Book of Miracles" in the catalogue, which included the beginning and ending of each account of a miraculous cure. The book was then reconstructed based on this resource and journal accounts. According to Rufus M. Jones, the "Book of Miracles" "makes it possible for us to follow George Fox as he went about his seventeenth-century world, not only preaching his fresh messages of life and power, but as a remarkable healer of disease with the undoubted reputation of miracle-worker." The "Book of Miracles" was deliberately suppressed in favor of printing Fox's "Journal" and other writings.
A sample from "Book of Miracles":
"And a young woman her mother...had made her well.
And another young woman was...small pox...of God was made well."
Fox's journal was first published in 1694, after editing by Thomas Ellwood—a friend and associate of John Milton—with a preface by William Penn. Like most similar works of its time the journal was not written contemporaneously to the events it describes, but rather compiled many years later, much of it dictated. Parts of the journal were not in fact by Fox at all but are constructed by its editors from diverse sources and written as if by him. The dissent within the movement and the contributions of others to the development of Quakerism are largely excluded from the narrative. Fox portrays himself as always in the right and always vindicated by God's interventions on his behalf. As a religious autobiography, Rufus Jones compared it to such works as Augustine's "Confessions" and John Bunyan's "Grace Abounding to the Chief of Sinners". It is, though, an intensely personal work with little dramatic power that only succeeds in appealing to readers after substantial editing. Historians have used it as a primary source because of its wealth of detail on ordinary life in the 17th century, and the many towns and villages which Fox visited.
Hundreds of Fox's letters – mostly intended for wide circulation, along with a few private communications – were also published. Written from the 1650s onwards, with such titles as "Friends, seek the peace of all men" or "To Friends, to know one another in the light", they give enormous insight into the detail of Fox's beliefs, and show his determination to spread them. These writings, in the words of Henry Cadbury, Professor of Divinity at Harvard University and a leading Quaker, "contain a few fresh phrases of his own, [but] are generally characterized by an excess of scriptural language and today they seem dull and repetitious". Others point out that "Fox's sermons, rich in biblical metaphor and common speech, brought hope in a dark time." Fox's aphorisms have found an audience beyond Quakers, with many other church groups using them to illustrate principles of Christianity.
Fox is described by Ellwood as "graceful in countenance, manly in personage, grave in gesture, courteous in conversation". Penn says he was "civil beyond all forms of breeding". We are told that he was "plain and powerful in preaching, fervent in prayer", "a discerner of other men's spirits, and very much master of his own", skilful to "speak a word in due season to the conditions and capacities of most, especially to them that were weary, and wanted soul's rest"; "valiant in asserting the truth, bold in defending it, patient in suffering for it, immovable as a rock".
Fox's influence on the Society of Friends was tremendous, and his beliefs have largely been carried forward by that group. Perhaps his most significant achievement, other than his predominant influence in the early movement, was his leadership in overcoming the twin challenges of government prosecution after the Restoration and internal disputes that threatened its stability during the same period. Not all of his beliefs were welcome to all Quakers: his Puritan-like opposition to the arts and rejection of theological study forestalled development of these practices among Quakers for some time.
The name of George Fox is often invoked by traditionalist Friends who dislike modern liberal attitudes to the Society's Christian origins. At the same time, Quakers and others can relate to Fox's religious experience, and even those who disagree with many of his ideas regard him as a pioneer.
Walt Whitman, who was raised by parents inspired by Quaker principles, later wrote: "George Fox stands for something too—a thought—the thought that wakes in silent hours—perhaps the deepest, most eternal thought latent in the human soul. This is the thought of God, merged in the thoughts of moral right and the immortality of identity. Great, great is this thought—aye, greater than all else."
Various editions of Fox's journal have been published from time to time since the first printing in 1694:
Gunpowder Plot
The Gunpowder Plot of 1605, in earlier centuries often called the Gunpowder Treason Plot or the Jesuit Treason, was a failed assassination attempt against King James I by a group of provincial English Catholics led by Robert Catesby.
The plan was to blow up the House of Lords during the State Opening of Parliament on 5 November 1605, as the prelude to a popular revolt in the Midlands during which James's nine-year-old daughter, Elizabeth, was to be installed as the Catholic head of state. Catesby may have embarked on the scheme after hopes of securing greater religious tolerance under King James had faded, leaving many English Catholics disappointed. His fellow plotters were John and Christopher Wright, Robert and Thomas Wintour, Thomas Percy, Guy Fawkes, Robert Keyes, Thomas Bates, John Grant, Ambrose Rookwood, Sir Everard Digby and Francis Tresham. Fawkes, who had 10 years of military experience fighting in the Spanish Netherlands in the failed suppression of the Dutch Revolt, was given charge of the explosives.
The plot was revealed to the authorities in an anonymous letter sent to William Parker, 4th Baron Monteagle, on 26 October 1605. During a search of the House of Lords in the evening on 4 November 1605, Fawkes was discovered guarding 36 barrels of gunpowder—enough to reduce the House of Lords to rubble—and arrested. Most of the conspirators fled from London as they learned of the plot's discovery, trying to enlist support along the way. Several made a stand against the pursuing Sheriff of Worcester and his men at Holbeche House; in the ensuing battle, Catesby was one of those shot and killed. At their trial on 27 January 1606, eight of the survivors, including Fawkes, were convicted and sentenced to be hanged, drawn and quartered.
Details of the assassination attempt were allegedly known by the principal Jesuit of England, Father Henry Garnet. Although he was convicted of treason and sentenced to death, doubt has been cast on how much he really knew of the plot. As its existence was revealed to him through confession, Garnet was prevented from informing the authorities by the absolute confidentiality of the confessional. Although anti-Catholic legislation was introduced soon after the plot's discovery, many important and loyal Catholics retained high office during King James's reign. The thwarting of the Gunpowder Plot was commemorated for many years afterwards by special sermons and other public events such as the ringing of church bells, which have evolved into the Bonfire Night of today.
Between 1533 and 1540, King Henry VIII took control of the English Church from Rome, the start of several decades of religious tension in England. English Catholics struggled in a society dominated by the newly separate and increasingly Protestant Church of England. Henry's daughter, Queen Elizabeth I, responded to the growing religious divide by introducing the Elizabethan Religious Settlement, which required anyone appointed to a public or church office to swear allegiance to the monarch as head of the Church and state. The penalties for refusal were severe; fines were imposed for recusancy, and repeat offenders risked imprisonment and execution. Catholicism became marginalised, but despite the threat of torture or execution, priests continued to practise their faith in secret.
Queen Elizabeth, unmarried and childless, steadfastly refused to name an heir. Many Catholics believed that her Catholic cousin, Mary, Queen of Scots, was the legitimate heir to the English throne, but she was executed for treason in 1587. The English Secretary of State, Robert Cecil, negotiated secretly with Mary's son and successor, King James VI of Scotland. In the months before Elizabeth's death on 24 March 1603, Cecil prepared the way for James to succeed her.
Some exiled Catholics favoured Philip II of Spain's daughter, Isabella, as Elizabeth's successor. More moderate Catholics looked to James's and Elizabeth's cousin Arbella Stuart, a woman thought to have Catholic sympathies. As Elizabeth's health deteriorated, the government detained those they considered to be the "principal papists", and the Privy Council grew so worried that Arbella Stuart was moved closer to London to prevent her from being kidnapped by papists.
Despite competing claims to the English throne, the transition of power following Elizabeth's death went smoothly. James's succession was announced by a proclamation from Cecil on 24 March, which was generally celebrated. Leading papists, rather than causing trouble as anticipated, reacted to the news by offering their enthusiastic support for the new monarch. Jesuit priests, whose presence in England was punishable by death, also demonstrated their support for James, who was widely believed to embody "the natural order of things". James ordered a ceasefire in the conflict with Spain, and even though the two countries were still technically at war, King Philip III sent his envoy, Don Juan de Tassis, to congratulate James on his accession. In the following year both countries signed the Treaty of London.
For decades, the English had lived under a monarch who refused to provide an heir, but James arrived with a family and a clear line of succession. His wife, Anne of Denmark, was the daughter of a king. Their eldest child, the nine-year-old Henry, was considered a handsome and confident boy, and their two younger children, Elizabeth and Charles, were proof that James was able to provide heirs to continue the Protestant monarchy.
James's attitude towards Catholics was more moderate than that of his predecessor, perhaps even tolerant. He swore that he would not "persecute any that will be quiet and give an outward obedience to the law", and believed that exile was a better solution than capital punishment: "I would be glad to have both their heads and their bodies separated from this whole island and transported beyond seas." Some Catholics believed that the martyrdom of James's mother, Mary, Queen of Scots, would encourage James to convert to the Catholic faith, and the Catholic houses of Europe may also have shared that hope. James received an envoy from Albert VII, ruler of the remaining Catholic territories in the Netherlands after over 30 years of war in the Dutch Revolt by English-supported Protestant rebels. For the Catholic expatriates engaged in that struggle, the restoration by force of a Catholic monarchy was an intriguing possibility, but following the failed Spanish invasion of England in 1588 the papacy had taken a longer-term view on the return of a Catholic monarch to the English throne.
During the late 16th century, Catholics made several assassination attempts on Protestant rulers in Europe and in England, including plans to poison Elizabeth I. The Jesuit Juan de Mariana's 1598 "On Kings and the Education of Kings" explicitly justified the assassination of the French king Henry III—who had been stabbed to death by a Catholic fanatic in 1589—and until the 1620s, some English Catholics believed that regicide was justifiable to remove tyrants from power. Much of the "rather nervous" James's political writing was "concerned with the threat of Catholic assassination and refutation of the [Catholic] argument that 'faith did not need to be kept with heretics'".
In the absence of any sign that James would move to end the persecution of Catholics, as some had hoped for, several members of the clergy (including two anti-Jesuit priests) decided to take matters into their own hands. In what became known as the Bye Plot, the priests William Watson and William Clark planned to kidnap James and hold him in the Tower of London until he agreed to be more tolerant towards Catholics. Cecil received news of the plot from several sources, including the Archpriest George Blackwell, who instructed his priests to have no part in any such schemes. At about the same time, Lord Cobham, Lord Grey de Wilton, Griffin Markham and Walter Raleigh hatched what became known as the Main Plot, which involved removing James and his family and supplanting them with Arbella Stuart. Amongst others, they approached Henry IV of France for funding, but were unsuccessful. All those involved in both plots were arrested in July and tried in autumn 1603; Sir George Brooke was executed, but James, keen not to have too bloody a start to his reign, reprieved Cobham, Grey, and Markham while they were at the scaffold. Raleigh, who had watched while his colleagues sweated, and who was due to be executed a few days later, was also pardoned. Arbella Stuart denied any knowledge of the Main Plot. The two priests, condemned and "very bloodily handled", were executed.
The Catholic community responded to news of these plots with shock. That the Bye Plot had been revealed by Catholics was instrumental in saving them from further persecution, and James was grateful enough to allow pardons for those recusants who sued for them, as well as postponing payment of their fines for a year.
On 19 February 1604, shortly after he discovered that his wife, Queen Anne, had been sent a rosary from the pope via one of James's spies, Sir Anthony Standen, James denounced the Catholic Church. Three days later, he ordered all Jesuits and all other Catholic priests to leave the country, and reimposed the collection of fines for recusancy. James changed his focus from the anxieties of English Catholics to the establishment of an Anglo-Scottish union. He also appointed Scottish nobles such as George Home to his court, which proved unpopular with the Parliament of England. Some Members of Parliament made it clear that in their view, the "effluxion of people from the Northern parts" was unwelcome, and compared them to "plants which are transported from barren ground into a more fertile one". Even more discontent resulted when the King allowed his Scottish nobles to collect the recusancy fines. There were 5,560 convicted of recusancy in 1605, of whom 112 were landowners. The very few Catholics of great wealth who refused to attend services at their parish church were fined £20 per month. Those of more moderate means had to pay two-thirds of their annual rental income; middle class recusants were fined one shilling a week, although the collection of all these fines was "haphazard and negligent". When James came to power, almost £5,000 a year (equivalent to almost £12 million in 2020) was being raised by these fines.
On 19 March, the King gave his opening speech to his first English Parliament in which he spoke of his desire to secure peace, but only by "profession of the true religion". He also spoke of a Christian union and reiterated his desire to avoid religious persecution. For the Catholics, the King's speech made it clear that they were not to "increase their number and strength in this Kingdom", that "they might be in hope to erect their Religion again". To Father John Gerard, these words were almost certainly responsible for the heightened levels of persecution the members of his faith now suffered, and for the priest Oswald Tesimond they were a rebuttal of the early claims that the King had made, upon which the papists had built their hopes. A week after James's speech, Lord Sheffield informed the king of over 900 recusants brought before the Assizes in Normanby, and on 24 April a Bill was introduced in Parliament which threatened to outlaw all English followers of the Catholic Church.
The conspirators' principal aim was to kill King James, but many other important targets would also be present at the State Opening, including the monarch's nearest relatives and members of the Privy Council. The senior judges of the English legal system, most of the Protestant aristocracy, and the bishops of the Church of England would all have attended in their capacity as members of the House of Lords, along with the members of the House of Commons. Another important objective was the kidnapping of the King's daughter, Elizabeth. Housed at Coombe Abbey near Coventry, she lived only ten miles north of Warwick—convenient for the plotters, most of whom lived in the Midlands. Once the King and his Parliament were dead, the plotters intended to install Elizabeth on the English throne as a titular Queen. The fate of her brothers, Henry and Charles, would be improvised; their role in state ceremonies was, as yet, uncertain. The plotters planned to use Henry Percy, 9th Earl of Northumberland, as Elizabeth's regent, but most likely never informed him of this.
Robert Catesby (1573–1605), a man of "ancient, historic and distinguished lineage", was the inspiration behind the plot. He was described by contemporaries as "a good-looking man, about six feet tall, athletic and a good swordsman". Along with several other conspirators, he took part in the Essex Rebellion in 1601, during which he was wounded and captured. Queen Elizabeth allowed him to escape with his life after fining him 4,000 marks (equivalent to more than £6 million in 2008), after which he sold his estate in Chastleton. In 1603 Catesby helped to organise a mission to the new king of Spain, Philip III, urging Philip to launch an invasion attempt on England, which they assured him would be well supported, particularly by the English Catholics. Thomas Wintour (1571–1606) was chosen as the emissary, but the Spanish king, although sympathetic to the plight of Catholics in England, was intent on making peace with James. Wintour had also attempted to convince the Spanish envoy Don Juan de Tassis that "3,000 Catholics" were ready and waiting to support such an invasion. Concern was voiced by Pope Clement VIII that using violence to achieve a restoration of Catholic power in England would result in the destruction of those that remained.
According to contemporary accounts, in February 1604 Catesby invited Thomas Wintour to his house in Lambeth, where they discussed Catesby's plan to re-establish Catholicism in England by blowing up the House of Lords during the State Opening of Parliament. Wintour was known as a competent scholar, able to speak several languages, and he had fought with the English army in the Netherlands. His uncle, Francis Ingleby, had been executed for being a Catholic priest in 1586, and Wintour later converted to Catholicism. Also present at the meeting was John Wright, a devout Catholic said to be one of the best swordsmen of his day, and a man who had taken part with Catesby in the Earl of Essex's rebellion three years earlier. Despite his reservations over the possible repercussions should the attempt fail, Wintour agreed to join the conspiracy, perhaps persuaded by Catesby's rhetoric: "Let us give the attempt and where it faileth, pass no further."
Wintour travelled to Flanders to enquire about Spanish support. While there he sought out Guy Fawkes (1570–1606), a committed Catholic who had served as a soldier in the Southern Netherlands under the command of William Stanley, and who in 1603 was recommended for a captaincy. Accompanied by John Wright's brother Christopher, Fawkes had also been a member of the 1603 delegation to the Spanish court pleading for an invasion of England. Wintour told Fawkes that "some good frends of his wished his company in Ingland", and that certain gentlemen "were uppon a resolution to doe some whatt in Ingland if the pece with Spain healped us nott". The two men returned to England late in April 1604, telling Catesby that Spanish support was unlikely. Thomas Percy, Catesby's friend and John Wright's brother-in-law, was introduced to the plot several weeks later. Percy had found employment with his kinsman the Earl of Northumberland, and by 1596 was his agent for the family's northern estates. About 1600–1601 he served with his patron in the Low Countries. At some point during Northumberland's command in the Low Countries, Percy became his agent in his communications with James. Percy was reputedly a "serious" character who had converted to the Catholic faith. His early years were, according to a Catholic source, marked by a tendency to rely on "his sword and personal courage". Northumberland, although not a Catholic himself, planned to build a strong relationship with James I in order to better the prospects of English Catholics, and to reduce the family disgrace caused by his separation from his wife Martha Wright, a favourite of Elizabeth I. Thomas Percy's meetings with James seemed to go well. Percy returned with promises of support for the Catholics, and Northumberland believed that James would go so far as to allow Mass in private houses, so as not to cause public offence. 
Percy, keen to improve his standing, went further, claiming that the future King would guarantee the safety of English Catholics.
The first meeting between the five conspirators took place on 20 May 1604, probably at the Duck and Drake Inn, just off the Strand, Thomas Wintour's usual residence when staying in London. Catesby, Thomas Wintour, and John Wright were in attendance, joined by Guy Fawkes and Thomas Percy. Alone in a private room, the five plotters swore an oath of secrecy on a prayer book. By coincidence, and ignorant of the plot, Father John Gerard (a friend of Catesby's) was celebrating Mass in another room, and the five men subsequently received the Eucharist.
Following their oath, the plotters left London and returned to their homes. The adjournment of Parliament gave them, they thought, until February 1605 to finalise their plans. On 9 June, Percy's patron, the Earl of Northumberland, appointed him to the Honourable Corps of Gentlemen at Arms, a mounted troop of 50 bodyguards to the King. This role gave Percy reason to seek a base in London, and a small property near the Prince's Chamber owned by Henry Ferrers, a tenant of John Whynniard, was chosen. Percy arranged for the use of the house through Northumberland's agents, Dudley Carleton and John Hippisley. Fawkes, using the pseudonym "John Johnson", took charge of the building, posing as Percy's servant. The building was occupied by Scottish commissioners appointed by the King to consider his plans for the unification of England and Scotland, so the plotters hired Catesby's lodgings in Lambeth, on the opposite bank of the Thames, from where their stored gunpowder and other supplies could be conveniently rowed across each night. Meanwhile, King James continued with his policies against the Catholics, and Parliament pushed through anti-Catholic legislation, until its adjournment on 7 July.
The conspirators returned to London in October 1604, when Robert Keyes, a "desperate man, ruined and indebted", was admitted to the group. His responsibility was to take charge of Catesby's house in Lambeth, where the gunpowder and other supplies were to be stored. Keyes's family had notable connections; his wife's employer was the Catholic Lord Mordaunt. Tall, with a red beard, he was seen as trustworthy and, like Fawkes, capable of looking after himself. In December Catesby recruited his servant, Thomas Bates, into the plot, after the latter accidentally became aware of it.
It was announced on 24 December that the re-opening of Parliament would be delayed. Concern over the plague meant that rather than sitting in February, as the plotters had originally planned, Parliament would not sit again until 3 October 1605. The contemporaneous account of the prosecution claimed that during this delay the conspirators were digging a tunnel beneath Parliament. This may have been a government fabrication, as no evidence for the existence of a tunnel was presented by the prosecution, and no trace of one has ever been found. The account of a tunnel comes directly from Thomas Wintour's confession, and Guy Fawkes did not admit the existence of such a scheme until his fifth interrogation. Logistically, digging a tunnel would have proved extremely difficult, especially as none of the conspirators had any experience of mining. If the story is true, by 6 December the Scottish commissioners had finished their work, and the conspirators were busy tunnelling from their rented house to the House of Lords. They ceased their efforts when, during tunnelling, they heard a noise from above. The noise turned out to be the then-tenant's widow, who was clearing out the undercroft directly beneath the House of Lords—the room where the plotters eventually stored the gunpowder.
By the time the plotters reconvened at the start of the old style new year on Lady Day, 25 March, three more had been admitted to their ranks; Robert Wintour, John Grant, and Christopher Wright. Wintour and Wright were obvious choices. Along with a small fortune, Robert Wintour inherited Huddington Court (a known refuge for priests) near Worcester, and was reputedly a generous and well-liked man. A devout Catholic, he married Gertrude, the daughter of John Talbot of Grafton, a prominent Worcestershire family of recusants. Christopher Wright (1568–1605), John's brother, had also taken part in the Earl of Essex's revolt and had moved his family to Twigmore in Lincolnshire, then known as something of a haven for priests. John Grant was married to Wintour's sister, Dorothy, and was lord of the manor of Norbrook near Stratford-upon-Avon. Reputed to be an intelligent, thoughtful man, he sheltered Catholics at his home at Snitterfield, and was another who had been involved in the Essex revolt of 1601.
In addition, 25 March was the day on which the plotters purchased the lease to the undercroft, owned by John Whynniard, near which they had supposedly been tunnelling. The Palace of Westminster in the early 17th century was a warren of buildings clustered around the medieval chambers, chapels, and halls of the former royal palace that housed both Parliament and the various royal law courts. The old palace was easily accessible; merchants, lawyers, and others lived and worked in the lodgings, shops and taverns within its precincts. Whynniard's building stood at a right angle to the House of Lords, alongside a passageway called Parliament Place, which itself led to Parliament Stairs and the River Thames. Undercrofts were common features at the time, used to house a variety of materials including food and firewood. Whynniard's undercroft, on the ground floor, was directly beneath the first-floor House of Lords, and may once have been part of the palace's medieval kitchen. Unused and filthy, its location was ideal for what the group planned to do.
In the second week of June Catesby met in London the principal Jesuit in England, Father Henry Garnet, and asked him about the morality of entering into an undertaking which might involve the destruction of the innocent, together with the guilty. Garnet answered that such actions could often be excused, but according to his own account later admonished Catesby during a second meeting in July in Essex, showing him a letter from the pope which forbade rebellion. Soon after, the Jesuit priest Oswald Tesimond told Garnet he had taken Catesby's confession, in the course of which he had learnt of the plot. Garnet and Catesby met for a third time on 24 July 1605, at the house of the wealthy Catholic Anne Vaux in Enfield Chase. Garnet decided that Tesimond's account had been given under the seal of the confessional, and that canon law therefore forbade him to repeat what he had heard. Without acknowledging that he was aware of the precise nature of the plot, Garnet attempted to dissuade Catesby from his course, to no avail. Garnet wrote to a colleague in Rome, Claudio Acquaviva, expressing his concerns about open rebellion in England. He also told Acquaviva that "there is a risk that some private endeavour may commit treason or use force against the King", and urged the pope to issue a public brief against the use of force.
According to Fawkes, 20 barrels of gunpowder were brought in at first, followed by 16 more on 20 July. The supply of gunpowder was theoretically controlled by the government, but it was easily obtained from illicit sources. On 28 July, the ever-present threat of the plague again delayed the opening of Parliament, this time until Tuesday 5 November. Fawkes left the country for a short time. The King, meanwhile, spent much of the summer away from the city, hunting. He stayed wherever was convenient, including on occasion at the houses of prominent Catholics. Garnet, convinced that the threat of an uprising had receded, travelled the country on a pilgrimage.
It is uncertain when Fawkes returned to England, but he was back in London by late August, when he and Wintour discovered that the gunpowder stored in the undercroft had decayed. More gunpowder was brought into the room, along with firewood to conceal it. The final three conspirators were recruited in late 1605. At Michaelmas, Catesby persuaded the staunchly Catholic Ambrose Rookwood to rent Clopton House near Stratford-upon-Avon. Rookwood was a young man with recusant connections, whose stable of horses at Coldham Hall in Stanningfield, Suffolk was an important factor in his enlistment. His parents, Robert Rookwood and Dorothea Drury, were wealthy landowners, and had educated their son at a Jesuit school near Calais. Everard Digby was a young man who was generally well liked, and lived at Gayhurst House in Buckinghamshire. He had been knighted by the King in April 1603, and was converted to Catholicism by Gerard. Digby and his wife, Mary Mulshaw, had accompanied the priest on his pilgrimage, and the two men were reportedly close friends. Digby was asked by Catesby to rent Coughton Court near Alcester. Digby also promised £1,500 after Percy failed to pay the rent due for the properties he had taken in Westminster. Finally, on 14 October Catesby invited Francis Tresham into the conspiracy. Tresham was the son of the Catholic Thomas Tresham, and a cousin to Robert Catesby—the two had been raised together. He was also the heir to his father's large fortune, which had been depleted by recusant fines, expensive tastes, and by Francis and Catesby's involvement in the Essex revolt.
Catesby and Tresham met at the home of Tresham's brother-in-law and cousin, Lord Stourton. In his confession, Tresham claimed that he had asked Catesby if the plot would damn their souls, to which Catesby had replied it would not, and that the plight of England's Catholics required that it be done. Catesby also apparently asked for £2,000, and the use of Rushton Hall in Northamptonshire. Tresham declined both offers (although he did give £100 to Thomas Wintour), and told his interrogators that he had moved his family from Rushton to London in advance of the plot; hardly the actions of a guilty man, he claimed.
The details of the plot were finalised in October, in a series of taverns across London and Daventry. Fawkes would be left to light the fuse and then escape across the Thames, while simultaneously a revolt in the Midlands would help to ensure the capture of the King's daughter, Elizabeth. Fawkes would leave for the continent, to explain events in England to the European Catholic powers.
The wives of those involved and Anne Vaux (a friend of Garnet who often shielded priests at her home) became increasingly concerned by what they suspected was about to happen. Several of the conspirators expressed worries about the safety of fellow Catholics who would be present in Parliament on the day of the planned explosion. Percy was concerned for his patron, Northumberland, and the young Earl of Arundel's name was brought up; Catesby suggested that a minor wound might keep him from the chamber on that day. The Lords Vaux, Montague, Monteagle, and Stourton were also mentioned. Keyes suggested warning Lord Mordaunt, his wife's employer, to derision from Catesby.
On Saturday 26 October, Monteagle (Tresham's brother-in-law) arranged a meal in a long-disused house at Hoxton. Suddenly a servant appeared saying he had been handed a letter for Lord Monteagle from a stranger in the road. Monteagle ordered it to be read aloud to the company. "By this prearranged manoeuvre Francis Tresham sought at the same time to prevent the Plot and forewarn his friends" (H Trevor-Roper).
Uncertain of the letter's meaning, Monteagle promptly rode to Whitehall and handed it to Cecil (then Earl of Salisbury). Salisbury informed the Earl of Worcester, considered to have recusant sympathies, and the suspected Catholic Henry Howard, 1st Earl of Northampton, but kept news of the plot from the King, who was busy hunting in Cambridgeshire and not expected back for several days. Monteagle's servant, Thomas Ward, had family connections with the Wright brothers, and sent a message to Catesby about the betrayal. Catesby, who had been due to go hunting with the King, suspected that Tresham was responsible for the letter, and with Thomas Wintour confronted the recently recruited conspirator. Tresham managed to convince the pair that he had not written the letter, but urged them to abandon the plot. Salisbury was already aware of certain stirrings before he received the letter, but did not yet know the exact nature of the plot, or who exactly was involved. He therefore elected to wait, to see how events unfolded.
The letter was shown to the King on Friday 1 November following his arrival back in London. Upon reading it, James immediately seized upon the word "blow" and felt that it hinted at "some stratagem of fire and powder", perhaps an explosion exceeding in violence the one that killed his father, Lord Darnley, at Kirk o' Field in 1567. Keen not to seem too intriguing, and wanting to allow the King to take the credit for unveiling the conspiracy, Salisbury feigned ignorance. The following day members of the Privy Council visited the King at the Palace of Whitehall and informed him that, based on the information that Salisbury had given them a week earlier, on Monday the Lord Chamberlain Thomas Howard, 1st Earl of Suffolk would undertake a search of the Houses of Parliament, "both above and below". On Sunday 3 November Percy, Catesby and Wintour had a final meeting, where Percy told his colleagues that they should "abide the uttermost triall", and reminded them of their ship waiting at anchor on the Thames. By 4 November Digby was ensconced with a "hunting party" at Dunchurch, ready to abduct Elizabeth. The same day, Percy visited the Earl of Northumberland—who was uninvolved in the conspiracy—to see if he could discern what rumours surrounded the letter to Monteagle. Percy returned to London and assured Wintour, John Wright, and Robert Keyes that they had nothing to be concerned about, and returned to his lodgings on Gray's Inn Road. That same evening Catesby, likely accompanied by John Wright and Bates, set off for the Midlands. Fawkes visited Keyes, and was given a pocket watch left by Percy, to time the fuse, and an hour later Rookwood received several engraved swords from a local cutler.
Although two accounts of the number of searches and their timing exist, according to the King's version, the first search of the buildings in and around Parliament was made on Monday 4 November—as the plotters were busy making their final preparations—by Suffolk, Monteagle, and John Whynniard. They found a large pile of firewood in the undercroft beneath the House of Lords, accompanied by what they presumed to be a serving man (Fawkes), who told them that the firewood belonged to his master, Thomas Percy. They left to report their findings, at which time Fawkes also left the building. The mention of Percy's name aroused further suspicion as he was already known to the authorities as a Catholic agitator. The King insisted that a more thorough search be undertaken. Late that night, the search party, headed by Thomas Knyvet, returned to the undercroft. They again found Fawkes, dressed in a cloak and hat, and wearing boots and spurs. He was arrested, whereupon he gave his name as John Johnson. He was carrying a lantern now held in the Ashmolean Museum, Oxford, and a search of his person revealed a pocket watch, several slow matches and touchwood. Thirty-six barrels of gunpowder were discovered hidden under piles of faggots and coal. Fawkes was taken to the King early on the morning of 5 November.
As news of "John Johnson's" arrest spread among the plotters still in London, most fled northwest, along Watling Street. Christopher Wright and Thomas Percy left together. Rookwood left soon after, and managed to cover 30 miles in two hours on one horse. He overtook Keyes, who had set off earlier, then Wright and Percy at Little Brickhill, before catching Catesby, John Wright, and Bates on the same road. Reunited, the group continued northwest to Dunchurch, using horses provided by Digby. Keyes went to Mordaunt's house at Drayton. Meanwhile, Thomas Wintour stayed in London, and even went to Westminster to see what was happening. When he realised the plot had been uncovered, he took his horse and made for his sister's house at Norbrook, before continuing to Huddington Court.
The group of six conspirators stopped at Ashby St Ledgers at about 6 pm, where they met Robert Wintour and updated him on their situation. They then continued on to Dunchurch, and met with Digby. Catesby convinced him that despite the plot's failure, an armed struggle was still a real possibility. He announced to Digby's "hunting party" that the King and Salisbury were dead, before the fugitives moved west to Warwick.
In London, news of the plot was spreading, and the authorities set extra guards on the city gates, closed the ports, and protected the house of the Spanish Ambassador, which was surrounded by an angry mob. An arrest warrant was issued against Thomas Percy, and his patron, the Earl of Northumberland, was placed under house arrest. In "John Johnson's" initial interrogation he revealed nothing other than the name of his mother, and that he was from Yorkshire. A letter to Guy Fawkes was discovered on his person, but he claimed that name was one of his aliases. Far from denying his intentions, "Johnson" stated that it had been his purpose to destroy the King and Parliament. Nevertheless, he maintained his composure and insisted that he had acted alone. His unwillingness to yield so impressed the King that he described him as possessing "a Roman resolution".
On 6 November, the Lord Chief Justice, Sir John Popham (a man with a deep-seated hatred of Catholics), questioned Rookwood's servants. By the evening he had learned the names of several of those involved in the conspiracy: Catesby, Rookwood, Keyes, Wynter, John and Christopher Wright, and Grant. "Johnson" meanwhile persisted with his story, and, along with the gunpowder he was found with, was moved to the Tower of London, where the King had decided that "Johnson" would be tortured. The use of torture was forbidden, except by royal prerogative or a body such as the Privy Council or Star Chamber. In a letter of 6 November James wrote: "The gentler tortours [tortures] are to be first used unto him, "et sic per gradus ad ima tenditur" [and thus by steps extended to the bottom depths], and so God speed your good work." "Johnson" may have been placed in manacles and hung from the wall, but he was almost certainly subjected to the horrors of the rack. On 7 November his resolve was broken; he confessed late that day, and again over the following two days.
On 6 November, with Fawkes maintaining his silence, the fugitives raided Warwick Castle for supplies and continued to Norbrook to collect weapons. From there they continued their journey to Huddington. Bates left the group and travelled to Coughton Court to deliver a letter from Catesby to Father Garnet and the other priests, informing them of what had transpired, and asking for their help in raising an army. Garnet replied by begging Catesby and his followers to stop their "wicked actions", before himself fleeing. Several priests set out for Warwick, worried about the fate of their colleagues. They were caught, and then imprisoned in London. Catesby and the others arrived at Huddington early in the afternoon, and were met by Thomas Wintour. They received practically no support or sympathy from those they met, including family members, who were terrified at the prospect of being associated with treason. They continued on to Holbeche House on the border of Staffordshire, the home of Stephen Littleton, a member of their ever-decreasing band of followers. Whilst there, Stephen Littleton and Thomas Wintour went to 'Pepperhill', the Shropshire residence of Sir John Talbot, to gain support, but to no avail. Tired and desperate, they spread out some of the now-soaked gunpowder in front of the fire, to dry out. Although gunpowder does not explode unless physically contained, a spark from the fire landed on the powder and the resultant flames engulfed Catesby, Rookwood, Grant, and a man named Morgan (a member of the hunting party).
Thomas Wintour and Littleton, on their way from Huddington to Holbeche House, were told by a messenger that Catesby had died. At that point, Littleton left, but Thomas arrived at the house to find Catesby alive, albeit scorched. John Grant was not so lucky, and had been blinded by the fire. Digby, Robert Wintour and his half-brother John, and Thomas Bates, had all left. Of the plotters, only the singed figures of Catesby and Grant, and the Wright brothers, Rookwood, and Percy, remained. The fugitives resolved to stay in the house and wait for the arrival of the King's men.
Richard Walsh (Sheriff of Worcestershire) and his company of 200 men besieged Holbeche House on the morning of 8 November. Thomas Wintour was hit in the shoulder while crossing the courtyard. John Wright was shot, followed by his brother, and then Rookwood. Catesby and Percy were reportedly killed by a single lucky shot. The attackers rushed the property, and stripped the dead or dying defenders of their clothing. Grant, Morgan, Rookwood, and Wintour were arrested.
Bates and Keyes were captured shortly after Holbeche House was taken. Digby, who had intended to give himself up, was caught by a small group of pursuers. Tresham was arrested on 12 November, and taken to the Tower three days later. Montague, Mordaunt, and Stourton (Tresham's brother-in-law) were also imprisoned in the Tower. The Earl of Northumberland joined them on 27 November. Meanwhile the government used the revelation of the plot to accelerate its persecution of Catholics. The home of Anne Vaux at Enfield Chase was searched, revealing the presence of trap doors and hidden passages. A terrified servant then revealed that Garnet, who had often stayed at the house, had recently given a Mass there. Father John Gerard was secreted at the home of Elizabeth Vaux, in Harrowden. Vaux was taken to London for interrogation. There she was resolute; she had never been aware that Gerard was a priest, she had presumed he was a "Catholic gentleman", and she did not know of his whereabouts. The homes of the conspirators were searched, and looted; Mary Digby's household was ransacked, and she was made destitute. Some time before the end of November, Garnet moved to Hindlip Hall near Worcester, the home of the Habingtons, where he wrote a letter to the Privy Council protesting his innocence.
The foiling of the Gunpowder Plot initiated a wave of national relief at the delivery of the King and his sons, and inspired in the ensuing parliament a mood of loyalty and goodwill, which Salisbury astutely exploited to extract higher subsidies for the King than any (bar one) granted in Elizabeth I's reign. Walter Raleigh, who was languishing in the Tower owing to his involvement in the Main Plot, and whose wife was a first cousin of Lady Catesby, declared he had had no knowledge of the conspiracy. The Bishop of Rochester gave a sermon at St. Paul's Cross, in which he condemned the plot. In his speech to both Houses on 9 November, James expounded on two emerging preoccupations of his monarchy: the divine right of kings and the Catholic question. He insisted that the plot had been the work of only a few Catholics, not of the English Catholics as a whole, and he reminded the assembly to rejoice at his survival, since kings were divinely appointed and he owed his escape to a miracle. Salisbury wrote to his English ambassadors abroad, informing them of what had occurred, and also reminding them that the King bore no ill will to his Catholic neighbours. The foreign powers largely distanced themselves from the plotters, calling them atheists and Protestant heretics.
Sir Edward Coke was in charge of the interrogations. Over a period of about ten weeks, in the Lieutenant's Lodgings at the Tower of London (now known as the Queen's House) he questioned those who had been implicated in the plot. For the first round of interrogations, no real proof exists that these people were tortured, although on several occasions Salisbury certainly suggested that they should be. Coke later revealed that the threat of torture was in most cases enough to elicit a confession from those caught up in the aftermath of the plot.
Only two confessions were printed in full: Fawkes's confession of 8 November, and Wintour's of 23 November. Having been involved in the conspiracy from the start (unlike Fawkes), Wintour was able to give extremely valuable information to the Privy Council. The handwriting on his testimony is almost certainly that of the man himself, but his signature was markedly different. Wintour had previously only ever signed his name as such, but his confession is signed "Winter", and since he had been shot in the shoulder, the steady hand used to write the signature may indicate some measure of government interference—or it may indicate that writing a shorter version of his name was less painful. Wintour's testimony makes no mention of his brother, Robert. Both were published in the so-called "King's Book", a hastily written official account of the conspiracy published in late November 1605.
Henry Percy, Earl of Northumberland, was in a difficult position. His midday dinner with Thomas Percy on 4 November was damning evidence against him, and after Thomas Percy's death there was nobody who could either implicate him or clear him. The Privy Council suspected that Northumberland would have been Princess Elizabeth's protector had the plot succeeded, but there was insufficient evidence to convict him. Northumberland remained in the Tower and on 27 June 1606 was finally charged with contempt. He was stripped of all public offices, fined £30,000, and kept in the Tower until June 1621. The Lords Mordaunt and Stourton were tried in the Star Chamber. They were condemned to imprisonment in the Tower, where they remained until 1608, when they were transferred to the Fleet Prison. Both were also given significant fines.
Several other people not involved in the conspiracy, but known or related to the conspirators, were also questioned. Northumberland's brothers, Sir Allen and Sir Josceline, were arrested. Anthony-Maria Browne, 2nd Viscount Montagu had employed Fawkes at an early age, and had also met Catesby on 29 October, and was therefore of interest; he was released several months later. Agnes Wenman was from a Catholic family, and related to Elizabeth Vaux. She was examined twice but the charges against her were eventually dropped. Percy's secretary and later the controller of Northumberland's household, Dudley Carleton, had leased the vault where the gunpowder was stored, and consequently he was imprisoned in the Tower. Salisbury believed his story, and authorised his release.
Thomas Bates confessed on 4 December, providing much of the information that Salisbury needed to link the Catholic clergy to the plot. Bates had been present at most of the conspirators' meetings, and under interrogation he implicated Father Tesimond in the plot. On 13 January 1606 he described how he had visited Garnet and Tesimond on 7 November to inform Garnet of the plot's failure. Bates also told his interrogators of his ride with Tesimond to Huddington, before the priest left him to head for the Habingtons at Hindlip Hall, and of a meeting between Garnet, Gerard, and Tesimond in October 1605. At about the same time in December, Tresham's health began to deteriorate. He was visited regularly by his wife, a nurse, and his servant William Vavasour, who documented his strangury. Before he died Tresham had also told of Garnet's involvement with the 1603 mission to Spain, but in his last hours he retracted some of these statements. Nowhere in his confession did he mention the Monteagle letter. He died early on the morning of 23 December, and was buried in the Tower. Nevertheless, he was attainted along with the other plotters; his head was set on a pike at either Northampton or London Bridge, and his estates were confiscated.
On 15 January a proclamation named Father Garnet, Father Gerard, and Father Greenway (Tesimond) as wanted men. Tesimond and Gerard managed to escape the country and live out their days in freedom; Garnet was not so lucky. Several days earlier, on 9 January, Robert Wintour and Stephen Littleton were captured. Their hiding place at Hagley, the home of Humphrey Littleton (brother of MP John Littleton, imprisoned for treason in 1601 for his part in the Essex revolt) was betrayed by a cook, who grew suspicious of the amount of food sent up for his master's consumption. Humphrey denied the presence of the two fugitives, but another servant led the authorities to their hiding place. On 20 January the local Justice and his retainers arrived at Thomas Habington's home, Hindlip Hall, to arrest the Jesuits. Despite Thomas Habington's protests, the men spent the next four days searching the house. On 24 January, starving, two priests left their hiding places and were discovered. Humphrey Littleton, who had escaped from the authorities at Hagley, got as far as Prestwood in Staffordshire before he was captured. He was imprisoned, and then condemned to death at Worcester. On 26 January, in exchange for his life, he told the authorities where they could find Father Garnet. Worn down by hiding for so long, Garnet, accompanied by another priest, emerged from his priest hole the next day.
By coincidence, on the same day that Garnet was found, the surviving conspirators were arraigned in Westminster Hall. Seven of the prisoners were taken from the Tower to the Star Chamber by barge. Bates, who was considered lower class, was brought from the Gatehouse Prison. Some of the prisoners were reportedly despondent, but others were nonchalant, even smoking tobacco. The King and his family, hidden from view, were among the many who watched the trial. The Lords Commissioners present were the Earls of Suffolk, Worcester, Northampton, Devonshire, and Salisbury. Sir John Popham was Lord Chief Justice, Sir Thomas Fleming was Lord Chief Baron of the Exchequer, and two Justices, Sir Thomas Walmsley and Sir Peter Warburton, sat as Justices of the Common Pleas. The list of traitors' names was read aloud, beginning with those of the priests: Garnet, Tesimond, and Gerard.
The first to speak was the Speaker of the House of Commons (later Master of the Rolls), Sir Edward Philips, who described the intent behind the plot in lurid detail. He was followed by the Attorney-General Sir Edward Coke, who began with a long speech—the content of which was heavily influenced by Salisbury—that included a denial that the King had ever made any promises to the Catholics. Monteagle's part in the discovery of the plot was welcomed, and denunciations of the 1603 mission to Spain featured strongly. Fawkes's protestations that Gerard knew nothing of the plot were omitted from Coke's speech. The foreign powers, when mentioned, were accorded due respect, but the priests were accursed, their behaviour analysed and criticised wherever possible. There was little doubt, according to Coke, that the plot had been invented by the Jesuits. Garnet's meeting with Catesby, at which the former was said to have absolved the latter of any blame in the plot, was proof enough that the Jesuits were central to the conspiracy; according to Coke the Gunpowder Plot would always be known as the Jesuit Treason. Coke spoke with feeling of the probable fate of the Queen and the rest of the King's family, and of the innocents who would have been caught up in the explosion.
Each of the condemned, said Coke, would be drawn backwards to his death, by a horse, his head near the ground. He was to be "put to death halfway between heaven and earth as unworthy of both". His genitals would be cut off and burnt before his eyes, and his bowels and heart then removed. Then he would be decapitated, and the dismembered parts of his body displayed so that they might become "prey for the fowls of the air". Confessions and declarations from the prisoners were then read aloud, and finally the prisoners were allowed to speak. Rookwood claimed that he had been drawn into the plot by Catesby, "whom he loved above any worldly man". Thomas Wintour begged to be hanged for himself and his brother, so that his brother might be spared. Fawkes explained his not guilty plea as ignorance of certain aspects of the indictment. Keyes appeared to accept his fate, Bates and Robert Wintour begged for mercy, and Grant explained his involvement as "a conspiracy intended but never effected". Only Digby, tried on a separate indictment, pleaded guilty, insisting that the King had reneged upon promises of toleration for Catholics, and that affection for Catesby and love of the Catholic cause mitigated his actions. He sought death by the axe and begged mercy from the King for his young family. His defence was in vain; his arguments were rebuked by Coke and Northampton, and along with his seven co-conspirators, he was found guilty by the jury of high treason. Digby shouted "If I may but hear any of your lordships say, you forgive me, I shall go more cheerfully to the gallows." The response was short: "God forgive you, and we do."
Garnet may have been questioned on as many as 23 occasions. His response to the threat of the rack was "Minare ista pueris" [Threats are only for boys], and he denied having encouraged Catholics to pray for the success of the "Catholic Cause". His interrogators resorted to the forgery of correspondence between Garnet and other Catholics, but to no avail. His jailers then allowed him to talk with another priest in a neighbouring cell, with eavesdroppers listening to every word. Eventually Garnet let slip a crucial piece of information, that there was only one man who could testify that he had any knowledge of the plot. Under torture Garnet admitted that he had heard of the plot from fellow Jesuit Oswald Tesimond, who had learnt of it in confession from Catesby. Garnet was charged with high treason and tried in the Guildhall on 28 March, in a trial lasting from 8 am until 7 pm. According to Coke, Garnet instigated the plot: "[Garnet] hath many gifts and endowments of nature, by art learned, a good linguist and, by profession, a Jesuit and a Superior as indeed he is Superior to all his predecessors in devilish treason, a Doctor of Dissimulation, Deposing of Princes, Disposing of Kingdoms, Daunting and deterring of subjects, and Destruction." Garnet refuted all the charges against him, and explained the Catholic position on such matters, but he was nevertheless found guilty and sentenced to death.
Although Catesby and Percy escaped the executioner, their bodies were exhumed and decapitated, and their heads exhibited on spikes outside the House of Lords. On a cold 30 January, Everard Digby, Robert Wintour, John Grant, and Thomas Bates, were tied to hurdles—wooden panels—and dragged through the crowded streets of London to St Paul's Churchyard. Digby, the first to mount the scaffold, asked the spectators for forgiveness, and refused the attentions of a Protestant clergyman. He was stripped of his clothing, and wearing only a shirt, climbed the ladder to place his head through the noose. He was quickly cut down, and while still fully conscious was castrated, disembowelled, and then quartered, along with the three other prisoners. The following day, Thomas Wintour, Ambrose Rookwood, Robert Keyes, and Guy Fawkes were hanged, drawn and quartered, opposite the building they had planned to blow up, in the Old Palace Yard at Westminster. Keyes did not wait for the hangman's command and jumped from the gallows, but he survived the drop and was led to the quartering block. Although weakened by his torture, Fawkes managed to jump from the gallows and break his neck, thus avoiding the agony of the gruesome latter part of his execution.
Stephen Littleton was executed at Stafford. His cousin Humphrey, despite his co-operation with the authorities, met his end at Red Hill near Worcester. Henry Garnet's execution took place on 3 May 1606.
Greater freedom for Roman Catholics to worship as they chose seemed unlikely in 1604, but the discovery of such a wide-ranging conspiracy, the capture of those involved, and the subsequent trials, led Parliament to consider introducing new anti-Catholic legislation. The event also destroyed all hope that the Spanish would ever secure tolerance of the Catholics in England. In the summer of 1606, laws against recusancy were strengthened; the Popish Recusants Act returned England to the Elizabethan system of fines and restrictions, introduced a sacramental test, and an Oath of Allegiance, requiring Catholics to abjure as a "heresy" the doctrine that "princes excommunicated by the Pope could be deposed or assassinated". Catholic Emancipation took another 200 years, but many important and loyal Catholics retained high office during King James I's reign. Although there was no "golden time" of "toleration" of Catholics, which Father Garnet had hoped for, James's reign was nevertheless a period of relative leniency for Catholics, and few were subject to prosecution.
The playwright William Shakespeare had already used the history of Northumberland's family in his "Henry IV" series of plays, and the events of the Gunpowder Plot seem to have featured alongside the earlier Gowrie conspiracy in "Macbeth", written some time between 1603 and 1607. Interest in the demonic was heightened by the Gunpowder Plot. The King had become engaged in the great debate about other-worldly powers in writing his "Daemonologie" in 1597, before he became King of England as well as Scotland. Inversions seen in such lines as "fair is foul and foul is fair" are used frequently, and another possible reference to the plot relates to the use of equivocation; Garnet's "A Treatise of Equivocation" was found on one of the plotters. Another writer influenced by the plot was John Milton, who in 1626 wrote what one commentator has called a "critically vexing poem", "In Quintum Novembris". Reflecting "partisan public sentiment on an English-Protestant national holiday", in the published editions of 1645 and 1673 the poem is preceded by five epigrams on the subject of the Gunpowder Plot, apparently written by Milton in preparation for the larger work. The plot may also have influenced his later work, "Paradise Lost".
The Gunpowder Plot was commemorated for years by special sermons and other public acts, such as the ringing of church bells. It added to an increasingly full calendar of Protestant celebrations that contributed to the national and religious life of 17th-century England, and has evolved into the Bonfire Night of today. In "What If the Gunpowder Plot Had Succeeded?" historian Ronald Hutton considered the events which might have followed a successful implementation of the plot, and the destruction of the House of Lords and all those within it. He concluded that a severe backlash against suspected Catholics would have followed, and that without foreign assistance a successful rebellion would have been unlikely; despite differing religious convictions, most Englishmen were loyal to the institution of the monarchy. England might have become a more "Puritan absolute monarchy", as "existed in Sweden, Denmark, Saxony, and Prussia in the seventeenth century", rather than following the path of parliamentary and civil reform that it did.
Many at the time felt that Salisbury had been involved in the plot to gain favour with the King and enact more stridently anti-Catholic legislation. Such conspiracy theories alleged that Salisbury had either actually invented the plot or allowed it to continue when his agents had already infiltrated it, for the purposes of propaganda. The Popish Plot of 1678 sparked renewed interest in the Gunpowder Plot, resulting in a book by Thomas Barlow, Bishop of Lincoln, which refuted "a bold and groundless surmise that all this was a contrivance of Secretary Cecil".
In 1897 Father John Gerard of Stonyhurst College, namesake of John Gerard (who, following the plot's discovery, had evaded capture), wrote an account called "What was the Gunpowder Plot?", alleging Salisbury's culpability. This prompted a refutation later that year by Samuel Gardiner, who argued that Gerard had gone too far in trying to "wipe away the reproach" which the plot had exacted on generations of English Catholics. Gardiner portrayed Salisbury as guilty of nothing more than opportunism. Subsequent attempts to prove Salisbury's involvement, such as Francis Edwards's 1969 work "Guy Fawkes: the real story of the gunpowder plot?", have similarly foundered on the lack of any clear evidence.
The cellars under the Houses of Parliament continued to be leased out to private individuals until 1678, when news of the Popish Plot broke. It was then considered prudent to search the cellars on the day before each State Opening of Parliament, a ritual that survives to this day.
In January 1606, during the first sitting of Parliament since the plot, the Observance of 5th November Act 1605 was passed, making services and sermons commemorating the event an annual feature of English life; the act remained in force until 1859. The tradition of marking the day with the ringing of church bells and bonfires started soon after the Plot's discovery, and fireworks were included in some of the earliest celebrations. In Britain, 5 November is variously called Bonfire Night, Fireworks Night, or Guy Fawkes Night.
It remains the custom in Britain, on or around 5 November, to let off fireworks. Traditionally, in the weeks running up to the 5th, children made "guys" (effigies supposedly of Fawkes) from old clothes stuffed with newspaper, fitted with a grotesque mask, to be burnt on the 5 November bonfire. These guys were exhibited in the street to collect money for fireworks, although this custom has become less common. The word guy thus came in the 19th century to mean an oddly dressed person, and hence in the 20th and 21st centuries to mean any male person.
5 November firework displays and bonfire parties are common throughout Britain, in major public displays and in private gardens. In some areas, particularly in Sussex, there are extensive processions, large bonfires and firework displays organised by local bonfire societies, the most elaborate of which take place in Lewes.
According to the biographer Esther Forbes, the Guy Fawkes Day celebration in the pre-revolutionary American colonies was a very popular holiday. In Boston, the revelry on "Pope Night" took on anti-authoritarian overtones, and often became so dangerous that many would not venture out of their homes.
In a 2005 ITV programme, a full-size replica of the House of Lords was built and destroyed with barrels of gunpowder, totalling 1 metric tonne of explosives. The experiment was conducted on the Advantica-owned Spadeadam test site and demonstrated that the explosion, if the gunpowder was in good order, would have killed all those in the building. The power of the explosion was such that, of the deep concrete walls making up the undercroft (replicating how archives suggest the walls of the old House of Lords were constructed), the end wall against which the barrels had been placed, beneath the throne, was reduced to rubble, and the adjacent surviving portions of wall were shoved away. Measuring devices placed in the chamber to calculate the force of the blast went off the scale just before they were destroyed by the explosion; a piece of the head of the dummy representing King James, which had been placed on a throne inside the chamber surrounded by courtiers, peers and bishops, was found a considerable distance from its initial location. According to the findings of the programme, no one in the vicinity of the blast could have survived, and all of the stained glass windows in Westminster Abbey would have been shattered, as would all of the windows near the Palace. The explosion would have been seen from miles away, and heard from further away still. Even if only half of the gunpowder had gone off, which Fawkes was apparently prepared for, everyone in the House of Lords and its environs would have been killed instantly.
The programme also disproved claims that some deterioration in the quality of the gunpowder would have prevented the explosion. A portion of deliberately deteriorated gunpowder, of such low quality as to make it unusable in firearms, still managed to create a large explosion when placed in a heap and ignited. The impact of even deteriorated gunpowder would have been magnified by its containment in wooden barrels, compensating for the quality of the contents. The compression would have created a cannon effect, with the powder first blowing up from the top of the barrel before, a millisecond later, blowing out. Calculations showed that Fawkes, who was skilled in the use of gunpowder, had deployed double the amount needed. In a test detonation of all of the period-accurate gunpowder available in the UK, inside the same size of barrel Fawkes had used, the experts for the project were surprised at how much more powerful the effect of that compression was in creating an explosion.
Some of the gunpowder guarded by Fawkes may have survived. In March 2002 workers cataloguing archives of diarist John Evelyn at the British Library found a box containing a number of gunpowder samples, including a compressed bar with a note in Evelyn's handwriting stating that it had belonged to Guy Fawkes. A further note, written in the 19th century, confirmed this provenance, although in 1952 the document acquired a new comment: "but there was none left!"
Gelatin
Gelatin or gelatine (from a Latin word meaning "stiff" or "frozen") is a translucent, colorless, flavorless food ingredient, derived from collagen taken from animal body parts. It is brittle when dry and gummy when moist. It may also be referred to as hydrolyzed collagen, collagen hydrolysate, gelatine hydrolysate, hydrolyzed gelatine, and collagen peptides after it has undergone hydrolysis. It is commonly used as a gelling agent in food, medications, drug and vitamin capsules, photographic films and papers, and cosmetics.
Substances containing gelatin or functioning in a similar way are called gelatinous substances. Gelatin is an irreversibly hydrolyzed form of collagen, wherein the hydrolysis reduces protein fibrils into smaller peptides; depending on the physical and chemical methods of denaturation, the molecular weight of the peptides falls within a broad range. Gelatin is in gelatin desserts; most gummy candy and marshmallows; and ice creams, dips, and yogurts. Gelatin for cooking comes as powder, granules, and sheets. Instant types can be added to the food as they are; others must soak in water beforehand.
Hydrolysis results in the reduction of collagen protein fibrils of about 300,000 Da into smaller peptides. Depending upon the process of hydrolysis, peptides will have broad molecular weight ranges associated with physical and chemical methods of denaturation.
The amino acid content of hydrolyzed collagen is the same as collagen. Hydrolyzed collagen contains 19 amino acids, predominantly glycine, proline and hydroxyproline, which together represent around 50% of the total amino acid content.
Hydrolyzed collagen contains 8 out of 9 essential amino acids. It also contains glycine and arginine—two amino-acid precursors necessary for the biosynthesis of creatine. It contains no tryptophan and is deficient in isoleucine, threonine, and methionine.
The bioavailability of hydrolyzed collagen in mice was demonstrated in a 1999 study; orally administered ¹⁴C-labelled hydrolyzed collagen was digested and more than 90% absorbed within 6 hours, with measurable accumulation in cartilage and skin.
A 2005 study in humans found hydrolyzed collagen absorbed as small peptides in the blood.
Ingestion of hydrolyzed collagen may affect the skin by increasing the density of collagen fibrils and fibroblasts, thereby stimulating collagen production. It has been suggested, based on mouse and in vitro studies, that hydrolyzed collagen peptides have chemotactic properties on fibroblasts or an influence on growth of fibroblasts.
Some clinical studies report that oral ingestion of hydrolyzed collagen decreases joint pain, with those with the most severe symptoms showing the most benefit. The beneficial action is likely due to hydrolyzed collagen accumulating in the cartilage and stimulating production of collagen by the chondrocytes, the cells of cartilage. Several studies have shown that a daily intake of hydrolyzed collagen increases bone mass density in rats. It appears that hydrolyzed collagen peptides stimulate the differentiation and activity of osteoblasts (the cells that build bone) over that of osteoclasts (the cells that resorb bone).
However, other clinical trials have yielded mixed results. In 2011, the European Food Safety Authority Panel on Dietetic Products, Nutrition and Allergies concluded that "a cause and effect relationship has not been established between the consumption of collagen hydrolysate and maintenance of joints". Four other studies reported benefit with no side effects; however, the studies were not extensive, and all recommended further controlled study. One study found that oral collagen only improved symptoms in a minority of patients and reported nausea as a side effect. Another study reported no improvement in disease activity in patients with rheumatoid arthritis. Another study found that collagen treatment may actually cause an exacerbation of rheumatoid arthritis symptoms.
Hydrolyzed collagen, like gelatin, is made from animal by-products from the meat industry or sometimes animal carcasses removed and cleared by knackers, including skin, bones, and connective tissue.
In 1997, the U.S. Food and Drug Administration (FDA), with support from the TSE (transmissible spongiform encephalopathy) Advisory Committee, began monitoring the potential risk of transmitting animal diseases, especially bovine spongiform encephalopathy (BSE), commonly known as "mad cow disease". An FDA study from that year stated: "...steps such as heat, alkaline treatment, and filtration could be effective in reducing the level of contaminating TSE agents; however, scientific evidence is insufficient at this time to demonstrate that these treatments would effectively remove the BSE infectious agent if present in the source material." On 18 March 2016 the FDA finalized three previously-issued interim final rules designed to further reduce the potential risk of BSE in human food. The final rule clarified that "gelatin is not considered a prohibited cattle material if it is manufactured using the customary industry processes specified."
The Scientific Steering Committee (SSC) of the European Union in 2003 stated that the risk associated with bovine bone gelatin is very low or zero.
In 2006, the European Food Safety Authority confirmed the SSC opinion that the BSE risk of bone-derived gelatin was small, and recommended withdrawing the 2003 request to exclude the skull, brain, and vertebrae of bovine origin older than 12 months from the material used in gelatin manufacturing.
In cosmetics, hydrolyzed collagen may be found in topical creams, acting as a product texture conditioner and moisturizer. Collagen implants or dermal fillers are also used to address the appearance of wrinkles, contour deficiencies, and acne scars, among others. The U.S. Food and Drug Administration has approved their use, and identifies cow (bovine) and human cells as the sources of these fillers. According to the FDA, the desired effects can last for 3–4 months, the shortest duration among materials used for the same purpose.
Gelatin is a mixture of peptides and proteins produced by partial hydrolysis of collagen extracted from the skin, bones, and connective tissues of animals such as domesticated cattle, chicken, pigs, and fish. During hydrolysis, the natural molecular bonds between individual collagen strands are broken down into a form that rearranges more easily. Its chemical composition is, in many aspects, closely similar to that of its parent collagen. Photographic and pharmaceutical grades of gelatin generally are sourced from cattle bones and pig skin. Gelatin's polypeptide chains contain proline, hydroxyproline, and glycine; glycine allows close packing of the chains, while proline restricts the conformation, both of which are important for the gelation properties of gelatin.
Gelatin readily dissolves in hot water and sets to a gel on cooling. When added directly to cold water, it does not dissolve well, however. Gelatin also is soluble in most polar solvents. Gelatin solutions show viscoelastic flow and streaming birefringence. Solubility is determined by the method of manufacture. Typically, gelatin can be dispersed in a relatively concentrated acid. Such dispersions are stable for 10–15 days with little or no chemical changes and are suitable for coating purposes or for extrusion into a precipitating bath.
The mechanical properties of gelatin gels are very sensitive to temperature, to the previous thermal history of the gel, and to elapsed time. These gels exist over only a small temperature range: the upper limit is the melting point of the gel, which depends on gelatin grade and concentration, and the lower limit is the freezing point at which ice crystallizes. The upper melting point is below human body temperature, a factor that is important for the mouthfeel of foods produced with gelatin. The viscosity of the gelatin-water mixture is greatest when the gelatin concentration is high and the mixture is kept cool. The gel strength is quantified using the Bloom test. Gelatin's strength (but not its viscosity) declines if it is subjected to elevated temperatures, or if it is held at temperatures near 100 °C for an extended period of time.
Worldwide, gelatin is produced on a large commercial scale from by-products of the meat and leather industries.
Most gelatin is derived from pork skins, pork and cattle bones, or split cattle hides. Gelatin made from fish by-products avoids some of the religious objections to gelatin consumption. The raw materials are prepared by different curing, acid, and alkali processes that are employed to extract the dried collagen hydrolysate. These processes may take several weeks, and differences in such processes have great effects on the properties of the final gelatin products.
Gelatin also can be prepared at home. Boiling certain cartilaginous cuts of meat or bones results in gelatin being dissolved into the water. Depending on the concentration, the resulting stock (when cooled) will form a jelly or gel naturally. This process is used for aspic.
While many processes exist whereby collagen may be converted to gelatin, they all have several factors in common. The intermolecular and intramolecular bonds that stabilize insoluble collagen must be broken, as must the hydrogen bonds that stabilize the collagen helix. The manufacture of gelatin consists of several main stages:
If the raw material used in the production of the gelatin is derived from bones, dilute acid solutions are used to remove calcium and other salts. Hot water or several solvents may be used to reduce the fat content, which should not exceed 1% before the main extraction step. If the raw material consists of hides and skins, size reduction, washing, removal of hair, and degreasing are necessary to prepare them for the hydrolysis step.
After preparation of the raw material, i.e., removing some of the impurities such as fat and salts, partially purified collagen is converted into gelatin through hydrolysis. Collagen hydrolysis is performed by one of three different methods: acid-, alkali-, and enzymatic hydrolysis. Acid treatment is especially suitable for less fully cross-linked materials such as pig skin collagen and normally requires 10 to 48 hours. Alkali treatment is suitable for more complex collagen such as that found in bovine hides and requires more time, normally several weeks. The purpose of the alkali treatment is to destroy certain chemical crosslinks still present in collagen. Within the gelatin industry, the gelatin obtained from acid-treated raw material has been called type-A gelatin and the gelatin obtained from alkali-treated raw material is referred to as type-B gelatin.
Advances are being made to optimize the yield of gelatin using enzymatic hydrolysis of collagen. The treatment time is shorter than that required for alkali treatment and results in almost complete conversion to the pure product. The physical properties of the final gelatin product are also considered better.
Extraction is performed with either water or acid solutions at appropriate temperatures. All industrial processes are based on neutral or acid pH values because although alkali treatments speed up conversion, they also promote degradation processes. Acidic extraction conditions are extensively used in the industry, but the degree of acid varies with different processes. This extraction step is a multistage process, and the extraction temperature usually is increased in later extraction steps, which ensures minimum thermal degradation of the extracted gelatin.
This process includes several steps such as filtration, evaporation, drying, grinding, and sifting. These operations are concentration-dependent and also dependent on the particular gelatin used. Gelatin degradation should be avoided and minimized, so the lowest temperature possible is used for the recovery process. Most recoveries are rapid, with all of the processes being done in several stages to avoid extensive deterioration of the peptide structure. A deteriorated peptide structure would result in a low gel strength, which is not generally desired.
The first use of gelatin in foods is documented in the 15th century in medieval Britain, where cattle hooves were boiled for extended periods of time to produce a gel. This process was laborious and time-consuming, confined mainly to wealthier households. The first recorded English patent for gelatin production was granted in 1754. By the late 17th century, the French inventor Denis Papin had discovered another method of gelatin extraction via boiling of bones. In 1812, the chemist Jean-Pierre-Joseph d'Arcet further experimented with the use of hydrochloric acid to extract gelatin from bones, and later with steam extraction, which was much more efficient. The French government viewed gelatin as a potential source of cheap, accessible protein for the poor, particularly in Paris. Food applications in France and the United States during the 19th century appear to have established the versatility of gelatin, including the origin of its popularity in the US as Jell-O. From the mid-1800s, Charles and Rose Knox of New York manufactured and marketed gelatin powder, diversifying the appeal and applications of gelatin.
Probably best known as a gelling agent in cooking, different types and grades of gelatin are used in a wide range of food and nonfood products. Common examples of foods that contain gelatin are gelatin desserts, trifles, aspic, marshmallows, candy corn, and confections such as Peeps, gummy bears, fruit snacks, and jelly babies. Gelatin may be used as a stabilizer, thickener, or texturizer in foods such as yogurt, cream cheese, and margarine; it is used, as well, in fat-reduced foods to simulate the mouthfeel of fat and to create volume. It also is used in the production of several types of Chinese soup dumplings, specifically Shanghainese soup dumplings, or "xiaolongbao", as well as "Shengjian mantou", a type of fried and steamed dumpling. The fillings of both are made by combining ground pork with gelatin cubes, and in the process of cooking, the gelatin melts, creating a soupy interior with a characteristic gelatinous stickiness.
Gelatin is used for the clarification of juices, such as apple juice, and of vinegar.
Isinglass is obtained from the swim bladders of fish. It is used as a fining agent for wine and beer. Besides hartshorn jelly, from deer antlers (hence the name "hartshorn"), isinglass was one of the oldest sources of gelatin.
The consumption of gelatin from particular animals may be forbidden by religious rules or cultural taboos. For example, Islamic halal and Jewish kosher customs require gelatin from sources other than pigs, such as cattle (that have been slaughtered according to the religious regulations) or fish (that they are allowed to consume). Roma people are cautious of gelatin products that may have been made from horses, as their culture forbids the consumption of horses. Some companies specify the source of the gelatin used.
Vegans and vegetarians do not eat foods containing gelatin made from animals. Likewise, Sikh, Hindu, and Jain customs may require gelatin alternatives from sources other than animals, as many Hindus, most Jains and some Sikhs are vegetarian. Partial alternatives to gelatins derived from animals include the seaweed extracts agar and carrageenan, and the plant extracts pectin and konjac.
Although gelatin is 98–99% protein by dry weight, it has little additional nutritional value, varying according to the source of the raw material and processing technique.
Amino acids present in gelatin are variable, due to varying sources and batches, but are approximately:
In 2011, the European Food Safety Authority Panel on Dietetic Products, Nutrition, and Allergies concluded that "a cause and effect relationship has not been established between the consumption of collagen hydrolysate and maintenance of joints". A 2012 review also found insufficient evidence to support its use for osteoarthritis. By contrast, in 2013, Health Canada approved a label for "hydrolyzed collagen", specifying that the label may make a health claim that supplemental dietary amino acid intake from hydrolyzed collagen "helps to reduce joint pain associated with osteoarthritis". | https://en.wikipedia.org/wiki?curid=13160 |
Gelatin dessert
Gelatin desserts are desserts made with a sweetened and flavored processed collagen product (gelatin). This kind of dessert was first recorded as "jelly" by Hannah Glasse in her 18th-century book "The Art of Cookery", appearing in a layer of trifle. Jelly is also featured in the best-selling cookbooks of the English food writers Eliza Acton and Isabella Beeton in the 19th century.
They can be made by combining plain gelatin with other ingredients or by using a premixed blend of gelatin with additives. Fully prepared gelatin desserts are sold in a variety of forms, ranging from large decorative shapes to individual serving cups.
Popular brands of premixed gelatin include: Aeroplane Jelly in Australia, Hartley's (formerly Rowntree's) in the United Kingdom, and Jell-O from Kraft Foods and Royal from Jel Sert in North America. In the US and Canada this dessert is known by the genericized trademark "jello".
Before gelatin became widely available as a commercial product, the most typical gelatin dessert was "calf's foot jelly". As the name indicates, this was made by extracting and purifying gelatin from the foot of a calf. This gelatin was used for savory dishes in aspic, or was mixed with fruit juice and sugar for a dessert.
In the eighteenth century, gelatin from calf's feet, isinglass and hartshorn was colored blue with violet juice, yellow with saffron, red with cochineal and green with spinach, and allowed to set in layers in small, narrow glasses. It was flavored with sugar, lemon juice and mixed spices. This preparation was called "jelly"; English cookery writer Hannah Glasse was the first to record the use of this jelly in trifle in her book "The Art of Cookery", first published in 1747. Instructions for making jelly (including illustrations) appear in the best-selling cookbooks of the English writers Eliza Acton and Isabella Beeton in the 19th century.
To make a gelatin dessert, gelatin is dissolved in hot liquid with the desired flavors and other additives. These latter ingredients usually include sugar, fruit juice, or sugar substitutes; they may be added and varied during preparation, or pre-mixed with the gelatin in a commercial product which merely requires the addition of hot water.
In addition to sweeteners, the prepared commercial blends generally contain flavoring agents and other additives, such as adipic acid, fumaric acid, sodium citrate, and artificial flavorings and food colors. Because the collagen is processed extensively, the final product is not categorized as a meat or animal product by the US federal government.
Prepared commercial blends may be sold as a powder or as a concentrated gelatinous block, divided into small squares. Either type is mixed with sufficient hot water to completely dissolve it, and then mixed with enough cold water to make the volume of liquid specified on the packet.
The solubility of powdered gelatin can be enhanced by sprinkling it into the liquid several minutes before heating, "blooming" the individual granules. The fully dissolved mixture is then refrigerated, slowly forming a colloidal gel as it cools.
Gelatin desserts may be enhanced in many ways, such as using decorative molds, creating multicolored layers by adding a new layer of slightly cooled liquid over the previously-solidified one, or suspending non-soluble edible elements such as marshmallows or fruit. Some types of fresh fruit and their unprocessed juices are incompatible with gelatin desserts; see the Chemistry section below.
When fully chilled, the most common ratios of gelatin to liquid (as instructed on commercial packaging) usually result in a custard-like texture which can retain detailed shapes when cold but melts back to a viscous liquid when warm. A recipe calling for extra gelatin beyond the usual ratio gives a rubbery product that can be cut into shapes with cookie cutters and eaten with the fingers (called "Knox Blox" by the Knox company, makers of unflavored gelatin). Higher gelatin ratios can be used to increase the stability of the gel, culminating in gummy candies, which remain rubbery solids at room temperature (see Bloom (test)).
The Bloom strength of a gelatin mixture measures how firm a gel it forms. It is defined by the force in grams required to press a 12.5 mm diameter plunger 4 mm into 112 g of a standard 6.67% w/v gelatin gel at 10 °C. Knowing the Bloom strength of a gel is useful when determining whether a gelatin of one Bloom strength can be substituted for a gelatin of another. One can use the following equation:
C × √B = k

or C₂ = C₁ × √B₁ ÷ √B₂

where C = concentration, B = Bloom strength, and k = a constant. For example, when making gummies, it is important to know that a 250 Bloom gelatin has a much shorter (firmer) texture than a 180 Bloom gelatin.
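The substitution rule can be worked through numerically. This is a minimal Python sketch; the function name is illustrative, and the 5.66% result follows directly from the formula rather than from any published table.

```python
import math

def equivalent_concentration(c1, bloom1, bloom2):
    """Concentration of a bloom2-strength gelatin giving the same gel
    firmness as concentration c1 (in % w/v) of a bloom1-strength
    gelatin, using the rule C * sqrt(B) = k."""
    return c1 * math.sqrt(bloom1) / math.sqrt(bloom2)

# Replacing a 6.67% solution of 180 Bloom gelatin with the firmer
# 250 Bloom grade requires less of it:
print(round(equivalent_concentration(6.67, 180, 250), 2))  # 5.66
```

Conversely, substituting a weaker grade calls for more gelatin, since the required concentration rises as √B falls.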
A gelatin shot (usually called a Jell-O shot in North America and vodka jelly or jelly shot in the UK and Australia) is a shooter in which liquor, usually vodka, rum, tequila, or neutral grain spirit, replaces some of the water or fruit juice that is used to congeal the gel.
The American satirist and mathematician Tom Lehrer claims to have invented the gelatin shot in the 1950s while working for the National Security Agency, where he developed vodka gelatin as a way to circumvent a restriction of alcoholic beverages on base. An early published recipe for an alcoholic gelatin drink dates from 1862, found in "How to Mix Drinks, or The Bon Vivant's Companion" by Jerry Thomas: his recipe for "Punch Jelly" calls for the addition of isinglass or other gelatin to a punch made from cognac, rum, and lemon juice.
Gelatin art desserts, also known as 3D gelatin desserts, are made by injecting colorful shapes into a flavored gelatin base. This 3D gelatin art technique originated in Mexico and has spread to Western and Pacific countries.
These desserts are made using high quality gelatin that has a high bloom value and low odor and taste. The clear gelatin base is prepared using gelatin, water, sugar, citric acid and food flavoring.
When the clear gelatin base sets, colorful shapes are injected using a syringe.
The injected material usually consists of a sweetener (most commonly sugar), some type of edible liquid (milk, cream, water, etc.), food coloring and a thickening agent such as starch or additional gelatin.
The shapes are drawn by making incisions in the clear gelatin base using sharp objects. Colored liquid is then allowed to fill the crevice and make the cut shape visible.
Most commonly, the shapes are drawn using sterile medical needles or specialized precut gelatin art tools that allow the shape to be cut and filled with color at the same time.
Gelatin art tools are attached to a syringe and used to inject a predetermined shape into gelatin.
When combined with other ingredients, such as whipping cream or mousse, gelatin art desserts can be assembled into visually impressive formations resembling a cake.
Other culinary gelling agents can be used instead of animal-derived gelatin. These plant-derived substances are more similar to pectin and other gelling plant carbohydrates than to gelatin proteins; their physical properties are slightly different, creating different constraints for the preparation and storage conditions. These other gelling agents may also be preferred for certain traditional cuisines or dietary restrictions.
Agar, a product made from red algae, is the traditional gelling agent in many Asian desserts. Agar is a popular gelatin substitute in quick jelly powder mixes and prepared dessert gels that can be stored at room temperature. Compared to gelatin, agar preparations require a higher dissolving temperature, but the resulting gels congeal more quickly and remain solid at higher temperatures than gelatin gels do. Vegans and vegetarians can use agar to replace animal-derived gelatin.
Carrageenan is also derived from seaweed, and lacks agar's occasionally unpleasant smell during cooking. It sets more firmly than agar and is often used in kosher and halal cooking.
Konjac is a gelling agent used in many Asian foods, including the popular konnyaku fruit jelly candies.
Gelatin consists of partially hydrolyzed collagen, a protein which is highly abundant in animal tissues such as bone and skin. Collagen is a protein made up of three strands of polypeptide chains that form in a helical structure. To make a gelatin dessert such as Jell-O, the collagen is mixed with water and heated, disrupting the bonds that hold the three polypeptide strands together. As the gelatin cools, these bonds partially re-form in the same structure as before, but now with small pockets of liquid in between. This gives gelatin its semisolid, gel-like texture.
Because gelatin is a protein that contains both acid and base amino groups, it acts as an amphoteric molecule, displaying both acidic and basic properties. This allows it to react with different compounds, such as sugars and other food additives. These interactions give gelatin a versatile nature in the roles that it plays in different foods. It can stabilize foams in foods such as marshmallows, it can help maintain small ice crystals in ice cream, and it can even serve as an emulsifier for foods like toffee and margarine.
Although many gelatin desserts incorporate fruit, some fresh fruits contain proteolytic enzymes; these enzymes cut the gelatin molecule into peptides (protein fragments) too small to form a firm gel. The use of such fresh fruits in a gelatin recipe results in a dessert that never "sets".
Specifically, pineapple contains the protease (protein cutting enzyme) bromelain, kiwi fruit contains actinidin, figs contain ficain, and papaya contains papain. Cooking or canning denatures and deactivates the proteases, so canned pineapple, for example, works fine in a gelatin dessert.
Gelatin dessert in China is defined as edible jelly-like food prepared from a mixture of water, sugar and gelling agent. The preparation processes include concocting, gelling, sterilizing and packaging. In China, around 250 legal additives are allowed in gelatin desserts as gelling agents, colors, artificial sweeteners, emulsifiers and antioxidants.
Gelatin desserts are classified into five categories according to the flavoring substances they contain: artificial fruit flavored (less than 15% natural fruit juice), natural fruit flavored (more than 15% natural fruit juice), natural flavored with fruit pulp, dairy type (containing dairy ingredients), and "others", which covers gelatin desserts not mentioned above. Gelatin dessert is typically sold in single-serving plastic cups or plastic food bags.
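Read as rules, this classification can be sketched in Python. The function name, parameter names, and the precedence between overlapping categories are assumptions for illustration, not part of the Chinese regulation.

```python
def classify_gelatin_dessert(juice_pct=0.0, has_fruit_pulp=False, has_dairy=False):
    """Rough sketch of the five flavoring-based categories described
    above; the order in which overlapping attributes are checked is
    an assumption."""
    if has_dairy:
        return "dairy type"
    if has_fruit_pulp:
        return "natural flavored with fruit pulp type"
    if juice_pct > 15:
        return "natural fruit flavored type"
    if juice_pct > 0:
        return "artificial fruit flavored type"
    return "others"

print(classify_gelatin_dessert(juice_pct=20))  # natural fruit flavored type
print(classify_gelatin_dessert(has_dairy=True))  # dairy type
```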
Although eating tainted beef can lead to New Variant Creutzfeldt–Jakob disease (the human variant of mad-cow disease, bovine spongiform encephalopathy), there is no known case of BSE having been transmitted through collagen products such as gelatin. | https://en.wikipedia.org/wiki?curid=13162 |
George, Duke of Saxony
George "the Bearded", Duke of Saxony (Meissen, 27 August 1471 – Dresden, 17 April 1539), was Duke of Saxony from 1500 to 1539, known for his opposition to the Reformation. While the Ernestine line embraced Lutheranism, the Albertines (headed by George) were reluctant to do so. Despite his efforts, George could not prevent succession by a Lutheran upon his death in 1539: under the Act of Settlement of 1499, his Lutheran brother Henry IV became the new duke. Upon his accession, Henry introduced Lutheranism as the state religion in the Albertine lands of Saxony.
Duke George was a member of the Order of the Golden Fleece.
His father was Albert the Brave of Saxony, founder of the Albertine line of the Wettin family; his mother was Sidonie, daughter of George Podiebrad, King of Bohemia. Elector Frederick the Wise, a member of the Ernestine branch of the same family, known for his protection of Luther, was a cousin of Duke George.
George, as the eldest son, received an excellent training in theology and other branches of learning, and was thus much better educated than most of the princes of his day.
As early as 1488, when his father was in East Frisia fighting on behalf of the emperor, George was regent of the ducal possessions, which included the Margraviate of Meissen with the cities of Dresden and Leipzig.
He is buried with his wife Barbara in the purpose-built Georgskapelle in Meissen Cathedral. The room contains a magnificent altarpiece by Lucas Cranach the Elder. In 1677 a highly ornate ceiling was added to the chapel, designed by Wolf Caspar von Klengel.
George was married at Dresden, on 21 November 1496, to Barbara Jagiellon, daughter of Casimir IV, King of Poland and Grand Duke of Lithuania and Elisabeth, daughter of Albrecht II of Hungary. They had ten children, but all, with the exception of a daughter, died before their father:
In 1498, the emperor granted Albert the Brave the hereditary governorship of Friesland. At Maastricht, 14 February 1499, Albert settled the succession to his possessions, and endeavoured by this arrangement to prevent further partition of his domain. He died 12 September 1500, and was succeeded in his German territories by George as the head of the Albertine line, while George's brother Heinrich became hereditary governor of Friesland.
The Saxon occupation of Friesland, however, was by no means secure and was the source of constant revolts in that province. Consequently, Heinrich, who was of a rather inert disposition, relinquished his claims to the governorship, and in 1505 an agreement was made between the brothers by which Friesland was transferred to George, while Heinrich received an annuity and the districts of Freiberg and Wolkenstein. But this arrangement did not restore peace in Friesland, which remained a source of trouble to Saxony. In 1515 George sold Friesland to the future Emperor Charles V (then Duke of Burgundy) for the very moderate price of 100,000 florins. He tried to keep the newly reclaimed lands of Het Bildt, which had not been granted to him by Charles V. These troubles outside his Saxon possessions did not prevent George from bestowing much care on the government of the ducal territory proper. When regent, during the lifetime of his father, the difficulties arising from conflicting interests and the large demands on his powers had often brought the young prince to the verge of despair.
In a short time, however, he developed decided ability as a ruler; on entering upon his inheritance he divided the duchy into governmental districts, took measures to suppress the robber-knights, and regulated the judicial system by defining and readjusting the jurisdiction of the various law courts. In his desire to achieve good order, severity, and the amelioration of the condition of the people, he sometimes ventured to infringe even on the rights of the cities. His court was better regulated than that of any other German prince, and he bestowed a paternal care on the University of Leipzig, where a number of reforms were introduced, and Humanism, as opposed to Scholasticism, was encouraged.
From the beginning of the Reformation in 1517, Duke George directed his energies chiefly to ecclesiastical affairs. Hardly any of the secular German princes held as firmly as he to the Church; he defended its rights and vigorously condemned every innovation except those countenanced by the highest ecclesiastical authorities. At first he was not opposed to Luther, but as time went on and Luther's aim became clear to him, he turned more and more from the Reformer, and was finally, in consequence of this change of attitude, drawn into an acrimonious correspondence in which Luther, according to some without any justification, heavily criticized the duke.
The duke was not blind to the undeniable abuses existing at that time in the Church. In 1519, despite the opposition of the theological faculty of the university, he originated the Disputation of Leipzig, with the idea of helping forward the cause of truth, and was present at all the discussions. In 1521, at the Diet of Worms, when the German princes handed in a paper containing a list of "grievances" concerning the condition of the Church, George added for himself twelve specific complaints referring mainly to the abuse of Indulgences and the annates.
In 1525, he combined with his Lutheran son-in-law, Landgrave Philip of Hesse, and his cousin, the Elector Frederick the Wise, to suppress the revolt of the peasants, who were defeated near Frankenhausen in Thuringia. Some years later, he wrote a forcible preface to a translation of the New Testament issued at his command by his private secretary, Hieronymus Emser, as an offset to Luther's version. Lutheran books were confiscated by his order, wherever found, though he refunded the cost of the books. He proved himself in every way a vigorous opponent of the Lutherans, decreeing that Christian burial was to be refused to apostates, and recreant ecclesiastics were to be delivered to the bishop of Merseburg.
For those, however, who merely held anti-catholic opinions, the punishment was only expulsion from the duchy. The duke deeply regretted the constant postponement of the ardently desired council, from the action of which so much was expected. While awaiting its convocation, he thought to remove the more serious defects by a reform of the monasteries, which had become exceedingly worldly in spirit and from which many of the inmates were departing. He vainly sought to obtain from the Curia the right, which was sometimes granted by Rome, to make official visitations to the conventual institutions of his realm. His reforms were confined mainly to uniting the almost vacant monasteries and to matters of economic management, the control of the property being entrusted in most cases to the secular authorities.
In 1525, Duke George formed, with some other German rulers, the League of Dessau, for the protection of Catholic interests. In the same way he was the animating spirit of the League of Halle, formed in 1533, from which sprang in 1538 the Holy League of Nuremberg for the maintenance of the religious Peace of Nuremberg.
The vigorous activity displayed by the duke in so many directions was not attended with much success. Most of his political measures, indeed, stood the test of experience, but in ecclesiastico-political matters he witnessed with sorrow the gradual decline of Catholicism and the spread of Lutheranism within his dominions, in spite of his earnest efforts and forcible prohibition of the new doctrine. Furthermore, during George's lifetime his nearest relations, his son-in-law Philip of Hesse and his brother Heinrich, joined the Reformers.
He spent the last years of his reign in endeavours to secure a Catholic successor, thinking by this step to check the dissemination of Lutheran opinions. The only one of George's sons then living was the weak-minded and unmarried Frederick. The intention of his father was that Frederick should rule with the aid of a council. Early in 1539, Frederick was married to Elizabeth of Mansfeld, but he died shortly afterwards, leaving no prospect of an heir. According to the act of settlement of 1499, George's Protestant brother Heinrich was now heir presumptive; but George, disregarding his father's will, sought to disinherit his brother and to bequeath the duchy to Ferdinand, brother of Charles V. His sudden death prevented the carrying out of this intention.
George was an industrious and energetic, if somewhat irascible, ruler in the furtherance of the interests of his land and people. A faithful adherent of the Emperor and Empire, he accomplished much for his domain by economy, love of order and wise direction of the activities of his state officials. The grief of his life was Luther's Reformation and what he regarded as apostasy from the Old Faith. Of a strictly religious, although not narrow, disposition, he sought at any cost to keep his subjects from falling away from the Church, but his methods were sometimes questionable.
Gneiss
Gneiss is a common and widely distributed type of metamorphic rock. Gneiss is formed by high temperature and high-pressure metamorphic processes acting on formations composed of igneous or sedimentary rocks. Orthogneiss is gneiss derived from igneous rock (such as granite). Paragneiss is gneiss derived from sedimentary rock (such as sandstone). Gneiss forms at higher temperatures and pressures than schist. Gneiss nearly always shows a banded texture characterized by alternating darker and lighter colored bands, but without a distinct cleavage.
The word "gneiss" has been used in English since at least 1757. It is borrowed from the German word "Gneis", formerly also spelled "Gneiss", which is probably derived from the Middle High German noun "gneist", meaning "spark" (so called because the rock glitters).
Gneiss is formed from sedimentary or igneous rock exposed to temperatures greater than 320 °C and relatively high pressure.
Gneissic rocks are usually medium- to coarse-foliated; they are largely recrystallized but do not carry large quantities of micas, chlorite or other platy minerals. Gneisses that are metamorphosed igneous rocks or their equivalent are termed granite gneisses, diorite gneisses, etc. Gneiss rocks may also be named after a characteristic component such as garnet gneiss, biotite gneiss, albite gneiss, etc. "Orthogneiss" designates a gneiss derived from an igneous rock, and "paragneiss" is one from a sedimentary rock.
"Gneissose" rocks have properties similar to gneiss.
Gneiss appears striped in parallel bands, called gneissic banding. The banding develops under high-temperature and high-pressure conditions.
The minerals are arranged into layers that appear as bands in cross section. The appearance of layers, called 'compositional banding', occurs because the layers, or bands, are of different composition. The darker bands have relatively more mafic minerals (those containing more magnesium and iron). The lighter bands contain relatively more felsic minerals (silicate minerals, containing more of the lighter elements, such as silicon, oxygen, aluminium, sodium, and potassium).
A common cause of the banding is the subjection of the protolith (the original rock material that undergoes metamorphism) to extreme shearing force, a sliding force similar to the pushing of the top of a deck of cards in one direction, and the bottom of the deck in the other direction. These forces stretch out the rock like a plastic, and the original material is spread out into sheets.
Some banding is formed from original rock material (protolith) that is subjected to extreme temperature and pressure and is composed of alternating layers of sandstone (lighter) and shale (darker), which is metamorphosed into bands of quartzite and mica.
Another cause of banding is "metamorphic differentiation", which separates different materials into different layers through chemical reactions, a process not fully understood.
Not all gneiss rocks have detectable banding. In kyanite gneiss, crystals of kyanite appear as random clumps in what is mainly a plagioclase (albite) matrix.
Augen gneiss, from the German "Augen", meaning "eyes", is a coarse-grained gneiss resulting from metamorphism of granite, which contains characteristic elliptic or lenticular shear-bound feldspar porphyroclasts, normally microcline, within the layering of the quartz, biotite and magnetite bands.
Henderson gneiss is found in North Carolina and South Carolina, US, east of the Brevard Shear Zone. It has deformed into two sequential forms. The second, more warped, form is associated with the Brevard Fault, and the first deformation results from displacement to the southwest.
Most of the Outer Hebrides of Scotland have a bedrock formed from Lewisian gneiss. In addition to the Outer Hebrides, they form basement deposits on the Scottish mainland west of the Moine Thrust and on the islands of Coll and Tiree. These rocks are largely igneous in origin, mixed with metamorphosed marble, quartzite and mica schist with later intrusions of basaltic dikes and granite magma.
Gneisses of Archean and Proterozoic age occur in the Baltic Shield.
Gro Harlem Brundtland
Gro Harlem Brundtland (born Gro Harlem, 20 April 1939) is a Norwegian politician who served three terms as Prime Minister of Norway (1981, 1986–89, and 1990–96) and as Director-General of the World Health Organization from 1998 to 2003. She is also known for having chaired the Brundtland Commission, which presented the Brundtland Report on sustainable development.
Educated as a physician, Brundtland joined the Labour Party and entered the government in 1974 as Minister of the Environment. She became the first female Prime Minister of Norway on 4 February 1981, but left office on 14 October 1981; she returned as Prime Minister on 9 May 1986 and served until 16 October 1989. She finally returned for her third term on 3 November 1990. From 1981 to 1992 she was leader of the Labour Party. After her surprise resignation as Prime Minister in 1996, she became an international leader in sustainable development and public health, and served as Director-General of the World Health Organization and as UN Special Envoy on Climate Change from 2007 to 2010. She is also deputy chair of The Elders and a former Vice-President of the Socialist International.
Brundtland belonged to the moderate wing of her party and supported Norwegian membership in the European Union during the 1994 referendum. As Prime Minister Brundtland became widely known as the "mother of the nation." Brundtland received the 1994 Charlemagne Prize, and has received many other awards and recognitions.
Brundtland was born in Oslo in 1939, the daughter of physician and politician Gudmund Harlem and Inga Margareta Elisabet Brynolf (1918–2005). She has a younger brother, Lars, and a younger sister, Hanne.
In 1963, Brundtland graduated with a medical degree, a cand.med. from the University of Oslo. She took her master's degree at Harvard University in 1965, as a Master of Public Health.
From 1966 to 1969, she worked as a physician at the Directorate of Health ("Helsedirektoratet"), and from 1969 she worked as a doctor in Oslo's public school health service.
She was Norwegian Minister for Environmental Affairs from 1974 to 1979.
Brundtland became Norway's first female Prime Minister in 1981. She served as Prime Minister from February to October.
Brundtland became Norwegian Prime Minister for two further, and more durable, terms. The second ministry was from 9 May 1986 until 16 October 1989 and this cabinet became known worldwide for its high proportion of female ministers: nearly half, or eight of the total eighteen ministers, were female. The third ministry was from 3 November 1990 to 25 October 1996.
Brundtland became leader of the Labour Party in 1981 and held the office until resigning in 1992, during her third term as Prime Minister. In 1996, she resigned as Prime Minister and retired completely from Norwegian politics. Her successor as both Labour Party leader in 1992 and as Prime Minister in 1996 was Thorbjørn Jagland.
In 1983, Brundtland was invited by then United Nations Secretary-General Javier Pérez de Cuéllar to establish and chair the World Commission on Environment and Development (WCED), widely referred to as the Brundtland Commission. She developed the broad political concept of sustainable development in the course of extensive public hearings that were distinguished by their inclusiveness. The commission, which published its report, "Our Common Future", in April 1987, provided the momentum for the 1992 Earth Summit/UNCED, which was headed by Maurice Strong, who had been a prominent member of the commission. The Brundtland Commission also provided momentum for Agenda 21.
During her third ministry, the Norwegian government in 1993 took the initiative to sponsor secret peace talks between the Government of Israel led by Yitzchak Rabin – like Brundtland, leader of a Labour Party – and the PLO led by Yasser Arafat. This culminated with the signing of the Oslo Accords. For several years afterwards Norway continued to have a high-profile involvement in promoting Israeli-Palestinian peace, though increasingly displaced by the United States from its role as the mediator.
After the end of her term as PM, Brundtland was then elected Director-General of the World Health Organization in May 1998. In this capacity, Brundtland adopted a far-reaching approach to public health, establishing a Commission on Macroeconomics and Health, chaired by Jeffrey Sachs, and addressing violence as a major public health issue. Brundtland spearheaded the movement, now worldwide, to achieve the abolition of cigarette smoking by education, persuasion, and increased taxation. Under her leadership, the World Health Organization was one of the first major employers to make quitting smoking a condition of employment.
Under Brundtland's leadership, the World Health Organization was criticized for increased drug-company influence on the agency.
Brundtland was recognized in 2003 by "Scientific American" as their 'Policy Leader of the Year' for coordinating a rapid worldwide response to stem outbreaks of SARS. Brundtland was succeeded on 21 July 2003 by Jong-Wook Lee. In 1994, Brundtland was awarded the Charlemagne Prize of the city of Aachen.
In 2006 Brundtland was a member of the Panel of Eminent Persons who reviewed the work of the United Nations Conference on Trade and Development (UNCTAD). In May 2007, UN Secretary-General Ban Ki-moon named Brundtland, as well as Ricardo Lagos (the former president of Chile), and Han Seung-soo (the former foreign minister of South Korea), to serve as UN Special Envoys for Climate Change.
Brundtland's hallmark political activities have been chronicled by her husband, Arne Olav Brundtland, in his two bestsellers, "Married to Gro" and "Still married to Gro".
In 2007, Brundtland was working for Pepsi as a consultant.
Brundtland is a member of the Council of Women World Leaders, an international network of current and former women presidents and prime ministers whose mission is to mobilize collective action on issues of critical importance to women and equitable development.
Brundtland is also a member of the Club of Madrid, an independent organization of former leaders of democratic states, which works to strengthen democratic governance and leadership.
Brundtland serves as Deputy Chair of The Elders, a group of world leaders originally convened by Nelson Mandela, Graça Machel and Desmond Tutu in order to tackle some of the world's toughest problems. Mandela announced the launch of the group on 18 July 2007 in Johannesburg, South Africa. Brundtland has been active in The Elders’ work, participating in a broad range of the group's initiatives. She has travelled with Elders delegations to Cyprus, the Korean Peninsula, Ethiopia, India and the Middle East. Brundtland has also been involved in The Elders’ initiative on child marriage, including the founding of "Girls Not Brides: The Global Partnership to End Child Marriage".
Brundtland attended the Bilderberg meetings in 1982 and 1983. Her husband attended in 1991.
In 2019, Brundtland served as co-chair of the WHO Global Preparedness Monitoring Board.
Brundtland narrowly escaped assassination by Anders Behring Breivik on 22 July 2011. She had been on the island of Utøya hours before the massacre there to give a speech to the AUF camp; Breivik stated that he originally intended Brundtland to be the main target of the attack (along with Eskil Pedersen and Jonas Gahr Støre), but he had been delayed while travelling from Oslo. Breivik arrived on Utøya about two hours after Brundtland had left.
During his trial in 2012, Breivik revealed detailed assassination plans for Brundtland. He told the court that he had planned to handcuff her and then record himself reading out a prepared text detailing her "crimes", before decapitating her on camera using a bayonet and uploading the footage to the internet. Breivik said that while Brundtland had been his main target, he had still planned to massacre everyone else on the island.
She married Arne Olav Brundtland on 9 December 1960. They had four children; one is now deceased. They own a house in the south of France.
Brundtland was operated on for uterine cancer in 2002 at Oslo University Hospital, Ullevål. In 2008 it became known that during 2007 she had received two treatments at Ullevål, paid for by Norwegian public expenditures. Since she had previously notified the Norwegian authorities that she had changed residence to France, she was no longer entitled to Norwegian social security benefits. Following media attention surrounding the matter, Brundtland decided to change residence once more, back to Norway, and she also announced that she would be paying for the treatments herself. Brundtland has claimed to suffer from electrical sensitivity which causes headaches when someone uses a mobile phone near her.
Brundtland has received many awards and honours.
Gregory of Nazianzus
Gregory of Nazianzus ("Grēgorios ho Nazianzēnos"; c. 329 – 25 January 390), also known as Gregory the Theologian or Gregory Nazianzen, was a 4th-century Archbishop of Constantinople and theologian. He is widely considered the most accomplished rhetorical stylist of the patristic age. As a classically trained orator and philosopher he infused Hellenism into the early church, establishing the paradigm of Byzantine theologians and church officials. Saint Gregory was the patron saint of medieval Bosnia before the Catholic conquest, when he was replaced by Saint Gregory the Great.
Gregory made a significant impact on the shape of Trinitarian theology among both Greek- and Latin-speaking theologians, and he is remembered as the "Trinitarian Theologian". Much of his theological work continues to influence modern theologians, especially in regard to the relationship among the three Persons of the Trinity. Along with the brothers Basil the Great and Gregory of Nyssa, he is known as one of the Cappadocian Fathers.
Gregory is a saint in both Eastern and Western Christianity. In the Roman Catholic Church he is numbered among the Doctors of the Church; in the Eastern Orthodox Church and the Eastern Catholic Churches he is revered as one of the Three Holy Hierarchs, along with Basil the Great and John Chrysostom.
He is also one of only three men in the life of the Orthodox Church who have been officially designated "Theologian" by epithet, the other two being St. John the Theologian (the Evangelist), and St. Symeon the New Theologian.
Gregory was born of Greek parentage in the family estate of Karbala outside the village of Arianzus, near Nazianzus, in southwest Cappadocia. His parents, Gregory and Nonna, were wealthy land-owners. In AD 325 Nonna converted her husband, a Hypsistarian, to Christianity; he was subsequently ordained as bishop of Nazianzus in 328 or 329. The young Gregory and his brother, Caesarius, first studied at home with their uncle Amphylokhios. Gregory went on to study advanced rhetoric and philosophy in Nazianzus, Caesarea, Alexandria, and Athens. On the way to Athens his ship encountered a violent storm, and the terrified Gregory prayed to Christ that if He would deliver him, he would dedicate his life to His service. While at Athens, he developed a close friendship with his fellow student Basil of Caesarea, and also made the acquaintance of Flavius Claudius Julianus, who would later become the emperor known as Julian the Apostate. In Athens, Gregory studied under the famous rhetoricians Himerius and Proaeresius. Upon finishing his education, he taught rhetoric in Athens for a short time.
In 361 Gregory returned to Nazianzus and was ordained a presbyter by his father's wish, who wanted him to assist with caring for local Christians. The younger Gregory, who had been considering a monastic existence, resented his father's decision to force him to choose between priestly services and a solitary existence, calling it an "act of tyranny". Leaving home after a few days, he met his friend Basil at Annesoi, where the two lived as ascetics. However, Basil urged him to return home to assist his father, which he did for the next year. Arriving at Nazianzus, Gregory found the local Christian community split by theological differences and his father accused of heresy by local monks. Gregory helped to heal the division through a combination of personal diplomacy and oratory.
By this time Emperor Julian had publicly declared himself in opposition to Christianity. In response to the emperor's rejection of the Christian faith, Gregory composed his "Invectives Against Julian" between 362 and 363. "Invectives" asserts that Christianity will overcome imperfect rulers such as Julian through love and patience. This process as described by Gregory is the public manifestation of the process of deification ("theosis"), which leads to a spiritual elevation and mystical union with God. Julian resolved, in late 362, to vigorously prosecute Gregory and his other Christian critics; however, the emperor perished the following year during a campaign against the Persians. With the death of the emperor, Gregory and the Eastern churches were no longer under the threat of persecution, as the new emperor Jovian was an avowed Christian and supporter of the church.
Gregory spent the next few years combating Arianism, which threatened to divide the region of Cappadocia. In this tense environment, Gregory interceded on behalf of his friend Basil with Bishop Eusebius of Caesarea (Mazaca). The two friends then entered a period of close fraternal cooperation as they participated in a great rhetorical contest of the Caesarean church precipitated by the arrival of accomplished Arian theologians and rhetors. In the subsequent public debates, presided over by agents of the Emperor Valens, Gregory and Basil emerged triumphant. This success confirmed for both Gregory and Basil that their futures lay in administration of the Church. Basil, who had long displayed inclinations to the episcopacy, was elected bishop of the see of Caesarea in Cappadocia in 370.
Gregory was ordained Bishop of Sasima in 372 by Basil. Basil created this see in order to strengthen his position in his dispute with Anthimus, bishop of Tyana. The ambitions of Gregory's father to have his son rise in the Church hierarchy and the insistence of his friend Basil convinced Gregory to accept this position despite his reservations. Gregory would later refer to his episcopal ordination as forced upon him by his strong-willed father and Basil. Describing his new bishopric, Gregory lamented how it was nothing more than an "utterly dreadful, pokey little hole; a paltry horse-stop on the main road ... devoid of water, vegetation, or the company of gentlemen ... this was my Church of Sasima!" He made little effort to administer his new diocese, complaining to Basil that he preferred instead to pursue a contemplative life.
By late 372 Gregory returned to Nazianzus to assist his dying father with the administration of his diocese. This strained his relationship with Basil, who insisted that Gregory resume his post at Sasima. Gregory retorted that he had no intention to continue to play the role of pawn to advance Basil's interests. He instead focused his attention on his new duties as coadjutor of Nazianzus. It was here that Gregory preached the first of his great episcopal orations.
Following the deaths of his mother and father in 374, Gregory continued to administer the Diocese of Nazianzus but refused to be named bishop. Donating most of his inheritance to the needy, he lived an austere existence. At the end of 375 he withdrew to a monastery at Seleukia, living there for three years. Near the end of this period his friend Basil died. Although Gregory's health did not permit him to attend the funeral, he wrote a heartfelt letter of condolence to Basil's brother, Gregory of Nyssa, and composed twelve memorial poems dedicated to the memory of his departed friend.
Upon the death of Emperor Valens in 378, the accession of Theodosius I, a steadfast supporter of Nicene orthodoxy, was good news to those who wished to purge Constantinople of Arian and Apollinarian domination. The exiled Nicene party gradually returned to the city. From his deathbed, Basil reminded them of Gregory's capabilities and likely recommended his friend to champion the Trinitarian cause in Constantinople.
In 379, the Antioch synod and its archbishop, Meletios, asked Gregory to go to Constantinople to lead a theological campaign to win over that city to Nicene orthodoxy. After much hesitation, Gregory agreed. His cousin Theodosia offered him a villa for his residence; Gregory immediately transformed much of it into a church, naming it Anastasia, "a scene for the resurrection of the faith". From this little chapel he delivered five powerful discourses on Nicene doctrine, explaining the nature of the Trinity and the unity of the Godhead, and refuting the Eunomian denial of the Holy Spirit's divinity with a sustained argument for the Spirit's full divinity.
Gregory's homilies were well received and attracted ever-growing crowds to Anastasia. Fearing his popularity, his opponents decided to strike. On the vigil of Easter in 379, an Arian mob burst into his church during worship services, wounding Gregory and killing another bishop. Escaping the mob, Gregory next found himself betrayed by his erstwhile friend, the philosopher Maximus the Cynic. Maximus, who was in secret alliance with Peter, bishop of Alexandria, attempted to seize Gregory's position and have himself ordained bishop of Constantinople. Shocked, Gregory decided to resign his office, but the faction faithful to him induced him to stay and ejected Maximus. This episode left Gregory embarrassed, and exposed him to criticism as a provincial simpleton unable to cope with the intrigues of the imperial city.
Affairs in Constantinople remained confused as Gregory's position was still unofficial, and Arian priests yet occupied many important churches. The arrival of the emperor Theodosius in 380 settled matters in Gregory's favor. The emperor, determined to eliminate Arianism, expelled Bishop Demophilus. Gregory was subsequently enthroned as bishop of Constantinople at the Basilica of the Apostles, replacing Demophilus.
Theodosius wanted to further unify the entire empire behind the orthodox position and decided to convene a church council to resolve matters of faith and discipline. Gregory was of similar mind in wishing to unify Christianity. In the spring of 381 they convened the Second Ecumenical Council in Constantinople, which was attended by 150 Eastern bishops. After the death of the presiding bishop, Meletius of Antioch, Gregory was selected to lead the Council. Hoping to reconcile the West with the East, he offered to recognize Paulinus as Patriarch of Antioch. The Egyptian and Macedonian bishops who had supported Maximus's ordination arrived late for the Council. Once there, they refused to recognise Gregory's position as head of the church of Constantinople, arguing that his transfer from the See of Sasima was canonically illegitimate.
Gregory was physically exhausted and worried that he was losing the confidence of the bishops and the emperor. Rather than press his case and risk further division, he decided to resign his office: "Let me be as the Prophet Jonah! I was responsible for the storm, but I would sacrifice myself for the salvation of the ship. Seize me and throw me ... I was not happy when I ascended the throne, and gladly would I descend it." He shocked the Council with his surprise resignation and then delivered a dramatic speech to Theodosius asking to be released from his offices. The emperor, moved by his words, applauded, commended his labor, and granted his resignation. The Council asked him to appear once more for a farewell ritual and celebratory orations. Gregory used this occasion to deliver a final address (Or. 42) and then departed.
Returning to his homeland of Cappadocia, Gregory once again resumed his position as bishop of Nazianzus. He spent the next year combating the local Apollinarian heretics and struggling with periodic illness. He also began composing "De Vita Sua", his autobiographical poem. By the end of 383 he found his health too feeble to cope with episcopal duties. Gregory established Eulalius as bishop of Nazianzus and then withdrew into the solitude of Arianzum. After enjoying six peaceful years in retirement at his family estate, he died on 25 January 390.
Gregory faced stark choices throughout his life: Should he pursue studies as a rhetor or philosopher? Would a monastic life be more appropriate than public ministry? Was it better to blaze his own path or follow the course mapped for him by his father and Basil? Gregory's writings illuminate the conflicts which both tormented and motivated him. Biographers suggest that it was this dialectic which defined him, forged his character, and inspired his search for meaning and truth.
Gregory's most significant theological contributions arose from his defense of the doctrine of the Trinity. He is especially noted for his contributions to the field of pneumatology—that is, theology concerning the nature of the Holy Spirit. In this regard, Gregory is the first to use the idea of "procession" to describe the relationship between the Spirit and the Godhead: "The Holy Spirit is truly Spirit, coming forth from the Father indeed but not after the manner of the Son, for it is not by generation but by "procession", since I must coin a word for the sake of clearness." Although Gregory does not fully develop the concept, the idea of procession would shape most later thought about the Holy Spirit.
He emphasized that Jesus did not cease to be God when he became a man, nor did he lose any of his divine attributes when he took on human nature. Furthermore, Gregory asserted that Christ was fully human, including a full human soul. He also proclaimed the eternality of the Holy Spirit, saying that the Holy Spirit's actions were somewhat hidden in the Old Testament but much clearer since the ascension of Jesus into Heaven and the descent of the Holy Spirit at the feast of Pentecost.
In contrast to the Neo-Arian belief that the Son is "anomoios", or "unlike" the Father, and with the Semi-Arian assertion that the Son is "homoiousios", or "like" the Father, Gregory and his fellow Cappadocians maintained the Nicaean doctrine of "homoousia", or consubstantiality of the Son with the Father. The Cappadocian Fathers asserted that God's nature is unknowable to man; helped to develop the framework of "hypostases", or three persons united in a single Godhead; illustrated how Jesus is the eikon of the Father; and explained the concept of "theosis", the belief that all Christians can be assimilated with God in "imitation of the incarnate Son as the divine model."
Some of Gregory's theological writings suggest that, like his friend Gregory of Nyssa, he may have supported some form of the doctrine of apocatastasis, the belief that God will bring all of creation into harmony with the Kingdom of Heaven. This led some late-nineteenth century Christian universalists, notably J. W. Hanson and Philip Schaff, to describe Gregory's theology as universalist. This view of Gregory is also held by some modern theologians such as John Sachs, who said that Gregory had "leanings" toward apocatastasis, but in a "cautious, undogmatic" way. However, it is not clear or universally accepted that Gregory held to the doctrine of apocatastasis.
Apart from the several theological discourses, Gregory was also one of the most important early Christian men of letters, a very accomplished orator, even perhaps one of the greatest of his time. Gregory was also a very prolific poet who wrote theological, moral, and biographical poems.
Gregory's great-nephew Nichobulos served as his literary executor, preserving and editing many of his writings. A cousin, Eulalios, published several of Gregory's more noteworthy works in 391. By 400, Rufinus began translating his orations into Latin. As Gregory's works circulated throughout the empire they influenced theological thought. His orations were cited as authoritative by the First Council of Ephesus in 431. By 451 he was designated "Theologus", or "Theologian" by the Council of Chalcedon – a title held by no others save John the Apostle and Symeon the New Theologian (949–1022 AD). He is widely quoted by Eastern Orthodox theologians and highly regarded as a defender of the Christian faith. His contributions to Trinitarian theology are also influential and often cited in the Western churches. Paul Tillich credits Gregory of Nazianzus for having "created the definitive formulae for the doctrine of the trinity". Additionally, the Liturgy of St Gregory the Theologian in use by the Coptic Church is named after him.
Following his death, Saint Gregory was buried at Nazianzus. His relics, consisting of portions of his body and clothing, were transferred to Constantinople in 950, into the Church of the Holy Apostles. Part of the relics were taken from Constantinople by Crusaders during the Fourth Crusade, in 1204, and ended up in Rome. On 27 November 2004, those relics, along with those of John Chrysostom, were returned to Istanbul (Constantinople) by Pope John Paul II, with the Vatican retaining a small portion of both. The relics are now enshrined in the Patriarchal Cathedral of St. George in the Fanar.
During the six years of life which remained to him after his final retirement to his birthplace, Gregory composed the greater part of his copious poetical works. These include a valuable autobiographical poem of nearly 2,000 lines; about one hundred other shorter poems relating to his past career; and a large number of epitaphs, epigrams, and epistles to well-known people of that era. The poems that dealt with his personal affairs refer to the continuous illness and severe sufferings (physical and spiritual) which assailed him during his last years. In the tiny plot of ground at Arianzus, all that remained to him of his rich inheritance was a fountain near which there was a shady walk. Gregory retired here to spend his days as a hermit. It was during this time that he decided to write theological discourses and poetry of both a religious and an autobiographical nature. He would receive occasional visits from intimate friends, as well as from strangers attracted to his retreat by his great reputation for sanctity and learning. He died about 25 January 390, although the exact date of his death is unknown.
The Eastern Orthodox Church and the Eastern Catholic Churches celebrate two feast days in Gregory's honor. 25 January is his primary feast; 30 January, known as the feast of the Three Great Hierarchs, commemorates him along with John Chrysostom and Basil of Caesarea. The Roman Catholic Church observes his feast day on 2 January. The Church of England and the US Episcopal Church celebrate St. Gregory's Holy Day, on 2 January, a "Lesser Festival".
The Lutheran Church–Missouri Synod commemorates Gregory, along with Basil the Great and Gregory of Nyssa (the Cappadocian Fathers), on 10 January, and the Evangelical Lutheran Church in America commemorates Gregory of Nazianzus together with his friends St. Basil the Great and St. Gregory of Nyssa on 14 June.
HTML
Hypertext Markup Language (HTML) is the standard markup language for documents designed to be displayed in a web browser. It can be assisted by technologies such as Cascading Style Sheets (CSS) and scripting languages such as JavaScript and VBScript.
Web browsers receive HTML documents from a web server or from local storage and render the documents into multimedia web pages. HTML describes the structure of a web page semantically and originally included cues for the appearance of the document.
HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects such as interactive forms may be embedded into the rendered page. HTML provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by "tags", written using angle brackets. Tags such as <img> and <input> directly introduce content into the page. Other tags such as <p> and </p> surround and provide information about document text and may include other tags as sub-elements. Browsers do not display the HTML tags, but use them to interpret the content of the page.
HTML can embed programs written in a scripting language such as JavaScript, which affects the behavior and content of web pages. Inclusion of CSS defines the look and layout of content. The World Wide Web Consortium (W3C), former maintainer of the HTML and current maintainer of the CSS standards, has encouraged the use of CSS over explicit presentational HTML.
In 1980, physicist Tim Berners-Lee, a contractor at CERN, proposed and prototyped ENQUIRE, a system for CERN researchers to use and share documents. In 1989, Berners-Lee wrote a memo proposing an Internet-based hypertext system. Berners-Lee specified HTML and wrote the browser and server software in late 1990. That year, Berners-Lee and CERN data systems engineer Robert Cailliau collaborated on a joint request for funding, but the project was not formally adopted by CERN. In his personal notes from 1990 he listed "some of the many areas in which hypertext is used" and put an encyclopedia first.
The first publicly available description of HTML was a document called "HTML Tags", first mentioned on the Internet by Tim Berners-Lee in late 1991. It describes 18 elements comprising the initial, relatively simple design of HTML. Except for the hyperlink tag, these were strongly influenced by SGMLguid, an in-house Standard Generalized Markup Language (SGML)-based documentation format at CERN. Eleven of these elements still exist in HTML 4.
HTML is a markup language that web browsers use to interpret and compose text, images, and other material into visual or audible web pages. Default characteristics for every item of HTML markup are defined in the browser, and these characteristics can be altered or enhanced by the web page designer's additional use of CSS. Many of the text elements are found in the 1988 ISO technical report TR 9537 "Techniques for using SGML", which in turn covers the features of early text formatting languages such as that used by the RUNOFF command developed in the early 1960s for the CTSS (Compatible Time-Sharing System) operating system: these formatting commands were derived from the commands used by typesetters to manually format documents. However, the SGML concept of generalized markup is based on elements (nested annotated ranges with attributes) rather than merely print effects, with also the separation of structure and markup; HTML has been progressively moved in this direction with CSS.
Berners-Lee considered HTML to be an application of SGML. It was formally defined as such by the Internet Engineering Task Force (IETF) with the mid-1993 publication of the first proposal for an HTML specification, the "Hypertext Markup Language (HTML)" Internet Draft by Berners-Lee and Dan Connolly, which included an SGML Document type definition to define the grammar. The draft expired after six months, but was notable for its acknowledgment of the NCSA Mosaic browser's custom tag for embedding in-line images, reflecting the IETF's philosophy of basing standards on successful prototypes. Similarly, Dave Raggett's competing Internet-Draft, "HTML+ (Hypertext Markup Format)", from late 1993, suggested standardizing already-implemented features like tables and fill-out forms.
After the HTML and HTML+ drafts expired in early 1994, the IETF created an HTML Working Group, which in 1995 completed "HTML 2.0", the first HTML specification intended to be treated as a standard on which future implementations should be based.
Further development under the auspices of the IETF was stalled by competing interests. Since 1996, the HTML specifications have been maintained, with input from commercial software vendors, by the World Wide Web Consortium (W3C). However, in 2000, HTML also became an international standard (ISO/IEC 15445:2000). HTML 4.01 was published in late 1999, with further errata published through 2001. In 2004, development began on HTML5 in the Web Hypertext Application Technology Working Group (WHATWG), which became a joint deliverable with the W3C in 2008; it was completed and standardized on 28 October 2014.
XHTML is a separate language that began as a reformulation of HTML 4.01 using XML 1.0. It is no longer being developed as a separate standard.
On 28 May 2019, the W3C announced that WHATWG would be the sole publisher of the HTML and DOM standards. The W3C and WHATWG had been publishing competing standards since 2012. While the W3C standard was identical to the WHATWG version in 2007, the two standards have since progressively diverged due to different design decisions. The WHATWG "Living Standard" had been the de facto web standard for some time.
HTML markup consists of several key components, including those called "tags" (and their "attributes"), character-based "data types", "character references" and "entity references". HTML tags most commonly come in pairs like <h1> and </h1>, although some represent "empty elements" and so are unpaired, for example <img>. The first tag in such a pair is the "start tag", and the second is the "end tag" (they are also called "opening tags" and "closing tags").
Another important component is the HTML "document type declaration", which triggers standards mode rendering.
The following is an example of the classic "Hello, World!" program:
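A minimal sketch of such a document (the title and division content are illustrative):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>This is a title</title>
  </head>
  <body>
    <div>
      <p>Hello, World!</p>
    </div>
  </body>
</html>
```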
The text between <html> and </html> describes the web page, and the text between <body> and </body> is the visible page content. The markup text <title>This is a title</title> defines the browser page title, and the tag <div> defines a division of the page used for easy styling.
The Document Type Declaration is for HTML5. If a declaration is not included, various browsers will revert to "quirks mode" for rendering.
HTML documents imply a structure of nested HTML elements. These are indicated in the document by HTML "tags", enclosed in angle brackets thus: <p>.
In the simple, general case, the extent of an element is indicated by a pair of tags: a "start tag" <p> and an "end tag" </p>. The text content of the element, if any, is placed between these tags.
Tags may also enclose further tag markup between the start and end, including a mixture of tags and text. This indicates further (nested) elements, as children of the parent element.
The start tag may also include "attributes" within the tag. These indicate other information, such as identifiers for sections within the document, identifiers used to bind style information to the presentation of the document, and for some tags such as the <img> used to embed images, the reference to the image resource, in a format like this: <img src="example.com/example.jpg">
Some elements, such as the line break <br> (or <br />), do not permit "any" embedded content, either text or further tags. These require only a single empty tag (akin to a start tag) and do not use an end tag.
Many tags, particularly the closing end tag for the very commonly used paragraph element <p>, are optional. An HTML browser or other agent can infer the closure for the end of an element from the context and the structural rules defined by the HTML standard. These rules are complex and not widely understood by most HTML coders.
The general form of an HTML element is therefore: <tag attribute1="value1" attribute2="value2">content</tag>. Some HTML elements are defined as "empty elements" and take the form <tag attribute1="value1">. Empty elements may enclose no content, for instance, the <br> tag or the inline <img> tag.
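These forms can be sketched concretely (attribute values here are illustrative):

```html
<!-- A normal element: start tag, content, end tag -->
<p id="intro" class="lead">An introductory paragraph.</p>

<!-- Empty elements: a single tag with no content and no end tag -->
<br>
<img src="photo.jpg" alt="A sample photo">
```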
The name of an HTML element is the name used in the tags.
Note that the end tag's name is preceded by a slash character, /, and that in empty elements the end tag is neither required nor allowed.
If attributes are not mentioned, default values are used in each case.
Header of the HTML document: <head>...</head>. The title is included in the head, for example: <title>The Title</title>
Headings: HTML headings are defined with the <h1> to <h6> tags, with <h1> being the highest (or most important) level and <h6> the least:
<h1>Heading level 1</h1>
<h2>Heading level 2</h2>
<h3>Heading level 3</h3>
<h4>Heading level 4</h4>
<h5>Heading level 5</h5>
<h6>Heading level 6</h6>
Paragraphs: <p>Paragraph 1</p> <p>Paragraph 2</p>
Line breaks: <br>. The difference between <br> and <p> is that <br> breaks a line without altering the semantic structure of the page, whereas <p> sections the page into paragraphs. The element <br> is an "empty element" in that, although it may have attributes, it can take no content and it may not have an end tag.
<p>This is a paragraph<br>with line breaks</p>
This is a link in HTML. To create a link the <a> tag is used. The href attribute holds the URL address of the link.
<a href="https://www.wikipedia.org/">A link to Wikipedia!</a>
Inputs:
There are many possible ways a user can give input, for example through text fields, checkboxes, radio buttons and submit buttons, all created with the <input> element.
Comments: <!-- This is a comment -->
Comments can help in the understanding of the markup and do not display in the webpage.
There are several types of markup elements used in HTML:
Most of the attributes of an element are name-value pairs, separated by = and written within the start tag of an element after the element's name. The value may be enclosed in single or double quotes, although values consisting of certain characters can be left unquoted in HTML (but not XHTML). Leaving attribute values unquoted is considered unsafe. In contrast with name-value pair attributes, there are some attributes that affect the element simply by their presence in the start tag of the element, like the ismap attribute for the img element.
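Both kinds of attribute can be illustrated briefly (the element and attribute names below are standard HTML, chosen for illustration):

```html
<!-- Name-value attributes, with quoted values -->
<input type="text" name="username" id="username-field">

<!-- A boolean attribute: its mere presence in the start tag enables it -->
<input type="checkbox" name="subscribe" checked>
```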
There are several common attributes that may appear in many elements:
The abbreviation element, <abbr>, can be used to demonstrate some of these attributes:
<abbr id="anId" class="jargon" style="color:purple;" title="Hypertext Markup Language">HTML</abbr>
This example displays as HTML; in most browsers, pointing the cursor at the abbreviation should display the title text "Hypertext Markup Language."
Most elements take the language-related attribute dir to specify text direction, such as with "rtl" for right-to-left text in, for example, Arabic, Persian or Hebrew.
As of version 4.0, HTML defines a set of 252 character entity references and a set of 1,114,050 numeric character references, both of which allow individual characters to be written via simple markup, rather than literally. A literal character and its markup counterpart are considered equivalent and are rendered identically.
The ability to "escape" characters in this way allows for the characters < and & (when written as &lt; and &amp;, respectively) to be interpreted as character data, rather than markup. For example, a literal < normally indicates the start of a tag, and & normally indicates the start of a character entity reference or numeric character reference; writing it as &amp; or &#38; or &#x26; allows & to be included in the content of an element or in the value of an attribute. The double-quote character ("), when not used to quote an attribute value, must also be escaped as &quot; or &#34; or &#x22; when it appears within the attribute value itself. Equivalently, the single-quote character ('), when not used to quote an attribute value, must also be escaped as &#39; or &#x27; (or as &apos; in HTML5 or XHTML documents) when it appears within the attribute value itself. If document authors overlook the need to escape such characters, some browsers can be very forgiving and try to use context to guess their intent. The result is still invalid markup, which makes the document less accessible to other browsers and to other user agents that may try to parse the document for search and indexing purposes, for example.
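A short sketch of escaping in practice:

```html
<!-- Literal < and & written as entity references so they read as text -->
<p>3 &lt; 5 is true, and AT&amp;T is a company name.</p>

<!-- A double quote inside a double-quoted attribute value must be escaped -->
<img src="logo.png" alt="The &quot;official&quot; logo">
```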
Escaping also allows for characters that are not easily typed, or that are not available in the document's character encoding, to be represented within element and attribute content. For example, the acute-accented é (e-acute), a character typically found only on Western European and South American keyboards, can be written in any HTML document as the entity reference &eacute; or as the numeric references &#233; or &#xE9;, using characters that are available on all keyboards and are supported in all character encodings. Unicode character encodings such as UTF-8 are compatible with all modern browsers and allow direct access to almost all the characters of the world's writing systems.
HTML defines several data types for element content, such as script data and stylesheet data, and a plethora of types for attribute values, including IDs, names, URIs, numbers, units of length, languages, media descriptors, colors, character encodings, dates and times, and so on. All of these data types are specializations of character data.
HTML documents are required to start with a Document Type Declaration (informally, a "doctype"). In browsers, the doctype helps to define the rendering mode—particularly whether to use quirks mode.
The original purpose of the doctype was to enable parsing and validation of HTML documents by SGML tools based on the Document Type Definition (DTD). The DTD to which the DOCTYPE refers contains a machine-readable grammar specifying the permitted and prohibited content for a document conforming to such a DTD. Browsers, on the other hand, do not implement HTML as an application of SGML and by consequence do not read the DTD.
HTML5 does not define a DTD; therefore, in HTML5 the doctype declaration is simpler and shorter: <!DOCTYPE html>
An example of an HTML 4 doctype: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
This declaration references the DTD for the "strict" version of HTML 4.01. SGML-based validators read the DTD in order to properly parse the document and to perform validation. In modern browsers, a valid doctype activates standards mode as opposed to quirks mode.
In addition, HTML 4.01 provides Transitional and Frameset DTDs, as explained below. Transitional type is the most inclusive, incorporating current tags as well as older or "deprecated" tags, with the Strict DTD excluding deprecated tags. Frameset has all tags necessary to make frames on a page along with the tags included in transitional type.
Semantic HTML is a way of writing HTML that emphasizes the meaning of the encoded information over its presentation (look). HTML has included semantic markup from its inception, but has also included presentational markup, such as the <font>, <i> and <center> tags. There are also the semantically neutral <span> and <div> tags. Since the late 1990s, when Cascading Style Sheets were beginning to work in most browsers, web authors have been encouraged to avoid the use of presentational HTML markup with a view to the separation of presentation and content.
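The contrast can be sketched as follows (the class name is illustrative):

```html
<!-- Presentational markup: appearance encoded directly in the document -->
<font color="red"><b>Warning:</b></font> unsaved changes.

<!-- Semantic markup, with the same appearance moved into CSS -->
<style>.warning { color: red; font-weight: bold; }</style>
<strong class="warning">Warning:</strong> unsaved changes.
```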
In a 2001 discussion of the Semantic Web, Tim Berners-Lee and others gave examples of ways in which intelligent software "agents" may one day automatically crawl the web and find, filter and correlate previously unrelated, published facts for the benefit of human users. Such agents are not commonplace even now, but some of the ideas of Web 2.0, mashups and price comparison websites may be coming close. The main difference between these web application hybrids and Berners-Lee's semantic agents lies in the fact that the current aggregation and hybridization of information is usually designed in by web developers, who already know the web locations and the API semantics of the specific data they wish to mash, compare and combine.
An important type of web agent that does crawl and read web pages automatically, without prior knowledge of what it might find, is the web crawler or search-engine spider. These software agents are dependent on the semantic clarity of web pages they find as they use various techniques and algorithms to read and index millions of web pages a day and provide web users with search facilities without which the World Wide Web's usefulness would be greatly reduced.
In order for search-engine spiders to be able to rate the significance of pieces of text they find in HTML documents, and also for those creating mashups and other hybrids as well as for more automated agents as they are developed, the semantic structures that exist in HTML need to be widely and uniformly applied to bring out the meaning of published text.
Presentational markup tags are deprecated in current HTML and XHTML recommendations. The majority of presentational features from previous versions of HTML are no longer allowed as they lead to poorer accessibility, higher cost of site maintenance, and larger document sizes.
Good semantic HTML also improves the accessibility of web documents (see also Web Content Accessibility Guidelines). For example, when a screen reader or audio browser can correctly ascertain the structure of a document, it will not waste the visually impaired user's time by reading out repeated or irrelevant information when it has been marked up correctly.
HTML documents can be delivered by the same means as any other computer file. However, they are most often delivered either by HTTP from a web server or by email.
The World Wide Web is composed primarily of HTML documents transmitted from web servers to web browsers using the Hypertext Transfer Protocol (HTTP). However, HTTP is used to serve images, sound, and other content, in addition to HTML. To allow the web browser to know how to handle each document it receives, other information is transmitted along with the document. This meta data usually includes the MIME type (e.g., text/html or application/xhtml+xml) and the character encoding (see Character encoding in HTML).
In modern browsers, the MIME type that is sent with the HTML document may affect how the document is initially interpreted. A document sent with the XHTML MIME type is expected to be well-formed XML; syntax errors may cause the browser to fail to render it. The same document sent with the HTML MIME type might be displayed successfully, since some browsers are more lenient with HTML.
The W3C recommendations state that XHTML 1.0 documents that follow guidelines set forth in the recommendation's Appendix C may be labeled with either MIME Type. XHTML 1.1 also states that XHTML 1.1 documents should be labeled with either MIME type.
Most graphical email clients allow the use of a subset of HTML (often ill-defined) to provide formatting and semantic markup not available with plain text. This may include typographic information like coloured headings, emphasized and quoted text, inline images and diagrams. Many such clients include both a GUI editor for composing HTML e-mail messages and a rendering engine for displaying them. Use of HTML in e-mail is criticized by some because of compatibility issues, because it can help disguise phishing attacks, because of accessibility issues for blind or visually impaired people, because it can confuse spam filters and because the message size is larger than plain text.
The most common filename extension for files containing HTML is .html. A common abbreviation of this is .htm, which originated because some early operating systems and file systems, such as DOS and the limitations imposed by FAT data structure, limited file extensions to three letters.
An HTML Application (HTA; file extension ".hta") is a Microsoft Windows application that uses HTML and Dynamic HTML in a browser to provide the application's graphical interface. A regular HTML file is confined to the security model of the web browser's security, communicating only to web servers and manipulating only web page objects and site cookies. An HTA runs as a fully trusted application and therefore has more privileges, like creation/editing/removal of files and Windows Registry entries. Because they operate outside the browser's security model, HTAs cannot be executed via HTTP, but must be downloaded (just like an EXE file) and executed from local file system.
Since its inception, HTML and its associated protocols gained acceptance relatively quickly. However, no clear standards existed in the early years of the language. Though its creators originally conceived of HTML as a semantic language devoid of presentation details, practical uses pushed many presentational elements and attributes into the language, driven largely by the various browser vendors. The latest standards surrounding HTML reflect efforts to overcome the sometimes chaotic development of the language and to create a rational foundation for building both meaningful and well-presented documents. To return HTML to its role as a semantic language, the W3C has developed style languages such as CSS and XSL to shoulder the burden of presentation. In conjunction, the HTML specification has slowly reined in the presentational elements.
There are two axes differentiating the variations of HTML as currently specified: SGML-based HTML versus XML-based HTML (referred to as XHTML) on one axis, and strict versus transitional (loose) versus frameset on the other axis.
One difference in the latest HTML specifications lies in the distinction between the SGML-based specification and the XML-based specification. The XML-based specification is usually called XHTML to distinguish it clearly from the more traditional definition. However, the root element name continues to be "html" even in the XHTML-specified HTML. The W3C intended XHTML 1.0 to be identical to HTML 4.01 except where limitations of XML over the more complex SGML require workarounds. Because XHTML and HTML are closely related, they are sometimes documented in parallel. In such circumstances, some authors conflate the two names as (X)HTML or X(HTML).
Like HTML 4.01, XHTML 1.0 has three sub-specifications: strict, transitional and frameset.
Aside from the different opening declarations for a document, the differences between an HTML 4.01 and XHTML 1.0 document—in each of the corresponding DTDs—are largely syntactic. The underlying syntax of HTML allows many shortcuts that XHTML does not, such as elements with optional opening or closing tags, and even empty elements which must not have an end tag. By contrast, XHTML requires all elements to have an opening tag and a closing tag. XHTML, however, also introduces a new shortcut: an XHTML tag may be opened and closed within the same tag, by including a slash before the end of the tag like this: <br/>. The introduction of this shorthand, which is not used in the SGML declaration for HTML 4.01, may confuse earlier software unfamiliar with this new convention. A fix for this is to include a space before closing the tag, as such: <br />.
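The syntactic difference can be sketched side by side:

```html
<!-- Valid HTML 4.01: optional end tags omitted, bare empty element -->
<p>First paragraph
<p>Second paragraph<br>

<!-- Equivalent XHTML 1.0: every element explicitly closed; empty elements
     self-closed, with a space before the slash for older-browser compatibility -->
<p>First paragraph</p>
<p>Second paragraph<br /></p>
```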
To understand the subtle differences between HTML and XHTML, consider the transformation of a valid and well-formed XHTML 1.0 document that adheres to Appendix C (see below) into a valid HTML 4.01 document. To make this translation requires the following steps:
Those are the main changes necessary to translate a document from XHTML 1.0 to HTML 4.01. To translate from HTML to XHTML would also require the addition of any omitted opening or closing tags. Whether coding in HTML or XHTML it may just be best to always include the optional tags within an HTML document rather than remembering which tags can be omitted.
A well-formed XHTML document adheres to all the syntax requirements of XML. A valid document adheres to the content specification for XHTML, which describes the document structure.
The W3C recommends several conventions to ensure an easy migration between HTML and XHTML (see HTML Compatibility Guidelines). The following steps can be applied to XHTML 1.0 documents only:
By carefully following the W3C's compatibility guidelines, a user agent should be able to interpret the document equally as HTML or XHTML. For documents that are XHTML 1.0 and have been made compatible in this way, the W3C permits them to be served either as HTML (with a text/html MIME type), or as XHTML (with an application/xhtml+xml or application/xml MIME type). When delivered as XHTML, browsers should use an XML parser, which adheres strictly to the XML specifications for parsing the document's contents.
HTML 4 defined three different versions of the language: Strict, Transitional (once called Loose) and Frameset. The Strict version is intended for new documents and is considered best practice, while the Transitional and Frameset versions were developed to make it easier to transition documents that conformed to older HTML specification or didn't conform to any specification to a version of HTML 4. The Transitional and Frameset versions allow for presentational markup, which is omitted in the Strict version. Instead, cascading style sheets are encouraged to improve the presentation of HTML documents. Because XHTML 1 only defines an XML syntax for the language defined by HTML 4, the same differences apply to XHTML 1 as well.
The Transitional version allows the following parts of the vocabulary, which are not included in the Strict version:
The Frameset version includes everything in the Transitional version, as well as the frameset element (used instead of body) and the frame element.
In addition to the above transitional differences, the frameset specifications (whether XHTML 1.0 or HTML 4.01) specify a different content model, with frameset replacing body, that contains either frame elements, or optionally noframes with a body.
As this list demonstrates, the loose versions of the specification are maintained for legacy support. However, contrary to popular misconceptions, the move to XHTML does not imply a removal of this legacy support. Rather the X in XML stands for extensible and the W3C is modularizing the entire specification and opening it up to independent extensions. The primary achievement in the move from XHTML 1.0 to XHTML 1.1 is the modularization of the entire specification. The strict version of HTML is deployed in XHTML 1.1 through a set of modular extensions to the base XHTML 1.1 specification. Likewise, someone looking for the loose (transitional) or frameset specifications will find similar extended XHTML 1.1 support (much of it is contained in the legacy or frame modules). The modularization also allows for separate features to develop on their own timetable. So for example, XHTML 1.1 will allow quicker migration to emerging XML standards such as MathML (a presentational and semantic math language based on XML) and XForms—a new highly advanced web-form technology to replace the existing HTML forms.
In summary, the HTML 4 specification primarily reined in all the various HTML implementations into a single clearly written specification based on SGML. XHTML 1.0 ported this specification, as is, to the new XML-defined specification. Next, XHTML 1.1 took advantage of the extensible nature of XML and modularized the whole specification. XHTML 2.0 was intended to be the first step in adding new features to the specification in a standards-body-based approach.
The HTML Living Standard, which is developed by WHATWG, is the official version, while the W3C no longer publishes HTML5 as a separate standard.
There are some WYSIWYG editors (What You See Is What You Get), in which the user lays out everything as it is to appear in the HTML document using a graphical user interface (GUI), often similar to word processors. The editor renders the document rather than show the code, so authors do not require extensive knowledge of HTML.
The WYSIWYG editing model has been criticized, primarily because of the low quality of the generated code; there are voices advocating a change to the WYSIWYM model (What You See Is What You Mean).
WYSIWYG editors remain a controversial topic because of their perceived flaws.
Hector
In Greek mythology and Roman mythology, Hector (Greek: Ἕκτωρ, "Héktōr") was a Trojan prince and the greatest fighter for Troy in the Trojan War. He acted as leader of the Trojans and their allies in the defence of Troy, "killing 31,000 Greek fighters." He was ultimately killed by Achilles.
In Greek, Ἕκτωρ is a derivative of the verb ἔχειν ("ékhein"), archaic form *"hékhein" ('to have' or 'to hold'), from Proto-Indo-European *"seĝh-" ('to hold'). Ἕκτωρ ("Héktōr"), or Ἕκτορ ("Héktor") as found in Aeolic poetry, is also an epithet of Zeus in his capacity as 'he who holds [everything together]'. Hector's name could thus be taken to mean 'holding fast'.
Hector was the first-born son of King Priam, a descendant of Dardanus and Tros, the founder of Troy, and of Queen Hecuba. In some accounts, his father was the god Apollo. He was a prince of the royal house and the heir apparent to his father's throne. He was married to Andromache, with whom he had an infant son, Scamandrius (whom the people of Troy called Astyanax).
During the European Middle Ages, Hector figures as one of the Nine Worthies noted by Jacques de Longuyon, known not only for his courage but also for his noble and courtly nature. Indeed, Homer places Hector as peace-loving, thoughtful as well as bold, a good son, husband and father, and without darker motives. James Redfield describes Hector as a "martyr to loyalties, a witness to the things of this world, a hero ready to die for the precious imperfections of ordinary life."
According to the "Iliad", Hector did not approve of war between the Greeks and the Trojans.
For ten years, the Achaeans besieged Troy and its allies in the east. Hector commanded the Trojan army, with a number of subordinates including Polydamas, and his brothers Deiphobus, Helenus and Paris. By all accounts, Hector was the best warrior the Trojans and their allies could field, and his fighting prowess was admired by Greeks and his own people alike.
Diomedes and Odysseus, when faced with his attack, described him as, in Robert Fagles' translation, an 'incredible dynamite' and a 'maniac'.
In the "Iliad", Hector's exploits in the war prior to the events of the book are recapitulated. He had fought the Greek champion Protesilaus in single combat at the start of the war and killed him. A prophecy had stated that the first Greek to land on Trojan soil would die. Thus, Protesilaus, Ajax, and Odysseus would not land. Finally, Odysseus threw his shield out and landed on that, and Protesilaus jumped next from his own ship. In the ensuing fight, Hector killed him, fulfilling the prophecy.
As described by Homer in the Iliad, Hector, acting on the advice of his brother Helenus (who was divinely inspired) and told by him that he was not destined to die yet, managed to get both armies seated and challenged any one of the Greek warriors to single combat. The Argives were initially reluctant to accept the challenge. However, after Nestor's chiding, nine Greek heroes stepped up to the challenge and drew by lot to see who was to face Hector. Ajax won and fought Hector. Hector was unable to pierce Ajax's famous shield, but Ajax crushed Hector's shield with a rock and stabbed through his armor with a spear, drawing blood, upon which the god Apollo intervened and the duel was ended as the sun was setting. Hector gave Ajax his sword, which Ajax later used to kill himself. Ajax gave Hector his girdle, which Achilles later attached to his chariot to drag Hector's corpse around the walls of Troy.
The Greeks and the Trojans made a truce to bury the dead. In the early dawn the next day, the Greeks took advantage of the truce to build a wall and ditch around the ships while Zeus watched in the distance.
Another mention of Hector's exploits in the early years of the war is given in the "Iliad", in Book IX. During the embassy to Achilles, Odysseus, Phoenix and Ajax all try to persuade Achilles to rejoin the fight. In his response, Achilles points out that while Hector was now terrorizing the Greek forces, when he himself had fought in their front lines Hector had 'no wish' to take his force far beyond the walls and out from the Skaian Gate and the nearby oak tree. He then claims, 'There he stood up to me alone one day, and he barely escaped my onslaught.'
Another duel took place when Hector, aided by Aeneas (his cousin) and Deiphobus, rushed to try to save his brother Troilus from Achilles' hands. But he came too late: Troilus had already perished. All Hector could do was take up the lifeless body of Troilus while Achilles escaped, fighting his way through the Trojan reinforcements.
In the tenth year of the war, observing Paris avoiding combat with Menelaus, Hector upbraids him with having brought trouble on his whole country and now refusing to fight. Paris therefore proposes single combat between himself and Menelaus, with Helen to go to the victor, ending the war. The duel, however, proves inconclusive because of the intervention of Aphrodite, who leads Paris off the field. After Pandarus wounds Menelaus with an arrow, the fight begins again.
The Greeks attack and drive the Trojans back. Hector must now go out to lead a counter-attack. According to Homer, his wife Andromache, carrying in her arms her son Astyanax, intercepts Hector at the gate, pleading with him not to go out for her sake as well as his son's. Hector knows that Troy and the house of Priam are doomed to fall and that the gloomy fate of his wife and infant son will be to die or go into slavery in a foreign land. With understanding, compassion, and tenderness he explains that he cannot personally refuse to fight, and comforts her with the idea that no one can take him until it is his time to go. The gleaming bronze helmet frightens Astyanax and makes him cry. Hector takes it off, embraces his wife and son, and prays aloud to Zeus on his son's behalf that the boy might be chief after him, become more glorious in battle than he, bring home the blood of his enemies, and make his mother proud. Once he leaves for battle, those in the house begin to mourn, knowing he will not return. Hector and Paris pass through the gate and rally the Trojans, raising havoc among the Greeks.
Zeus weighs the fates of the two armies in the balance, and that of the Greeks sinks down. The Trojans press the Greeks into their camp over the ditch and wall and would have laid hands on the ships, but Agamemnon rallies the Greeks in person. The Trojans are driven off, night falls, and Hector resolves to take the camp and burn the ships the next day. The Trojans bivouac in the field.
The next day Agamemnon rallies the Greeks and drives the Trojans back.
Hector refrains from battle until Agamemnon leaves the field, wounded in the arm by a spear. Then Hector rallies the Trojans.
Diomedes and Odysseus hinder Hector and win the Greeks some time to retreat, but the Trojans sweep down upon the wall and rain blows upon it. The Greeks in the camp contest the gates to secure entrance for their fleeing warriors. The Trojans try to pull down the ramparts while the Greeks rain arrows upon them. Hector smashes open a gate with a large stone, clears the gate and calls on the Trojans to scale the wall, which they do.
The battle rages inside the camp. Hector goes down, hit by a stone thrown by Ajax, but Apollo arrives from Olympus and infuses strength into "the shepherd of the people", who orders a chariot attack, with Apollo clearing the way. Many combats, deaths, boasts, threats, epithets, figures of speech, stories, lines of poetry and books of the "Iliad" later, Hector lays hold of Protesilaus' ship and calls for fire. The Trojans cannot bring it to him, as Ajax kills everyone who tries. Eventually, Hector breaks Ajax's spear with his sword, forcing him to give ground, and sets the ship afire.
These events are all according to the will of the gods, who have decreed the fall of Troy, and therefore intend to tempt Achilles back into the war. Patroclus, Achilles' closest companion, disguised in the armor of Achilles, enters the combat leading the Myrmidons and the rest of the Achaeans to force a Trojan withdrawal. After Patroclus has routed the Trojan army, Hector, with the aid of Apollo and Euphorbus, kills Patroclus, vaunting over him.
The dying Patroclus foretells Hector's death.
Hector strips the armor of Achilles off the fallen Patroclus and gives it to his men to take back to the city. Glaucus accuses Hector of cowardice for not challenging Ajax. Stung, Hector calls for the armor, puts it on, and uses it to rally the Trojans. Zeus regards the donning of a hero's armor as an act of insolence by a fool about to die, but it makes Hector strong for now.
The next day, the enraged Achilles renounces the wrath that kept him out of action and routs the Trojans, forcing them back to the city. Hector chooses to remain outside the gates of Troy to face Achilles, partly because had he listened to Polydamas and retreated with his troops the previous night, Achilles would not have killed so many Trojans. When he sees Achilles, however, Hector is seized by fear and turns to flee. Achilles chases him around the city three times before Hector masters his fear and turns to face Achilles. But Athena, in the disguise of Hector's brother Deiphobus, has deluded Hector. He asks Achilles that the victor return the other's body after the duel (though Hector himself had made it clear he planned to throw the body of Patroclus to the dogs), but Achilles refuses. Achilles hurls his spear at Hector, who dodges it, but Athena brings it back to Achilles' hands without Hector noticing. Hector then throws his own spear at Achilles; it hits his shield and does no injury. When Hector turns to face his supposed brother to retrieve another spear, he sees no one there. At that moment he realizes that he is doomed. Hector decides that he will go down fighting and that men will talk about his bravery in years to come. The desire to achieve everlasting honor was among the fiercest motivations for soldiers living in the timocratic (honor-based) society of the age.
Hector pulls out his sword, now his only weapon, and charges. But Achilles recovers his thrown spear, returned to him by the unseen Athena, who wears the helmet of Hades. Achilles then aims and pierces Hector's collarbone, the only part the stolen armor of Achilles leaves unprotected. The wound is fatal yet allows Hector to speak to Achilles. In his final moments, Hector begs Achilles for an honorable funeral, but Achilles replies that he will let the dogs and vultures devour Hector's flesh. (Throughout the Homeric poems, several references are made to dogs, vultures, and other creatures that devour the dead; it is another way of saying that one will die.) Hector dies, prophesying that Achilles' death will follow soon:
Be careful now; for I might be made into the gods' curse ... upon you, on that day when Paris and Phoibos Apollo ... destroy you in the Skaian gates, for all your valor.
After his death, Achilles slits Hector's heels and passes the girdle that Ajax had given Hector through the slits. He then fastens the girdle to his chariot and drives his fallen enemy through the dust to the Danaan camp. For the next twelve days, Achilles mistreats the body, but it remains preserved from all injury by Apollo and Aphrodite. After these twelve days, the gods can no longer stand watching it and send down two messengers: Iris, the messenger goddess, and Thetis, the mother of Achilles. Thetis tells Achilles to allow King Priam to come and take the body for ransom. Once King Priam has been notified that Achilles will allow him to claim the body, he goes to his strongroom to withdraw the ransom. The ransom King Priam offers includes twelve fine robes, twelve white mantles, several richly embroidered tunics, ten bars of yellow gold, a special gold cup, and several cauldrons. Priam himself goes to claim his son's body, and Hermes grants him safe passage by casting a charm that will make anyone who looks at him fall asleep.
Achilles, moved by Priam's actions and following his mother's orders sent by Zeus, returns Hector's body to Priam and promises him a truce of twelve days to allow the Trojans to perform funeral rites for Hector. Priam returns to Troy with the body of his son, and it is given full funeral honors. Even Helen mourns Hector, for he had always been kind to her and protected her from spite. The last lines of the "Iliad" are dedicated to Hector's funeral. Homer concludes by referring to the Trojan prince as the "Breaker of Horses."
In Virgil's "Aeneid", the dead Hector appears to Aeneas in a dream urging him to flee Troy.
The most valuable historical evidence for the Trojan War consists of treaties and letters mentioned in Hittite cuneiform texts of the same approximate era, which mention an unruly Western Anatolian warlord named "Piyama-Radu" (possibly Priam) and his successor "Alaksandu" (possibly Alexander, the nickname of Paris), both based in "Wilusa" (possibly Ilion/Ilios), as well as the god "Apaliunas" (possibly Apollo).
Other such pieces of evidence are the names of Trojan heroes in Linear B tablets. Twenty of fifty-eight men's names are also known from Homer as Trojan warriors, including "E-ko-to" (Hector), and some, including Hector, appear in a servile capacity. No conclusion that they are the offspring of Trojan captive women is warranted, however. Generally one must be content with the knowledge that these names existed in Greek in Mycenaean times, although Page hypothesizes that Hector "may very well be ... a familiar Greek form impressed on a similar-sounding foreign name."
When Pausanias visited Thebes in Boeotia, in the second century AD, he was shown Hector's tomb and was told that the bones had been transported to Thebes according to a Delphic oracle. Moses I. Finley observes "this typical bit of fiction must mean that there was an old Theban hero Hector, a Greek, whose myths antedated the Homeric poems. Even after Homer had located Hector in Troy for all time, the Thebans held on to their hero, and the Delphic oracle provided the necessary sanction."
The pseudepigraphical writer Dares Phrygius states that Hector "spoke with a slight lisp. His complexion was fair, his hair curly. His eyes would blink attractively. His movements were swift. His face, with its beard, was noble. He was handsome, fierce, and high-spirited, merciful to the citizens, and deserving of love."
Hera
Hera ("Hērā"; "Hērē" in Ionic and Homeric Greek) is the goddess of women, marriage, family, and childbirth in ancient Greek religion and myth, one of the Twelve Olympians and the sister and wife of Zeus. She is the daughter of the Titans Cronus and Rhea. Hera rules over Mount Olympus as queen of the gods. A matronly figure, Hera served as both the patroness and protectress of married women, presiding over weddings and blessing marital unions. One of Hera's defining characteristics is her jealous and vengeful nature against Zeus' numerous lovers and illegitimate offspring, as well as the mortals who cross her.
Hera is commonly seen with the animals she considers sacred, including the cow, the lion, and the peacock. Portrayed as majestic and solemn, often enthroned, and crowned with the "polos" (a high cylindrical crown worn by several of the Great Goddesses), Hera may hold a pomegranate in her hand, emblem of fertile blood and death and a substitute for the narcotic capsule of the opium poppy. Scholar of Greek mythology Walter Burkert writes in "Greek Religion", "Nevertheless, there are memories of an earlier aniconic representation, as a pillar in Argos and as a plank in Samos."
Her Roman counterpart is Juno.
The name of Hera has several possible and mutually exclusive etymologies; one possibility is to connect it with Greek ὥρα ("hōra"), "season", interpreting it as "ripe for marriage"; according to Plato it derives from ἐρατή ("eratē"), "beloved", as Zeus is said to have married her for love. According to Plutarch, Hera was an allegorical name and an anagram of "aēr" (ἀήρ, "air"). So begins the section on Hera in Walter Burkert's "Greek Religion". In a note, he records other scholars' arguments "for the meaning Mistress as a feminine to "Heros", Master." John Chadwick, a decipherer of Linear B, remarks "her name may be connected with "hērōs", ἥρως, 'hero', but that is no help since it too is etymologically obscure." A. J. van Windekens offers "young cow, heifer", which is consonant with Hera's common epithet βοῶπις ("boōpis", "cow-eyed"). R. S. P. Beekes has suggested a Pre-Greek origin. Her name is attested in Mycenaean Greek written in the Linear B syllabic script as "e-ra", appearing on tablets found in Pylos and Thebes.
Hera may have been the first deity to whom the Greeks dedicated an enclosed roofed temple sanctuary, at Samos about 800 BCE. It was replaced later by the Heraion of Samos, one of the largest of all Greek temples (altars were in front of the temples under the open sky). There were many temples built on this site so evidence is somewhat confusing and archaeological dates are uncertain.
The temple created by the Rhoecus sculptors and architects was destroyed between 570 and 560 BCE. This was replaced by the Polycratean temple of 540–530 BCE. One of these temples featured a forest of 155 columns. There is also no evidence of tiles on this temple, suggesting either that the temple was never finished or that it was open to the sky.
Earlier sanctuaries, whose dedication to Hera is less certain, were of the Mycenaean type called "house sanctuaries". Samos excavations have revealed votive offerings, many of them late 8th and 7th centuries BCE, which show that Hera at Samos was not merely a local Greek goddess of the Aegean: the museum there contains figures of gods and suppliants and other votive offerings from Armenia, Babylon, Iran, Assyria, Egypt, testimony to the reputation which this sanctuary of Hera enjoyed and to the large influx of pilgrims. Compared to this mighty goddess, who also possessed the earliest temple at Olympia and two of the great fifth and sixth century temples of Paestum, the termagant of Homer and the myths is an "almost... comic figure", according to Burkert.
Though the greatest and earliest free-standing temple to Hera was the Heraion of Samos, on the Greek mainland Hera was especially worshipped as "Argive Hera" ("Hera Argeia") at her sanctuary that stood between the former Mycenaean city-states of Argos and Mycenae, where the festivals in her honor called "Heraia" were celebrated. "The three cities I love best," the ox-eyed Queen of Heaven declares in the "Iliad", book iv, "are Argos, Sparta and Mycenae of the broad streets." There were also temples to Hera in Olympia, Corinth, Tiryns, Perachora and the sacred island of Delos. In Magna Graecia, two Doric temples to Hera were constructed at Paestum, about 550 BCE and about 450 BCE. One of them, long called the "Temple of Poseidon", was identified in the 1950s as a second temple there of Hera.
In Boeotia, the festival of the Great Daedala, sacred to Hera, was celebrated on a sixty-year cycle.
Hera's importance in the early archaic period is attested by the large building projects undertaken in her honor. The temples of Hera in the two main centers of her cult, the Heraion of Samos and the Heraion of Argos in the Argolis, were the very earliest monumental Greek temples constructed, in the 8th century BCE.
According to Walter Burkert, both Hera and Demeter have many characteristic attributes of Pre-Greek Great Goddesses.
In the same vein, British scholar Charles Francis Keary suggests that Hera had some sort of "Earth Goddess" worship in ancient times, connected to her possible origin as a Pelasgian goddess (as mentioned by Herodotus).
According to Homeric Hymn III to Delian Apollo, Hera detained Eileithyia to prevent Leto from going into labor with Artemis and Apollo, since the father was Zeus. The other goddesses present at the birthing on Delos sent Iris to bring her. As she stepped upon the island, the divine birth began. In the myth of the birth of Heracles, it is Hera herself who sits at the door, delaying the birth of Heracles until her protégé, Eurystheus, had been born first.
The Homeric Hymn to Pythian Apollo makes the monster Typhaon the offspring of archaic Hera in her Minoan form, produced out of herself, like a monstrous version of Hephaestus, and whelped in a cave in Cilicia. She gave the creature to Python to raise.
In the Temple of Hera, Olympia, Hera's seated cult figure was older than the warrior figure of Zeus that accompanied it. Homer expressed her relationship with Zeus delicately in the "Iliad", in which she declares to Zeus, "I am Cronus' eldest daughter, and am honourable not on this ground only, but also because I am your wife, and you are king of the gods."
There has been considerable scholarship, reaching back to Johann Jakob Bachofen in the mid-nineteenth century, about the possibility that Hera, whose early importance in Greek religion is firmly established, was originally the goddess of a matriarchal people, presumably inhabiting Greece before the Hellenes. In this view, her activity as goddess of marriage established the patriarchal bond of her own subordination: her resistance to the conquests of Zeus is rendered as Hera's "jealousy", the main theme of literary anecdotes that undercut her ancient cult.
However, it remains a controversial claim that an ancient matriarchy or a cultural focus on a monotheistic Great Goddess existed among the ancient Greeks or elsewhere. The claim is generally rejected by modern scholars as insufficiently evidenced.
Hera is the daughter of the youngest Titan Cronus and his wife, and sister, Rhea. Cronus was fated to be overthrown by one of his children; to prevent this, he swallowed all of his newborn children whole until Rhea tricked him into swallowing a stone instead of her youngest child, Zeus. Zeus was raised in secret and, once grown, tricked his father into regurgitating his siblings, including Hera. Zeus then led the revolt against the Titans, banished them, and divided the dominion over the world with his brothers Poseidon and Hades.
Hera was best known as the matron goddess, "Hera Teleia", but she presided over weddings as well. In myth and cult, fragmentary references and archaic practices remain of the sacred marriage of Hera and Zeus. At Plataea, there was a sculpture of Hera seated as a bride by Callimachus, as well as the matronly standing Hera.
Hera was also worshipped as a virgin: there was a tradition in Stymphalia in Arcadia that there had been a triple shrine to Hera the Girl (Παις [Pais]), the Adult Woman (Τελεια [Teleia]), and the Separated (Χήρη [Chḗrē] 'Widowed' or 'Divorced'). In the region around Argos, the temple of Hera in Hermione near Argos was to Hera the Virgin. At the spring of Kanathos, close to Nauplia, Hera renewed her virginity annually, in rites that were not to be spoken of ("arrheton"). The female figure showing her "Moon" over the lake is also appropriate, as Hebe, Hera, and Hecate correspond to the new moon, full moon, and old moon in that order, otherwise personified as the Virgin of Spring, the Mother of Summer, and the destroying Crone of Autumn.
In Hellenistic imagery, Hera's chariot was pulled by peacocks, birds not known to Greeks before the conquests of Alexander. Alexander's tutor, Aristotle, refers to it as "the Persian bird." The peacock motif was revived in the Renaissance iconography that unified Hera and Juno, and which European painters focused on. A bird that had been associated with Hera on an archaic level, where most of the Aegean goddesses were associated with "their" bird, was the cuckoo, which appears in mythic fragments concerning the first wooing of a virginal Hera by Zeus.
Her archaic association was primarily with cattle, as a Cow Goddess, who was especially venerated in "cattle-rich" Euboea. On Cyprus, very early archaeological sites contain bull skulls that have been adapted for use as masks (see Bull (mythology)). Her familiar Homeric epithet "Boôpis" is always translated "cow-eyed". In this respect, Hera bears some resemblance to the Ancient Egyptian deity Hathor, a maternal goddess associated with cattle.
Hera bore several epithets in the mythological tradition.
Hera is known for her jealousy; even Zeus, commonly portrayed as fearless, feared her rages. Zeus fell in love with Hera, but she refused his first marriage proposal. Zeus then preyed on her empathy for animals and other beings: he created a thunderstorm and transformed himself into a little cuckoo. As a cuckoo, Zeus pretended to be in distress outside her window. Hera, feeling pity towards the bird, brought it inside and held it to her breast to warm it. Zeus then transformed back into himself and raped her. Hera, ashamed of being exploited, agreed to marriage with Zeus. All of nature burst into bloom for their wedding and many gifts were exchanged.
Zeus loved Hera, but he also loved Greece and often sneaked down to Earth in disguise to bear children with the mortals. He wanted many children to inherit his greatness and become great heroes and rulers of Greece. Hera's jealousy towards all of Zeus' lovers and children caused her to continuously torment them and Zeus was powerless to stop his wife. Hera was always aware of Zeus' trickery and kept very close watch over him and his excursions to Earth.
Hera "presided over the right arrangements of the marriage and is the archetype of the union in the marriage bed."
Hera is the stepmother and enemy of Heracles. The name Heracles means "Glory of Hera". There are three alternative stories about the birth of Heracles and Hera's role in preventing it. In Homer's "Iliad", when Alcmene was about to give birth to Heracles, Zeus announced to all the gods that on that day a child of his own blood would be born and rule all those around him. Hera, after requesting Zeus to swear an oath to that effect, descended from Olympus to Argos and made the wife of Sthenelus (son of Perseus) give birth to Eurystheus after only seven months, while at the same time preventing Alcmene from delivering Heracles. This resulted in the fulfilment of Zeus's oath, in that it was Eurystheus rather than Heracles who was born that day. In an alternative version mentioned in Ovid's "Metamorphoses", when Alcmene was pregnant with Zeus' child, Hera tried to prevent the birth from occurring by having Eileithyia (the Greek equivalent of Lucina) tie Alcmene's legs in knots. Her attempt was foiled when Galanthis frightened Eileithyia while she was tying Alcmene's legs, and Heracles was born. Hera then punished Galanthis by turning her into a weasel. In Pausanias' recounting, Hera sent witches (as they were called by the Thebans) to hinder Alcmene's delivery of Heracles. The witches were successful in preventing the birth until Historis, daughter of Tiresias, thought of a trick to deceive the witches. Like Galanthis, Historis announced that Alcmene had delivered her child; having been deceived, the witches went away, allowing Alcmene to give birth.
Hera's wrath against Zeus' son continued: while Heracles was still an infant, Hera sent two serpents to kill him as he lay in his cot. Heracles throttled the snakes with his bare hands and was found by his nurse playing with their limp bodies as if they were a child's toy.
One account of the origin of the Milky Way is that Zeus had tricked Hera into nursing the infant Heracles: discovering who he was, she pulled him from her breast, and a spurt of her milk formed the smear across the sky that can be seen to this day. Unlike the Greeks, the Etruscans pictured a full-grown bearded Heracles at Hera's breast: this may refer to his adoption by her when he became an immortal. He had previously wounded her severely in the breast.
When Heracles reached adulthood, Hera drove him mad, leading him to murder his family; this in turn led to his undertaking his famous labours. Hera assigned Heracles to labour for King Eurystheus at Mycenae. She attempted to make almost every one of Heracles' twelve labours more difficult. When he fought the Lernaean Hydra, she sent a crab to bite at his feet in the hopes of distracting him. Later Hera stirred up the Amazons against him when he was on one of his quests. When Heracles took the cattle of Geryon, he shot Hera in the right breast with a triple-barbed arrow: the wound was incurable and left her in constant pain, as Dione tells Aphrodite in the "Iliad", Book V. Afterwards, Hera sent a gadfly to bite the cattle, irritate them and scatter them. Hera then sent a flood which raised the water level of a river so much that Heracles could not ford the river with the cattle. He piled stones into the river to make the water shallower. When he finally reached the court of Eurystheus, the cattle were sacrificed to Hera.
Eurystheus also wanted to sacrifice the Cretan Bull to Hera. She refused the sacrifice because it reflected glory on Heracles. The bull was released and wandered to Marathon, becoming known as the Marathonian Bull.
Some myths state that in the end, Heracles befriended Hera by saving her from Porphyrion, a giant who tried to rape her during the Gigantomachy, and that she even gave her daughter Hebe as his bride. Whatever myth-making served to account for an archaic representation of Heracles as "Hera's man", it was thought suitable for the builders of the Heraion at Paestum to depict the exploits of Heracles in bas-reliefs.
When Hera discovered that Leto was pregnant and that Zeus was the father, she convinced the nature spirits to prevent Leto from giving birth on terra firma: the mainland, any island at sea, or any place under the sun. Poseidon took pity on Leto and guided her to the floating island of Delos, which was neither mainland nor a real island, and there Leto was able to give birth to her children. Afterwards, Zeus secured Delos to the bottom of the ocean. The island later became sacred to Apollo. Alternatively, Hera kidnapped Eileithyia, the goddess of childbirth, to prevent Leto from going into labor. The other gods bribed Hera with a beautiful necklace nobody could resist and she finally gave in.
Either way, Artemis was born first and then assisted with the birth of Apollo. Some versions say Artemis helped her mother give birth to Apollo for nine days. Another variation states that Artemis was born one day before Apollo, on the island of Ortygia and that she helped Leto cross the sea to Delos the next day to give birth to Apollo.
Later, Tityos attempted to rape Leto at the behest of Hera. He was slain by Artemis and Apollo.
This account of the birth of Apollo and Artemis is contradicted by Hesiod in Theogony, as the twins are born prior to Zeus’ marriage to Hera.
Hera saw a lone thundercloud and raced down in an attempt to catch Zeus with a mistress. Zeus saw her coming and transformed his new bride Io into a little snow-white cow. However, Hera was not fooled and demanded that Zeus give her the heifer as a present. Zeus could not refuse his queen without drawing suspicion so he had to give her the beautiful heifer.
Once Io was given to Hera, she tied her to a tree and sent her servant Argus to keep Io separated from Zeus. Argus was a loyal servant to Hera; he had immense strength and one hundred eyes all over his body. It was not possible to slip past Argus, since he never closed more than half his eyes at any time. Zeus, afraid of Hera's wrath, could not personally intervene, so to save Io he commanded Hermes to kill Argus, which Hermes did by lulling all one hundred eyes into eternal sleep. In Ovid's interpolation, when Hera learned of Argus' death, she took his eyes and placed them in the plumage of the peacock, her favorite animal, accounting for the eye pattern in its tail and making it the vainest of all animals. Hera, furious about Io being free and the death of Argus, sent a gadfly (Greek "oistros", compare oestrus) to sting Io as she wandered the earth. Eventually Io made it to Egypt, where the Egyptians worshiped the snow-white heifer and named her the Egyptian goddess Isis. Hera permitted Zeus to change Io back into her human form, under the condition that he never look at her again. Io, the goddess-queen of Egypt, then bore Zeus' son as the next king.
A prophecy stated that a son of the sea-nymph Thetis, with whom Zeus fell in love after gazing upon her in the oceans off the Greek coast, would become greater than his father. Possibly for this reason, Thetis was betrothed to an elderly human king, Peleus son of Aeacus, either upon Zeus' orders, or because she wished to please Hera, who had raised her. All the gods and goddesses as well as various mortals were invited to the marriage of Peleus and Thetis (the eventual parents of Achilles) and brought many gifts. Only Eris, goddess of discord, was not invited and was stopped at the door by Hermes, on Zeus' order. She was annoyed at this, so she threw from the door a gift of her own: a golden apple inscribed with the word καλλίστῃ (kallistēi, "To the fairest"). Aphrodite, Hera, and Athena all claimed to be the fairest, and thus the rightful owner of the apple.
The goddesses quarreled bitterly over it, and none of the other gods would venture an opinion favoring one, for fear of earning the enmity of the other two. They chose to place the matter before Zeus, who, not wanting to favor one of the goddesses, put the choice into the hands of Paris, a Trojan prince. After bathing in the spring of Mount Ida where Troy was situated, they appeared before Paris to have him choose. The goddesses undressed before him, either at his request or for the sake of winning. Still, Paris could not decide, as all three were ideally beautiful, so they resorted to bribes. Hera offered Paris political power and control of all of Asia, while Athena offered wisdom, fame, and glory in battle, and Aphrodite offered the most beautiful mortal woman in the world as a wife, and he accordingly chose her. This woman was Helen, who was, unfortunately for Paris, already married to King Menelaus of Sparta. The other two goddesses were enraged by this and through Helen's abduction by Paris, they brought about the Trojan War.
Hera plays a substantial role in the "Iliad", appearing in a number of books throughout the epic poem. In accordance with ancient Greek mythology, Hera's hatred towards the Trojans, which began with Paris' decision that Aphrodite was the most beautiful goddess, is seen through her support of the Greeks during the war. Throughout the epic Hera makes many attempts to thwart the Trojan army. In books 1 and 2, Hera declares that the Trojans must be destroyed. Hera persuades Athena to aid the Achaeans in battle, and Athena agrees to intervene on their behalf.
In book 5, Hera and Athena plot to harm Ares, who has been seen by Diomedes assisting the Trojans. Diomedes calls for his soldiers to fall back slowly. Hera, Ares' mother, sees Ares' interference and asks Zeus, Ares' father, for permission to drive Ares away from the battlefield. Hera encourages Diomedes to attack Ares, and he throws his spear at the god. Athena drives the spear into Ares' body, and he bellows in pain and flees to Mt. Olympus, forcing the Trojans to fall back.
In book 8, Hera tries to persuade Poseidon to disobey Zeus and help the Achaean army. He refuses, saying he does not want to go against Zeus. Determined to intervene in the war, Hera and Athena head to the battlefield. However, seeing the two depart, Zeus sends Iris to intercept them and make them return to Mt. Olympus or face grave consequences. After prolonged fighting, Hera sees Poseidon aiding the Greeks and giving them motivation to keep fighting.
In book 14 Hera devises a plan to deceive Zeus, who had decreed that the gods were not allowed to interfere in the mortal war. Because she is on the side of the Achaeans, Hera carries out the Deception of Zeus: she seduces him with help from Aphrodite and, with the help of Hypnos, tricks him into a deep sleep, so that the gods can interfere without fear of Zeus.
In book 21, Hera continues her interference with the battle as she tells Hephaestus to prevent the river from harming Achilles. Hephaestus sets the battlefield ablaze, causing the river to plead with Hera, promising that he will not help the Trojans if Hephaestus stops his attack. Hephaestus stops his assault, and Hera returns to the battlefield, where the gods begin to fight amongst themselves.
According to the urbane retelling of myth in Ovid's "Metamorphoses", for a long time, a nymph named Echo had the job of distracting Hera from Zeus' affairs by leading her away and flattering her. When Hera discovered the deception, she cursed Echo to only repeat the words of others (hence our modern word "echo").
When Hera learned that Semele, daughter of Cadmus, King of Thebes, was pregnant by Zeus, she disguised herself as Semele's nurse and persuaded the princess to insist that Zeus show himself to her in his true form. When he was compelled to do so, having sworn by the Styx, his thunder and lightning destroyed Semele. Zeus took Semele's unborn child, Dionysus, and completed its gestation sewn into his own thigh.
In another version, Dionysus was originally the son of Zeus by either Demeter or Persephone. Hera sent her Titans to rip the baby apart, from which he was called Zagreus ("Torn in Pieces"). Zeus rescued the heart; or, the heart was saved, variously, by Athena, Rhea, or Demeter. Zeus used the heart to recreate Dionysus and implant him in the womb of Semele—hence Dionysus became known as "the twice-born". Certain versions imply that Zeus gave Semele the heart to eat to impregnate her. Hera tricked Semele into asking Zeus to reveal his true form, which killed her. Dionysus later managed to rescue his mother from the underworld and have her live on Mount Olympus.
Lamia was a queen of Libya, whom Zeus loved. Hera turned her into a monster and murdered their children. Or, alternatively, she killed Lamia's children and Lamia's grief and rage turned her into a monster. Lamia was cursed with the inability to close her eyes so that she would always obsess over the image of her dead children. Zeus gave her the gift to be able to take her eyes out to rest, and then put them back in. Lamia was envious of other mothers and ate their children.
Gerana was a queen of the Pygmies who boasted she was more beautiful than Hera. The wrathful goddess turned her into a crane and proclaimed that her bird descendants should wage eternal war on the Pygmy folk.
Cydippe, a priestess of Hera, was on her way to a festival in the goddess' honor. The oxen which were to pull her cart were overdue and her sons, Biton and Cleobis, pulled the cart the entire way (45 stadia, 8 kilometers). Cydippe was impressed with their devotion to her and Hera so asked Hera to give her children the best gift a god could give a person. Hera ordained that the brothers would die in their sleep.
This honor bestowed upon the children was later cited by Solon as proof, while trying to convince Croesus, that it is impossible to judge a person's happiness until they have died after a joyous and fruitful life.
Tiresias was a priest of Zeus, and as a young man he encountered two snakes mating and hit them with a stick. He was then transformed into a woman. As a woman, Tiresias became a priestess of Hera, married and had children, including Manto. After seven years as a woman, Tiresias again found mating snakes; depending on the myth, either she made sure to leave the snakes alone this time, or, according to Hyginus, trampled on them and became a man once more.
As a result of his experiences, Zeus and Hera asked him to settle the question of which sex, male or female, experienced more pleasure during intercourse. Zeus claimed it was women; Hera claimed it was men. When Tiresias sided with Zeus, Hera struck him blind. Since Zeus could not undo what she had done, he gave him the gift of prophecy.
An alternative and less commonly told story has it that Tiresias was blinded by Athena after he stumbled onto her bathing naked. His mother, Chariclo, begged her to undo her curse, but Athena could not; she gave him prophecy instead.
At the marriage of Zeus and Hera, a nymph named Chelone was disrespectful, or refused to attend the wedding. Zeus thus turned her into a tortoise.
Hera hated Pelias because he had killed Sidero, his step-grandmother, in one of the goddess's temples. She later convinced Jason and Medea to kill Pelias. The Golden Fleece was the item that Jason needed to get his mother freed.
In Thrace, Hera and Zeus turned King Haemus and Queen Rhodope into mountains, the Balkan (Haemus Mons) and Rhodope Mountains respectively, for their hubris in comparing themselves to the gods.
When Zeus took pity on Ixion and brought him to Olympus and introduced him to the gods, instead of being grateful, Ixion grew lustful for Hera. Zeus found out about his intentions and made a cloud in the shape of Hera, which was later named Nephele, and tricked Ixion into coupling with it; from their union came Centaurus. Ixion was expelled from Olympus, and Zeus ordered Hermes to bind him to a winged, ever-spinning fiery wheel. Thus Ixion was bound to a burning solar wheel for all eternity, at first spinning across the heavens, but in later myth transferred to Tartarus.
History of the ancient Levant
The Levant is the large area in Southwest Asia, south of the Taurus Mountains, bounded by the Mediterranean Sea in the west, the Arabian Desert in the south, and Mesopotamia in the east. It stretches 400 miles north to south from the Taurus Mountains to the Sinai desert, and 70 to 100 miles east to west between the sea and the Arabian desert. The term is also sometimes used to refer to modern events or states in the region immediately bordering the eastern Mediterranean Sea: Cyprus, Israel, Jordan, Lebanon, Palestine, Syria and Hatay Province of Turkey.
The term normally does not include Anatolia (although at times Cilicia may be included), the Caucasus Mountains, Mesopotamia or any part of the Arabian Peninsula proper. The Sinai Peninsula is sometimes included, though it is more often considered an intermediate, peripheral or marginal area forming a land bridge between the Levant and northern Egypt.
Anatomically modern Homo sapiens are attested in the area of Mount Carmel, in modern-day Israel, during the Middle Paleolithic. These migrants out of Africa seem to have been unsuccessful, and Neanderthal groups in the Levant seem to have benefited from the worsening climate and replaced Homo sapiens, who were possibly confined once more to Africa.
A second move out of Africa is demonstrated by the Boker Tachtit Upper Paleolithic culture, from 52,000–50,000 BC, with humans at the Ksar Akil XXV level being modern humans. This culture bears close resemblance to the Badoshan Aurignacian culture of Iran and to the later Sebilian I culture of Egypt. Stephen Oppenheimer suggests that this reflects a movement of modern human (possibly Caucasian) groups back into North Africa at this time.
It would appear that this sets the date by which Homo sapiens Upper Paleolithic cultures began replacing the Neanderthal Levalloiso-Mousterian; Palestine was subsequently occupied by the Levanto-Aurignacian Ahmarian culture, lasting from 39,000–24,000 BC. This culture was quite successful, spreading as the Antelian culture (late Aurignacian) as far as Southern Anatolia, with the Atlitan culture.
After the Last Glacial Maximum, a new Epipaleolithic culture appeared in Southern Palestine. The appearance of the Kebaran culture, of microlithic type, implies a significant rupture in the cultural continuity of the Levantine Upper Paleolithic. The Kebaran culture, with its use of microliths, is associated with the use of the bow and arrow and the domestication of the dog. Extending from 18,000–10,500 BC, the Kebaran culture shows clear connections to the earlier microlithic cultures that used the bow and arrow and grinding stones to harvest wild grains, which developed from the Halfan culture of Egypt, itself derived from the still earlier Aterian tradition of the Sahara. Some linguists see this as the earliest arrival of Nostratic languages in the Middle East.
The Kebaran culture was quite successful and was ancestral to the later Natufian culture (12,500–9,500 BC), which extended throughout the whole of the Levantine region. These people pioneered the first sedentary settlements and may have supported themselves from fishing and the harvest of wild grains, plentiful in the region at that time. The oldest remains of bread were discovered at the archaeological site of Shubayqa 1, once home to Natufian hunter-gatherers, roughly 4,000 years before the advent of agriculture.
The Natufian culture also demonstrates the earliest domestication of the dog, and the assistance of this animal in hunting and guarding human settlements may have contributed to the successful spread of this culture. In the northern Syrian and eastern Anatolian region of the Levant, Natufian culture at Cayonu and Mureybet developed the first fully agricultural culture with the addition of wild grains, later supplemented with domesticated sheep and goats, which were probably domesticated first by the Zarzian culture of Northern Iraq and Iran (which, like the Natufian culture, may also have developed from the Kebaran).
By 8500–7500 BC, the Pre-Pottery Neolithic A (PPNA) culture developed out of the earlier local tradition of Natufian in Southern Palestine, dwelling in round houses, and building the first defensive site at Tell es-Sultan (ancient Jericho) (guarding a valuable fresh water spring). This was replaced in 7500 BC by Pre-Pottery Neolithic B (PPNB), dwelling in square houses, coming from Northern Syria and the Euphrates bend.
During the period of 8500–7500 BC, another hunter-gatherer group, showing clear affinities with the cultures of Egypt (particularly the Outacha retouch technique for working stone), was in Sinai. This Harifian culture may have adopted the use of pottery from the Isnan culture and Helwan culture of Egypt (which lasted from 9000 to 4500 BC), and subsequently fused with elements from the PPNB culture during the climatic crisis of 6000 BC to form what Juris Zarins calls the Syro-Arabian pastoral technocomplex, which saw the spread of the first nomadic pastoralists in the Ancient Near East. These extended southwards along the Red Sea coast, penetrating the Arabian bifacial cultures, which became progressively more Neolithic and pastoral, and extended north and eastwards to lay the foundations for the tent-dwelling Martu and Akkadian peoples of Mesopotamia.
In the Amuq valley of Syria, PPNB culture seems to have survived, influencing cultural developments further south. Nomadic elements fused with PPNB to form the Minhata culture and Yarmukian culture, which were to spread southwards, beginning the development of the classic mixed-farming Mediterranean culture; from 5600 BC these were associated with the Ghassulian culture of the region, the first Chalcolithic culture of the Levant. This period also witnessed the development of megalithic structures, which continued into the Bronze Age.
In modern scholarship the chronology of the Bronze Age Levant is divided into Early/Proto Syrian, corresponding to the Early Bronze c. 3300 BC - 2000 BC; Old Syrian, corresponding to the Middle Bronze c. 2000 BC - 1550 BC; and Middle Syrian, corresponding to the Late Bronze c. 1550 BC - 1200 BC. The term Neo-Syria is used to designate the early Iron Age.
During the Early Syrian period, Akkadian and Eblaite split from proto-East Semitic somewhere around 3500 BC - 3000 BC, which is also the era in which the Akkadians entered Mesopotamia to the north of the Sumerians and established multiple cities. The early Syrian period was dominated by the first Eblaite kingdom (3000 BC - 2300 BC), the Kingdom of Nagar (2600 BC - 2300 BC) and the second Mariote kingdom (2500 BC - 2290 BC). The Akkadian Empire conquered large areas of the Levant and was followed by the Amorite kingdoms of the old Syrian period, ca. 2000–1600 BC, which arose in Mari, Yamkhad and Qatna. The Amorites also founded Babylon in Mesopotamia. Also following the Akkadians was the extension of the Khirbet Kerak ware culture, showing affinities with the Caucasus and possibly linked to the later appearance of the Hurrians.
Around the 17th and 16th centuries BC most of the older centers had been overrun. In the middle Syrian period the Mitanni of northern Syria for a time menaced the Hittite kingdom, but were defeated by it around the middle of the 14th century BC. The Semitic Hyksos used new military technologies to occupy Egypt, but were expelled, leaving the empire of the New Kingdom to develop in their wake. From 1550 until 1100, much of the Levant was conquered by Egypt, which in the latter half of this period contested Syria with the Hittite Empire.
At the end of the 13th century BC, all of these powers suddenly collapsed. Cities all around the eastern Mediterranean were sacked within a span of a few decades by assorted raiders. The Hittite empire was destroyed. Egypt repelled its attackers only with a major effort, and over the next century shrank to its territorial core, its central authority permanently weakened. Multiple cities on the coast, such as Ugarit and other Canaanite cities, were destroyed by the Sea Peoples.
The destruction at the end of the Bronze Age left a number of tiny kingdoms and city-states behind. A few Hittite centres remained in northern Syria, known as the Syro-Hittite states, after the main Hittite state fell in 1180 BC, along with some Phoenician ports in Canaan that escaped destruction and developed into great commercial powers. The Israelites emerged as a rural culture, mainly in the Canaanite hill-country and the Eastern Galilee, quickly spreading through the land and forming an alliance in the struggle for the land against the Philistines to the west, Moab and Ammon to the east, and Edom to the south. In the 12th century BC, most of the interior, as well as Babylonia and Upper Mesopotamia, was overrun by Arameans and Chaldeans, while the shoreline around today's Gaza Strip was settled by Philistines.
In this period a number of technological innovations spread, most notably iron working and the Phoenician alphabet, developed by the Phoenicians or the Canaanites around the 16th century BC.
During the 9th century BC, the Assyrians began to reassert themselves against the incursions of the Aramaeans, and over the next few centuries developed into a powerful and well-organised empire. Their armies were among the first to employ cavalry, which took the place of chariots, and had a reputation for both prowess and brutality. At their height, the Assyrians dominated all of the Levant, Egypt, and Babylonia. However, the empire began to collapse toward the end of the 7th century BC, and was obliterated by an alliance between a resurgent Babylonia (the Neo-Babylonian Empire) and the Iranian Medes.
After the Battle of Carchemish in 605 BC, Nebuchadnezzar II besieged Jerusalem and eventually destroyed the Temple, beginning the period of the Babylonian captivity, with deportations between 597 BC and 581 BC. The subsequent balance of power was short-lived, though. In the 550s BC the Persians revolted against the Medes and gained control of their empire, and over the next few decades annexed the realms of Lydia in Anatolia, Damascus, Babylonia, and Egypt, as well as consolidating their control over the Iranian plateau nearly as far as India. This vast kingdom was divided up into various satrapies and governed roughly according to the Assyrian model, but with a far lighter hand. Around this time Zoroastrianism became the predominant religion in Persia.
Persia controlled the Levant, but by the 4th century BC Persia had fallen into decline. The Phoenicians on the coast staged many unsuccessful rebellions against the Persians, who taxed them heavily, while the Jews fared better under Persian rule, especially after the return from exile that Cyrus the Great had granted them. The campaigns of Xenophon in 401-399 BC illustrated how vulnerable Persia had become to armies organized along Greek lines. Such an army, under the Macedonian king Alexander the Great, conquered the Levant (333-332 BC).
Alexander did not live long enough to consolidate his realm; after his death in 323 BC the greater share of the east eventually went to the descendants of Seleucus I Nicator. This period saw great innovations in mathematics, science, architecture, and the like, and Greeks founded cities throughout the east, some of which grew to be the world's first major metropolises. Hellenistic culture did not, however, reach very far into the countryside. The Seleucids brought many Greeks, in addition to the descendants of the soldiers who served in the army of Alexander, from Greek cities to populate new cities founded all over the Levant.
The Seleucids (312 to 63 BC) adopted a pro-western stance that alienated both the powerful eastern satraps and many Greeks who had migrated to the east. During the 2nd century BC, Greek culture lost ground in the Levant, and the Seleucid Empire began to break apart. The Seleucid decline continued, and the Roman Republic overran the Seleucid heartland in 65 BC. Judea, which had become independent under the Hasmonean dynasty (140 BC - 37 BC), was annexed by Rome in 63 BC and became Iudaea Province (6 - 135 AD), a period notable for the expulsion of the Jewish population from Jerusalem after the destruction of the Second Temple c. 70 AD and its dispersal across the old world.
Many philosophical schools emerged during Roman rule of the Levant, with famous teachers: most notably the Neoplatonism of Iamblichus and Porphyry, the Neopythagoreanism of Apollonius of Tyana and Numenius of Apamea, the Hellenistic Judaism of Josephus, and the Christianity of Paul of Tarsus, Justin Martyr and Ignatius of Antioch. Various Gnostic schools also prospered in this part of the world, though their success did not long outlast the rise of mainstream Christianity and Judaism; the Mandaeans are hypothesized to be descendants of these Gnostics, who sought refuge in Mesopotamia to preserve their faith, likely during the second century.
A Persian dynasty, the Sassanids (224-651), periodically clashed with Rome, and later with the Byzantine Empire. In 395 the Byzantine era began with the permanent division of the Roman Empire into Eastern and Western halves. Byzantine control over many parts of the Levant lasted until 636, when Arab armies conquered the area and it became a part of the Rashidun Caliphate.
The Byzantines reached a low point under Phocas (Byzantine Emperor from 602 to 610), with the Sassanids occupying the whole of the eastern Mediterranean. In 610, though, Heraclius took the throne in Constantinople and began a successful counter-attack, expelling the Persians and invading Media and Assyria. Unable to stop his advance, the Sassanian king Khosrau II was assassinated (628) and the Sassanid empire fell into anarchy. Weakened by their quarrels, neither the Byzantines nor the Sassanids could deal with the onslaught of the Arabs, newly unified under the banners of Islam and keen to expand their area of control. By 650 Arab forces had conquered all of Persia, Syria, and Egypt.
History of Europe
The history of Europe covers the people inhabiting the continent of Europe from prehistory to the present. During the Neolithic era and the time of the Indo-European migrations Europe saw human inflows from east and southeast and subsequent important cultural and material exchange. The period known as classical antiquity began with the emergence of the city-states of ancient Greece. Later, the Roman Empire came to dominate the entire Mediterranean basin. The fall of the Roman Empire in AD 476 traditionally marks the start of the Middle Ages. Beginning in the 14th century a Renaissance of knowledge challenged traditional doctrines in science and theology. Simultaneously, the Protestant Reformation set up Protestant churches primarily in Germany, Scandinavia and England. After 1800, the Industrial Revolution brought prosperity to Britain and Western Europe. The main European powers set up colonies in most of the Americas and Africa, and parts of Asia. In the 20th century, World War I and World War II resulted in massive numbers of deaths. The Cold War dominated European geo-politics from 1947 to 1989. After the fall of the Iron Curtain, the European countries grew together.
During the Neolithic era (starting c. 7000 BC) and the time of the Indo-European migrations (starting c. 4000 BC), Europe saw massive migrations from the east and southeast which also brought agriculture, new technologies, and the Indo-European languages, primarily through the areas of the Balkan Peninsula and the Black Sea region.
Some of the best-known civilizations of the late prehistoric Europe were the Minoan and the Mycenaean, which flourished during the Bronze Age until they collapsed in a short period of time around 1200 BC.
The period known as classical antiquity began with the emergence of the city-states of Ancient Greece. After ultimately checking the Persian advance in Europe through the Greco-Persian Wars in the 5th century BC, Greek influence reached its zenith under the expansive empire of Alexander the Great, spreading throughout Asia, Africa, and other parts of Europe. The Thracians and their kingdoms and culture were long present in Southeast Europe. The Roman Empire came to dominate the entire Mediterranean basin. By 300 AD the Roman Empire was divided into the Western and Eastern empires. During the 4th and 5th centuries, the Germanic peoples of Northern Europe, pressed by the Huns, grew in strength and led repeated attacks that resulted in the Fall of the Western Roman Empire. The Western empire's collapse in AD 476 traditionally marks the end of the classical period and the start of the Middle Ages.
In Western Europe, Germanic peoples became more powerful in the remnants of the former Western Roman Empire and established kingdoms and empires of their own. Of all of the Germanic peoples, the Franks would rise to a position of hegemony over Western Europe, the Frankish Empire reaching its peak under Charlemagne around 800. This empire was later divided into several parts; West Francia would evolve into the Kingdom of France, while East Francia would evolve into the Holy Roman Empire, a precursor to modern Germany and Italy. The British Isles were the site of several large-scale migrations.
The Byzantine Empire – the eastern part of the Roman Empire, with its capital at Constantinople – survived for the next 1000 years as the most dominant empire in Southeast Europe. The powerful and long-lived Bulgarian Empire was its main competitor in the region. Both empires were major powers in that part of Europe for centuries, each creating important cultural, political, linguistic and religious legacies that have endured through the Middle Ages to this day.
The Viking Age, a period of migrations of Scandinavian peoples, occurred from the late 8th century to the middle 11th century. The Normans, descendants of the Vikings who settled in Northern France, had a significant impact on many parts of Europe, from the Norman conquest of England to Sicily. The Rus' people founded Kievan Rus', which evolved into Russia. After 1000 the Crusades were a series of religiously motivated military expeditions originally intended to bring the Levant back under Christian rule. The Crusaders opened trade routes which enabled the merchant republics of Genoa and Venice to become major economic powers. The Reconquista, a related movement, worked to reconquer Iberia for Christendom.
Eastern Europe in the High Middle Ages was dominated by the rise and fall of the Mongol Empire. Led by Genghis Khan, the Mongols were a group of steppe nomads who established a decentralized empire which, at its height, extended from China in the east to the Black and Baltic Seas in Europe. As Mongol power waned towards the Late Middle Ages, the Grand Duchy of Moscow rose to become the strongest of the numerous Russian principalities and republics and would grow into the Tsardom of Russia in 1547. The Late Middle Ages represented a period of upheaval in Europe. The epidemic known as the Black Death and an associated famine caused demographic catastrophe in Europe as the population plummeted. Dynastic struggles and wars of conquest kept many of the states of Europe at war for much of the period. In Scandinavia, the Kalmar Union dominated the political landscape, while England fought with Scotland in the Wars of Scottish Independence and with France in the Hundred Years' War. In Central Europe, the Polish–Lithuanian Commonwealth became a large territorial empire, while the Holy Roman Empire, which was an elective monarchy, came to be dominated for centuries by the House of Habsburg. Russia continued to expand southward and eastward into former Mongol lands. In the Balkans, the Ottoman Empire overran Byzantine lands, culminating in the Fall of Constantinople in 1453, which historians mark as the end of the Middle Ages.
Beginning in the 14th century in Florence and later spreading through Europe, a Renaissance of knowledge challenged traditional doctrines in science and theology. The rediscovery of classical Greek and Roman knowledge had an enormous liberating effect on intellectuals. Simultaneously, the Protestant Reformation under German Martin Luther questioned Papal authority. Henry VIII seized control of the English Church and its lands. The European religious wars were fought between German and Spanish rulers. The Reconquista ended Muslim rule in Iberia. By the 1490s a series of oceanic explorations marked the Age of Discovery, establishing direct links with Africa, the Americas, and Asia. Religious wars continued to be fought in Europe, until the 1648 Peace of Westphalia. The Spanish crown maintained its hegemony in Europe and was the leading power on the continent until the signing of the Treaty of the Pyrenees, which ended a conflict between Spain and France that had begun during the Thirty Years' War. An unprecedented series of major wars and political revolutions took place around Europe and the world in the period between 1610 and 1700.
The Industrial Revolution began in Britain, based on coal, steam, and textile mills. Political change in continental Europe was spurred by the French Revolution under the motto "liberté, égalité, fraternité". Napoleon Bonaparte took control, made many reforms inside France, and transformed Western Europe. But his rise stimulated both nationalism and reaction and he was defeated in 1814–15 as the old royal conservatives returned to power.
The period between 1815 and 1871 saw revolutionary attempts in much of Europe (apart from Britain). They all failed however. As industrial work forces grew in Western Europe, socialism and trade union activity developed. The last vestiges of serfdom were abolished in Russia in 1861. Greece and the other Balkan nations began a long slow road to independence from the Ottoman Empire, starting in the 1820s. Italy was unified in its Risorgimento in 1860. After the Franco-Prussian War of 1870–71, Otto von Bismarck unified the German states into an empire that was politically and militarily dominant until 1914. Most of Europe scrambled for imperial colonies in Africa and Asia in the Age of Empire. Britain and France built the largest empires, while diplomats ensured there were no major wars in Europe, apart from the Crimean War of the 1850s.
The outbreak of the First World War in 1914 was precipitated by the rise of nationalism in Southeastern Europe as the Great Powers took sides. The 1917 October Revolution led the Russian Empire to become the world's first communist state, the Soviet Union. The Allies, led by Britain, France, and the United States, defeated the Central Powers, led by the German Empire and Austria-Hungary, in 1918. During the Paris Peace Conference the Big Four imposed their terms in a series of treaties, especially the Treaty of Versailles. The war's human and material devastation was unprecedented.
Germany lost its overseas empire and several provinces, had to pay large reparations, and was humiliated by the victors. They in turn had large debts to the United States. The 1920s were prosperous until 1929, when the Great Depression broke out, which led to the collapse of democracy in many European states. The Nazi regime under Adolf Hitler came to power in 1933, rearmed Germany, and along with Mussolini's Italy sought to assert itself on the continent. Other nations, which had not taken to the attractions of fascism, sought to avoid conflict and pursued a policy of appeasement, setting boundaries that Hitler continually ignored. The Second World War began. The war ended with the defeat of the Axis powers, but the threat of further conflict was recognised before the war's end: many in the US were suspicious of how the USSR would treat the peace, while in the USSR there was paranoia about US forces in Europe. Meetings between the leaders, such as at Yalta, proved inconclusive. In the closing months of the war there was a race to the finish. The territories captured from the Nazis by troops of the USSR found they had exchanged Hitler for Stalin, and the USSR would not leave those territories for forty years, claiming it needed buffer states between itself and the nascent NATO. In the west, the term Iron Curtain entered the language. The United States launched the Marshall Plan from 1948–51 and NATO in 1949, and rebuilt industrial economies that were all thriving by the 1950s. France and West Germany took the lead in forming the European Economic Community, which eventually became the European Union (EU). Secularization saw the weakening of Protestant and Catholic churches across most of Europe, except where they were symbols of anti-government resistance, as in Poland. The Revolutions of 1989 brought an end to both Soviet hegemony and communism in Eastern Europe. Germany was reunited, Europe's integration deepened, and both NATO and the EU expanded to the east.
The EU came under increasing pressure because of the worldwide recession after 2008.
"Homo erectus" migrated from Africa to Europe before the emergence of modern humans. "Homo erectus georgicus", which lived roughly 1.8 million years ago in Georgia, is the earliest hominid to have been discovered in Europe. Lézignan-la-Cèbe in France, Orce in Spain, Monte Poggiolo in Italy and Kozarnika in Bulgaria are among the oldest Palaeolithic sites in Europe.
The earliest appearance of anatomically modern people in Europe has been dated to 35,000 BC, usually referred to as the Cro-Magnon. The earliest sites in Europe are Riparo Mochi (Italy), Geissenklösterle (Germany), and Isturitz (France).
Some locally developed transitional cultures (Uluzzian in Italy and Greece, Altmühlian in Germany, Szeletian in Central Europe and Châtelperronian in the southwest) use clearly Upper Palaeolithic technologies at very early dates.
Nevertheless, the definitive advance of these technologies is made by the Aurignacian culture. The origins of this culture can be located in the Levant (Ahmarian) and Hungary (first full Aurignacian). By 35,000 BC, the Aurignacian culture and its technology had extended through most of Europe. The last Neanderthals seem to have been forced to retreat during this process to the southern half of the Iberian Peninsula.
Around 29,000 BC a new technology/culture appeared in the western region of Europe: the Gravettian. This technology/culture has been theorised to have come with migrations of people from the Balkans (see Kozarnika).
Around 16,000 BC, Europe witnessed the appearance of a new culture, known as the Magdalenian, possibly rooted in the old Gravettian. This culture soon superseded the Solutrean area and the Gravettian of mainly France, Spain, Germany, Italy, Poland, Portugal and Ukraine. The Hamburg culture prevailed in Northern Europe in the 14th and 13th millennia BC, as the Creswellian (also termed the British Late Magdalenian) did shortly after in the British Isles.
Around 12,500 BC, the Würm glaciation ended. Slowly, through the following millennia, temperatures and sea levels rose, changing the environment of prehistoric people. Nevertheless, Magdalenian culture persisted until c. 10,000 BC, when it quickly evolved into two "microlithist" cultures: Azilian (Federmesser), in Spain and southern France, and then Sauveterrian, in southern France and Tardenoisian in Central Europe, while in Northern Europe the Lyngby complex succeeded the Hamburg culture with the influence of the Federmesser group as well. Evidence of permanent settlement dates from the 8th millennium BC in the Balkans.
The Indo-European migrations started around c. 4200 BC, moving through the areas of the Black Sea and the Balkan Peninsula in Eastern and Southeastern Europe. Over the next 3,000 years the Indo-European languages expanded through Europe.
The Varna Necropolis – a burial site from 4569–4340 BC and one of the most important archaeological sites in world prehistory – yielded the oldest gold treasure (elaborate golden objects) in the world. A recently discovered treasure of golden artifacts in Durankulak, Bulgaria, appears to be 7,000 years old.
The Neolithic reached Central Europe in the 6th millennium BC and parts of Northern Europe in the 5th and 4th millennia BC.
The first well-known literate civilization in Europe was that of the Minoans. The Minoan civilization was a Bronze Age civilization that arose on the island of Crete and flourished from approximately the 27th century BC to the 15th century BC. It was rediscovered at the beginning of the 20th century through the work of the British archaeologist Arthur Evans. Will Durant referred to it as "the first link in the European chain".
The Minoans were replaced by the Mycenaean civilization which flourished during the period roughly between 1600 BC, when Helladic culture in mainland Greece was transformed under influences from Minoan Crete, and 1100 BC. The major Mycenaean cities were Mycenae and Tiryns in Argolis, Pylos in Messenia, Athens in Attica, Thebes and Orchomenus in Boeotia, and Iolkos in Thessaly. In Crete, the Mycenaeans occupied Knossos. Mycenaean settlement sites also appeared in Epirus, Macedonia, on islands in the Aegean Sea, on the coast of Asia Minor, the Levant, Cyprus and Italy. Mycenaean artefacts have been found well outside the limits of the Mycenean world.
Quite unlike the Minoans, whose society benefited from trade, the Mycenaeans advanced through conquest. Mycenaean civilization was dominated by a warrior aristocracy. Around 1400 BC, the Mycenaeans extended their control to Crete, the center of the Minoan civilization, and adapted the Minoan script (Linear A) to write their early form of Greek, producing the script known as Linear B.
The Mycenaean civilization perished with the collapse of Bronze-Age civilization on the eastern shores of the Mediterranean Sea. The collapse is commonly attributed to the Dorian invasion, although other theories describing natural disasters and climate change have been advanced as well. Whatever the causes, the Mycenaean civilization had definitively disappeared after LH III C, when the sites of Mycenae and Tiryns were again destroyed and lost their importance. This end, during the last years of the 12th century BC, came after a slow decline of the Mycenaean civilization that lasted many years before it died out. The beginning of the 11th century BC opened a new context, that of the protogeometric, the beginning of the geometric period, the "Greek Dark Ages" of traditional historiography.
The Greeks and the Romans left a legacy in Europe which is evident in European languages, thought, visual arts and law. Ancient Greece was a collection of city-states, out of which the original form of democracy developed. Athens was the most powerful and developed city, and a cradle of learning from the time of Pericles. Citizens' forums debated and legislated policy of the state, and from here arose some of the most notable classical philosophers, such as Socrates, Plato, and Aristotle, the last of whom taught Alexander the Great.
Through his military campaigns, the king of the kingdom of Macedon, Alexander, spread Hellenistic culture and learning to the banks of the River Indus. Meanwhile, the Roman Republic strengthened through victory over Carthage in the Punic Wars. Greek wisdom passed into Roman institutions, as Athens itself was absorbed under the banner of the Senate and People of Rome – SPQR.
The Romans expanded their domains from Anatolia in the east to Britannia in the west. In 44 BC, as it approached its height, its dictator Julius Caesar was murdered by senators seeking to restore the Republic. In the ensuing turmoil, Octavian (who ruled as Augustus, and as "divi filius", or Son of God, since Julius had adopted him as heir) seized the reins of power. While proclaiming the rebirth of the Republic, he ushered in the transformation of the Roman state from a republic into an empire, the Roman Empire, which lasted for more than four centuries until the fall of the Western Roman Empire.
The Hellenic civilisation was a collection of city-states or poleis with different governments and cultures that achieved notable developments in government, philosophy, science, mathematics, politics, sports, theatre and music.
The most powerful city-states were Athens, Sparta, Thebes, Corinth, and Syracuse. Athens was a powerful Hellenic city-state and governed itself with an early form of direct democracy invented by Cleisthenes; the citizens of Athens voted on legislation and executive bills themselves. Athens was the home of Socrates, Plato, and the Platonic Academy.
The Hellenic city-states established colonies on the shores of the Black Sea and the Mediterranean (Asia Minor, Sicily and Southern Italy in Magna Graecia). By the late 6th century BC, all the Greek city states in Asia Minor had been incorporated into the Persian Empire, while the latter had made territorial gains in the Balkans (such as Macedon, Thrace, Paeonia, etc.) and Eastern Europe proper as well. In the course of the 5th century BC, some of the Greek city states attempted to overthrow Persian rule in the Ionian Revolt, which failed. This sparked the first Persian invasion of mainland Greece. During the ensuing Greco-Persian Wars, namely during the Second Persian invasion of Greece, and precisely after the Battle of Thermopylae and the Battle of Artemisium, almost all of Greece to the north of the Isthmus of Corinth had been overrun by the Persians, but the Greek city states won a decisive victory at the Battle of Plataea. With the end of the Greco-Persian wars, the Persians were eventually forced to withdraw from their territories in Europe. The Greco-Persian Wars and the victory of the Greek city states directly influenced the entire further course of European history and set its tone. Some Greek city-states formed the Delian League to continue fighting Persia, but Athens' position as leader of this league led Sparta to form the rival Peloponnesian League. The Peloponnesian Wars ensued, and the Peloponnesian League was victorious. Subsequently, discontent with Spartan hegemony led to the Corinthian War and the defeat of Sparta at the Battle of Leuctra. Meanwhile, to the north, the Thracian Odrysian Kingdom ruled between the 5th century BC and the 1st century AD.
Hellenic infighting left the Greek city states vulnerable, and Philip II of Macedon united them under his control. The son of Philip II, known as Alexander the Great, invaded neighboring Persia, toppled and incorporated its domains, invaded Egypt, and went as far as India, increasing contact with the peoples and cultures of these regions and marking the beginning of the Hellenistic period.
After the death of Alexander, his empire split into multiple kingdoms ruled by his generals, the Diadochi. The Diadochi fought against each other in a series of conflicts called the Wars of the Diadochi. By the beginning of the 2nd century BC, only three major kingdoms remained: Ptolemaic Egypt, the Seleucid Empire and Macedonia. These kingdoms spread Greek culture to regions as far away as Bactria.
Much of Greek learning was assimilated by the nascent Roman state as it expanded outward from Italy, taking advantage of its enemies' inability to unite: the only challenge to Roman ascent came from the Phoenician colony of Carthage, and its defeats in the three Punic Wars marked the start of Roman hegemony. First governed by kings, then as a senatorial republic (the Roman Republic), Rome finally became an empire at the end of the 1st century BC, under Augustus and his authoritarian successors.
The Roman Empire had its centre in the Mediterranean, controlling all the countries on its shores; the northern border was marked by the Rhine and Danube rivers. Under Emperor Trajan (2nd century AD) the empire reached its maximum expansion, controlling territories including Italia, Gallia, Dalmatia, Aquitania, Britannia, Baetica, Hispania, Thrace, Macedonia, Greece, Moesia, Dacia, Pannonia, Egypt, Asia Minor, Cappadocia, Armenia, the Caucasus, North Africa, the Levant and parts of Mesopotamia. Pax Romana, a period of peace, civilisation and an efficient centralised government in the subject territories, ended in the 3rd century, when a series of civil wars undermined Rome's economic and social strength.
In the 4th century, the emperors Diocletian and Constantine were able to slow down the process of decline by splitting the empire into a Western part with a capital in Rome and an Eastern part with the capital in Byzantium, or Constantinople (now Istanbul). Whereas Diocletian severely persecuted Christianity, Constantine declared an official end to state-sponsored persecution of Christians in 313 with the Edict of Milan, thus setting the stage for the Church to become the state church of the Roman Empire in about 380.
The Roman Empire had been repeatedly attacked by invading armies from Northern Europe, and in 476 Rome finally fell. Romulus Augustus, the last Emperor of the Western Roman Empire, was deposed by the Germanic king Odoacer. The British historian Edward Gibbon argued in "The History of the Decline and Fall of the Roman Empire" (1776) that the Romans had become decadent and had lost civic virtue.
Gibbon said that the adoption of Christianity meant belief in a better life after death, and therefore made people lazy and indifferent to the present. "From the eighteenth century onward", Glen W. Bowersock has remarked, "we have been obsessed with the fall: it has been valued as an archetype for every perceived decline, and, hence, as a symbol for our own fears." It remains one of the greatest historical questions, and has a tradition rich in scholarly interest.
Some other notable dates are the Battle of Adrianople in 378, the death of Theodosius I in 395 (the last time the Roman Empire was politically unified), the crossing of the Rhine in 406 by Germanic tribes after the withdrawal of the legions to defend Italy against Alaric I, the death of Stilicho in 408, followed by the disintegration of the western legions, the death of Justinian I, the last Roman Emperor who tried to reconquer the west, in 565, and the coming of Islam after 632. Many scholars maintain that rather than a "fall", the changes can more accurately be described as a complex transformation. Over time many theories have been proposed on why the Empire fell, or whether indeed it fell at all.
When Emperor Constantine had reconquered Rome under the banner of the cross in 312, he soon afterwards issued the Edict of Milan in 313 (preceded by the Edict of Serdica in 311), declaring the legality of Christianity in the Roman Empire. In addition, Constantine officially shifted the capital of the Roman Empire from Rome to the Greek town of Byzantium, which he renamed Nova Roma – it was later named Constantinople ("City of Constantine").
In 395 Theodosius I, who had made Christianity the official religion of the Roman Empire, would be the last emperor to preside over a united Roman Empire. The empire was split into two halves: the Western Roman Empire centred in Ravenna, and the Eastern Roman Empire (later to be referred to as the Byzantine Empire) centred in Constantinople. The Roman Empire was repeatedly attacked by Hunnic, Germanic, Slavic and other “barbarian” tribes (see: Migration Period), and in 476 finally the Western part fell to the Heruli chieftain Odoacer.
Roman authority in the Western part of the empire had collapsed, and a power vacuum was left in the wake of this collapse; the central organization, institutions, laws and power of Rome had broken down, leaving many areas open to invasion by migrating tribes. Over time, feudalism and manorialism arose, two interlocking institutions that provided for division of land and labor, as well as a broad if uneven hierarchy of law and protection. These localised hierarchies were based on the bond of common people to the land on which they worked, and to a lord, who would both administer local law to settle disputes among the peasants and provide protection from outside invaders. Unlike under Roman rule, with its standard laws and military across the empire and its great bureaucracy to administer them and collect taxes, each lord (although having obligations to a higher lord) was largely sovereign in his domain. A peasant's lot could vary greatly depending on the leadership skills of the lord and his attitude toward justice for his people. Tithes or rents were paid to the lord, who in turn owed resources, and armed men in times of war, to his own lord, perhaps a regional prince. The levels of hierarchy, however, varied over time and place.
The western provinces soon came to be dominated by three great powers: first, the Franks (Merovingian dynasty) in Francia, 481–843 AD, which covered much of present-day France and Germany; second, the Visigothic kingdom, 418–711 AD, in the Iberian Peninsula (modern Spain); and third, the Ostrogothic kingdom, 493–553 AD, in Italy and parts of the western Balkans. The Ostrogoths were later replaced by the Kingdom of the Lombards, 568–774 AD. These new powers of the west built upon the Roman traditions until they evolved into a synthesis of Roman and Germanic cultures. Although these powers covered large territories, they did not have the great resources and bureaucracy of the Roman empire to control regions and localities. The ongoing invasions and boundary disputes usually meant a more risky and varying life than that under the empire. This meant that in general more power and responsibilities were left to local lords. On the other hand, it also meant more freedom, particularly in more remote areas.
In Italy, Theodoric the Great began the cultural romanization of the new world he had constructed. He made Ravenna a center of Romano-Greek culture of art and his court fostered a flowering of literature and philosophy in Latin. In Iberia, King Chindasuinth created the Visigothic Code.
In the Eastern part the dominant state was the remaining Eastern Roman Empire.
In the feudal system, new princes and kings arose, the most powerful of whom was arguably the Frankish ruler Charlemagne. In 800, Charlemagne, reinforced by his massive territorial conquests, was crowned Emperor of the Romans (Imperator Romanorum) by Pope Leo III, effectively solidifying his power in western Europe. Charlemagne's reign marked the beginning of a new Germanic Roman Empire in the west, the Holy Roman Empire. Outside his borders, new forces were gathering: the Kievan Rus' were marking out their territory, Great Moravia was growing, and the Angles and the Saxons were securing their borders.
For the duration of the 6th century, the Eastern Roman Empire was embroiled in a series of deadly conflicts, first with the Persian Sassanid Empire (see Roman–Persian Wars), followed by the onslaught of the rising Islamic Caliphate (Rashidun and Umayyad). By 650, the provinces of Egypt, Palestine and Syria were lost to the Muslim forces, followed by Hispania and southern Italy in the 7th and 8th centuries (see Muslim conquests). The Arab invasion from the east was stopped after the intervention of the Bulgarian Empire (see Khan Tervel).
The Middle Ages are commonly dated from the fall of the Western Roman Empire (or by some scholars, before that) in the 5th century to the beginning of the early modern period in the 16th century, marked by the rise of nation states, the division of Western Christianity in the Reformation, the rise of humanism in the Italian Renaissance, and the beginnings of European overseas expansion which allowed for the Columbian Exchange.
Many consider Emperor Constantine I (reigned 306–337) to be the first "Byzantine Emperor". It was he who moved the imperial capital in 324 from Nicomedia to Byzantium, which he re-founded as Constantinople, or Nova Roma ("New Rome"). The city of Rome itself had not served as the capital since the reign of Diocletian. Some date the beginnings of the Empire to the reign of Theodosius I (379–395) and Christianity's official supplanting of the pagan Roman religion, or to his death in 395, when the empire was split into two parts, with capitals in Rome and Constantinople. Others place it yet later in 476, when Romulus Augustulus, traditionally considered the last western Emperor, was deposed, thus leaving sole imperial authority with the emperor in the Greek East. Others point to the reorganisation of the empire in the time of Heraclius (c. 620), when Latin titles and usages were officially replaced with Greek versions. In any case, the changeover was gradual, and by 330, when Constantine inaugurated his new capital, the process of hellenization and increasing Christianisation was already under way. The Empire is generally considered to have ended after the fall of Constantinople to the Ottoman Turks in 1453. The Plague of Justinian was a pandemic that afflicted the Byzantine Empire, including its capital Constantinople, in the years 541–542. It is estimated that the Plague of Justinian killed as many as 100 million people across the world. It caused Europe's population to drop by around 50% between 541 and 700. It also may have contributed to the success of the Muslim conquests.
The Early Middle Ages span roughly five centuries from 500 to 1000.
In the Eastern part of Europe new dominant states formed: the Avar Khaganate (567–after 822), Old Great Bulgaria (632–668), the Khazar Khaganate (c. 650–969) and Danube Bulgaria (founded by Asparuh in 680) were constantly rivaling the hegemony of the Byzantine Empire.
From the 7th century, Byzantine history was greatly affected by the rise of Islam and the Caliphates. Muslim Arabs first invaded historically Roman territory under Abū Bakr, first Caliph of the Rashidun Caliphate, who entered Roman Syria and Roman Mesopotamia. The Byzantines and the neighboring Sasanids had been severely weakened by that time, above all by the protracted, centuries-long and frequent Byzantine–Sasanian wars, which included the climactic Byzantine–Sasanian War of 602–628. Under Umar, the second Caliph, the Muslims toppled the Sasanid Persian Empire entirely, and decisively conquered Syria and Mesopotamia, as well as Roman Palestine, Roman Egypt, and parts of Asia Minor and Roman North Africa. In the mid-7th century AD, following the Muslim conquest of Persia, Islam penetrated into the Caucasus region, parts of which would later permanently become part of Russia. The conquests, and with them the spread of Islam, continued under Umar's successors and under the Umayyad Caliphate, which conquered the rest of Mediterranean North Africa and most of the Iberian Peninsula. Over the following centuries Muslim forces took further European territory, including Cyprus, Malta, Crete, Sicily and parts of southern Italy.
The Muslim conquest of Hispania began when the Moors (Berbers and Arabs) invaded the Christian Visigothic kingdom of Hispania in the year 711, under the Berber general Tariq ibn Ziyad. They landed at Gibraltar on 30 April and worked their way northward. Tariq's forces were joined the next year by those of his Arab superior, Musa ibn Nusair. During the eight-year campaign most of the Iberian Peninsula was brought under Muslim rule – save for small areas in the northwest (Asturias) and largely Basque regions in the Pyrenees. In 711, Visigothic Hispania was very weakened because it was immersed in a serious internal crisis caused by a war of succession to the throne involving two Visigoth suitors. The Muslims took advantage of the crisis within the Hispano-Visigothic society to carry out their conquests. This territory, under the Arab name Al-Andalus, became part of the expanding Umayyad empire.
The second siege of Constantinople (717) ended unsuccessfully after the intervention of Tervel of Bulgaria, weakening the Umayyad dynasty and reducing its prestige. In 722 Don Pelayo, a nobleman of Visigothic origin, formed an army of 300 Astur soldiers to confront Munuza's Muslim troops. In the Battle of Covadonga, the Astures defeated the Arab-Moors, who decided to retire. The Christian victory marked the beginning of the Reconquista and the establishment of the Kingdom of Asturias, whose first sovereign was Don Pelayo. The conquerors intended to continue their expansion in Europe and move northeast across the Pyrenees, but were defeated by the Frankish leader Charles Martel at the Battle of Poitiers in 732. The Umayyads were overthrown in 750 by the 'Abbāsids, and, in 756, the Umayyads established an independent emirate in the Iberian Peninsula.
The Holy Roman Empire emerged around 800, as Charlemagne, King of the Franks and part of the Carolingian dynasty, was crowned by the pope as emperor. His empire based in modern France, the Low Countries and Germany expanded into modern Hungary, Italy, Bohemia, Lower Saxony and Spain. He and his father received substantial help from an alliance with the Pope, who wanted help against the Lombards. His death marked the beginning of the end of the dynasty, which collapsed entirely by 888. The fragmentation of power led to semiautonomy in the region, and has been defined as a critical starting point for the formation of states in Europe.
To the east, Bulgaria was established in 681 and became the first Slavic country. The powerful Bulgarian Empire was the main rival of Byzantium for control of the Balkans for centuries and from the 9th century became the cultural centre of Slavic Europe. The Empire created the Cyrillic script during the 9th century AD, at the Preslav Literary School, and experienced the Golden Age of Bulgarian cultural prosperity during the reign of emperor Simeon I the Great (893–927). Two states, Great Moravia and Kievan Rus', emerged among the Slavic peoples in the 9th century. In the late 9th and 10th centuries, northern and western Europe felt the burgeoning power and influence of the Vikings, who raided, traded, conquered and settled swiftly and efficiently with their advanced seagoing vessels such as the longships. The Hungarians pillaged mainland Europe, and the Pechenegs raided Bulgaria, the Rus' states and the Arab states. In the 10th century independent kingdoms were established in Central Europe, including Poland and the newly settled Kingdom of Hungary. The kingdom of Croatia also appeared in the Balkans. The subsequent period, ending around 1000, saw the further growth of feudalism, which weakened the Holy Roman Empire.
In eastern Europe, Volga Bulgaria became an Islamic state in 921, after Almış I converted to Islam under the missionary efforts of Ahmad ibn Fadlan.
Slavery in the early medieval period had mostly died out in western Europe by about the year 1000 AD, replaced by serfdom. It lingered longer in England and in peripheral areas linked to the Muslim world, where slavery continued to flourish. Church rules suppressed slavery of Christians. Most historians argue the transition was quite abrupt around 1000, but some see a gradual transition from about 300 to 1000.
The slumber of the Dark Ages was shaken by a renewed crisis in the Church. In 1054, the East–West Schism, an insoluble split, occurred between the two remaining Christian seats in Rome and Constantinople (modern Istanbul).
The High Middle Ages of the 11th, 12th, and 13th centuries saw a rapidly increasing population in Europe, which brought great social and political change from the preceding era. By 1250, the robust population increase was greatly benefiting the economy, which reached levels it would not see again in some areas until the 19th century.
From about the year 1000 onwards, Western Europe saw the last of the barbarian invasions and became more politically organized. The Vikings had settled in Britain, Ireland, France and elsewhere, whilst Norse Christian kingdoms were developing in their Scandinavian homelands. The Magyars had ceased their expansion in the 10th century, and by the year 1000, the Roman Catholic Apostolic Kingdom of Hungary was recognised in central Europe. With the brief exception of the Mongol invasions, major barbarian incursions ceased.
Bulgarian sovereignty was reestablished with the anti-Byzantine uprising of the Bulgarians and Vlachs in 1185. The crusaders invaded the Byzantine empire, captured Constantinople in 1204 and established their Latin Empire. Kaloyan of Bulgaria defeated Baldwin I, Latin emperor of Constantinople, in the Battle of Adrianople on 14 April 1205. The reign of Ivan Asen II of Bulgaria led to maximum territorial expansion and that of Ivan Alexander of Bulgaria to a Second Golden Age of Bulgarian culture. The Byzantine Empire was fully reestablished in 1261.
In the 11th century, populations north of the Alps began to settle new lands, some of which had reverted to wilderness after the end of the Roman Empire. In what is known as the "great clearances", vast forests and marshes of Europe were cleared and cultivated. At the same time settlements moved beyond the traditional boundaries of the Frankish Empire to new frontiers in Europe, beyond the Elbe river, tripling the size of Germany in the process. Crusaders founded European colonies in the Levant, the majority of the Iberian Peninsula was conquered from the Muslims, and the Normans colonised southern Italy, all part of the major population increase and resettlement pattern.
The High Middle Ages produced many different forms of intellectual, spiritual and artistic works. The most famous are the great cathedrals as expressions of Gothic architecture, which evolved from Romanesque architecture. This age saw the rise of modern nation-states in Western Europe and the ascent of the famous Italian city-states, such as Florence and Venice. The influential popes of the Catholic Church called volunteer armies from across Europe to a series of Crusades against the Seljuq Turks, who occupied the Holy Land. The rediscovery of the works of Aristotle led Thomas Aquinas and other thinkers to develop the philosophy of Scholasticism.
The Great Schism between the Western (Catholic) and Eastern (Orthodox) Christian Churches was sparked in 1054 by Pope Leo IX asserting authority over three of the seats in the Pentarchy, in Antioch, Jerusalem and Alexandria. Since the mid-8th century, the Byzantine Empire's borders had been shrinking in the face of Islamic expansion. Antioch had been wrested back into Byzantine control by 1045, but the resurgent power of the Roman successors in the West claimed a right and a duty for the lost seats in Asia and Africa. Pope Leo sparked a further dispute by defending the filioque clause in the Nicene Creed which the West had adopted customarily. The Orthodox today state that the XXVIIIth Canon of the Council of Chalcedon explicitly proclaimed the equality of the Bishops of Rome and Constantinople. The Orthodox also state that the Bishop of Rome has authority only over his own diocese and does not have any authority outside his diocese. There were other less significant catalysts for the Schism however, including variance over liturgy. The Schism of Roman Catholic and Orthodox followed centuries of estrangement between the Latin and Greek worlds.
After the East–West Schism, Western Christianity was adopted by the newly created kingdoms of Central Europe: Poland, Hungary and Bohemia. The Roman Catholic Church developed as a major power, leading to conflicts between the Pope and Emperor. The geographic reach of the Roman Catholic Church expanded enormously due to the conversions of pagan kings (Scandinavia, Lithuania, Poland, Hungary), the Christian Reconquista of Al-Andalus, and the crusades. Most of Europe was Roman Catholic in the 15th century.
Early signs of the rebirth of civilization in western Europe began to appear in the 11th century as trade started again in Italy, leading to the economic and cultural growth of independent city-states such as Venice and Florence; at the same time, nation-states began to take form in places such as France, England, Spain, and Portugal (see Reconquista for the latter two countries), although the process of their formation (usually marked by rivalry between the monarchy, the aristocratic feudal lords and the church) actually took several centuries. These new nation-states began writing in their own cultural vernaculars, instead of the traditional Latin. Notable figures of this movement include Dante Alighieri and Christine de Pizan (born Christina da Pizzano), the former writing in Italian, and the latter, although an Italian (from Venice) who relocated to France, writing in French. Elsewhere, the Holy Roman Empire, essentially based in Germany and Italy, fragmented further into a myriad of feudal principalities and small city-states, whose subjection to the emperor was only formal.
The 13th century, when the Mongol Empire came to power, is often called the "Age of the Mongols". Mongol armies expanded westward under the command of Batu Khan. Their western conquests included almost all of Russia (save Novgorod, which became a vassal) and the Kipchak-Cuman Confederation. Bulgaria, Hungary, and Poland managed to remain sovereign states. Mongolian records indicate that Batu Khan was planning a complete conquest of the remaining European powers, beginning with a winter attack on Austria, Italy and Germany, when he was recalled to Mongolia upon the death of Great Khan Ögedei. Most historians believe only his death prevented the complete conquest of Europe. The areas of Eastern Europe and most of Central Asia that were under direct Mongol rule became known as the Golden Horde. Under Uzbeg Khan, Islam became the official religion of the region in the early 14th century. The invading Mongols, together with their mostly Turkic subjects, were known as Tatars. In Russia, the Tatars ruled the various states of the Rus' through vassalage for over 300 years.
In Northern Europe, Konrad of Masovia gave Chelmno to the Teutonic Knights in 1226 as a base for a Crusade against the Old Prussians and the Grand Duchy of Lithuania. The Livonian Brothers of the Sword were defeated by the Lithuanians, so in 1237 Gregory IX merged the remainder of the order into the Teutonic Order as the Livonian Order. By the middle of the century, the Teutonic Knights completed their conquest of the Prussians before conquering and converting the Lithuanians in the subsequent decades. The order also came into conflict with the Eastern Orthodox Church of the Pskov and Novgorod Republics. In 1240 the Orthodox Novgorod army defeated the Catholic Swedes in the Battle of the Neva, and, two years later, they defeated the Livonian Order in the Battle on the Ice. The Union of Krewo in 1386 brought two major changes in the history of the Grand Duchy of Lithuania: conversion to Catholicism and the establishment of a dynastic union between the Grand Duchy of Lithuania and the Crown of the Kingdom of Poland. The union was followed by the Grand Duchy's greatest territorial expansion and by the defeat of the Teutonic Knights at the Battle of Grunwald in 1410.
The Late Middle Ages spanned the 14th and 15th centuries. Around 1300, centuries of European prosperity and growth came to a halt. A series of famines and plagues, such as the Great Famine of 1315–1317 and the Black Death, killed people in a matter of days, reducing the population of some areas by half, as many survivors fled.
Depopulation caused labor to become scarcer; the survivors were better paid and peasants could drop some of the burdens of feudalism. There was also social unrest; France and England experienced serious peasant risings including the Jacquerie and the Peasants' Revolt. At the same time, the unity of the Catholic Church was shattered by the Great Schism. Collectively these events have been called the Crisis of the Late Middle Ages.
Beginning in the 14th century, the Baltic Sea became one of the most important trade routes. The Hanseatic League, an alliance of trading cities, facilitated the absorption of vast areas of Poland, Lithuania, and Livonia into trade with other European countries. This fed the growth of powerful states in this part of Europe, including Poland-Lithuania, Hungary, Bohemia, and, later, Muscovy. The conventional end of the Middle Ages is usually associated with the fall of the city of Constantinople and of the Byzantine Empire to the Ottoman Turks in 1453. The Turks made the city the capital of their Ottoman Empire, which lasted until 1922 and included Egypt, Syria, and most of the Balkans. The Ottoman wars in Europe, also sometimes referred to as the Turkish wars, marked an essential part of the history of the continent as a whole.
At the local level, violence was extremely high by modern standards in medieval and early modern Europe. Typically, small groups would battle their neighbors, using the farm tools at hand such as knives, sickles, hammers and axes. Mayhem and death were deliberate. The vast majority of people lived in rural areas. Cities were few and small in size, but their concentration of population was conducive to violence. Long-term studies of places such as Amsterdam, Stockholm, Venice and Zurich show the same trends as rural areas. Across Europe, homicide trends (not including military actions) show a steady long-term decline. Regional differences were small, except that Italy's decline was later and slower. From approximately 1200 AD through 1800 AD, homicide rates from violent local episodes declined by a factor of ten, from approximately 32 deaths per 100,000 people to 3.2 per 100,000. In the 20th century the homicide rate fell to 1.4 per 100,000. Police forces seldom existed outside the cities; prisons only became common after 1800. Before then harsh penalties were imposed for homicide (severe whipping or execution), but they proved ineffective at controlling or reducing the insults to honor that precipitated most of the violence. The decline does not correlate with economics. Most historians attribute the trend in homicides to a steady increase in self-control of the sort promoted by Protestantism and necessitated by schools and factories.
Historian Manuel Eisner has summarized the patterns from over 300 historical studies.
The Early Modern period spans the centuries between the Middle Ages and the Industrial Revolution, roughly from 1500 to 1800, or from the discovery of the New World in 1492 to the French Revolution in 1789. The period is characterised by the rise to importance of science and increasingly rapid technological progress, secularised civic politics and the nation state. Capitalist economies began their rise. The early modern period also saw the rise and dominance of the economic theory of mercantilism. As such, the early modern period represents the decline and eventual disappearance, in much of the European sphere, of feudalism, serfdom and the power of the Catholic Church. The period includes the Renaissance, the Protestant Reformation, the disastrous Thirty Years' War, the European colonisation of the Americas and the European witch-hunts.
Despite these crises, the 14th century was also a time of great progress within the arts and sciences. A renewed interest in ancient Greek and Roman learning led to the Italian Renaissance.
The Renaissance was a cultural movement that profoundly affected European intellectual life in the early modern period. Beginning in Italy, and spreading to northern, western and central Europe over a cultural lag of some two and a half centuries, its influence affected literature, philosophy, art, politics, science, history, religion, and other aspects of intellectual inquiry.
The Italian Petrarch (Francesco di Petracco), deemed the first full-blooded Humanist, wrote in the 1330s: "I am alive now, yet I would rather have been born in another time." He was enthusiastic about Greek and Roman antiquity. In the 15th and 16th centuries the continuing enthusiasm for the ancients was reinforced by the feeling that the inherited culture was dissolving and that here was a storehouse of ideas and attitudes with which to rebuild. Matteo Palmieri wrote in the 1430s: "Now indeed may every thoughtful spirit thank God that it has been permitted to him to be born in a new age." The Renaissance was born: a new age in which learning was central.
The Renaissance was inspired by the growth in the study of Latin and Greek texts and the admiration of the Greco-Roman era as a golden age. This prompted many artists and writers to begin drawing from Roman and Greek examples for their works, but there was also much innovation in this period, especially by multi-faceted artists such as Leonardo da Vinci. The Humanists saw their repossession of a great past as a Renaissance – a rebirth of civilization itself.
Important political precedents were also set in this period. Niccolò Machiavelli's political writing in "The Prince" influenced later absolutism and realpolitik. Also important were the many patrons who ruled states and used the artistry of the Renaissance as a sign of their power.
In all, the Renaissance could be viewed as an attempt by intellectuals to study and improve the secular and worldly, both through the revival of ideas from antiquity and through novel approaches to thought – the immediate past being too "Gothic" in language, thought and sensibility.
Toward the end of the period, an era of discovery began. The growth of the Ottoman Empire, culminating in the fall of Constantinople in 1453, cut off trading possibilities with the east. Western Europe was forced to discover new trading routes, as happened with Columbus's voyage to the Americas in 1492 and Vasco da Gama's voyage around Africa to India in 1498.
The numerous wars did not prevent European states from exploring and conquering wide portions of the world, from Africa to Asia and the newly discovered Americas. In the 15th century, Portugal led the way in geographical exploration along the coast of Africa in search of a maritime route to India, followed by Spain near the close of the 15th century, the two powers dividing their exploration of the world according to the Treaty of Tordesillas in 1494. They were the first states to set up colonies in America and European trading posts (factories) along the shores of Africa and Asia, establishing the first direct European diplomatic contacts with Southeast Asian states in 1511, China in 1513 and Japan in 1542. In 1552, Russian tsar Ivan the Terrible conquered two major Tatar khanates, the Khanate of Kazan and the Astrakhan Khanate. Yermak's expedition of 1580 led to the annexation of the Tatar Siberian Khanate into Russia, and the Russians would soon after conquer the rest of Siberia, steadily expanding to the east and south over the next centuries. Oceanic explorations soon followed, as France, England and the Netherlands explored the Portuguese and Spanish trade routes into the Pacific Ocean, reaching Australia in 1606 and New Zealand in 1642.
With the development of the printing press, new ideas spread throughout Europe and challenged traditional doctrines in science and theology. Simultaneously, the Protestant Reformation under the German Martin Luther questioned Papal authority. The most common dating of the Reformation begins in 1517, when Luther published "The Ninety-Five Theses", and concludes in 1648 with the Peace of Westphalia that ended years of European religious wars.
During this period, corruption in the Catholic Church led to a sharp backlash in the Protestant Reformation. It gained many followers, especially among princes and kings seeking a stronger state by ending the influence of the Catholic Church. Other figures emerged alongside Martin Luther, such as John Calvin, whose Calvinism had influence in many countries, and King Henry VIII of England, who broke away from the Catholic Church in England and set up the Anglican Church; his daughter Queen Elizabeth I finished the organization of the church. These religious divisions brought on a wave of wars inspired and driven by religion, but also by the ambitious monarchs in Western Europe who were becoming more centralized and powerful.
The Protestant Reformation also led to a strong reform movement in the Catholic Church called the Counter-Reformation, which aimed to reduce corruption as well as to improve and strengthen Catholic dogma. Two important groups in the Catholic Church who emerged from this movement were the Jesuits, who helped keep Spain, Portugal, Poland, and other European countries within the Catholic fold, and the Oratorians of Saint Philip Neri, who ministered to the faithful in Rome, restoring their confidence in the Church of Jesus Christ that subsisted substantially in the Church of Rome. Still, the Catholic Church was somewhat weakened by the Reformation, portions of Europe were no longer under its sway and kings in the remaining Catholic countries began to take control of the church institutions within their kingdoms.
Unlike many European countries, the Polish–Lithuanian Commonwealth and Hungary were more tolerant. While still enforcing the predominance of Catholicism, they continued to allow the large religious minorities to maintain their faiths, traditions and customs. The Polish–Lithuanian Commonwealth became divided among Catholics, Protestants, Orthodox, Jews and a small Muslim population.
Another important development in this period was the growth of pan-European sentiment. Eméric Crucé (1623) came up with the idea of a European council, intended to end wars in Europe; attempts to create lasting peace had no success, although all European countries (except the Russian and Ottoman Empires, regarded as foreign) had agreed to make peace in 1518 at the Treaty of London. Many wars broke out again within a few years, and the Reformation made European peace impossible for many centuries.
Another development was the idea of 'European superiority'. The ideal of civilization was taken over from the ancient Greeks and Romans: discipline, education and living in the city were required to make people civilized; Europeans and non-Europeans were judged for their civility, and Europe regarded itself as superior to other continents. There was a movement by some, such as Montaigne, that regarded the non-Europeans as a better, more natural and primitive people. Postal services were founded all over Europe, allowing a humanistic network of intellectuals to remain interconnected across Europe despite religious divisions. However, the Roman Catholic Church banned many leading scientific works; this led to an intellectual advantage for Protestant countries, where the banning of books was organised only regionally. Francis Bacon and other advocates of science tried to create unity in Europe by focusing on the unity in nature.

In the 15th century, at the end of the Middle Ages, powerful sovereign states were appearing, built by the New Monarchs who were centralising power in France, England, and Spain. On the other hand, the Parliament in the Polish–Lithuanian Commonwealth grew in power, taking legislative rights from the Polish king. The new state power was contested by parliaments in other countries, especially England. New kinds of states emerged which were co-operation agreements among territorial rulers, cities, farmer republics and knights.
The Iberian states (Spain and Portugal) were able to dominate colonial activity in the 16th century. The Portuguese forged the first global empire in the 15th and 16th centuries, whilst during the 16th century and the first half of the 17th century the Spanish, under the crown of Castile, commanded the most powerful global empire in the world. This dominance was increasingly challenged by the British, French, and Dutch colonial efforts of the 17th and 18th centuries, along with short-lived Swedish ventures. New forms of trade and expanding horizons made new forms of government, law and economics necessary.
Colonial expansion continued in the following centuries, with some setbacks such as the successful wars of independence in the British American colonies and then later in Haiti, Mexico, Argentina, Brazil, and others amid the European turmoil of the Napoleonic Wars (Haiti was unique in also abolishing slavery). Spain had control of a large part of North America, all of Central America and a great part of South America, the Caribbean and the Philippines; Britain took the whole of Australia and New Zealand, most of India, and large parts of Africa and North America; France held parts of Canada and India (nearly all of which was lost to Britain in 1763), Indochina, large parts of Africa and the Caribbean islands; the Netherlands gained the East Indies (now Indonesia) and islands in the Caribbean; Portugal obtained Brazil and several territories in Africa and Asia; and later, powers such as Germany, Belgium, Italy and Russia acquired further colonies.
This expansion helped the economies of the colonizing countries. Trade flourished because of the modest stability the empires provided. By the late 16th century, American silver accounted for one-fifth of Spain's total budget. The European countries fought wars that were largely paid for by the money coming in from the colonies. Nevertheless, the profits of the slave trade and of the plantations of the West Indies, then the most profitable of all the British colonies, amounted to less than 5% of the British Empire's economy (though at generally higher rates of profit) at the time of the Industrial Revolution in the late 18th century.
The 17th century was an era of crisis, sometimes described as a "General Crisis". Many historians have rejected the idea, while others promote it as an invaluable insight into the warfare, politics, economics, and even art of the period. The Thirty Years' War (1618–1648) focused attention on the massive horrors that wars could bring to entire populations. The 1640s in particular saw more state breakdowns around the world than any previous or subsequent period. The Polish-Lithuanian Commonwealth, the largest state in Europe, temporarily disappeared. In addition, there were secessions and upheavals in several parts of the Spanish empire, the world's first global empire. In Britain the entire Stuart monarchy (England, Scotland, Ireland, and its North American colonies) rebelled. Political insurgency and a spate of popular revolts seldom equalled in scale shook the foundations of most states in Europe and Asia. More wars took place around the world in the mid-17th century than in almost any other period of recorded history. The crises spread far beyond Europe – for example Ming China, the most populous state in the world, collapsed. Across the Northern Hemisphere, the mid-17th century experienced almost unprecedented death rates. Geoffrey Parker, a British historian, suggests that environmental factors may have been in part to blame, especially global cooling.
The "absolute" rule of powerful monarchs such as Louis XIV (ruled France 1643–1715), Peter the Great (ruled Russia 1682–1725), Maria Theresa (ruled Habsburg lands 1740–1780) and Frederick the Great (ruled Prussia 1740–86), produced powerful centralized states, with strong armies and powerful bureaucracies, all under the control of the king.
Throughout the early part of this period, capitalism (through mercantilism) was replacing feudalism as the principal form of economic organisation, at least in the western half of Europe. The expanding colonial frontiers resulted in a Commercial Revolution. The period is noted for the rise of modern science and the application of its findings to technological improvements, which animated the Industrial Revolution after 1750.
The Reformation had profound effects on the unity of Europe. Not only were nations divided one from another by their religious orientation, but some states were torn apart internally by religious strife, avidly fostered by their external enemies. France suffered this fate in the 16th century in the series of conflicts known as the French Wars of Religion, which ended in the triumph of the Bourbon Dynasty. England avoided this fate for a while and settled down under Elizabeth I to a moderate Anglicanism. Much of modern-day Germany was made up of numerous small sovereign states under the theoretical framework of the Holy Roman Empire, which was further divided along internally drawn sectarian lines. The Polish–Lithuanian Commonwealth is notable in this time for its religious indifference and a general immunity to the horrors of European religious strife.
The Thirty Years' War was fought between 1618 and 1648, across Germany and neighbouring areas, and involved most of the major European powers except England and Russia. Beginning as a religious conflict between Protestants and Catholics in Bohemia, it quickly developed into a general war involving Catholics versus Protestants for the most part. The major impact of the war, in which mercenary armies were extensively used, was the devastation of entire regions scavenged bare by the foraging armies. Episodes of widespread famine and disease, and the breakup of family life, devastated the population of the German states and, to a lesser extent, the Low Countries, the Crown of Bohemia and northern parts of Italy, while bankrupting many of the regional powers involved. Between one-fourth and one-third of the German population perished from direct military causes or from disease and starvation, as well as postponed births.
After the Peace of Westphalia, which ended the war in favour of nations deciding their own religious allegiance, absolutism became the norm of the continent, while parts of Europe experimented with constitutions foreshadowed by the English Civil War and particularly the Glorious Revolution. European military conflict did not cease, but had less disruptive effects on the lives of Europeans. In the advanced northwest, the Enlightenment gave a philosophical underpinning to the new outlook, and the continued spread of literacy, made possible by the printing press, created new secular forces in thought.
From the Union of Krewo (see above), central and eastern Europe was dominated by the Kingdom of Poland and the Grand Duchy of Lithuania. In the 16th and 17th centuries Central and Eastern Europe was an arena of conflict for domination of the continent between Sweden, the Polish–Lithuanian Commonwealth (involved in a series of wars, like the Khmelnytsky Uprising, the Russo-Polish War, the Deluge, etc.) and the Ottoman Empire. This period saw a gradual decline of these three powers, which were eventually replaced by new enlightened absolutist monarchies: Russia, Prussia and Austria (the Habsburg Monarchy). By the turn of the 19th century they had become new powers, having divided Poland between themselves, while Sweden and Turkey had suffered substantial territorial losses to Russia and Austria respectively, as well as pauperisation.
The War of the Spanish Succession (1701–1715) was a major war with France opposed by a coalition of England, the Netherlands, the Habsburg Monarchy, and Prussia. The Duke of Marlborough commanded the English and Dutch forces to victory at the Battle of Blenheim in 1704. The main issue was whether France under King Louis XIV would take control of Spain's very extensive possessions and thereby become by far the dominant power, or be forced to share power with other major nations. After initial allied successes, the long war produced a military stalemate and ended with the Treaty of Utrecht, which was based on a balance of power in Europe. Historian Russell Weigley argues that the many wars almost never accomplished more than they cost.
Frederick the Great, king of Prussia 1740–86, modernized the Prussian army, introduced new tactical and strategic concepts, fought mostly successful wars (Silesian Wars, Seven Years' War) and doubled the size of Prussia. Frederick had a rationale based on Enlightenment thought: he fought total wars for limited objectives. The goal was to convince rival kings that it was better to negotiate and make peace than to fight him.
Russia with its numerous wars and rapid expansion (mainly toward east – i.e. Siberia, Far East – and south, to the "warm seas") was in a continuous state of financial crisis, which it covered by borrowing from Amsterdam and issuing paper money that caused inflation. Russia boasted a large and powerful army, a very large and complex internal bureaucracy, and a splendid court that rivaled Paris and London. However the government was living far beyond its means and seized Church lands, leaving organized religion in a weak condition. Throughout the 18th century Russia remained "a poor, backward, overwhelmingly agricultural, and illiterate country."
The "Enlightenment" was a powerful, widespread cultural movement of intellectuals beginning in late 17th-century Europe emphasizing the power of reason rather than tradition; it was especially favourable to science (especially Isaac Newton's physics) and hostile to religious orthodoxy (especially of the Catholic Church). It sought to analyze and reform society using reason, to challenge ideas grounded in tradition and faith, and to advance knowledge through the scientific method. It promoted scientific thought, skepticism, and intellectual interchange. The Enlightenment was a revolution in human thought. This new way of thinking was that rational thought begins with clearly stated principles, uses correct logic to arrive at conclusions, tests the conclusions against evidence, and then revises the principles in light of the evidence.
Enlightenment thinkers opposed superstition. Some Enlightenment thinkers collaborated with Enlightened despots, absolutist rulers who attempted to forcibly impose some of the new ideas about government into practice. The ideas of the Enlightenment exerted significant influence on the culture, politics, and governments of Europe.
Originating in the 17th century, it was sparked by philosophers Francis Bacon (1561–1626), Baruch Spinoza (1632–1677), John Locke (1632–1704), Pierre Bayle (1647–1706), Voltaire (1694–1778), Francis Hutcheson (1694–1746), David Hume (1711–1776) and physicist Isaac Newton (1643–1727). Ruling princes often endorsed and fostered these figures and even attempted to apply their ideas of government in what was known as enlightened absolutism. The Scientific Revolution is closely tied to the Enlightenment, as its discoveries overturned many traditional concepts and introduced new perspectives on nature and man's place within it. The Enlightenment flourished until about 1790–1800, at which point the Enlightenment, with its emphasis on reason, gave way to Romanticism, which placed a new emphasis on emotion; a Counter-Enlightenment began to increase in prominence. The Romantics argued that the Enlightenment was reductionistic insofar as it had largely ignored the forces of imagination, mystery, and sentiment.
In France, the Enlightenment was based in the salons and culminated in the great "Encyclopédie" (1751–72), edited by Denis Diderot (1713–1784) and (until 1759) Jean le Rond d'Alembert (1717–1783), with contributions by hundreds of leading intellectuals who were called "philosophes", notably Voltaire (1694–1778), Rousseau (1712–1778) and Montesquieu (1689–1755). Some 25,000 copies of the 35-volume encyclopedia were sold, half of them outside France. These new intellectual strains would spread to urban centres across Europe, notably England, Scotland, the German states, the Netherlands, Poland, Russia, Italy, Austria, and Spain, as well as Britain's American colonies.
The political ideals of the Enlightenment influenced the American Declaration of Independence, the United States Bill of Rights, the French Declaration of the Rights of Man and of the Citizen, and the Polish–Lithuanian Constitution of 3 May 1791.
Taking a long-term historical perspective, Norman Davies has argued that Freemasonry was a powerful force on behalf of Liberalism and Enlightenment ideas in Europe, from about 1700 to the 20th century. It expanded rapidly during the Age of Enlightenment, reaching practically every country in Europe. Prominent members included Montesquieu, Voltaire, Sir Robert Walpole, Wolfgang Amadeus Mozart, Johann Wolfgang von Goethe, Benjamin Franklin, and George Washington. Steven C. Bullock notes that in the late 18th century, English lodges were headed by the Prince of Wales, Prussian lodges by King Frederick the Great, and French lodges by royal princes. Emperor Napoleon selected his own brother as Grand Master of France.
The great enemy of Freemasonry was the Roman Catholic Church, so that in countries with a large Catholic element, such as France, Italy, Austria, Spain and Mexico, much of the ferocity of the political battles involved the confrontation between supporters of the Church and active Masons. 20th-century totalitarian and revolutionary movements, especially the Fascists and Communists, crushed the Freemasons.
The "long 19th century", from 1789 to 1914, saw the drastic social, political and economic changes initiated by the Industrial Revolution, the French Revolution and the Napoleonic Wars. Following the reorganisation of the political map of Europe at the Congress of Vienna in 1815, Europe experienced the rise of Nationalism, the rise of the Russian Empire and the peak of the British Empire, as well as the decline of the Ottoman Empire. Finally, the rise of the German Empire and the Austro-Hungarian Empire initiated the course of events that culminated in the outbreak of the First World War in 1914.
The Industrial Revolution was a period in the late 18th century and early 19th century when major changes in agriculture, manufacturing, and transport affected socioeconomic and cultural conditions in Britain and subsequently spread throughout Europe and North America and eventually the world, a process that continues as industrialisation. Technological advancements, most notably the invention of the steam engine by Scottish engineer James Watt, were major catalysts in the industrialisation of Britain and, later, the wider world. It started in England and Scotland in the mid-18th century with the mechanisation of the textile industries, the development of iron-making techniques and the increased use of refined coal. Trade expansion was enabled by the introduction of canals, improved roads and railways. The introduction of steam power (fuelled primarily by coal) and powered machinery (mainly in textile manufacturing) underpinned the dramatic increases in production capacity. The development of all-metal machine tools in the first two decades of the 19th century facilitated the manufacture of more production machines for manufacturing in other industries. The effects spread throughout Western Europe and North America during the 19th century, eventually affecting most of the world. The impact of this change on society was enormous.
The era of the French Revolution and the subsequent Napoleonic wars was a difficult time for monarchs. Tsar Paul I of Russia was assassinated; King Louis XVI of France was executed, as was his queen Marie Antoinette. Furthermore, kings Charles IV of Spain, Ferdinand VII of Spain and Gustav IV Adolf of Sweden were deposed as were ultimately the Emperor Napoleon and all of the relatives he had installed on various European thrones. King Frederick William III of Prussia and Emperor Francis II of Austria barely clung to their thrones. King George III of England lost the better part of the First British Empire.
The American Revolution (1775–1783) was the first successful revolt of a colony against a European power. It proclaimed, in the words of Thomas Jefferson, that "all men are created equal," a position based on the principles of the Enlightenment. It rejected aristocracy and established a republican form of government under George Washington that attracted worldwide attention.
The French Revolution (1789–1804) was a product of the same democratic forces in the Atlantic World and had an even greater impact.
French intervention in the American Revolutionary War had nearly bankrupted the state. After repeated failed attempts at financial reform, King Louis XVI had to convene the Estates-General, a representative body of the country made up of three estates: the clergy, the nobility, and the commoners. The third estate, joined by members of the other two, declared itself to be a National Assembly and swore an oath not to dissolve until France had a constitution and created, in July, the National Constituent Assembly. At the same time the people of Paris revolted, famously storming the Bastille prison on 14 July 1789.
At the time the assembly wanted to create a constitutional monarchy, and over the following two years passed various laws including the Declaration of the Rights of Man and of the Citizen, the abolition of feudalism, and a fundamental change in the relationship between France and Rome. At first the king agreed with these changes and enjoyed reasonable popularity with the people. As anti-royalism increased along with threat of foreign invasion, the king tried to flee and join France's enemies. He was captured and on 21 January 1793, having been convicted of treason, he was guillotined.
On 20 September 1792 the National Convention abolished the monarchy and declared France a republic. Due to the emergency of war, the National Convention created the Committee of Public Safety, controlled by Maximilien de Robespierre of the Jacobin Club, to act as the country's executive. Under Robespierre, the committee initiated the Reign of Terror, during which up to 40,000 people were executed across France, mainly nobles and those convicted by the Revolutionary Tribunal, often on the flimsiest of evidence. Internal tensions in Paris drove the Committee towards increasing assertions of radicalism and increasing suspicions, fueling new terror: a few months into this phase, more and more prominent revolutionaries were being sent to the guillotine by Robespierre and his faction, for example Madame Roland and Georges Danton. Elsewhere in the country, counter-revolutionary insurrections were brutally suppressed. The regime was overthrown in the coup of 9 Thermidor (27 July 1794) and Robespierre was executed. The regime which followed ended the Terror and relaxed Robespierre's more extreme policies.
Napoleon Bonaparte was one of the world's most famous soldiers and statesmen, leading France to great victories over numerous European enemies. Despite modest origins he became Emperor and restructured much of European diplomacy, politics and law, until he was forced to abdicate in 1814. His 100-day comeback in 1815 failed at the Battle of Waterloo, and he died in exile on a remote island, remembered as a great hero by many Frenchmen and as a great villain by British and other enemies.
Napoleon, despite his youth, was France's most successful general in the Revolutionary wars, having conquered large parts of Italy and forced the Austrians to sue for peace. In 1799 on 18 Brumaire (9 November) he overthrew the feeble government, replacing it with the Consulate, which he dominated. He gained popularity in France by restoring the Church, keeping taxes low, centralizing power in Paris, and winning glory on the battlefield. In 1804 he crowned himself Emperor. In 1805, Napoleon planned to invade Britain, but a renewed British alliance with Russia and Austria (Third Coalition), forced him to turn his attention towards the continent, while at the same time the French fleet was demolished by the British at the Battle of Trafalgar, ending any plan to invade Britain. On 2 December 1805, Napoleon defeated a numerically superior Austro-Russian army at Austerlitz, forcing Austria's withdrawal from the coalition (see Treaty of Pressburg) and dissolving the Holy Roman Empire. In 1806, a Fourth Coalition was set up. On 14 October Napoleon defeated the Prussians at the Battle of Jena-Auerstedt, marched through Germany and defeated the Russians on 14 June 1807 at Friedland. The Treaties of Tilsit divided Europe between France and Russia and created the Duchy of Warsaw.
On 24 June 1812 Napoleon invaded Russia with a Grande Armée of nearly 700,000 troops. After the costly victories at Smolensk and Borodino Napoleon occupied Moscow, only to find it burned by the retreating Russian army. He was forced to withdraw. On the march back his army was harassed by Cossacks, and suffered disease and starvation. Only 20,000 of his men survived the campaign. By 1813 the tide had begun to turn against Napoleon. Having been defeated by a coalition army at the Battle of Leipzig in October 1813, he was forced to abdicate after the Six Days' Campaign and the occupation of Paris. Under the Treaty of Fontainebleau he was exiled to the island of Elba. He returned to France on 1 March 1815 (see Hundred Days), raised an army, but was finally defeated by a British and Prussian force at the Battle of Waterloo on 18 June 1815 and exiled to a small British island in the South Atlantic.
Roberts finds that the Revolutionary and Napoleonic wars, from 1793 to 1815, caused 4 million deaths (of whom 1 million were civilians); 1.4 million were French deaths.
Outside France the Revolution had a major impact. Its ideas became widespread. Roberts argues that Napoleon was responsible for key ideas of the modern world, so that, "meritocracy, equality before the law, property rights, religious toleration, modern secular education, sound finances, and so on – were protected, consolidated, codified, and geographically extended by Napoleon during his 16 years of power."
Furthermore, the French armies in the 1790s and 1800s directly overthrew feudal remains in much of western Europe. They liberalised property laws, ended seigneurial dues, abolished the guilds of merchants and craftsmen to facilitate entrepreneurship, legalised divorce, closed the Jewish ghettos and made Jews equal to everyone else. The Inquisition ended, as did the Holy Roman Empire. The power of church courts and religious authority was sharply reduced and equality under the law was proclaimed for all men.
In foreign affairs, the French Army down to 1812 was quite successful. Roberts says that Napoleon fought 60 battles, losing only seven. France conquered Belgium and turned it into another province of France. It conquered the Netherlands, and made it a puppet state. It took control of the German areas on the left bank of the Rhine River and set up a puppet regime. It conquered Switzerland and most of Italy, setting up a series of puppet states. The result was glory for France, and an infusion of much needed money from the conquered lands, which also provided direct support to the French Army. However the enemies of France, led by Britain and funded by the inexhaustible British Treasury, formed a Second Coalition in 1799 (with Britain joined by Russia, the Ottoman Empire and Austria). It scored a series of victories that rolled back French successes, and trapped the French Army in Egypt. Napoleon himself slipped through the British blockade in October 1799, returning to Paris, where he overthrew the government and made himself the ruler.
Napoleon conquered most of Italy in the name of the French Revolution in 1797–99. He consolidated old units and split up Austria's holdings. He set up a series of new republics, complete with new codes of law and abolition of old feudal privileges. Napoleon's Cisalpine Republic was centered on Milan; Genoa became a republic (the small Ligurian Republic); and the Roman Republic was formed as well. The Neapolitan Republic was formed around Naples, but it lasted only five months. He later formed the Kingdom of Italy, with his brother as King. In addition, France turned the Netherlands into the Batavian Republic, and Switzerland into the Helvetic Republic. All these new countries were satellites of France, and had to pay large subsidies to Paris, as well as provide military support for Napoleon's wars. Their political and administrative systems were modernized, the metric system introduced, and trade barriers reduced. Jewish ghettos were abolished. Belgium and Piedmont became integral parts of France.
Most of the new nations were abolished and returned to prewar owners in 1814. However, Artz emphasizes the benefits the Italians gained from the French Revolution:
Likewise in Switzerland the long-term impact of the French Revolution has been assessed by Martin:
The greatest impact came of course in France itself. In addition to effects similar to those in Italy and Switzerland, France saw the introduction of the principle of legal equality, and the downgrading of the once powerful and rich Catholic Church to just a bureau controlled by the government. Power became centralized in Paris, with its strong bureaucracy and an army supplied by conscripting all young men. French politics were permanently polarized – new names were given, "left" and "right" for the supporters and opponents of the principles of the Revolution.
British historian Max Hastings says there is no question that as a military genius Napoleon ranks with Alexander the Great and Julius Caesar in greatness. However, in the political realm, historians debate whether Napoleon was "an enlightened despot who laid the foundations of modern Europe or, instead, a megalomaniac who wrought greater misery than any man before the coming of Hitler".
By the 19th century, governments increasingly took over traditional religious roles, paying much more attention to efficiency and uniformity than to religiosity. Secular bodies took control of education away from the churches, abolished taxes and tithes for the support of established religions, and excluded bishops from the upper houses. Secular laws increasingly regulated marriage and divorce, and maintaining birth and death registers became the duty of local officials. Although the numerous religious denominations in the United States founded many colleges and universities, that was almost exclusively a state function across Europe. Imperial powers protected Christian missionaries in African and Asian colonies. In France and other largely Catholic nations, anti-clerical political movements tried to reduce the role of the Catholic Church. Likewise briefly in Germany in the 1870s there was a fierce Kulturkampf (culture war) against Catholics, but the Catholics successfully fought back. The Catholic Church concentrated more power in the papacy and fought against secularism and socialism. It sponsored devotional reforms that gained wide support among the churchgoers.
Historian Kenneth Scott Latourette argues that the outlook for Protestantism at the start of the 19th century was discouraging. It was a regional religion based in northwestern Europe, with an outpost in the sparsely settled United States. It was closely allied with government, as in Scandinavia, the Netherlands, Prussia, and especially Great Britain. The alliance came at the expense of independence, as the government made the basic policy decisions, down to such details as the salaries of ministers and the location of new churches. The dominant intellectual currents of the Enlightenment promoted rationalism, and most Protestant leaders preached a sort of deism. Intellectually, the new methods of historical and anthropological study undermined automatic acceptance of biblical stories, as did the sciences of geology and biology. Industrialization was a strongly negative factor, as workers who moved to the city seldom joined churches. The gap between the church and the unchurched grew rapidly, and secular forces, based both in socialism and liberalism, undermined the prestige of religion. Despite the negative forces, Protestantism demonstrated a striking vitality by 1900. Shrugging off Enlightenment rationalism, Protestants embraced romanticism, with its stress on the personal and the invisible. Entirely fresh ideas as expressed by Friedrich Schleiermacher, Søren Kierkegaard, Albrecht Ritschl and Adolf von Harnack restored the intellectual power of theology. There was more attention to historic creeds such as the Augsburg, the Heidelberg, and the Westminster confessions. In England, Anglicans emphasized the historically Catholic components of their heritage, as the High Church element reintroduced vestments and incense into their rituals. The stirrings of pietism on the Continent, and evangelicalism in Britain, expanded enormously, leading the devout away from an emphasis on formality and ritual and toward an inner sense of a personal relationship with Christ.
Social activities, in education and in opposition to social vices such as slavery, alcoholism and poverty provided new opportunities for social service. Above all, worldwide missionary activity became a highly prized goal, proving quite successful in close cooperation with the imperialism of the British, German, and Dutch empires.
The political development of nationalism and the push for popular sovereignty culminated with the ethnic/national revolutions of Europe. During the 19th century nationalism became one of the most significant political and social forces in history; it is typically listed among the top causes of World War I.
Napoleon's conquests of the German and Italian states around 1800–1806 played a major role in stimulating nationalism and the demands for national unity.
In the German states, Napoleon abolished many old or medieval relics, such as the Holy Roman Empire, which he dissolved in 1806. He imposed rational legal systems and demonstrated how dramatic changes were possible. For example, his organization of the Confederation of the Rhine in 1806 promoted a feeling of nationalism. Nationalists sought to encompass masculinity in their quest for strength and unity. In the 1860s Prussian chancellor Otto von Bismarck achieved German unification in 1871, after the many smaller states followed Prussia's leadership in wars against Denmark, Austria and France.
Italian nationalism emerged in the 19th century and was the driving force for Italian unification or the "Risorgimento" (meaning the Resurgence or revival). It was the political and intellectual movement that consolidated the different states of the Italian peninsula into the single state of the Kingdom of Italy in 1861. The memory of the Risorgimento is central to both Italian nationalism and Italian historiography.
For centuries the Orthodox Christian Serbs were ruled by the Muslim-controlled Ottoman Empire. The success of the Serbian Revolution (1804–1817) against Ottoman rule marked the foundation of the modern Principality of Serbia. It achieved "de facto" independence in 1867 and finally gained recognition by the Great Powers at the Congress of Berlin in 1878. The Serbs developed a larger vision for nationalism in Pan-Slavism and with Russian support sought to pull the other Slavs out of the Austro-Hungarian Empire. Austria, with German backing, tried to crush Serbia in 1914 but Russia intervened, thus igniting the First World War, in which Austria dissolved into nation states.
In 1918, the region of Vojvodina proclaimed its secession from Austria-Hungary to unite with the pan-Slavic State of Slovenes, Croats and Serbs; the Kingdom of Serbia joined the union on 1 December 1918, and the country was named the Kingdom of Serbs, Croats, and Slovenes. It was renamed Yugoslavia, which was never able to tame its multiple nationalities and religions, and it broke apart in civil war in the 1990s.
The Greek drive for independence from the Ottoman Empire inspired supporters across Christian Europe, especially in Britain. France, Russia and Britain intervened to make this nationalist dream become reality with the Greek War of Independence (1821-1829/1830).
Bulgarian nationalism emerged under Ottoman rule in the late 18th and early 19th century, under the influence of western ideas such as liberalism and nationalism, which trickled into the country after the French Revolution, mostly via Greece, although there were stirrings in the 18th century. Russia, as a Great Power of fellow Orthodox Slavs, could appeal to the Bulgarians in a way that Austria could not. An autonomous Bulgarian Exarchate was established for the dioceses of Bulgaria as well as those wherein at least two-thirds of Orthodox Christians were willing to join it. The April Uprising in 1876 indirectly resulted in the re-establishment of Bulgaria in 1878.
The cause of Polish nationalism was repeatedly frustrated before 1918. In the 1790s, Prussia, Russia and Austria partitioned Poland. Napoleon set up the Duchy of Warsaw, a new Polish state that ignited a spirit of nationalism. Russia took it over in 1815 as Congress Poland with the tsar as King of Poland. Large-scale nationalist revolts erupted in 1830 and 1863–64 but were harshly crushed by Russia, which tried to Russify the Polish language, culture and religion. The collapse of the Russian Empire in the First World War enabled the major powers to reestablish an independent Poland, which survived until 1939. Meanwhile, Poles in areas controlled by Germany moved into heavy industry but their religion came under attack by Bismarck in the Kulturkampf of the 1870s. The Poles joined German Catholics in a well-organized new Centre Party, and defeated Bismarck politically. He responded by stopping the harassment and cooperating with the Centre Party.
An important component of nationalism was the study of the nation's heritage, emphasizing the national language and literary culture. This stimulated, and was in turn strongly supported by, the emergence of national educational systems reaching the general population. Latin gave way to the national language, and compulsory education, with strong support from modernizers and the media, became standard throughout Western countries. Voting reforms extended the franchise to the previously uneducated elements. A strong sentiment among the elites was the necessity for compulsory public education, so that the new electorate could understand and handle its duties. Every country developed a sense of national origins – the historical accuracy was less important than the motivation toward patriotism. Universal compulsory education was extended as well to girls, at least at the elementary level. By the 1890s, strong movements emerged in some countries, including France, Germany and the United States, to extend compulsory education to the secondary level.
After the defeat of revolutionary France, the other great powers tried to restore the situation which existed before 1789. In 1815 at the Congress of Vienna, the major powers of Europe managed to produce a peaceful balance of power among the various European empires. This was known as the Metternich system. The powerbase of their support was the aristocracy, with its great landed wealth and control of the government, the church, and the military in most countries. However, their efforts were unable to stop the spread of revolutionary movements: the middle classes had been deeply influenced by the ideals of the French revolution, and the Industrial Revolution brought important economical and social changes.
Radical intellectuals looked to the working classes for a base for socialist, communist and anarchistic ideas. Widely influential was the 1848 pamphlet by Karl Marx and Friedrich Engels "The Communist Manifesto".
The middle classes and businessmen promoted liberalism, free trade and capitalism. Aristocratic elements concentrated in government service, the military and the established churches. Nationalist movements (in Germany, Italy, Poland, Hungary, and elsewhere) called upon the "racial" unity (which usually meant a common language and an imagined common ethnicity) to seek national unification and/or liberation from foreign rule. As a result, the period between 1815 and 1871 saw a large number of revolutionary attempts and independence wars. Greece successfully revolted against Ottoman rule in the 1820s. European diplomats and intellectuals saw the Greek struggle for independence, with its accounts of Turkish atrocities, in a romantic light.
Napoleon III, nephew of Napoleon I, parlayed his famous name into widespread popularity across France. He returned from exile in 1848, promising to stabilize the chaotic political situation. He was elected president and maneuvered successfully to name himself Emperor, a move approved later by a large majority of the French electorate. The first part of his Imperial term brought many important reforms, facilitated by Napoleon's control of the lawmaking body, the government, and the Army. Hundreds of old Republican leaders were arrested and deported. Napoleon controlled the media and censored the news. In compensation for the loss of freedom, Napoleon gave the people new hospitals and asylums, beautified and modernized Paris, and built a modern railroad and transportation system that dramatically improved commerce and helped the many small farmers as well. The economy grew, but industrialization was not as rapid as in Britain, and France depended largely on small family-oriented firms as opposed to the large companies that were emerging in the United States and Germany. He gained worldwide attention for his aggressive foreign policy in Europe, Mexico, and worldwide: he joined the Crimean War (1854–56) on the side of Great Britain to defend the Ottoman Empire against Russia, and he helped in the unification of Italy by fighting the Austrian Empire. France was on the winning side in the Crimean War, but after 1858 Napoleon's foreign policy was less and less successful: he antagonized Great Britain and failed to appreciate the danger of war with Prussia. These foreign-policy blunders finally destroyed his reign in 1870–71, when his empire collapsed after defeat in the Franco-Prussian War.
France became a republic, but until the 1880s there was a strong popular demand for a return to monarchy. That never happened because of the blunders made by the available monarchs. Hostility to the Catholic Church became a major issue, as France was riven by battles between secular and religious forces well into the 20th century, with the secular elements usually more successful. The French Third Republic emerged in 1871, was on the winning side of the First World War, and was finally overthrown when it was defeated in 1940 in World War II.
Most European states had become constitutional (rather than absolute) monarchies by 1871, while Germany and Italy merged many smaller states into united nation-states. Germany in particular increasingly dominated the continent in terms of economics and political power. Meanwhile, on a global scale, Great Britain, with its far-flung British Empire, unmatched Royal Navy, and powerful bankers, became the world's first global power. The sun never set on its territories, while an informal empire operated through British financiers, entrepreneurs, traders and engineers who established operations in many countries, and largely dominated Latin America. The British were especially famous for financing and constructing railways around the world.
From his base in Prussia, Otto von Bismarck in the 1860s engineered a series of short, decisive wars that unified most of the German states (excluding Austria) into a powerful German Empire under Prussian leadership. He humiliated France in the process, but kept on good terms with Austria-Hungary. With that accomplished by 1871, he then skillfully used balance-of-power diplomacy to preserve Germany's new role and keep Europe at peace. The new German Empire industrialized rapidly and challenged Britain for economic leadership. Bismarck disliked colonies but public and elite opinion forced him to build an overseas empire. He was removed from office in 1890 by an aggressive young Kaiser Wilhelm II, who pursued a disruptive foreign policy that polarized Europe into rival camps. These rival camps went to war with each other in 1914.
The power of nationalism to create new states was irresistible in the 19th century, and the process could lead to collapse in the absence of a strong nationalism. Austria-Hungary had the advantage of size, but multiple disadvantages. There were rivals on four sides, its finances were unstable, and the population was fragmented into multiple ethnicities and languages that served as the bases for separatist nationalisms. It had a large army with good forts, but its industrial base was thin. Its naval resources were so minimal that it did not attempt to build an overseas empire. It did have the advantage of good diplomats, typified by Metternich (Foreign Minister 1809–1848, Prime Minister 1821–1848). They employed a grand strategy for survival that balanced out different forces, set up buffer zones, and kept the Habsburg empire going despite wars with the Ottomans, Frederick the Great, Napoleon and Bismarck, until the final disaster of the First World War. The Empire overnight disintegrated into multiple states based on ethnic nationalism and the principle of self-determination.
The Russian Empire likewise brought together a multitude of languages and cultures, so that its military defeat in the First World War led to multiple splits that created independent Finland, Latvia, Lithuania, Estonia, and Poland, and for a brief spell, independent Ukraine, Armenia, Georgia, and Azerbaijan.
Colonial empires were the product of the European Age of Discovery from the 15th century. The initial impulse behind these dispersed maritime empires and those that followed was trade, driven by the new ideas and the capitalism that grew out of the Renaissance. Both the Portuguese Empire and Spanish Empire quickly grew into the first global political and economic systems with territories spread around the world.
Subsequent major European colonial empires included the French, Dutch, and British empires. The latter, consolidated during the period of British maritime hegemony in the 19th century, became the largest empire in history because of the improved ocean transportation technologies of the time as well as electronic communication through the telegraph, cable, and radio. At its height in 1920, the British Empire covered a quarter of the Earth's land area and comprised a quarter of its population. Other European countries, such as Belgium, Germany, and Italy, pursued colonial empires as well (mostly in Africa), but they were smaller. Unlike the maritime powers, Russia built its empire through overland conquest in Eastern Europe and Asia.
By the mid-19th century, the Ottoman Empire had declined enough to become a target for other global powers (see History of the Balkans). This instigated the Crimean War in 1854 and began a tenser period of minor clashes among the globe-spanning empires of Europe that eventually set the stage for the First World War. In the second half of the 19th century, the Kingdom of Sardinia and the Kingdom of Prussia carried out a series of wars that resulted in the creation of Italy and Germany as nation-states, significantly changing the balance of power in Europe. From 1870, Otto von Bismarck engineered a German hegemony in Europe that put France in a critical situation. France slowly rebuilt its relationships, seeking alliances with Russia and Britain to control the growing power of Germany. In this way, two opposing sides – the Triple Alliance of 1882 (Germany, Austria-Hungary and Italy) and the Triple Entente of 1907 (Britain, France and Russia) – formed in Europe, improving their military forces and alliances year-by-year.
German-American historian Konrad Jarausch, asked if he agreed that "the European record of the past century [was] just one gigantic catastrophe", argues:
The "short twentieth century", from 1914 to 1991, included the First World War, the Second World War and the Cold War. The First World War used modern technology to kill millions of soldiers. Victory by Britain, France, the United States and other allies drastically changed the map of Europe, ending four major land empires (the Russian, German, Austro-Hungarian and Ottoman empires) and leading to the creation of nation-states across Central and Eastern Europe. The October Revolution in Russia led to the creation of the Soviet Union (1917–1991) and the rise of the international communist movement. Widespread economic prosperity was typical of the period before 1914, and of 1920–1929. After the onset of the Great Depression in 1929, however, democracy collapsed in most of Europe. Fascists took control in Italy, and the even more aggressive Nazi movement led by Adolf Hitler took control of Germany, 1933–45. The Second World War was fought on an even larger scale than the First, killing many more people and using even more advanced technology. It ended with the division of Europe between East and West, with the East under the control of the Soviet Union and the West dominated by NATO. The two sides engaged in the Cold War, with actual conflict taking place not in Europe but in Asia, in the Korean War and the Vietnam War. The imperial system collapsed: the remaining colonial empires ended through the decolonisation of European rule in Africa and Asia. The fall of Soviet Communism (1989–1991) left the West dominant and enabled the reunification of Germany. It accelerated the process of European integration to include Eastern Europe. The European Union continues today, but with German economic dominance. Since the worldwide Great Recession of 2008, European growth has been slow, and financial crises have hit Greece and other countries. While Russia is a weak version of the old Soviet Union, it has been confronting Europe in Ukraine and other areas.
After the relative peace of most of the 19th century, the rivalry between European powers, compounded by a rising nationalism among ethnic groups, exploded in August 1914, when the First World War started. Over 65 million European soldiers were mobilised from 1914 to 1918; 20 million soldiers and civilians died, and 21 million were seriously wounded. On one side were Germany, Austria-Hungary, the Ottoman Empire and Bulgaria (the Central Powers/Triple Alliance), while on the other side stood Serbia and the "Triple Entente" – the coalition of France, Britain and Russia, which were joined by Italy in 1915, Romania in 1916 and by the United States in 1917. The Western Front involved especially brutal combat without any territorial gains by either side. Single battles like Verdun and the Somme killed hundreds of thousands of men while leaving the stalemate unchanged. Heavy artillery and machine guns caused most of the casualties, supplemented by poison gas. Czarist Russia collapsed in the February Revolution of 1917 and Germany claimed victory on the Eastern Front. After eight months of liberal rule, the October Revolution brought Vladimir Lenin and the Bolsheviks to power, leading to the creation of the Soviet Union in place of the disintegrated Russian Empire. With American entry into the war in 1917 on the Allied side, and the failure of Germany's spring 1918 offensive, Germany had run out of manpower, while an average of 10,000 American troops were arriving in France every day in the summer of 1918. Germany's allies, Austria-Hungary and the Ottoman Empire, surrendered and dissolved, followed by Germany on 11 November 1918. The victors forced Germany to assume responsibility for the conflict and pay war reparations.
One factor in determining the outcome of the war was that the Allies had significantly more economic resources they could spend on the war. One estimate (using 1913 US dollars) is that the Allies spent $58 billion on the war and the Central Powers only $25 billion. Among the Allies, Britain spent $21 billion and the U.S. $17 billion; among the Central Powers Germany spent $20 billion.
The world war was settled by the victors at the Paris Peace Conference in 1919. Two dozen nations sent delegations, and there were many nongovernmental groups, but the defeated powers were not invited.
The "Big Four" were President Woodrow Wilson of the United States, Prime Minister David Lloyd George of Great Britain, Georges Clemenceau of France, and, of least importance, Italian Prime Minister Vittorio Orlando. Each had a large staff of experts. They met together informally 145 times and made all the major decisions, which in turn were ratified by the others.
The major decisions were the creation of the League of Nations; the six peace treaties with defeated enemies, most notably the Treaty of Versailles with Germany; the awarding of German and Ottoman overseas possessions as "mandates", chiefly to Britain and France; and the drawing of new national boundaries (sometimes with plebiscites) to better reflect the forces of nationalism.
The Big Four implemented sweeping changes to the political geography of the world. Most famously, the Treaty of Versailles itself weakened Germany's military power and placed full blame for the war and costly reparations on its shoulders – the humiliation and resentment in Germany was probably one of the causes of Nazi success and indirectly a cause of World War II.
At the insistence of President Wilson, the Big Four required Poland to sign a treaty on 28 June 1919 that guaranteed minority rights in the new nation. Poland signed under protest, and made little effort to enforce the specified rights for Germans, Jews, Ukrainians, and other minorities. Similar treaties were signed by Czechoslovakia, Romania, Yugoslavia, Greece, Austria, Hungary, Bulgaria, and later by Latvia, Estonia and Lithuania. Finland and Germany were not asked to sign a minority rights treaty.
In the Treaty of Versailles (1919) the winners recognised the new states (Poland, Czechoslovakia, Hungary, Austria, Yugoslavia, Finland, Estonia, Latvia, Lithuania) created in central Europe from the defunct German, Austro-Hungarian and Russian empires, based on national (ethnic) self-determination. It was a peaceful era with a few small wars before 1922 such as the Ukrainian–Soviet War (1917–1921) and the Polish–Soviet War (1919–1921). Prosperity was widespread, and the major cities sponsored a youth culture called the "Roaring Twenties" or "Jazz Age" that was often featured in the cinema, which attracted very large audiences.
The Allied victory in the First World War seemed to mark the triumph of liberalism, not just in the Allied countries themselves, but also in Germany and in the new states of Eastern Europe, as well as Japan. Authoritarian militarism as typified by Germany had been defeated and discredited. Historian Martin Blinkhorn argues that the liberal themes were ascendant in terms of "cultural pluralism, religious and ethnic toleration, national self-determination, free-market economics, representative and responsible government, free trade, unionism, and the peaceful settlement of international disputes through a new body, the League of Nations." However, as early as 1917, the emerging liberal order was being challenged by the new communist movement taking inspiration from the Russian Revolution. Communist revolts succeeded in Russia but were beaten back everywhere else.
Italy adopted an authoritarian dictatorship known as Fascism in 1922; it became a model for Hitler in Germany and for right wing elements in other countries. Historian Stanley G. Payne says Fascism in Italy was:
Authoritarian regimes replaced democracy in the 1930s in Nazi Germany, Portugal, Austria, Poland, Greece, the Baltic countries and Francoist Spain. By 1940, there were only four liberal democracies left on the European continent: France, Finland, Switzerland and Sweden.
After the Wall Street Crash of 1929, nearly the whole world sank into a Great Depression, as money stopped flowing from New York to Europe, prices fell, profits fell, and unemployment soared. The worst hit sectors included heavy industry, export-oriented agriculture, mining and lumbering, and construction. World trade fell by two thirds.
Liberalism and democracy were discredited. In most of Europe, as well as in Japan and most of Latin America, nation after nation turned to dictators and authoritarian regimes. The most momentous change of government came when Hitler and his Nazis took power in Germany in 1933. The main institution that was meant to bring stability was the League of Nations, created in 1919. However the League failed to resolve any major crises and by 1938 it was no longer a major player. The League was undermined by the bellicosity of Nazi Germany, Imperial Japan, the Soviet Union, and Mussolini's Italy, and by the non-participation of the United States. By 1937 it was largely ignored.
The League of Nations was likewise helpless as Japan seized Manchuria in 1931 and took over most of China starting in 1937, and as Italy conquered Ethiopia in 1935–36. Meanwhile, a major civil war broke out in Spain.
The Spanish Civil War (1936–1939) was marked by numerous small battles and sieges, and many atrocities, until the rebels (the Nationalists), led by Francisco Franco, won in 1939. There was military intervention as Italy sent land forces, and Germany sent smaller elite air force and armoured units to the Nationalists. The Soviet Union sold armaments to the leftist Republicans on the other side, while the Communist parties in numerous countries sent soldiers to the "International Brigades." The civil war did not escalate into a larger conflict, but did become a worldwide ideological battleground that pitted the left, the communist movement and many liberals against Catholics, conservatives, and fascists. Britain, France and the US remained neutral and refused to sell military supplies to either side. Worldwide there was a decline in pacifism and a growing sense that another world war was imminent, and that it would be worth fighting for.
In the Munich Agreement of 1938, Britain and France adopted a policy of appeasement as they gave Hitler what he wanted out of Czechoslovakia in the hope that it would bring peace. It did not. In 1939 Germany took over the rest of Czechoslovakia and appeasement policies gave way to hurried rearmament as Hitler next turned his attention to Poland.
After allying with Japan in the Anti-Comintern Pact and then also with Benito Mussolini's Italy in the "Pact of Steel", and finally signing a non-aggression treaty with the Soviet Union in August 1939, Hitler launched the Second World War on 1 September 1939 by attacking Poland. To his surprise Britain and France declared war on Germany, but there was little fighting during the "Phoney War" period. War began in earnest in spring 1940 with the successful Blitzkrieg conquests of Denmark, Norway, the Low Countries, and France. Britain remained alone but refused to negotiate, and defeated Germany's air attacks in the Battle of Britain. Hitler's goal was to control Eastern Europe but because of his failure to defeat Britain and the Italian failures in North Africa and the Balkans, the great attack on the Soviet Union was delayed until June 1941. Despite initial successes, the German army was stopped close to Moscow in December 1941.
Over the next year the tide turned and the Germans began to suffer a series of defeats, notably in the siege of Stalingrad and at Kursk. Meanwhile, Japan (allied to Germany and Italy since September 1940) attacked Britain and the United States on 7 December 1941; Germany then completed its over-extension by declaring war on the United States. War raged between the Axis Powers (Germany, Italy, and Japan) and the Allied Forces (British Empire, Soviet Union, and the United States). The Allied Forces won in North Africa, invaded Italy in 1943, and recaptured France in 1944. In the spring of 1945 Germany itself was invaded from the east by the Soviet Union and from the west by the other Allies. As the Red Army captured the Reichstag in Berlin, Hitler committed suicide and Germany surrendered in early May. World War II was the deadliest conflict in human history, causing between 50 and 80 million deaths, the majority of them civilians (approximately 38 to 55 million).
This period was also marked by systematic genocide. In 1942–45, separately from war-related deaths, the Nazis killed more than 11 million additional civilians, identified through IBM-enabled censuses, including the majority of the Jews and Gypsies of Europe, millions of Polish and Soviet Slavs, as well as homosexuals, Jehovah's Witnesses, the disabled, and political enemies. Meanwhile, in the 1930s the Soviet system of forced labour, expulsions and allegedly engineered famine had a similar death toll. During and after the war, millions of civilians were affected by forced population transfers.
The world wars ended the pre-eminent position of Britain, France and Germany in Europe and the world. At the Yalta Conference, Europe was divided into spheres of influence between the victors of World War II, and soon became the principal zone of contention in the Cold War between the two power blocs, the Western countries and the Communist bloc. The United States and the majority of European liberal democracies at the time (United Kingdom, France, Italy, Netherlands, West Germany etc.) established the NATO military alliance. Later, the Soviet Union and its satellites (Bulgaria, Czechoslovakia, East Germany, Hungary, Poland, and Romania) in 1955 established the Warsaw Pact as a counterpoint to NATO. The Warsaw Pact had a much larger ground force, but the American-French-British nuclear umbrellas protected NATO.
Communist states were imposed by the Red Army in the East, while parliamentary democracy became the dominant form of government in the West. Most historians point to its success as the product of exhaustion with war and dictatorship, and the promise of continued economic prosperity. Martin Conway also adds that an important impetus came from the anti-Nazi wartime political coalitions.
The United States gave about $20 billion in Marshall Plan grants and other grants, as well as low-interest long-term loans, to Western Europe between 1945 and 1951. Historian Michael J. Hogan argues that American aid was critical in stabilizing the economy and politics of Western Europe. It brought in modern management that dramatically increased productivity, and encouraged cooperation between labor and management, and among the member states. Local Communist parties were opposed, and they lost prestige and influence and a role in government. In strategic terms, says Hogan, the Marshall Plan strengthened the West against the possibility of a Communist invasion or political takeover. However, the Marshall Plan's role in the rapid recovery has been debated. Most historians reject the idea that the plan alone miraculously revived Europe, since the evidence shows that a general recovery was already under way thanks to other aid programs from the United States. Economic historians Bradford De Long and Barry Eichengreen nevertheless conclude it was "history's most successful structural adjustment program". They state:
The Soviet Union concentrated on its own recovery. It seized and transferred most of Germany's industrial plants and it exacted war reparations from East Germany, Hungary, Romania, and Bulgaria, using Soviet-dominated joint enterprises. It used trading arrangements deliberately designed to favor the Soviet Union. Moscow controlled the Communist parties that ruled the satellite states, and they followed orders from the Kremlin. Historian Mark Kramer concludes:
Western Europe began economic and then political integration, with the aim of uniting the region and defending it. This process included organisations such as the European Coal and Steel Community, which grew and evolved into the European Union, and the Council of Europe. The Solidarność movement in the 1980s weakened the Communist government in Poland. At the time, the Soviet leader Mikhail Gorbachev initiated perestroika and glasnost, which weakened Soviet influence in Eastern Europe. In 1989 the Berlin Wall came down and Communist governments outside the Soviet Union were deposed. In 1990 the Federal Republic of Germany absorbed East Germany, after making large cash payments to the USSR. In 1991 the Communist Party in Moscow collapsed, ending the USSR, which split into fifteen independent states. The largest, Russia, took the Soviet Union's seat on the United Nations Security Council. The most violent dissolution happened in Yugoslavia, in the Balkans, where four of the six republics (Slovenia, Croatia, Bosnia and Herzegovina, and North Macedonia) declared independence; for most of them a violent war ensued, in some parts lasting until 1995. In 2006 Montenegro seceded and became an independent state. In the post–Cold War era, NATO and the EU have gradually admitted most of the former members of the Warsaw Pact.
Looking at the half century after the war, historian Walter Laqueur concluded:
The post-war period also witnessed a significant rise in the standard of living of the Western European working class. As noted by one historical text, "within a single generation, the working classes of Western Europe came to enjoy the multiple pleasures of the consumer society."
Western Europe's industrial nations were hit by a global economic crisis in the 1970s. They had obsolescent heavy industry and suddenly had to pay very high energy prices, which caused sharp inflation. Some of them also had inefficient nationalized railways and heavy industries. In the important field of computer technology, European nations lagged behind the United States. They also faced high government deficits and growing unrest led by militant labour unions. There was an urgent need for new economic directions. Germany and Sweden sought to create a social consensus behind a gradual restructuring; Germany's efforts proved highly successful. In Britain under Margaret Thatcher, the solution was shock therapy: high interest rates, austerity, and the sale of inefficient corporations and of public housing, which went to the tenants. One result was escalating social tensions, led by the militant coal miners. Thatcher eventually defeated her opponents and radically changed the British economy, but the controversy never went away, as shown by the hostile demonstrations at the time of her death in 2013.
The end of the Cold War came in a series of events from 1979 to 1991, mainly in Eastern Europe, which brought the fall of the Iron Curtain, German reunification, and the end of Soviet control over its Eastern European satellites and its worldwide network of communist parties, in a peaceful chain reaction that began with the Pan-European Picnic of 1989. Finally, the Soviet Union itself was divided into 15 non-communist states in 1991.
Italian historian Federico Romero reports that observers at the time emphasized that:
Following the end of the Cold War, the European Economic Community pushed for closer integration and co-operation in foreign and home affairs, and began to expand its membership into the neutral and former communist countries. In 1993, the Maastricht Treaty established the European Union, succeeding the EEC and furthering political co-operation. The neutral countries of Austria, Finland and Sweden acceded to the EU, and those that did not join were tied into the EU's economic market via the European Economic Area. These countries also entered the Schengen Agreement, which lifted border controls between member states.
The Maastricht Treaty created a single currency for most EU members. The "euro" was created in 1999 and replaced all previous currencies in participating states in 2002. The most notable exception to the currency union, or "eurozone", was the United Kingdom, which also did not sign the Schengen Agreement.
The EU did not participate in the Yugoslav Wars, and was divided on supporting the United States in the 2003–2011 Iraq War. NATO took part in the war in Afghanistan, but at a much lower level of involvement than the United States.
In 2004, the EU gained 10 new members: Estonia, Latvia, and Lithuania, which had been part of the Soviet Union; the Czech Republic, Hungary, Poland, Slovakia, and Slovenia, five former communist countries; and Malta and the divided island of Cyprus. These were followed by Bulgaria and Romania in 2007. Russia interpreted these expansions as violations of NATO's alleged 1990 promise not to expand "one inch to the east". Russia engaged in a number of bilateral disputes with Belarus and Ukraine over gas supplies, which endangered gas supplies to Europe, and fought a brief war with Georgia in 2008.
Supported by the United States and some European countries, Kosovo's government unilaterally declared independence from Serbia on 17 February 2008.
Public opinion in the EU turned against enlargement, partially due to what was seen as over-eager expansion including Turkey gaining candidate status. The European Constitution was rejected in France and the Netherlands, and then (as the Treaty of Lisbon) in Ireland, although a second vote passed in Ireland in 2009.
The financial crisis of 2007–08 affected Europe, and governments responded with austerity measures. The limited ability of the smaller EU nations (most notably Greece) to handle their debts led to social unrest, the fall of governments, and financial insolvency. In May 2010, the German parliament agreed to loan 22.4 billion euros to Greece over three years, with the stipulation that Greece follow strict austerity measures (see European sovereign-debt crisis).
Beginning in 2014, Ukraine has been in a state of revolution and unrest, with two breakaway regions (Donetsk and Lugansk) attempting to join Russia as full federal subjects (see War in Donbass). On 16 March 2014, a referendum was held in Crimea, leading to the "de facto" secession of Crimea and its largely internationally unrecognized annexation by the Russian Federation as the Republic of Crimea.
In June 2016, in a referendum in the United Kingdom on the country's membership in the European Union, 52% of voters voted to leave the EU, leading to the complex Brexit separation process and negotiations, which led to political and economic changes for both the UK and the remaining European Union countries. The UK left the EU on 31 January 2020.
| https://en.wikipedia.org/wiki?curid=13212 |
Hold come what may
Hold come what may is a phrase popularized by logician Willard Van Orman Quine. Beliefs that are "held come what may" are beliefs one is unwilling to give up, regardless of any evidence with which one might be presented.
Quine held (on a perhaps simplistic construal) that there are no beliefs one ought to hold come what may; in other words, all beliefs are rationally revisable ("no statement is immune to revision"), and he famously suggested that even the laws of logic might be revised to simplify quantum mechanics. Many philosophers argue to the contrary, believing that, for example, the laws of thought cannot be revised and may be "held come what may". Quine believed that all beliefs are linked in a web of beliefs, in which each belief is connected to others by supporting relations, so that if one belief is found untrue, there is ground to find the linked beliefs untrue as well. The latter view is usually referred to as confirmation holism or the Duhem–Quine thesis.
A closely related concept is hold more stubbornly at least, also popularized by Quine. Some beliefs may be more useful than others, or may be implied by a large number of beliefs. Examples might be laws of logic, or the belief in an external world of physical objects. Altering such central portions of the web of beliefs would have immense, ramifying consequences, and affect many other beliefs. It is better to alter auxiliary beliefs around the edges of the web of beliefs (considered to be sense beliefs, rather than main beliefs) in the face of new evidence unfriendly to one's central principles. Thus, while one might agree that there is no belief one can hold come what may, there are some for which there is ample practical ground to "hold more stubbornly at least". | https://en.wikipedia.org/wiki?curid=13216 |
Haiku
Haiku originated as the opening part of a larger Japanese poem known as a renga. Haiku written as opening stanzas were known as hokku, and writers eventually began to compose them as stand-alone poems. Haiku was given its current name by the Japanese writer Masaoka Shiki at the end of the 19th century.
Originally from Japan, haiku today are written by authors worldwide. Haiku in English and haiku in other languages have their own styles and traditions while still incorporating aspects of the traditional haiku form. Non-Japanese haiku are also said to vary increasingly from the tradition of 17 "on" or of taking nature as their subject.
In Japanese, haiku are traditionally printed in a single vertical line while haiku in English often appear as three lines.
There are several other forms of Japanese poetry related to haiku, such as senryū and tanka, as well as other art forms that incorporate haiku, such as haibun and haiga.
In Japanese haiku a "kireji", or cutting word, typically appears at the end of one of the verse's three phrases. A "kireji" fills a role analogous to that of a "caesura" in classical western poetry or to a volta in sonnets. Depending on which cutting word is chosen, and its position within the verse, it may briefly cut the stream of thought, suggesting a parallel between the preceding and following phrases, or it may provide a dignified ending, concluding the verse with a heightened sense of closure.
The fundamental aesthetic quality of both hokku and haiku is that each is internally sufficient, independent of context, and will bear consideration as a complete work. The "kireji" lends the verse structural support, allowing it to stand as an independent poem. The use of "kireji" distinguishes haiku and hokku from the second and subsequent verses of renku, which may employ semantic and syntactic disjuncture, even to the point of occasionally end-stopping a phrase with a sentence-ending particle. However, renku typically employ "kireji".
In English, since "kireji" have no direct equivalent, poets sometimes use punctuation such as a dash or ellipsis, or an implied break to create a juxtaposition intended to prompt the reader to reflect on the relationship between the two parts.
The "kireji" in the Bashō examples "old pond" and "the wind of Mt Fuji" are both "ya" (). Neither the remaining Bashō example nor the Issa example contain a "kireji" although they do both balance a fragment in the first five "on" against a phrase in the remaining 12 "on" (it may not be apparent from the English translation of the Issa that the first five "on" mean "Edo's rain").
In comparison with English verse, typically characterized by syllabic meter, Japanese verse counts sound units known as "on" or morae. Traditional haiku consist of 17 "on", in three phrases of five, seven and five "on" respectively. Among contemporary poems, "teikei" (fixed form) haiku continue to use the 5-7-5 pattern while "jiyuritsu" (free form) haiku do not. One of the examples below illustrates that traditional haiku masters were not always constrained by the 5-7-5 pattern.
Although the word "on" is sometimes translated as "syllable", one "on" is counted for a short syllable, two for an elongated vowel or doubled consonant, and one for an "n" at the end of a syllable. Thus, the word "haibun", though counted as two syllables in English, is counted as four "on" in Japanese (ha-i-bu-n); and the word "on" itself, which English-speakers would view as a single syllable, comprises two "on": the short vowel "o" and the moraic nasal "n". This is illustrated by the Issa haiku below, which contains 17 "on" but only 15 syllables. Conversely, some sounds, such as "kyo", may look like two syllables to English speakers but are in fact a single "on" (as well as a single syllable) in Japanese.
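These counting rules can be sketched as a short function over Hepburn-romanized text. The helper `count_on` below is a simplified illustration, not part of any standard description of haiku: it assumes plain romaji input and ignores edge cases such as apostrophes and macron spellings of long vowels.

```python
VOWELS = set("aiueo")

def count_on(romaji: str) -> int:
    """Approximate the number of "on" (morae) in a Hepburn-romanized word.

    Rules as described above: each vowel closes one "on" (so a long vowel
    written as a doubled vowel counts twice), a doubled consonant adds one,
    and a syllabic "n" adds one.
    """
    w = romaji.lower()
    total = 0
    for i, ch in enumerate(w):
        nxt = w[i + 1] if i + 1 < len(w) else ""
        if ch in VOWELS:
            total += 1          # every vowel ends a sound unit
        elif ch == "n" and nxt not in VOWELS and nxt != "y":
            total += 1          # syllabic "n", as in ha-i-bu-n
        elif ch == nxt:
            total += 1          # doubled consonant (sokuon), as in ni-p-po-n
    return total

print(count_on("haibun"))  # 4 "on", though two English syllables
print(count_on("on"))      # 2 "on": the vowel plus the syllabic "n"
print(count_on("kyo"))     # 1 "on", though it may look like two syllables
```

Running the sketch on the words discussed above reproduces the counts given in the text: "haibun" yields four "on" and "on" itself yields two, while "kyo" yields just one.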
In 1973, the Haiku Society of America noted that the norm for writers of haiku in English was to use 17 syllables, but they also noted a trend toward shorter haiku. Shorter haiku are very much more common in 21st century English haiku writing.
About 12 syllables in English approximates the duration of 17 Japanese "on".
A haiku traditionally contains a "kigo", a word or phrase that symbolizes or implies the season of the poem and which is drawn from a "saijiki", an extensive but prescriptive list of such words.
Kigo are often in the form of metonyms and can be difficult for those who lack Japanese cultural references to spot. The Bashō examples below include "kawazu", "frog" implying spring, and "shigure", a rain shower in late autumn or early winter. Kigo are not always included in non-Japanese haiku or by modern writers of Japanese "free-form" haiku.
The best-known Japanese haiku is perhaps Bashō's "old pond" ("furu ike ya / kawazu tobikomu / mizu no oto": an old pond, a frog jumps in, the sound of water).
One haiku by Bashō illustrates that he was not always constrained to a 5-7-5 "on" pattern; it contains 18 "on" in the pattern 6-7-5 ("ō" is treated as two "on").
A haiku by Issa illustrates that 17 Japanese "on" do not always equate to 17 English syllables ("nan" counts as two "on" and "nonda" as three).
Hokku is the opening stanza of an orthodox collaborative linked poem, or renga, and of its later derivative, renku (or "haikai no renga"). By the time of Matsuo Bashō (1644–1694), the hokku had begun to appear as an independent poem, and was also incorporated in haibun (a combination of prose and hokku), and haiga (a combination of painting with hokku). In the late 19th century, Masaoka Shiki (1867–1902) renamed the standalone hokku to haiku. The latter term is now generally applied retrospectively to all hokku appearing independently of renku or renga, irrespective of when they were written, and the use of the term hokku to describe a stand-alone poem is considered obsolete.
In the 17th century, two masters arose who elevated "haikai" and gave it a new popularity. They were Matsuo Bashō (1644–1694) and Uejima Onitsura (1661–1738). "Hokku" is the first verse of the collaborative "haikai" or "renku", but its position as the opening verse made it the most important, setting the tone for the whole composition. Even though "hokku" had sometimes appeared individually, they were always understood in the context of "renku". The Bashō school promoted standalone "hokku" by including many in their anthologies, thus giving birth to what is now called "haiku". Bashō also used his "hokku" as torque points within his short prose sketches and longer travel diaries. This subgenre of "haikai" is known as "haibun". His best-known work, "Oku no Hosomichi", or "Narrow Roads to the Interior", is counted as one of the classics of Japanese literature and has been translated into English extensively.
Bashō was deified by both the imperial government and Shinto religious headquarters one hundred years after his death because he raised the haikai genre from a playful game of wit to sublime poetry. He continues to be revered as a saint of poetry in Japan, and is the one name from classical Japanese literature that is familiar throughout the world.
The next famous style of haikai to arise was that of Yosa Buson (1716–1783) and others such as Kitō, called the Tenmei style after the Tenmei Era (1781–1789) in which it was created.
Buson is recognized as one of the greatest masters of haiga (an art form where the painting is combined with haiku or haikai prose). His affection for painting can be seen in the painterly style of his haiku.
No new popular style followed Buson. However, a very individualistic, and at the same time humanistic, approach to writing haiku was demonstrated by the poet Kobayashi Issa (1763–1827), whose miserable childhood, poverty, sad life, and devotion to the Pure Land sect of Buddhism are evident in his poetry. Issa made the genre immediately accessible to wider audiences.
Masaoka Shiki (1867–1902) was a reformer and modernizer. A prolific writer, even though chronically ill during a significant part of his life, Shiki disliked the 'stereotype' of haikai writers of the 19th century who were known by the deprecatory term "tsukinami", meaning 'monthly', after the monthly or twice-monthly "haikai" gatherings of the end of the 18th century (in regard to this period of "haikai", it came to mean 'trite' and 'hackneyed'). Shiki also criticized Bashō. Like the Japanese intellectual world in general at that time, Shiki was strongly influenced by Western culture. He favored the painterly style of Buson and particularly the European concept of "plein-air" painting, which he adapted to create a style of haiku as a kind of nature sketch in words, an approach called "shasei" (, "sketching from life"). He popularized his views by verse columns and essays in newspapers.
Hokku up to the time of Shiki, even when appearing independently, were written in the context of renku. Shiki formally separated his new style of verse from the context of collaborative poetry. Being agnostic, he also separated it from the influence of Buddhism. Further, he discarded the term "hokku" and proposed the term "haiku" as an abbreviation of the phrase ""haikai no ku"" meaning a verse of "haikai", although the term predates Shiki by some two centuries, when it was used to mean "any" verse of haikai. Since then, "haiku" has been the term usually applied in both Japanese and English to all independent haiku, irrespective of their date of composition. Shiki's revisionism dealt a severe blow to renku and surviving haikai schools. The term "hokku" is now used chiefly in its original sense of the opening verse of a renku, and rarely to distinguish haiku written before Shiki's time.
The earliest westerner known to have written haiku was the Dutchman Hendrik Doeff (1764–1837), who was the Dutch commissioner at the Dejima trading post in Nagasaki during the first years of the 19th century.
Although there were further attempts outside Japan to imitate the "hokku" in the early 20th century, there was little understanding of its principles. Early Western scholars such as Basil Hall Chamberlain (1850–1935) and William George Aston were mostly dismissive of hokku's poetic value. One of the first advocates of English-language hokku was the Japanese poet Yone Noguchi. In "A Proposal to American Poets," published in the "Reader" magazine in February 1904, Noguchi gave a brief outline of the hokku and some of his own English efforts, ending with the exhortation, "Pray, you try Japanese Hokku, my American poets!" At about the same time the poet Sadakichi Hartmann was publishing original English-language hokku, as well as other Japanese forms in both English and French.
In France, haiku was introduced by Paul-Louis Couchoud around 1906. Couchoud's articles were read by early Imagist theoretician F. S. Flint, who passed on Couchoud's ideas to other members of the proto-Imagist Poets' Club such as Ezra Pound. Amy Lowell made a trip to London to meet Pound and find out about haiku. She returned to the United States where she worked to interest others in this "new" form. Haiku subsequently had a considerable influence on Imagists in the 1910s, notably Pound's "In a Station of the Metro" of 1913, but, notwithstanding several efforts by Yone Noguchi to explain "the hokku spirit", there was as yet little understanding of the form and its history.
In Spain, several prominent poets experimented with haiku, including Joan Alcover, Antonio Machado, Juan Ramón Jiménez and Luis Cernuda. Federico García Lorca also experimented with and learned conciseness from the form while still a student in 1921. The most persistent, however, was Isaac del Vando, whose "La Sombrilla Japonesa" (1924) went through several editions. The form was also used in Catalan by the avant-garde writers Josep Maria Junoy (1885–1955) and Joan Salvat-Papasseit, by the latter notably in his sequence "Vibracions" (1921).
In 1992 Nobel laureate Czesław Miłosz published the volume "Haiku" in which he translated from English to Polish haiku of Japanese masters and American and Canadian contemporary haiku authors.
The former president of the European Council, Herman Van Rompuy, is a "haijin" (haiku poet) and known as "Haiku Herman". He published a book of haiku in April 2010.
R. H. Blyth was an Englishman who lived in Japan. He produced a series of works on Zen, haiku, senryū, and on other forms of Japanese and Asian literature. In 1949, with the publication in Japan of the first volume of "Haiku", the four-volume work by Blyth, haiku were introduced to the post-war English-speaking world. This four-volume series (1949–52) described haiku from the pre-modern period up to and including Shiki. Blyth's "History of Haiku" (1964) in two volumes is regarded as a classical study of haiku. Today Blyth is best known as a major interpreter of haiku to English speakers. His works have stimulated the writing of haiku in English.
The Japanese-American scholar and translator Kenneth Yasuda published "The Japanese Haiku: Its Essential Nature, History, and Possibilities in English, with Selected Examples" in 1957. The book includes both translations from Japanese and original poems of his own in English, which had previously appeared in his book titled "A Pepper-Pod: Classic Japanese Poems together with Original Haiku". In these books Yasuda presented a critical theory about haiku, to which he added comments on haiku poetry by early 20th-century poets and critics. His translations apply a 5–7–5 syllable count in English, with the first and third lines end-rhymed. Yasuda considered that haiku translated into English should utilize all of the poetic resources of the language. Yasuda's theory also includes the concept of a "haiku moment", based in personal experience, which provides the motive for writing a haiku: an "aesthetic moment" of a timeless feeling of enlightened harmony in which the poet's nature and the environment are unified. This notion of the haiku moment has resonated with haiku writers in English, even though the notion is not widely promoted in Japanese haiku.
In 1958, "An Introduction to Haiku: An Anthology of Poems and Poets from Bashô to Shiki" by Harold G. Henderson was published by Doubleday Anchor Books. This book was a revision of Henderson's earlier book titled "The Bamboo Broom" (Houghton Mifflin, 1934). After World War II, Henderson and Blyth worked for the American Occupation in Japan and for the Imperial Household, respectively, and their shared appreciation of haiku helped form a bond between the two.
Henderson translated every hokku and haiku into a rhymed tercet (aba), whereas the Japanese originals never used rhyme. Unlike Yasuda, however, he recognized that 17 syllables in English are generally longer than the 17 "on" of a traditional Japanese haiku. Because the normal modes of English poetry depend on accentual meter rather than on syllabics, Henderson chose to emphasize the order of events and images in the originals. Nevertheless, many of Henderson's translations were in the five-seven-five pattern.
The first haiku written in English was arguably by Ezra Pound, "In a Station of the Metro", published in 1913. Since then, the haiku has become a fairly popular form among English-speaking poets. English haiku can follow the traditional Japanese rules, but are frequently less strict, particularly concerning the number of syllables and subject matter.
The loosening of traditional standards has resulted in the term "haiku" being applied to brief English-language poems such as "mathemaku" and other kinds of pseudohaiku. Some sources claim that this is justified by the blurring of definitional boundaries in Japan.
In the early 20th century, Nobel laureate Rabindranath Tagore composed haiku in Bengali and also translated some from Japanese. In Gujarati, Jhinabhai Desai 'Sneharashmi' popularized haiku and remained a popular haiku writer. In February 2008, the World Haiku Festival was held in Bangalore, gathering "haijin" from all over India and Bangladesh, as well as from Europe and the United States. Some other South Asian poets also write haiku from time to time, most notably the Pakistani poet Omer Tarin, who is also active in the movement for global nuclear disarmament; some of his 'Hiroshima Haiku' have been read at peace conferences in Japan and the UK. Ashitha, an Indian writer in the Malayalam language, wrote several haiku poems, which have been published as a book; her poems helped popularise haiku among readers of Malayalam literature.
The Mexican poet José Juan Tablada is credited with popularising haiku in his country, reinforced by the publication of two collections composed entirely in that form: "Un dia" (1919), and "El jarro de flores" (1922). In the introduction to the latter, Tablada noted that two young Mexicans, Rafael Lozano and Carlos Gutiérrez Cruz, had also begun writing them. They were followed soon after by Carlos Pellicer, Xavier Villaurrutia, and by Jaime Torres Bodet in his collection "Biombo" (1925). Much later, Octavio Paz included many haiku in "Piedras Sueltas" (1955).
Elsewhere the Ecuadorian poet and diplomat Jorge Carrera Andrade included haiku among the 31 poems contained in "Microgramas" (Tokio 1940) and the Argentine Jorge Luis Borges in the collection "La cifra" (1981).
Haibun is a combination of prose and haiku, often autobiographical or written in the form of a travel journal.
Haiga is a style of Japanese painting based on the aesthetics of haikai, and usually including a haiku. Today, haiga artists combine haiku with paintings, photographs and other art.
The carving of famous haiku on natural stone to make poem monuments known as "kuhi" has been a popular practice for many centuries. The city of Matsuyama has more than two hundred "kuhi".
Howard Hawks
Howard Winchester Hawks (May 30, 1896 – December 26, 1977) was an American film director, producer and screenwriter of the classic Hollywood era. Critic Leonard Maltin called him "the greatest American director who is not a household name."
A versatile film director, Hawks explored many genres such as comedies, dramas, gangster films, science fiction, film noir, war films and westerns. His most popular films include "Scarface" (1932), "Bringing Up Baby" (1938), "Only Angels Have Wings" (1939), "His Girl Friday" (1940), "To Have and Have Not" (1944), "The Big Sleep" (1946), "Red River" (1948), "The Thing from Another World" (1951), "Gentlemen Prefer Blondes" (1953), and "Rio Bravo" (1959). His frequent portrayals of strong, tough-talking female characters came to define the "Hawksian woman".
In 1942, Hawks was nominated for the Academy Award for Best Director for "Sergeant York". In 1974, he was awarded an Honorary Academy Award as "a master American filmmaker whose creative efforts hold a distinguished place in world cinema." His work has influenced various popular and respected directors such as Martin Scorsese, Robert Altman, Jean-Luc Godard, John Carpenter, and Quentin Tarantino.
Howard Winchester Hawks was born in Goshen, Indiana. He was the first-born child of Frank Winchester Hawks (1865–1950), a wealthy paper manufacturer, and his wife, Helen Brown (née Howard; 1872–1952), the daughter of a wealthy industrialist. Hawks's family on his father's side were American pioneers and his ancestor John Hawks had emigrated from England to Massachusetts in 1630. The family eventually settled in Goshen and by the 1890s was one of the wealthiest families in the Midwest, due mostly to the highly profitable Goshen Milling Company.
Hawks's maternal grandfather, C. W. Howard (1845–1916), had homesteaded in Neenah, Wisconsin in 1862 at age 17. Within 15 years he had made his fortune in the town's paper mill and other industrial endeavors. Frank Hawks and Helen Howard met in the early 1890s and married in 1895. Howard Hawks was the eldest of five children and his birth was followed by Kenneth Neil Hawks (August 12, 1898 – January 2, 1930), William Bellinger Hawks (January 29, 1901 – January 10, 1969), Grace Louise Hawks (October 17, 1903 – December 23, 1927) and Helen Bernice Hawks (1906 – May 4, 1911). In 1898, the family moved to Neenah, Wisconsin where Frank Hawks began working for his father-in-law's Howard Paper Company.
Between 1906 and 1909, the Hawks family began to spend more time in Pasadena, California during the cold Wisconsin winters in order to improve Helen Hawks's ill health. Gradually, they began to spend only their summers in Wisconsin before permanently moving to Pasadena in 1910. The family settled in a house down the street from Throop Polytechnic Institute and the Hawks children began attending the school's Polytechnic Elementary School in 1907. Hawks was an average student and did not excel in sports, but by 1910 had discovered coaster racing, an early form of soapbox racing. In 1911, Hawks's youngest sibling Helen died suddenly of food poisoning. From 1910 to 1912, Hawks attended Pasadena High School. But in 1912, the Hawks family moved to nearby Glendora, California, where Frank Hawks owned orange groves. Hawks finished his junior year of high school at Citrus Union High School in Glendora. During this time he worked as a barnstorming pilot.
He was then sent to Phillips Exeter Academy in New Hampshire from 1913 to 1914; his family's wealth may have influenced his acceptance to the elite private school. Even though he was seventeen, he was admitted as a lower middleclassman, the equivalent of a sophomore. While in New England, Hawks often attended the theaters in nearby Boston. In 1914, Hawks returned to Glendora and graduated from Pasadena High School that year. A skilled tennis player, Hawks won the United States Junior Tennis Championship at the age of eighteen. That same year, Hawks was accepted to Cornell University in Ithaca, New York, where he majored in mechanical engineering and was a member of Delta Kappa Epsilon. His college friend Ray S. Ashbury remembered Hawks spending more of his time playing craps and drinking alcohol than studying, although Hawks was also known to be a voracious reader of popular American and English novels in college.
While working in the film industry during his 1916 summer vacation, Hawks made an unsuccessful attempt to transfer to Stanford University. He returned to Cornell that September, leaving in April 1917 to join the Army when the United States entered World War I. During the war, he taught aviators to fly, experiences he later drew on for aviation films such as "The Dawn Patrol" (1930). Like many college students who joined the armed services during the war, he received a degree in absentia in 1918. Before Hawks was called for active duty, he returned to Hollywood and by the end of April 1917 was working on a Cecil B. DeMille film.
Howard Hawks's interest and passion for aviation led him to many important experiences and acquaintances. In 1916, Hawks met Victor Fleming, a Hollywood cinematographer who had been an auto mechanic and early aviator. Hawks had begun racing and working on a Mercer race car—bought for him by his grandfather, C.W. Howard—during his 1916 summer vacation in California. He allegedly met Fleming when the two men raced on a dirt track and caused an accident. This meeting led to Hawks's first job in the film industry, as a prop boy on the Douglas Fairbanks film "In Again, Out Again" (on which Fleming was employed as the cinematographer) for Famous Players-Lasky. According to Hawks, a new set needed to be built quickly when the studio's set designer was unavailable, so Hawks volunteered to do the job himself, much to Fairbanks's satisfaction. He was next employed as a prop boy and general assistant on an unspecified film directed by Cecil B. DeMille. (Hawks never named the film in later interviews and DeMille made roughly five films in that time period). By the end of April 1917, Hawks was working on Cecil B. DeMille's "The Little American". Hawks then worked on the Mary Pickford film "The Little Princess", directed by Marshall Neilan. According to Hawks, Neilan did not show up to work one day, so the resourceful Hawks offered to direct a scene himself, to which Pickford consented.
Hawks began directing at age 21 after he and cinematographer Charles Rosher filmed a double exposure dream sequence with Mary Pickford. Hawks worked with Pickford and Neilan again on "Amarilly of Clothes-Line Alley" before joining the United States Army Air Service. Hawks's military records were destroyed in the 1973 Military Archive Fire, so the only account of his military service is his own. According to Hawks, he spent 15 weeks in basic training at the University of California in Berkeley where he was trained to be a squadron commander in the air force. When Pickford visited Hawks at basic training, his superior officers were so impressed by the appearance of the celebrity that they promoted him to flight instructor and sent him to Texas to teach new recruits. Bored by this work, Hawks attempted to secure a transfer during the first half of 1918 and was eventually sent to Fort Monroe, Virginia. The Armistice was signed in November of that year, and Hawks was discharged as a Second Lieutenant without having seen active duty.
After the war, Hawks was eager to return to Hollywood. His brother, Kenneth Hawks, who had also served in the Air Force, graduated from Yale University in 1919, and the two of them moved to Hollywood together to pursue their careers. They quickly made friends with Hollywood insider (and fellow Ivy Leaguer) Allan Dwan. Hawks landed his first important job when he used his family's wealth to loan money to studio head Jack L. Warner. Warner quickly paid back the loan and hired Hawks as a producer to "oversee" the making of a new series of one-reel comedies starring the Italian comedian Monty Banks. Hawks later stated that he personally directed "three or four" of the shorts, though no documentation exists to confirm the claim. The films were profitable, but Hawks soon left to form his own production company using his family's wealth and connections to secure financing. The production company, "Associated Producers," was a joint venture between Hawks, Allan Dwan, Marshall Neilan, and director Allen Holubar, with a distribution deal with First National. The company made 14 films between 1920 and 1923, with 8 directed by Neilan, 3 by Dwan and 3 by Holubar. More of a "boy's club" than a production company, the four men gradually drifted apart and went their separate ways in 1923, by which time Hawks had decided that he wanted to direct rather than produce.
Beginning in early 1920, Hawks lived in rented houses in Hollywood with the group of friends he was accumulating. This rowdy group of mostly macho, risk-taking men included his brother Kenneth Hawks, Victor Fleming, Jack Conway, Harold Rosson, Richard Rosson, Arthur Rosson and Eddie Sutherland. During this time, Hawks first met Irving Thalberg, the vice-president in charge of production at Metro-Goldwyn-Mayer, whose intelligence and sense of story Hawks admired. Hawks also became friends with barnstormers and pioneer aviators at Rogers Airport in Los Angeles, getting to know men like Moye Stephens.
In 1923, Famous Players-Lasky president Jesse Lasky was looking for a new Production Editor in the story department of his studio and Thalberg suggested Hawks. Hawks accepted and was immediately put in charge of over 40 productions, including several literary acquisitions of stories by Joseph Conrad, Jack London and Zane Grey. Hawks worked on the scripts for all of the films produced, but he had his first official screenplay credit in 1924 on "Tiger Love". Hawks was the Story Editor at Famous Players (later Paramount Pictures) for almost two years, occasionally editing such films as "Heritage of the Desert". Hawks signed a new one-year contract with Famous-Players in the fall of 1924. He broke his contract to become a story editor for Thalberg at MGM, having secured a promise from Thalberg to make him a director within a year. In 1925, when Thalberg hesitated to keep his promise, Hawks broke his contract at MGM and left.
In October 1925, Sol Wurtzel, William Fox's studio superintendent at the Fox Film Corporation, invited Hawks to join his company with the promise of letting Hawks direct. Over the next three years, Hawks directed his first eight films (six silent, two "talkies"). Hawks reworked the scripts of most of the films he directed without always taking official credit for his work. He also worked on the scripts for "Honesty – The Best Policy" in 1926 and Josef von Sternberg's "Underworld" in 1927, famous for being one of the first gangster films. Hawks's first film was "The Road to Glory", which premiered in April 1926. The screenplay was based on a 35-page composition written by Hawks himself, making it one of the few films on which he had extensive writing credit. It is one of only two Hawks films now lost. Immediately after completing "The Road to Glory", Hawks began writing his next film, "Fig Leaves", his first (and, until 1935, only) comedy. It received positive reviews, particularly for the art direction and costume designs. It was released in July 1926 and was Hawks's first hit as a director. Although he mainly dismissed his early work, Hawks praised this film in later interviews.
"Paid to Love" is notable in Hawks's filmography because it was a highly stylized, experimental film in which he attempted to imitate the style of German film director F. W. Murnau. The film includes atypical tracking shots, expressionistic lighting and stylistic editing inspired by German Expressionist cinema. In a later interview, Hawks commented: "It isn't my type of stuff, at least I got it over in a hurry. You know the idea of wanting the camera to do those things: Now the camera's somebody's eyes." Hawks worked on the script with Seton I. Miller, with whom he would go on to collaborate on seven more films. The film stars George O'Brien as the introverted Crown Prince Michael, William Powell as his happy-go-lucky brother and Virginia Valli as Michael's flapper love interest Dolores. The characters played by Valli and O'Brien anticipate those found in later Hawks films: a sexually aggressive showgirl, an early prototype of the "Hawksian woman", and a shy man uninterested in sex, a type later played by Cary Grant and Gary Cooper. "Paid to Love" was completed by September 1926, but remained unreleased until July 1927 and was financially unsuccessful. "Cradle Snatchers" was based on a 1925 hit stage play by Russell G. Medcraft and Norma Mitchell. Shot in early 1927 and released that May, it was a minor hit. For many years it was believed to be a lost film until director Peter Bogdanovich discovered a print in 20th Century Fox's film vaults, although the print was missing part of reel three and all of reel four. In March 1927, Hawks signed a new one-year, three-picture contract with Fox and was assigned to direct "Fazil", based on the play "L'Insoumise" by Pierre Frondaie. Hawks again worked with Seton Miller on the script. Hawks went over schedule and over budget on the film, which began a rift between him and Sol Wurtzel that would eventually lead to Hawks leaving Fox. The film was finished in August 1927, though it was not released until June 1928.
"A Girl in Every Port" is considered by film scholars to be the most important film of Hawks's silent career. It is the first of his films to employ many of the Hawksian themes and characters that would define much of his subsequent work. It was his first "love story between two men": two men bonding over their duty, skills and careers who consider their friendship more important than their relationships with women. In France, Henri Langlois called Hawks "the Gropius of the cinema", and Swiss novelist and poet Blaise Cendrars said that the film "definitely marked the first appearance of contemporary cinema." Hawks went over budget once again with this film, though, and his relationship with Sol Wurtzel deteriorated. After an advance screening that received positive reviews, Wurtzel told Hawks, "This is the worst picture Fox has made in years." "The Air Circus" was Hawks's first film centered on aviation, one of his early passions. In 1928, Charles Lindbergh was the world's most famous person and "Wings" was one of the most popular films of the year. Wanting to capitalize on the country's aviation craze, Fox immediately bought Hawks' original story for "The Air Circus", a variation on the male-friendship plot of "A Girl in Every Port" about two young pilots. The film was shot from April to June 1928, but Fox ordered an additional 15 minutes of dialogue footage so that the film could compete with the new "talkies" being released. Hawks hated the new dialogue written by Hugh Herbert and refused to participate in the re-shoots. The film was released in September 1928 and was a moderate hit. It is one of the two films directed by Hawks that are now lost.
"Trent's Last Case" is an adaptation of British author E. C. Bentley's 1913 novel of the same name. Hawks considered the novel to be "one of the greatest detective stories of all time" and was eager to make it his first sound film. He cast Raymond Griffith in the lead role of Phillip Trent. Griffith's throat had been damaged by poison gas during World War I and his voice was a hoarse whisper, prompting Hawks to later state, "I thought he ought to be great in talking pictures "because" of that voice." However, after shooting only a few scenes, Fox shut Hawks down and ordered him to make a silent film, both because of Griffith's voice and because the studio owned only the legal rights to make a silent version. The film did have a musical score and synchronized sound effects, but no dialogue. With the silent-film business collapsing, it was never released in the US and was only briefly screened in England, where film critics hated it. The film was believed lost until the mid-1970s and was screened for the first time in the US at a Hawks retrospective in 1974. Hawks attended the screening and attempted to have the only print of the film destroyed. Hawks's contract with Fox ended in May 1929, and he never again signed a long-term contract with a major studio, remaining an independent producer-director for the rest of his long career.
By 1930, Hollywood was in upheaval over the coming of "talkies" and the careers of many actors and directors were ruined. Hollywood studios were recruiting stage actors and directors that they believed were better suited for sound films. After having worked in the industry for 14 years and directed many financially successful films, Hawks found himself having to prove himself an asset to the studios once again. Leaving Fox on sour terms didn't help his reputation, but Hawks never backed down from fights with studio heads. After several months of unemployment, Hawks renewed his career with his first sound film in 1930.
Hawks' first all-sound film was "The Dawn Patrol", based on an original story by John Monk Saunders and (unofficially) Hawks. Reportedly, Hawks paid Saunders to put his name on the film so that Hawks could direct it without arousing concern over his lack of writing experience. Accounts vary on who came up with the idea for the film, but Hawks and Saunders developed the story together and tried to sell it to several studios before First National agreed to produce it. Shooting began in late February 1930, about the same time that Howard Hughes was finally finishing his World War I aviation epic "Hell's Angels", which had been in production since September 1927. Shrewdly, Hawks began to hire many of the aviation experts and cameramen who had been employed by Hughes, including Elmer Dyer, Harry Reynolds and Ira Reed. When Hughes found out about the rival film, he did everything he could to sabotage "The Dawn Patrol": he harassed Hawks and other studio personnel, hired a spy who was quickly caught, and finally sued First National for copyright infringement. Hughes eventually dropped the lawsuit in late 1930; he and Hawks had become good friends during the legal battle. Filming was finished in late May 1930 and the film premiered in July, setting a first-week box office record at the Winter Garden Theatre in New York and becoming one of the biggest hits of 1930. The success of this film earned Hawks new respect as a filmmaker and allowed him to spend the rest of his career as an independent director without having to sign long-term contracts with specific studios.
Hawks did not get along with Warner Brothers executive Hal B. Wallis, and his contract allowed him to be loaned out to other studios. Hawks took the opportunity to accept a directing offer from Harry Cohn at Columbia Pictures. The resulting film, "The Criminal Code", opened in January 1931 and was a hit. The film was banned in Chicago, though, an experience of censorship that would continue with his next project. In 1930, Howard Hughes hired Hawks to direct "Scarface", a gangster film loosely based on the life of Chicago mobster Al Capone. The film was completed in September 1931, but the censorship of the Hays Code prevented it from being released as Hawks and Hughes had originally intended. The two men fought, negotiated and made compromises with the Hays Office for over a year, until the film was eventually released in 1932, after such other pivotal early gangster films as "The Public Enemy" and "Little Caesar". "Scarface" was the first film in which Hawks worked with screenwriter Ben Hecht, who became a close friend and collaborator for 20 years. After filming was complete on "Scarface", Hawks left Hughes to fight the legal battles and returned to First National to fulfill his contract, this time with producer Darryl F. Zanuck. For his next film, Hawks wanted to make a film about his childhood passion: car racing. Hawks developed the script for "The Crowd Roars" with Seton Miller in their eighth and final collaboration. Hawks used real race car drivers in the film, including the 1930 Indianapolis 500 winner Billy Arnold. The film was released in March 1932 and became a hit.
Later in 1932, he directed "Tiger Shark" starring Edward G. Robinson as a tuna fisherman. In these early films, Hawks established the prototypical "Hawksian Man", which film critic Andrew Sarris described as "upheld by an instinctive professionalism." "Tiger Shark" demonstrated Hawks' ability to incorporate touches of humor into dramatic, tense, and even tragic story lines. In 1933, Hawks signed a three-picture deal at Metro-Goldwyn-Mayer Studios, the first of which was "Today We Live" in 1933. This World War I film was based on a short story by author William Faulkner. Hawks' next two films at MGM were the boxing drama "The Prizefighter and the Lady" and the bio-pic "Viva Villa!". Studio interference on both films led Hawks to walk out on his MGM contract without completing either film himself.
In 1934, Hawks went to Columbia Pictures to make his first screwball comedy, "Twentieth Century", starring John Barrymore and Hawks's distant cousin Carole Lombard. It was based on a stage play by Ben Hecht and Charles MacArthur and, along with Frank Capra's "It Happened One Night" (released the same year), is considered to be the defining film of the screwball comedy genre. In 1935, Hawks made "Barbary Coast" with Edward G. Robinson and Miriam Hopkins. Hawks collaborated with Hecht and MacArthur on "Barbary Coast" and reportedly convinced them to work on the film by promising to teach them a marble game. They would switch off between working on the script and playing with marbles during work days. In 1936, he made the aviation adventure "Ceiling Zero" with James Cagney and Pat O'Brien. Also in 1936, Hawks began filming "Come and Get It", starring Edward Arnold, Joel McCrea, Frances Farmer and Walter Brennan. But he was fired by Samuel Goldwyn in the middle of shooting and the film was completed by William Wyler.
In 1938, Hawks made the screwball comedy "Bringing Up Baby" for RKO Pictures. It starred Cary Grant and Katharine Hepburn, was adapted by Dudley Nichols and Hagar Wilde, and has been called "the screwiest of the screwball comedies" by film critic Andrew Sarris. Grant plays a near-sighted paleontologist who suffers one humiliation after another at the hands of the lovestruck socialite played by Hepburn. Hawks's direction of "Bringing Up Baby" relied on the natural chemistry between Grant and Hepburn; with Grant as the paleontologist and Hepburn as an heiress, the casting reinforces the film's blurring of the line between the real and the imaginary. "Bringing Up Baby" was a box office flop when initially released and RKO subsequently fired Hawks over the heavy losses; however, the film has come to be regarded as one of Hawks's masterpieces. Hawks followed it with 11 consecutive hits up to 1951, starting with the aviation drama "Only Angels Have Wings", made in 1939 for Columbia Pictures and starring Cary Grant, Jean Arthur, Thomas Mitchell, Rita Hayworth, and Richard Barthelmess.
In 1940, Hawks returned to the screwball comedy genre with "His Girl Friday", starring Cary Grant and Rosalind Russell. The film was an adaptation of the hit Broadway play "The Front Page" by Ben Hecht and Charles MacArthur, which had already been made into a film in 1931. Not forgetting the influence Jesse Lasky had on his early career, in 1941, Hawks made "Sergeant York", starring Gary Cooper as a pacifist farmer who becomes a decorated World War I soldier. Hawks directed the film and cast Cooper as a specific favor to Lasky. This was the highest-grossing film of 1941 and won two Academy Awards (Best Actor and Best Editing), as well as earning Hawks his only nomination for Best Director. Later that year, Hawks worked with Cooper again for "Ball of Fire", which also starred Barbara Stanwyck. The film was written by Billy Wilder and Charles Brackett and is a playful take on "Snow White and the Seven Dwarfs". Cooper plays a sheltered, intellectual linguist who is writing an encyclopedia with six other scientists, and hires street-wise Stanwyck to help them with modern slang terms. In 1941, Hawks began work on the Howard Hughes-produced (and later directed) film "The Outlaw", based on the life of Billy the Kid and starring Jane Russell. Hawks completed initial shooting of the film in early 1941, but due to perfectionism and battles with the Hollywood Production Code, Hughes continued to re-shoot and re-edit the film until 1943, when it was finally released with Hawks uncredited as director.
After making the World War II film "Air Force" in 1943, starring John Garfield and written by Nichols, Hawks made two films with real-life lovers Humphrey Bogart and Lauren Bacall. "To Have and Have Not", made in 1944, stars Bogart, Bacall and Walter Brennan and is based on a novel by Ernest Hemingway. Hawks was a close friend of Hemingway and made a bet with the author that he could make a good film out of Hemingway's "worst book." Hawks, William Faulkner and Jules Furthman collaborated on the script about an American fishing boat captain working out of French Martinique in the Caribbean and various episodes of espionage after the Fall of France in 1940. Bogart and Bacall fell in love on the set of the film and married soon afterwards. Some critics have faulted "To Have and Have Not" for a "rambling, slapped-together feel" that makes the film clumsy and dull; others have praised its romantic plot and compared its feel to that of "Casablanca". The film's greatest strength has been said to lie in its atmosphere and wit, which play to Bacall's strengths and reinforce its theme of beauty in perpetual opposition. Hawks reteamed with Bogart and Bacall in 1946 on "The Big Sleep", based on the Philip Marlowe detective novel by Raymond Chandler. The screenplay also reteamed Faulkner and Furthman, joined by Leigh Brackett.
In 1948, Hawks made "Red River", an epic western reminiscent of "Mutiny on the Bounty" starring John Wayne and Montgomery Clift in his first film. Later that year, Hawks remade his earlier film "Ball of Fire" as "A Song Is Born", this time starring Danny Kaye and Virginia Mayo. This version follows the same plot but pays more attention to popular jazz music and includes such jazz legends as Tommy Dorsey, Benny Goodman, Louis Armstrong, Lionel Hampton, and Benny Carter playing themselves. In 1949, Hawks reteamed with Cary Grant in the screwball comedy "I Was a Male War Bride", also starring Ann Sheridan.
In 1951, Hawks produced, and according to some, directed, a science-fiction film, "The Thing from Another World". Director John Carpenter stated: "And let's get the record straight. The movie was directed by Howard Hawks. Verifiably directed by Howard Hawks. He let his editor, Christian Nyby, take credit. But the kind of feeling between the male characters—the camaraderie, the group of men that has to fight off the evil—it's all pure Hawksian." He followed this with the 1952 western film "The Big Sky", starring Kirk Douglas. Later in 1952, Hawks worked with Cary Grant for the fifth and final time in the screwball comedy "Monkey Business", which also starred Marilyn Monroe and Ginger Rogers. Grant plays a scientist (reminiscent of his character in "Bringing Up Baby") who creates a formula that increases his vitality. Film critic John Belton called the film Hawks' "most organic comedy." Hawks' third film of 1952 was a contribution to the omnibus film "O. Henry's Full House", which comprises short stories by the writer O. Henry made by various directors. Hawks' segment, "The Ransom of Red Chief", starred Fred Allen, Oscar Levant and Jeanne Crain.
In 1953, Hawks made "Gentlemen Prefer Blondes", which featured Marilyn Monroe famously singing "Diamonds Are a Girl's Best Friend." The film starred Monroe and Jane Russell as two gold-digging cabaret performers and best friends; many critics argue it is the only female version of Hawks' celebrated "buddy film" genre. In 1955, Hawks shot a film atypical within the context of his other work, "Land of the Pharaohs", a sword-and-sandal epic about ancient Egypt starring Jack Hawkins and Joan Collins. The film was Hawks' final collaboration with longtime friend William Faulkner before the author's death. In 1959, Hawks worked with John Wayne in "Rio Bravo", also starring Dean Martin, Ricky Nelson, and Walter Brennan, as four lawmen "defending the fort" of their local jail, in which a local criminal is awaiting trial while his family attempts to break him out. The screenplay was written by Furthman and Leigh Brackett, who had collaborated with Hawks previously on "The Big Sleep". Film critic Robin Wood has said that if he "were asked to choose a film that would justify the existence of Hollywood ... it would be "Rio Bravo"."
In 1962, Hawks made "Hatari!", again with John Wayne, who plays a catcher of wild animals in Africa. It was also written by Leigh Brackett. Hawks's knowledge of mechanics enabled him to build the camera-car hybrid used to film the hunting scenes. In 1964, Hawks made his final comedy, "Man's Favorite Sport?", starring Rock Hudson (since Cary Grant felt he was too old for the role) and Paula Prentiss. Hawks then returned to his childhood passion for car racing with "Red Line 7000" in 1965, featuring a young James Caan in his first leading role. Hawks' final two films were both Western remakes of "Rio Bravo", starring John Wayne and written by Leigh Brackett. In 1966, Hawks directed "El Dorado", starring Wayne, Robert Mitchum, and Caan, which was released the following year. He then made "Rio Lobo", with Wayne, in 1970. After "Rio Lobo", Hawks planned a project relating to Ernest Hemingway and "Now, Mr. Gus," a comedy about two male friends seeking oil and money. He died in December 1977, before these projects were completed.
Hawks died on December 26, 1977, at the age of 81, from complications arising from a fall when he tripped over his dog at his home in Palm Springs, California. He had spent two weeks in the hospital recovering from the resulting concussion when he asked to be taken home, and he died a few days later. At the time, he was working with his final protégée and discovery, Larraine Zax.
Howard Hawks was married three times: to actress Athole Shearer, sister of Norma Shearer, from 1928 to 1940; to socialite and fashion icon Slim Keith from 1941 to 1949; and to actress Dee Hartford from 1953 to 1960. Hawks had two children with Shearer, Barbara and David. David Hawks worked as an assistant director for the television series "M*A*S*H". His daughter Kitty Hawks was born of his second marriage, to "Slim" Keith. With his third wife, Dee Hartford, Hawks had a son, Gregg, named after cinematographer Gregg Toland.
Along with his love of flying machines, Hawks had a passion for cars and motorcycles. He built the race car that won the 1936 Indianapolis 500 and enjoyed riding motorcycles with Barbara Stanwyck and Gary Cooper. Hawks and his son Gregg were members of the Checkers Motorcycle Club. Hawks continued riding until the age of 78. His other hobbies included golf, tennis, sailing, horse racing, carpentry, and silversmithing.
Hawks was also known for maintaining close friendships with many American writers, such as Ben Hecht, Ernest Hemingway, and William Faulkner. Hawks credited himself with discovering William Faulkner and introducing the then-unknown writer to the Algonquin Round Table. Hawks and Faulkner shared interests in flying and drinking, and Faulkner admired Hawks's films, asking him to teach him how to write screenplays. Faulkner wrote five screenplays for Hawks, the first being "Today We Live" and the last "Land of the Pharaohs". Sharing interests in fishing and skiing, Hawks was also close to Ernest Hemingway and was nearly chosen to direct the film adaptation of "For Whom the Bell Tolls". Hawks found it difficult to forgive Hemingway for his suicide. After coming to terms with it in the 1970s, he began to plan a film project about Hemingway and his relationship with Robert Capa, but he never filmed it.
Hawks supported Thomas Dewey in the 1944 United States presidential election.
Hawks was a versatile director whose career includes comedies, dramas, gangster films, science fiction, film noir, and Westerns. Hawks's own functional definition of what constitutes a "good movie" is characteristic of his no-nonsense style: "Three great scenes, no bad ones." Hawks also defined a good director as "someone who doesn't annoy you." In Hawks's own words, his directing style is based on being enjoyable and straightforward. His style was very actor-focused and he made it a point to take as few shots as possible, thereby preserving an inherent and natural humor for his comedic pieces.
While Hawks was not sympathetic to feminism, he popularized the Hawksian woman archetype, a portrayal of women in stronger, less stereotypically feminine roles. Such an emphasis was virtually unknown in the 1920s and was therefore seen as a rarity; according to Naomi Wise, it has been cited as a prototype of the post-feminist movement. Another notable theme throughout his work was the relationship between morality and human interaction. In this sense he tended to portray the more dramatic elements of a concept or plot in a humorous way.
Orson Welles, in an interview with Peter Bogdanovich, said of Howard Hawks, in comparison with John Ford, that "Hawks is great prose; Ford is poetry." Despite Hawks's work in a variety of Hollywood genres, he still retained an independent sensibility. Film critic David Thomson wrote of Hawks: "Far from being the meek purveyor of Hollywood forms, he always chose to turn them upside down. "To Have and Have Not" and "The Big Sleep", ostensibly an adventure and a thriller, are really love stories. "Rio Bravo", apparently a Western – everyone wears a cowboy hat – is a comedy conversation piece. The ostensible comedies are shot through with exposed emotions, with the subtlest views of the sex war, and with a wry acknowledgment of the incompatibility of men and women." David Boxwell argues that the filmmaker's body of work "has been accused of a historical and adolescent escapism, but Hawks's fans rejoice in his oeuvre's remarkable avoidance of Hollywood's religiosity, bathos, flag-waving, and sentimentality."
In addition to his career as a film director, Howard Hawks wrote or supervised the writing of most of his films. In some cases, he would rewrite parts of the script on set. Because of the Screen Writers Guild's rule that directors and producers could not receive writing credit, Hawks rarely received credit. Although Sidney Howard received credit for writing "Gone with the Wind" (1939), the screenplay was actually written by a number of Hollywood screenwriters, including David O. Selznick, Ben Hecht, and Howard Hawks. Hawks was an uncredited contributor to many other screenplays, such as "Underworld" (1927), "Morocco" (1930), "Shanghai Express" (1932), and "Gunga Din" (1939). Hawks also produced many of his own films, preferring not to work under major film studios, because it allowed him creative freedom in his writing, directing, and casting. Hawks would sometimes walk out on films that he wasn't producing himself. He never, however, considered producing to come before his directing: several of the title cards for his films show "Directed and produced by Howard Hawks" with "produced" underneath "directed" in much smaller font, and sometimes his films credited no producer at all. Hawks discovered many well-known film stars, including Paul Muni, George Raft, Ann Dvorak, Carole Lombard, Frances Farmer, Jane Russell, Montgomery Clift, Joanne Dru, Angie Dickinson, James Caan, and most famously Lauren Bacall.
Peter Bogdanovich suggested that the Museum of Modern Art mount a retrospective on Howard Hawks, who was then in the process of releasing "Hatari!". For marketing purposes, Paramount paid for part of the exhibition, which was held in 1962 and later traveled to Paris and London. For the event, Bogdanovich prepared a monograph. As a result of the retrospective, a special edition of "Cahiers du Cinéma" was published, and Hawks was featured in his own issue of "Movie" magazine.
In 1996, Howard Hawks was voted No. 4 on "Entertainment Weekly"'s list of 50 greatest directors. In 2007, "Total Film" magazine ranked Hawks as No. 4 in its "100 Greatest Film Directors Ever" list. "Bringing Up Baby" (1938) was listed number 97 on the American Film Institute's AFI's 100 Years...100 Movies. On the AFI's AFI's 100 Years...100 Laughs "Bringing Up Baby" was listed number 14, "His Girl Friday" (1940) was listed number 19 and "Ball of Fire" (1941) was listed number 92. In the 2012 "Sight & Sound" polls of the greatest films ever made, six films directed by Hawks were in the critics' top 250 films: "Rio Bravo" (number 63), "Bringing Up Baby" (number 110), "Only Angels Have Wings" (number 154), "His Girl Friday" (number 171), "The Big Sleep" (number 202) and "Red River" (number 235). Six of his films currently hold a 100% rating on Rotten Tomatoes. His films "Ball of Fire", "The Big Sleep", "Bringing Up Baby", "His Girl Friday", "Only Angels Have Wings", "Red River", "Rio Bravo", "Scarface", "Sergeant York", "The Thing from Another World" and "Twentieth Century" were deemed "culturally, historically, or aesthetically significant" by the United States Library of Congress and inducted into the National Film Registry. With eleven films, he ties with John Ford for directing the most films that are in the registry.
From the film industry, he received three nominations for Outstanding Directorial Achievement in Motion Pictures from the Directors Guild of America, for "Red River" in 1949, "The Big Sky" in 1953, and "Rio Bravo" in 1960. He was inducted into the Online Film and Television Association's Hall of Fame for his directing in 2005. For his contribution to the motion picture industry, Howard Hawks has a star on the Hollywood Walk of Fame at 1708 Vine Street. He was nominated for the Academy Award for Best Director in 1942 for "Sergeant York", but he received his only Oscar in 1974 as an Honorary Award from the Academy. He was cited as "a master filmmaker whose creative efforts hold a distinguished place in world cinema".
In the 1950s, Eugene Archer, a film fan, was planning to write a book on important American film directors such as John Ford. However, after reading "Cahiers du Cinéma", Archer learned that the French film scene was more interested in Alfred Hitchcock and Howard Hawks. Books were not written on Hawks until the 1960s, and a full biography was not published until 1997, twenty years after his death. Film critic Andrew Sarris cited Howard Hawks as "the least known and least appreciated Hollywood director of any stature". According to professor of film studies Ian Brookes, Hawks is not as well known as other directors because he lacks an association with a particular genre, as Ford has with the Western and Hitchcock with the thriller. Hawks worked across many genres, including gangster, film noir, musical comedy, romantic comedy, screwball comedy, Western, aviation, and combat films. Moreover, Hawks preferred not to associate with major studios during film production: he worked for all the major studios at least once on short-term contracts, but many of his films were produced under his own name. The simplicity of his narratives and stories may also have contributed to his under-recognition. Commercially, his films were successful, but he received little critical acclaim except for one Academy Award nomination for Best Director for "Sergeant York" (he lost to John Ford for "How Green Was My Valley") and an Honorary Academy Award presented to him two years before his death.
Some critics reduce Hawks to his action films, describing him as a director who produced films with a "masculine bias"; however, action scenes in Hawks's films were often left to second-unit directors, and Hawks actually preferred to work indoors. Howard Hawks's style is difficult to interpret because there is no recognizable relationship between his visual and narrative style, as there is in the films of his contemporaries. Because his camera style derived more from his working method than from anecdotal or visual realization, his camera work is unobtrusive, making his films appear to have little to no cinematographic style. Hawks's style can, rather, be characterized as improvisational and collaborative. Hawks' directorial style and the use of natural, conversational dialogue in his films are cited as major influences on many noted filmmakers, including Robert Altman, John Carpenter, and Quentin Tarantino. His work is also admired by many notable directors, including Peter Bogdanovich, Martin Scorsese, François Truffaut, Michael Mann, and Jacques Rivette. Jean-Luc Godard called him "the greatest American artist." Critic Leonard Maltin labeled Hawks "the greatest American director who is not a household name." Andrew Sarris, in his influential book of film criticism "The American Cinema: Directors and Directions 1929–1968", included Hawks in the "pantheon" of the 14 greatest film directors who had worked in the United States. Brian De Palma dedicated his version of "Scarface" to Hawks and Ben Hecht. Altman was influenced by the fast-paced dialogue of "His Girl Friday" in "MASH" and subsequent productions. Hawks was nicknamed "The Gray Fox" by members of the Hollywood community, thanks to his prematurely gray hair.
Hawks has been considered by some film critics to be an auteur both because of his recognizable style and frequent use of specific thematic elements, and because of his attention to all aspects of his films, not merely directing. Hawks was venerated by French critics associated with "Cahiers du cinéma", who intellectualized his work in a way that Hawks himself found moderately amusing (his work was promoted in France by The Studio des Ursulines cinema), and though he was not taken seriously by British critics of the "Sight & Sound" circle at first, other independent British writers, such as Robin Wood, admired his films. Wood named Hawks's "Rio Bravo" as his top film of all time.
Compounds of carbon
Compounds of carbon are defined as chemical substances containing carbon. Carbon forms more compounds than any other chemical element except hydrogen. Organic carbon compounds are far more numerous than inorganic ones. In general, carbon forms covalent bonds with other elements. Carbon is tetravalent, though carbon free radicals and carbenes occur as short-lived intermediates. The ions of carbon, carbocations and carbanions, are likewise short-lived. An important property of carbon is catenation, the ability to form long chains and rings of carbon atoms.
The known inorganic chemistry of the allotropes of carbon (diamond, graphite, and the fullerenes) blossomed with the discovery of buckminsterfullerene in 1985, as additional fullerenes and their various derivatives were discovered. One such class of derivatives is inclusion compounds, in which an ion is enclosed by the all-carbon shell of the fullerene. This inclusion is denoted by the "@" symbol in endohedral fullerenes. For example, an ion consisting of a lithium ion trapped within buckminsterfullerene would be denoted Li+@C60. As with any other ionic compound, this complex ion could in principle pair with a counterion to form a salt. Other elements are also incorporated in so-called graphite intercalation compounds.
Carbides are binary compounds of carbon with an element less electronegative than carbon. The most important are
Al4C3,
B4C,
CaC2,
Fe3C,
HfC,
SiC,
TaC,
TiC, and
WC.
It was once thought that organic compounds could only be created by living organisms. Over time, however, scientists learned how to synthesize organic compounds in the lab. The number of organic compounds is immense and the known number of defined compounds is close to 10 million. However, an indefinitely large number of such compounds are theoretically possible.
By definition, an organic compound must contain at least one atom of carbon, but this criterion is not generally regarded as sufficient. Indeed, the distinction between organic and inorganic compounds is ultimately a matter of convention, and there are several compounds that have been classified either way, such as:
COCl2 (phosgene),
CSCl2 (thiophosgene),
CS(NH2)2 (thiourea), and
CO(NH2)2 (urea).
When carbon is bonded to metals, the field of organic chemistry crosses over into organometallic chemistry.
There is a rich variety of carbon chemistry that does not fall within the realm of organic chemistry and is thus called inorganic carbon chemistry.
There are many oxides of carbon (oxocarbons), of which the most common are carbon dioxide (CO2) and carbon monoxide (CO). Other lesser-known oxides include carbon suboxide (C3O2) and mellitic anhydride (C12O9). There are also numerous unstable or elusive oxides, such as dicarbon monoxide (C2O), oxalic anhydride (C2O4), and carbon trioxide (CO3).
There are several oxocarbon anions, negative ions that consist solely of oxygen and carbon. The most common are the carbonate (CO32−) and oxalate (C2O42−). The corresponding acids are the highly unstable carbonic acid (H2CO3) and the quite stable oxalic acid (H2C2O4), respectively. These anions can be partially deprotonated to give the bicarbonate (HCO3−) and hydrogenoxalate (HC2O4−). Other more exotic carbon–oxygen anions exist, such as acetylenedicarboxylate (O2C–C≡C–CO22−), mellitate (C12O96−), squarate (C4O42−), and rhodizonate (C6O62−). The anhydrides of some of these acids are oxides of carbon; carbon dioxide, for instance, can be seen as the anhydride of carbonic acid.
Some important carbonates are
Ag2CO3,
BaCO3,
CaCO3,
CdCO3,
Ce2(CO3)3,
CoCO3,
Cs2CO3,
CuCO3,
FeCO3,
K2CO3,
La2(CO3)3,
Li2CO3,
MgCO3,
MnCO3,
(NH4)2CO3,
Na2CO3,
NiCO3,
PbCO3,
SrCO3, and
ZnCO3.
The most important bicarbonates include
NH4HCO3,
Ca(HCO3)2,
KHCO3, and
NaHCO3.
The most important oxalates include
Ag2C2O4,
BaC2O4,
CaC2O4,
Ce2(C2O4)3,
K2C2O4, and
Na2C2O4.
Metal carbonyls are coordination complexes formed between transition metals and the neutral carbonyl ligand, CO. These complexes are covalent. Here is a list of some carbonyls:
Cr(CO)6,
Co2(CO)8,
Fe(CO)5,
Mn2(CO)10,
Mo(CO)6,
Ni(CO)4,
W(CO)6.
Important inorganic carbon-sulfur compounds are the carbon sulfides carbon disulfide (CS2) and carbonyl sulfide (OCS). Carbon monosulfide (CS), unlike carbon monoxide, is very unstable. Important compound classes are thiocarbonates, thiocarbamates, dithiocarbamates, and trithiocarbonates.
Small inorganic carbon–nitrogen compounds are cyanogen, hydrogen cyanide, cyanamide, isocyanic acid, and cyanogen chloride.
Paracyanogen is the polymerization product of cyanogen. Cyanuric chloride is the trimer of cyanogen chloride and 2-cyanoguanidine is the dimer of cyanamide.
Other types of inorganic compounds include the inorganic salts and complexes of the carbon-containing cyanide, cyanate, fulminate, thiocyanate, and cyanamide ions. Examples of cyanides are copper cyanide (CuCN) and potassium cyanide (KCN); examples of cyanates are potassium cyanate (KNCO) and silver cyanate (AgNCO); examples of fulminates are silver fulminate (AgCNO) and mercury fulminate (Hg(CNO)2); and an example of a thiocyanate is potassium thiocyanate (KSCN).
The common carbon halides are carbon tetrafluoride (CF4), carbon tetrachloride (CCl4), carbon tetrabromide (CBr4), carbon tetraiodide (CI4), and a large number of other carbon-halogen compounds.
A carborane is a cluster composed of boron and carbon atoms, such as H2C2B10H10.
There are hundreds of alloys that contain carbon. The most common of these is steel, sometimes called "carbon steel". All kinds of steel contain some amount of carbon, by definition, and all ferrous alloys contain some carbon.
Some other common alloys that are based on iron and carbon include anthracite iron, cast iron, pig iron, and wrought iron.
In more technical uses, there are also spiegeleisen, an alloy of iron, manganese, and carbon; and stellite, an alloy of cobalt, chromium, tungsten, and carbon.
Whether placed there deliberately or not, traces of carbon are also found in these common metals and their alloys: aluminum, chromium, magnesium, molybdenum, niobium, thorium, titanium, tungsten, uranium, vanadium, zinc, and zirconium. For example, many of these metals are smelted with coke, a form of carbon, and aluminum and magnesium are made in electrolytic cells with carbon electrodes. Some distribution of carbon into all of these metals is inevitable.