Old fashioned (cocktail) The old fashioned is a cocktail made by muddling sugar with bitters, adding whiskey or, less commonly, brandy, and garnishing with a twist of citrus rind and a cocktail cherry. It is traditionally served in an old fashioned glass (also known as a rocks glass), which predated the cocktail. Developed during the 19th century and given its name in the 1880s, it is an IBA Official Cocktail. It is also one of six basic drinks listed in David A. Embury's "The Fine Art of Mixing Drinks". The first documented definition of the word "cocktail" was in response to a reader's letter asking to define the word in the May 6, 1806, issue of "The Balance and Columbian Repository" in Hudson, New York. In the May 13, 1806, issue, the paper's editor wrote that it was a potent concoction of spirits, bitters, water, and sugar; it was also referred to at the time as a bittered sling. J. E. Alexander describes the cocktail similarly in 1833, as he encountered it in New York City, as being rum, gin, or brandy, significant water, bitters, and sugar, though he includes a nutmeg garnish as well. By the 1860s, it was common for orange curaçao, absinthe, and other liqueurs to be added to the cocktail. The original concoction, albeit in different proportions, came back into vogue, and was referred to as "old-fashioned". The most popular of the in-vogue "old-fashioned" cocktails were made with whiskey, according to a Chicago barman quoted in the "Chicago Daily Tribune" in 1882, with rye being more popular than bourbon. The recipe he describes is a combination of spirits, bitters, water, and sugar similar to the one documented seventy-six years earlier. The first mention in print of "old fashioned cocktails" was in the "Chicago Daily Tribune" in February 1880. However, the Pendennis Club, a gentlemen's club founded in 1881 in Louisville, Kentucky, claims the old fashioned cocktail was invented there. The recipe was said to have been created by a bartender at that club in honor of Colonel James E. Pepper, a prominent bourbon distiller, who brought it to the Waldorf-Astoria Hotel bar in New York City. With its conception rooted in the city's history, in 2015 the city of Louisville named the old fashioned its official cocktail. Each year, during the first two weeks of June, Louisville celebrates "Old Fashioned Fortnight", which encompasses bourbon events, cocktail specials, and National Bourbon Day, always celebrated on June 14. George Kappeler provides several of the earliest published recipes for old fashioned cocktails in his 1895 book. Recipes are given for whiskey, brandy, Holland gin, and Old Tom gin; the whiskey old fashioned combines sugar, water, bitters, and a jigger of whiskey. By the 1860s, as illustrated by Jerry Thomas' 1862 book, basic cocktail recipes included curaçao or other liqueurs. These liqueurs were not mentioned in the early 19th-century descriptions, nor in the "Chicago Daily Tribune" descriptions of the "old fashioned" cocktails of the early 1880s; they were absent from Kappeler's old fashioned recipes as well. The old fashioned cocktail recipes differ from the other cocktail recipes of the late 19th century mainly in preparation method, in the use of sugar and water in lieu of simple or gomme syrup, and in the absence of additional liqueurs. These old fashioned cocktail recipes are literally for cocktails made the old-fashioned way.
A book by David Embury published in 1948 provides a slight variation, specifying 12 parts American whiskey, 1 part simple syrup, and 1-3 dashes of Angostura bitters, with a twist of lemon peel squeezed over the top and the drink served garnished with the peel. Two additional recipes from the 1900s vary in the precise ingredients but omit the cherry, which was introduced after 1930, as well as the soda water that the occasional recipe calls for. Orange bitters were a popular ingredient in the late 19th century. The original old fashioned recipe would have showcased the whiskey available in America in the 19th century: Irish, bourbon, or rye whiskey. In some regions, however, especially Wisconsin, brandy is substituted for whiskey (the result sometimes called a brandy old fashioned). Eventually the use of other spirits became common, such as a gin recipe that became popularized in the late 1940s. Common garnishes for an old fashioned include an orange slice or a maraschino cherry or both, although these modifications appeared around 1930, some time after the original recipe was invented. While some recipes began making sparse use of orange zest for flavor, the practice of muddling orange and other fruit gained prevalence as late as the 1990s. Some modern variants have greatly sweetened the old fashioned, e.g. by adding blood orange soda to make a fizzy old fashioned, or muddled strawberries to make a strawberry old fashioned. The old fashioned is the cocktail of choice of Don Draper, the lead character on the "Mad Men" television series, set in the 1960s. The use of the drink in the series coincided with a renewed interest in this and other classic cocktails in the 2000s. In the movie "It's a Mad, Mad, Mad, Mad World" (1963), pilot Tyler Fitzgerald (Jim Backus) directs passenger Dingy Bell (Mickey Rooney) to the aircraft's bar to "make us some old fashioneds." Annoyed by suggestions that he should limit his drinking while piloting an airplane, and finding Bell's old fashioneds too sweet, Fitzgerald turns the controls over to Bell's sidekick Benjy Benjamin (Buddy Hackett) and retires to the back of the plane to "make some old fashioneds the old fashioned way, the way dear old dad used to." When Benjamin asks what if something happens, Fitzgerald replies, "What could happen to an old fashioned?" In the television series "M*A*S*H", the character Margaret Houlihan frequently orders an old fashioned, "without the fruit", while in the Officers Club. In the movie "Crazy, Stupid, Love", the old fashioned is the preferred cocktail of pickup artist Jacob Palmer, and he is shown drinking it both in the bar and at home.
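Embury's 12:1 proportions above are just a ratio, so they scale to any pour size. As a rough illustration, the sketch below converts the ratio to ounce measures; the function name and the 2 oz default pour are assumptions for the example, not details from the article.

```python
# Minimal sketch: scale David Embury's 1948 old fashioned ratio
# (12 parts American whiskey : 1 part simple syrup) to a chosen pour.
# Helper name and default pour size are illustrative, not from the article.

def scale_old_fashioned(whiskey_oz: float = 2.0) -> dict:
    """Return ingredient amounts for Embury's 12:1 proportions."""
    part = whiskey_oz / 12  # one "part", in fluid ounces
    return {
        "american whiskey (oz)": round(whiskey_oz, 2),
        "simple syrup (oz)": round(part, 2),   # 1 part
        "angostura bitters (dashes)": "1-3",   # per Embury
        "garnish": "twist of lemon peel",
    }

print(scale_old_fashioned(2.0))
# {'american whiskey (oz)': 2.0, 'simple syrup (oz)': 0.17, ...}
```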
https://en.wikipedia.org/wiki?curid=22775
Omnipotence Omnipotence is the quality of having fully unlimited power. Monotheistic religions generally attribute omnipotence only to the deity of their faith. In the monotheistic philosophies of Abrahamic religions, omnipotence is often listed as one of a deity's characteristics among many, including omniscience, omnipresence, and omnibenevolence. The presence of all these properties in a single entity has given rise to considerable theological debate, prominently including the problem of theodicy, the question of why such a deity would permit the manifestation of evil. It is accepted in philosophy and science that omnipotence can never be effectively understood. The word "omnipotence" derives from the Latin "omnipotens" ("all-powerful" or "all-potent"), from "omnis" ("all") and "potens" ("powerful"). The term omnipotent has been used to connote a number of different positions. Thomas Aquinas acknowledged difficulty in comprehending the deity's power: "All confess that God is omnipotent; but it seems difficult to explain in what His omnipotence precisely consists: for there may be doubt as to the precise meaning of the word 'all' when we say that God can do all things. If, however, we consider the matter aright, since power is said in reference to possible things, this phrase, 'God can do all things,' is rightly understood to mean that God can do all things that are possible; and for this reason He is said to be omnipotent." In the scholastic view, omnipotence is generally understood to be compatible with certain limitations or restrictions. A proposition that is necessarily true is one whose negation is self-contradictory. Omnipotence is all-sufficient power. The adaptation of means to ends in the universe does not argue, as J. S. Mill would have it, that the power of the designer is limited, but only that God has willed to manifest his glory by a world so constituted rather than by another. Indeed, the production of secondary causes, capable of accomplishing certain effects, requires greater power than the direct accomplishment of these same effects. On the other hand, even though no creature existed, God's power would not be barren, for "creatures are not an end to God." Regarding the deity's power, medieval theologians contended that there are certain things that even an omnipotent deity cannot do. The statement "a deity can do anything" is only sensible with an assumed suppressed clause, "that implies the perfection of true power". This standard scholastic answer allows that acts of creatures such as walking can be performed by humans but not by a deity. Rather than an advantage in power, human acts such as walking, sitting, or giving birth were possible only because of a "defect" in human power. The capacity to sin, for example, is not a power but a defect or infirmity. In response to questions of a deity performing impossibilities, e.g. making square circles, Aquinas says that "everything that does not imply a contradiction in terms, is numbered amongst those possible things, in respect of which God is called omnipotent: whereas whatever implies contradiction does not come within the scope of divine omnipotence, because it cannot have the aspect of possibility. Hence it is better to say that such things cannot be done, than that God cannot do them. Nor is this contrary to the word of the angel, saying: 'No word shall be impossible with God.'
For whatever implies a contradiction cannot be a word, because no intellect can possibly conceive such a thing." In recent times, C. S. Lewis adopted a scholastic position in the course of his work "The Problem of Pain"; Lewis follows Aquinas' view on contradiction. Early Freudianism saw a feeling of omnipotence as intrinsic to early childhood. 'As Freud and Ferenczi have shown, the child lives in a sort of megalomania for a long period...the "fiction of omnipotence"'. At birth 'the baby "is" everything "as far as he knows" - "all powerful"...every step he takes towards establishing his own limits and boundaries will be painful because he'll have to lose this original God-like feeling of omnipotence'. Freud considered that in a neurotic 'the "omnipotence" which he ascribed to his thoughts and feelings...is a frank acknowledgement of a relic of the old megalomania of infancy'. In some narcissists, the 'period of primary narcissism which subjectively did not need any objects and was entirely independent...may be retained or regressively regained..."omnipotent" behavior'. D. W. Winnicott took a more positive view of a belief in early omnipotence, seeing it as essential to the child's well-being, and "good-enough" mothering as essential to enable the child to 'cope with the immense shock of loss of omnipotence', as opposed to whatever 'prematurely forces it out of its narcissistic universe'. Some monotheists reject the view that a deity is or could be omnipotent, or take the view that, by choosing to create creatures with free will, a deity has chosen to limit divine omnipotence. In Conservative and Reform Judaism, and some movements within Protestant Christianity, including open theism, deities are said to act in the world through persuasion, and not by coercion (for these traditions this is a matter of choice, since a deity could act miraculously, and perhaps on occasion does so, while for process theism it is a matter of necessity, since creatures have inherent powers that a deity cannot, even in principle, override). Deities are manifested in the world through inspiration and the creation of possibility, not necessarily by miracles or violations of the laws of nature. The rejection of omnipotence often follows from either philosophical or scriptural considerations, discussed below. Process theology rejects unlimited omnipotence on a philosophical basis, arguing that omnipotence as classically understood would be less than perfect, and is therefore incompatible with the idea of a perfect deity. The idea is grounded in Plato's oft-overlooked statement that "being is power." From this premise, Charles Hartshorne argues further that power is only impressive when it is exercised over something that can resist it. For example, though someone might control a lump of jelly-pudding almost completely, the inability of that pudding to stage any resistance renders that person's power rather unimpressive. Power can only be said to be great if it is over something that has defenses and its own agenda. If a deity's power is to be great, it must therefore be over beings that have at least some of their own defenses and agenda. Thus, a deity that does not have absolute power must embody some of the characteristics of power and some of the characteristics of persuasion. This view is known as dipolar theism. The most popular works espousing this point are from Harold Kushner (in Judaism).
The need for a modified view of omnipotence was also articulated by Alfred North Whitehead in the early 20th century and expanded upon by the aforementioned philosopher Charles Hartshorne. Hartshorne proceeded within the context of the theological system known as process theology. In the Authorized King James Version of the Bible, as well as several other versions, Revelation 19:6 states "...the Lord God omnipotent reigneth" (the original Greek word is παντοκράτωρ, "all-mighty"). All the above-stated claims of power are based on scriptural grounds and upon empirical human perception, and this perception is limited to our senses. The power of a deity is related to its existence. There are, however, other modes of perception, such as reason, intuition, revelation, divine inspiration, religious experience, mystical states, and historical testimony. According to Hindu philosophy, the essence of God, or Brahman, can never be understood or known, since Brahman is beyond both existence and non-existence, transcending and including time, causation, and space, and thus can never be known in the same material sense as one traditionally 'understands' a given concept or object. So, presuming there is a god-like entity consciously taking actions, we cannot comprehend the limits of a deity's powers. Since the current laws of physics are only known to be valid in this universe, it is possible that the laws of physics are different in parallel universes, giving a god-like entity more power. If the number of universes is unlimited, then the power of a certain god-like entity is also unlimited, since the laws of physics may be different in other universes, accordingly making this entity omnipotent. Concerning a multiverse, however, there is a lack of empirical correlation. At the extreme, there are theories about realms beyond this multiverse (Nirvana, Chaos, Nothingness). Trying to develop a theory to explain, assign, or reject omnipotence on grounds of logic also has little merit, since being omnipotent, in a Cartesian sense, would mean the omnipotent being is above logic, a view supported by René Descartes, who advances this idea in his "Meditations on First Philosophy". This view is called universal possibilism. Even granting the assumption that a deity exists, further debate may be provoked over whether said deity is consciously taking actions. It could be concluded from an emanationism point of view that all actions and creations by a deity are simply flows of divine energy (the flowing Tao, in conjunction with qi, is often seen as a river; the Dharma of Buddhism, the law of nature discovered by Buddha, has no beginning or end). Pantheism and pandeism see the universe/multiverse itself as God (or, at least, the current state of God), while panentheism sees the universe/multiverse as 'the body of God', making 'God' everybody and everything; so if one does something, 'God' is actually doing it, and according to this view we are 'God's' means. In the Taoist religious or philosophical tradition, the Tao is in some ways equivalent to a deity or the logos. The Tao is understood to have inexhaustible power, yet that power is simply another aspect of its weakness.
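The scholastic definitions discussed earlier in this article lend themselves to a compact statement in modal logic. The following is an interpretive sketch only; the notation, and the predicate names Omnipotent and CanBringAbout, are assumptions of this sketch rather than formulas used by the sources cited.

```latex
% Interpretive sketch of the scholastic account in standard modal notation.
% "Necessarily true" = a proposition whose negation is self-contradictory:
\Box p \;\equiv\; \neg\Diamond\neg p
% Aquinas' restriction of omnipotence to the logically possible:
% G can bring about a state of affairs s iff s is possible.
\mathrm{Omnipotent}(G) \;\equiv\; \forall s\,\bigl(\Diamond s \rightarrow \mathrm{CanBringAbout}(G, s)\bigr)
```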
https://en.wikipedia.org/wiki?curid=22776
Octopus The octopus (plural octopuses) is a soft-bodied, eight-limbed mollusc of the order Octopoda. Around 300 species are recognised, and the order is grouped within the class Cephalopoda with squids, cuttlefish, and nautiloids. Like other cephalopods, the octopus is bilaterally symmetric with two eyes and a beak, with its mouth at the center point of the eight limbs ("tentacle" is used as an umbrella term for cephalopod limbs; however, within a teuthological context, "arm" is used to refer to such limbs while "tentacle" is reserved for feeding appendages not found on octopuses). The soft body can rapidly alter its shape, enabling octopuses to squeeze through small gaps. They trail their eight appendages behind them as they swim. The siphon is used both for respiration and for locomotion, by expelling a jet of water. Octopuses have a complex nervous system and excellent sight, and are among the most intelligent and behaviourally diverse of all invertebrates. Octopuses inhabit various regions of the ocean, including coral reefs, pelagic waters, and the seabed; some live in the intertidal zone and others at abyssal depths. Most species grow quickly, mature early, and are short-lived. In most species, the male uses a specially adapted arm to deliver a bundle of sperm directly into the female's mantle cavity, after which he becomes senescent and dies, while the female deposits fertilised eggs in a den and cares for them until they hatch, after which she also dies. Strategies to defend themselves against predators include the expulsion of ink, the use of camouflage and threat displays, the ability to jet quickly through the water and hide, and even deceit. All octopuses are venomous, but only the blue-ringed octopuses are known to be deadly to humans. Octopuses appear in mythology as sea monsters like the Kraken of Norway and the Akkorokamui of the Ainu, and probably the Gorgon of ancient Greece. A battle with an octopus appears in Victor Hugo's book "Toilers of the Sea", inspiring other works such as Ian Fleming's "Octopussy". Octopuses appear in Japanese erotic art, "shunga". They are eaten and considered a delicacy by humans in many parts of the world, especially the Mediterranean and the Asian seas. The scientific Latin term "octopus" was derived from Ancient Greek ὀκτώπους, a compound form of ὀκτώ ("oktō", "eight") and πούς ("pous", "foot"), itself a variant form of ὀκτάπους, a word used for example by Alexander of Tralles (c. 525–605) for the common octopus. The standard pluralised form of "octopus" in English is "octopuses"; the Ancient Greek plural ὀκτώποδες, "octopodes", has also been used historically. The alternative plural "octopi" is considered grammatically incorrect because it wrongly assumes that "octopus" is a Latin second declension "-us" noun or adjective when, in either Greek or Latin, it is a third declension noun. "Fowler's Modern English Usage" states that the only acceptable plural in English is "octopuses", that "octopi" is misconceived, and "octopodes" pedantic; the latter is nonetheless used frequently enough to be acknowledged by the descriptivist "Merriam-Webster 11th Collegiate Dictionary" and "Webster's New World College Dictionary". The "Oxford English Dictionary" lists "octopuses", "octopi", and "octopodes", in that order, reflecting frequency of use, calling "octopodes" rare and noting that "octopi" is based on a misunderstanding.
The "New Oxford American Dictionary" (3rd Edition, 2010) lists "octopuses" as the only acceptable pluralisation, and indicates that "octopodes" is still occasionally used, but that "octopi" is incorrect. The giant Pacific octopus "(Enteroctopus dofleini)" is often cited as the largest known octopus species. Adults usually weigh around 15 kg (33 lb), with an arm span of up to 4.3 m (14 ft). The largest specimen of this species to be scientifically documented was an animal with a live mass of 71 kg (156.5 lb). Much larger sizes have been claimed for the giant Pacific octopus: one specimen was recorded as 272 kg (600 lb) with an arm span of 9 m (30 ft). A carcass of the seven-arm octopus, "Haliphron atlanticus", weighed 61 kg (134 lb) and was estimated to have had a live mass of 75 kg (165 lb). The smallest species is "Octopus wolfi", which is around and weighs less than . The octopus is bilaterally symmetrical along its dorso-ventral axis; the head and foot are at one end of an elongated body and function as the anterior (front) of the animal. The head includes the mouth and brain. The foot has evolved into a set of flexible, prehensile appendages, known as "arms", that surround the mouth and are attached to each other near their base by a webbed structure. The arms can be described based on side and sequence position (such as L1, R1, L2, R2) and divided into four pairs. The two rear appendages are generally used to walk on the sea floor, while the other six are used to forage for food; hence some biologists refer to the animals as having six "arms" and two "legs". The bulbous and hollow mantle is fused to the back of the head and is known as the visceral hump; it contains most of the vital organs. The mantle cavity has muscular walls and contains the gills; it is connected to the exterior by a funnel or siphon. The mouth of an octopus, located underneath the arms, has a sharp hard beak. The skin consists of a thin outer epidermis with mucous cells and sensory cells, and a connective tissue dermis consisting largely of collagen fibres and various cells allowing colour change. Most of the body is made of soft tissue allowing it to lengthen, contract, and contort itself. The octopus can squeeze through tiny gaps; even the larger species can pass through an opening close to in diameter. Lacking skeletal support, the arms work as muscular hydrostats and contain longitudinal, transverse and circular muscles around a central axial nerve. They can extend and contract, twist to left or right, bend at any place in any direction or be held rigid. The interior surfaces of the arms are covered with circular, adhesive suckers. The suckers allow the octopus to anchor itself or to manipulate objects. Each sucker is usually circular and bowl-like and has two distinct parts: an outer shallow cavity called an infundibulum and a central hollow cavity called an acetabulum, both of which are thick muscles covered in a protective chitinous cuticle. When a sucker attaches to a surface, the orifice between the two structures is sealed. The infundibulum provides adhesion while the acetabulum remains free, and muscle contractions allow for attachment and detachment. The eyes of the octopus are large and are at the top of the head. They are similar in structure to those of a fish and are enclosed in a cartilaginous capsule fused to the cranium. The cornea is formed from a translucent epidermal layer and the slit-shaped pupil forms a hole in the iris and lies just behind. 
The lens is suspended behind the pupil and photoreceptive retinal cells cover the back of the eye. The pupil can be adjusted in size and a retinal pigment screens incident light in bright conditions. Some species differ in form from the typical octopus body shape. Basal species, the Cirrina, have stout gelatinous bodies with webbing that reaches near the tip of their arms, and two large fins above the eyes, supported by an internal shell. Fleshy papillae or cirri are found along the bottom of the arms, and the eyes are more developed. Octopuses have a closed circulatory system, in which the blood remains inside blood vessels. Octopuses have three hearts; a systemic heart that circulates blood around the body and two branchial hearts that pump it through each of the two gills. The systemic heart is inactive when the animal is swimming, and thus the octopus tires quickly and prefers to crawl. Octopus blood contains the copper-rich protein haemocyanin to transport oxygen. This makes the blood very viscous and it requires considerable pressure to pump it around the body; octopuses' blood pressures can exceed 75 mmHg (10 kPa). In cold conditions with low oxygen levels, haemocyanin transports oxygen more efficiently than haemoglobin. The haemocyanin is dissolved in the plasma instead of being carried within blood cells, and gives the blood a bluish colour. The systemic heart has muscular contractile walls and consists of a single ventricle and two atria, one for each side of the body. The blood vessels consist of arteries, capillaries and veins and are lined with a cellular endothelium which is quite unlike that of most other invertebrates. The blood circulates through the aorta and capillary system, to the vena cavae, after which the blood is pumped through the gills by the auxiliary hearts and back to the main heart. Much of the venous system is contractile, which helps circulate the blood. Respiration involves drawing water into the mantle cavity through an aperture, passing it through the gills, and expelling it through the siphon. The ingress of water is achieved by contraction of radial muscles in the mantle wall, and flapper valves shut when strong circular muscles force the water out through the siphon. Extensive connective tissue lattices support the respiratory muscles and allow them to expand the respiratory chamber. The lamella structure of the gills allows for a high oxygen uptake, up to 65% in water at 20 °C (68 °F). Water flow over the gills correlates with locomotion, and an octopus can propel its body when it expels water out of its siphon. The thin skin of the octopus absorbs additional oxygen. When resting, around 41% of an octopus's oxygen absorption is through the skin. This proportion decreases to 33% when it swims, as more water flows over the gills, even though absolute skin oxygen uptake increases. When it is resting after a meal, absorption through the skin can drop to 3% of its total oxygen uptake. The digestive system of the octopus begins with the buccal mass which consists of the mouth with its chitinous beak, the pharynx, radula and salivary glands. The radula is a spiked, muscular tongue-like organ with multiple rows of tiny teeth. Food is broken down and is forced into the oesophagus by two lateral extensions of the esophageal side walls in addition to the radula. From there it is transferred to the gastrointestinal tract, which is mostly suspended from the roof of the mantle cavity by numerous membranes.
The tract consists of a crop, where the food is stored; a stomach, where food is ground down; a caecum where the now sludgy food is sorted into fluids and particles and which plays an important role in absorption; the digestive gland, where liver cells break down and absorb the fluid and become "brown bodies"; and the intestine, where the accumulated waste is turned into faecal ropes by secretions and blown out of the funnel via the rectum. During osmoregulation, fluid is added to the pericardia of the branchial hearts. The octopus has two nephridia (equivalent to vertebrate kidneys) which are associated with the branchial hearts; these and their associated ducts connect the pericardial cavities with the mantle cavity. Before reaching the branchial heart, each branch of the vena cava expands to form renal appendages which are in direct contact with the thin-walled nephridium. The urine is first formed in the pericardial cavity, and is modified by excretion, chiefly of ammonia, and selective absorption from the renal appendages, as it is passed along the associated duct and through the nephridiopore into the mantle cavity. The octopus (along with cuttlefish) has the highest brain-to-body mass ratio of all invertebrates; it is also greater than that of many vertebrates. It has a highly complex nervous system, only part of which is localised in its brain, which is contained in a cartilaginous capsule. Two-thirds of an octopus's neurons are found in the nerve cords of its arms, which show a variety of complex reflex actions that persist even when they have no input from the brain. Unlike vertebrates, octopuses do not organise their complex motor skills in the brain via an internal somatotopic map of the body, instead using a nonsomatotopic system unique to large-brained invertebrates. Like other cephalopods, octopuses can distinguish the polarisation of light. Colour vision appears to vary from species to species, for example being present in "O. aegina" but absent in "O. vulgaris". Researchers believe that opsins in the skin can sense different wavelengths of light and help the creatures choose a coloration that camouflages them, in addition to light input from the eyes. Other researchers hypothesise that cephalopod eyes in species which only have a single photoreceptor protein may use chromatic aberration to turn monochromatic vision into colour vision, though this sacrifices image quality. This would explain pupils shaped like the letter U, the letter W, or a dumbbell, as well as explaining the need for colourful mating displays. Attached to the brain are two special organs called statocysts (sac-like structures containing a mineralised mass and sensitive hairs) that allow the octopus to sense the orientation of its body. They provide information on the position of the body relative to gravity and can detect angular acceleration. An autonomic response keeps the octopus's eyes oriented so that the pupil is always horizontal. Octopuses may also use the statocyst to hear sound. The common octopus can hear sounds between 400 Hz and 1000 Hz, and hears best at 600 Hz. Octopuses also have an excellent sense of touch. The octopus's suction cups are equipped with chemoreceptors so the octopus can taste what it touches. Octopus arms do not become tangled or stuck to each other because the sensors recognise octopus skin and prevent self-attachment.
The arms contain tension sensors so the octopus knows whether its arms are stretched out, but this is not sufficient for the brain to determine the position of the octopus's body or arms. As a result, the octopus does not possess stereognosis; that is, it does not form a mental image of the overall shape of the object it is handling. It can detect local texture variations, but cannot integrate the information into a larger picture. The neurological autonomy of the arms means the octopus has great difficulty learning about the detailed effects of its motions. It has a poor proprioceptive sense, and it knows what exact motions were made only by observing the arms visually. The ink sac of an octopus is located under the digestive gland. A gland attached to the sac produces the ink, and the sac stores it. The sac is close enough to the funnel for the octopus to shoot out the ink with a water jet. Before it leaves the funnel, the ink passes through glands which mix it with mucus, creating a thick, dark blob which allows the animal to escape from a predator. The main pigment in the ink is melanin, which gives it its black colour. Cirrate octopuses lack the ink sac. Octopuses are gonochoric and have a single, posteriorly-located gonad which is associated with the coelom. The testis in males and the ovary in females bulges into the gonocoel and the gametes are released here. The gonocoel is connected by the gonoduct to the mantle cavity, which it enters at the gonopore. An optic gland creates hormones that cause the octopus to mature and age and stimulate gamete production. The gland may be triggered by environmental conditions such as temperature, light and nutrition, which thus control the timing of reproduction and lifespan. When octopuses reproduce, the male uses a specialised arm called a hectocotylus to transfer spermatophores (packets of sperm) from the terminal organ of the reproductive tract (the cephalopod "penis") into the female's mantle cavity. The hectocotylus in benthic octopuses is usually the third right arm, which has a spoon-shaped depression and modified suckers near the tip. In most species, fertilisation occurs in the mantle cavity. The reproduction of octopuses has been studied in only a few species. One such species is the giant Pacific octopus, in which courtship is accompanied, especially in the male, by changes in skin texture and colour. The male may cling to the top or side of the female or position himself beside her. There is some speculation that he may first use his hectocotylus to remove any spermatophore or sperm already present in the female. He picks up a spermatophore from his spermatophoric sac with the hectocotylus, inserts it into the female's mantle cavity, and deposits it in the correct location for the species, which in the giant Pacific octopus is the opening of the oviduct. Two spermatophores are transferred in this way; these are about one metre (yard) long, and the empty ends may protrude from the female's mantle. A complex hydraulic mechanism releases the sperm from the spermatophore, and it is stored internally by the female. About forty days after mating, the female giant Pacific octopus attaches strings of small fertilised eggs (10,000 to 70,000 in total) to rocks in a crevice or under an overhang. Here she guards and cares for them for about five months (160 days) until they hatch. In colder waters, such as those off of Alaska, it may take as much as 10 months for the eggs to completely develop. 
The female aerates the eggs and keeps them clean; if left untended, many eggs will not hatch. She does not feed during this time and dies soon afterwards. Males become senescent and die a few weeks after mating. The eggs have large yolks; cleavage (division) is superficial and a germinal disc develops at the pole. During gastrulation, the margins of this grow down and surround the yolk, forming a yolk sac, which eventually forms part of the gut. The dorsal side of the disc grows upwards and forms the embryo, with a shell gland on its dorsal surface, gills, mantle and eyes. The arms and funnel develop as part of the foot on the ventral side of the disc. The arms later migrate upwards, coming to form a ring around the funnel and mouth. The yolk is gradually absorbed as the embryo develops. Most young octopuses hatch as paralarvae and are planktonic for weeks to months, depending on the species and water temperature. They feed on copepods, arthropod larvae and other zooplankton, eventually settling on the ocean floor and developing directly into adults, without the distinct metamorphosis present in other groups of mollusc larvae. Octopus species that produce larger eggs – including the southern blue-ringed, Caribbean reef, California two-spot, "Eledone moschata" and deep sea octopuses – do not have a paralarval stage, but hatch as benthic animals similar to the adults. In the argonaut (paper nautilus), the female secretes a fine, fluted, papery shell in which the eggs are deposited and in which she also resides while floating in mid-ocean. In this she broods the young, and it also serves as a buoyancy aid allowing her to adjust her depth. The male argonaut is minute by comparison and has no shell. Octopuses have a relatively short life expectancy; some species live for as little as six months. The giant Pacific octopus, one of the two largest species of octopus, may live for as much as five years. Octopus lifespan is limited by reproduction: males can live for only a few months after mating, and females die shortly after their eggs hatch. The larger Pacific striped octopus is an exception, as it can reproduce multiple times over a life of around two years. Octopus reproductive organs mature due to the hormonal influence of the optic gland, but the same influence inactivates the digestive glands, typically causing the octopus to die from starvation. Experimental removal of both optic glands after spawning was found to result in the cessation of broodiness, the resumption of feeding, increased growth, and greatly extended lifespans. Octopuses live in every ocean, and different species have adapted to different marine habitats. As juveniles, common octopuses inhabit shallow tide pools. The Hawaiian day octopus ("Octopus cyanea") lives on coral reefs; argonauts drift in pelagic waters. "Abdopus aculeatus" mostly lives in near-shore seagrass beds. Some species are adapted to the cold, ocean depths. The spoon-armed octopus ("Bathypolypus arcticus") is found at depths of 1,000 m (3,300 ft), and "Vulcanoctopus hydrothermalis" lives near hydrothermal vents at 2,000 m (6,600 ft). The cirrate species are often free-swimming and live in deep-water habitats. Although several species are known to live at bathyal and abyssal depths, there is only a single indisputable record of an octopus in the hadal zone: a species of "Grimpoteuthis" (dumbo octopus) photographed at 6,957 m (22,825 ft). No species are known to live in fresh water.
Most species are solitary when not mating, though a few are known to occur in high densities and with frequent interactions, signaling, mate defending and eviction of individuals from dens. This is likely the result of abundant food supplies combined with limited den sites. The larger Pacific striped octopus, however, is social, living in groups of up to 40 individuals that share dens. Octopuses hide in dens, which are typically crevices in rocky outcrops or other hard structures, though some species burrow into sand or mud. Octopuses are not territorial but generally remain in a home range; they may leave the area in search of food. They can use navigation skills to return to a den without having to retrace their outward route. They are not known to be migratory. Octopuses bring captured prey back to the den where they can eat it safely. Sometimes the octopus catches more prey than it can eat, and the den is often surrounded by a midden of dead and uneaten food items. Other creatures, such as fish, crabs, molluscs and echinoderms, often share the den with the octopus, either because they have arrived as scavengers, or because they have survived capture. Nearly all octopuses are predatory; bottom-dwelling octopuses eat mainly crustaceans, polychaete worms, and other molluscs such as whelks and clams; open-ocean octopuses eat mainly prawns, fish and other cephalopods. Major items in the diet of the giant Pacific octopus include bivalve molluscs such as the cockle "Clinocardium nuttallii", clams and scallops, and crustaceans such as crabs and spider crabs. Prey that it is likely to reject includes moon snails, because they are too large, and limpets, rock scallops, chitons and abalone, because they are too securely fixed to the rock. A benthic (bottom-dwelling) octopus typically moves among the rocks and feels through the crevices. The creature may make a jet-propelled pounce on prey and pull it towards the mouth with its arms, the suckers restraining it. Small prey may be completely trapped by the webbed structure. Octopuses usually inject crustaceans like crabs with a paralysing saliva, then dismember them with their beaks. Octopuses feed on shelled molluscs either by forcing the valves apart, or by drilling a hole in the shell to inject a nerve toxin. It used to be thought that the hole was drilled by the radula, but it has now been shown that minute teeth at the tip of the salivary papilla are involved, and an enzyme in the toxic saliva is used to dissolve the calcium carbonate of the shell. It takes about three hours for "O. vulgaris" to create a hole. Once the shell is penetrated, the prey dies almost instantaneously, its muscles relax, and the soft tissues are easy for the octopus to remove. Crabs may also be treated in this way; tough-shelled species are more likely to be drilled, and soft-shelled crabs are torn apart. Some species have other modes of feeding. "Grimpoteuthis" has a reduced or non-existent radula and swallows prey whole. In the deep-sea genus "Stauroteuthis", some of the muscle cells that control the suckers in most species have been replaced with photophores which are believed to fool prey by directing them towards the mouth, making them one of the few bioluminescent octopuses. Octopuses mainly move about by relatively slow crawling, with some swimming in a head-first position. Jet propulsion, or backwards swimming, is their fastest means of locomotion, followed by swimming and crawling. When in no hurry, they usually crawl on either solid or soft surfaces.
Several arms are extended forwards, some of the suckers adhere to the substrate and the animal hauls itself forwards with its powerful arm muscles, while other arms may push rather than pull. As progress is made, other arms move ahead to repeat these actions and the original suckers detach. During crawling, the heart rate nearly doubles, and the animal requires ten or fifteen minutes to recover from relatively minor exercise. Most octopuses swim by expelling a jet of water from the mantle through the siphon into the sea. The physical principle behind this is that the force required to accelerate the water through the orifice produces a reaction that propels the octopus in the opposite direction; a rough quantitative form of this relation is sketched at the end of this passage. The direction of travel depends on the orientation of the siphon. When swimming, the head is at the front and the siphon is pointed backwards, but when jetting, the visceral hump leads, the siphon points towards the head and the arms trail behind, with the animal presenting a fusiform appearance. In an alternative method of swimming, some species flatten themselves dorso-ventrally, and swim with the arms held out sideways, and this may provide lift and be faster than normal swimming. Jetting is used to escape from danger, but is physiologically inefficient, requiring a mantle pressure so high as to stop the heart from beating, resulting in a progressive oxygen deficit. Cirrate octopuses cannot produce jet propulsion and rely on their fins for swimming. They have neutral buoyancy and drift through the water with the fins extended. They can also contract their arms and surrounding web to make sudden moves known as "take-offs". Another form of locomotion is "pumping", which involves symmetrical contractions of muscles in their webs producing peristaltic waves. This moves the body slowly. In 2005, "Abdopus aculeatus" and the veined octopus ("Amphioctopus marginatus") were found to walk on two arms, while at the same time mimicking plant matter. This form of locomotion allows these octopuses to move quickly away from a potential predator without being recognised. A study of this behaviour led to the suggestion that the two rearmost appendages may be more accurately termed "legs" rather than "arms". Some species of octopus can crawl out of the water briefly, which they may do between tide pools while hunting crustaceans or gastropods or to escape predators. "Stilt walking" is used by the veined octopus when carrying stacked coconut shells. The octopus carries the shells underneath it with two arms, and progresses with an ungainly gait supported by its remaining arms held rigid. Octopuses are highly intelligent; the extent of their intelligence and learning capability is not well defined. Maze and problem-solving experiments have shown evidence of a memory system that can store both short- and long-term memory. It is not known precisely what contribution learning makes to adult octopus behaviour. Young octopuses learn nothing from their parents, as adults provide no parental care beyond tending to their eggs until the young octopuses hatch. In laboratory experiments, octopuses can be readily trained to distinguish between different shapes and patterns. They have been reported to practise observational learning, although the validity of these findings is contested. Octopuses have also been observed in what has been described as play: repeatedly releasing bottles or toys into a circular current in their aquariums and then catching them.
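The reaction principle behind jetting, referred to above, can be made concrete with the standard momentum-flux relation from fluid mechanics. This is a textbook approximation offered for illustration; the symbols and the quasi-steady form are assumptions of this sketch, not figures from the article.

```latex
% Quasi-steady thrust from expelling water through the siphon:
%   \rho = seawater density, A = siphon cross-sectional area,
%   v_e = speed of the expelled jet, \dot{m} = \rho A v_e (mass flow rate).
F \;=\; \dot{m}\, v_e \;=\; \rho A v_e^{2}
% By Newton's third law the octopus is pushed opposite to the jet,
% which is why the direction of travel depends on siphon orientation.
```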
Octopuses often break out of their aquariums and sometimes into others in search of food. They have even boarded fishing boats and opened holds to eat crabs. The veined octopus collects discarded coconut shells, then uses them to build a shelter, an example of tool use. Octopuses use camouflage when hunting and to avoid predators. To do this they use specialised skin cells which change the appearance of the skin by adjusting its colour, opacity, or reflectivity. Chromatophores contain yellow, orange, red, brown, or black pigments; most species have three of these colours, while some have two or four. Other colour-changing cells are reflective iridophores and white leucophores. This colour-changing ability is also used to communicate with or warn other octopuses. Octopuses can create distracting patterns with waves of dark coloration across the body, a display known as the "passing cloud". Muscles in the skin change the texture of the mantle to achieve greater camouflage. In some species, the mantle can take on the spiky appearance of algae; in others, skin anatomy is limited to relatively uniform shades of one colour with limited skin texture. Octopuses that are diurnal and live in shallow water have evolved more complex skin than their nocturnal and deep-sea counterparts. A "moving rock" trick involves the octopus mimicking a rock and then inching across the open space with a speed matching the movement in the surrounding water, allowing it to move in plain sight of a predator. Aside from humans, octopuses may be preyed on by fishes, seabirds, sea otters, pinnipeds, cetaceans, and other cephalopods. Octopuses typically hide or disguise themselves by camouflage and mimicry; some have conspicuous warning coloration (aposematism) or deimatic behaviour. An octopus may spend 40% of its time hidden away in its den. When the octopus is approached, it may extend an arm to investigate. 66% of "Enteroctopus dofleini" in one study had scars, with 50% having amputated arms. The blue rings of the highly venomous blue-ringed octopus are hidden in muscular skin folds which contract when the animal is threatened, exposing the iridescent warning. The Atlantic white-spotted octopus ("Callistoctopus macropus") turns bright brownish red with oval white spots all over in a high contrast display. Displays are often reinforced by stretching out the animal's arms, fins or web to make it look as big and threatening as possible. Once they have been seen by a predator, they commonly try to escape but can also use distraction with an ink cloud ejected from the ink sac. The ink is thought to reduce the efficiency of olfactory organs, which would aid evasion from predators that employ smell for hunting, such as sharks. Ink clouds of some species might act as pseudomorphs, or decoys that the predator attacks instead. When under attack, some octopuses can perform arm autotomy, in a manner similar to the way skinks and other lizards detach their tails. The crawling arm may distract would-be predators. Such severed arms remain sensitive to stimuli and move away from unpleasant sensations. Octopuses can replace lost limbs. Some octopuses, such as the mimic octopus, can combine their highly flexible bodies with their colour-changing ability to mimic other, more dangerous animals, such as lionfish, sea snakes, and eels. 
The diseases and parasites that affect octopuses have been little studied, but cephalopods are known to be the intermediate or final hosts of various parasitic cestodes, nematodes and copepods; 150 species of protistan and metazoan parasites have been recognised. The Dicyemidae are a family of tiny worms that are found in the renal appendages of many species; it is unclear whether they are parasitic or are endosymbionts. Coccidians in the genus "Aggregata" living in the gut cause severe disease to the host. Octopuses have an innate immune system, and the haemocytes respond to infection by phagocytosis, encapsulation, infiltration or cytotoxic activities to destroy or isolate the pathogens. The haemocytes play an important role in the recognition and elimination of foreign bodies and wound repair. Captive animals have been found to be more susceptible to pathogens than wild ones. A gram-negative bacterium, "Vibrio lentus", has been found to cause skin lesions, exposure of muscle and death of octopuses in extreme cases. The scientific name Octopoda was first coined and given as the order of octopuses in 1818 by English biologist William Elford Leach, who classified them as Octopoida the previous year. The Octopoda consists of around 300 known species and was historically divided into two suborders, the Incirrina and the Cirrina. However, more recent evidence suggests that Cirrina are merely the most basal species and are not a unique clade. The incirrate octopuses (the majority of species) lack the cirri and paired swimming fins of the cirrates. In addition, the internal shell of incirrates is either present as a pair of stylets or absent altogether. Cephalopods have existed for 500 million years and octopus ancestors were in the Carboniferous seas 300 million years ago. The oldest known octopus fossil is "Pohlsepia", which lived 296 million years ago. Researchers have identified impressions of eight arms, two eyes, and possibly an ink sac. Octopuses are mostly soft tissue, and so fossils are relatively rare. Octopuses, squids and cuttlefish belong to the clade Coleoidea. They are known as "soft-bodied" cephalopods, lacking the external shell of most molluscs and other cephalopods like the nautiloids and the extinct Ammonoidea. Octopuses have eight limbs like other coleoids but lack the extra specialised feeding appendages known as tentacles which are longer and thinner with suckers only at their club-like ends. The vampire squid ("Vampyroteuthis") also lacks tentacles but has sensory filaments. The cladograms are based on Sanchez et al., 2018, who created a molecular phylogeny based on mitochondrial and nuclear DNA marker sequences. The molecular analysis of the octopods shows that the suborder Cirrina (Cirromorphida) and the superfamily Argonautoidea are paraphyletic and are broken up; these names are shown in quotation marks and italics on the cladogram. Octopuses and other coleoid cephalopods are capable of greater RNA editing (which involves changes to the nucleic acid sequence of the primary transcript of RNA molecules) than any other organisms. Editing is concentrated in the nervous system and affects proteins involved in neural excitability and neuronal morphology. More than 60% of RNA transcripts for coleoid brains are recoded by editing, compared to less than 1% for a human or fruit fly. Coleoids rely mostly on ADAR enzymes for RNA editing, which requires large double-stranded RNA structures to flank the editing sites.
Both the structures and editing sites are conserved in the coleoid genome and the mutation rates for the sites are severely hampered. Hence, greater transcriptome plasticity has come at the cost of slower genome evolution. High levels of RNA editing do not appear to be present in more basal cephalopods or other molluscs. Ancient seafaring people were aware of the octopus, as evidenced by certain artworks and designs. For example, a stone carving found in the archaeological recovery from Bronze Age Minoan Crete at Knossos (1900–1100 BC) has a depiction of a fisherman carrying an octopus. The terrifyingly powerful Gorgon of Greek mythology has been thought to have been inspired by the octopus or squid, the octopus itself representing the severed head of Medusa, the beak as the protruding tongue and fangs, and its tentacles as the snakes. The Kraken are legendary sea monsters of giant proportions said to dwell off the coasts of Norway and Greenland, usually portrayed in art as a giant octopus attacking ships. Linnaeus included it in the first edition of his 1735 "Systema Naturae". One translation of the Hawaiian creation myth the Kumulipo suggests that the octopus is the lone survivor of a previous age. The Akkorokamui is a gigantic octopus-like monster from Ainu folklore. A battle with an octopus plays a significant role in Victor Hugo's book "Travailleurs de la mer" ("Toilers of the Sea"), relating to his time in exile on Guernsey. Ian Fleming's 1966 short story collection "Octopussy and The Living Daylights", and the 1983 "James Bond" film were partly inspired by Hugo's book. Japanese erotic art, "shunga", includes ukiyo-e woodblock prints such as Katsushika Hokusai's 1814 print "Tako to ama" (The Dream of the Fisherman's Wife), in which an ama diver is sexually intertwined with a large and a small octopus. The print is a forerunner of tentacle erotica. The biologist P. Z. Myers noted in his science blog, "Pharyngula", that octopuses appear in "extraordinary" graphic illustrations involving women, tentacles, and bare breasts. Since it has numerous arms emanating from a common centre, the octopus is often used as a symbol for a powerful and manipulative organisation. Octopuses generally avoid humans, but incidents have been verified. For example, a Pacific octopus, said to be nearly perfectly camouflaged, "lunged" at a diver and "wrangled" over his camera before it let go. Another diver recorded the encounter on video. All species are venomous, but only blue-ringed octopuses have venom that is lethal to humans. Bites are reported each year across the animals' range from Australia to the eastern Indo-Pacific Ocean. They bite only when provoked or accidentally stepped upon; bites are small and usually painless. The venom appears to be able to penetrate the skin without a puncture, given prolonged contact. It contains tetrodotoxin, which causes paralysis by blocking the transmission of nerve impulses to the muscles. This causes death by respiratory failure leading to cerebral anoxia. No antidote is known, but if breathing can be kept going artificially, patients recover within 24 hours. Bites have been recorded from captive octopuses of other species; they leave swellings which disappear in a day or two. Octopus fisheries exist around the world with total catches varying between 245,320 and 322,999 metric tons from 1986 to 1995. The world catch peaked in 2007 at 380,000 tons, and fell by a tenth by 2012. 
Methods to capture octopuses include pots, traps, trawls, snares, drift fishing, spearing, hooking and hand collection. Octopus is eaten in many cultures and is a common food on the Mediterranean and Asian coasts. The arms and sometimes other body parts are prepared in various ways, often varying by species or geography. Live octopuses are eaten in several countries around the world, including the US. Animal welfare groups have objected to this practice on the basis that octopuses can experience pain. Octopuses have a food conversion efficiency greater than that of chickens, making octopus aquaculture a possibility. In classical Greece, Aristotle (384–322 BC) commented on the colour-changing abilities of the octopus, both for camouflage and for signalling, in his "Historia animalium": "The octopus ... seeks its prey by so changing its colour as to render it like the colour of the stones adjacent to it; it does so also when alarmed." Aristotle noted that the octopus had a hectocotyl arm and suggested it might be used in sexual reproduction. This claim was widely disbelieved until the 19th century. It was described in 1829 by the French zoologist Georges Cuvier, who supposed it to be a parasitic worm, naming it as a new species, "Hectocotylus octopodis". Other zoologists thought it a spermatophore; the German zoologist Heinrich Müller believed it was "designed" to detach during copulation. In 1856 the Danish zoologist Japetus Steenstrup demonstrated that it is used to transfer sperm, and only rarely detaches. Octopuses offer many possibilities in biological research, including their ability to regenerate limbs, change the colour of their skin, behave intelligently with a distributed nervous system, and make use of 168 kinds of protocadherins (humans have 58), the proteins that guide the connections neurons make with each other. The California two-spot octopus has had its genome sequenced, allowing exploration of its molecular adaptations. Having independently evolved mammal-like intelligence, octopuses have been compared to hypothetical intelligent extraterrestrials. Their problem-solving skills, along with their mobility and lack of rigid structure enable them to escape from supposedly secure tanks in laboratories and public aquariums. Due to their intelligence, octopuses are listed in some countries as experimental animals on which surgery may not be performed without anesthesia, a protection usually extended only to vertebrates. In the UK from 1993 to 2012, the common octopus ("Octopus vulgaris") was the only invertebrate protected under the Animals (Scientific Procedures) Act 1986. In 2012, this legislation was extended to include all cephalopods in accordance with a general EU directive. Some robotics research is exploring biomimicry of octopus features. Octopus arms can move and sense largely autonomously without intervention from the animal's central nervous system. In 2015 a team in Italy built soft-bodied robots able to crawl and swim, requiring only minimal computation. In 2017 a German company made an arm with a soft pneumatically controlled silicone gripper fitted with two rows of suckers. It is able to grasp objects such as a metal tube, a magazine, or a ball, and to fill a glass by pouring water from a bottle.
https://en.wikipedia.org/wiki?curid=22780
Original Chip Set The Original Chip Set (OCS) is a chipset used in the earliest Commodore Amiga computers and defined the Amiga's graphics and sound capabilities. It was succeeded by the slightly improved Enhanced Chip Set (ECS) and the greatly improved Advanced Graphics Architecture (AGA). The original chipset appeared in Amiga models built between 1985 and 1990: the Amiga 1000, Amiga 2000, Amiga CDTV, and Amiga 500. The chipset which gave the Amiga its unique graphics features consists of three main "custom" chips: "Agnus", "Denise", and "Paula". Both the original chipset and the enhanced chipset were manufactured using NMOS logic technology by Commodore's chip manufacturing subsidiary, MOS Technology. According to Jay Miner, the OCS chipset was fabricated in a 5 µm manufacturing process, while the AGA Lisa chip was implemented in a 1.5 µm process. All three custom chips were originally packaged in 48-pin DIPs; later versions of Agnus, known as Fat Agnus, were packaged in an 84-pin PLCC. Agnus is the central chip in the design. It controls all access to chip RAM from both the central 68000 processor and the other custom chips, using a complicated priority system. Agnus includes sub-components known as the "blitter" (fast transfer of data in memory without the intervention of the processor) and the "Copper" (video-synchronized co-processor). The original Agnus can address 512 KB of chip RAM. Later revisions, dubbed 'Fat Agnus', added 512 KB of pseudo-fast RAM, which for ECS was changed to 1 MB (sometimes called 'Fatter Agnus') and subsequently 2 MB of chip RAM. Denise is the main video processor. Without using overscan, the Amiga's graphics display is 320 or 640 pixels wide by 200 (NTSC) or 256 (PAL) pixels tall. Denise also supports interlacing, which doubles the vertical resolution, at the cost of intrusive flickering on typical monitors of that era. Planar bitmap graphics are used, splitting the individual bits per pixel across separate areas of memory, called bitplanes. In normal operation, Denise allows between one and five bitplanes, giving two to 32 unique colors. These colors are selected from a palette of 4096 colors (four bits per RGB component). A sixth bitplane is available for two special video modes: Halfbrite mode and Hold-And-Modify (HAM) mode. Denise also supports eight sprites, single pixel scrolling, and a "dual-playfield" mode. Denise also handles mouse and digital joystick input. Paula is primarily the audio chip, with four independent hardware-mixed 8-bit PCM sound channels, each of which supports 65 volume levels (no sound to maximum volume) and waveform output rates from roughly 20 samples per second to almost 29,000 samples per second. Paula also handles interrupts and various I/O functions including the floppy disk drive, the serial port, and analog joysticks. There are many similarities, both in overall functionality and in the division of functionality into the three component chips, between the OCS chipset and the much earlier and simpler chipset of the Atari 8-bit family of home computers, consisting of the ANTIC, GTIA and POKEY chips; both chipsets were conceptually designed by Jay Miner, which explains the similarity. The Agnus chip is in overall control of the entire chipset's operation. All operations are synchronised to the position of the video beam. This includes access to the built-in RAM, known as chip RAM because the chipset has access to it. Both the central 68000 processor and the other members of the chipset have to arbitrate for access to chip RAM via "Agnus". 
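As an aside, the planar bitmap arrangement described above can be made concrete with a short sketch (illustrative C only, not Amiga system code; the helper function and buffer layout are assumptions introduced for the example):

    #include <stdint.h>

    /* Return the palette index (0 .. 2^depth - 1) of pixel x on one scanline.
       planes[p] points to the scanline's data in bitplane p, stored as
       16-bit words with the leftmost pixel in the most significant bit. */
    uint8_t planar_pixel(const uint16_t *const planes[], int depth, int x)
    {
        int word = x >> 4;            /* 16 pixels per 16-bit word      */
        int bit  = 15 - (x & 15);     /* leftmost pixel = MSB           */
        uint8_t index = 0;
        for (int p = 0; p < depth; p++)
            index |= ((planes[p][word] >> bit) & 1) << p;  /* plane p -> bit p */
        return index;                 /* selects one of the color registers */
    }

With five planes the resulting index selects one of 32 color registers; using fewer planes saves both memory and DMA bandwidth, which is one reason the number of bitplanes is freely configurable.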
In computing architecture terms, this chip RAM arbitration is Direct Memory Access (DMA), with Agnus as the DMA controller (DMAC). Agnus has a complex, priority-based memory access policy that attempts to best coordinate requests for memory access among competing resources. For example, bitplane data fetches are prioritized over blitter transfers, as the immediate display of frame buffer data is considered more important than the processing of memory by the blitter. Agnus also attempts to order accesses in such a way as to overlap CPU bus cycles with DMA cycles. The original 68000 processor in Amigas tended to access memory only on every second available memory cycle, so Agnus operates a system where "odd" memory access cycles are allocated first, as needed, to time-critical custom chip DMA, while any remaining cycles are available to the CPU; thus the CPU does not generally get locked out of memory access and does not appear to slow down. However, non-time-critical custom chip access, such as "blitter" transfers, can use up any spare odd or even cycles and, if the "BLITHOG" (blitter hog) flag is set, Agnus can lock out the even cycles from the CPU in deference to the blitter. Agnus's timings are measured in "color clocks" of 280 ns. This is equivalent to two low resolution (140 ns) pixels or four high resolution (70 ns) pixels. As with Denise, these timings were designed for display on household TVs, and can be synchronized to an external clock source. The "blitter" is a sub-component of Agnus. "Blit" is shorthand for "block image transfer" or bit blit. The blitter is a highly parallel memory transfer and logic operation unit. It has three modes of operation: copying blocks of memory, filling blocks (e.g. polygon filling), and line drawing. The blitter allows rapid copying of video memory, meaning that the CPU can be freed for other tasks. The blitter was primarily used for drawing and redrawing graphics images on the screen, called "bobs", short for "blitter objects". The blitter's block copying mode takes zero to three data sources in memory, called A, B and C, performs a programmable boolean function on the data sources, and writes the result to a destination area, D. Any of these four areas can overlap. The blitter runs either from the start of the block to the end, known as "ascending" mode, or in reverse, "descending" mode. Blocks are "rectangular"; they have a "width" in multiples of 16 bits, a height measured in "lines", and a "stride" distance to move from the end of one line to the next. This allows the blitter to operate on any video resolution up to 1,024×1,024 pixels. The copy automatically performs a per-pixel logical operation. These operations are described generically using minterms. This is most commonly used to do direct copies (D = A), or to apply a pixel mask around blitted objects (D = (C AND B) OR A). The copy can also barrel shift each line by 0 to 15 pixels. This allows the blitter to draw at pixel offsets that are not exactly multiples of 16. These functions allow the Amiga to move GUI windows around the screen rapidly, as each is represented in graphical memory space as a rectangular block of memory which may be shifted to any required screen memory location at will. The blitter's line mode draws single-pixel-thick lines using Bresenham's line algorithm. It can also apply a 16-bit repeating pattern to the line. The line mode can also be used to draw rotated bobs: each line of bob data is used as the line pattern while the line mode draws the tilted bob line by line. 
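The minterm mechanism described above can be modeled in a few lines of C (a software sketch of the per-word logic, not a hardware driver; the selector ordering, with A as the most significant of the three input bits, is the conventional one):

    #include <stdint.h>

    /* Model of the blitter's per-word logic: bit n of `minterm` gives the
       output bit for the input combination whose three bits are A, B, C. */
    uint16_t blit_word(uint8_t minterm, uint16_t a, uint16_t b, uint16_t c)
    {
        uint16_t d = 0;
        for (int bit = 0; bit < 16; bit++) {
            int idx = (((a >> bit) & 1) << 2)   /* source A */
                    | (((b >> bit) & 1) << 1)   /* source B */
                    |  ((c >> bit) & 1);        /* source C */
            d |= (uint16_t)((minterm >> idx) & 1) << bit;
        }
        return d;
    }

Under this encoding a minterm of 0xF0 is a plain copy (D = A) and 0xFC is D = A OR B; the hardware of course evaluates all 16 bits of a word at once rather than looping.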
The blitter's filling mode is used to fill per-line horizontal spans. On each span, it reads each pixel in turn from right to left. Whenever it reads a set pixel, it toggles filling mode on or off. When filling mode is on, it sets every pixel until filling mode is turned off or the line ends. Together, these modes allow the blitter to draw individual flat-shaded polygons. Later Amigas tended to use a combination of a faster CPU and the blitter for many operations. The "Copper" is another sub-component of Agnus; the name is short for "co-processor". The Copper is a programmable finite-state machine that executes a programmed instruction stream, synchronized with the video hardware. When it is turned on, the Copper has three states: reading an instruction, executing it, or waiting for a specific video beam position. The Copper runs a program called the "Copper list" in parallel with the main CPU. The Copper runs in sync with the video beam, and it can be used to perform various operations which require video synchronization. Most commonly it is used to control video output, but it can write to most of the chipset registers and thus can be used to initiate blits, set audio registers, or interrupt the CPU. The Copper list has three kinds of instructions (MOVE, WAIT, and SKIP), each one a pair of 16-bit words, four bytes in total: MOVE writes a value to a chipset register, WAIT stalls until the video beam reaches a specified position, and SKIP skips the following instruction if the beam has already passed a specified position. The length of the Copper list program is limited by execution time. The Copper restarts executing the Copper list at the start of each new video frame. There is no explicit "end" instruction; instead, a WAIT instruction is used to wait for a location which is never reached. Under normal circumstances, the Amiga generates its own video timings, but Agnus also supports synchronising the system to an external signal so as to achieve genlocking with external video hardware. There is also a 1-bit output on the video connector that indicates whether the Amiga is outputting background color or not, permitting easy overlaying of Amiga video onto external video. This made the Amiga particularly attractive as a character generator for titling videos and broadcast work, as it avoided the use and expense of the AB roll and chromakey units that would be required without the genlock support. The support of overscan, interlacing and genlocking, and the fact that the display timing was very close to broadcast standards (NTSC or PAL), made the Amiga exceptionally well suited to video work, and indeed it was used in many studios for digitizing video data (sometimes called frame-grabbing), subtitling and interactive video news. Denise is programmed to fetch planar video data from one to five bitplanes and translate that into a color lookup. The number of bitplanes is arbitrary; thus if 32 colors are not needed, 2, 4, 8 or 16 can be used instead. The number of bitplanes (and the resolution) can be changed on the fly, usually by the Copper. This allows for very economical use of RAM, and a balancing of CPU processing speed against graphical sophistication when executing from chip RAM (as modes beyond 4bpp in lowres, or 2bpp in hires, use extra DMA channels that can slow or temporarily halt the CPU in addition to the usual non-conflicting channels). There can also be a sixth bitplane, which can be used in three special graphics modes. In Extra-HalfBrite (EHB), if a pixel is set on the sixth bitplane, the brightness of the regular 32-color pixel is halved. Early versions of the Amiga 1000 sold in the United States did not have the Extra-HalfBrite mode. 
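Returning to the Copper list format described above, a minimal example may make it concrete (a sketch: the word encodings and the COLOR00 background-color register offset of $180 follow the conventional custom-register map):

    #include <stdint.h>

    /* A minimal Copper list as raw word pairs: black background at the
       top of the frame, switching to red when the beam reaches line $64. */
    uint16_t copperlist[] = {
        0x0180, 0x0000,   /* MOVE #$0000 -> COLOR00 (black)               */
        0x6407, 0xFFFE,   /* WAIT until vertical beam position $64 (100)  */
        0x0180, 0x0F00,   /* MOVE #$0F00 -> COLOR00 (red)                 */
        0xFFFF, 0xFFFE,   /* WAIT for an unreachable position: list "end" */
    };

Because the Copper restarts the list on every frame, a raster split like this costs no CPU time at all. The remaining special uses of the sixth bitplane, Hold-and-Modify and dual playfield, are described next.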
In Hold-and-Modify (HAM) mode, each 6-bit pixel is interpreted as two control bits and four data bits. The four possible combinations of control bits are "set", "modify red", "modify green" and "modify blue". With "set", the four data bits act like a regular 16-color palette lookup. With one of the "modify" operations, the red, green or blue component of the previous pixel is changed to the data value, and the other two components are held from the previous pixel. This allows all 4096 colors to be on screen at once and is an example of lossy image compression in hardware. In dual-playfield mode, instead of acting as a single screen, two "playfields" of eight colors each (three bitplanes each) are drawn on top of each other. They are independently scrollable, and the background color of the top playfield "shines through" to the underlying playfield. There are two horizontal graphics resolutions, "lowres" with 140 ns pixels and "hires" with 70 ns pixels, with a default of 320 or 640 horizontal pixels wide without using overscan. As the pixel output is regulated by the main system clock, which is based directly on the NTSC colorburst clock, these sizes very nearly fill the width of a standard television, with only a thin "underscan" border between the graphics and the screen border when compared to many other contemporary home computers, for an appearance closer to a games console but with finer detail. On top of this, Denise supports reasonably extensive overscan; technically, modes with enough data for up to 400 or 800 pixels (+25%) may be specified, although this is only actually useful for scrolling and special effects that involve partial display of large graphics, as a separate hardware limit is met at 368 (or 736) pixels, the maximum that will fit between the end of one blanking period and the start of the next. It is unlikely that even this many pixels will be visible on any display other than a dedicated monitor that allows adjustment of horizontal scan width, as much of the image will, by design, disappear seamlessly behind the screen bezel (or, on LCDs, be cropped off at the edge of the panel). Because of the highly regular structure of the Amiga's timing in relation to scanlines, and the allocation of DMA resources to various uses besides normal "playfield" graphics, increased horizontal resolution is also a tradeoff between the number of pixels and how many hardware sprites are available, as increasing the DMA slots dedicated to playfield video ends up stealing some of them (from 1 to 7 of the total 8) from the sprite engine. Vertical resolution, without overscan, is 200 pixels for a 60 Hz NTSC Amiga or 256 for a 50 Hz PAL Amiga. This can be doubled using an interlaced display, and, as with horizontal resolution, increased using overscan, to a maximum of 241 (or 483) for NTSC, and 283 (567) for PAL (interlaced modes gain one extra line, as the maximum is determined by how many lines are taken from the available total by blanking and sync, and the total scanlines in non-interlaced modes are half the original, broadcast-spec odd-numbered interlaced counts, rounded down). 
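To make the Hold-and-Modify scheme described earlier concrete, here is a sketch of how a HAM6 pixel stream decodes (illustrative C only; color words are modeled as 12-bit 0xRGB values, the function name is invented, and the control-bit assignment shown is the usual one):

    #include <stdint.h>

    /* Decode one HAM6 pixel: 2 control bits select the operation, 4 data
       bits carry either a palette index or a replacement R, G or B nibble. */
    uint16_t ham6_decode(uint8_t pixel, uint16_t prev, const uint16_t palette[16])
    {
        uint8_t ctrl = (pixel >> 4) & 3;  /* two control bits              */
        uint8_t data = pixel & 0x0F;      /* four data bits                */
        switch (ctrl) {
        case 0:  return palette[data];                    /* set           */
        case 1:  return (prev & 0xFF0) | data;            /* modify blue   */
        case 2:  return (prev & 0x0FF) | (data << 8);     /* modify red    */
        default: return (prev & 0xF0F) | (data << 4);     /* modify green  */
        }
    }

Since each pixel can alter only one color component, sharp hue transitions can produce visible "fringing", the characteristic artifact of HAM images.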
Denise can composite up to eight 16-pixel-wide sprites per scan line (in automatic mode) on top of, underneath, or between playfields, and detect collisions between sprites and the playfields or between sprites. These sprites have three visible colors and one transparent color. Optionally, adjacent pairs of sprites can be "attached" to make a single 15-color sprite. Using Copper or CPU register manipulations, each sprite 'channel' can be reused multiple times in a single frame to increase the total sprites per frame. Sprite "position" registers may also be changed during a scanline, increasing the total number of sprites on a single scanline. However, the sprite "data", or shape, is only fetched a single time per scanline and cannot change. The first Amiga game to utilize the sprite re-position registers during a scanline was Hybris, released in 1988. Finally, Denise is responsible for handling mouse/joystick X/Y inputs. The Paula chip, from MOS Technology, is the interrupt controller, but also includes logic for audio playback, floppy disk drive control, serial port input/output, and the signals for mouse/joystick buttons two and three. The logic remained functionally identical across all Amiga models from Commodore. Paula has four DMA-driven 8-bit PCM sample sound channels. Two sound channels are mixed into the left audio output, and the other two are mixed into the right output, producing stereo audio output. The only supported hardware sample format is signed linear 8-bit two's complement. Each sound channel has an independent frequency and a volume control with 65 levels (0 to 64). Internally, the audio hardware is implemented by four state machines, each having eight different states. Additionally, the hardware allows one channel in a channel pair to modulate the other channel's period or amplitude. This is rarely used on the Amiga, since both frequency and volume are controllable in better ways, but it could be used to achieve different kinds of tremolo and vibrato, and even rudimentary FM synthesis effects. Audio may be output using two methods. Most often, DMA-driven audio is used. As explained in the discussion of Agnus, memory access is prioritized and one DMA slot per scan line is available for each of the four sound channels. On a regular NTSC or PAL display, DMA audio playback is limited to a maximum output rate of 28867 values per channel (PAL: 28837) per second, totaling 57734 (PAL: 57674) values per second on each stereo output. This rate can be increased with the ECS and AGA chipsets by using a video mode with a higher horizontal scan rate. Alternatively, Paula may signal the CPU to load a new sample into any of the four audio output buffers by generating an interrupt when a new sample is needed. This allows for output rates that exceed 57 kHz per channel and increases the number of possible voices (simultaneous sounds) through software mixing. The Amiga contains an analog low-pass filter (reconstruction filter) which is external to Paula. The filter is a 12 dB/oct Butterworth low-pass filter at approximately 3.3 kHz. The filter can only be applied globally to all four channels. In models after the Amiga 1000 (excluding the very first revision of the Amiga 500), the brightness of the power LED is used to indicate the status of the filter. The filter is active when the LED is at normal brightness, and deactivated when dimmed (on early Amiga 500 models the LED went completely off). Models released before the Amiga 1200 also have a static "tone knob" type low-pass filter that is enabled regardless of the optional "LED filter". This filter is a 6 dB/oct low-pass filter with a cutoff frequency at 4.5 or 5 kHz. A software technique was later developed which can play back 14-bit audio by combining two channels set at different volumes. This results in two 14-bit channels instead of four 8-bit channels. This is achieved by playing the high byte of a 16-bit sample at maximum volume, and the low byte at minimum volume (both ranges overlap, so the low byte needs to be shifted right two bits). The bit shift operation requires a small amount of CPU or blitter overhead, whereas conventional 8-bit playback is almost entirely DMA driven. This technique was incorporated into the retargetable audio subsystem AHI, allowing compatible applications to use this mode transparently. 
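A sketch of that splitting step follows (illustrative C; the function name is invented and sign/rounding subtleties are ignored):

    #include <stdint.h>

    /* Split a 16-bit sample for Paula's 14-bit trick: the high byte is
       played at maximum volume (64), and the low byte, shifted right two
       bits, is played at minimum non-zero volume (1) on a paired channel. */
    void split14(int16_t sample, int8_t *high, uint8_t *low)
    {
        *high = (int8_t)(sample >> 8);          /* top 8 bits   */
        *low  = (uint8_t)(sample & 0xFF) >> 2;  /* next 6 bits  */
    }

Because a channel at volume 1 contributes 1/64 of the amplitude of a channel at volume 64, the second channel fills in the low-order bits, yielding roughly 14 significant bits of output.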
The floppy controller is unusually flexible. It can read and write raw bit sequences directly from and to the disk via DMA or programmed I/O at 500 kbit/s (double density) or 250 kbit/s (single density or GCR). MFM and GCR were the two most commonly used encodings, though in theory any run-length limited code could be used. The controller also provides a number of convenient features, such as sync-on-word (in MFM coding, $4489 is usually used as the sync word). MFM encoding/decoding is usually done with the blitter — one pass for decode, three passes for encode. Normally the entire track is read or written in one shot, rather than sector-by-sector; this made it possible to get rid of most of the inter-sector gaps that most floppy disk formats need in order to safely prevent the "bleeding" of a written sector into the previously existing header of the next sector due to speed variations of the drive. If all sectors and their headers are always written in one go, such bleeding is only an issue at the end of the track (which still must not bleed back into its beginning), so that only one gap per track is needed. This way, for the native Amiga disk format, the raw storage capacity of 3.5-inch DD disks was increased from the typical 720 KB to 880 KB, although the less-than-ideal file system of the earlier Amiga models reduced this again to approximately 830 KB of actual payload data. In addition to the native 880 KB 3.5-inch disk format, the controller can handle many foreign formats, such as the 720 KB MFM format used by IBM PC compatibles. The Amiga 3000 introduced a special dual-speed floppy drive that also allowed the use of high density disks with double capacity without any change to Paula's floppy controller. The serial port is rudimentary, using programmed input/output only and lacking a FIFO buffer. However, virtually any bit rate can be selected, including all standard rates, the MIDI rate, and extremely high custom rates.
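As a footnote to the disk encoding discussion above, the MFM rule itself is simple enough to sketch in a few lines of C (an illustrative model; the real Amiga performs this with blitter passes as described):

    #include <stdint.h>

    /* MFM-encode one byte: each data bit D is preceded by a clock bit C,
       where C = 1 only if both the previous and the current data bit are 0.
       This keeps flux transitions within MFM's run-length limits. */
    uint16_t mfm_encode_byte(uint8_t data, int prev_bit)
    {
        uint16_t out = 0;
        for (int i = 7; i >= 0; i--) {
            int d = (data >> i) & 1;
            int c = (prev_bit == 0 && d == 0);  /* clock bit rule */
            out = (uint16_t)((out << 2) | (c << 1) | d);
            prev_bit = d;
        }
        return out;
    }

The sync word $4489 is an MFM pattern with a deliberately omitted clock bit, so it cannot occur in ordinary encoded data, which is what makes the sync-on-word feature reliable.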
https://en.wikipedia.org/wiki?curid=22784
Optic neuritis Optic neuritis is a demyelinating inflammation of the optic nerve. It is also known as optic papillitis (when the head of the optic nerve is involved), neuroretinitis (when there is combined involvement of the optic disc and the surrounding retina in the macular area), and retrobulbar neuritis (when the posterior part of the nerve is involved). It is most often associated with multiple sclerosis, and it may lead to complete or partial loss of vision in one or both eyes; other causes are discussed below. Partial, transient vision loss (lasting less than one hour) can be an indication of early onset multiple sclerosis. Other possible diagnoses include diabetes mellitus, low phosphorus levels, or hyperkalaemia. Major symptoms are sudden loss of vision (partial or complete), sudden blurred or "foggy" vision, and pain on movement of the affected eye. Early symptoms that require investigation include symptoms of multiple sclerosis (twitching, lack of coordination, slurred speech, frequent episodes of partial vision loss or blurred vision); episodes of "disturbed/blackened" rather than merely blurred vision indicate a moderate stage and require immediate medical attention to prevent further loss of vision. Other early symptoms are reduced night vision, photophobia and red eyes. Many patients with optic neuritis may lose some of their color vision in the affected eye (especially red), with colors appearing subtly washed out compared to the other eye. Patients may also experience difficulties judging movement in depth, which can be particularly troublesome during driving or sport (the Pulfrich effect). Likewise, transient worsening of vision with increased body temperature (Uhthoff's phenomenon) and glare disability are frequent complaints. However, several case studies in children have demonstrated the absence of pain in more than half of cases (approximately 60%) in their pediatric study populations, with the most common symptom reported simply as "blurriness." Other notable differences between adult and pediatric presentations include the fact that optic neuritis is more often unilateral in adults, while children predominantly present with bilateral involvement. On medical examination the head of the optic nerve can easily be visualized by a slit lamp with a high positive lens or by using direct ophthalmoscopy; however, frequently there is no abnormal appearance of the nerve head in optic neuritis (in cases of retrobulbar optic neuritis), though it may be swollen in some patients (anterior papillitis or more extensive optic neuritis). In many cases, only one eye is affected and patients may not be aware of the loss of color vision until they are asked to close or cover the healthy eye. The optic nerve comprises axons that emerge from the retina of the eye and carry visual information to the primary visual nuclei, most of which is relayed to the occipital cortex of the brain to be processed into vision. Inflammation of the optic nerve causes loss of vision, usually because of the swelling and destruction of the myelin sheath covering the optic nerve. The most common cause is multiple sclerosis or ischemic optic neuropathy due to thrombosis or embolism of the vessel that supplies the optic nerve. Up to 50% of patients with MS will develop an episode of optic neuritis, and 20–30% of the time optic neuritis is the presenting sign of MS. 
The presence of demyelinating white matter lesions on brain MRI at the time of presentation of optic neuritis is the strongest predictor for developing clinically definite MS. Almost half of the patients with optic neuritis have white matter lesions consistent with multiple sclerosis. Some other common causes of optic neuritis include infection (e.g. a tooth abscess in the upper jaw, syphilis, Lyme disease, herpes zoster), autoimmune disorders (e.g. lupus, neurosarcoidosis, neuromyelitis optica), methanol poisoning, vitamin B12 deficiency, diabetes, and injury to the eye. Less common causes are: papilledema, brain tumor or abscess in the occipital region, cerebral trauma or hemorrhage, meningitis, arachnoidal adhesions, sinus thrombosis, liver dysfunction, or late stage kidney disease. Recurrence of an idiopathic optic neuritis is considered a distinct clinical condition; when it shows demyelination, it has been found to be associated with anti-MOG and AQP4-negative neuromyelitis optica. When an inflammatory recurrent optic neuritis is not demyelinating, it is called "chronic relapsing inflammatory optic neuropathy" (CRION). When it is anti-MOG related, it is demyelinating, and it is considered to be among the anti-MOG associated inflammatory demyelinating diseases. Some reports point to the possibility of establishing a difference between these conditions via optical coherence tomography (OCT). In most MS-associated optic neuritis, visual function spontaneously improves over 2–3 months, and there is evidence that corticosteroid treatment does not affect the long term outcome. However, for optic neuritis that is not MS-associated (or atypical optic neuritis), the evidence is less clear, and therefore the threshold for treatment with intravenous corticosteroids is lower. Intravenous corticosteroids also reduce the risk of developing MS in the following two years in patients with MRI lesions, but this effect disappears by the third year of follow-up. Paradoxically, oral administration of corticosteroids in this situation may lead to more recurrent attacks than in non-treated patients (though oral steroids are generally prescribed after the intravenous course, to wean the patient off the medication). This effect of corticosteroids seems to be limited to optic neuritis and has not been observed in other diseases treated with corticosteroids. A Cochrane systematic review studied the effect of corticosteroids for treating people with acute optic neuritis. Specific corticosteroids studied included intravenous and oral methylprednisolone, and oral prednisone. The authors concluded that current evidence does not show a benefit of either intravenous or oral corticosteroids for the rate of recovery of vision (in terms of visual acuity, contrast sensitivity, or visual fields). There are a number of reasons why this might be the case. Optic neuritis typically affects young adults ranging from 18–45 years of age, with a mean age of 30–35 years. There is a strong female predominance. The annual incidence is approximately 5/100,000, with a prevalence estimated to be 115/100,000. In Charles Dickens' "Bleak House", the main character, Esther Summerson, suffers from a transient episode of visual loss, the symptoms of which are also seen in people who have optic neuritis. The legal historian Sir William Searle Holdsworth suggested that the events in "Bleak House" took place in 1827. In "Season of Miracles", a season-five episode of "Dr. Quinn, Medicine Woman", Reverend Timothy Johnson is struck blind by optic neuritis on Christmas Day, 1872. 
He remains blind for the duration of the series.
https://en.wikipedia.org/wiki?curid=22786
List of organizations with .int domain names This is a list of organizations with INT domain names, in alphabetical order of the second-level domain name. The list is not comprehensive. As of June 2012, the INT domain consists of 166 subdomain delegations. These are generally international organizations established by treaty. Some, however (such as the YMCA), do not meet the current restrictions and were grandfathered in from earlier acceptance policies. * Bodies which are not international treaty organizations or (arguably) related to Internet infrastructure, and so do not meet current .INT requirements. ** Domains connected with Internet infrastructure; these are not international intergovernmental organizations. *** Domains connected with treaties but which may not have the full status of intergovernmental organization (sites for treaties or treaty secretariats).
https://en.wikipedia.org/wiki?curid=22787
Organization of American States The Organization of American States (OAS, or OEA by its initials in the organization's other official languages) is a continental organization founded on 30 April 1948 for the purposes of solidarity and cooperation among its member states within the Western Hemisphere. During the Cold War, the United States hoped the OAS would be a bulwark against the spread of communism. Since the 1990s, the organization has focused on election monitoring. Headquartered in the United States' capital, Washington, D.C., the OAS has as its members the 35 independent states of the Americas. As of 26 May 2015, the Secretary General of the OAS is Luis Almagro. The notion of an international union in the New World was first put forward during the liberation of the Americas by José de San Martín and Simón Bolívar who, at the 1826 Congress of Panama (Panama was then still part of Colombia), proposed creating a league of American republics, with a common military, a mutual defense pact, and a supranational parliamentary assembly. This meeting was attended by representatives of Gran Colombia (comprising the modern-day countries of Colombia, Ecuador, Panama and Venezuela), Argentina, Peru, Bolivia, the United Provinces of Central America, and Mexico, but the grandly titled "Treaty of Union, League, and Perpetual Confederation" was ultimately ratified only by Gran Colombia. Bolívar's dream soon foundered amid civil war in Gran Colombia, the disintegration of Central America, and the emergence of national rather than New World outlooks in the newly independent American republics. Bolívar's dream of American unity was meant to unify Hispanic American nations against external powers. The pursuit of regional solidarity and cooperation again came to the forefront in 1889–1890, at the First International Conference of American States. Gathered together in Washington, D.C., 18 nations resolved to found the International Union of American Republics, served by a permanent secretariat called the Commercial Bureau of the American Republics (renamed the International Commercial Bureau at the Second International Conference in 1901–1902). These two bodies, in existence as of 14 April 1890, represent the point of inception to which the OAS and its General Secretariat trace their origins. At the Fourth International Conference of American States (Buenos Aires, 1910), the name of the organization was changed to the Union of American Republics and the Bureau became the Pan American Union. The Pan American Union Building was constructed in 1910, on Constitution Avenue, Northwest, Washington, D.C. In the mid-1930s, U.S. President Franklin Delano Roosevelt organized an inter-American conference in Buenos Aires. One of the items at the conference was a "League of Nations of the Americas", an idea proposed by Colombia, Guatemala, and the Dominican Republic. At the subsequent Inter-American Conference for the Maintenance of Peace, 21 nations pledged to remain neutral in the event of a conflict between any two members. The experience of World War II convinced hemispheric governments that unilateral action could not ensure the territorial integrity of the American nations in the event of external aggression. To meet the challenges of global conflict in the postwar world and to contain conflicts within the hemisphere, they adopted a system of collective security, the Inter-American Treaty of Reciprocal Assistance (Rio Treaty), signed in 1947 in Rio de Janeiro. 
The Ninth International Conference of American States, held in Bogotá between March and May 1948 and led by United States Secretary of State George Marshall, produced a pledge by members to fight communism in the Western Hemisphere. This was the event that saw the birth of the OAS as it stands today, with the signature by 21 American countries of the Charter of the Organization of American States on 30 April 1948 (in effect since December 1951). The meeting also adopted the American Declaration of the Rights and Duties of Man, the world's first general human rights instrument. The transition from the Pan American Union to the OAS would have been smooth had it not been for the assassination of Colombian leader Jorge Eliécer Gaitán. The Director General of the former, Alberto Lleras Camargo, became the Organization's first Secretary General. The current Secretary General is former Uruguayan minister of foreign affairs Luis Almagro. Significant milestones have marked the history of the OAS since the signing of the Charter. In the words of Article 1 of the Charter, the goal of the member nations in creating the OAS was "to achieve an order of peace and justice, to promote their solidarity, to strengthen their collaboration, and to defend their sovereignty, their territorial integrity, and their independence." Article 2 then defines eight essential purposes. Over the course of the 1990s, with the end of the Cold War, the return to democracy in Latin America, and the thrust toward globalization, the OAS made major efforts to reinvent itself to fit the new context, and its stated priorities came to include democracy, human rights, security, and development. The Organization of American States is composed of a General Secretariat, the Permanent Council, the Inter-American Council for Integral Development, and a number of committees and commissions. The General Secretariat of the Organization of American States consists of six secretariats. The OAS has two funds, one for the General Secretariat, and one for specific programs and initiatives. The General Assembly asks for contributions from each member country based on its capacity to pay. In 2018 the General Secretariat's budget was $85 million, of which the US contributed $50 million. In 2017 the US contributed $17 million to the fund for specific programmes, almost a third of the total contributions for that year. The General Assembly is the supreme decision-making body of the OAS. It convenes once every year in a regular session. In special circumstances, and with the approval of two-thirds of the member states, the Permanent Council can convene special sessions. The Organization's member states take turns hosting the General Assembly on a rotating basis. The states are represented at its sessions by their chosen delegates: generally, their ministers of foreign affairs, or their appointed deputies. Each state has one vote, and most matters—except for those for which the Charter or the General Assembly's own rules of procedure specifically require a two-thirds majority—are settled by a simple majority vote. 
The General Assembly's powers include setting the OAS's general course and policies by means of resolutions and declarations; approving its budget and determining the contributions payable by the member states; approving the reports and previous year's actions of the OAS's specialized agencies; and electing members to serve on those agencies. All 35 independent nations of the Americas are members of the OAS. Upon its foundation in 1948, there were 21 members, most of them in Latin America. The later expansion of the OAS brought in Canada and the newly independent nations of the Caribbean. Although Canada was a founding member of the League of Nations in 1919 and has joined international organizations since that date, it chose not to join the OAS when it was first formed, despite its close relations with the United States. Canada became a Permanent Observer in the OAS on 2 February 1972. Canada signed the Charter of the Organization of American States on 13 November 1989, and this decision was ratified on 8 January 1990. In 2004–2005, Canada was the second largest contributor to the OAS, with an annual assessed contribution representing 12.36 percent of the OAS Regular Budget (US$9.2 million) and an additional C$9 million in voluntary contributions to specific projects. Shortly after joining as a full member, Canada was instrumental in the creation of the Unit for the Promotion of Democracy, which provides support for the strengthening and consolidation of democratic processes and institutions in OAS member states. During the 6th Conference of Foreign Ministers of the Organization of American States (OAS) in Costa Rica, from 16 to 20 August 1960, a conviction against the State of the Dominican Republic was agreed to unanimously. The penalty was imposed after the foreign ministers verified the claim that the Rafael Trujillo regime had sponsored an attack against Rómulo Betancourt, at that time the constitutional president of Venezuela. The meeting was attended by foreign ministers from 21 American nations, including Cuba, which at that time had not yet been expelled from the inter-American system. All countries, including the United States and Haiti, broke off diplomatic relations with the Dominican Republic. Additionally, an economic blockade was applied that affected the exports of sugar, at that time the pillar of the Dominican economy. It was the first application of the Inter-American Treaty of Reciprocal Assistance, which had been adopted at the OAS on July 29, 1960. The current government of Cuba was excluded from participation in the Organization under a decision adopted by the Eighth Meeting of Consultation in Punta del Este, Uruguay, on 31 January 1962. The vote was passed by 14 in favor, with one against (Cuba) and six abstentions (Argentina, Bolivia, Brazil, Chile, Ecuador, and Mexico). The operative part of the resolution excluded the present government of Cuba from participation in the inter-American system. This meant that the Cuban nation was still technically a member state, but that the current government was denied the right of representation and attendance at meetings and of participation in activities. The OAS's position was that although Cuba's participation was suspended, its obligations under the Charter, the American Declaration of the Rights and Duties of Man, etc. still held: for instance, the Inter-American Commission on Human Rights continued to publish reports on Cuba's human rights situation and to hear individual cases involving Cuban nationals. 
However, this stance was occasionally questioned by individual member states. Cuba's position was stated in an official note sent to the Organization "merely as a courtesy" by Minister of Foreign Affairs Dr. Raúl Roa on 4 November 1964: "Cuba was arbitrarily excluded ... The Organization of American States has no juridical, factual, or moral jurisdiction, nor competence, over a state which it has illegally deprived of its rights." The reincorporation of Cuba as an active member regularly arose as a topic within the inter-American system (for instance, it was intimated by the outgoing ambassador of Mexico in 1998), but most observers did not see it as a serious possibility while the present government remained in power. Since 1960, the Cuban administration had repeatedly characterized the OAS as the "Ministry of Colonies" of the United States of America. On 6 May 2005, President Fidel Castro reiterated that the island nation would not "be part of a disgraceful institution that has only humiliated the honor of Latin American nations." After Fidel Castro's retirement and the ascent of his brother Raúl to power, this official position was reasserted. Venezuelan President Hugo Chávez promised to veto any final declaration of the 2009 Summit of the Americas due to Cuba's exclusion. On 17 April 2009, after a "trading of warm words" between the administrations of U.S. President Barack Obama and Cuban leader Raúl Castro, OAS Secretary General José Miguel Insulza said he would ask the 2009 General Assembly to annul the 1962 resolution excluding Cuba. On 3 June 2009, foreign ministers, assembled in San Pedro Sula, Honduras, for the OAS's 39th General Assembly, passed a vote to lift Cuba's suspension from the OAS. The United States had been pressuring the OAS for weeks to condition Cuba's readmission to the group on democratic principles and commitment to human rights. Ecuador's Foreign Minister Fander Falconí said there would be no such conditions. "This is a new proposal, it has no conditions—of any kind," Falconí said. "That suspension was made in the Cold War, in the language of the Cold War. What we have done here is fix a historic error." The suspension was lifted at the end of the General Assembly, but, to be readmitted to the Organization, Cuba would need to comply with all the treaties signed by the member states, including the Inter-American Democratic Charter of 2001. A statement issued by the Cuban government on 8 June 2009 stated that while Cuba welcomed the Assembly's gesture, in light of the Organization's historical record "Cuba will not return to the OAS." Following the expulsion of its President Manuel Zelaya, Honduras' membership of the Organization was suspended unanimously at midnight on 5 July 2009. The "de facto" government had already announced it was leaving the OAS hours earlier; this was not, however, taken into account by the OAS, which did not recognize that government as legitimate. An extraordinary meeting had been conducted by the OAS in Washington, D.C., with Zelaya in attendance. The suspension of Honduras was approved unanimously with 33 votes (Honduras did not vote). This was the first suspension carried out by the OAS since that of Cuba in 1962. After Zelaya's return to Honduras in 2011, the country was re-admitted to the Organization on 1 June 2011 with 32 votes in favor and 1 (Ecuador) against. Venezuela expressed some reservations. 
On 28 April 2017 Venezuela notified the OAS of its denunciation of the Charter of the OAS, which under Article 143 would lead to the withdrawal of Venezuela from the OAS, effective two years from the date of notification. During this period, the country did not plan on participating in the OAS. During the 2019 Venezuelan presidential crisis, the President of the National Assembly of Venezuela, Juan Guaidó, who was recognized by the National Assembly as the acting president, sent a letter to the OAS Secretary General annulling the previous denunciation of the OAS Charter and expressing his desire for Venezuela to remain a member of the OAS. The National Assembly designated a special envoy as representative to the OAS, the lawyer Gustavo Tarre Briceño, whom the OAS voted to recognize as Venezuela's delegate in April. As of 31 January 2014, there were 69 permanent observer countries, including the four countries with territory or territories in the Americas (Denmark, France, the Netherlands, and the United Kingdom), as well as the European Union. The Organization's official languages are Spanish, Portuguese, French, and English. The Charter, the basic instrument governing the OAS, makes no reference to the use of official languages. These references are to be found in the Rules of Procedure governing the various OAS bodies. Article 51 of the Rules of Procedure of the General Assembly, the supreme body of the OAS, which meets once a year, states that English, French, Portuguese, and Spanish are the four official languages. Article 28 stipulates that a Style Committee shall be set up with representatives of the four official languages to review the General Assembly resolutions and declarations. Article 53 states that proposals shall be presented in the four official languages. The Rules of Procedure and Statutes of other bodies, such as the Inter-American Council for Integral Development (CIDI), the Permanent Executive Committee of the Inter-American Council for Integral Development (CEPCIDI), the Inter-American Commission of Women (CIM), the Inter-American Drug Abuse Control Commission (CICAD), the Inter-American Commission on Human Rights (IACHR) and the Inter-American Juridical Committee (CJI), technical bodies of the OAS, also mention the four official languages in which their meetings are to be conducted. Policy is therefore dictated through these instruments that require use of the four official languages at meetings. Although a number of other languages have official status in one or more member states of the OAS (Dutch in Suriname; Haitian Creole alongside French in Haiti; Quechua and Aymara in Peru, Ecuador and Bolivia; Guaraní in Paraguay), they are not official languages of the Organization.
https://en.wikipedia.org/wiki?curid=22788
World Organisation for Animal Health The World Organisation for Animal Health, formerly the Office International des Épizooties (OIE), is an intergovernmental organization coordinating, supporting and promoting animal disease control. The main objective of the OIE is to control epizootic diseases and thus to prevent their spread. Other objectives are: transparency, scientific information, international solidarity, sanitary safety, the promotion of veterinary services, food safety and animal welfare. It is recognized as a reference organisation by the World Trade Organization (WTO) and in 2018 had a total of 182 member states. Its newest member state is Saint Lucia. The OIE maintains permanent relations with 45 other international and regional organisations and has regional and sub-regional offices on every continent. The OIE does not depend on the UN system; its autonomy is both institutional and financial, and its activities are governed by its own constitutional texts. Since its first General Session held in Paris, the Office has carried out its work under the authority of a Committee consisting of delegates of the contracting governments. The need to fight animal diseases at a global level led to the creation of the Office International des Épizooties through the international agreement signed on January 25, 1924. In May 2003 the Office became the World Organisation for Animal Health but kept its historical acronym OIE. In December 2016, 430 delegates to the 4th Global Conference on Animal Welfare approved a range of measures aimed at improving animal welfare. An OIE strategy document which stemmed from this conference was to be presented for adoption at the OIE World Assembly in May 2017. In January 2017, the outgoing Obama administration designated the OIE as an organization entitled to the benefits of the International Organizations Immunities Act. The OIE's headquarters are located in Paris, in the 17th arrondissement. It was in 1939 that the OIE moved to the aristocratic district of Parc Monceau, after having occupied premises since 1927 near the Champ de Mars and the Eiffel Tower that had been provided by the French Higher Public Health Council. In May 1938, the OIE Members gave Dr Emmanuel Leclainche, founder and first Director General of the OIE, full powers to buy a townhouse in Paris, using the reserve fund. Dr Leclainche chose the mansion from four properties selected by a Commission comprising the President of the OIE, H.C.L.E. Berger (Netherlands), the Vice-President, Carlo Bisanti (Italy), and the accountant, Gotlieb Flückiger (Switzerland). On 22 February 1939, the OIE, represented by E. Leclainche, bought the mansion from the Marquise de Montebello at a cost of 700,000 francs. The 13th General Session of the OIE was held from May 30 to June 5, 1939 at 12 rue de Prony after rebuilding work had been completed. Due to the Second World War, the following General Session did not take place until 1946, from 2 to 5 October. Following their entry into Paris in June 1940, the German occupying forces temporarily closed and sealed the OIE headquarters. The efforts of the President, Gotlieb Flückiger, elected in 1939, resulted in its re-opening. 12 rue de Prony was built in 1879, in the Neo-Renaissance style, by the celebrated architect Jean-Louis Pascal for the Austrian baron Jonas Königswater, a former banker and railway owner. A succession of major works to renovate and modernise the headquarters was undertaken by the Directors General elected after E. Leclainche: Gaston Ramon, René Vittoz, Louis Blajan, Jean Blancou and Bernard Vallat. 
Due to the rapid development of the organisation (a tripling of the staff and the budget since 2001), additional premises have been rented at 14 rue de Prony since 2004. On 16 March 2009, the OIE purchased a large part of the building at 14 rue de Prony, adjoining its headquarters. Timely dissemination of information is crucial to containing outbreaks. The WAHID Interface provides access to all data held within the OIE's World Animal Health Information System (WAHIS). It replaces and significantly extends the former web interface, named Handistatus II. A comprehensive range of animal health information is available through the interface.
https://en.wikipedia.org/wiki?curid=22789
Ozzie Smith Osborne Earl "Ozzie" Smith (born December 26, 1954) is an American former baseball shortstop who played in Major League Baseball (MLB) for the San Diego Padres and St. Louis Cardinals from 1978 to 1996. Nicknamed "The Wizard" for his defensive brilliance, Smith set major league records for career assists (8,375) and double plays (1,590) by a shortstop (the latter since broken by Omar Vizquel), as well as the National League (NL) record with 2,511 career games at the position; Smith won the NL Gold Glove Award for play at shortstop for 13 consecutive seasons (1980–1992). A 15-time All-Star, he accumulated 2,460 hits and 580 stolen bases during his career, and he won the NL Silver Slugger Award as the best-hitting shortstop in 1987. He was elected to the Baseball Hall of Fame in his first year of eligibility in 2002. He was also elected to the St. Louis Cardinals Hall of Fame in the inaugural class of 2014. Smith was born in Mobile, Alabama, but his family moved to Watts, Los Angeles, when he was six years old. While participating in childhood athletic activities, Smith developed quick reflexes; he went on to play baseball in high school and college, at Los Angeles' Locke High School and Cal Poly-San Luis Obispo, respectively. Drafted as an amateur player by the Padres, Smith made his major league debut in 1978. He quickly established himself as an outstanding fielder, and he later became known for performing backflips on special occasions while taking his position at the beginning of a game. Smith won his first Gold Glove Award in 1980 and made his first All-Star Game appearance in 1981. When conflict with the Padres' ownership developed, he was traded to the Cardinals for shortstop Garry Templeton in 1982. Upon joining the Cardinals, Smith helped the team win the 1982 World Series. Three years later, his game-winning home run during Game 5 of the 1985 National League Championship Series prompted broadcaster Jack Buck's "Go crazy, folks!" play-by-play call. Despite a rotator cuff injury during the 1985 season, Smith posted career highs in multiple offensive categories in 1987. Smith continued to earn Gold Gloves and All-Star appearances on an annual basis until 1993. During the 1995 season, Smith had shoulder surgery and was out nearly three months. After tension with his new manager Tony La Russa developed in 1996, Smith retired at season's end, and his uniform number (No. 1) was subsequently retired by the Cardinals. Smith served as host of the television show "This Week in Baseball" from 1997 to 1998. Smith was born in Mobile, Alabama, the second of Clovi and Marvella Smith's six children (five boys and one girl). While the family lived in Mobile, his father worked as a sandblaster at Brookley Air Force Base. When Smith was six, his family moved to the Watts section of Los Angeles. His father became a delivery truck driver for Safeway stores, while his mother became an aide at a nursing home. His mother was an influential part of his life who stressed the importance of education and encouraged him to pursue his dreams. Smith played a variety of sports in his youth, but considered baseball to be his favorite. He developed quick reflexes through various athletic and leisure activities, such as bouncing a ball off the concrete steps in front of his house, moving in closer to reduce reaction time with each throw. 
When not at the local YMCA or playing sports, Smith sometimes went with friends to the neighborhood lumberyard, springboarding off inner tubes and doing flips into sawdust piles (a precursor to his famous backflips). In 1965, at age 10, he endured the Watts Riots with his family, recalling that, "We had to sleep on the floor because of all the sniping and looting going on." While Smith was attending junior high school, his parents divorced. Continuing to pursue his interest in baseball, he would ride the bus for nearly an hour to reach Dodger Stadium, cheering for the Los Angeles Dodgers at about 25 games a year. Upon becoming a student at Locke High School, Smith played on the basketball and baseball teams. Smith was a teammate of future National Basketball Association player Marques Johnson on the basketball team, and a teammate of future fellow Hall of Famer Eddie Murray on the baseball side. After high school Smith attended Cal Poly San Luis Obispo in 1974 on a partial academic scholarship, and managed to walk on to the baseball team. In addition to his academic education, he learned to switch-hit from Cal Poly coach Berdy Harr. When Cal Poly's starting shortstop broke his leg midway through the 1974 season, Smith took over the starting role. Later named an All-American athlete, he established school records in career at bats (754) and career stolen bases (110) before graduating in 1977. Smith was playing semi-professional baseball in Clarinda, Iowa, when in June 1976 he was selected in the seventh round of the amateur entry draft by the Detroit Tigers. The parties could not agree on a contract; Smith wanted a $10,000 signing bonus, while the Tigers offered $8,500. Smith returned to Cal Poly for his senior year, then in the 1977 draft was selected in the fourth round by the San Diego Padres, ultimately agreeing to a contract that included a $5,000 signing bonus. Smith spent his first year of professional baseball during 1977 with the Class A Walla Walla Padres of the Northwest League. Smith began 1978 as a non-roster invitee to the San Diego Padres' spring training camp in Yuma, Arizona. Smith credited Padres manager Alvin Dark for giving him confidence by telling reporters the shortstop job was Smith's until he proved he couldn't handle it. Even though Dark was fired in the middle of training camp, Smith made his Major League Baseball (MLB) debut on April 7, 1978. It did not take long for Smith to earn recognition in the major leagues, making what some consider his greatest fielding play only 10 games into his rookie season. The Padres played host to the Atlanta Braves on April 20, 1978, and with two out in the top of the fourth inning, Atlanta's Jeff Burroughs hit a ground ball up the middle. Smith described the play by saying, "He hit a ball back up the middle that everybody thought was going into center field. I instinctively broke to my left and dove behind second. As I was in the air, the ball took a bad hop and caromed behind me, but I was able to catch it with my bare hand. I hit the ground, bounced back up, and threw Burroughs out at first." During a road trip to Houston later in the season, Smith met a part-time usherette at the Astrodome named Denise while making his way to the team bus outside the stadium. The couple developed a relationship that was sometimes long-distance in nature, and eventually decided to marry. It was also during the 1978 season that Smith introduced a signature move. 
Padres promotion director Andy Strasberg knew Smith could perform backflips, but that he only did them during practice, before fans entered the stadium. Strasberg asked Smith to do a backflip for fans during Fan Appreciation Day on October 1, the Padres' last home game of the season. After conferring with veteran teammate Gene Tenace, Smith went ahead with the backflip, and it proved to be wildly popular. Smith finished the 1978 season with a .258 batting average and .970 fielding percentage, placing second in National League Rookie of the Year voting to Bob Horner. After working with a hitting instructor during the offseason, Smith failed to record a base hit in his first 32 at-bats of the 1979 season. Among players with enough at-bats to qualify for the 1979 National League Triple Crown, Smith finished the season last in batting average (.211), home runs (0), and RBI (27). Off the field, conflict developed between the Padres' ownership and the combination of Smith and his agent, Ed Gottlieb. The parties entered into a contract dispute before the 1980 season, and when negotiations lasted into spring training, the Padres renewed Smith's contract at his 1979 salary of $72,500. Smith's agent told the Padres the shortstop would forgo the season to race in the Tour de France, despite the fact that Smith, as he later admitted to The Break Room on 96.5 WCMF in Rochester, New York, had never heard of the Tour. Angered by the Padres' attitude during those contract talks, Gottlieb took out a help-wanted ad in the "San Diego Union", part of which read, "Padre baseball player wants part-time employment to supplement income." When Joan Kroc, wife of Padres owner Ray Kroc, publicly offered Smith a job as an assistant gardener on her estate, Smith and Gottlieb's relationship with the organization deteriorated further. Meanwhile, Smith was winning recognition for his accomplishments on the field. In 1980, he set the single-season record for most assists by a shortstop (621), and began his string of 13 consecutive Gold Glove awards. Smith's fielding play prompted the "Yuma Daily Sun" to use the nickname "The Wizard of Oz" in a March 1981 feature article about Smith. While the "Wizard of Oz" nickname was an allusion to the 1939 motion picture of the same name, Smith also came to be known as simply "The Wizard" during his playing career, as Smith's Baseball Hall of Fame plaque would later attest. In 1981, Smith made his first All-Star Game appearance as a reserve player. While Smith was having problems with the Padres' owners, the St. Louis Cardinals also found themselves unhappy with their shortstop, Garry Templeton. Templeton's relationship with Cardinals fans had become increasingly strained and finally came to a head during a game at Busch Stadium on August 26, 1981, when (after being heckled for not running out a ground ball) he made obscene gestures at fans and had to be physically pulled off the field by manager Whitey Herzog. Given the task of overhauling the Cardinals by owner Gussie Busch (and specifically to unload Templeton), Herzog was looking to trade Templeton when he was approached by Padres General Manager Jack McKeon at the 1981 baseball winter meetings. While McKeon had previously told Herzog that Smith was untouchable in any trade, the Padres were now so angry at Smith's agent Gottlieb that McKeon was willing to deal. McKeon and Herzog agreed in principle to a six-player trade, with Templeton for Smith as the centerpiece. 
It was then that Padres manager Dick Williams informed Herzog that a no-trade clause had been included in Smith's 1981 contract. Upon learning of the trade, Smith's initial reaction was to invoke the clause and stay in San Diego, but he was still interested to hear what the Cardinals had to say. While the deal for the players besides Templeton and Smith went through, Herzog flew to San Diego to meet with Smith and Gottlieb over the Christmas holiday. Smith later recalled, "Whitey told me that with me playing shortstop for the St. Louis Cardinals, we could win the pennant. He made me feel wanted, which was a feeling I was quickly losing from the Padres. The mere fact that Whitey would come all the way out there to talk to us was more than enough to convince me that St. Louis was the place I wanted to be." On December 10, 1981, the Padres traded him, along with a player to be named later and Steve Mura, to the Cardinals for a player to be named later, Sixto Lezcano and Garry Templeton. The teams completed the trade on February 19, 1982, with the Padres sending Al Olmsted to the Cardinals and St. Louis sending Luis DeLeon to the Padres. Herzog believed Smith could improve his offensive production by hitting more ground balls, and created a motivational tool designed to help Smith concentrate on that task. Approaching Smith one day during spring training, Herzog said, "Every time you hit a fly ball, you owe me a buck. Every time you hit a ground ball, I owe you a buck. We'll keep that going all year." Smith agreed to the wager, and by the end of the season had won close to $300 from Herzog. As the 1982 season got underway, Herzog's newly assembled team won 12 games in a row during the month of April and finished the season atop the National League East division. Herzog would later say of Smith's contributions, "If he saved two runs a game on defense, which he did many a night, it seemed to me that was just as valuable to the team as a player who drove in two runs a game on offense." Smith became a father for the first time during the 1982 season with the birth of his son O.J., today known as Nikko, on April 28. Smith also developed a lasting friendship with teammate Willie McGee during the season, and Smith said he likes to think he "helped Willie get over some of the rough spots of adjusting to the major leagues". Smith later participated in the postseason for the first time when the Cardinals faced the Atlanta Braves in the best-of-five 1982 National League Championship Series (NLCS). Smith drove in the series' first run by hitting a sacrifice fly that scored McGee in Game 1, and ultimately went five for nine in St. Louis' three-game series sweep. Just as Herzog had predicted when he told Smith the Cardinals would win the pennant with him on the team, Smith found himself as the team's starting shortstop in the best-of-seven 1982 World Series against the Milwaukee Brewers. During the Series, Smith scored three runs, had five hits, and did not commit an error in the field. With St. Louis trailing 3–1 with one out in the sixth inning of Game 7, Smith started a rally with a base hit to left field, eventually scoring the first of the team's three runs that inning. The Cardinals scored two more runs in the eighth inning for a 6–3 win and the championship. After the World Series championship, Smith and the Cardinals agreed on a new contract in January 1983 that paid Smith $1 million per year. 
Smith was voted in as the National League's starting shortstop in the All-Star Game for the first time in 1983, and at season's end won a fourth consecutive Gold Glove Award. During July of the 1984 season, Smith went on the disabled list with a broken wrist after being hit by a pitch during a game against the Padres. Smith's return to the lineup a month later was not enough to propel the Cardinals to a postseason berth. In 1985, Smith posted a .276 batting average, 31 stolen bases, and 591 assists in the field. The Cardinals as a team won 101 games during the season and earned another postseason berth. Facing the Los Angeles Dodgers in the now best-of-seven NLCS, a split of the first four games set the stage for Game 5 at Busch Stadium. With the score tied at two runs apiece in the bottom of the ninth inning, Dodgers manager Tommy Lasorda called upon closer Tom Niedenfuer to pitch. Batting left-handed against Niedenfuer with one out, Smith, who had never hit a home run in his previous 3,009 left-handed major league at-bats, pulled an inside fastball down the right-field line for a walk-off home run, ending Game 5 in a 3–2 Cardinals victory. Smith said, "I was trying to get an extra-base hit and get into scoring position. Fortunately, I was able to get the ball up." The home run not only prompted broadcaster Jack Buck's "Go crazy, folks" play-by-play call, but was also later voted the greatest moment in Busch Stadium history by Cardinals fans. After Smith's teammate Jack Clark hit a late-inning home run of his own in Game 6 to defeat the Dodgers, the Cardinals moved on to face the Kansas City Royals in the 1985 World Series. Once again, sportswriters were quick to draw attention to Smith's outstanding defensive play instead of his 2-for-23 effort at the plate. After the Cardinals took a three-games-to-two advantage, a controversial Game 6 call by umpire Don Denkinger overshadowed the remainder of the Series (which the Royals won in seven games). What was not publicly known during the regular season and playoffs was that Smith had torn his rotator cuff after suffering an impingement in his right shoulder during the July 11–14 homestand against the Padres. After suffering the impingement diving back into first base on a pickoff throw, Smith altered his throwing motion to such a degree that the rotator cuff tear subsequently developed. The 5'10" (1.78 m), 180-pound (82 kg) Smith opted to forgo surgery and instead built up his arm strength via weightlifting, playing through whatever pain he encountered. Said Smith, "I didn't tell anybody about the injury, because I wanted to keep playing and didn't want anybody thinking they could run on me or take advantage of the injury. I tried to do almost everything, except throw a baseball, left-handed: opening a door, turning on the radio—everything. It didn't get any better, but it was good enough that I didn't have to have surgery." Because of his injury, Smith let his then four-year-old son Nikko perform his traditional Opening Day backflip before the Cardinals' first home game of the 1986 season. Smith made an "eye-popping" play later that season, on August 4, during a game against the Philadelphia Phillies at Busch Stadium. In the top of the ninth inning, Phillies pinch-hitter Von Hayes hit a short fly ball to left field, which was pursued by both Smith and left fielder Curt Ford. 
Running with his back to home plate, Smith dove forward, simultaneously catching the ball while parallel to the ground and flying over the diving Ford, avoiding a collision by inches. Smith had hit in either the second or eighth spot in the batting order for most of his time in St. Louis, but during the 1987 season Herzog made him the number-two hitter full-time. Over the course of the year, Smith accrued a .303 batting average, 43 stolen bases, 75 RBIs, 104 runs scored, and 40 doubles, good enough to earn him the Silver Slugger Award at shortstop. In addition to winning the Gold Glove Award at shortstop for the eighth consecutive time, Smith posted a career-high on-base percentage of .392. Smith was also the leading vote-getter in the 1987 All-Star Game. The Cardinals earned a postseason berth with 95 wins, and subsequently faced the San Francisco Giants in the 1987 National League Championship Series. Smith contributed a triple during the series, and the Cardinals won the contest in seven games. The 1987 World Series matched the Cardinals against the American League champion Minnesota Twins. The home team won every game of the Series, which Minnesota took in seven. In 28 at-bats during the Series, Smith scored three runs and had two RBIs. Smith finished second in MVP balloting to Andre Dawson of the last-place Chicago Cubs, largely because Smith and teammate Jack Clark split the first-place vote. Following the 1987 season, Smith was awarded the largest contract in the National League, at $2.34 million. While the team did not see the postseason for the remainder of the decade, Smith continued to rack up All-Star appearances and Gold Gloves, and, combined with the attention he received from his contract, he remained a national figure. Known as a savvy dresser, he made the April 1988 cover of "GQ" magazine. Smith witnessed change within the Cardinals organization when owner Gussie Busch died in 1989 and Herzog quit as manager during the 1990 season. Joe Torre became Smith's new manager in 1990, but the team did not reach the postseason during Torre's nearly five-year tenure. While the Cardinals celebrated their 100th anniversary in 1992, Smith marked milestones of his own, stealing his 500th career base on April 26, then notching a triple on May 26 in front of the home crowd for his 2,000th hit. St. Louis had a one-game lead in the National League East division on June 1, 1992, but injuries took their toll on the team, including Smith's two-week illness in late July after contracting chicken pox for the first time. As a testament to his national visibility during this time, Smith appeared in a 1992 episode of "The Simpsons" titled "Homer at the Bat". Smith became a free agent for the first time in his career on November 2, 1992, only to sign a new contract with the Cardinals on December 6. Smith won his final Gold Glove in 1992, and his streak of 13 consecutive Gold Gloves at shortstop in the National League has yet to be matched. The 1993 season marked the only time between 1981 and 1996 that Smith failed to make the All-Star team; he finished the year with a .288 batting average and .974 fielding percentage. He appeared in 98 games during the strike-shortened 1994 season, and later missed nearly three months of the 1995 season after shoulder surgery on May 31. Smith was recognized for his community service efforts with the 1994 Branch Rickey Award and the 1995 Roberto Clemente Award. 
In February 1994, Smith took on the role of honorary chairman and official spokesman for the Missouri Governor's Council on Physical Fitness and Health. Smith finalized a divorce from his wife Denise during the first half of the 1996 season. Meanwhile, manager Tony La Russa began his first season with the Cardinals in tandem with a new ownership group. After general manager Walt Jocketty acquired shortstop Royce Clayton during the offseason, La Russa emphasized an open competition for the spot that would give the Cardinals the best chance to win. When spring training concluded, Smith had amassed a .288 batting average with zero errors in the field, while Clayton batted .190 with eight errors. Smith believed he had earned the position with his spring training performance, but La Russa disagreed and awarded Clayton the majority of playing time in the platoon situation that developed, in which Smith typically saw action every third game. La Russa said, "I think it's fair to say he misunderstood how he compared to Royce in spring training ... When I and the coaches evaluated the play in spring training—the whole game—Royce started very slowly offensively and you could see him start to get better. By what he was able to do defensively and on the bases, Royce deserved to play the majority of the games." Smith missed the first month of the season with a hamstring injury, and continued to harbor ill feelings toward La Russa that had developed after spring training ended. In a closed-door meeting in mid-May, La Russa asked Smith if he would like to be traded. Instead, Smith and his agent negotiated a compromise with Cardinals management, agreeing to a buyout of special provisions in his contract in conjunction with Smith announcing his retirement. The agreement prompted a press conference at Busch Stadium on June 19, 1996, during which Smith announced he would retire from baseball at season's end. As Smith made his final tour of the National League, he was honored by many teams, and received a standing ovation at the 1996 All-Star Game in Philadelphia. Between June 19 and September 1, Smith's batting average increased from .239 to .286. On September 2, Smith tied a career high by scoring four runs, one of them on a home run and another on a close play at home plate in the bottom of the 10th inning, against division leader Houston. The victory moved the Cardinals to within a half game of Houston in the National League Central Division, and the Cardinals went on to win the division by six games. The Cardinals held a special ceremony at Busch Stadium on September 28, 1996, before a game against the Cincinnati Reds, honoring Smith by retiring his uniform number. Noted for his ritual backflip before Opening Days, All-Star Games, and postseason games, Smith chose this occasion to perform it for one of the last times. In the postseason, the Cardinals first faced the San Diego Padres in the 1996 National League Division Series. After sitting out Game 1, Smith got the start in Game 2 at Busch Stadium, helping his team go up two games in the series by notching a run, a hit, and two walks at the plate, along with an assist and a putout in the field. The Cardinals then swept the series by winning Game 3 in San Diego. The Cardinals faced the Atlanta Braves in the 1996 National League Championship Series. Smith started Game 1 and registered three putouts and one assist in the field, but went hitless in four at-bats in the Cardinals' 4–2 loss. 
The Cardinals then won Games 2, 3, and 4, contests in which Smith did not appear. Upon receiving the start in Game 5, Smith nearly duplicated his Game 1 performance with four putouts, one assist, and zero hits in four at-bats as part of another Cardinals defeat. The Cardinals also failed to win Game 6 or Game 7 in Atlanta, ending their season. With the Cardinals trailing by 10 runs in Game 7 on October 17, Smith flied out to right field while pinch-hitting in the sixth inning, marking the end of his playing career. Smith finished his career with distinctions ranging from the accumulation of more than 27.5 million votes in All-Star balloting to holding the record for the most MLB at-bats without hitting a grand slam. Upon retirement, Smith took over for Mel Allen as the host of the television series "This Week in Baseball" ("TWIB") in 1997. Smith also became a color commentator for the local broadcast of Cardinals games on KPLR-TV from 1997 to 1999. When his stint on "This Week in Baseball" concluded, Smith moved on to work for CNN-SI beginning in 1999. After La Russa retired as manager of the Cardinals in 2011, Smith became active in the organization again, starting with his stint as a special instructor for the team's 2012 spring training camp. On January 8, 2002, Smith learned via a phone call that he had been elected to the Baseball Hall of Fame on his first ballot, receiving 91.7% of the votes cast. As it happened, the Olympic torch was passing through St. Louis on its way to Salt Lake City for the 2002 Winter Olympics, and Smith served as a torchbearer in a ceremony with St. Louis Rams quarterback Kurt Warner that evening. Smith was inducted into the Hall of Fame during ceremonies on July 28, 2002. During his speech, he compared his baseball experiences with the characters from the novel "The Wonderful Wizard of Oz", after which his son Dustin presented his Hall of Fame plaque. Days later, on August 11, Smith was back at Busch Memorial Stadium for the unveiling of a statue in his likeness made by sculptor Harry Weber. Weber chose to emphasize Smith's defensive skills by showing Smith stretched horizontal to the ground while fielding a baseball. At the ceremony Weber told Smith, "You spent half of your career up in the air. That makes it difficult for a sculptor to do something with it." Smith has also been an entrepreneur in a variety of business ventures. Smith opened "Ozzie's" restaurant and sports bar in 1988, started a youth sports academy in 1990, became an investor in a grocery store chain in 1999, and partnered with David Slay to open a restaurant in the early 2000s. Of those businesses, the youth academy remains in operation; the restaurant closed in 2010 after changing ownership and locations once. Aside from appearing in numerous radio and television commercials in the St. Louis area since retiring from baseball, Smith authored a children's book in 2006 and launched his own brand of salad dressing in 2008. Besides the National Baseball Hall of Fame, Smith has also been honored by other halls of fame and institutions. In 1999, he ranked number 87 on "The Sporting News"' list of the 100 Greatest Baseball Players, and finished third in voting at shortstop for the Major League Baseball All-Century Team. He was honored with induction into the Missouri Sports Hall of Fame, the Alabama Sports Hall of Fame, and the St. Louis Walk of Fame, and received an honorary Doctor of Humane Letters degree from Cal Poly. 
In January 2014, the Cardinals announced Smith among 22 former players and personnel to be inducted into the St. Louis Cardinals Hall of Fame Museum as part of the inaugural class of 2014. Smith is the father of three children from his marriage to former wife Denise: sons Nikko and Dustin, and daughter Taryn. Smith remains a visible figure around the St. Louis area, making varied appearances such as playing the Wizard in the St. Louis Municipal Opera's summer 2001 production of "The Wizard of Oz". Smith cheered on his son Nikko as he reached the top 10 finalists of the 2005 edition of "American Idol". In 2012, Smith made news headlines again when he sold all of his Gold Gloves together at auction for more than $500,000.
https://en.wikipedia.org/wiki?curid=22790
Boeing OC-135B Open Skies The OC-135B Open Skies is a United States Air Force observation aircraft that supports the Treaty on Open Skies. The aircraft, a modified WC-135B, flies unarmed observation flights over participating parties of the treaty. Three OC-135B aircraft were modified by the Aeronautical Systems Center's 4950th Test Wing at Wright-Patterson Air Force Base in Ohio. The first operationally capable OC-135B was assigned to the 24th Reconnaissance Squadron at Offutt AFB in October 1993. That aircraft, fitted with only a basic set of navigational and sensor equipment, was placed in inviolate storage at the Aerospace Maintenance and Regeneration Center at Davis-Monthan Air Force Base near Tucson, Arizona, in 1997. Two fully operational OC-135B aircraft were delivered in 1996 with the full complement of treaty-allowed sensors, which includes an infrared line scanner, synthetic aperture radar and video scanning sensors. The interior seats 35 people, including the cockpit crew, aircraft maintenance crew, foreign country representatives and crew members from the U.S. Department of Defense's Defense Threat Reduction Agency (DTRA). Cameras installed include one vertical and two oblique KS-87E framing cameras, used for low-altitude photography at approximately 3,000 feet (900 m) above the ground, and one KA-91C panoramic camera, which scans from side to side to provide a wide sweep for each picture, used for high-altitude photography. The data annotation and recording system (DARMS) processes navigational, altitude, time and camera signals to annotate each picture with correct position, altitude, time, roll angle and other information. In addition, this system records every picture taken according to camera, frame and navigational position. A keyboard with trackball is the input device for operating this system. Two Barco VGA color monitors display camera annotation and other camera data on screen for the sensor operator's and observer's use. Camera controls, located in the sensor operator's console, operate and adjust individual cameras for cloud cover, frame overlap and other functions. The sensor operator console seats four and has all the equipment listed above plus camera bay heating control, chronometers, emergency oxygen, interphone and individual lighting. The flight following console also seats four and includes most of the equipment listed above except for DARMS and camera controls. Seven commercial Norcold Tek II coolers with individual refrigeration units provide temperature and humidity control to preserve peak film performance. The units can be removed from the aircraft, if necessary, in order to transport film. The aircraft flies on its intended flight path throughout the entire mission with no reliance on ground-based navigation devices. A commercial Litton 92 INS/GPS system, an integrated inertial navigation system (INS) with a global positioning system (GPS), provides continuous updates. The GPS updates the INS several times per second to correct any deviations in the flight path. The INS also feeds precise latitude, longitude, time, roll angle and barometric altitude to the DARMS and camera systems. A true airspeed computer feeds true airspeed data to the INS. A combined altitude radar altimeter provides precise height-above-ground information to the pilot for navigational purposes, as well as a signal to DARMS for film annotation. 
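To make the annotation flow concrete, the following is a minimal sketch in Python of how a DARMS-like system might stamp each camera frame with the navigation data described above. It is an illustration only; the field and function names are hypothetical and are not taken from the actual DARMS.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class NavFix:
    """One INS/GPS solution (hypothetical fields, per the description above)."""
    latitude_deg: float
    longitude_deg: float
    baro_altitude_ft: float
    radar_height_ft: float
    roll_angle_deg: float
    time_utc: datetime

@dataclass
class FrameAnnotation:
    """Metadata recorded against a single photograph."""
    camera_id: str      # e.g. one of the KS-87E positions or the KA-91C
    frame_number: int
    fix: NavFix

def annotate_frame(camera_id: str, frame_number: int, fix: NavFix) -> FrameAnnotation:
    """Bind the current navigation solution to a camera frame, mirroring
    how DARMS annotates each picture with position, altitude, time and roll."""
    return FrameAnnotation(camera_id, frame_number, fix)

# Example: annotate frame 42 from the vertical framing camera during a
# low-altitude pass at roughly 3,000 ft.
fix = NavFix(41.3, -95.9, 3000.0, 2950.0, 1.5, datetime.now(timezone.utc))
print(annotate_frame("KS-87E-vertical", 42, fix))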
In addition, a metric altimeter is installed on the pilot's instrument panel for altitude reference when flying in countries that use meters for altitude reference. The aircraft are being upgraded with the Block 30 Pacer Crag Navigational System upgrade, a first step in making them compliant with ICAO-mandated Global Air Traffic Management and Global Air Navigation Standards guidelines. The OC-135B modifications center on four cameras installed in the rear of the aircraft. Since its primary mission is to take pictures, most of the installed equipment and systems provide direct support to the cameras and the camera operator. Other modifications to the aircraft included installing an auxiliary power unit, crew luggage compartment, sensor operator console, flight following console and upgraded avionics. Though the aircraft feature a large window in the cargo door, this is simply a remnant of their previous weather reconnaissance role. Other modifications support the aircrew. A gaseous oxygen system replaced the liquid oxygen system to be more compatible with foreign airfields, and a fluorescent lighting system was added throughout the cabin to provide adequate lighting for operation and inspections. Four upgraded seats with a conference table, interphone, lighting and oxygen make up the station for both countries' mission commanders. A four-channel interphone system enables segregated communications between various elements on board. The auxiliary power unit enables the aircraft to start engines and provides electrical power and cabin heat independent of ground support equipment. It was manufactured by AlliedSignal, with the installation designed and carried out by E-Systems and World Auxiliary Power Company. The aircraft are assigned to Air Combat Command's 55th Wing, 45th Reconnaissance Squadron, at Offutt Air Force Base near Omaha, Nebraska, for operations, training and maintenance. When tasked, ACC's role is to transport a DTRA observation team to an Open Skies point-of-entry airport, conduct the observation flight, and then return the team to the continental United States. This article includes public domain text from a United States Government source.
https://en.wikipedia.org/wiki?curid=22791
Treaty on Open Skies The Treaty on Open Skies entered into force on January 1, 2002, and currently has 34 party states. It establishes a program of unarmed aerial surveillance flights over the entire territory of its participants. The treaty is designed to enhance mutual understanding and confidence by giving all participants, regardless of size, a direct role in gathering information about military forces and activities of concern to them. Open Skies is one of the most wide-ranging international efforts to date promoting openness and transparency of military forces and activities. The concept of "mutual aerial observation" was initially proposed to Soviet Premier Nikolai Bulganin at the Geneva Conference of 1955 by President Dwight D. Eisenhower; however, the Soviets promptly rejected the concept and it lay dormant for several years. The concept was revived in 1989 as an initiative of U.S. president (and former Central Intelligence Agency Director) George H. W. Bush. Negotiated by the then-members of NATO and the Warsaw Pact, the agreement was signed in Helsinki, Finland, on March 24, 1992. This treaty is not related to civil-aviation open skies agreements. The 34 state parties to the Open Skies Treaty are: Belarus, Belgium, Bosnia and Herzegovina, Bulgaria, Canada, Croatia, the Czech Republic, Denmark (including Greenland), Estonia, Finland, France, Georgia, Germany, Greece, Hungary, Iceland, Italy, Latvia, Lithuania, Luxembourg, the Netherlands, Norway, Poland, Portugal, Romania, the Russian Federation, Slovakia, Slovenia, Spain, Sweden, Turkey, Ukraine, the United Kingdom, and the United States. Kyrgyzstan signed the treaty but has not yet ratified it. Canada and Hungary are the Depositories of the treaty in recognition of their special contributions to the Open Skies process. Depository countries maintain treaty documents and provide administrative support. The Open Skies treaty is one of unlimited duration, and is open to accession by other states. Republics of the former Soviet Union (U.S.S.R.) that have not already become state parties to the treaty may join it at any time. Applications from other interested countries are subject to a consensus decision by the Open Skies Consultative Commission (OSCC). Eight countries have joined the treaty since it entered into force in 2002: Bosnia and Herzegovina, Croatia, Estonia, Finland, Latvia, Lithuania, Slovenia, and Sweden. Notably missing are Austria, Cyprus, Ireland, Switzerland, Serbia, Montenegro, Albania, North Macedonia, Moldova, Armenia and Uzbekistan. The Republic of Cyprus submitted its application to accede to the Treaty in 2002; since then, however, Turkey has blocked its accession. The Open Skies Consultative Commission is the implementing body for the Treaty on Open Skies. It comprises representatives from each state party to the treaty and meets monthly at the Vienna headquarters of the Organization for Security and Co-operation in Europe. The Open Skies regime covers the territory over which the parties exercise sovereignty, including mainland, islands, and internal and territorial waters. The treaty specifies that the entire territory of a member state is open to observation. Observation flights may only be restricted for reasons of flight safety and not for reasons of national security. Observation aircraft may be provided by either the observing party or by the observed party (the "taxi option"), at the latter's choice. 
All Open Skies aircraft and sensors must pass specific certification and pre-flight inspection procedures to ensure that they are compliant with treaty standards. The official certified U.S. Open Skies aircraft is the OC-135B Open Skies. Canada uses a C-130 Hercules aircraft equipped with a "SAMSON" sensor pod to conduct flights over other treaty nations. The pod is a converted CC-130 fuel tank modified to carry the permitted sensors, along with associated on-board mission systems. A consortium of nations consisting of Belgium, the Netherlands, Luxembourg, Canada, France, Greece, Italy, Portugal and Spain owns and operates this system. The costs of maintaining the SAMSON pod are shared, based on each nation's flight quota and actual use. Bulgaria, Romania, Russia and Ukraine use the Antonov An-30 for their flights. The Czech Republic also used the An-30 for this purpose but apparently retired all of its aircraft from service in 2003. Russia also uses a Tu-154M-ON monitoring aircraft; Germany formerly used this type as well, until the aircraft was lost in a 1997 accident. Russia is phasing out both the An-30 and the Tu-154M-ON and replacing them with two Tu-214ON aircraft, with the registrations RA-64519 and RA-64525, though this aircraft's new sensor suite is being challenged by the US. Sweden uses a Saab 340 aircraft ("OS-100") that was certified in 2004. Until 2008, the UK's designated aircraft was an Andover C.1(PR), registration XS596. Since then the UK has used a variety of aircraft, including a Saab 340, an An-30 and an OC-135. In 2017, the German Air Force purchased an Airbus A319 as its future Open Skies aircraft. Open Skies aircraft may have video, optical panoramic and framing cameras for daylight photography, infra-red line scanners for a day/night capability, and synthetic aperture radar for a day/night all-weather capability. Photographic image quality will permit recognition of major military equipment (e.g., permit a member state to distinguish between a tank and a truck), thus allowing significant transparency of military forces and activities. Sensor categories may be added and capabilities improved by agreement among member states. All sensors used in Open Skies must be commercially available to all signatories. Imagery resolution is limited to 30 centimetres. Each state party is obligated to receive observation flights per its passive quota allocation, and may conduct as many observation flights (its active quota) as its passive quota. During the first three years after entry into force, each state was obligated to accept no more than seventy-five percent of its passive quota. Since the overall annual passive quota for the United States is 42, this means that it was obligated to accept no more than 31 observation flights a year during this three-year period. Only two flights were requested over the United States during 2005, by the Russian Federation and Republic of Belarus Group of states parties (which functions as a single entity for quota allocation purposes). The United States is entitled to 8 of the 31 annual flights available over Russia/Belarus. Additionally, the United States is entitled to one flight over Ukraine, which is shared with Canada. Imagery collected from Open Skies missions is available to any state party upon request for the cost of reproduction. As a result, the data available to each state party is much greater than that which it can collect itself under the treaty quota system. 
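The quota arithmetic above reduces to a one-line calculation: during the transition period a state had to accept at most seventy-five percent of its passive quota, rounded down to a whole flight. Here is a minimal sketch in Python, for illustration only (the function name is our own, not treaty language).

import math

def max_required_flights(passive_quota: int, cap_fraction: float = 0.75) -> int:
    """Most observation flights a state had to accept per year during the
    treaty's first three years: the stated fraction of its passive quota,
    rounded down to a whole flight."""
    return math.floor(passive_quota * cap_fraction)

# The United States' overall annual passive quota is 42, so during the
# transition period it had to accept at most 31 flights a year.
assert max_required_flights(42) == 31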
At a Geneva Conference meeting with Soviet Premier Nikolai Bulganin in 1955, President Eisenhower proposed that the United States and Soviet Union conduct surveillance overflights of each other's territory to reassure each country that the other was not preparing to attack. The fears and suspicions of the Cold War led Soviet General Secretary Nikita Khrushchev to reject Eisenhower's proposal. During the five-day summit conference held in Geneva, Switzerland, at the end of July 1955, the Soviet Union and United States held serious talks about disarmament, and the United States put forward proposals for mutual reconnaissance flights over each other's air space, known as the Open Skies proposal. The United States had a large number of RB-47 and RB-36 reconnaissance aircraft at its disposal for such activities. However, the Soviets turned down this proposal. This Geneva Conference was nonetheless accepted as a turning point in the Cold War: tensions in Europe had settled into a stalemate, and both the Soviet Union and United States were willing to talk about their differences rather than escalate them toward war. Thirty-four years later, the Open Skies concept was reintroduced by President George H. W. Bush as a means to build confidence and security between all North Atlantic Treaty Organisation (NATO) and Warsaw Pact countries. In February 1990, an international Open Skies conference involving all NATO and Warsaw Pact countries opened in Ottawa, Canada. Subsequent rounds of negotiations were held in Budapest, Hungary; Vienna, Austria; and Helsinki, Finland. On March 24, 1992, the Open Skies Treaty was signed in Helsinki by Secretary of State James Baker and foreign ministers from 23 other countries. The treaty entered into force on January 1, 2002, after Russia and Belarus completed ratification procedures. In November 1992, President Bush assigned responsibility for overall training, management, leadership, coordination and support for U.S. Open Skies observation missions to the On-Site Inspection Agency (OSIA), now a part of the Defense Threat Reduction Agency (DTRA). Until entry into force in January 2002, DTRA support for the treaty involved participating in training and joint trial flights (JTFs). The U.S. has conducted over 70 JTFs since 1993. By March 2003, DTRA had successfully certified 16 camera configurations on the OC-135B aircraft. It also contributed to the certification of the Bulgarian An-30, Hungarian An-26, SAMSON pod group (see above) C-130H, Romanian An-30, Russian An-30, and Ukrainian An-30. The United States successfully flew its first Open Skies mission over Russia in December 2002. With entry into force of the treaty, formal observation flights began in August 2002. During the first treaty year, state parties conducted 67 observation flights. In 2004, state parties conducted 74 missions, and planned 110 missions for 2005. On March 8 and 9, 2007, Russia conducted overflights of Canada under the treaty. The OSCC continues to address modalities for conducting observation missions and other implementation issues. Since 2002, a total of 40 missions have taken place over the U.K.: 24 quota missions, conducted by Russia (20), Ukraine (3) and Sweden (1), and 16 training flights, conducted by Benelux (joint with Estonia), Estonia (joint with Benelux), Georgia (3, one joint with Sweden), Sweden (3, one joint with Georgia), the US (3), Latvia, Lithuania, Romania, Slovenia, and Yugoslavia. Also since 2002, the U.K. 
has undertaken a total of 51 Open Skies missions: 38 were quota missions, to Ukraine (5), Georgia (7) and Russia (26); 13 were training missions, to Bulgaria, Yugoslavia, Estonia, Slovenia (3), Sweden (3), the US, Latvia, Lithuania and the Benelux. The flights cost approximately £50,000 per operational mission and approximately £25,000 per training mission, for an annual cost of roughly £175,000. A Russian Defence Ministry spokesman stated on 4 February 2016 that Turkey had refused a Russian Open Skies mission, planned for 1–5 February 2016, that was to fly over areas adjacent to Syria as well as over NATO airbases. According to Russia, Turkey gave no explanation for the restrictions, which Russia claimed indicated illegal military activity in Syrian territory. The OSCC has not commented on the alleged violation of the treaty by Turkey. By 2016, Russian aircraft were flying with equipment upgraded from that initially fielded. Both Russia and the United States have alleged that the other is violating the provisions of the treaty. U.S. Secretary of State Mike Pompeo cited, for example, Russia's refusal of access over the Russian-controlled areas of Georgia. On 20 September 2019, the U.S. and Canada were denied access to a military exercise in central Russia. In October 2019, according to documents from the House of Representatives, the United States president was considering a U.S. withdrawal from the Open Skies Treaty. NATO allies and partners, in particular Ukraine, were against the move, fearing it would give Russia license to further reduce or ban overflights, thus reducing their knowledge of Russian military movements. In April 2020, it was reported that Secretary of State Mike Pompeo and Secretary of Defense Mark Esper had agreed to proceed with U.S. withdrawal from the Treaty on Open Skies. On May 21, 2020, President Trump announced that the United States would be withdrawing from the treaty due to alleged Russian violations.
https://en.wikipedia.org/wiki?curid=22792
Limited overs cricket Limited overs cricket, also known as one-day cricket, is a version of the sport of cricket in which a match is generally completed in one day; it includes List A cricket and Twenty20 cricket. The name reflects the rule that in the match each team bowls a set maximum number of overs, usually between 20 and 50, although shorter and longer forms of limited overs cricket have been played. The concept contrasts with Test and first-class matches, which can take up to five days to complete. One-day cricket is popular with spectators as it can encourage aggressive, risky, entertaining batting, often results in cliffhanger endings, and ensures that a spectator can watch an entire match without committing to five days of continuous attendance. Each team bats only once, and each innings is limited to a set number of overs, usually fifty in a One Day International and between forty and sixty in a List A match. List A is the classification of the limited-overs (one-day) form of cricket at the domestic level. Despite its name, important one-day matches, international and domestic, often have two days set aside, the second day being a "reserve" day to allow more chance of the game being completed if a result is not possible on the first day (for instance if play is prevented or interrupted by rain). As mentioned above, in almost all competitive one-day games, a restriction is placed on the number of overs that may be bowled by any one bowler. This prevents a side from fielding only two top-class bowlers with extremely good stamina who could bowl throughout their opponents' innings. The usual limitation is set so that a side must include at least five players who bowl: for example, the usual limit for twenty-over cricket is four overs per bowler, for forty-over cricket eight per bowler, and for fifty-over cricket ten per bowler. There are exceptions: Pro Cricket in the United States restricted bowlers to five overs each, thus leaving a side requiring only four bowlers (see the short sketch below). A one-day, limited 50-over cricket tournament was first played as the inaugural All India Pooja Cricket Tournament in 1951 at Tripunithura in Kochi, Kerala. It is thought to be the brainchild of KV Kelappan Thampuran, a former cricketer and the first secretary of the Kerala Cricket Association. The one-day limited-overs game was later adapted and played between English county teams for the first time on 2 May 1962: Leicestershire beat Derbyshire and Northamptonshire beat Nottinghamshire over 65 overs in the "Midlands Knock-Out Cup", which Northamptonshire went on to win a week later. The following year, the first full-scale one-day competition between first-class teams was played, the knock-out Gillette Cup, won by Sussex. The number of overs was reduced to 60 for the 1964 season. League one-day cricket also began in England, when the John Player Sunday League was started in 1969 with forty-over matches. Both these competitions have continued every season since inauguration, though the sponsorship has changed. There is now one 50-over competition, which is called the Royal London One-Day Cup. The first Limited Overs International (LOI) or One-Day International (ODI) match was played in Melbourne in 1971, and the quadrennial Cricket World Cup began in 1975. Many of the "packaging" innovations, such as coloured clothing, were a result of World Series Cricket, a "rebel" series set up outside the cricketing establishment by Australian entrepreneur Kerry Packer. 
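Returning to the per-bowler limit described above, the usual rule reduces to a one-line calculation. The following Python sketch is an illustration only, with names of our own choosing.

def overs_per_bowler(innings_overs: int, min_bowlers: int = 5) -> int:
    """Usual per-bowler limit: set so that at least min_bowlers players
    must bowl during the innings."""
    return innings_overs // min_bowlers

# The limits quoted above: 20-over -> 4, 40-over -> 8, 50-over -> 10.
assert [overs_per_bowler(n) for n in (20, 40, 50)] == [4, 8, 10]

# Pro Cricket's exception: with bowlers capped at five overs each,
# four bowlers could cover a 20-over innings.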
For more details, see History of cricket. Twenty20, a curtailed form of one-day cricket with 20 overs per side, was first played in England in 2003. It has proven very popular, and several Twenty20 matches have been played between national teams. It makes several changes to the usual laws of cricket, including the addition of a "bowl-out" (similar to a penalty shoot-out in football) to decide the result of tied matches, which was subsequently dispensed with in favour of a Super Over. 100-ball cricket, another form of one-day cricket with 100 deliveries per side, will launch in England in 2021. It is designed to further shorten game time and hopes to attract a new audience. It makes further changes to the usual laws of cricket, including the addition of one 10-ball over bowled by each side in addition to 15 traditional six-ball overs. One Day International matches are usually played in brightly coloured clothing, often in a "day-night" format where the first innings of the day occurs in the afternoon and the second occurs under stadium lights. In the early days of ODI cricket, the number of overs was generally 60 per side, and matches were also played with 40, 45 or 55 overs per side, but now it has been uniformly fixed at 50 overs. Every four years, the Cricket World Cup involves all the Test-playing nations and other national sides who qualify through the ICC World Cup Qualifier. It usually consists of round-robin stages, followed by semi-finals and a final. The International Cricket Council (ICC) determines the venue far in advance. The ICC Champions Trophy also involves all the Test-playing nations, and is held between World Cups. It usually consists of a round-robin group stage, semifinals, and a final. Each Test-playing country often hosts triangular tournaments, between the host nation and two touring sides. There is usually a round-robin group stage, and then the leading two teams play each other in a final, or sometimes a best-of-three final. When there is only one touring side, there is still often a best-of-five or best-of-seven series of limited overs matches. The ICC World Cricket League is an ODI competition for national teams with Associate or Affiliate status. Domestic one-day competitions exist in almost every country where cricket is played. List A cricket is a classification of the limited-overs (one-day) form of the sport of cricket: much as domestic first-class cricket is the level below international Test match cricket, so List A cricket is the domestic level of one-day cricket below One Day Internationals. Twenty20 matches do not qualify at present. Most cricketing nations have some form of domestic List A competition. The number of overs in List A cricket ranges from forty to sixty overs per side. The Association of Cricket Statisticians and Historians created this category for the purpose of providing an equivalent to first-class cricket, to allow the generation of career records and statistics for comparable one-day matches. Only the more important one-day competitions in each country, plus matches against a touring Test team, are included. The categorisation of cricket matches as "List A" was not officially endorsed by the International Cricket Council until 2006, when the ICC announced that it and its member associations would be determining this classification in a manner similar to that done for first-class matches. The JLT One Day Cup is a 50-over tournament held since 1969. 
In 2006 Cricket Australia introduced the KFC Twenty20 Big Bash, contested amongst the state teams; in 2011 this was expanded to the KFC Twenty20 Big Bash League, consisting of teams based in the capital cities of Australia. The Dhaka Premier Division Cricket League, sponsored by Walton, has been Bangladesh's List A competition since the 2013–14 season. Twelve teams compete; the bottom two are relegated each year and replaced by the top two from the league below. Bangladesh's T20 competition is the Bangladesh Premier League, contested annually by seven teams since the 2011–12 season. In England, each county has a team in one of two groups, as follows, with their home grounds. North Group: Birmingham Bears (Edgbaston), Derbyshire Falcons (Derby County Ground), Durham Jets (Riverside Ground), Lancashire Lightning (Old Trafford), Leicestershire Foxes (Grace Road), Northamptonshire Steelbacks (Northampton County Ground), Nottinghamshire Outlaws (Trent Bridge), Worcestershire Rapids (New Road), Yorkshire Vikings (Headingley). South Group: Essex Eagles (Chelmsford County Ground), Glamorgan Dragons (Sophia Gardens), Gloucestershire (Bristol County Ground), Hampshire Royals (Rose Bowl), Kent Spitfires (St Lawrence Ground), Middlesex (Lord's), Somerset (Taunton County Ground), Surrey (The Oval), Sussex Sharks (Hove County Ground). The Pakistani domestic competition changes regularly, but for 2005–06 there were plans for three one-day tournaments for men. The National Bank Cup is a two-week tournament in February and March between city teams, divided into the Gold League (with seven teams) and Silver League (with six teams). The teams play each other once, with the top two teams qualifying for the final in each individual league, so no team from the Gold League will meet a Silver League team. The National Bank Patron's Cup is a two-week tournament running just before the National Bank Cup, with one group of five teams and another group of six teams; the top two teams from each group proceed to the semi-finals. The National Bank Twenty20 Cup is a tournament running for one week in mid-March, held in Karachi and Lahore. The same groups apply as in the National Bank Cup, with two semi-finals and a final following the group stages. The Pakistan Super League is a professional franchise Twenty20 men's cricket league, headquartered in Lahore and consisting of five franchises nominally representing cities in Pakistan. It is operated by the Pakistan Cricket Board (PCB) and was established in 2016. The local competition in South Africa is the Standard Bank Cup (formerly the Benson & Hedges Series), played between six teams. The games are 45 overs per side, based on a home-and-away round-robin system (each team plays ten matches) with semi-finals and a final. The Eagles were the winners of the 2004/2005 and 2005/2006 competitions. Twenty teams compete in the Premier Limited-Overs Tournament, an expansion from 16 in the previous season. Games are played over 50 overs per side, and the teams are divided into two groups, where each team meets the other once over a period of a month. The four top teams from each group qualify for the quarter-finals, and there is then a direct knock-out system until a winner is found after three knock-out stages. 
The NAGICO Regional Super50 is the main regional one-day competition in the West Indies. In recent years, it has been run over about a week as a group stage followed by knock-out stages. Trinidad and Tobago have won the most titles. The world record for the highest innings total in any List A limited overs match is 496 for 4 by Surrey against Gloucestershire in their Friends Provident Trophy 50-overs match at the Oval, London, on 29 April 2007. That surpassed the 443 for 9 by Sri Lanka against the Netherlands in their One Day International 50-overs match at Amstelveen on 4 July 2006, which was the record ODI score at the time. On 19 June 2018, England set a new international record, totalling 481 for 6 against Australia at Trent Bridge. The lowest ever total is 23 by Yorkshire against Middlesex at Headingley in 1974 in a 40-overs match. The record low score in ODIs was set by Zimbabwe, who managed just 35 against Sri Lanka in Harare on 25 April 2004. The most runs scored by both sides in any List A limited overs match is 872: Australia, batting first, scored 434 for 4 in 50 overs, and yet were beaten by South Africa, who scored 438 for 9 with a ball to spare during their One Day International at Johannesburg in 2006. The highest individual innings is 268 by Ali Brown for Surrey against Glamorgan in a 50-overs match at The Oval in 2002. The best bowling figures are 8 for 15 by Rahul Sanghvi for Delhi against Himachal Pradesh in a 50-overs match at Una in 1997. The highest individual innings in internationals is 264, by Rohit Sharma. The highest score in any formal limited overs match is believed to be United's 630 for 5 against Bay Area in a 45-overs match at Richmond, California, in August 2006. The record for most runs in an over was set by Herschelle Gibbs of the South African cricket team when, in the 2007 Cricket World Cup in the West Indies, he hit 6 sixes in one over bowled by Daan van Bunge of the Netherlands. This record is shared by Yuvraj Singh of India, who achieved the feat in the 2007 ICC World Twenty20 in South Africa, hitting 6 sixes in an over bowled by Stuart Broad of England. Sachin Tendulkar was the first male cricketer to score a double century in ODIs (200 not out), achieving the feat against South Africa on 24 February 2010 at Gwalior, India. Virender Sehwag became the second, scoring 219 before being caught out against the West Indies on 8 December 2011 at Indore, India. Rohit Sharma became the third male cricketer to score a double century when he scored 209 against Australia on 2 November 2013.
https://en.wikipedia.org/wiki?curid=22794
Organization for Security and Co-operation in Europe The Organization for Security and Co-operation in Europe (OSCE) is the world's largest security-oriented intergovernmental organization. Its mandate includes issues such as arms control, promotion of human rights, freedom of the press, and fair elections. It employs around 3,460 people, mostly in its field operations but also in its secretariat in Vienna, Austria, and its institutions. It has its origins in the 1975 Conference on Security and Co-operation in Europe (CSCE) held in Helsinki, Finland. The OSCE is concerned with early warning, conflict prevention, crisis management, and post-conflict rehabilitation. Its 57 participating countries are located in Europe, northern and central Asia, and North America, and cover much of the land area of the Northern Hemisphere. It was created during the Cold War era as an East–West forum. The Organization has its roots in the 1973 Conference on Security and Co-operation in Europe (CSCE). A European security grouping had been mooted since the 1950s, but the Cold War prevented any substantial progress until the talks at Dipoli in Espoo began in November 1972. These talks were held at the suggestion of the Soviet Union, which wished to use them to maintain its control over the communist countries in Eastern Europe, and President of Finland Urho Kekkonen hosted them in order to bolster his policy of neutrality. Western Europe, however, saw these talks as a way to reduce the tension in the region, furthering economic cooperation and obtaining humanitarian improvements for the populations of the Communist bloc. The recommendations of the talks, in the form of "The Blue Book", gave the practical foundations for a three-stage conference called the "Helsinki process". The CSCE opened in Helsinki on 3 July 1973 with 35 states sending representatives. Stage I took only five days, in which the states agreed to follow the Blue Book. Stage II was the main working phase and was conducted in Geneva from 18 September 1973 until 21 July 1975. The result of Stage II was the Helsinki Final Act, which was signed by the 35 participating states during Stage III, which took place in Finlandia Hall from 30 July to 1 August 1975. It was opened by the Holy See's diplomat Cardinal Agostino Casaroli, who was chairman of the conference. The concepts of improving relations and implementing the act were developed over a series of follow-up meetings, with major gatherings in Belgrade (4 October 1977 – 8 March 1978), Madrid (11 November 1980 – 9 September 1983) and Vienna (4 November 1986 – 19 January 1989). The fall of the Soviet Union required a change of role for the CSCE. The Charter of Paris for a New Europe, signed on 21 November 1990, marked the beginning of this change, and the changes were capped by the renaming of the CSCE to the OSCE on 1 January 1995, in accord with the results of the conference held in Budapest, Hungary, in 1994. The OSCE now had a formal secretariat, Senior Council, Parliamentary Assembly, Conflict Prevention Centre, and Office for Free Elections (later becoming the Office for Democratic Institutions and Human Rights). In December 1996, the "Lisbon Declaration on a Common and Comprehensive Security Model for Europe for the Twenty-First Century" affirmed the universal and indivisible nature of security on the European continent. In Istanbul on 19 November 1999, the OSCE ended a two-day summit by calling for a political settlement in Chechnya and adopting a Charter for European Security. 
According to then Russian Minister of Foreign Affairs Igor Ivanov, this summit marked a turning point in Russian perception of the OSCE: from an organization that expressed Europe's collective will to one that serves as a Western tool for "forced democratization". Through its Office for Democratic Institutions and Human Rights (ODIHR), the OSCE observes and assesses elections in its member states in order to support fair and transparent democratic processes, in keeping with the mutual standards to which the organization is committed; between 1994 and 2004 the OSCE sent teams of observers to monitor more than 150 elections, typically focusing on elections in emerging democracies. In 2004, at the invitation of the United States government, the ODIHR deployed an assessment mission, made up of participants from six OSCE member states, which observed that year's US presidential election and produced a report. It was the first time that a US presidential election was the subject of OSCE monitoring, although the organization had previously monitored state-level American elections in Florida and California, in 2002 and 2003. The 2004 assessment took place against the backdrop of the controversial recount effort in the 2000 US presidential election, and came about largely through the initiative of 13 Democratic members of the United States House of Representatives. That group, which included Barbara Lee, of California, and Eddie Bernice Johnson, of Texas, initially addressed a request for election observers to the United Nations, in a letter to Kofi Annan, the UN Secretary-General, but the request was declined. Subsequently, the administration of President George W. Bush, through the State Department, headed by Secretary of State Colin Powell, responded to the lawmakers' concerns by inviting the OSCE election-monitoring mission. The six official languages of the OSCE are English, French, German, Italian, Spanish and Russian. A unique aspect of the OSCE is the non-binding status of its constitutive charter. Rather than being a formal treaty ratified by national legislatures, the Helsinki Final Act represents a political commitment by the heads of government of all signatories to build security and cooperation in Europe on the basis of its provisions. This allows the OSCE to remain a flexible "process" for the evolution of improved cooperation, which avoids disputes and sanctions over implementation. By agreeing to these commitments, signatories for the first time accepted that treatment of citizens "within" their borders was also a matter of legitimate international concern. This open process of the OSCE is often given credit for helping build democracy in the Soviet Union and Eastern Europe, thus leading to the end of the Cold War. Unlike most international intergovernmental organizations, however, the OSCE does not have international legal personality, because its charter lacks legal effect. As a result, its headquarters' host, Austria, had to confer legal personality on the organization in order to be able to sign a legal agreement regarding its presence in Vienna. Political direction to the organization is given by heads of state or government during summits. Summits are not regular or scheduled but are held as needed. The last summit took place in Astana, Kazakhstan, on 1 and 2 December 2010. The high-level decision-making body of the organization is the Ministerial Council, which meets at the end of every year. 
At the ambassadorial level, the Permanent Council convenes weekly in Vienna and serves as the regular negotiating and decision-making body. The chairperson of the Permanent Council is the ambassador to the Organization of the participating state that holds the chairmanship. From 1 January 2017 to 31 December 2017, the Chairperson-in-Office was Austrian foreign minister Sebastian Kurz, who succeeded German foreign minister Frank-Walter Steinmeier. In addition to the Ministerial Council and Permanent Council, the Forum for Security Co-operation is also an OSCE decision-making body. It deals predominantly with matters of military co-operation, such as modalities for inspections according to the Vienna Document of 1999. The OSCE's Secretariat is located in Vienna, Austria. The current Secretary General is Thomas Greminger of Switzerland, who took over from Lamberto Zannier of Italy. The organization also has offices in Copenhagen, Geneva, The Hague, Prague and Warsaw. The OSCE employs 3,462 staff, including 513 in its secretariat and institutions and 2,949 in its 17 field operations. The Parliamentary Assembly of the Organization for Security and Co-operation in Europe is made up of 323 parliamentarians from 57 member states. The Parliamentary Assembly performs its functions mainly via the Standing Committee, the Bureau, and three General Committees (the Committee on Political Affairs and Security; the Committee on Economic Affairs, Science, Technology and Environment; and the Committee on Democracy, Human Rights and Humanitarian Questions). The Parliamentary Assembly passes resolutions on matters such as political and security affairs, economic and environmental issues, and democracy and human rights. Representing the collective voice of OSCE parliamentarians, these resolutions and recommendations are meant to ensure that all participating states live up to their OSCE commitments. The Parliamentary Assembly also engages in parliamentary diplomacy, and has an extensive election observation program. The oldest OSCE institution is the Office for Democratic Institutions and Human Rights (ODIHR), established in 1991 following a decision made at the 1990 Summit of Paris. It is based in Warsaw, Poland, and is active throughout the OSCE area in the fields of election observation, democratic development, human rights, tolerance and non-discrimination, rule of law, and Roma and Sinti issues. The ODIHR has observed over 300 elections and referendums since 1995, sending more than 50,000 observers. It has operated outside its own area twice, sending a team that offered technical support to the 9 October 2004 presidential elections in Afghanistan, an OSCE Partner for Co-operation, and an election support team to assist with parliamentary and provincial council elections there on 18 September 2005. ODIHR is headed by Michael Georg Link. The Office of the OSCE Representative on Freedom of the Media, established in December 1997, acts as a watchdog to provide early warning on violations of freedom of expression in OSCE participating states. The representative also assists participating states by advocating and promoting full compliance with OSCE norms, principles and commitments regarding freedom of expression and free media. As of 2011, the representative is Dunja Mijatovic, an expert in media law from Bosnia and Herzegovina. The High Commissioner on National Minorities was created on 8 July 1992 by the Helsinki Summit Meeting of the Conference on Security and Cooperation in Europe. 
It is charged with identifying and seeking early resolution of ethnic tensions that might endanger peace, stability or friendly relations between participating states. Each year the OSCE holds an OSCE Asian Conference with partner nations (currently Australia, Thailand, South Korea, Japan and Afghanistan). Almost all field operations of the OSCE have been conducted in countries of the former Yugoslavia and the former Soviet Union.

The Secretary General is the OSCE's chief administrative officer and can, when requested by the Chairmanship, serve as the representative of the Chairperson-in-Office. The office has been held by a succession of Secretaries General since its establishment in 1992.

The OSCE chairmanship is assumed at yearly intervals by one participating State, which then plays the central role in managing the Organization's work and in its external representation. The foreign minister of the country holding the chair serves as Chairperson-in-Office (CiO); the CiO's responsibilities accordingly include managing the Organization's work and representing it externally. The CiO is assisted by the previous and incoming chairpersons; the three of them together constitute the Troika. The institution originates in the Charter of Paris for a New Europe (1990); the Helsinki Document of 1992 formally institutionalized the function. The chairmanship has rotated among participating States on a calendar-year basis since 1991, and since 1993 the OSCE has operated on an annual budget denominated in millions of euros.

The OSCE considers itself a regional organization in the sense of Chapter VIII of the United Nations Charter and is an observer in the United Nations General Assembly. The Chairperson-in-Office gives routine briefings to the United Nations Security Council.

The OSCE takes a comprehensive approach to the politico-military dimension of security, which includes a number of commitments by participating States and mechanisms for conflict prevention and resolution. The organization also seeks to enhance military security by promoting greater openness, transparency and co-operation. The end of the Cold War resulted in a huge amount of surplus weapons becoming available on what is known as the international grey market for weapons. The OSCE helps to stop the often illegal spread of such weapons and offers assistance with their destruction. The OSCE hosts the annual exchange of information under the Conventional Forces in Europe treaty. It has also implemented two additional exchanges of information, the Vienna Document and the Global Exchange of Military Information. The Open Skies Consultative Commission, the implementing body for the Treaty on Open Skies, meets monthly at its Vienna headquarters.

The actions taken by the OSCE in border monitoring range from conflict prevention to post-conflict management, capacity building and institutional support. With its expertise in conflict prevention, crisis management and early warning, the OSCE contributes to worldwide efforts in combating terrorism. The OSCE works to prevent conflicts from arising and to facilitate lasting, comprehensive political settlements of existing conflicts. It also helps with the process of rehabilitation in post-conflict areas. The OSCE's Forum for Security Co-operation provides a framework for political dialogue on military reform, while practical activities are conducted by field operations as well as the Conflict Prevention Centre.
OSCE police operations are an integral part of the organization's efforts in conflict prevention and post-conflict rehabilitation. The OSCE was a rather small organization until the international community selected it to organize elections in postwar Bosnia and Herzegovina in early 1996. Ambassador Frowick, the first OSCE representative there, initiated the national elections of September 1996, along with work on human rights issues and the rule of law specifically designed to provide a foundation for judicial organization within Bosnia and Herzegovina. The OSCE had regional and field offices, including the office in Brcko in northeastern Bosnia and Herzegovina, which remained in limbo until the Brcko Arbitration Agreement could be decided, finalized and implemented. Brcko became a "special district" and remains so today. The OSCE essentially took the place of the United Nations in Bosnia and Herzegovina, in part because the Bosnian leadership felt deep contempt for the UN's efforts to stop the war, which began in 1991 and ended in 1995. During the time the United Nations was attempting a political solution, thousands of UN troops were posted in and around Bosnia and Herzegovina, with special emphasis on Sarajevo. From 1991 to 1995, over 200,000 Bosnians were killed, over one million were internally displaced, and another million became refugees. The OSCE continues to have a presence and a number of initiatives to bring a sustained peace to the region.

Activities in the economic and environmental dimension include the monitoring of developments related to economic and environmental security in OSCE participating States, with the aim of alerting them to any threat of conflict, and assisting States in the creation of economic and environmental policies, legislation and institutions to promote security in the OSCE region. The OSCE's economic activities include work related to migration management, transport, and energy security. Most activities are implemented in co-operation with partner organizations. The OSCE has also developed a range of activities in the environmental sphere aimed at addressing ecological threats to security in its participating States, including projects in the areas of hazardous waste, water management, and access to information under the Aarhus Convention.

The commitments made by OSCE participating States in the human dimension aim to ensure full respect for human rights and fundamental freedoms; to abide by the rule of law; to promote the principles of democracy by building, strengthening and protecting democratic institutions; and to promote tolerance throughout the OSCE region. Since 2003 the OSCE has had an established mechanism for combating trafficking in human beings, as defined by Article 3 of the Palermo Protocol, which is aimed at raising public awareness of the problem and building the political will within participating states to tackle it effectively. The OSCE's actions against trafficking in human beings are coordinated by the Office of the Special Representative and Co-ordinator for Combating Trafficking in Human Beings. Maria Grazia Giammarinaro, a judge in the Criminal Court of Rome, took office as the Special Representative in March 2010. From 2006 to 2009 the office was held by Eva Biaudet, a former Finnish Minister of Health and Social Services, who subsequently served as Finnish Ombudsman for Minorities. Her predecessor was former Austrian Minister Helga Konrad, who served as the first OSCE Special Representative for Combating Trafficking in Human Beings.
The Office of the Special Representative carries out a range of activities against trafficking in human beings across the OSCE region. The OSCE also claims to promote democracy and assist the participating states in building democratic institutions, and education programmes are an integral part of the organization's efforts in conflict prevention and post-conflict rehabilitation. As part of its democratization activities, the OSCE carries out election assistance projects in the run-up to, during, and following elections. However, the effectiveness of such assistance is arguable: Kazakhstan, for example, despite being a former chair of the OSCE, is considered by many to be one of the least democratic countries in the world. Moreover, the recent democratic advances made in other Central Asian republics, notably Kyrgyzstan, have led to rumours of Soviet-style disruption of the Kyrgyz democratic process by, in particular, Kazakhstan and Russia, perhaps in large part due to fears over the long-term stability of those countries' own quasi-dictatorships.

The equality of men and women is an integral part of sustainable democracy. The OSCE aims to provide equal opportunities for men and women and to integrate gender equality into policies and practices. The OSCE's human rights activities focus on such priorities as freedom of movement and religion, and the prevention of torture and trafficking in persons. The OSCE can grant consultative status to NGOs and INGOs in the form of a "Researcher-in-Residence Programme" (run by the Prague Office of the OSCE Secretariat), under which accredited representatives of national and international NGOs are granted access to all records and to numerous topical compilations related to OSCE field activities. The OSCE observes relevant media developments in its participating states with a view to addressing and providing early warning on violations of freedom of expression.

Ethnic conflict is one of the main sources of large-scale violence in Europe today. The OSCE's approach is to identify and seek early resolution of ethnic tensions and to set standards for the rights of persons belonging to minority groups; to this end, the post of High Commissioner on National Minorities was established. The Democracy Defender Award honors a person or group for contributions to the promotion of democracy and the defense of human rights "in the spirit of the Helsinki Final Act and other OSCE principles and commitments." The award was established in 2016 on the initiative of the ambassadors of eight countries, supported by the delegations of 18 OSCE countries (22 countries in 2017).

Following an unprecedented period of activity in the 1990s and early 2000s, the OSCE has in the past few years faced accusations from the CIS states (primarily Russia) of being a tool for the Western states to advance their own interests. For instance, the events in Ukraine in 2004 (the "Orange Revolution") led to allegations by Russia of OSCE involvement on behalf of the pro-Western Viktor Yushchenko. Vladimir Putin made this position very clear at the 2007 Munich Conference on Security Policy. Russia and its allies are advancing the concept of a comprehensive OSCE reform, which would make the Secretariat, institutions and field presences more centralized and accountable to collective consensus-based bodies, and would focus the work of the Organization on topical security issues (human trafficking, terrorism, non-proliferation, arms control, etc.) at the expense of the "Human Dimension", or human rights issues.
The move to reduce the autonomy of the theoretically independent OSCE institutions, such as ODIHR, would effectively grant a Russian veto over any OSCE activity. Western participating States oppose this process, which they see as an attempt to prevent the OSCE from carrying out its democratization agenda in post-Soviet countries.

Following the 2008 U.S. presidential election, the OSCE's ODIHR was accused of double standards by the Russian lawmaker Slutsky. He argued that while numerous violations of the voting process were registered, criticism came only from within the United States (media, human rights organizations, McCain's election staff), while the OSCE, known for its harsh criticism of elections in the post-Soviet space, remained silent.

In 2004 the OSCE Parliamentary Assembly sent election observers to the U.S. presidential election. The OSCE Parliamentary Assembly's president at the time was Democratic Congressman Alcee Hastings, who had previously been impeached for corruption by the U.S. Congress. The OSCE faced criticism of partisanship and double standards on account of Hastings's past, given that the OSCE's mandate was to promote democracy and the values of civil society. In 2010 the Parliamentary Assembly was criticized from within by the Latvian delegation for lacking transparency and democracy: Spencer Oliver (b. 1938), secretary general of the OSCE Parliamentary Assembly since the organization's inception in 1992, faced a challenge from the Latvian Artis Pabriks. Under the rules of the OSCE Parliamentary Assembly, the incumbent secretary general can only be replaced by a full consensus minus one, rules which Pabriks called "quite shocking from the perspective of an organization that's monitoring elections".

Before the U.S. presidential election of November 2012, the OSCE announced its intention to send electoral observers to Texas and other U.S. states. This prompted Texas Attorney General Greg Abbott to send letters to U.S. Secretary of State Hillary Clinton and to the OSCE, threatening to arrest OSCE officials if they entered electoral premises in Texas and broke Texas law. In reply, the U.S. Department of State stated that OSCE observers enjoyed immunities. In the event, no incidents between OSCE and Texas authorities were recorded during the elections.

On 21 March 2014 the OSCE deployed its Special Monitoring Mission at the request of Ukraine's government. On 27 April 2014 eight members of the OSCE Special Monitoring Mission (OSCE SMM) were taken hostage by the Girkin group that had seized power in the city of Slovyansk (Slavyansk) and appointed Vyacheslav Ponomarev mayor of the city. During the War in Donbass, an OSCE observer allowed Russian separatists to use the organization's marked vehicle, which prompted the belief that the OSCE was biased in the war and not interested in carrying out its duties of mediating a ceasefire; the organization issued a statement regretting the incident, but further allegations of separatists using OSCE-marked vehicles for transportation followed. The ceasefire agreement called for the creation of a 40 km buffer zone, but upon Ukrainian forces withdrawing from their 20 km portion of the buffer, Russian separatists are said to have simply occupied the abandoned territory without withdrawing from their own 20 km portion.
Moreover, the mission received criticism that only two checkpoints on the Russian–Ukrainian border were being monitored, which Daniel Baer, the US ambassador to the OSCE at the time, described as "seriously inadequate". On the other hand, Ukraine faced criticism following a BBC report showing an alleged violation of the Minsk agreement when Ukraine stationed tanks in residential areas of Avdiivka. The mission was also criticized for waiting months to deploy drones to help monitor the border, and for withdrawing them after only several weeks of use due to Russian electronic attacks. Ukraine has argued that approximately 80% of the OSCE observers located near Mariupol were Russian citizens and that many had ties to Russian security agencies such as the FSB and GRU. The organization has also been accused of revealing the locations of Ukrainian troops to Russian forces during the conflict, and it has been alleged that Russian OSCE observers may have directly coordinated separatist artillery strikes on Ukrainian positions.

On 1 December 2014, an OSCE observer was injured by Ukrainian counter-artillery fire while observing militants firing at Ukrainian forces. The OSCE team was located next to two pro-Russian mortar teams, and did not radio in or record the mortar teams firing on Ukrainian positions. Critics stated that this unorthodox behavior, being located next to an active separatist artillery position and not reporting the incident, showed that the OSCE team was not acting in an impartial manner.

On 27 October 2015 a suspended OSCE monitor confirmed he was a former employee of Russia's Main Intelligence Directorate. The suspended monitor stated he had had no trouble receiving the position and that neither the OSCE nor Ukraine's Security Service had thoroughly checked his background. Following the report, the OSCE issued a comment stating the monitor had been fired for violations of the organization's code of conduct. On 6 April 2016 photos surfaced of OSCE monitors attending the June 2015 wedding of a Russian separatist. The OSCE expressed regret over the incident, issuing a statement saying "The unprofessional behavior displayed by the monitors in the picture is an individual incident that should not be abused to cast a shadow on the reputation of other mission members", and reported that the monitors were no longer with the special monitoring mission.

In April 2017 an OSCE vehicle struck a mine, killing one member of the mission and injuring two. Two armoured vehicles were on patrol near Luhansk when one struck the mine. The dead man was an American paramedic; the injured were a woman from Germany and a man from the Czech Republic. On 18 July 2018 it was revealed that Russian intelligence services had received inside information about the activities of the OSCE Special Monitoring Mission to Ukraine from a staff member of the OSCE. The insider information included observers' weaknesses for alcohol and women, their financial situations, and their contacts in Ukraine. The OSCE issued a statement expressing concern over the alleged security breach.

In April 2017, Turkish President Recep Tayyip Erdoğan criticized the OSCE for reporting that opposition "No" campaigners in the Turkish constitutional referendum had faced bans, police interventions and arrests. Erdoğan said: "Now the Organization for Security and Cooperation in Europe says if the result is 'yes', that means there are a lot of problems. Who are you? First of all, you should know your place. This is not your duty."
Omri Omri ("‘Omri"; fl. 9th century BC) was, according to the Hebrew Bible, the sixth king of Israel. He was a successful military campaigner who extended the northern kingdom of Israel. Other monarchs from the House of Omri are Ahab, Ahaziah, Joram, and Athaliah. Like his predecessor, King Zimri, who ruled for only seven days, Omri is the second king mentioned in the Bible without a statement of his tribal origin. One possibility, though unproven, is that he was of the tribe of Issachar. Nothing is said in Scripture about the lineage of Omri. His name may be Amorite, Arabic, or Hebrew in origin.

Omri is credited with the construction of Samaria and establishing it as his capital. Although the Bible is silent about other actions taken during his reign, he is described as doing more evil than all the kings who preceded him. An alternative modern hypothesis maintains that, as founder of the House of Omri, an Israelite royal house, his kingdom formed the first state in the Land of Israel, and that the Kingdom of Judah only achieved statehood later. Extrabiblical sources such as the Mesha Stele and the Black Obelisk of Shalmaneser III also mention his name; however, in the case of the Black Obelisk the reference is to the dynasty named for Omri rather than to Omri himself. A minor thesis, argued by Thomas Thompson and Niels Peter Lemche, suggests that Omri may be a dynastic name indicating the apical founder of the Kingdom of Israel rather than one denoting an actual historical king.

According to the biblical narrative, Omri was "commander of the army" of King Elah when Zimri, "commander of half the king's chariots", murdered Elah and made himself king. Instead, the troops at Gibbethon chose Omri as king, and he led them to Tirzah, where they besieged it. When Zimri saw that the city was taken, he committed suicide by shutting himself in the royal palace and setting it ablaze. He died after a reign of only seven days. Although Zimri was eliminated, "half of the people" supported Tibni in opposition to Omri. It took Omri four years to subdue Tibni and at last proclaim himself undisputed king of Israel. Initially, the capital was in Tirzah, which had been besieged and whose royal palace had been burned down. The Jewish Encyclopedia suggests that "the associations of Tirzah were so repellent and sanguinary, and the location so poor for a capital, that Omri purchased a new site" for his residence. This was in Samaria, on a hill purchased from Shemer for two talents of silver, where Omri built a new capital for the kingdom. In Samaria, Omri reigned until his death and was buried there. His son Ahab became the next king.

Omri became king of Israel in the 31st year of Asa, king of Judah, and reigned for 12 years, 6 of which were in Tirzah. The biblical reference to the period of rivalry with Tibni runs from the 27th year of Asa to the 31st year. There are several possible dates: William F. Albright has dated his reign to 876–869 BC; E. R. Thiele offers the dates 888–880 BC for his rivalry with Tibni and 880–874 BC for his sole reign; and Paul L. Maier affirms that it happened between 881 and 873 BC. The fortress at Jezreel was situated on one of the main east-west routes through the kingdom. Hugh Williamson believes it served not only a military function but also a political one: a very visible example of grandiose public works used as a means of social control and to assert claims of legitimacy.
The Moabite Mesha stele (on display in the Louvre) indicates that Omri expanded his holdings to include northern Moab east of the Jordan River. It makes reference to the oppression of Moab by "Omri King of Israel". Israel would later become identified in sources as the "House of Omri" ("Bit-Humria"), with the term "Israel" being used less and less as history progressed (the other defining term for "Israel" is "Samaria", beginning in the reign of Joash). Thomas L. Thompson ("The Bible in History"), however, interprets the Mesha stele as suggesting that Omri is an eponym, or legendary founder of the kingdom rather than an historical person. The Assyrian Black Obelisk in the British Museum has been interpreted as referring to Jehu "son of Omri", though that interpretation has been questioned. The short-lived dynasty founded by Omri constituted a new chapter in the history of the Northern Kingdom of Israel. It ended almost fifty years of constant civil war over the throne. There was peace with the Kingdom of Judah to the south, and even cooperation between the two rival states, while relations with neighboring Sidon to the north were bolstered by marriages negotiated between the two royal courts. This state of peace with two powerful neighbors enabled the Kingdom of Israel to expand its influence and even political control in Transjordan, and these factors combined brought economic prosperity to the kingdom. On the other hand, peace with Sidon also resulted in the penetration of Phoenician religious ideas into the kingdom and led to a "kulturkampf" between traditionalists (as personified by the prophet Elijah and his followers) and the aristocracy (as personified by Omri's son and heir Ahab and his consort Jezebel). In foreign affairs, this period paralleled the rise of the Kingdom of Aram based in Damascus, and Israel soon found itself at war in the northeast. Most threatening, however, was the ascendancy of Assyria, which was beginning to expand westward from Mesopotamia: the Battle of Qarqar (853 BC), which pitted Shalmaneser III of Assyria against a coalition of local kings, including Ahab, was the first clash between Assyria and Israel. It was the first in a series of wars that would eventually lead to the destruction of the Kingdom of Israel in 722 BC and the reduction of the Kingdom of Judah to an Assyrian tributary state. In 841 BC, the Assyrian king Shalmaneser III campaigned along the Mediterranean coast and forced Jehu to pay tribute. Assyrian kings frequently referred to Omri's successors as belonging to the "House of Omri" ("Bit Hu-um-ri-a").
Oxycodone Oxycodone, sold under the brand name OxyContin among others, is an opioid medication used for treatment of moderate to severe pain, and a common drug of abuse. It is usually taken by mouth, and is available in immediate-release and controlled-release formulations. Onset of pain relief typically begins within 15 minutes and lasts for up to six hours with the immediate-release formulation. In the United Kingdom, it is available by injection. Combination products are also available with paracetamol (acetaminophen), ibuprofen, naloxone, and aspirin.

Common side effects include constipation, nausea, sleepiness, dizziness, itching, dry mouth, and sweating. Severe side effects may include addiction, respiratory depression (a reduction in breathing), and low blood pressure. Those allergic to codeine may also be allergic to oxycodone. Use of oxycodone in early pregnancy appears relatively safe. Opioid withdrawal may occur if the drug is rapidly stopped. Oxycodone acts by activating the μ-opioid receptor. When taken by mouth, it has roughly 1.5 times the effect of the equivalent amount of morphine.

Oxycodone was first made in Germany in 1916 from thebaine. It is available as a generic medication. In the United States, the wholesale cost per dose is less than US$0.30 as of 2018. In 2017, it was the 52nd most commonly prescribed medication in the United States, with more than 15 million prescriptions. A number of abuse-deterrent formulations are available, such as in combination with naloxone.

Oxycodone is used for managing moderate to severe acute or chronic pain when other treatments are not sufficient. It may improve quality of life in certain types of pain. It is unclear if use in chronic pain results in improved quality of life or ongoing pain relief. Oxycodone is available as a controlled-release tablet, intended to be taken every 12 hours. A July 1996 study independent of Purdue Pharma, the drug's originator, found the controlled-release formulation had a variable duration of action ranging from 10 to 12 hours. A 2006 review found that controlled-release oxycodone is comparable to immediate-release oxycodone, morphine, and hydromorphone in the management of moderate to severe cancer pain, with fewer side effects than morphine; the author concluded that the controlled-release form is a valid alternative to morphine and a first-line treatment for cancer pain. In 2014, the European Association for Palliative Care recommended oxycodone by mouth as a second-line alternative to morphine by mouth for cancer pain.

In the U.S., extended-release oxycodone is approved for use in children as young as eleven years old. The approved use is for relief of cancer pain, trauma pain, or pain due to major surgery in children already treated with opioids who can tolerate at least 20 mg per day of oxycodone; this provides an alternative to Duragesic (fentanyl), the only other extended-release opioid analgesic approved for children.

Oxycodone is available in a variety of formulations for use by mouth or under the tongue. In the US, oxycodone is only approved for use by mouth, available as tablets and oral solutions. Parenteral formulations of oxycodone (brand name OxyNorm) are also available in other parts of the world, however, and are widely used in the European Union. In Spain, the Netherlands and the United Kingdom, oxycodone is approved for intravenous (IV) and intramuscular (IM) use.
When first introduced in Germany during World War I, both IV and IM administrations of oxycodone were commonly used for postoperative pain management of Central Powers soldiers.

Serious side effects of oxycodone include reduced sensitivity to pain (beyond the pain the drug is taken to reduce), euphoria, anxiolysis, feelings of relaxation, and respiratory depression. Common side effects include constipation (23%), nausea (23%), vomiting (12%), somnolence (23%), dizziness (13%), itching (13%), dry mouth (6%), and sweating (5%). Less common side effects (experienced by less than 5% of patients) include loss of appetite, nervousness, abdominal pain, diarrhea, urine retention, dyspnea, and hiccups. Most side effects generally become less intense over time, although issues related to constipation are likely to continue for the duration of use. Oxycodone in combination with naloxone in managed-release tablets has been formulated to both deter abuse and reduce "opioid-induced constipation".

The risk of experiencing severe withdrawal symptoms is high if a patient has become physically dependent and discontinues oxycodone abruptly. Medically, when the drug has been taken regularly over an extended period, it is withdrawn gradually rather than abruptly. People who regularly use oxycodone recreationally or at higher than prescribed doses are at even higher risk of severe withdrawal symptoms. The symptoms of oxycodone withdrawal, as with other opioids, may include "anxiety, panic attack, nausea, insomnia, muscle pain, muscle weakness, fevers, and other flu-like symptoms". Withdrawal symptoms have also been reported in newborns whose mothers had been either injecting or orally taking oxycodone during pregnancy. As with other opioids, chronic use of oxycodone (particularly at higher doses) often causes concurrent hypogonadism (low sex hormone levels).

In high doses, in overdose, or in some persons not tolerant to opioids, oxycodone can cause shallow breathing, slowed heart rate, cold/clammy skin, pauses in breathing, low blood pressure, constricted pupils, circulatory collapse, respiratory arrest, and death. In 2011, it was the leading cause of drug-related deaths in the U.S.; from 2012 onwards, however, heroin and fentanyl have become more common causes of drug-related deaths. Oxycodone overdose has also been described to cause spinal cord infarction in high doses and ischemic damage to the brain, due to prolonged hypoxia from suppressed breathing.

Oxycodone is metabolized by the enzymes CYP3A4 and CYP2D6, and its clearance can therefore be altered by inhibitors and inducers of these enzymes, which increase and decrease its half-life, respectively. Natural genetic variation in these enzymes can also influence the clearance of oxycodone, which may be related to the wide inter-individual variability in its half-life and potency. Ritonavir or lopinavir/ritonavir greatly increase plasma concentrations of oxycodone in healthy human volunteers due to inhibition of CYP3A4 and CYP2D6. Rifampicin greatly reduces plasma concentrations of oxycodone due to strong induction of CYP3A4. There is also a case report of fosphenytoin, a CYP3A4 inducer, dramatically reducing the analgesic effects of oxycodone in a chronic pain patient. Dosage or medication adjustments may be necessary in each case.

Oxycodone, a semi-synthetic opioid, is a highly selective full agonist of the μ-opioid receptor (MOR).
This is the main biological target of the endogenous opioid neuropeptide β-endorphin. Oxycodone has low affinity for the δ-opioid receptor (DOR) and the κ-opioid receptor (KOR), where it is likewise an agonist. After oxycodone binds to the MOR, a G protein complex is released, which inhibits the release of neurotransmitters by the cell by decreasing the amount of cAMP produced, closing calcium channels, and opening potassium channels. Opioids like oxycodone are thought to produce their analgesic effects via activation of the MOR in the midbrain periaqueductal gray (PAG) and rostral ventromedial medulla (RVM). Conversely, they are thought to produce reward and addiction via activation of the MOR in the mesolimbic reward pathway, including in the ventral tegmental area, nucleus accumbens, and ventral pallidum. Tolerance to the analgesic and rewarding effects of opioids is complex and occurs due to receptor-level tolerance (e.g., MOR downregulation), cellular-level tolerance (e.g., cAMP upregulation), and system-level tolerance (e.g., neural adaptation due to induction of ΔFosB expression).

Taken orally, 20 mg of immediate-release oxycodone is considered to be equivalent in analgesic effect to 30 mg of morphine, while extended-release oxycodone is considered to be twice as potent as oral morphine. Similarly to most other opioids, oxycodone increases prolactin secretion, but its influence on testosterone levels is unknown. Unlike morphine, oxycodone lacks immunosuppressive activity (measured by natural killer cell activity and interleukin 2 production "in vitro"); the clinical relevance of this has not been clarified.

A few of the metabolites of oxycodone have also been found to be active as MOR agonists, some of which notably have much higher affinity for (as well as higher efficacy at) the MOR in comparison. Oxymorphone possesses 3- to 5-fold higher affinity for the MOR than does oxycodone, while noroxycodone and noroxymorphone possess one-third of and 3-fold higher affinity for the MOR, respectively, and MOR activation is 5- to 10-fold less with noroxycodone but 2-fold higher with noroxymorphone relative to oxycodone. Noroxycodone, noroxymorphone, and oxymorphone also have longer biological half-lives than oxycodone. However, despite the greater "in vitro" activity of some of its metabolites, it has been determined that oxycodone itself is responsible for 83.0% and 94.8% of its analgesic effect following oral and intravenous administration, respectively. Oxymorphone plays only a minor role, being responsible for 15.8% and 4.5% of the analgesic effect of oxycodone after oral and intravenous administration, respectively. Although the CYP2D6 genotype and the route of administration result in differential rates of oxymorphone formation, the unchanged parent compound remains the major contributor to the overall analgesic effect of oxycodone. In contrast to oxycodone and oxymorphone, noroxycodone and noroxymorphone, while also potent MOR agonists, poorly cross the blood–brain barrier into the central nervous system, and for this reason are only minimally analgesic in comparison.

In 1997, a group of Australian researchers proposed (based on a study in rats) that oxycodone acts on KORs, unlike morphine, which acts upon MORs. Further research by this group indicated the drug appears to be a high-affinity κ2b-opioid receptor agonist. However, this conclusion has been disputed, primarily on the basis that oxycodone produces effects that are typical of MOR agonists.
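The equianalgesic figures above reduce to a single multiplicative factor. The following is a minimal Python sketch, assuming the 20 mg oxycodone to 30 mg oral morphine ratio quoted above holds linearly across doses; the function name is illustrative, not a standard API, and this is not clinical guidance:

```python
# Illustrative only; not clinical guidance. Assumes the linear 20:30
# (oxycodone : morphine) oral equianalgesic ratio quoted in the text.
ORAL_MORPHINE_PER_MG_OXYCODONE = 30 / 20  # = 1.5

def oral_morphine_equivalent(oxycodone_mg: float) -> float:
    """Approximate oral morphine equivalent of an immediate-release
    oral oxycodone dose, per the 20 mg : 30 mg ratio above."""
    return oxycodone_mg * ORAL_MORPHINE_PER_MG_OXYCODONE

for dose in (5, 10, 20):
    print(f"{dose} mg oxycodone ~ {oral_morphine_equivalent(dose):.0f} mg oral morphine")
```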
In 2006, research by a Japanese group suggested the effect of oxycodone is mediated by different receptors in different situations. Specifically, in diabetic mice the KOR appears to be involved in the antinociceptive effects of oxycodone, while in nondiabetic mice the μ1-opioid receptor seems to be primarily responsible for these effects.

Oxycodone can be administered orally, intranasally, via intravenous, intramuscular, or subcutaneous injection, or rectally. The bioavailability of oral administration averages 60 to 87%, with rectal administration yielding the same results; intranasal bioavailability varies between individuals, with a mean of 46%. After a dose of conventional (instant-release) oral oxycodone, the onset of action is 10 to 30 minutes, and peak plasma levels of the drug are attained within roughly 30 to 60 minutes; in contrast, after a dose of OxyContin (an oral controlled-release formulation), peak plasma levels of oxycodone occur in about three hours. The duration of instant-release oxycodone is 3 to 6 hours, although this can vary between individuals. Oxycodone has a volume of distribution of 2.6 L/kg; from the blood it is distributed to skeletal muscle, liver, intestinal tract, lungs, spleen, and brain. Conventional oral preparations start to reduce pain within 10 to 15 minutes on an empty stomach; in contrast, OxyContin starts to reduce pain within one hour.

The metabolism of oxycodone in humans occurs in the liver, mainly via the cytochrome P450 system, and is extensive (about 95%) and complex, with many minor pathways and resulting metabolites. Around 10% (range 8–14%) of a dose of oxycodone is excreted essentially unchanged (unconjugated or conjugated) in the urine. The major metabolites of oxycodone are noroxycodone (70%), noroxymorphone ("relatively high concentrations"), and oxymorphone (5%). N-demethylation of oxycodone to noroxycodone by CYP3A4 is the major metabolic pathway, accounting for 45% ± 21% of a dose of oxycodone, while O-demethylation of oxycodone into oxymorphone by CYP2D6 and 6-ketoreduction of oxycodone into 6-oxycodols represent relatively minor metabolic pathways, accounting for 11% ± 6% and 8% ± 6% of a dose, respectively. Several of the immediate metabolites of oxycodone are subsequently conjugated with glucuronic acid and excreted in the urine. 6α-Oxycodol and 6β-oxycodol are further metabolized by N-demethylation to nor-6α-oxycodol and nor-6β-oxycodol, respectively, and by N-oxidation to 6α-oxycodol-N-oxide and 6β-oxycodol-N-oxide (which can subsequently be glucuronidated as well). Oxymorphone is itself further metabolized, via conjugation with glucuronic acid to oxymorphone-3-glucuronide, N-demethylation to noroxymorphone, and 6-ketoreduction. The first of these pathways accounts for 40% of the metabolism of oxymorphone, making oxymorphone-3-glucuronide the main metabolite of oxymorphone, while the latter two pathways account for less than 10% of its metabolism. After N-demethylation of oxymorphone, noroxymorphone is further glucuronidated to noroxymorphone-3-glucuronide.

Because oxycodone is metabolized by the cytochrome P450 system in the liver, its pharmacokinetics can be influenced by genetic polymorphisms and drug interactions concerning this system, as well as by liver function. Some people are fast metabolizers of oxycodone, resulting in reduced effects, while others are slow metabolizers, resulting in increased effects and toxicity.
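As a rough illustration of how the figures above combine, a one-compartment back-of-the-envelope estimate of plasma concentration can be made from the oral bioavailability (60–87%) and the volume of distribution (2.6 L/kg). This is a deliberate simplification that ignores absorption and elimination kinetics, and the dose and body weight used are arbitrary assumptions, not values from the text:

```python
# A sketch, not a pharmacokinetic model: C ~ F * dose / Vd, ignoring
# elimination during absorption. Figures from the text: F = 0.60-0.87,
# Vd = 2.6 L/kg. The dose (10 mg) and weight (70 kg) are assumed.
def concentration_ng_per_ml(dose_mg: float, weight_kg: float, f: float) -> float:
    vd_ml = 2.6 * weight_kg * 1000.0   # volume of distribution, L -> mL
    absorbed_ng = f * dose_mg * 1e6    # absorbed dose, mg -> ng
    return absorbed_ng / vd_ml

for f in (0.60, 0.87):                 # low and high ends of the quoted range
    c = concentration_ng_per_ml(10, 70, f)
    print(f"F = {f:.2f}: ~{c:.0f} ng/mL")
```

With these assumptions the estimate spans roughly 33 to 48 ng/mL, illustrating how the quoted bioavailability range alone produces substantial between-person variation.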
While higher CYP2D6 activity increases the effects of oxycodone (owing to increased conversion into oxymorphone), higher CYP3A4 activity has the opposite effect and decreases them (owing to increased metabolism into noroxycodone and noroxymorphone). The dose of oxycodone must be reduced in patients with reduced liver function. The clearance of oxycodone is 0.8 L/min. Oxycodone and its metabolites are mainly excreted in urine; oxycodone therefore accumulates in patients with kidney impairment. Oxycodone is eliminated in the urine 10% as unchanged oxycodone, 45% ± 21% as "N"-demethylated metabolites (noroxycodone, noroxymorphone, noroxycodols), 11% ± 6% as "O"-demethylated metabolites (oxymorphone, oxymorphols), and 8% ± 6% as 6-keto-reduced metabolites (oxycodols). Oxycodone has a half-life of 4.5 hours.

Purdue Pharma claimed in its 1992 patent application that the duration of action of OxyContin, oxycodone's controlled-release preparation, is 12 hours in "90% of patients"; it has never performed any clinical studies in which OxyContin was given at more frequent intervals. In a separate filing, Purdue claims that controlled-release oxycodone "provides pain relief in said patient for at least 12 hours after administration." However, in 2016 an investigation by the "Los Angeles Times" found that "the drug weans off hours early in many people," inducing symptoms of opiate withdrawal and intense cravings for OxyContin. One doctor, Lawrence Robbins, told journalists that over 70% of his patients would report that OxyContin provided only 4–7 hours of relief. Doctors in the 1990s would often switch their patients to a dosing schedule of once every eight hours when they complained that the duration of action of OxyContin was too short for twice-daily dosing. Purdue strongly discouraged the practice: its medical director, Robert Reder, wrote to one doctor in 1995 that "OxyContin has been developed for [12-hour] dosing...I request that you not use a [8-hourly] dosing regimen." Purdue repeatedly released memos to its sales representatives ordering them to remind doctors not to deviate from a 12-hour dosing schedule. One such memo read, "There is no Q8 dosing with OxyContin... [8-hour dosing] needs to be nipped in the bud. NOW!!"

The journalists who covered the investigation argue that Purdue Pharma insisted on a 12-hour duration of action for nearly all patients, despite evidence to the contrary, in order to protect the reputation of OxyContin as a 12-hour drug and the willingness of health insurance and managed care companies to cover OxyContin despite its high cost relative to generic opiates such as morphine. Purdue sales representatives were instructed to encourage doctors to write prescriptions for larger 12-hour doses instead of more frequent dosing. An August 1996 memo to Purdue sales representatives in Tennessee entitled "$$$$$$$$$$$$$ It's Bonus Time in the Neighborhood!" reminded the representatives that their commissions would dramatically increase if they were successful in convincing doctors to prescribe larger doses. "Los Angeles Times" journalists argue, drawing on interviews with opioid addiction experts, that such high doses of OxyContin spaced 12 hours apart create a combination of agony during opiate withdrawal (lower lows) and a schedule of reinforcement that relieves this agony, fostering addiction.
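The 4.5-hour half-life quoted above implies, under simple first-order elimination, that roughly 16% of an absorbed dose remains 12 hours later, which is the arithmetic underlying the dosing-interval dispute described in this section. A minimal sketch, assuming first-order elimination and ignoring the deliberately slowed absorption of the controlled-release form:

```python
# Fraction of absorbed drug remaining under first-order elimination,
# N(t) = (1/2) ** (t / t_half), with t_half = 4.5 h as quoted in the text.
# Purely illustrative; controlled-release absorption is intentionally slower.
HALF_LIFE_H = 4.5

def fraction_remaining(hours: float) -> float:
    return 0.5 ** (hours / HALF_LIFE_H)

for t in (0, 4.5, 9, 12):
    print(f"after {t:>4} h: {fraction_remaining(t):.0%} remaining")
```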
Oxycodone's chemical name is derived from codeine. The chemical structures are very similar, with oxycodone differing in having a hydroxyl group at carbon-14, a saturated 7,8 bond (codeine has a double bond between those carbons), and a ketone in place of the hydroxyl at carbon-6. It is also similar to hydrocodone, differing only in that it has a hydroxyl group at carbon-14.

Oxycodone is marketed as various salts, most commonly as the hydrochloride salt. The free-base conversion ratios of the different salts are: hydrochloride (0.896), bitartrate (0.667), tartrate (0.750), camphosulfonate (0.576), pectinate (0.588), phenylpropionate (0.678), sulfate (0.887), phosphate (0.763), and terephthalate (0.792). The hydrochloride salt is the basis of most American oxycodone products, whilst bitartrate, tartrate, pectinate, terephthalate and phosphate salts are also available in European Union products. Methiodide and hydroiodide salts are mentioned in older European Union publications. In terms of biosynthesis, oxycodone has been found naturally in nectar extracts from the orchid "Epipactis helleborine", along with another opioid: 3-{2-{3-{3-benzyloxypropyl}-3-indol, 7,8-didehydro- 4,5-epoxy-3,6-d-morphinan.

Oxycodone and/or its major metabolites may be measured in blood or urine to monitor for abuse, confirm a diagnosis of poisoning, or assist in a medicolegal death investigation. Many commercial opiate screening tests cross-react appreciably with oxycodone and its metabolites, but chromatographic techniques can easily distinguish oxycodone from other opiates.

Freund and Speyer of the University of Frankfurt in Germany published the first synthesis of oxycodone from thebaine in 1916. When Freund died, in 1920, Speyer wrote his obituary. Speyer, born to a Jewish family in Frankfurt am Main in 1878, became a victim of the Holocaust. He died on 5 May 1942, the second day of deportations from the Lodz Ghetto; his death was noted in the ghetto's chronicle.

The first clinical use of the drug was documented in 1917, the year after it was first developed. It was first introduced to the U.S. market in May 1939. In early 1928, Merck introduced a combination product containing scopolamine, oxycodone, and ephedrine under the German initials for the ingredients, SEE, which was renamed Scophedal (SCOpolamine, ePHEDrine and eukodAL) in 1942. It was last manufactured in 1987, but can be compounded. This combination is essentially an oxycodone analogue of the morphine-based "twilight sleep", with ephedrine added to reduce circulatory and respiratory effects. The drug became known as the "Miracle Drug of the 1930s" in Continental Europe and elsewhere, and it was the Wehrmacht's choice of battlefield analgesic for a time. The drug was expressly designed to provide what the patent application and package insert referred to as "very deep analgesia and profound and intense euphoria", as well as tranquillisation and anterograde amnesia useful for surgery and battlefield wounding cases. Oxycodone was allegedly chosen over morphine, hydromorphone, and hydrocodone for this product because its subjective side-effect profile has elements similar to cocaine's. During Operation Himmler, Scophedal was also reportedly injected in massive overdose into the prisoners dressed in Polish Army uniforms in the staged incident of 1 September 1939 which opened the Second World War. The personal notes of Adolf Hitler's physician, Theodor Morell, indicate Hitler received repeated injections of "eukodal" (oxycodone) and Scophedal, as well as Dolantin (pethidine), codeine, and, less frequently, morphine; oxycodone could not be obtained after late January 1945.
In the early 1970s, the U.S. government classified oxycodone as a Schedule II drug. Purdue Pharma, a privately held company based in Stamford, Connecticut, developed the prescription painkiller OxyContin. Upon its release in 1995, OxyContin was hailed as a medical breakthrough, a long-lasting narcotic that could help patients suffering from moderate to severe pain. The drug became a blockbuster, and has reportedly generated some US$35 billion in revenue for Purdue.

In October 2017, "The New Yorker" published a story on Mortimer Sackler and Purdue Pharma regarding their ties to the production and manipulation of the oxycodone markets. The article links Raymond and Arthur Sackler's business practices with the rise of direct pharmaceutical marketing and eventually with the rise of addiction to oxycodone in the United States, implying that Sackler bears some responsibility for the opioid epidemic in the United States. In 2019 the "New York Times" ran a piece confirming that Sackler told company officials in 2008 to "measure our performance by Rx's by strength, giving higher measures to higher strengths." This was verified with documents tied to a lawsuit, filed by the Massachusetts attorney general, Maura Healey, claiming that Purdue Pharma and members of the Sackler family knew that high doses of OxyContin over long periods would increase the risk of serious side effects, including addiction. Despite Purdue Pharma's proposal of a US$12 billion settlement of the lawsuit, the attorneys general of 23 states, including Massachusetts, rejected the settlement offer in September 2019.

Oxycodone is subject to international conventions on narcotic drugs, as well as to national laws that differ by country. The 1931 Convention for Limiting the Manufacture and Regulating the Distribution of Narcotic Drugs of the League of Nations included oxycodone. The 1961 Single Convention on Narcotic Drugs of the United Nations, which replaced the 1931 convention, categorized oxycodone in Schedule I. Global restrictions on Schedule I drugs include "limit[ing] exclusively to medical and scientific purposes the production, manufacture, export, import, distribution of, trade in, use and possession of" these drugs; "requir[ing] medical prescriptions for the supply or dispensation of [these] drugs to individuals"; and "prevent[ing] the accumulation" of quantities of these drugs "in excess of those required for the normal conduct of business".

In Australia, oxycodone is in Schedule I (derived from the Single Convention on Narcotic Drugs) of the Commonwealth's Narcotic Drugs Act 1967. In addition, it is in Schedule 8 of the Australian Standard for the Uniform Scheduling of Drugs and Poisons ("Poisons Standard"), meaning it is a "controlled drug... which should be available for use but require[s] restriction of manufacture, supply, distribution, possession and use to reduce abuse, misuse and physical or psychological dependence".

In Canada, oxycodone is a controlled substance under Schedule I of the Controlled Drugs and Substances Act (CDSA). In February 2012, Ontario passed legislation to allow the expansion of an already existing drug-tracking system for publicly funded drugs to include those that are privately insured. This database will function to identify and monitor patients' attempts to seek prescriptions from multiple doctors or to retrieve them from multiple pharmacies. Other provinces have proposed similar legislation, while some, such as Nova Scotia, have legislation already in effect for monitoring prescription drug use.
These changes have coincided with other changes in Ontario's legislation to target the misuse of painkillers and high addiction rates to drugs such as oxycodone. On February 29, 2012, Ontario passed legislation delisting oxycodone from the province's public drug benefit program, the first time any province had delisted a drug based on its addictive properties. The new law prohibits prescriptions for OxyNeo except for certain patients under the Exceptional Access Program, including palliative care and other extenuating circumstances. Patients already prescribed oxycodone will receive coverage for an additional year for OxyNeo, after which it will be disallowed unless designated under the Exceptional Access Program. Much of the legislative activity stemmed from Purdue Pharma's decision in 2011 to begin modifying OxyContin's composition to make it more difficult to crush for snorting or injecting. The new formulation, OxyNeo, is intended to be preventive in this regard while retaining its effectiveness as a painkiller.

Since introducing its "Narcotics Safety and Awareness Act", Ontario has committed to focusing on drug addiction, particularly on the monitoring and identification of problem opioid prescriptions, as well as on the education of patients, doctors, and pharmacists. This Act, introduced in 2010, commits to the establishment of a unified database to fulfil this intention. Both the public and the medical community have received the legislation positively, though concerns about the ramifications of the legal changes have been expressed. Because laws are largely provincially regulated, many speculate that a national strategy is needed to prevent smuggling across provincial borders from jurisdictions with looser restrictions.

By 2015, Purdue Pharma's abuse-resistant OxyNEO and six generic versions of OxyContin had been on the Canada-wide approved list for prescriptions since 2012. In June 2015, then federal Minister of Health Rona Ambrose announced that within three years all oxycodone products sold in Canada would need to be tamper-resistant. Some experts warned that the generic manufacturers may not have the technology to achieve that goal, possibly giving Purdue Pharma a monopoly on this opiate. Several class-action suits have been launched across Canada against the Purdue group of companies and affiliates. Claimants argue that the pharmaceutical manufacturers did not meet a standard of care and were negligent in not doing so. These lawsuits reference earlier judgments in the United States, which held that Purdue was liable for wrongful marketing practices and misbranding. Since 2007, the Purdue companies have paid over CAN$650 million in settling litigation or facing criminal fines.

In Germany, the drug is in Appendix III of the Narcotics Act ("Betäubungsmittelgesetz" or BtMG). The law allows only physicians, dentists, and veterinarians to prescribe oxycodone and the federal government to regulate the prescriptions (e.g., by requiring reporting). In Hong Kong, oxycodone is regulated under Part I of Schedule 1 of the Chapter 134 Dangerous Drugs Ordinance. Oxycodone is a restricted drug in Japan; its import and export are strictly restricted to specially designated organizations holding a prior permit to import it. In one high-profile case, an American who was a top Toyota executive living in Tokyo, and who claimed to be unaware of the law, was arrested for importing oxycodone into Japan.
In Singapore, oxycodone is listed as a Class A drug in the Misuse of Drugs Act, which means offences in relation to the drug attract the most severe level of punishment. A conviction for unauthorized manufacture of the drug attracts a minimum sentence of 10 years of imprisonment and corporal punishment of 5 strokes of the cane, and a maximum sentence of life imprisonment or 30 years of imprisonment and 15 strokes of the cane. The minimum and maximum penalties for unauthorized trafficking in the drug are, respectively, 5 years of imprisonment and 5 strokes of the cane, and 20 years of imprisonment and 15 strokes of the cane.

In the United Kingdom, oxycodone is a Class A drug under the Misuse of Drugs Act. For Class A drugs, which are "considered to be the most likely to cause harm", possession without a prescription is punishable by up to seven years in prison, an unlimited fine, or both. Dealing the drug illegally is punishable by up to life imprisonment, an unlimited fine, or both. In addition, oxycodone is a Schedule 2 drug per the Misuse of Drugs Regulations 2001, which "provide certain exemptions from the provisions of the Misuse of Drugs Act 1971".

In the United States, under the Controlled Substances Act, oxycodone is a Schedule II controlled substance, whether by itself or as part of a multi-ingredient medication. The DEA lists oxycodone, both for sale and for use in manufacturing other opioids, as ACSCN 9143, and in 2013 approved the following annual aggregate manufacturing quotas: 131.5 metric tons for sale, down from 153.75 in 2012, and 10.25 metric tons for conversion, unchanged from the previous year.

Oxycodone, like other opioid analgesics, tends to induce feelings of euphoria, relaxation and reduced anxiety in those who are occasional users. These effects make it one of the most commonly abused pharmaceutical drugs in the United States. In August 2010, Purdue Pharma reformulated its long-acting oxycodone line, marketed as OxyContin, using a polymer, Intac, to make the pills extremely difficult to crush or dissolve in water, in order to reduce OxyContin abuse. The FDA approved relabeling the reformulated version as abuse-resistant in April 2013. Pfizer manufactures a preparation of short-acting oxycodone, marketed as Oxecta, which contains inactive ingredients referred to as tamper-resistant Aversion Technology. Approved by the FDA in the U.S. in June 2011, the formulation makes crushing, chewing, snorting, or injecting the opioid impractical because of a change in its chemical properties, but it does not deter oral abuse.

The non-medical use of oxycodone existed from the early 1970s; by 2015, 91% of a national sample of injecting drug users in Australia had reported using oxycodone, and 27% had injected it in the last six months.

Opioid-related deaths in Ontario had increased by 242% from 1969 to 2014. By 2009 in Ontario there were more deaths from oxycodone overdose than from cocaine overdose. Deaths from opioid pain relievers had increased from 13.7 deaths per million residents in 1991 to 27.2 deaths per million residents in 2004. The abuse of oxycodone in Canada became a problem. The areas where oxycodone is most problematic are Atlantic Canada and Ontario, where its abuse is prevalent in rural towns and in many small to medium-sized cities. Oxycodone is also widely available across Western Canada, but methamphetamine and heroin are more serious problems in the larger cities, while oxycodone is more common in rural towns. Oxycodone is diverted through doctor shopping, prescription forgery, pharmacy theft, and overprescribing.
The recent formulations of oxycodone, particularly Purdue Pharma's crush-, chew-, injection- and dissolve-resistant OxyNEO, which replaced the banned OxyContin product in Canada in early 2012, have led to a decline in the abuse of this opiate but have increased the abuse of the more potent drug fentanyl. According to a Canadian Centre on Substance Abuse study quoted in "Maclean's" magazine, there were at least 655 fentanyl-related deaths in Canada over a five-year period. In Alberta, the Blood Tribe police claimed that from the fall of 2014 through January 2015, oxycodone pills, or a lethal fake variation referred to as "Oxy 80s" containing fentanyl made in illegal labs by members of organized crime, were responsible for ten deaths on the Blood Reserve, which is located southwest of Lethbridge, Alberta. Province-wide, approximately 120 Albertans died from fentanyl-related overdoses in 2014.

Abuse and diversion of oxycodone in the UK commenced in the early to mid-2000s. The first known death due to overdose in the UK occurred in 2002. However, recreational use there remains relatively rare.

In the United States, more than 12 million people use opioid drugs recreationally. Opioids were responsible for 49,000 of the 72,000 drug overdose deaths in the U.S. in 2017. In September 2013, the FDA released new labeling guidelines for long-acting and extended-release opioids, requiring manufacturers to remove moderate pain as an indication for use and instead state that the drug is for "pain severe enough to require daily, around-the-clock, long term opioid treatment." The updated labeling does not restrict physicians from prescribing opioids for moderate, as-needed use. Oxycodone is the most widely used recreational opioid in America. The U.S. Department of Health and Human Services estimates that about 11 million people in the U.S. consume oxycodone in a non-medical way annually. In 2007, about 42,800 emergency room visits occurred due to "episodes" involving oxycodone. Diverted oxycodone may be taken orally, ingested through insufflation, used intravenously, or inhaled as heated vapors. In 2008, recreational use of oxycodone and hydrocodone was involved in 14,800 deaths; some of these cases were due to overdoses of the acetaminophen component, resulting in fatal liver damage. Reformulated OxyContin is causing some recreational users to change to heroin, which is cheaper and easier to obtain.

The International Narcotics Control Board estimated of oxycodone were manufactured worldwide in 1998; by 2007 this figure had grown to . The United States accounted for 82% of consumption in 2007 at . Canada, Germany, Australia, and France combined accounted for 13% of consumption in 2007. In 2010, of oxycodone were illegally manufactured using a fake pill imprint, accounting for 0.8% of consumption; these illicit tablets were later seized by the U.S. Drug Enforcement Administration, according to the International Narcotics Control Board. The board also reported manufactured in 2010, a decrease from a record high of in 2009.

Expanded expressions for the compound oxycodone in the academic literature include "dihydrohydroxycodeinone", "Eucodal", "Eukodal", "14-hydroxydihydrocodeinone", and "Nucodan". In a UNESCO convention, the translations of "oxycodone" are "oxycodon" (Dutch), "oxycodone" (French), "oxicodona" (Spanish), and equivalents in Arabic, Chinese, and Russian. The word "oxycodone" should not be confused with "oxandrolone", "oxazepam", "oxybutynin", "oxytocin", or "Roxanol". Other brand names include Longtec and Shortec.
https://en.wikipedia.org/wiki?curid=22799
Occidental College Occidental College (informally Oxy) is a private liberal arts college in Los Angeles, California. Founded in 1887 by clergy and members of the Presbyterian Church, it is now a non-sectarian institution and one of the oldest liberal arts colleges on the West Coast of the United States. Occidental College was founded on April 20, 1887, by a group of Presbyterian clergy, missionaries, and laymen, including James George Bell, Lyman Stewart, and Thomas Bard. The cornerstone of the school's first building was laid in September 1887 in the Boyle Heights (now East Los Angeles) neighborhood of Los Angeles. The college's first term began a year later with 27 male and 13 female students, and tuition of $50 a year. In 1896, the Boyle Heights building was destroyed by fire. The college temporarily relocated to the old St. Vincent's College campus on Hill Street before a new site was selected in Highland Park in 1898. Eventually, the college erected three main buildings: the Academy Building, the Stimson Library, and the Hall of Arts and Letters (the Hall still stands today, converted to apartments). The Highland Park site was bisected by the tracks of the Santa Fe Railroad, and was the site of two presidential visits, first by William Howard Taft in 1909 and subsequently by Theodore Roosevelt in 1911. In 1909, the Pomona College Board of Trustees suggested a merger between Pomona and Occidental, but the proposal came to nothing. The following year, the college severed formal ties with the Presbyterian Church and became a non-sectarian, non-denominational institution. The small size of the campus and the disruption caused by frequent freight trains pushed the college's trustees to find a new location. In 1912, the school began construction of a new campus located in Los Angeles' Eagle Rock neighborhood. The Eagle Rock campus was designed by noted California architect Myron Hunt, also known as the planner of the California Institute of Technology (Caltech) campus and as designer of the Huntington Library and Art Gallery and the Rose Bowl. In 1913, the Occidental College Board of Trustees, under President John Willis Baer, announced plans to convert the college exclusively to a men's school. The plans were met with widespread backlash from students and faculty who protested the change; the community outcry garnered national headlines, and the board later dropped the proposal. Two weeks after Booker T. Washington visited Occidental, on March 27, 1914, Swan, Fowler, and Johnson Halls were dedicated at the new Eagle Rock campus. Patterson Field, today one of the oldest collegiate sports stadiums in Los Angeles, was opened in 1916. In April 1917, shortly after the United States entered World War I, the college formed a Students Army Training Corps to aid the war effort. Under Occidental President Remsen Bird, the school opened a series of new Hunt-designed buildings, including Clapp Library (1924), Hillside Theatre and a women's dormitory (Orr Hall) in 1925, Alumni Gymnasium (1926), the Freeman Student Union (1928) and a music and speech building (1929). The Delta of California chapter of Phi Beta Kappa was established at Occidental in 1926, at a time when the only other chapters in California were at Stanford, UC Berkeley, and Pomona.
English novelist Aldous Huxley, who spoke at Occidental's convocation ceremony in the then-new Thorne Hall in 1938, lampooned President Remsen Bird as Dr. Herbert Mulge of Tarzana College in his 1939 novel, "After Many a Summer Dies the Swan". Huxley was never invited back to campus. During World War II, many students left Occidental to fight in the war. In July 1943, the U.S. Navy established a Navy V-12 officer training program on campus that produced hundreds of graduates before it was disbanded in 1945 at the end of the war. Occidental President Remsen Bird worked behind the scenes to help Oxy students of Japanese descent continue their education despite mandatory evacuation orders; his letters are included in the Japanese American Relocation Collection in Clapp Library. Occidental's first Rhodes Scholar, Clarence Spaulding, was named in 1908; fifty years later, in 1958, seniors John Paden and Aaron Segal were awarded Rhodes Scholarships, the first and only time Occidental has produced two Rhodes Scholars in a single year. Paden and Segal were among the 10 Occidental students who participated that year in Crossroads Africa, a forerunner to the Peace Corps that later became a national program. In 1969, 42 students were suspended for peacefully protesting military recruiting on campus. One year later, faculty voted to suspend classes in the wake of the Kent State shootings and America's invasion of Cambodia. Subsequently, Oxy students wrote 7,000 letters to Washington, D.C., protesting U.S. involvement in the war in Southeast Asia. Occidental launched one of the country's first Upward Bound programs in 1966, aimed at increasing the number of low-income, underrepresented high school students who become the first in their family to go to college. Also in 1969, the school opened its first two co-ed dormitories, and two more followed a year later. In 1988, John Brooks Slaughter became Occidental's first black president. Building on faculty and student advocacy and a series of grants the college had received previously to increase the diversity of the Occidental student body, Slaughter led the process of creating a new mission statement that is still used today. Slaughter also led the college's community outreach expansion with the creation of the Center for Volunteerism and Community Service, the predecessor of the current Center for Community Based Learning. In November 1990, the college, which was initially established as a Presbyterian institution but is no longer religiously affiliated, rededicated the campus' main chapel as the Herrick Memorial Chapel and Interfaith Center. The school also took down the crosses in the chapel in an attempt to "broaden Occidental's appeal among non-Christian students". In July 2006, Susan Westerberg Prager became Occidental's first female president. She left her position in 2007 during the fall term. Robert Skotheim, the former president of Whitman College and the Huntington Library, then served as interim president. In July 2009, Jonathan Veitch, formerly dean of The New School's Eugene Lang College, became Occidental's 15th president and the first to be a native Angeleno. The college received national scrutiny in 2014 when the U.S. Department of Education named Occidental College as one of 55 higher education institutions under investigation "for possible violations of federal law over the handling of sexual violence and harassment".
In response to student and faculty outcry, the college adopted a new interim sexual misconduct policy, hired a former assistant district attorney as a full-time, independent Title IX coordinator, and added a new 24-hour, 7-days-a-week telephone hotline. The school also created a permanent Sexual Misconduct Advisory Board made up of students, faculty, and staff. Two years later, the investigation was concluded with the Office of Civil Rights finding that "the preponderance of the evidence does not support a conclusion that the College violated Title IX, except with respect to the issue of promptness in several cases during the 2012-13 school years." President Barack Obama attended Occidental for two years before transferring to Columbia University. In 2015, following the resurgence of a fake news story from 2009, "birthers" falsely claimed that Obama's Occidental College transcript revealed he had received financial aid as a foreign student from Indonesia. Architect Myron Hunt created the original campus master plan for Occidental's Eagle Rock campus in 1911. He structured the campus in a Mediterranean style, with covered walkways and tile roofs. The campus landscape was designed and developed by Beatrix Farrand in the late 1930s. All of the 19 buildings designed by Hunt remain in use today, including Johnson Hall, now the home of the McKinnon Center for Global Affairs. Built on a hillside, the Eagle Rock campus includes undeveloped land as well as a local landmark known as Fiji Hill. There are 12 on-campus residence halls, and the main dining facility is The Marketplace, which is located in the Johnson Student Center. Some buildings, such as the Hameetman Science Center (designed by Anshen + Allen, 2003), deviate from the original architecture with large glass windows and metal balconies (the Hameetman lobby houses a large Foucault pendulum). In 1979, Occidental installed "Water Forms II", a kinetic fountain designed by professor George Baker. The fountain is a campus landmark and was featured prominently in a 1984 film. The campus is also noted for its outdoor Bird Amphitheater, where between 1960 and 1996 a season of summer plays was performed, including Shakespeare plays and musicals. However, financial problems caused the Theater Department to end its summer programs, and few plays have been performed there since. In 1989, the college dedicated Keck Theater, a post-modern theater with a movable stage and seating arrangements for a variety of different types of shows, designed by the architectural firm of Kamnitzer and Cotton. J. M. Barrie's "Peter Pan" was the first show performed there, at the opening ceremony in the summer of 1989. Occidental College was ranked as the sixth "Most Beautiful" campus by "Newsweek" in 2012. The campus is home to a 1-megawatt ground-mounted solar array, which is the largest hillside array on an American college campus and the largest of its kind in Los Angeles. The 4,886-panel installation was completed in spring 2013 and inaugurated on the school's 126th anniversary. There are 34 majors offered on campus (and nine minor-only programs, including Public Health, Linguistics, and Classical Studies) and a 9:1 student–faculty ratio. The average class size is 18 students, and most students take four classes per semester. Since 1908, Occidental has graduated 10 Rhodes Scholars.
The 2017 edition of the "Fiske Guide to Colleges" gave Occidental four-star ratings (out of five) in academics and quality of life. The Princeton Review's "The Best 381 Colleges" (2017 edition) gave Occidental ratings of 91 (out of 100) in academics and quality of life and 95 in financial aid. In the 2019 "Forbes" ranking of America's Top Colleges, Occidental ranked 102nd out of 650 liberal arts colleges, universities and service academies. In "U.S. News & World Report"'s 2020 rankings of American liberal arts colleges, Occidental was ranked tied for 39th overall, 67th for "Best Value", and tied at 90th for "Top Performers on Social Mobility". "Kiplinger's" "Best College Values 2019" rankings placed Occidental 58th among 149 liberal arts colleges. "U.S. News" deemed Occidental's admissions "more selective", with the class of 2020 acceptance rate being 37.3%. Of those admitted who submitted such data, 52% were in the top 10% of their high school class. The SAT 25th–75th percentile scores were 1810–2120. Of those admitted to the class of 2020, 50% identified as persons of color, and 13% were international students. Divided into three parts, the Core Program was designed by the faculty of Occidental to unify and enhance the liberal arts education offered by the school. First-year seminars (eight course hours in total) are the centerpiece of the Core Program. Students are given a variety of class choices to fulfill the seminar requirement and to satisfy the first-year writing requirement. While the classes range in topic, each is based on a curriculum of cultural studies. The classes are designed to expose students to the rigor of college academics and to the four principles of the college mission: Excellence, Equity, Community, and Service. The Core Program's emphasis on global literacy requires students to take a minimum of three courses that touch on at least three of the following geographical areas: Africa and the Middle East; Asia and the Pacific; Europe; Latin America; the United States; and Intercultural. Students are also required to demonstrate proficiency in writing and in a foreign language and to take courses in the fine arts and in the sciences, mathematics, or other courses that address formal methods of reasoning. The final portion of the Core Program requires students to pass a senior comprehensive examination in their chosen field. Comprehensive examinations may include seminars, creative projects, fieldwork, oral exams, theses, or field research projects. Students at Occidental can take courses at the California Institute of Technology (Caltech) in nearby Pasadena free of charge. In addition, a 3-2 engineering program allows qualified students the opportunity to study at Occidental for three years, completing their undergraduate experience with an additional two years at either Caltech or Columbia University. At the end of the five years, the student receives two degrees: a Bachelor of Arts in the Combined Plan from Occidental and a Bachelor of Science in the selected field of engineering from the engineering school. Art majors at Occidental College can take courses at the Art Center College of Design in Pasadena, one of the country's top-ranked art schools. The program is not open to first-year students, but as with the Caltech exchange program, students receive full course credit. No additional tuition payments are required.
Students with a competitive GPA and LSAT score may be admitted to Columbia Law School's Accelerated Interdisciplinary Program in Legal Education upon completion of their junior year at Occidental. Admittance to the program enables students to earn a bachelor's degree from Occidental and a law degree from Columbia in six years. Students who are interested in biotechnology, become biochemistry majors, and maintain a 3.2 GPA in the necessary courses are guaranteed admission to the Keck master's in bioscience program. The Keck Graduate Institute is part of the Claremont Colleges consortium. At the beginning of every school year, freshmen participate in Convocation, a formal ceremony welcoming new students to the college in which the faculty wear their full academic regalia and students don robes. Founders Day is celebrated annually at the school on April 20, the day in 1887 when Occidental's incorporation papers were officially signed by the California Secretary of State. For their first three years at Occidental, all students are guaranteed housing on campus; for seniors, on-campus housing is optional. Freshmen do not get to choose where they live; the Office of Residential Education & Housing Services arranges housing by pairing students based on a short form students fill out during the summer before they arrive on campus. Occidental College dorm life consists of 13 co-ed residential housing facilities. After a student's first year, he or she can choose to live in a number of dorms that house sophomores, juniors, and seniors; one-third of all these halls are reserved for each grade. These dorms include Newcomb Hall, Wylie Hall, Erdman Hall, Haines Hall, Rangeview Hall and Stearns Hall. There are also themed-living communities, which consist of the Multicultural Hall in Pauley (open to all years), all-women housing (Berkus House, named after alumnus Dave Berkus), the E. Norris Hall, the Queer House, and the Food Justice house. Occidental College has various student-run clubs, organizations and ventures such as the Green Bean Coffee Lounge, an organic garden, and a student-managed bike sharing and repair program. There are also traditional groups such as glee club, Greek organizations, and student media outlets. The campus newspaper is "The Occidental", an independent, student-run publication. It has been published continuously since 1893. As of the 2019–20 school year, "The Occidental" publishes biweekly in print and weekly online. KOXY is a student-run campus radio station, in operation in the 1960s and 1970s, and again since 2000. It originally operated on the frequency 104.7 in and around campus from 1968 to 2009, but switched to webstream-only distribution in 2009. KOXY sponsors several on-campus events. In 2010, Occidental College launched a TV station called CatAList, founded by then-students Daniel Watson and Raffy Cortina; Cortina was also the first Occidental student to be awarded a Student Academy Award from the Academy of Motion Picture Arts and Sciences for his short "Bottled Up". The station produces 20–30 minutes of student-run content weekly on a variety of topics. Occidental College's Greek Council includes the local sororities Alpha Lambda Phi Alpha and Delta Omicron Tau; the national sorority Kappa Alpha Theta; the local co-ed fraternity Zeta Tau Zeta; and the national fraternity Sigma Alpha Epsilon. These Greek organizations are social rather than academic organizations.
Occidental has a fall and a spring Greek recruiting period; first-year students are first eligible to participate in Greek recruitment during the spring of their first year. Occidental also has two cultural Greek organizations: Kappa Alpha Psi and Sigma Lambda Gamma. The college is working to expand its roster of Greek organizations by adding Phi Beta Sigma, Delta Sigma Theta, and Zeta Phi Beta. There are various entities at Occidental College that promote local community involvement opportunities in Eagle Rock, Highland Park and Los Angeles. These include the Urban and Environmental Policy Institute (UEPI), the Office of Community Engagement (OCE), the Center for Community Based Learning (CCBL), the Neighborhood Partnership Program (NPP), and Upward Bound. Occidental is one of the five schools that founded the Southern California Intercollegiate Athletic Conference (SCIAC) in 1915 and is currently a member of the SCIAC and NCAA Division III. Occidental fields 21 varsity sports teams along with a program of club sports and intramural competition. Approximately 25 percent of the student body participates in a varsity sports program. During the 2006–2007 athletic season, the Tigers cross country, American football and basketball teams were Southern California Intercollegiate Athletic Conference champions. In 2014, diver Jessica Robson set the Southern California Intercollegiate Athletic Conference records for both 1m and 3m diving. The school's Blackshirts rugby union team was also league champion for the first time in five years. In 2011, Jeremy Castro ('99) and Patrick Guthrie ('86) steered the squad to an NSCRO final, falling to Longwood University 36-27 in Virginia Beach, Virginia. In addition, the college has a dance team that performs at every home football and basketball game. Occidental athletics date back to 1894, when the College helped organize the first intercollegiate athletic competition in Southern California. Since then, Oxy has produced more than a dozen Olympians, world-record holders, and national champions, including 1935 national girls' tennis champion Pat Henry Yeomans '38, two-time diving gold medalist Sammy Lee '43, and pole vault silver medalist Bob Gutowski '57. Occidental has long-standing football rivalries with Pomona College and Whittier College; the Tigers have played both the Sagehens and the Poets over 100 times. In 1982, the Occidental College football team had a rare opportunity for national prominence when, due to the 1982 National Football League strike, their game with San Diego was broadcast on national television. In 2017, Occidental cancelled the remainder of its football season due to a lack of healthy players, as few as 30 in some cases. The team forfeited two games and was outscored 170-19 in the other three. In 2011, Occidental College lost a basketball game to Caltech by a score of 46 to 45, giving the Caltech Beavers their first conference win in 26 years and putting an end to their 310-game losing streak. Famous Occidental College Tigers include NFL coach Jim E. Mora, former American Football League Most Valuable Player and politician Jack Kemp, former NFL player Vance Mueller, 2011 U.S. Senior Open champion Olin Browne, and CFL player Justin Goltz (Winnipeg Blue Bombers). Notable graduates of Occidental College include filmmaker Terry Gilliam, football player and politician Jack Kemp, pioneering African-American physicist and inventor George Edward Alcorn Jr., former New Orleans Saints and Indianapolis Colts head coach Jim E.
Mora, co-inventor of the hard disk drive William Goddard, federal judge Jacqueline Nguyen, historian and chancellor of the California State University system Glenn Dumke, former Lieutenant Governor of California Robert Finch, adventurer and writer Homer Lea, poet Robinson Jeffers, librarian and writer Lawrence Clark Powell, civil rights activist Ernesto Galarza, television director Jesus Salvador Trevino, journalist and current dean of the Columbia University Graduate School of Journalism Steve Coll, actor and writer George Nader, veteran executive at Walt Disney Imagineering Joe Rohde, and CEO of Warner Music Group Stephen Cooper. Other notable alumni include former First Lady of Colorado Dottie Lamm, Academy Award–winning actor and filmmaker Ben Affleck, actor Luke Wilson, producer Todd Garner, and actress Emily Osment. Notable faculty members include the American urban policy analyst Peter Dreier, former U.S. Ambassador Derek Shearer, former CNN and Fox News contributor Caroline Heldman, chemist Frank L. Lambert, and the 2005 PEN American Center Literary Award winner in poetry Martha Ronk. Occidental's campus, architecture, and proximity to Hollywood have made it a desired location for a number of film and television productions.
https://en.wikipedia.org/wiki?curid=22800
1986 United States bombing of Libya The 1986 United States bombing of Libya, code-named Operation El Dorado Canyon, comprised air strikes by the United States against Libya on Tuesday, 15 April 1986. The attack was carried out by the U.S. Air Force, U.S. Navy and U.S. Marine Corps in retaliation for the West Berlin discotheque bombing ten days earlier. There were 40 reported Libyan casualties, and one U.S. plane was shot down. One of the claimed Libyan deaths was of a baby girl, reported to be Muammar Gaddafi's daughter, Hana Gaddafi. However, there were doubts as to whether she was actually killed, or whether she even existed. Libya represented a high priority for President Ronald Reagan shortly after his 1981 inauguration. Libyan leader Muammar Gaddafi was firmly anti-Israel and had supported violent organizations in the Palestinian territories and Syria. There were reports that Libya was attempting to become a nuclear power, and Gaddafi's occupation of Chad, which was rich in uranium, was of major concern to the United States. Gaddafi's ambitions to set up a federation of Arab and Muslim states in North Africa were alarming to U.S. interests. Furthermore, then-Secretary of State Alexander Haig wanted to take proactive measures against Gaddafi because he had been using former Central Intelligence Agency (CIA) operatives (most notably Edwin P. Wilson and Frank E. Terpil) to help set up terrorist camps. After the December 1985 Rome and Vienna airport attacks, which killed 19 and wounded approximately 140, Gaddafi indicated that he would continue to support the Red Army Faction, the Red Brigades, and the Irish Republican Army as long as the European governments supported anti-Gaddafi Libyans. After years of occasional skirmishes with Libya over Libyan territorial claims to the Gulf of Sidra, the United States contemplated a military attack to strike targets within the Libyan mainland. In March 1986, the United States, asserting the territorial-waters limit recognized under international law, sent a carrier task force to the region. Libya responded with aggressive counter-maneuvers on 24 March that led to a naval engagement in the Gulf of Sidra. On 5 April 1986, Libyan agents bombed the "La Belle" nightclub in West Berlin, killing three people, including a U.S. serviceman, and injuring 229 people. West Germany and the United States obtained cable transcripts from Libyan agents in East Germany who were involved in the attack. More detailed information was retrieved years later when Stasi archives were investigated by the reunited Germany. Libyan agents who had carried out the operation from the Libyan embassy in East Germany were identified and prosecuted by Germany in the 1990s. The attack mission against Libya had been preceded in October 1985 by an exercise in which the 20th TFW, stationed at RAF Upper Heyford in the UK and equipped with F-111Es, received a top-secret order to launch a simulated attack mission on 18 October, with ten F-111Es each armed with eight 500-lb practice bombs, against a simulated airfield in Newfoundland, Canada, south of CFB Goose Bay. The mission was designated Operation Ghost Rider. It was a full rehearsal for a long-range strike against Libya and was completed successfully, with the exception of one aircraft that had all but one of its eight bombs hang up on one of its wing racks. The lessons learned were passed on to the 48th TFW, which was equipped with the newer "F" models of the F-111.
Elements of the then-secret 4450th Tactical Group (USAF) were put on standby to fly the strike mission against Libya. Over 30 F-117s had already been delivered to Tactical Air Command (USAF) and were operating from Tonopah Test Range Airport in Nevada. Commanders in the North Africa/Mediterranean theaters knew nothing about the capabilities of the F-117, or that the aircraft even existed. Within an hour of the planned launch of the F-117s, the Secretary of Defense scrubbed the stealth mission, fearing a compromise of the secret aircraft and its development program. The air strike was carried out with conventional U.S. Navy and U.S. Air Force aircraft. The F-117 would remain unknown to the world for more than two years, before being unveiled in 1988 and featured prominently in media coverage of Operation Desert Storm. For the Libyan raid, the United States was denied overflight rights by France, Spain, and Italy as well as the use of European continental bases, forcing the Air Force portion of the operation to be flown around France and Spain, over Portugal and through the Straits of Gibraltar, adding 1,300 miles (2,100 km) each way and requiring multiple aerial refuelings. The French refusal alone added 2,800 km and was imposed despite the fact that France had itself been the target of terrorism directed by the Gaddafi government in Libya. French president Mitterrand refused overflight clearance because the United States was interested in limited action in Libya while France was more interested in major action that would remove Gaddafi from power. Another factor in the French decision was the United States' last-minute failure to participate in a retaliatory air raid on Iranian positions after the 1983 Beirut barracks bombings. After several unproductive days of meeting with European and Arab nations, and influenced by an American serviceman's death, Ronald Reagan on 14 April ordered an air raid on five selected Libyan targets. Among operational United States tactical aircraft, only the General Dynamics F-111 and the A-6 Intruder possessed the ability to attack at night with the required precision. Although the F-111s would be required to fly from distant bases, they were essential to mission success, because the eighteen A-6s available aboard "America" and "Coral Sea" could not carry enough bombs to simultaneously inflict the desired damage on the five targets selected. Twenty-eight McDonnell Douglas KC-10 Extenders and Boeing KC-135 Stratotankers took off from RAF Mildenhall and RAF Fairford shortly after 19:00 on 14 April. These tankers would conduct four silent refueling operations over the round-trip route the F-111s would fly to target. Within minutes the tankers were followed by twenty-four F-111F strike aircraft of the 48th Tactical Fighter Wing, flying from RAF Lakenheath, and five EF-111A Ravens of the 20th Tactical Fighter Wing from RAF Upper Heyford. Six F-111s and one EF-111 were designated spares, which returned to base after the first refueling was completed without any system failures among the designated strike aircraft. "America" was on station in the Gulf of Sidra, but "Coral Sea" was preparing to leave the Mediterranean, and made a high-speed return from Spain. Naval aviators were dismayed when pre-raid news broadcasts eliminated any element of surprise by listing their mission times and target areas.
"Americas" air group would strike targets in downtown Benghazi and provide fighter and suppression support for the Air Force bombers, while "Coral Seas" planes would strike the Benina airfield outside Benghazi and provide fighter and suppression support for the Navy bombers. About 01:00 "America" launched six A-6 strike aircraft with Mark 82 bombs for the Jamahiriyah Guard barracks and six A-7 strike support aircraft. "Coral Sea", operating east of "America" simultaneously launched eight A-6 and six F/A-18. Additional fighters were launched for combat air patrol (CAP). The raid began in the early hours of 15 April, with the stated objectives of sending a message and reducing Libya's ability to support and train terrorists. Reagan warned that "if necessary, [they] shall do it again." Coordinated jamming by the EF-111s and EA-6B Prowlers began at 01:54 (Libyan time) as the A-7s and F/A-18s began launching AGM-88 HARM and AGM-45 Shrike for SAM suppression. The attack began at 0200 hours (Libyan time), and lasted about twelve minutes, with 60 tons of munitions dropped. The F-111 bombers' rules of engagement required target identification by both radar and Pave Tack prior to bomb release to minimize collateral damage. Of the nine F-111s targeting Bab al-Azizia, only three placed their GBU-10 Paveway II bombs on target. One F-111 was shot down by a Libyan ZSU-23-4 over the Gulf of Sidra and one F-111's bombs missed the barracks, striking diplomatic and civilian sites in Tripoli, and narrowly missing the French embassy. All three F-111s assigned to Sidi Bilal released their GBU-10 bombs on target. One of the six F-111s assigned to bomb the Tripoli airfield aborted its mission with a terrain-following radar malfunction, but the remaining five dropped BSU-49 high drag bombs destroying two Il-76 transport aircraft. "Americas" A-6s damaged the Jamahiriyah MiG assembly warehouse and destroyed four MiG shipping crates. Two A-6s from "Coral Sea" aborted their mission, but five A-6s with CBU-59 APAM cluster bombs and one with Mk 82 bombs struck Benina airfield destroying three or four MiGs, two Mil Mi-8 helicopters, one Fokker F27 Friendship transport, and one small straight-wing aircraft. Some Libyan soldiers abandoned their positions in fright and confusion, and officers were slow to give orders. Libyan anti-aircraft fire did not begin until after the planes had passed over their targets. No Libyan fighters launched, and HARM launches and jamming prevented any of the 2K12 Kub, S-75 Dvina, S-125 Neva/Pechora, or Crotale SAM launches from homing. Within twelve minutes, all United States aircraft were "feet wet" outbound over the Mediterranean. Navy strike aircraft had been recovered aboard their carriers by 02:53 (Libyan time) and surviving Air Force planes, with the exception of one F-111 which landed in Rota, Andalusia with an overheated engine, had returned to Britain by 10:10 (Libyan time). The Libyan air defense network was extensive, and included: Covering Tripoli alone were: Forewarned by a telephone call, Libyan leader Muammar Gaddafi and his family rushed out of their residence in the Bab al-Azizia compound moments before the bombs dropped. It was long thought that the call came from Malta's Prime Minister, Karmenu Mifsud Bonnici. However, Italian Prime Minister Bettino Craxi was the person who actually warned Gaddafi, according to Giulio Andreotti, Italy's foreign minister at the time, and to Abdel Rahman Shalgham, Libya's then-ambassador to Italy. 
Shalgham's statement was also confirmed by Margherita Boniver, foreign affairs chief of Craxi's Socialist Party at the time. According to medical staff in a nearby hospital, two dozen casualties were brought in wearing military uniforms, and two without uniforms. Total Libyan casualties were estimated at 60, including those at the bombed airbases. An infant girl was among the casualties; her body was shown to American reporters, who were told she was Gaddafi's recently adopted daughter Hana. However, there was and remains much skepticism over the claim. She may not have died; the adoption may have been posthumous; or he may have adopted a second daughter and given her the same name after the first one died. Two U.S. Air Force captains, Fernando L. Ribas-Dominicci and Paul F. Lorence, were killed when their F-111 fighter-bomber was shot down over the Gulf of Sidra. In the hours following the attack, the U.S. military refused to speculate as to whether the fighter-bomber had been shot down, with Defense Secretary Caspar Weinberger suggesting that it could have experienced radio trouble or been diverted to another airfield. The next day, the Pentagon announced it was no longer searching for the F-111 believed to have been downed by a Libyan missile. On 25 December 1988, Gaddafi offered to release the body of Lorence to his family through Pope John Paul II. The body, returned in 1989, was identified from dental records as Ribas-Dominicci's. An autopsy conducted in Spain confirmed that he had drowned after his plane was shot down over the Gulf of Sidra. Libya denies that it held Lorence's body. However, Lorence's brother said that he and his mother saw television footage of a Libyan holding a white helmet with the name "Lorence" stenciled on the back. Furthermore, William C. Chasey, who toured the Bab al-Azizia barracks, claimed to have seen two flight suits and helmets engraved with the names "Lorence" and "Ribas-Dominicci", as well as the wreckage of their F-111. Gaddafi announced that he had "won a spectacular military victory over the United States", and the country was officially renamed the "Great Socialist People's Libyan Arab Jamahiriyah". Gaddafi said reconciliation between Libya and the United States was impossible so long as Reagan was in the White House; of the president he said, "He is mad. He is foolish. He is an Israeli dog." He said he had no plans to attack the United States or U.S. targets. He claimed that Reagan wanted to kill him, stating, "Was Reagan trying to kill me? Of course. The attack was concentrated on my house and I was in my house." He also described how he rescued his family. When asked whether he was in danger of losing power, he said, "Really, these reports and writings are not true. As you can see I am fine, and there has been no change in our country." The Government of Libya said that the United States had fallen prey to arrogance and madness of power and wanted to become the world's policeman. It charged that any party that did not agree to become an American vassal was an outlaw, a terrorist, and a devil. Gaddafi quashed an internal revolt, the organization of which he blamed on the United States, although he appeared to have left the public sphere for a time in 1986 and 1987. The Libyan Post dedicated several postage stamp issues to the event, from 1986 until 2001. The first issue was released on 13 July 1986 (ref. Scott catalogue n.1311 – Michel catalogue n.1699). The last issue was released on 15 April 2001 (ref.
Scott catalogue n.1653 – Michel catalogue n.2748–2763). Libya responded by firing two Scud missiles at a United States Coast Guard station on the Italian island of Lampedusa; the missiles passed over the island and landed in the sea. There was only limited change in Libyan-connected terrorism. The Libyan government was alleged to have ordered the hijacking of Pan Am Flight 73 in Pakistan on 5 September 1986, which resulted in the deaths of 20 people. The allegation did not come to light until it was reported by "The Sunday Times" in March 2004, days after British Prime Minister Tony Blair paid the first official visit to Tripoli by a Western leader in a generation. In May 1987, Australia expelled diplomats and broke off relations with Libya, claiming Libya sought to fuel violence in Australia and Oceania. In late 1987, French authorities stopped a merchant vessel, the MV "Eksund", which was attempting to deliver 150 tons of Soviet arms from Libya to the Irish Republican Army (IRA). In Beirut, Lebanon, two British hostages held by the Libyan-supported Abu Nidal Organization, Leigh Douglas and Philip Padfield, along with an American named Peter Kilburn, were shot dead in revenge. In addition, journalist John McCarthy was kidnapped, and tourist Paul Appleby was murdered in Jerusalem. Another British hostage, Alec Collett, was also killed in retaliation for the bombing of Libya. Collett was shown being hanged in a videotape. His body was found in November 2009. On 21 December 1988, a bomb detonated aboard Pan Am Flight 103, which exploded in mid-air and crashed on the town of Lockerbie in Scotland, killing all 259 people aboard and 11 people in Lockerbie. Iran was initially thought to have been responsible for the bombing, in revenge for the downing of Iran Air Flight 655 by the American missile cruiser USS "Vincennes" over the Persian Gulf, but in 1991 two Libyans were charged, one of whom was convicted of the crime in a controversial judgement on 31 January 2001. The Libyan Government accepted responsibility for the Pan Am Flight 103 bombing on 29 May 2002, and offered $2.7 billion to compensate the families of the 270 victims. The convicted Libyan, Abdelbaset al-Megrahi, who was suffering from terminal prostate cancer, was released in August 2009 by the Scottish Government on compassionate grounds. He died in 2012. In May 2014, a group of relatives of the Lockerbie victims continued to campaign for al-Megrahi's name to be cleared by reopening the case. The attack was condemned by many countries. By a vote of 79 in favor to 28 against, with 33 abstentions, the United Nations General Assembly adopted resolution 41/38, which "condemns the military attack perpetrated against the Socialist People's Libyan Arab Jamahiriya on 15 April 1986, which constitutes a violation of the Charter of the United Nations and of international law." A meeting of the Non-Aligned Movement said that it condemned the "dastardly, blatant and unprovoked act of aggression". The League of Arab States expressed that it was outraged at the United States' aggression and that it reinforced an element of anarchy in international relations. The Assembly of Heads of State of the Organization of African Unity declared that the deliberate attempt to kill Libyans violated the principles of international law. The Government of Iran asserted that the attack constituted a policy of aggression, gunboat diplomacy, and an act of war, and called for an extensive political and economic boycott of the United States.
Others saw the United States' motive as an attempt to eliminate Libya's revolution. China stated that the U.S. attack violated norms of international relations and had aggravated tension in the region. The Soviet Union said that there was a clear link between the attack and U.S. policy aimed at stirring up existing hotbeds of tension and creating new ones, and at destabilizing the international situation. West Germany stated that international disputes required diplomatic and not military solutions, and France also criticized the bombing. Some observers held the opinion that Article 51 of the UN Charter set limitations on the use of force in exercising the legitimate right of self-defense in the absence of an act of aggression, and affirmed that there was no such act by Libya. It was charged that the United States did not exhaust the Charter provisions for settling disputes under Article 33. The "Wall Street Journal" protested that if other nations applied Article 51 as cavalierly as the United States, then "the Nicaraguan government, very reasonably predicting that the U.S. is planning an attack on its territory, has the right to bomb Washington." British Shadow Foreign Secretary Denis Healey told ABC News that, "by this same rationale of defense against future attack, Britain could bomb apartment blocks in New York and Chicago on the ground that they contained people sending money and military supplies to the Irish Republican Army." Others asserted that Libya was innocent in the bombing of the West Berlin discotheque. The U.S. received support from the UK, Canada, Australia, Israel, and 25 other countries. Its doctrine of declaring a war on what it called "terrorist havens" was not repeated until 1998, when President Bill Clinton ordered strikes on six terrorist camps in Afghanistan. Margaret Thatcher's approval of the use of Royal Air Force bases led to substantial public criticism, including an unprecedented story in "The Sunday Times" suggesting the Queen was upset by an "uncaring" Prime Minister. However, the Americans strongly endorsed Thatcher, and the long-standing Special Relationship between the United States and Britain was strengthened. Although the Soviet Union was ostensibly friendly with Libya, it had, by the time of the Libya bombing, made its increasing ambivalence toward Libya apparent in public communications. Gaddafi had a history of verbally attacking the policy agendas and ideology of the Soviet Union, and he often engaged in international interventions and meddling that conflicted with Soviet goals in a variety of spheres. During a period when the Soviet Union was attempting to lead a subtle diplomatic effort that could improve its global status, close association with the whims of Gaddafi became a liability. Throughout the crisis, the Soviet Union explicitly announced that it would not provide additional help to Libya beyond resupplying basic armaments and munitions. It made no attempt to militarily intimidate the United States, despite the ongoing American operations in the Gulf of Sidra and its previous knowledge that the United States might launch an attack. The Soviet Union did not completely ignore the event, issuing a denunciation of this "wild" and "barbaric" act by the United States. After the raid, Moscow did cancel a planned visit to the United States by foreign affairs minister Eduard Shevardnadze.
At the same time, it clearly signaled that it did not want this action to affect negotiations about the upcoming summer summit between the United States and the Soviet Union and its plans for new arms control agreements. Former U.S. Attorney General Ramsey Clark, acting for Libyan citizens who had been killed or injured in the bombing raid by the U.S. using British air bases, brought suit under international law against the United States and the United Kingdom in U.S. federal court. The lawsuit was dismissed as frivolous; a subsequent appeal was denied, and monetary sanctions against Clark were allowed (Saltany v. Reagan, 886 F.2d 438 (D.C. Cir. 1989)). Every year between at least 1994 and 2006, the United Nations General Assembly scheduled a declaration from the Organization of African Unity about the incident, but systematically deferred the discussion year after year until formally putting it aside (along with several other issues which had been similarly rescheduled for years) in 2005. On the first anniversary of the bombing, in April 1987, European and North American left-wing activists gathered to commemorate the event. After a day of social and cultural networking with local Libyans, including a tour of Gaddafi's bombed house, the group gathered with other Libyans for a commemoration event. In June 2009, during a visit to Italy, Colonel Gaddafi criticized American foreign policy and, asked about the difference between al-Qaeda attacks and the 1986 U.S. bombing of Tripoli, commented: "If al-Qaeda leader Osama Bin Laden has no state and is an outlaw, America is a state with international rules." On 28 May 2008, the United States began negotiations with Libya on a comprehensive claims settlement agreement to resolve outstanding claims of American and Libyan nationals against each country in their respective courts. Gaddafi's son Saif al-Islam publicly announced that an agreement was being negotiated in July of that year. On 14 August 2008, the resulting U.S.-Libya Comprehensive Claims Settlement Agreement was signed in Tripoli by Assistant Secretary of State for Near Eastern Affairs David Welch and by Libyan Secretary for American Affairs Ahmad Fituri, and full diplomatic relations were subsequently restored between the two nations. In October 2008, Libya paid US$1.5 billion (in three installments: $300 million on 9 October 2008, $600 million on 30 October 2008, and $600 million on 31 October 2008) into a fund used to compensate victims and their relatives. To pay the settlement, Libya demanded US$1.5 billion from global oil companies operating in Libya's oil fields, under threat of "serious consequences" to their leases. Libya's settlement was at least partially funded by several companies, including some based in the U.S., that chose to cooperate with Libya's demand. On 4 August 2008, President George W. Bush signed into law the Libyan Claims Resolution Act, which had unanimously passed Congress on 31 July. The Act provided for the restoration of Libya's sovereign, diplomatic, and official immunities before U.S. courts if the Secretary of State certified that the United States Government had received sufficient funds to resolve outstanding terrorism-related death and physical injury claims against Libya.
In 1986, hardcore punk band The Meatmen referred to the lack of French cooperation with the raid in their song "French People Suck": "French people suck, I just gotta' say/made the jet fighter pilots fly out of their way." The song appears on the album "Rock & Roll Juggernaut" (Caroline Records). In 1987, Neil Young wrote "Mideast Vacation", a song about the bombing, for his live album "Life". On Roger Waters' third studio album, "Amused to Death", the songs "Late Home Tonight, Part I" and "Late Home Tonight, Part II" recall the bombing from the perspective of two "ordinary wives" and a young American F-111 pilot. In Nelson DeMille's book "The Lion's Game", published in 2000, there is a detailed but fictionalised description of the attack from the point of view of one of the book's main protagonists.
https://en.wikipedia.org/wiki?curid=22801
Operational amplifier An operational amplifier (often op-amp or opamp) is a DC-coupled high-gain electronic voltage amplifier with a differential input and, usually, a single-ended output. In this configuration, an op-amp produces an output potential (relative to circuit ground) that is typically hundreds of thousands of times larger than the potential difference between its input terminals. Operational amplifiers had their origins in analog computers, where they were used to perform mathematical operations in many linear, non-linear, and frequency-dependent circuits. The popularity of the op-amp as a building block in analog circuits is due to its versatility. By using negative feedback, the characteristics of an op-amp circuit (its gain, input and output impedance, bandwidth, etc.) are determined by external components and have little dependence on temperature coefficients or engineering tolerance in the op-amp itself. Op-amps are among the most widely used electronic devices today, being used in a vast array of consumer, industrial, and scientific devices. Many standard IC op-amps cost only a few cents in moderate production volume; however, some integrated or hybrid operational amplifiers with special performance specifications may cost considerably more in small quantities. Op-amps may be packaged as components or used as elements of more complex integrated circuits. The op-amp is one type of differential amplifier. Other types of differential amplifier include the fully differential amplifier (similar to the op-amp, but with two outputs), the instrumentation amplifier (usually built from three op-amps), the isolation amplifier (similar to the instrumentation amplifier, but with tolerance to common-mode voltages that would destroy an ordinary op-amp), and the negative-feedback amplifier (usually built from one or more op-amps and a resistive feedback network). The amplifier's differential inputs consist of a non-inverting input (+) with voltage "V"+ and an inverting input (−) with voltage "V"−; ideally the op-amp amplifies only the difference in voltage between the two, which is called the "differential input voltage". The output voltage of the op-amp "V"out is given by the equation "V"out = "A"OL ("V"+ − "V"−), where "A"OL is the open-loop gain of the amplifier (the term "open-loop" refers to the absence of a feedback loop from the output to the input). The magnitude of "A"OL is typically very large (100,000 or more for integrated circuit op-amps), and therefore even a quite small difference between "V"+ and "V"− drives the amplifier output nearly to the supply voltage. Situations in which the output voltage is equal to or greater than the supply voltage are referred to as "saturation" of the amplifier. The magnitude of "A"OL is not well controlled by the manufacturing process, and so it is impractical to use an open-loop amplifier as a stand-alone differential amplifier. Without negative feedback, and perhaps with positive feedback for regeneration, an op-amp acts as a comparator. If the inverting input is held at ground (0 V) directly or by a resistor "R"g, and the input voltage "V"in applied to the non-inverting input is positive, the output will be maximum positive; if "V"in is negative, the output will be maximum negative. Since there is no feedback from the output to either input, this is an "open-loop" circuit acting as a comparator. If predictable operation is desired, negative feedback is used, by applying a portion of the output voltage to the inverting input.
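As a rough illustration of the open-loop relation above, the following Python sketch models an op-amp output as the differential input multiplied by the open-loop gain and clipped to the supply rails; the gain and rail values are illustrative assumptions, not the specification of any particular part.

```python
def openloop_output(v_plus, v_minus, a_ol=1e5, v_supply=15.0):
    """Ideal open-loop model: Vout = A_OL * (V+ - V-), clipped at the rails."""
    v_out = a_ol * (v_plus - v_minus)
    # Saturation: the output cannot exceed the supply voltage.
    return max(-v_supply, min(v_supply, v_out))

# With A_OL = 100,000 even 1 mV of differential input saturates the output,
# which is why an op-amp without feedback behaves as a comparator.
print(openloop_output(0.001, 0.0))   # 15.0 (positive rail)
print(openloop_output(-0.001, 0.0))  # -15.0 (negative rail)
print(openloop_output(50e-9, 0.0))   # 0.005 (still in the linear region)
```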
The "closed-loop" feedback greatly reduces the gain of the circuit. When negative feedback is used, the circuit's overall gain and response becomes determined mostly by the feedback network, rather than by the op-amp characteristics. If the feedback network is made of components with values small relative to the op amp's input impedance, the value of the op-amp's open-loop response "A"OL does not seriously affect the circuit's performance. The response of the op-amp circuit with its input, output, and feedback circuits to an input is characterized mathematically by a transfer function; designing an op-amp circuit to have a desired transfer function is in the realm of electrical engineering. The transfer functions are important in most applications of op-amps, such as in analog computers. High input impedance at the input terminals and low output impedance at the output terminal(s) are particularly useful features of an op-amp. In the non-inverting amplifier on the right, the presence of negative feedback via the voltage divider "R"f, "R"g determines the "closed-loop gain" "A"CL = "V"out / "V"in. Equilibrium will be established when "V"out is just sufficient to "reach around and pull" the inverting input to the same voltage as "V"in. The voltage gain of the entire circuit is thus 1 + "R"f/"R"g. As a simple example, if "V"in = 1 V and Rf = "R"g, "V"out will be 2 V, exactly the amount required to keep "V"− at 1 V. Because of the feedback provided by the "R"f, "R"g network, this is a "closed-loop" circuit. Another way to analyze this circuit proceeds by making the following (usually valid) assumptions: The input signal "V"in appears at both (+) and (−) pins, resulting in a current "i" through "R"g equal to "V"in/"R"g: Since Kirchhoff's current law states that the same current must leave a node as enter it, and since the impedance into the (−) pin is near infinity, we can assume practically all of the same current "i" flows through "R"f, creating an output voltage By combining terms, we determine the closed-loop gain "A"CL: An ideal op-amp is usually considered to have the following characteristics: These ideals can be summarized by the two "golden rules": The first rule only applies in the usual case where the op-amp is used in a closed-loop design (negative feedback, where there is a signal path of some sort feeding back from the output to the inverting input). These rules are commonly used as a good first approximation for analyzing or designing op-amp circuits. None of these ideals can be perfectly realized. A real op-amp may be modeled with non-infinite or non-zero parameters using equivalent resistors and capacitors in the op-amp model. The designer can then include these effects into the overall performance of the final circuit. Some parameters may turn out to have negligible effect on the final design while others represent actual limitations of the final performance that must be evaluated. Real op-amps differ from the ideal model in various aspects. Real operational amplifiers suffer from several non-ideal effects: The op-amp gain calculated at DC does not apply at higher frequencies. Thus, for high-speed operation, more sophisticated considerations must be used in an op-amp circuit design. Under such conditions, negative feedback becomes positive, likely causing the circuit to "lock up" in that state. Modern integrated FET or MOSFET op-amps approximate more closely the ideal op-amp than bipolar ICs when it comes to input impedance and input bias currents. 
Bipolar op-amps are generally better when it comes to input "voltage" offset, and often have lower noise. Generally, at room temperature, with a fairly large signal, and limited bandwidth, FET and MOSFET op-amps now offer better performance. Sourced by many manufacturers, and in multiple similar products, an example of a bipolar transistor operational amplifier is the 741 integrated circuit designed in 1968 by David Fullagar at Fairchild Semiconductor after Bob Widlar's LM301 integrated circuit design. In this discussion, we use the parameters of the hybrid-pi model to characterize the small-signal, grounded-emitter characteristics of a transistor. In this model, the current gain of a transistor is denoted "h"fe, more commonly called the β. A small-scale integrated circuit, the 741 op-amp shares with most op-amps an internal structure consisting of three gain stages: a differential input stage, a voltage gain stage, and an output stage. Additionally, it contains current-mirror bias circuitry (outlined in red) and a 30 pF compensation capacitor. The input stage consists of a cascaded differential amplifier (outlined in blue) followed by a current-mirror active load. This constitutes a transconductance amplifier, turning a differential voltage signal at the bases of Q1, Q2 into a current signal into the base of Q15. It entails two cascaded transistor pairs, satisfying conflicting requirements. The first stage consists of the matched NPN emitter follower pair Q1, Q2 that provides high input impedance. The second is the matched PNP common-base pair Q3, Q4 that eliminates the undesirable Miller effect; it drives an active load Q7 plus matched pair Q5, Q6. That active load is implemented as a modified Wilson current mirror; its role is to convert the (differential) input current signal to a single-ended signal without the attendant 50% losses (increasing the op-amp's open-loop gain by 3 dB). Thus, a small-signal differential current in Q3 versus Q4 appears summed (doubled) at the base of Q15, the input of the voltage gain stage. The (class-A) voltage gain stage (outlined in magenta) consists of the two NPN transistors Q15/Q19 connected in a Darlington configuration and uses the output side of current mirror Q12/Q13 as its collector (dynamic) load to achieve its high voltage gain. The output sink transistor Q20 receives its base drive from the common collectors of Q15 and Q19; the level-shifter Q16 provides base drive for the output source transistor Q14. The transistor Q22 prevents this stage from delivering excessive current to Q20 and thus limits the output sink current. The output stage (Q14, Q20, outlined in cyan) is a Class AB complementary-symmetry amplifier. It provides an output drive with an impedance of ~50 Ω; in essence, it provides current gain. Transistor Q16 (outlined in green) provides the quiescent current for the output transistors, and Q17 provides output current limiting. The biasing circuits provide appropriate quiescent current for each stage of the op-amp. The resistor (39 kΩ) connecting the (diode-connected) Q11 and Q12, together with the given supply voltage ("V"S+ − "V"S−), determines the current in the current mirrors, (matched pairs) Q10/Q11 and Q12/Q13: the collector current of Q11, "i"11, satisfies "i"11 × 39 kΩ = "V"S+ − "V"S− − 2 "V"BE. For the typical "V"S = ±20 V, the standing current in Q11/Q12 (as well as in Q13) would be ~1 mA. A supply current for a typical 741 of about 2 mA agrees with the notion that these two bias currents dominate the quiescent supply current.
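The bias-current arithmetic above can be checked directly; this short Python sketch reproduces the ~1 mA figure (the 0.7 V value for "V"BE is the usual textbook assumption, not a measured 741 parameter):

```python
# Quiescent current set by the 39 kOhm resistor between Q11 and Q12:
#   i11 * 39 kOhm = (VS+ - VS-) - 2 * VBE
v_supply_pos, v_supply_neg = 20.0, -20.0  # typical +/-20 V supplies
v_be = 0.7                                # assumed base-emitter drop, volts
i11 = (v_supply_pos - v_supply_neg - 2 * v_be) / 39e3
print(f"i11 = {i11 * 1e3:.2f} mA")        # ~0.99 mA, i.e. roughly 1 mA
```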
Transistors Q11 and Q10 form a Widlar current mirror, with quiescent current in Q10 "i"10 such that ln("i"11 / "i"10) = "i"10 × 5 kΩ / 28 mV, where 5 kΩ represents the emitter resistor of Q10, and 28 mV is "V"T, the thermal voltage at room temperature. In this case "i"10 ≈ 20 μA. The biasing circuit of this stage is set by a feedback loop that forces the collector currents of Q10 and Q9 to (nearly) match. The small difference in these currents provides the drive for the common base of Q3/Q4 (note that the base drive for input transistors Q1/Q2 is the input bias current and must be sourced externally). The summed quiescent currents of Q1/Q3 plus Q2/Q4 are mirrored from Q8 into Q9, where they are summed with the collector current in Q10, the result being applied to the bases of Q3/Q4. The quiescent current of Q1/Q3 (resp., Q2/Q4) "i"1 will thus be half of "i"10, of order ~10 μA. Input bias current for the base of Q1 (resp. Q2) will amount to "i"1 / β; typically ~50 nA, implying a current gain "h"fe ≈ 200 for Q1 (Q2). This feedback circuit tends to draw the common base node of Q3/Q4 to a voltage "V"com − 2 "V"BE, where "V"com is the input common-mode voltage. At the same time, the magnitude of the quiescent current is relatively insensitive to the characteristics of the components Q1–Q4, such as "h"fe, that would otherwise cause temperature dependence or part-to-part variations. Transistor Q7 drives Q5 and Q6 into conduction until their (equal) collector currents match those of Q1/Q3 and Q2/Q4. The quiescent current in Q7 is "V"BE / 50 kΩ, about 35 μA, as is the quiescent current in Q15, with its matching operating point. Thus, the quiescent currents are pairwise matched in Q1/Q2, Q3/Q4, Q5/Q6, and Q7/Q15. Quiescent currents in Q16 and Q19 are set by the current mirror Q12/Q13, which is running at ~1 mA. Through some mechanism, the collector current in Q19 tracks that standing current. In the circuit involving Q16 (variously named rubber diode or "V"BE multiplier), the 4.5 kΩ resistor must be conducting about 100 μA, with the Q16 "V"BE roughly 700 mV. Then the "V"CB must be about 0.45 V and "V"CE at about 1.0 V. Because the Q16 collector is driven by a current source and the Q16 emitter drives into the Q19 collector current sink, the Q16 transistor establishes a voltage difference between Q14 base and Q20 base of ~1 V, regardless of the common-mode voltage of Q14/Q20 base. The standing current in Q14/Q20 will be a factor exp(100 mV / "V"T) ≈ 36 smaller than the 1 mA quiescent current in the class A portion of the op amp. This (small) standing current in the output transistors establishes the output stage in class AB operation and reduces the crossover distortion of this stage.
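Several of the quiescent currents above follow from simple arithmetic; the least obvious is the Widlar relation, which has no closed-form solution for "i"10. A minimal sketch that solves it by fixed-point iteration, taking "i"11 ≈ 1 mA from the bias calculation:

```python
# The Widlar mirror relation above, ln(i11 / i10) = i10 * 5 kOhm / 28 mV, has
# no closed-form solution for i10. Rearranged as
#   i10 = (V_T / R_e) * ln(i11 / i10),
# a few fixed-point iterations converge. i11 ~ 1 mA from the bias calculation.
import math

i11, R_e, V_T = 1e-3, 5e3, 28e-3

i10 = 1e-5                          # initial guess: 10 uA
for _ in range(50):
    i10 = (V_T / R_e) * math.log(i11 / i10)
print(f"i10 = {i10 * 1e6:.1f} uA")  # ~21 uA, consistent with the ~20 uA above
```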
A small differential input voltage signal gives rise, through multiple stages of current amplification, to a much larger voltage signal on output. The input stage with Q1 and Q3 is similar to an emitter-coupled pair (long-tailed pair), with Q2 and Q4 adding some degenerating impedance. The input impedance is relatively high because of the small current through Q1–Q4. A typical 741 op amp has a differential input impedance of about 2 MΩ. The common mode input impedance is even higher, as the input stage works at an essentially constant current. A differential voltage "V"in at the op-amp inputs (pins 3 and 2, respectively) gives rise to a small differential current in the bases of Q1 and Q2, "i"in ≈ "V"in / (2"h"ie"h"fe). This differential base current causes a change in the differential collector current in each leg by "i"in"h"fe. Introducing the transconductance of Q1, "g"m = "h"fe / "h"ie, the (small-signal) current at the base of Q15 (the input of the voltage gain stage) is "V"in"g"m / 2. This portion of the op amp cleverly changes a differential signal at the op amp inputs to a single-ended signal at the base of Q15, and in a way that avoids wastefully discarding the signal in either leg. To see how, notice that a small negative change in voltage at the inverting input (Q2 base) drives it out of conduction, and this incremental decrease in current passes directly from Q4 collector to its emitter, resulting in a decrease in base drive for Q15. On the other hand, a small positive change in voltage at the non-inverting input (Q1 base) drives this transistor into conduction, reflected in an increase in current at the collector of Q3. This current drives Q7 further into conduction, which turns on current mirror Q5/Q6. Thus, the increase in Q3 emitter current is mirrored in an increase in Q6 collector current; the increased collector current shunts more current from the collector node and results in a decrease in base drive current for Q15. Besides avoiding wasting 3 dB of gain here, this technique decreases common-mode gain and feedthrough of power supply noise. A current signal "i" at Q15's base gives rise to a current in Q19 of order "i"β² (the product of the "h"fe of each of Q15 and Q19, which are connected in a Darlington pair). This current signal develops a voltage at the bases of output transistors Q14/Q20 proportional to the "h"ie of the respective transistor. Output transistors Q14 and Q20 are each configured as an emitter follower, so no voltage gain occurs there; instead, this stage provides current gain, equal to the "h"fe of Q14 (resp. Q20). The output impedance is not zero, as it would be in an ideal op-amp, but with negative feedback it approaches zero at low frequencies. The net open-loop small-signal voltage gain of the op amp involves the product of the current gain "h"fe of some four transistors. In practice, the voltage gain for a typical 741-style op amp is of order 200,000, and the current gain, the ratio of input impedance (~2–6 MΩ) to output impedance (~50 Ω), provides yet more (power) gain. The ideal op amp has infinite common-mode rejection ratio, or zero common-mode gain. In the present circuit, if the input voltages change in the same direction, the negative feedback makes the Q3/Q4 base voltage follow (at 2"V"BE below) the input voltage variations. Now the output part (Q10) of the Q10–Q11 current mirror keeps the common current through Q9/Q8 constant in spite of the varying voltage. The Q3/Q4 collector currents, and accordingly the output current at the base of Q15, remain unchanged. In the typical 741 op amp, the common-mode rejection ratio is 90 dB, implying an open-loop common-mode voltage gain of about 6.
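The last figure is a one-line check. A minimal sketch converting the quoted 90 dB rejection ratio and ~200,000 differential gain into the implied common-mode gain:

```python
# Checking the figures just quoted: a differential gain of ~200,000 and a
# CMRR of 90 dB imply a common-mode gain of about 6.

A_diff = 200_000
cmrr_db = 90.0

cmrr = 10 ** (cmrr_db / 20)              # dB -> voltage ratio
A_cm = A_diff / cmrr
print(f"common-mode gain = {A_cm:.1f}")  # ~6.3
```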
The innovation of the Fairchild μA741 was the introduction of frequency compensation via an on-chip (monolithic) capacitor, simplifying application of the op amp by eliminating the need for external components for this function. The 30 pF capacitor stabilizes the amplifier via Miller compensation and functions in a manner similar to an op-amp integrator circuit. This technique is also known as 'dominant pole compensation', because it introduces a pole that masks (dominates) the effects of other poles in the open-loop frequency response; in a 741 op amp this pole can be as low as 10 Hz (where it causes a −3 dB loss of open-loop voltage gain). This internal compensation is provided to achieve unconditional stability of the amplifier in negative feedback configurations where the feedback network is non-reactive and the closed-loop gain is unity or higher. By contrast, externally compensated amplifiers, such as the μA748, may require external compensation or closed-loop gains significantly higher than unity. The "offset null" pins may be used to place external resistors (typically in the form of the two ends of a potentiometer, with the slider connected to "V"S–) in parallel with the emitter resistors of Q5 and Q6, to adjust the balance of the Q5/Q6 current mirror. The potentiometer is adjusted such that the output is null (midrange) when the inputs are shorted together. The transistors Q3, Q4 help to increase the reverse "V"BE rating: the base-emitter junctions of the NPN transistors Q1 and Q2 break down at around 7 V, but the PNP transistors Q3 and Q4 have "V"BE breakdown voltages around 50 V. Variations in the quiescent current with temperature, or between parts with the same type number, are common, so crossover distortion and quiescent current may be subject to significant variation. The output range of the amplifier is about one volt less than the supply voltage, owing in part to "V"BE of the output transistors Q14 and Q20. The 25 Ω resistor at the Q14 emitter, along with Q17, acts to limit Q14 current to about 25 mA; otherwise, Q17 conducts no current. Current limiting for Q20 is performed in the voltage gain stage: Q22 senses the voltage across Q19's emitter resistor (50 Ω); as it turns on, it diminishes the drive current to the Q15 base. Later versions of this amplifier schematic may show a somewhat different method of output current limiting. While the 741 was historically used in audio and other sensitive equipment, such use is now rare because of the improved noise performance of more modern op-amps. Apart from generating noticeable hiss, 741s and other older op-amps may have poor common-mode rejection ratios and so will often introduce cable-borne mains hum and other common-mode interference, such as switch 'clicks', into sensitive equipment. The "741" has often come to mean a generic op-amp IC (such as the μA741, LM301, 558, LM324, TBA221, or a more modern replacement such as the TL071). The description of the 741 output stage is qualitatively similar for many other designs (that may have quite different input stages), except in a few details. Op-amps may be classified by their construction: discrete (built from individual transistors or vacuum tubes), IC (fabricated on a single chip), or hybrid. IC op-amps may be classified in many further ways, including by internal compensation, by the number of amplifiers per package, and by intended application. The use of op-amps as circuit blocks is much easier and clearer than specifying all their individual circuit elements (transistors, resistors, etc.), whether the amplifiers used are integrated or discrete circuits. In the first approximation op-amps can be used as if they were ideal differential gain blocks; at a later stage limits can be placed on the acceptable range of parameters for each op-amp. Circuit design follows the same lines for all electronic circuits. A specification is drawn up governing what the circuit is required to do, with allowable limits.
For example, the gain may be required to be 100 times, with a tolerance of 5% but drift of less than 1% in a specified temperature range; the input impedance not less than one megohm; etc. A basic circuit is designed, often with the help of circuit modeling (on a computer). Specific commercially available op-amps and other components are then chosen that meet the design criteria within the specified tolerances at acceptable cost. If not all criteria can be met, the specification may need to be modified. A prototype is then built and tested; changes to meet or improve the specification, alter functionality, or reduce the cost may be made. An op-amp used without negative feedback drives its output to one saturation limit or the other according to the sign of the voltage difference between its inputs; that is, the op-amp is being used as a voltage comparator. Note that a device designed primarily as a comparator may be better if, for instance, speed is important or a wide range of input voltages may be found, since such devices can quickly recover from full on or full off ("saturated") states. A "voltage level detector" can be obtained if a reference voltage "V"ref is applied to one of the op-amp's inputs. This means that the op-amp is set up as a comparator to detect a positive voltage. If the voltage to be sensed, "E"i, is applied to the op amp's (+) input, the result is a noninverting positive-level detector: when "E"i is above "V"ref, "V"O equals +"V"sat; when "E"i is below "V"ref, "V"O equals −"V"sat. If "E"i is applied to the inverting input, the circuit is an inverting positive-level detector: when "E"i is above "V"ref, "V"O equals −"V"sat. A "zero voltage level detector" ("E"i = 0) can convert, for example, the sine-wave output of a function generator into a variable-frequency square wave. If "E"i is a sine wave, triangular wave, or wave of any other shape that is symmetrical around zero, the zero-crossing detector's output will be square. Zero-crossing detection may also be useful in triggering TRIACs at the best time to reduce mains interference and current spikes. Another typical configuration of op-amps is with positive feedback, which takes a fraction of the output signal back to the non-inverting input. An important application of it is the comparator with hysteresis, the Schmitt trigger. Some circuits may use "positive" feedback and "negative" feedback around the same amplifier, for example triangle-wave oscillators and active filters. Because of the wide slew range and lack of positive feedback, the response of all the open-loop level detectors described above will be relatively slow. External overall positive feedback may be applied, but (unlike internal positive feedback that may be applied within the latter stages of a purpose-designed comparator) this markedly affects the accuracy of the zero-crossing detection point. Using a general-purpose op-amp, for example, the frequency of "E"i for the sine to square wave converter should probably be below 100 Hz.
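The level detectors described above reduce to a sign test on the input pins. A behavioral sketch (the ±13 V saturation voltage is an assumed figure for ±15 V supplies, not a value from the text):

```python
# Behavioral model of the open-loop level detectors described above: with no
# feedback the output sits at +V_sat or -V_sat according to the sign of
# (V+ - V-). V_SAT = 13 V is an assumed value for +/-15 V supplies, not a
# figure from the text.

V_SAT = 13.0

def comparator(v_plus, v_minus):
    return V_SAT if v_plus > v_minus else -V_SAT

def noninverting_level_detector(E_i, V_ref):
    return comparator(E_i, V_ref)   # +V_sat when E_i rises above V_ref

def inverting_level_detector(E_i, V_ref):
    return comparator(V_ref, E_i)   # -V_sat when E_i rises above V_ref

print(noninverting_level_detector(2.0, 1.0))   # 13.0
print(inverting_level_detector(2.0, 1.0))      # -13.0
```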
In a non-inverting amplifier, the output voltage changes in the same direction as the input voltage. The gain equation for the op-amp is "V"out = "A"OL ("V"+ − "V"−). However, in this circuit "V"− is a function of "V"out because of the negative feedback through the "R"1 "R"2 network. "R"1 and "R"2 form a voltage divider, and as "V"− is a high-impedance input, it does not load it appreciably. Consequently "V"− = β "V"out, where β = "R"1 / ("R"1 + "R"2). Substituting this into the gain equation, we obtain "V"out = "A"OL ("V"in − β "V"out). Solving for "V"out / "V"in, the closed-loop gain is "A"CL = "A"OL / (1 + β "A"OL). If "A"OL is very large, this simplifies to "A"CL ≈ 1/β = 1 + "R"2/"R"1. The non-inverting input of the operational amplifier needs a path for DC to ground; if the signal source does not supply a DC path, or if that source requires a given load impedance, then the circuit will require another resistor from the non-inverting input to ground. When the operational amplifier's input bias currents are significant, then the DC source resistances driving the inputs should be balanced. The ideal value for the feedback resistors (to give minimal offset voltage) will be such that the two resistances in parallel roughly equal the resistance to ground at the non-inverting input pin. That ideal value assumes the bias currents are well matched, which may not be true for all op-amps. In an inverting amplifier, the output voltage changes in an opposite direction to the input voltage. As with the non-inverting amplifier, we start with the gain equation of the op-amp, "V"out = "A"OL ("V"+ − "V"−). This time, "V"− is a function of both "V"out and "V"in due to the voltage divider formed by "R"f and "R"in. Again, the op-amp input does not apply an appreciable load, so "V"− = ("R"f "V"in + "R"in "V"out) / ("R"f + "R"in). Substituting this into the gain equation and solving for "V"out / "V"in: "A"CL = −"A"OL "R"f / ("R"f + "R"in (1 + "A"OL)). If "A"OL is very large, this simplifies to "A"CL ≈ −"R"f / "R"in. A resistor is often inserted between the non-inverting input and ground (so both inputs "see" similar resistances), reducing the input offset voltage caused by bias currents flowing through unequal source resistances, and it may reduce distortion in some op-amps. A DC-blocking capacitor may be inserted in series with the input resistor when a frequency response down to DC is not needed and any DC voltage on the input is unwanted. That is, the capacitive component of the input impedance inserts a DC zero and a low-frequency pole that gives the circuit a bandpass or high-pass characteristic. The potentials at the operational amplifier inputs remain virtually constant (near ground) in the inverting configuration. The constant operating potential typically results in distortion levels that are lower than those attainable with the non-inverting topology. Most single, dual and quad op-amps available have a standardized pin-out which permits one type to be substituted for another without wiring changes. A specific op-amp may be chosen for its open-loop gain, bandwidth, noise performance, input impedance, power consumption, or a compromise between any of these factors.
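Before turning to the history of the op amp, the inverting-amplifier result above can be checked numerically. A minimal sketch with illustrative resistor values, showing the gain converging on −"R"f/"R"in as "A"OL grows:

```python
# The inverting-amplifier gain derived above, kept exact in A_OL:
#   A_CL = -A_OL * R_f / (R_f + R_in * (1 + A_OL))
# which tends to -R_f / R_in. Resistor values are illustrative.

def inverting_gain(R_f, R_in, A_OL):
    return -A_OL * R_f / (R_f + R_in * (1 + A_OL))

R_f, R_in = 100e3, 10e3            # ideal gain -R_f/R_in = -10
for A_OL in (1e3, 1e5, 1e7):
    print(f"A_OL = {A_OL:.0e}: A_CL = {inverting_gain(R_f, R_in, A_OL):.4f}")
```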
1941: A vacuum tube op-amp. An op-amp, defined as a general-purpose, DC-coupled, high gain, inverting feedback amplifier, is first found in "Summing Amplifier", filed by Karl D. Swartzel Jr. of Bell Labs in 1941. This design used three vacuum tubes to achieve a gain of about 90 dB and operated on voltage rails of ±350 V. It had a single inverting input rather than differential inverting and non-inverting inputs, as are common in today's op-amps. Throughout World War II, Swartzel's design proved its value by being liberally used in the M9 artillery director designed at Bell Labs. This artillery director worked with the SCR-584 radar system to achieve extraordinary hit rates (near 90%) that would not have been possible otherwise. 1947: An op-amp with an explicit non-inverting input. In 1947, the operational amplifier was first formally defined and named in a paper by John R. Ragazzini of Columbia University. In this same paper a footnote mentioned an op-amp design by a student that would turn out to be quite significant. This op-amp, designed by Loebe Julie, was superior in a variety of ways. It had two major innovations. Its input stage used a long-tailed triode pair with loads matched to reduce drift in the output and, far more importantly, it was the first op-amp design to have two inputs (one inverting, the other non-inverting). The differential input made a whole range of new functionality possible, but it would not be used for a long time due to the rise of the chopper-stabilized amplifier. 1949: A chopper-stabilized op-amp. In 1949, Edwin A. Goldberg designed a chopper-stabilized op-amp. This set-up uses a normal op-amp with an additional AC amplifier that goes alongside the op-amp. The chopper gets an AC signal from DC by switching between the DC voltage and ground at a fast rate (60 Hz or 400 Hz). This signal is then amplified, rectified, filtered and fed into the op-amp's non-inverting input. This vastly improved the gain of the op-amp while significantly reducing the output drift and DC offset. Unfortunately, any design that used a chopper could not use its non-inverting input for any other purpose. Nevertheless, the much improved characteristics of the chopper-stabilized op-amp made it the dominant way to use op-amps. Techniques that used the non-inverting input regularly would not be very popular until the 1960s when op-amp ICs started to show up in the field. 1953: A commercially available op-amp. In 1953, vacuum tube op-amps became commercially available with the release of the model K2-W from George A. Philbrick Researches, Incorporated. The designation on the devices shown, GAP/R, is an acronym for the complete company name. Two nine-pin 12AX7 vacuum tubes were mounted in an octal package; a model K2-P chopper add-on was available that would effectively "use up" the non-inverting input. This op-amp was based on a descendant of Loebe Julie's 1947 design and, along with its successors, would start the widespread use of op-amps in industry. 1961: A discrete IC op-amp. With the birth of the transistor in 1947, and the silicon transistor in 1954, the concept of ICs became a reality. The introduction of the planar process in 1959 made transistors and ICs stable enough to be commercially useful. By 1961, solid-state, discrete op-amps were being produced. These op-amps were effectively small circuit boards with packages such as edge connectors. They usually had hand-selected resistors in order to improve things such as voltage offset and drift. The P45 (1961) had a gain of 94 dB and ran on ±15 V rails. It was intended to deal with signals in the range of ±10 V. 1961: A varactor bridge op-amp. There have been many different directions taken in op-amp design. Varactor bridge op-amps started to be produced in the early 1960s. They were designed to have extremely small input current and are still amongst the best op-amps available in terms of common-mode rejection, with the ability to correctly deal with hundreds of volts at their inputs. 1962: An op-amp in a potted module. By 1962, several companies were producing modular potted packages that could be plugged into printed circuit boards. These packages were crucially important as they made the operational amplifier into a single black box which could be easily treated as a component in a larger circuit. 1963: A monolithic IC op-amp. In 1963, the first monolithic IC op-amp, the μA702 designed by Bob Widlar at Fairchild Semiconductor, was released.
Monolithic ICs consist of a single chip, as opposed to a chip and discrete parts (a discrete IC) or multiple chips bonded and connected on a circuit board (a hybrid IC). Almost all modern op-amps are monolithic ICs; however, this first IC did not meet with much success. Issues such as an uneven supply voltage, low gain and a small dynamic range held off the dominance of monolithic op-amps until 1965, when the μA709 (also designed by Bob Widlar) was released. 1968: Release of the μA741. The popularity of monolithic op-amps was further improved by the release of the LM101 in 1967, which solved a variety of issues, and the subsequent release of the μA741 in 1968. The μA741 was extremely similar to the LM101 except that Fairchild's facilities allowed them to include a 30 pF compensation capacitor inside the chip instead of requiring external compensation. This simple difference has made the 741 "the" canonical op-amp, and many modern amps base their pinout on the 741's. The μA741 is still in production, and has become ubiquitous in electronics: many manufacturers produce a version of this classic chip, recognizable by part numbers containing "741". The same part is manufactured by several companies. 1970: First high-speed, low-input current FET design. In the 1970s high-speed, low-input current designs started to be made by using FETs. These would be largely replaced by op-amps made with MOSFETs in the 1980s. 1972: Single-sided supply op-amps produced. A single-sided supply op-amp is one where the input and output voltages can be as low as the negative power supply voltage instead of needing to be at least two volts above it. The result is that it can operate in many applications with the negative supply pin on the op-amp being connected to the signal ground, thus eliminating the need for a separate negative power supply. The LM324 (released in 1972) was one such op-amp that came in a quad package (four separate op-amps in one package) and became an industry standard. In addition to packaging multiple op-amps in a single package, the 1970s also saw the birth of op-amps in hybrid packages. These op-amps were generally improved versions of existing monolithic op-amps. As the properties of monolithic op-amps improved, the more complex hybrid ICs were quickly relegated to systems that are required to have extremely long service lives or other specialty systems. Recent trends. Recently, supply voltages in analog circuits have decreased (as they have in digital logic) and low-voltage op-amps have been introduced to reflect this. Supplies of 5 V and increasingly 3.3 V (sometimes as low as 1.8 V) are common. To maximize the signal range, modern op-amps commonly have rail-to-rail output (the output signal can range from the lowest supply voltage to the highest) and sometimes rail-to-rail inputs.
https://en.wikipedia.org/wiki?curid=22804
Oh Hell Oh Hell is a trick-taking card game in which the object is to take exactly the number of tricks bid. Unlike in contract bridge and spades, taking more tricks than bid is a loss. It was first described by B. C. Westall around 1930. The game of Oh Hell explores the idea of taking an exact number of tricks specified by a bid before the hand. It differs from other trick-taking games in that players play a fixed number of hands. The game uses trumps, often decided by a cut of the deck after the hand's cards have been distributed. Like many popular social card games, Oh Hell has many local variants, in both rules and names. Prominent players of Oh Hell include former President Bill Clinton, who learned it from Steven Spielberg. There are many variations to this game; a common set of regulations is given here. Oh Hell can be played with almost any number of players (3+), although 4–7 is considered optimal. The game is played using a standard 52-card deck, with ace (A) being the highest rank and two (2) the lowest. With six or more players, the game can be played with two decks combined or with a 63-card deck from six-player 500. A game consists of a fixed number of hands, and each hand consists of dealing a certain number of cards to each player, depending on the variation and the number of players. During a hand, each player bids for a number of tricks, then attempts to take exactly that many tricks during the hand. The dealer (initially determined by cutting cards) deals out the cards one by one, starting with the player to his left, in a clockwise direction, until the required number of cards has been dealt. After the dealing is complete, the next card is turned face up, and the suit of this card determines the trump suit for the deal, which is why only up to 12 cards are dealt in a four-player match. (If there are no unused cards, the largest hand is played without a trump suit. Alternatively, the trump suit for the maximal round can be determined in a variety of ways: for instance, by revealing the dealer's last card as in whist, by cutting the pack before dealing, or by having the dealer decide the trump before seeing his own cards.) Each player now bids for the number of tricks he believes he can win. The player to the left of the dealer bids first. Bidding is unrestricted except for the "screw the dealer" rule: the number of tricks bid cannot equal the number available. That is, every deal must in total be either overbid or underbid. For example, if five cards are dealt, and the first three bids are two, zero and one, then the dealer may not bid two. However, if five cards are dealt, and the first three bids are three, one and two, then the dealer is free to make any bid. (The "screw the dealer" rule is not used in the version played in West Virginia, South Carolina, rural Maryland, and Pennsylvania, where the dealer is free to make any bid.) In an alternative style of bidding, all players simultaneously hold out fingers for the number of tricks they want to bid (similar in style to a rock paper scissors shoot). The players' bids are recorded on the score sheet. When every player has made a bid, the player to the left of the dealer makes the opening lead. Play then proceeds as usual in a trick-taking game, with each player in turn playing one card. Players must follow suit, unless they have no cards of the led suit, in which case they may play any card. The highest card of the led suit wins the trick unless ruffed, in which case the highest trump card wins.
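The "screw the dealer" rule is mechanical enough to state as a bid check. A minimal sketch (the function and its interface are illustrative, not from any codified rule set):

```python
# The "screw the dealer" rule described above: the dealer may not bring the
# total of the bids level with the number of tricks available. A sketch; the
# function name and interface are illustrative, not from any codified rules.

def legal_bids(cards_dealt, earlier_bids, is_dealer):
    bids = range(cards_dealt + 1)
    if not is_dealer:
        return list(bids)
    forbidden = cards_dealt - sum(earlier_bids)
    return [b for b in bids if b != forbidden]

# Five cards dealt, earlier bids 2, 0, 1: the dealer may not bid 2.
print(legal_bids(5, [2, 0, 1], is_dealer=True))   # [0, 1, 3, 4, 5]
# Earlier bids 3, 1, 2 already overbid the deal, so any bid is legal.
print(legal_bids(5, [3, 1, 2], is_dealer=True))   # [0, 1, 2, 3, 4, 5]
```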
In multi-deck games, the first of identical cards to be played (say two queens of clubs) wins the trick. In a more complicated variant, identical cards cancel each other, leading to the possibility (if the number of players is even) of an entire trick being canceled out. The player who wins the trick leads to the next trick. In one variant, all bids must add up exactly to the number of cards dealt for that round. Players must then "make it work" to move on to the next round. If anyone takes more or fewer than their bid, the deal moves to the left and the round is re-dealt. With four players, a second deck may be used to specify the round to be played: the value of the upcard determines the number of cards dealt and the suit determines the trump suit for the round. The "Prospect" variant is played for money. Prior to dealing the first hand, players agree on the amount of money the "losers" will have to pay to the winner. The last-place finisher pays the most and the second-place finisher pays the least. The sliding scale in the Prospect version keeps all the players invested in the outcome of every hand, since their finishing rank corresponds to how much money they will owe the winner. The WPOHL (World Prospect Oh Hell League) Championship is usually held in December in Rehoboth Beach, Delaware, using "Prospect" rules. The deal begins with 10 cards, plays down to 1, then back up to 10 for a total of 19 hands per round. Depending on the size of the field, the five or ten lowest-scoring players in the room are eliminated each round until there is a five-person "final table." In the early 1990s, the International Oh-Hell League's annual Championship Tournament of All Creation was held each March in the Fire Hall of Riverton, New Jersey. This yearly tournament lasted for over 20 years. Players vied for nominal, but not actual, possession of the league trophy, a two-inch bronze reproduction of the Manneken Pis, which resided permanently in the home of tournament founder, John B. "Jack" Mathews. Alcohol was off-limits, but profanity was encouraged. A full table consisted of four players. Play began with a one-card hand, went up to 13, then back down to one for a total of 25 hands. In each hand, except the 13th when the entire deck was dealt, the first undealt card was turned over to establish the trump suit. The tournament entry fee was $5 and included complimentary hot dogs, doughnuts, and Tak-a-Boost (the official drink of the International Oh-Hell League). The tournament was a grueling all-day affair. All players had to commit to playing at least three games. Cumulative tabulation of the end scores from the first three rounds allowed the eight highest-scoring players to move on to the semi-finals. The semi-finals portion of the tournament was where the commitment to play ended. If a semi-final-eligible player could not or did not wish to continue, the player with the next highest cumulative score was offered the seat. The drawing of cards determined random seeding for the semi-final tables. The first and second place players at each semi-final table advanced to the finals, where the top-scoring player in this last game was the tournament winner. Consolation games and clean-up by any remaining players were encouraged while waiting for the outcome of the tournament. For a period, the International Oh-Hell League was a registered corporation. The Annual Cartier 'Oh Hell!' Tournament began in 1995. The tournament formula was created by Tessa Kennedy and Tomasz Starzewski. Cartier Ltd.
sponsors the tournament, with all money raised going to charity. There are several alternative methods of scoring. Oh Hell is known by many names, including: Boerenbridge, Boerenlullen, Chinees poepen, Chinees dekken, Chinees bridgen, Koreaanse poker, 10 op en neer, jodelen, pronostieken, Slagenvragen, Hellen, Bollen, op-en-affen, Afrikaans beffen.
https://en.wikipedia.org/wiki?curid=22807
On War Vom Kriege is a book on war and military strategy by Prussian general Carl von Clausewitz (1780–1831), written mostly after the Napoleonic wars, between 1816 and 1830, and published posthumously by his wife Marie von Brühl in 1832. It has been translated into English several times as On War. "On War" is an unfinished work. Clausewitz had set about revising his accumulated manuscripts in 1827, but did not live to finish the task. His wife edited his collected works and published them between 1832 and 1835. His 10-volume collected works contain most of his larger historical and theoretical writings, though not his shorter articles and papers or his extensive correspondence with important political, military, intellectual and cultural leaders in the Prussian state. "On War" is formed by the first three volumes and represents his theoretical explorations. It is one of the most important treatises on political-military analysis and strategy ever written, and remains both controversial and influential on strategic thinking. Clausewitz was among those intrigued by the manner in which the leaders of the French Revolution, especially Napoleon, changed the conduct of war through their ability to motivate the populace and gain access to the full resources of the state, thus unleashing war on a greater scale than had previously been seen in Europe. Clausewitz was well educated and had strong interests in art, history, science, and education. He was a professional soldier who spent a considerable part of his life fighting against Napoleon. The insights he gained from his political and military experiences, combined with a solid grasp of European history, provided the basis for his work. A wealth of historical examples is used to illustrate its various ideas. Napoleon and Frederick the Great figure prominently for having made very efficient use of the terrain, movement and the forces at their disposal. Clausewitz argued that a theory of war cannot be strict operational advice for generals. Instead, he wanted to highlight general principles that would result from the study of history and logical thinking. He contended that military campaigns could be planned only to a very small degree because incalculable influences or events, so-called "friction", would quickly make any overly detailed advance planning obsolete. Military leaders must be able to make decisions under time pressure with incomplete information, since in his opinion "three quarters of the things on which action is built in war" are concealed and distorted by the "fog of war". In his 1812 "Bekenntnisschrift" ("Notes of Confession"), he presents a more existential interpretation of war by envisioning war as the highest form of self-assertion by a people. That corresponded in every respect with the spirit of the time, when the French Revolution and the conflicts that arose from it had caused the evolution of conscript armies and guerrillas. The people's armies supported the idea that war is an existential struggle. During the following years, however, Clausewitz gradually abandoned this exalted view and concluded that war served as a mere instrument: "Thus, war is an act of violence in order to force our will upon the enemy." Clausewitz analyzed the conflicts of his time along the line of the categories "Purpose", "Goal" and "Means". He reasoned that the "Purpose" of war is the enforcement of one's will, which is determined by politics.
The "Goal" of the conflict is therefore to defeat the opponent in order to exact the "Purpose". The "Goal" is pursued with the help of a strategy, that might be brought about by various "Means" such as by the defeat or the elimination of opposing armed forces or by non-military "Means" (such as propaganda, economic sanctions and political isolation). Thus, any resource of the human body and mind and all the moral and physical powers of a state might serve as "Means" to achieve the set goal. One of Clausewitz's best-known quotes summarizes that idea: "War is a mere continuation of politics by other means." That quote in itself allows for the interpretation that the military will take over from politics as soon as war has begun, as, for example, the German General Staff did during World War I. However, Clausewitz had postulated the "primacy of politics" and in this context elaborated: "[...], we claim that war is nothing more than a continuation of the political process by applying other means. By applying other means we simultaneously assert that the political process does not end with the conclusion of the war or is being transformed into something entirely different, but that it continues to exist and proceed in its essence, regardless of the means, it might make use of." According to Azar Gat, the "general message" of the book was that "the conduct of war could not be reduced to universal principles [and is] dominated by political decisions and moral forces." These basic conclusions are essential to Clausewitz's theory: Some of the key ideas (not necessarily original to Clausewitz or even to his mentor, Gerhard von Scharnhorst) discussed in "On War" include (in no particular order of importance): Clausewitz used a dialectical method to construct his argument, which led to frequent modern misinterpretation because he explores various often-opposed ideas before he came to conclusions. Modern perceptions of war are based on the concepts that Clausewitz put forth in "On War", but they have been diversely interpreted by various leaders (such as Moltke, Vladimir Lenin, Dwight Eisenhower, and Mao Zedong), thinkers, armies, and peoples. Modern military doctrine, organization, and norms are all still based on Napoleonic premises, but whether the premises are necessarily also "Clausewitzian" is debatable. The "dualism" of Clausewitz's view of war (that wars can vary a great deal between the two "poles" that he proposed, based on the political objectives of the opposing sides and the context) seems to be simple enough, but few commentators have been willing to accept that crucial variability. They insist that Clausewitz "really" argued for one end of the scale or the other. "On War" has been seen by some prominent critics as an argument for "total war". It has been blamed for the level of destruction involved in the First and the Second World Wars, but it seems rather that Clausewitz, who did not actually use the term "total war", had merely foreseen the inevitable development that started with the huge, patriotically motivated armies of the Napoleonic wars. They resulted (though the evolution of war has not yet ended) in the atomic bombing of Hiroshima and Nagasaki, with all the forces and capabilities of the state devoted to destroying forces and capabilities of the enemy state (thus "total war"). Conversely, Clausewitz has also been seen as "The preeminent military and political strategist of limited war in modern times". 
(Robert Osgood, 1979) Clausewitz and his proponents have been severely criticized by other military theorists, like Antoine-Henri Jomini in the 19th century, B. H. Liddell Hart in the mid-20th century, and Martin van Creveld and John Keegan more recently. "On War" is a work rooted solely in the world of the nation state, states historian Martin van Creveld, who alleges that Clausewitz takes the state "almost for granted", as he rarely looks at anything before the Peace of Westphalia, and mediaeval warfare is effectively ignored in Clausewitz's theory. He alleges that Clausewitz does not address any form of intra/supra-state conflict, such as rebellion and revolution, because he could not theoretically account for warfare before the existence of the state. Previous kinds of conflict were demoted to criminal activities without legitimacy and not worthy of the label "war". Van Creveld argues that "Clausewitzian war" requires the state to act in conjunction with the people and the army, the state becoming a massive engine built to exert military force against an identical opponent. He supports that statement by pointing to the conventional armies in existence throughout the 20th century. However, revolutionaries like Karl Marx and Friedrich Engels derived some inspiration from Clausewitzian ideas.
https://en.wikipedia.org/wiki?curid=22808
Orange Alternative The Orange Alternative (Polish: "Pomarańczowa Alternatywa") is a Polish anti-communist underground movement, started in Wrocław, a city in south-west Poland, and led by Waldemar Fydrych (sometimes misspelled as Frydrych), commonly known as "Major" (Commander of Festung Breslau), in the 1980s. Its main purpose was to offer a wider group of citizens an alternative way of opposing the authoritarian regime by means of peaceful protest that used absurd and nonsensical elements. By doing this, members of the Orange Alternative could not be arrested by the police for opposition to the regime without the authorities becoming a laughing stock. The Orange Alternative has been viewed as part of the broader Solidarity movement. Academics Dennis Bos and Marjolein 't Hart have asserted it was the most effective of all Solidarity's factions in bringing about the movement's success. Initially it painted ridiculous graffiti of dwarves on paint spots covering up anti-government slogans on city walls. Afterwards, from 1985 through 1990, it organized a series of more than sixty happenings in several Polish cities, including Wrocław, Warsaw, Łódź, Lublin, and Tomaszów Mazowiecki. It was the most picturesque element of Polish opposition to Stalinist authoritarianism. It suspended activity in 1989, but reactivated in 2001 and has been active on a small scale ever since. A statue of a dwarf, dedicated to the memory of the movement, stands today on Świdnicka Street in Wrocław, in the place where its happenings took place. The Orange Alternative movement has inspired several other similar movements in authoritarian countries, including Czechoslovakia and Hungary, and it has also inspired and influenced the Pora movement and the so-called Orange Revolution in Ukraine, which was in turn supported by Poland. The beginnings of the Orange Alternative lie in a student movement called the Movement for New Culture, created in 1980 at the University of Wrocław. It was in that year that Waldemar "Major" Fydrych, one of the movement's founders, proclaimed the Socialist Surrealism Manifesto, which became the ideological backbone behind a gazette known as "The Orange Alternative". Seven of the gazette's fifteen issues appeared during student strikes organized in November and December 1980 as part of the Solidarity upheaval. The first number was edited jointly by Major Waldemar Fydrych and Wiesław Cupała (a.k.a. "Rittmeister"), simply with the idea of having fun. The editors treated the strike and the surrounding reality as forms of Art. For the ensuing numbers, the editorial committee was joined by Piotr Adamcio, known as "Lieutenant Pablo", Andrzej Dziewit and Zenon Zegarski, nicknamed "Lieutenant Zizi Top". Although its avant-garde character was, according to the student strike organizers, a threat to the "higher aims of the strike", and notwithstanding attempts by the strike committee to censor it, the gazette rapidly became very popular among the students. The first known actions of the Orange Alternative consisted of painting dwarf graffiti on spots created by the police's covering up of anti-regime slogans on the walls of Polish cities. The first graffiti was painted by Major Waldemar Fydrych and Wiesław Cupała on the night of 30–31 August 1982 on a residential building in the Wrocław districts of Biskupin and Sępolno.
Altogether more than one thousand such graffiti were painted in major Polish cities such as Wrocław, Kraków, Warsaw, Łódź, and Gdańsk. Dwarves appearing in numbers all over Poland aroused the interest of both Polish pedestrians and the militia, whose interventions led to short-term arrests of the graffiti artists. During one of these incidents, Major, a detainee at a police station in Łódź, proclaimed, in reference to Marxist and Hegelian dialectics, yet another artistic manifesto, referring to his graffiti art as "dialectic painting" and stating: "The Thesis is the Anti-Regime Slogan. The Anti-thesis is the Spot and the Synthesis is the Dwarf. Quantity evolves into Quality. The more Dwarves there are, the better it is." At the beginning of the 21st century, dwarf figurines made of bronze began to appear in Wrocław. Over time, they have become a major tourist attraction. What brought the Orange Alternative its biggest fame were the street happenings it organized throughout the second half of the 1980s. These actions gained it enormous popularity among Polish youth, who joined the movement, seeing in it an alternative to the opposition style presented by Solidarity, which they viewed as stiff and boring. The first modest happening, called the "Burning of Tubes", was organized as early as 1985 in Wrocław by Major Waldemar Fydrych accompanied by a small group of artists that included Krzysztof Skarbek, Piotr Petyszkowski, Andrzej Głuszek, and Sławomir Monkiewicz. The break-through moment came in the fall of 1987, during the Open Theatre Festival in Wrocław, when "The Village Voice" reported on the Orange Alternative's action known as "Distribution of Toilet Paper", a happening that satirized the annoying lack of that consumer product at the time. After the publication of this article, the Orange Alternative became of interest to a number of Polish and foreign media. The biggest happenings, however, took place in the years 1987 through 1989, with the "orange" wave spilling over Poland into cities such as Warsaw, Łódź, Lublin and Tomaszów Mazowiecki following Major Fydrych's arrest on 8 March 1988. The actions of the Orange Alternative, although its leaders and participants often expressed anarchistic viewpoints, were not inherently ideological. No serious demands were ever expressed. Rather, the slogans were surrealist in character (such as "Vivat Sorbovit", Sorbovit being a popular soft drink at that time) or "There is no freedom without dwarves". They often paraphrased slogans used by the Solidarity Union or the communists. Their role was to laugh at the absurdities and pompousness of both sides of the system and to provoke independent thinking. The open street formula allowed all individuals to take part in the happenings. This openness drew thousands of pedestrians to participate in the group's actions. In such a way, the majority of the happenings could assemble thousands of participants, many of whom were accidental passers-by. The culminating point in the movement's history was the action organized on 1 June 1988, known as the "Revolution of Dwarves", during which more than 10,000 people marched through the center of Wrocław wearing orange dwarf hats. The happenings usually ended with the arrest of hundreds of participants who did not manage to escape the militia in time. At one point, the participants were even able to provoke the Communist militia into arresting 77 Santa Clauses or, on another occasion, anyone wearing anything orange.
For each of its actions, the Orange Alternative printed leaflets and posters, featuring slogans like "Every militiaman is a piece of Art" or "Citizen, help the militia, beat yourself up".
https://en.wikipedia.org/wiki?curid=22810
Otto IV, Holy Roman Emperor Otto IV (1175 – 19 May 1218) was one of two rival kings of Germany from 1198 on, sole king from 1208 on, and Holy Roman Emperor from 1209 until he was forced to abdicate in 1215. The only German king of the Welf dynasty, he incurred the wrath of Pope Innocent III and was excommunicated in 1210. Otto was the third son of Henry the Lion, Duke of Bavaria and Duke of Saxony, by his wife Matilda of England, the eldest daughter of King Henry II of England and Eleanor of Aquitaine. His exact birthplace is not given by any original source. He grew up in England in the care of his grandfather King Henry II. Otto was fluent in French as well as German. He became the foster son of his maternal uncle, King Richard I of England. In 1190, after he left England to join the Third Crusade, Richard appointed Otto as Earl of York. The authenticity (or authority) of this grant was doubted by the vassals of Yorkshire, who prevented Otto taking possession of his earldom. Still, he probably visited Yorkshire in 1191, and he continued to claim the revenues of the earldom after becoming king of Germany, although he never secured them. Neither did he succeed in getting the 25,000 silver marks willed to him by his uncle in 1199. In 1195, Richard began negotiations to marry Otto to Margaret of Scotland, daughter and heir presumptive of King William the Lion of Scotland. Lothian, as Margaret's dowry, would be handed over to Richard for safekeeping and the counties of Northumberland and Cumberland (Carlisle) would be granted to Otto and turned over to the king of Scotland. The negotiations dragged on until August 1198, when the birth of a son and heir to William rendered them unnecessary. Having failed in his efforts to secure Otto an English earldom or else a Scottish kingdom, in September 1196 Richard, as duke of Aquitaine, enfeoffed Otto with the county of Poitou. There is some disagreement over whether Otto received Poitou in exchange for or in addition to the earldom of York. Otto was in Poitou from September 1196 until mid-1197, when he joined Richard in Normandy to confer over the appointment of bishops to the vacant sees of Poitiers, Limoges and Périgueux. He then participated in the war against Philip II of France on the side of Richard. In October he returned to Poitou. The German historian Jens Ahlers, taking into account Otto's life prior to 1198, considers that he might have been the first foreign king of Germany. After the death of Emperor Henry VI, the majority of the princes of the Empire, situated in the south, elected Henry's brother, Philip, Duke of Swabia, king in March 1198, after receiving money and promises from Philip in exchange for their support. Those princes opposed to the Hohenstaufen dynasty also decided, on the initiative of Richard of England, to elect instead a member of the House of Welf. Otto's elder brother, Henry, was on a crusade at the time, and so the choice fell to Otto. Otto, soon recognized throughout the northwest and the lower Rhine region, was elected king by his partisans in Cologne on 9 June 1198. Otto took control of Aachen, the place of coronation, and was crowned by Adolf, Archbishop of Cologne, on 12 July 1198. This was of great symbolic importance, since the Archbishop of Cologne alone could crown the King of the Romans. The coronation was done with fake imperial regalia, because the actual materials were in the hands of the Hohenstaufen. Otto's election pulled the empire into the conflict between England and France. 
Philip had allied himself with the French king, Philip II, while Otto was supported at first by Richard I and, after his death in 1199, by his brother John. The papacy, meanwhile, under Innocent III, determined to prevent the continued unification of Sicily and the Holy Roman Empire under one monarch, seized the opportunity to extend its influence. Therefore, Innocent III favoured Otto, whose family had always been opposed to the house of Hohenstaufen. Otto himself also seemed willing to grant any demands that Innocent would make. The confusion in the empire allowed Innocent to drive out the imperial feudal lords from Ancona, Spoleto, and Perugia, who had been installed by Emperor Henry VI. At the same time, Innocent encouraged the cities in Tuscany to form a league, called the League of San Genesio, against imperial interests in Italy. The cities placed themselves under Innocent's protection. In 1201, Innocent announced that he recognized Otto as the only legitimate king. In return, Otto promised to support the pope's interests in Italy. Otto also had the support of Ottokar I of Bohemia, who, although at first siding with Philip of Swabia, eventually threw in his lot with Otto. Otto's cause was further strengthened by the support of Valdemar II of Denmark. Philip achieved a great deal of success in the civil war that followed, allowing him in 1204 to be crowned king again, this time by the archbishop of Cologne. In the following years, Otto's situation worsened because, after England's defeat by France, he lost English financial support. Many of his allies changed sides to Philip, including his brother Henry. Otto was defeated and wounded in battle by Philip on 27 July 1206, near Wassenberg, and as a consequence he also lost the support of the pope, who began to favour the apparent winner in the conflict. Otto was forced to retire to his possessions near Brunswick, leaving Philip virtually uncontested as German king. Innocent III forced the two warring parties into negotiations at Cologne, and in exchange for Otto renouncing his claim to the throne, Philip promised him the hand of his daughter Beatrix in marriage, together with the Duchy of Swabia and an enormous dowry. Otto refused, and as the civil war was about to recommence, Philip was murdered on 21 June 1208. After Philip's death, Otto made amends with the Staufen party and became engaged to Philip's daughter Beatrix. In an election in Frankfurt on 11 November 1208, he gained the support of all the electoral princes, as he promised he would not make hereditary claims to the imperial crown on behalf of any children he might father. Now fully reconciled with Innocent, Otto made preparations to be crowned Holy Roman Emperor. To secure Innocent's support, he promised to restore to the Papal States all territory that it had possessed under Louis the Pious, including the March of Ancona, the Duchy of Spoleto, the former Exarchate of Ravenna, and the Pentapolis. Travelling down via Verona, Modena, and Bologna, he eventually arrived at Milan, where he received the Iron Crown of Lombardy and the title of King of Italy in 1208. He was met at Viterbo by Pope Innocent and was taken to St. Peter's Basilica, where he was crowned emperor by Pope Innocent on 21 October 1209, before rioting broke out in Rome, forcing Otto to abandon the city.
Not content with his successes so far, Innocent also obtained from Otto further written concessions to the Papal See, including promises to allow all elections of German bishops to be conducted according to Church ordinances and not to prevent any appeals to Rome. He also promised to hand over to the Church all income from any vacant sees which had been flowing into the imperial treasury. After abandoning Rome, Otto marched north, reaching Pisa by 20 November. Here, probably advised by Peter of Celano and Dipold, Count of Acerra, he was convinced to abandon his earlier promises. Otto immediately worked to restore imperial power in Italy. At his consecration by the pope, he had promised to restore the lands bequeathed to the church by the countess Matilda of Tuscany nearly a century before, and not to move against Frederick, King of Sicily. He quickly broke all his promises. He threw the papal troops out of Ancona and Spoleto, reclaiming the territory as imperial fiefs. He then demanded that Frederick of Sicily do homage for the duchies of Calabria and Apulia, and when Frederick refused to appear, Otto declared those fiefs forfeited. Otto then marched on Rome, and commanded Innocent to annul the Concordat of Worms and to recognise the imperial crown's right to make nominations to all vacant benefices. Such actions infuriated Innocent, and the pope promptly excommunicated Otto on 18 November 1210. Subsequently, Otto tried to conquer Sicily, which was held by the Staufen king Frederick under the guardianship of Innocent III. Parallel to this, the German nobility were by this time growing ever more frustrated with Otto. They felt that instead of wasting his time in Italy and playing power politics with the pope, his first duty was to defend the northern provinces of the empire against Valdemar II of Denmark, who had taken advantage of Otto's distractions to invade the northern provinces of the empire and seize the whole Baltic coast from Holstein to Livonia. So while Otto was in southern Italy, several princes of the empire, including the archbishops of Mainz and Magdeburg, at the instigation of King Philip II of France and with the consent of the pope, elected Frederick King of the Romans at the Diet of Nuremberg in 1211. Otto's ambassadors from Milan appeared before the Fourth Lateran Council, pleading his case for his excommunication to be lifted. Although he claimed he had repented of his offences and declared his willingness to be obedient to the Pope in all things, Innocent III had already recognised Frederick as emperor-elect. Otto returned to Germany to deal with the situation, hoping to salvage something from the looming disaster. He found most of the German princes and bishops had turned against him, and that Frederick, who had made his way up the Italian peninsula, had avoided Otto's men guarding the passes through the Alps and had arrived at Constance. After Beatrix died in the summer of 1212 and Frederick arrived in Germany with his army in September 1212, most of the former Staufen supporters deserted Otto for Frederick, forcing Otto to withdraw to Cologne. On 5 December 1212, Frederick was elected king for a second time by a majority of the princes. The support that Philip II of France was giving to Frederick forced King John of England to throw his weight behind his nephew Otto.
The destruction of the French fleet in 1213 by the English saw John begin preparations for an invasion of France, and Otto saw a way of both destroying Frederick's French support and bolstering his own prestige. He agreed to join John in the invasion, and in February 1214, as John advanced from the Loire, Otto was supposed to make a simultaneous attack from Flanders, together with the Count of Flanders. However, the three armies could not coordinate their efforts effectively. It was not until John, disappointed in his hope for an easy victory after being driven from Roche-au-Moine, had retreated to his transports that the Imperial Army, with Otto at its head, assembled in the Low Countries. On 27 July 1214, the opposing armies suddenly discovered they were in close proximity to each other, on the banks of the little river Marque (a tributary of the river Deûle), near the Bridge of Bouvines. Philip's army numbered some 15,000, while the allied forces possessed around 25,000 troops, and the armies clashed at the Battle of Bouvines. It was a tight battle, but it was lost when Otto was carried off the field by his wounded and terrified horse, causing his forces to abandon the fight. It is said that Philip II sent to Frederick the imperial eagle which Otto had left lying on the battlefield. This defeat allowed Frederick to take Aachen and Cologne, as Otto was forced again to withdraw to his private possessions around Brunswick, and he was deposed in 1215. Absolved from his excommunication, he died of disease at Harzburg castle on 19 May 1218, requesting that he be mortally expiated in atonement for his sins. Historian Kantorowicz described the death as "gruesome": "deposed, dethroned, he was flung full length on the ground by the Abbot, confessing his sins, while the reluctant priests beat him bloodily to death with rods. Such was the end of the first and last Welf Emperor." He is entombed in Brunswick Cathedral. Otto was related to every other King of Germany. He married twice, first to Beatrix of Swabia and later to Maria of Brabant; neither marriage produced any children.
https://en.wikipedia.org/wiki?curid=22811
Outcome-based education Outcome-based education or outcomes-based education (OBE) is an educational theory that bases each part of an educational system around goals (outcomes). By the end of the educational experience, each student should have achieved the goal. There is no single specified style of teaching or assessment in OBE; instead, classes, opportunities, and assessments should all help students achieve the specified outcomes. The role of the faculty adapts into instructor, trainer, facilitator, and/or mentor based on the outcomes targeted. Outcome-based methods have been adopted in education systems around the world, at multiple levels. Australia and South Africa adopted OBE policies in the early 1990s, but these have since been phased out. The United States has had an OBE program in place since 1994 that has been adapted over the years. In 2005, Hong Kong adopted an outcome-based approach for its universities. Malaysia implemented OBE in all of its public school systems in 2008. The European Union has proposed an education shift to focus on outcomes across the EU. As part of an international effort to recognize OBE, the Washington Accord was created in 1989; it is an agreement to accept undergraduate engineering degrees that were obtained using OBE methods. As of 2017, the full signatories are Australia, Canada, Taiwan, Hong Kong, India, Ireland, Japan, Korea, Malaysia, New Zealand, Russia, Singapore, South Africa, Sri Lanka, Turkey, the United Kingdom, Pakistan, China and the United States. OBE can primarily be distinguished from traditional education methods by the way it incorporates three elements: a theory of education, a systematic structure for education, and a specific approach to instructional practice. It organizes the entire educational system around what is considered essential for learners to be able to do successfully at the end of their learning experiences. In this model, the term "outcome" is the core concept and is sometimes used interchangeably with the terms "competency", "standards", "benchmarks", and "attainment targets". OBE also uses the same methodology, formally and informally, adopted in actual workplaces to achieve outcomes. It focuses on the following skills when developing curricula and outcomes: In a traditional education system, students are given grades and rankings compared to each other. Content and performance expectations are based primarily on what was taught in the past to students of a given age. The goal of this education was to present the knowledge and skills of an older generation to the new generation of students, and to provide students with an environment in which to learn. The process paid little attention (beyond the classroom teacher) to whether or not students learn any of the material. The focus on outcomes creates a clear expectation of what needs to be accomplished by the end of the course. Students will understand what is expected of them and teachers will know what they need to teach during the course. Clarity is important over years of schooling and when team teaching is involved. Each team member, or year in school, will have a clear understanding of what needs to be accomplished in each class, or at each level, allowing students to progress. Those designing and planning the curriculum are expected to work backwards once an outcome has been decided upon; they must determine what knowledge and skills will be required to reach the outcome. 
With a clear sense of what needs to be accomplished, instructors can structure their lessons around the students' needs. OBE does not prescribe a specific method of instruction, leaving instructors free to teach their students using any method. Instructors can also recognize diversity among students by using various teaching and assessment techniques during their class. OBE is meant to be a student-centered learning model. Teachers are meant to guide and help the students understand the material in any way necessary; study guides and group work are some of the methods instructors can use to facilitate student learning. OBE allows comparison across different institutions. On an individual level, institutions can look at what outcomes a student has achieved to decide what level the student would be at within a new institution. On an institutional level, institutions can compare themselves by checking to see what outcomes they have in common, and find places where they may need improvement based on the achievement of outcomes at other institutions. The ability to compare easily across institutions allows students to move between institutions with relative ease. The institutions can compare outcomes to determine what credits to award the student. The clearly articulated outcomes should allow institutions to assess the student's achievements rapidly, leading to increased movement of students. These outcomes also work for school-to-work transitions. A potential employer can look at the records of a potential employee to determine what outcomes they have achieved, and can then determine whether the potential employee has the skills necessary for the job. Student involvement in the classroom is a key part of OBE. Students are expected to do their own learning, so that they gain a full understanding of the material. Increased student involvement allows students to feel responsible for their own learning, and they should learn more through this individual learning. Other aspects of involvement are parental and community involvement, through developing the curriculum or making changes to it. OBE outcomes are meant to be decided upon within a school system or at a local level. Parents and community members are asked to give input in order to uphold the standards of education within a community and to ensure that students will be prepared for life after school. The definitions of the outcomes decided upon are subject to interpretation by those implementing them. Across different programs or even different instructors, outcomes could be interpreted differently, leading to a difference in education even though the same outcomes were said to be achieved. By outlining specific outcomes, a holistic approach to learning can be lost. Learning can be reduced to something that is specific, measurable, and observable. As a result, outcomes are not yet widely recognized as a valid way of conceptualizing what learning is about. When determining whether an outcome has been achieved, assessments may become too mechanical, looking only to see if the student has acquired the knowledge. The ability to use and apply the knowledge in different ways may not be the focus of the assessment. The focus on determining whether the outcome has been achieved can lead to a loss of understanding and learning for students, who may never be shown how to use the knowledge they have gained. 
Instructors are faced with a challenge: they must learn to manage an environment that can become fundamentally different from what they are accustomed to. With regard to assessments, they must be willing to put in the time required to create valid, reliable assessments that ideally allow students to demonstrate their understanding of the information while remaining objective. Education outcomes can constrain the nature of teaching and assessment. Assessing liberal outcomes such as creativity, respect for self and others, responsibility, and self-sufficiency can become problematic, as there is no measurable, observable, or specific way to determine whether a student has achieved these outcomes. Due to the nature of specific outcomes, OBE may actually work against its ideal of serving and creating individuals who have achieved many outcomes. Parental involvement, as discussed in the benefits section, can also be a drawback: if parents and community members are not willing to express their opinions on the quality of the education system, the system may not see a need for improvement and may not change to meet students' needs. Parents may also become too involved, requesting too many changes, so that important improvements get lost among the other changes being suggested. Instructors will also find that their work is increased; they must first work to understand the outcome, then build a curriculum around each outcome they are required to meet. Instructors have found that implementing multiple outcomes equally is difficult, especially in primary school. Instructors will also find their workload increased if they choose to use an assessment method that evaluates students holistically. In the early 1990s, all states and territories in Australia developed intended curriculum documents largely based on OBE for their primary and secondary schools. Criticism arose shortly after implementation. Critics argued that no evidence existed that OBE could be implemented successfully on a large scale, in either the United States or Australia. An evaluation of Australian schools found that implementing OBE was difficult. Teachers felt overwhelmed by the number of expected achievement outcomes. Educators believed that the curriculum outcomes did not attend to the needs of the students or teachers. Critics felt that too many expected outcomes left students with a shallow understanding of the material. Many of Australia's current education policies have moved away from OBE and towards a focus on fully understanding the essential content, rather than learning more content with less understanding. Officially, an agenda to implement outcome-based education took place between 1992 and 2008 in Western Australia. Dissatisfaction with OBE escalated from 2004, when the government proposed the implementation of an alternative assessment system using OBE "levels" for years 11 and 12. With government school teachers not permitted to publicly express dissatisfaction with the new system, a community lobby group called PLATO was formed in June 2004 by high school science teacher Marko Vojkavi. Teachers anonymously expressed their views through the website and online forums, and the website quickly became one of the most widely read educational websites in Australia, with more than 180,000 hits per month and an archive of more than 10,000 articles on the subject of OBE implementation. 
In 2008 it was officially abandoned by the state government, with Minister for Education Mark McGowan remarking that the 1990s fad "to dispense with syllabus" was over. In December 2012, the European Commission presented a new strategy to decrease the youth unemployment rate, which at the time was close to 23% across the European Union. The European Qualifications Framework calls for a shift towards learning outcomes in primary and secondary schools throughout the EU. Students are expected to learn skills that they will need when they complete their education. It also calls for lessons to have a stronger link to employment through work-based learning (WBL). Work-based learning for students should also lead to recognition of vocational training for these students. The program also sets goals for learning foreign languages and for teachers' continuing education. It also highlights the importance of using technology, especially the internet, in learning to make it relevant to students. Hong Kong's University Grants Committee adopted an outcomes-based approach to teaching and learning in 2005. No specific approach was created, leaving universities to design the approach themselves. Universities were also given the goal of ensuring an education for their students that will contribute to social and economic development, as defined by the community in which the university resides. With little to no outside direction or feedback, universities will have to determine on their own whether their approach is achieving its goals. OBE has been practiced in Malaysia since the 1950s; however, as of 2008, OBE is being implemented at all levels of education, especially tertiary education. This change is a result of the belief that the education system used prior to OBE inadequately prepared graduates for life outside of school. The Ministry of Higher Education has pushed for this change because of the number of unemployed graduates. Findings in 2006 state that nearly 70% of graduates from public universities were considered unemployed. A further study of those graduates found that they felt they lacked job experience, communication skills, and qualifications relevant to the current job market. The Malaysian Qualifications Agency (MQA) was created to oversee the quality of education and to ensure outcomes were being reached. The MQA created a framework that includes eight levels of qualification within higher education, covering three sectors: skills, vocational and technical, and academic. Along with meeting the standards set by the MQA, universities set and monitor their own outcome expectations for students. OBE was introduced to South Africa in the late 1990s by the post-apartheid government as part of its Curriculum 2005 program. Initial support for the program derived from anti-apartheid education policies. The policy also gained support from the labor movements, which borrowed ideas about competency-based education and vocational education from New Zealand and Australia and critiqued the apartheid education system. With no strong alternative proposals, the idea of outcome-based education, and a national qualification framework, became the policy of the African National Congress government. This policy was believed to be a democratization of education: people would have a say in what they wanted the outcomes of education to be. It was also believed to be a way to increase education standards and increase the availability of education. 
The National Qualifications Framework (NQF) went into effect in 1997. By 2001 it was apparent that the intended effects were not being seen. By 2006 no proposals to change the system had been accepted by the government, leaving the program in hiatus. The program came to be viewed as a failure, and a new curriculum improvement process was announced in 2010, slated to be implemented between 2012 and 2014. In 1983, a report from the National Commission on Excellence in Education declared that American education standards were eroding and that young people in the United States were not learning enough. In 1989, President Bush and the nation's governors set national goals to be achieved by the year 2000. The GOALS 2000: Educate America Act was signed in March 1994. The goal of this new reform was to show that results were being achieved in schools. In 2001, the No Child Left Behind Act took the place of Goals 2000. It mandated certain measurements as a condition of receiving federal education funds. States are free to set their own standards, but the federal law mandates public reporting of math and reading test scores for disadvantaged demographic subgroups, including racial minorities, low-income students, and special education students. Various consequences for schools that do not make "adequate yearly progress" are included in the law. In 2010, President Obama proposed improvements for the program. In 2012, the U.S. Department of Education invited states to request flexibility waivers in exchange for rigorous plans designed to improve students' education in the state. India became a permanent signatory member of the Washington Accord on 13 June 2014. India has started implementing OBE in higher technical education, such as diploma and undergraduate programmes. The National Board of Accreditation, a body for promoting international quality standards for technical education in India, has accredited only programmes run with OBE since 2013. The National Board of Accreditation mandates establishing a culture of outcomes-based education in institutions that offer engineering, pharmacy, and management programs. Analysing outcomes and using the analytical reports to find gaps and carry out continuous improvement is an essential cultural shift from how the above programs are run when an OBE culture is not embraced. Outcomes analysis requires a huge amount of data to be processed and made available at any time, anywhere. Such access to scalable, accurate, automated, and real-time data analysis is possible only if the institute adopts either a spreadsheet-based measurement system or some kind of home-grown or commercial software system. It has been observed that spreadsheet-based measurement and analysis systems do not scale when stakeholders want to analyse longitudinal data. Products such as inpods are available in India for implementing a culture of outcomes-based education for engineering, pharmacy, and management programs.
https://en.wikipedia.org/wiki?curid=22816
Olga of Kiev Saint Olga (baptized Elena; born c. 890–925 in Pskov, died 969 AD in Kiev) was a regent of Kievan Rus' for her son Svyatoslav from 945 until 960. Due to the imperfect transliteration between Old East Slavic and the English language, the name Olga is synonymous with Olha. At her baptism, Olga took the name Elena. She is known for her obliteration of the Drevlians, a tribe that had killed her husband Igor of Kiev. Although it was her grandson Vladimir who would convert the entire nation to Christianity, Olga is venerated, because of her efforts to spread Christianity through Rus', as a saint in the Eastern Orthodox Church with the epithet "Equal to the Apostles", and her feast day is the 11th of July. While Olga's birthdate is unknown, it could be as early as 890 AD and as late as 925 AD. According to the "Primary Chronicle", Olga was born and lived in Pskov. Little is known about her life before her marriage to Prince Igor I of Kiev and the birth of their son, Svyatoslav. Igor was the son and heir of Rurik, founder of the Rurik dynasty. After his father's death, Igor was under the guardianship of Oleg, who had consolidated power in the region, conquering neighboring tribes and establishing a capital in Kiev. This loose tribal federation became known as Kievan Rus', a territory covering what are now parts of Russia, Ukraine, and Belarus. The Drevlians were a neighboring tribe with which the growing Kievan Rus' empire had a complex relationship. The Drevlians had joined Kievan Rus' in military campaigns against the Byzantine Empire and paid tribute to Igor's predecessors. They stopped paying tribute upon Oleg's death and instead gave money to a local warlord. In 945, Igor set out to the Drevlian capital, Iskorosten (today known as Korosten in northern Ukraine), to force the tribe to pay tribute to Kievan Rus'. Confronted by Igor's larger army, the Drevlians backed down and paid him. As Igor and his army rode home, however, he decided the payment was not enough and returned, with only a small envoy, seeking more tribute. Upon his arrival in their territory, the Drevlians murdered Igor. According to the Byzantine chronicler Leo the Deacon, Igor's death was caused by a gruesome act of torture in which he was "captured by them, tied to tree trunks, and torn in two." D. Sullivan has suggested that Leo may have invented this sensationalist version of Igor's death, taking inspiration from Diodorus Siculus' account of a similar killing method used by the robber Sinis, who lived near the Isthmus of Corinth and was killed by Theseus. After Igor's death in 945, Olga ruled Kievan Rus' as regent on behalf of their son Svyatoslav. Little is known about Olga's tenure as ruler of Kiev, but the "Primary Chronicle" does give an account of her accession to the throne and her bloody revenge on the Drevlians for the murder of her husband, as well as some insight into her role as civil leader of the Kievan people. According to one archeologist, Knyaginya Olga, like all the other rulers before Vladimir the Great, used the bident as her personal symbol. After Igor's death at the hands of the Drevlians, Olga assumed the throne because her three-year-old son Svyatoslav was too young to rule. The Drevlians, emboldened by their success in ambushing and killing the king, sent a messenger to Olga proposing that she marry her husband's murderer, Prince Mal. Twenty Drevlian negotiators sailed to Kiev to pass along their king's message and to ensure Olga's compliance. 
They arrived in her court and told the queen why they were in Kiev: "to report that they had slain her husband...and that Olga should come and marry their Prince Mal." Olga responded: "Your proposal is pleasing to me; indeed, my husband cannot rise again from the dead. But I desire to honor you tomorrow in the presence of my people. Return now to your boat, and remain there with an aspect of arrogance. I shall send for you on the morrow, and you shall say, 'We will not ride on horses nor go on foot; carry us in our boat.' And you shall be carried in your boat." When the Drevlians returned the next day, they waited outside Olga's court to receive the honor she had promised. When they repeated the words she had told them to say, the people of Kiev rose up, carrying the Drevlians in their boat. The ambassadors believed this was a great honor, as if they were being carried in a palanquin. The people brought them into the court, where they were dropped into a trench that had been dug the day before on Olga's orders and buried alive. It is written that Olga bent down to watch them as they were buried and "inquired whether they found the honor to their taste." Olga then sent a message to the Drevlians that they should send "their distinguished men to her in Kiev, so that she might go to their Prince with due honor." The Drevlians, unaware of the fate of the first diplomatic party, gathered another party of men to send, "the best men who governed the land of Dereva." When they arrived, Olga commanded her people to draw them a bath and invited the men to appear before her after they had bathed. When the Drevlians entered the bathhouse, Olga had it set on fire from the doors, so that all the Drevlians within burned to death. Olga sent another message to the Drevlians, this time ordering them to "prepare great quantities of mead in the city where you killed my husband, that I may weep over his grave and hold a funeral feast for him." When Olga and a small group of attendants arrived at Igor's tomb, she did indeed weep and hold a funeral feast. The Drevlians sat down to join them and began to drink heavily. When the Drevlians were drunk, she ordered her followers to kill them, "and went about herself egging on her retinue to the massacre of the Drevlians." According to the "Primary Chronicle", five thousand Drevlians were killed on this night, but Olga returned to Kiev to prepare an army to finish off the survivors. The initial conflict between the armies of the two nations went very well for the forces of Kievan Rus', who won the battle handily and drove the survivors back into their cities. Olga then led her army to Iskorosten (what is today Korosten), the city where her husband had been slain, and laid siege to the city. The siege lasted for a year without success, until Olga thought of a plan to trick the Drevlians. She sent them a message: "Why do you persist in holding out? All your cities have surrendered to me and submitted to tribute, so that the inhabitants now cultivate their fields and their lands in peace. But you had rather die of hunger, without submitting to tribute." The Drevlians responded that they would submit to tribute, but that they were afraid she was still intent on avenging her husband. Olga answered that the murder of the messengers sent to Kiev, as well as the events of the feast night, had been enough for her. 
She then asked them for a small request: "Give me three pigeons...and three sparrows from each house." The Drevlians rejoiced at the prospect of the siege ending for so small a price, and did as she asked. Olga then instructed her army to attach a piece of sulphur bound with small pieces of cloth to each bird. At nightfall, Olga told her soldiers to set the pieces aflame and release the birds. The birds returned to their nests within the city, and the city was set ablaze. As the "Primary Chronicle" tells it: "There was not a house that was not consumed, and it was impossible to extinguish the flames, because all the houses caught fire at once." As the people fled the burning city, Olga ordered her soldiers to catch them, killing some of them and giving the others as slaves to her followers. She left the remnant to pay tribute. Olga remained regent of Kievan Rus' with the support of the army and her people. She changed the system of tribute gathering (poliudie) in the first legal reform recorded in Eastern Europe. She continued to evade proposals of marriage, defended the city during the Siege of Kiev in 968, and saved the power of the throne for her son. After her dramatic subjugation of the Drevlians, the "Primary Chronicle" recounts how Olga "passed through the land of Dereva, accompanied by her son and her retinue, establishing laws and tribute. Her trading posts and hunting-reserves are there still." As queen, Olga established trading posts and collected tribute along the Msta and the Luga rivers. She established hunting grounds, boundary posts, towns, and trading posts across the empire. Olga's work helped to centralize state rule with these trade centers, called "pogosti", which served as administrative centers in addition to their mercantile roles. Olga's network of "pogosti" would prove important in the ethnic and cultural unification of the Russian nation, and her border posts began the establishment of national boundaries for the kingdom. During her son's prolonged military campaigns, she remained in charge of Kiev, residing in the castle of Vyshgorod with her grandsons. The "Primary Chronicle" does not go into additional detail about Olga's time as regent, but it does tell the story of her conversion to Christianity and its subsequent effect on the acceptance of Christianity in Eastern Europe. In the 950s, Olga traveled to Constantinople, the capital of the Byzantine Empire, to visit Emperor Constantine VII. Once in Constantinople, Olga converted to Christianity with the assistance of the Emperor and the Patriarch. While the "Primary Chronicle" does not divulge Olga's motivation for her visit or conversion, it does go into great detail on the conversion process, in which she was baptized and instructed in the ways of Christianity: "The reigning Emperor was named Constantine, son of Leo. Olga came before him, and when he saw that she was very fair of countenance and wise as well, the Emperor wondered at her intellect. He conversed with her and remarked that she was worthy to reign with him in his city. When Olga heard his words, she replied that she was still a pagan, and that if he desired to baptize her, he should perform this function himself; otherwise, she was unwilling to accept baptism. The Emperor, with the assistance of the Patriarch, accordingly baptized her. When Olga was enlightened, she rejoiced in soul and body. 
The Patriarch, who instructed her in the faith, said to her, 'Blessed art thou among the women of Rus', for thou hast loved the light, and quit the darkness. The sons of Rus' shall bless thee to the last generation of thy descendants.' He taught her the doctrine of the Church, and instructed her in prayer and fasting, in almsgiving, and in the maintenance of chastity. She bowed her head, and like a sponge absorbing water, she eagerly drank in his teachings. The Princess bowed before the Patriarch, saying, 'Through thy prayers, Holy Father, may I be preserved from the crafts and assaults of the devil!' At her baptism she was christened Helena, after the ancient Empress, mother of Constantine the Great. The Patriarch then blessed her and dismissed her." While the "Primary Chronicle" notes that Olga was christened with the name "Helena" after the ancient Saint Helena (the mother of Constantine the Great), Jonathan Shepard argues that Olga's baptismal name comes from the contemporary emperor's wife, Helena. The observation that Olga was "worthy to reign with him in his city" suggests that the emperor was interested in marrying her. While the "Chronicle" explains Constantine's desire to take Olga as his wife as stemming from the fact that she was "fair of countenance and wise as well," marrying Olga could certainly have helped him gain power over Rus'. The "Chronicle" recounts that Olga asked the emperor to baptize her knowing that his baptismal sponsorship, by the rules of spiritual kinship, would make marriage between them a kind of spiritual incest. Though her desire to become Christian may have been genuine, this request was also a way for her to maintain political independence. After the baptism, when Constantine repeated his marriage proposal, Olga answered that she could not marry him, since Church law forbade a goddaughter to marry her godfather: "After her baptism, the Emperor summoned Olga and made known to her that he wished her to become his wife. But she replied, 'How can you marry me, after yourself baptizing me and calling me your daughter? For among Christians that is unlawful, as you yourself must know.' Then the Emperor said, 'Olga, you have outwitted me.' He gave her many gifts of gold, silver, silks, and various vases, and dismissed her, still calling her his daughter." Francis Butler argues that the story of the proposal was a literary embellishment, describing an event that is highly unlikely to have ever actually occurred. In fact, at the time of her baptism, Constantine already had an empress. In addition to uncertainty over the truth of the "Chronicle"'s telling of events in Constantinople, there is controversy over the details of her conversion to Christianity. According to Russian sources, she was baptized in Constantinople in 957. Byzantine sources, however, indicate that she was a Christian prior to her 957 visit. It seems likely that she was baptized in Kiev around 955 and, following a second christening in Constantinople, took the Christian name Helen. Olga was not the first person from Rus' to convert from pagan ways; there were Christians in Igor's court who had taken oaths at the St. Elias Church in Kiev for the Rus'–Byzantine Treaty in 945. She was, however, the most powerful Rus' individual to undergo baptism during her life. 
"The Primary" "Chronicle" reports that Olga received the Patriarch's blessing for her journey home, and that once she arrived, she unsuccessfully attempted to convert her son to Christianity:Now Olga dwelt with her son Svyatoslav, and she urged him to be baptized, but he would not listen to her suggestion, though when any man wished to be baptized, he was not hindered, but only mocked. For to the infidels, the Christian faith is foolishness. They do not comprehend it, because they walk in darkness and do not see the glory of God. Their hearts are hardened, and they can neither hear with their ears nor see with their eyes. For Solomon has said, ‘The deeds of the unrighteous are far from wisdom. Inasmuch as I have called you, and ye heard me not, I sharpened my words, and ye understood not. But ye have set at nought all my counsel, and would have none of my reproach. For they have hated knowledge, and the fear of Jehovah they have not chosen. They would none of my counsel, but despised all my reproof.This passage highlights the hostility towards Christianity in Kievan Rus’ in the tenth century. In the "Chronicle," Svyatoslav declares that his followers would “laugh” if he were to accept Christianity. While Olga tried to convince her son that his followers would follow his example if he converted, her efforts were in vain. However, her son agreed not to persecute those in his kingdom who did convert, which marked a crucial turning point for Christianity in the area. Despite the resistance of her people to Christianity, Olga built churches in Kiev, Pskov, and elsewhere. Seven Latin sources document Olga's embassy to Holy Roman Emperor Otto I in 959. The continuation of Regino of Prüm mentions that the envoys requested the emperor to appoint a bishop and priests for their nation. The chronicler accuses the envoys of lies, commenting that their trick was not exposed until later. Thietmar of Merseburg says that the first archbishop of Magdeburg, Saint Adalbert of Magdeburg, before being promoted to this high rank, was sent by Emperor Otto to the country of the Rus' ("Rusciae") as a simple bishop but was expelled by pagan allies of Svyatoslav I. The same data is repeated in the annals of Quedlinburg and Hildesheim. In 2018, Russian historian and writer Boris Akunin pointed out the importance 2-year gap between invitation and arrival of bishops: "The failure of Olga’s Byzantine trip has inflicted a severe blow to her party. The Grand Knyaginya made a second attempt to find a Christian patron, now in the West. But it seems, in the period between the sending of the embassy to Emperor Otto in 959 and the arrival of Adalbert in Kiev in 961, a bloodless coup took place. Pagan party prevailed, the young Sviatoslav pushed his mother into the background, and that's why the German bishops had to return empty-handed." According to Russian historian Vladimir Petrukhin, Olga has invited Roman Catholic bishops because she wanted to motivate Byzantine Orthodox priests for more enthusiastic catechization of Rus' people, by the competition. According to the "Primary Chronicle", Olga died from illness in 969, soon after the Pechenegs' siege of the city. When Svyatoslav announced plans to move his throne to the Danube region, the ailing Olga convinced him to stay with her during her final days. 
Only three days later, she died, and her family and all of Kievan Rus' wept: "Svyatoslav announced to his mother and his boyars, 'I do not care to remain in Kiev, but should prefer to live in Pereyaslavets on the Danube, since that is the centre of my realm, where all riches are concentrated; gold, silks, wine, and various fruits from Greece, silver and horses from Hungary and Bohemia, and from Rus' furs, wax, honey, and slaves.' But Olga made reply, 'You behold me in my weakness. Why do you desire to depart from me?' For she was already in precarious health. She thus remonstrated with him and begged him first to bury her and then to go wheresoever he would. Three days later Olga died. Her son wept for her with great mourning, as did likewise her grandsons and all the people. They thus carried her out, and buried her in her tomb. Olga had given command not to hold a funeral feast for her, for she had a priest who performed the last rites over the sainted Princess." Although he disapproved of his mother's Christian tradition, Svyatoslav heeded Olga's request that her priest, Gregory, conduct a Christian funeral without the ritual pagan burial feast. Her tomb remained in Kiev for over two centuries, but was destroyed by the Mongolian-Tatar armies of Batu Khan in 1240. At the time of her death, it seemed that Olga's attempt to make Kievan Rus' a Christian territory had been a failure. Nonetheless, Olga's Christianizing mission would be brought to fruition by her grandson, Vladimir, who officially adopted Christianity in 988. The "Primary Chronicle" highlights Olga's holiness in contrast to the pagans around her during her life, as well as the significance of her decision to convert to Christianity: "Olga was the precursor of the Christian land, even as the day-spring precedes the sun and as the dawn precedes the day. For she shone like the moon by night, and she was radiant among the infidels like a pearl in the mire, since the people were soiled, and not yet purified of their sin by holy baptism. But she herself was cleansed by this sacred purification…. She was the first from Rus' to enter the kingdom of God, and the sons of Rus' thus praise her as their leader, for since her death she has interceded with God on their behalf." In 1547, nearly 600 years after her death in 969, the Russian Orthodox Church named Olga a saint. Because of her proselytizing influence, the Eastern Orthodox Church, the Ruthenian Greek Catholic Church, and the Ukrainian Greek Catholic Church call Saint Olga by the honorific Isapóstolos, "Equal to the Apostles". She is also a saint in the Roman Catholic Church. Olga's feast day is July 11, the date of her death. In keeping with her own biography, she is the patron of widows and converts. Olga is venerated as a saint in East Slavic-speaking countries whose churches use the Byzantine Rite: the Eastern Orthodox Church (especially the Russian Orthodox Church), the Greek Catholic Churches (especially the Ukrainian Greek Catholic Church), churches of Byzantine Rite Lutheranism, and the Roman Catholic Church in Russia (Latin Rite). As an important figure in the history of Christianity, Olga's image as a saint lives on, but the question of Olga as a historical figure and character in the "Primary Chronicle" has been taken up in recent years. Olga's historical characterization as a vengeful princess, juxtaposed with her estimation within the Orthodox tradition as a saint, has produced a variety of modern interpretations of her story. 
Scholars tend to be more conservative with their interpretations, focusing on what the "Primary Chronicle" makes explicit: Olga's role in the spread of Christianity to Eastern Europe and Russia. These texts, generally speaking, focus on Olga's role as advisor to her son, whose decision not to persecute Christians in Kievan Rus' was a pivotal moment in the religious history of Russia and its neighboring lands. Academic work on Olga tends not to dwell on the narrative twists and turns of her story, instead focusing on extracting historical facts from it. Modern publications, however, have focused on her as a historical character. Journalists have penned articles with titles ranging from "Saint Olga of Kiev is the Best Warrior Princess You Never Knew" to "Meet the Murderous Viking Princess Who Brought the Faith to Eastern Europe." These texts, written for a broader audience, tend to treat Olga's exploits as a sort of historical drama. Her Viking heritage is always brought up, and often used as an explanation for her fiery spirit and military accomplishments. Authors focus on the most dramatic details of her story: her murder of two Drevlian negotiating groups, her wily deception of the Drevlian ruler, and her ultimate conquest of his people. A number of sources make her out to be a proto-feminist figure, a woman who did not allow contemporaneous expectations of gender roles to lock her out of the leadership role. Because there is little evidence to support the idea that Olga's rule was ever questioned by her people, this characterization of her rule is a medievalism: that is, an assumption made about history based not on facts but on preconceptions about the past, in this case the rigid relationship between gender and medieval rulership. Though a number of these contemporary sources refer to Olga as a "warrior princess," there is little evidence to suggest she actually participated in the fighting and killing of her enemies. Based on historical precedent, it is more likely that she was a commander of troops, a sort of general or commander-in-chief, than a warrior of particular skill. These assertions have still made their way into the public imagination, however, as evidenced by the appropriation of her image in the Eastern European heavy metal scene. This duality of Olga's character, on the one hand a venerated saint and on the other a bloodthirsty commander of troops, has made her an attractive figure for subversive artists. Her image has been taken up in the heavy metal scene in some cases, most notably as the muse and cover figure for A Perfect Absolution, a concept album by the French band Gorod about Olga of Kiev. According to Russian politician Vladimir Medinsky, the influence of Olga's image as a ruler is underappreciated among feminists: "Logically, Olga should have been a feminist icon. At least, a Russian feminist icon. To stand up and throw a boat full of guys into a pit. And to bury them. Unfortunately (or maybe, in this case, fortunately), Russian ladies, the regular consumers of "Cosmopolitan" and "Sex & the City", do not know history well. Seriously speaking, the fact that the second ruler of Russia was female is surprisingly poorly absorbed by the public consciousness... For sure, the memory of Olga will be refreshed, alongside Catherine the Great's." Medinsky also pointed out Olga's successful political PR: "For politicians, it is very important to be perceived as wise and cunning. In their case, these two qualities merge and make them seem exceptional. 
And in this sense, Olga is the most successful politician of Ancient Russia. Her image as both clever and cunning has survived through the ages." According to Russian historian Boris Akunin, the facts about Olga can be fairly clearly separated from the legends. For him, it is plausible that she murdered the envoys who wanted to replace her husband Igor with their Prince Mal, since Iskorosten was just two days' ride from Kiev, making it impossible to conceal the first public murder. It is also clear that she reconquered the Drevlians. On the other hand, her large-scale administrative and economic reforms had some controversial implications: "Olga secured for herself 'traps' (hunting lands) and 'camps' (guesting places). She was generally very concerned about the separation of her personal property from that of the state. It gave the Grand Knyazes the opportunity to dispose of the funds more freely, but at the same time it inserted a time bomb into the centralized state: after a period of time, the division of the country into 'Grand Kniaz's' and 'non-Grand-Kniaz's' parts would become one of the reasons for Kievan Rus' collapse. However, for the next 100 years, Olga had secured her family's power and wealth."
https://en.wikipedia.org/wiki?curid=22817
Olympus Mons Olympus Mons (; Latin for Mount Olympus) is a very large shield volcano on the planet Mars. The volcano has a height of nearly 22 km (13.6 mi or 72,000 ft) as measured by the Mars Orbiter Laser Altimeter (MOLA). Olympus Mons is about two and a half times Mount Everest's height above sea level. It is one of the largest volcanoes, the tallest planetary mountain, and the second tallest mountain currently discovered in the Solar System, comparable to Rheasilvia on Vesta. It is often cited as the largest volcano in the Solar System; however, by some metrics, other volcanoes are considerably larger. Alba Mons, northeast of Olympus Mons, has roughly 19 times the surface area, but is only about one third the height. Pele, the largest known volcano on Io, is also much larger, at roughly 4 times the surface area, but is comparatively flat. Additionally, Tharsis Rise, a large volcanic structure on Mars of which Olympus Mons is a part, has been interpreted as an enormous spreading volcano. If this is confirmed, Tharsis would be by far the largest volcano in the Solar System. Olympus Mons is the youngest of the large volcanoes on Mars, having formed during Mars's Hesperian Period. It had been known to astronomers since the late 19th century as the albedo feature Nix Olympica (Latin for "Olympic Snow"). Its mountainous nature was suspected well before space probes confirmed its identity as a mountain. The volcano is located in Mars's western hemisphere, just off the northwestern edge of the Tharsis bulge. The western portion of the volcano lies in the Amazonis quadrangle (MC-8) and the central and eastern portions in the adjoining Tharsis quadrangle (MC-9). Two impact craters on Olympus Mons have been assigned provisional names by the International Astronomical Union: Karzok crater and Pangboche crater. The craters are notable for being two of several suspected source areas for shergottites, the most abundant class of Martian meteorites. As a shield volcano, Olympus Mons resembles the shape of the large volcanoes making up the Hawaiian Islands. The edifice is extremely broad. Because the mountain is so large, with complex structure at its edges, allocating a height to it is difficult. Olympus Mons stands high above the Mars global datum, and its local relief, from the foot of the cliffs which form its northwest margin to its peak, is a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, to the northwest, to the summit is greater still. The summit of the mountain has six nested calderas (collapsed craters) forming a large, irregular depression. The volcano's outer edge consists of a tall escarpment, or cliff (although obscured by lava flows in places), a feature unique among the shield volcanoes of Mars, which may have been created by enormous flank landslides. Olympus Mons covers an area approximately the size of Italy or the Philippines, and it is supported by a thick lithosphere. The extraordinary size of Olympus Mons is likely because Mars lacks mobile tectonic plates. Unlike on Earth, the crust of Mars remains fixed over a stationary hotspot, and a volcano can continue to discharge lava until it reaches an enormous height. Being a shield volcano, Olympus Mons has a very gently sloping profile. The average slope on the volcano's flanks is only 5°. 
Slopes are steepest near the middle part of the flanks and grow shallower toward the base, giving the flanks a concave upward profile. The shape of Olympus Mons is distinctly asymmetrical: its flanks are shallower and extend farther from the summit in the northwestern direction than they do to the southeast. The volcano's shape and profile have been likened to a "circus tent" held up by a single pole that is shifted off center. Due to the size and shallow slopes of Olympus Mons, an observer standing on the Martian surface would be unable to view the entire profile of the volcano, even from a great distance. The curvature of the planet and the volcano itself would obscure such a synoptic view. Similarly, an observer near the summit would be unaware of standing on a very high mountain, as the slope of the volcano would extend far beyond the horizon, a mere 3 kilometers away. The typical atmospheric pressure at the top of Olympus Mons is 72 pascals, about 12% of the average Martian surface pressure of 600 pascals. Both are exceedingly low by terrestrial standards; by comparison, the atmospheric pressure at the summit of Mount Everest is 32,000 pascals, or about 32% of Earth's sea level pressure. Even so, high-altitude orographic clouds frequently drift over the Olympus Mons summit, and airborne Martian dust is still present. Although the average Martian surface atmospheric pressure is less than one percent of Earth's, the much lower gravity of Mars increases the atmosphere's scale height; in other words, Mars's atmosphere is expansive and does not drop off in density with height as sharply as Earth's. The composition of Olympus Mons is approximately 44% silicates, 17.5% iron oxides (which give the planet its red coloration), 7% aluminum, 6% magnesium, 6% calcium, and a particularly high proportion of sulfur oxide, at 7%. These results point to the surface being largely composed of basalts and other mafic rocks, which would have erupted as low-viscosity lava flows and hence led to the low gradients on the surface of the planet. Olympus Mons is an unlikely landing location for automated space probes in the near future. The high elevations preclude parachute-assisted landings because the atmosphere is insufficiently dense to slow the spacecraft down. Moreover, Olympus Mons stands in one of the dustiest regions of Mars. A mantle of fine dust obscures the underlying bedrock, possibly making rock samples hard to come by and likely posing a significant obstacle for rovers. Olympus Mons is the result of many thousands of highly fluid, basaltic lava flows that poured from volcanic vents over a long period of time (the Hawaiian Islands exemplify similar shield volcanoes on a smaller scale – see Mauna Kea). Like the basalt volcanoes on Earth, Martian basaltic volcanoes are capable of erupting enormous quantities of ash. Due to the reduced gravity of Mars compared to Earth, there are weaker buoyant forces on the magma rising out of the crust. In addition, the magma chambers are thought to be much larger and deeper than those found on Earth. The flanks of Olympus Mons are made up of innumerable lava flows and channels. Many of the flows have levees along their margins. The cooler, outer margins of the flow solidify, leaving a central trough of molten, flowing lava. Partially collapsed lava tubes are visible as chains of pit craters, and broad lava fans formed by lava emerging from intact, subsurface tubes are also common. 
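Two figures quoted above, the roughly 3 km horizon distance for an observer on the surface and the summit pressure of about 12% of the Martian mean, can be checked with elementary geometry and arithmetic. The short Python sketch below is illustrative only: the 1.7 m observer eye height, the 210 K mean atmospheric temperature, and the smooth-sphere approximation are assumptions, not values drawn from the sources above.

    import math

    # Horizon distance on a smooth sphere: d ~= sqrt(2 * R * h),
    # valid while the eye height h is tiny compared with the radius R.
    R_MARS = 3_389_500.0   # mean radius of Mars, meters
    EYE_HEIGHT = 1.7       # assumed eye height of a standing observer, meters
    horizon_km = math.sqrt(2 * R_MARS * EYE_HEIGHT) / 1000
    print(f"Horizon distance on Mars: {horizon_km:.1f} km")    # ~3.4 km

    # Summit pressure as a fraction of the Martian mean surface pressure.
    P_SUMMIT = 72.0    # pascals, top of Olympus Mons
    P_MEAN = 600.0     # pascals, Martian mean surface pressure
    print(f"Summit pressure: {P_SUMMIT / P_MEAN:.0%} of the mean")    # 12%

    # Atmospheric scale height H = R*T / (M*g): Mars's weaker gravity more
    # than offsets its heavier CO2 atmosphere, giving a larger H than
    # Earth's ~8.5 km, so density falls off less sharply with height.
    R_GAS = 8.314      # universal gas constant, J/(mol*K)
    T_MARS = 210.0     # assumed mean atmospheric temperature, kelvin
    M_CO2 = 0.044      # molar mass of CO2, kg/mol
    G_MARS = 3.71      # surface gravity of Mars, m/s^2
    print(f"Mars scale height: {R_GAS * T_MARS / (M_CO2 * G_MARS) / 1000:.1f} km")

The first result, about 3.4 km, is consistent with the "mere 3 kilometers" figure quoted above for a standing observer on level ground.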
In places along the volcano's base, solidified lava flows can be seen spilling out into the surrounding plains, forming broad aprons, and burying the basal escarpment. Crater counts from high-resolution images taken by the Mars Express orbiter in 2004 indicate that lava flows on the northwestern flank of Olympus Mons range in age from 115 million years ago (Mya) to only 2 Mya. These ages are very recent in geological terms, suggesting that the mountain may still be volcanically active, though in a very quiescent and episodic fashion. The caldera complex at the peak of the volcano is made of at least six overlapping calderas and caldera segments. Calderas are formed by roof collapse following depletion and withdrawal of the subsurface magma chamber after an eruption. Each caldera thus represents a separate pulse of volcanic activity on the mountain. The largest and oldest caldera segment appears to have formed as a single, large lava lake. Using geometric relationships of caldera dimensions from laboratory models, scientists have estimated that the magma chamber associated with the largest caldera on Olympus Mons lies at a considerable depth below the caldera floor. Crater size-frequency distributions on the caldera floors indicate that the calderas range in age from 350 Mya to about 150 Mya. All probably formed within 100 million years of each other. Olympus Mons is asymmetrical structurally as well as topographically. The longer, shallower northwestern flank displays extensional features, such as large slumps and normal faults. In contrast, the volcano's steeper southeastern side has features indicating compression, including step-like terraces in the volcano's mid-flank region (interpreted as thrust faults) and a number of wrinkle ridges located at the basal escarpment. The reason opposite sides of the mountain show different styles of deformation may lie in how large shield volcanoes grow laterally and in how variations within the volcanic substrate have affected the mountain's final shape. Large shield volcanoes grow not only by adding material to their flanks as erupted lava, but also by spreading laterally at their bases. As a volcano grows in size, the stress field underneath it changes from compressional to extensional. A subterranean rift may develop at the base of the volcano, causing the underlying crust to spread apart. If the volcano rests on sediments containing mechanically weak layers (e.g., beds of water-saturated clay), detachment zones (decollements) may develop in the weak layers. The extensional stresses in the detachment zones can produce giant landslides and normal faults on the volcano's flanks, leading to the formation of a basal escarpment. Farther from the volcano, these detachment zones can express themselves as a succession of overlapping, gravity-driven thrust faults. This mechanism has long been cited as an explanation of the Olympus Mons aureole deposits (discussed below). Olympus Mons lies at the edge of the Tharsis bulge, a vast, ancient volcanic plateau likely formed by the end of the Noachian Period. During the Hesperian, when Olympus Mons began to form, the volcano was located on a shallow slope that descended from the heights of Tharsis into the northern lowland basins. Over time, these basins received large volumes of sediment eroded from Tharsis and the southern highlands. 
The sediments likely contained abundant Noachian-aged phyllosilicates (clays) formed during an early period on Mars when surface water was plentiful, and were thickest in the northwest, where basin depth was greatest. As the volcano grew through lateral spreading, low-friction detachment zones preferentially developed in the thicker sediment layers to the northwest, creating the basal escarpment and widespread lobes of aureole material (Lycus Sulci). Spreading also occurred to the southeast; however, it was more constrained in that direction by the Tharsis rise, which presented a higher-friction zone at the volcano's base. Friction was higher in that direction because the sediments were thinner and probably consisted of coarser-grained material resistant to sliding. The competent and rugged basement rocks of Tharsis acted as an additional source of friction. This inhibition of southeasterly basal spreading in Olympus Mons could account for the structural and topographic asymmetry of the mountain. Numerical models of particle dynamics involving lateral differences in friction along the base of Olympus Mons have been shown to reproduce the volcano's present shape and asymmetry fairly well. It has been speculated that the detachment along the weak layers was aided by the presence of high-pressure water in the sediment pore spaces, which would have interesting astrobiological implications. If water-saturated zones still exist in sediments under the volcano, they would likely have been kept warm by a high geothermal gradient and residual heat from the volcano's magma chamber. Potential springs or seeps around the volcano would offer exciting possibilities for detecting microbial life. Olympus Mons and a few other volcanoes in the Tharsis region stand high enough to reach above the frequent Martian dust storms recorded by telescopic observers as early as the 19th century. The astronomer Patrick Moore pointed out that Schiaparelli (1835–1910) "had found that his "Nodus Gordis" and "Olympic Snow" [Nix Olympica] were almost the only features to be seen" during dust storms, and "guessed correctly that they must be high". The Mariner 9 spacecraft arrived in orbit around Mars in 1971 during a global dust storm. The first objects to become visible as the dust began to settle, the tops of the Tharsis volcanoes, demonstrated that the altitude of these features greatly exceeded that of any mountain found on Earth, as astronomers expected. Observations of the planet from Mariner 9 confirmed that Nix Olympica was a volcano. Ultimately, astronomers adopted the name "Olympus Mons" for the albedo feature known as Nix Olympica. Olympus Mons is located between the northwestern edge of the Tharsis region and the eastern edge of Amazonis Planitia. It stands well apart from the other three large Martian shield volcanoes, collectively called the Tharsis Montes (Arsia Mons, Pavonis Mons, and Ascraeus Mons). The Tharsis Montes are slightly smaller than Olympus Mons. A wide, annular depression or moat surrounds the base of Olympus Mons and is thought to be due to the volcano's immense weight pressing down on the Martian crust. The depth of this depression is greater on the northwest side of the mountain than on the southeast side. Olympus Mons is partially surrounded by a region of distinctive grooved or corrugated terrain known as the Olympus Mons aureole. The aureole consists of several large lobes. Northwest of the volcano, the aureole extends to its greatest distance and is known as Lycus Sulci. 
East of Olympus Mons, the aureole is partially covered by lava flows, but where it is exposed it goes by different names (Gigas Sulci, for example). The origin of the aureole remains debated, but it was likely formed by huge landslides or gravity-driven thrust sheets that sloughed off the edges of the Olympus Mons shield.
https://en.wikipedia.org/wiki?curid=22818
Object database An object database is a database management system in which information is represented in the form of objects, as used in object-oriented programming. Object databases are different from relational databases, which are table-oriented. Object-relational databases are a hybrid of both approaches. Object databases have been considered since the early 1980s. Object-oriented database management systems (OODBMSs), also called ODBMSs (Object Database Management Systems), combine database capabilities with object-oriented programming language capabilities. OODBMSs allow object-oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the OODBMS. Because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the OODBMS and the programming language use the same model of representation. Relational DBMS projects, by way of contrast, maintain a clearer division between the database model and the application. As the usage of web-based technology increases with the implementation of intranets and extranets, companies have a vested interest in OODBMSs to display their complex data. Using a DBMS that has been specifically designed to store data as objects gives an advantage to companies that are geared towards multimedia presentation or organizations that utilize computer-aided design (CAD). Some object-oriented databases are designed to work well with object-oriented programming languages such as Delphi, Ruby, Python, JavaScript, Perl, Java, C#, Visual Basic .NET, C++, Objective-C and Smalltalk; others such as JADE have their own programming languages. OODBMSs use exactly the same model as object-oriented programming languages. Object database management systems grew out of research during the early to mid-1970s into having intrinsic database management support for graph-structured objects. The term "object-oriented database system" first appeared around 1985. Notable research projects included Encore-Ob/Server (Brown University), EXODUS (University of Wisconsin–Madison), IRIS (Hewlett-Packard), ODE (Bell Labs), ORION (Microelectronics and Computer Technology Corporation or MCC), Vodak (GMD-IPSI), and Zeitgeist (Texas Instruments). The ORION project had more published papers than any of the other efforts. Won Kim of MCC compiled the best of those papers in a book published by The MIT Press. Early commercial products included Gemstone (Servio Logic, name changed to GemStone Systems), Gbase (Graphael), and Vbase (Ontologic). Additional commercial products entered the market in the late 1980s through the mid 1990s. These included ITASCA (Itasca Systems), Jasmine (Fujitsu, marketed by Computer Associates), Matisse (Matisse Software), Objectivity/DB (Objectivity, Inc.), ObjectStore (Progress Software, acquired from eXcelon which was originally Object Design, Incorporated), ONTOS (Ontos, Inc., name changed from Ontologic), O2 (O2 Technology, merged with several companies, acquired by Informix, which was in turn acquired by IBM), POET (now FastObjects from Versant which acquired Poet Software), Versant Object Database (Versant Corporation), VOSS (Logic Arts) and JADE (Jade Software Corporation). Some of these products remain on the market and have been joined by new open source and commercial products such as InterSystems Caché. Object database management systems added the concept of persistence to object programming languages. 
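The idea of language-integrated persistence can be illustrated with a minimal sketch. The Python example below uses the standard-library shelve module as a stand-in for an OODBMS: an object graph defined by the application is stored and fetched directly, with no decomposition into relational rows. The Customer and Address classes and the file name are hypothetical, and a real OODBMS would add transactions, indexing, and concurrency control on top of this.

    import shelve

    # Plain application classes; the "schema" is just the class definitions.
    class Address:
        def __init__(self, city):
            self.city = city

    class Customer:
        def __init__(self, name, address):
            self.name = name
            self.address = address   # direct object reference, not a foreign key

    # Store the object graph as-is, under a key.
    with shelve.open("customers.db") as db:
        db["c42"] = Customer("Ada", Address("London"))

    # Later, retrieve it in the same representation the language uses.
    with shelve.open("customers.db") as db:
        customer = db["c42"]
        print(customer.name, customer.address.city)   # Ada London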
The early commercial products were integrated with various languages: GemStone (Smalltalk), Gbase (LISP), Vbase (COP) and VOSS (Virtual Object Storage System for Smalltalk). For much of the 1990s, C++ dominated the commercial object database management market. Vendors added Java in the late 1990s and, more recently, C#. Starting in 2004, object databases saw a second growth period as open-source object databases emerged that were affordable and easy to use because they are written entirely in OOP languages like Smalltalk, Java, or C#; examples include Versant's db4o (db4objects), DTS/S1 from Obsidian Dynamics and Perst (McObject), available under dual open-source and commercial licensing. Object databases based on persistent programming acquired a niche in application areas such as engineering and spatial databases, telecommunications, and scientific areas such as high energy physics and molecular biology. Another group of object databases focuses on embedded use in devices, packaged software, and real-time systems. Most object databases also offer some kind of query language, allowing objects to be found using a declarative programming approach. It is in the area of object query languages, and the integration of the query and navigational interfaces, that the biggest differences between products are found. An attempt at standardization was made by the ODMG with the Object Query Language, OQL (an illustrative query is sketched below). Access to data can be faster because an object can be retrieved directly without a search, by following pointers. Another area of variation between products is in the way that the schema of a database is defined. A general characteristic, however, is that the programming language and the database schema use the same type definitions. Multimedia applications are facilitated because the class methods associated with the data are responsible for its correct interpretation. Many object databases, for example Gemstone or VOSS, offer support for versioning. An object can be viewed as the set of all its versions. Also, object versions can be treated as objects in their own right. Some object databases also provide systematic support for triggers and constraints, which are the basis of active databases. The efficiency of such a database is also greatly improved in areas which demand massive amounts of data about one item. For example, a banking institution could retrieve a customer's account object and efficiently provide extensive related information, such as transactions and account entries. The Object Data Management Group was a consortium of object database and object-relational mapping vendors, members of the academic community, and interested parties. Its goal was to create a set of specifications that would allow for portable applications that store objects in database management systems. It published several versions of its specification. The last release was ODMG 3.0. By 2001, most of the major object database and object-relational mapping vendors claimed conformance to the ODMG Java Language Binding. Compliance with the other components of the specification was mixed. In 2001, the ODMG Java Language Binding was submitted to the Java Community Process as a basis for the Java Data Objects specification. The ODMG member companies then decided to concentrate their efforts on the Java Data Objects specification. As a result, the ODMG disbanded in 2001. Many object database ideas were also absorbed into SQL:1999 and have been implemented in varying degrees in object-relational database products.
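For a sense of what the declarative style looks like, the following sketch issues an OQL query through the ODMG 3.0 Java binding (the org.odmg interfaces). It is illustrative only: the vendor-supplied Implementation object, the "Employees" extent, and the salary attribute are assumptions, since the binding defines interfaces that each product implements.

    import org.odmg.DBag;
    import org.odmg.Implementation;
    import org.odmg.ODMGException;
    import org.odmg.OQLQuery;

    public class OqlSketch {
        // Runs a declarative OQL query; a select without "distinct"
        // returns a bag, per the ODMG specification.
        static DBag wellPaid(Implementation impl) throws ODMGException {
            OQLQuery query = impl.newOQLQuery();
            query.create("select e from Employees e where e.salary > 50000");
            return (DBag) query.execute();
        }
    }

Note how the query refers to the same type definitions the program uses, the shared-schema characteristic described above.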
In 2005 Cook, Rai, and Rosenberger proposed dropping all standardization efforts to introduce additional object-oriented query APIs and instead using the OO programming language itself, i.e., Java and .NET, to express queries. As a result, Native Queries emerged. Similarly, Microsoft announced Language Integrated Query (LINQ) and DLINQ, an implementation of LINQ, in September 2005, to provide close, language-integrated database query capabilities with its programming languages C# and VB.NET 9. In February 2006, the Object Management Group (OMG) announced that they had been granted the right to develop new specifications based on the ODMG 3.0 specification and the formation of the Object Database Technology Working Group (ODBT WG). The ODBT WG planned to create a set of standards that would incorporate advances in object database technology (e.g., replication), data management (e.g., spatial indexing), and data formats (e.g., XML) and to include new features into these standards that support domains where object databases are being adopted (e.g., real-time systems). The work of the ODBT WG was suspended in March 2009 when, subsequent to the economic turmoil in late 2008, the ODB vendors involved in this effort decided to focus their resources elsewhere. In January 2007 the World Wide Web Consortium gave final recommendation status to the XQuery language. XQuery uses XML as its data model. Some of the ideas developed originally for object databases found their way into XQuery, but XQuery is not intrinsically object-oriented. Because of the popularity of XML, XQuery engines compete with object databases as a vehicle for storage of data that is too complex or variable to hold conveniently in a relational database. XQuery also allows modules to be written to provide encapsulation features that have been provided by object-oriented systems. Compared to XPath v1 and XSLT v1, however, XQuery v1 and XPath v2 are extremely complex (no FOSS software implemented these standards in the more than 10 years after their publication), and XML did not fit all community demands as an open format. Since the early 2000s JSON has gained community adoption and popularity in applications, surpassing XML in the 2010s. JSONiq, a query analog of XQuery for JSON (sharing XQuery's core expressions and operations), demonstrated the functional equivalence of the JSON and XML formats. In this context, the main strategy of OODBMS maintainers was to retrofit JSON to their databases (by using it as the internal data type). In January 2016, with the 9.5 release, PostgreSQL became the first FOSS OODBMS to offer an efficient JSON internal datatype (JSONB) with a complete set of functions and operations for all basic relational and non-relational manipulations. An object database stores complex data and relationships between data directly, without mapping to relational rows and columns, and this makes object databases suitable for applications dealing with very complex data. Objects can participate in many-to-many relationships and are accessed by following pointers, which link objects together to establish relationships. Another benefit of an OODBMS is that it can be programmed with small procedural differences without affecting the entire system.
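As a sketch of the Native Queries idea mentioned above, db4o's Java API lets a query be written as ordinary, compiler-checked Java rather than as a query-language string (the Pilot class is the illustrative one from the earlier example; the engine may optimize the predicate rather than instantiating every stored object):

    import com.db4o.ObjectContainer;
    import com.db4o.ObjectSet;
    import com.db4o.query.Predicate;

    public class NativeQuerySketch {
        // The database evaluates match() against every stored Pilot.
        static ObjectSet<Pilot> fastPilots(ObjectContainer db) {
            return db.query(new Predicate<Pilot>() {
                @Override
                public boolean match(Pilot pilot) {
                    return pilot.points() > 99;
                }
            });
        }
    }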
https://en.wikipedia.org/wiki?curid=22826
Orgy of the Dead Orgy of the Dead is a 1965 erotic horror film directed by Stephen C. Apostolof (under the alias A. C. Stephen). The screenplay was written by cult film director Edward D. Wood Jr., who adapted the screenplay into a novel. The film belongs to the genre of nudie cuties, narrative-based films featuring female nudity. It was an evolution of earlier films, which featured striptease and burlesque shows. These predecessors mostly depicted actual stage performances, sometimes attached to a frame story. The film has "little to no" story line. About 70 minutes of the film's running time features topless female dancers without any dialogue. The film opens to two muscle-bound men dressed in loincloths approaching a crypt. They open the doors, revealing a coffin. They remove the lid and exit the crypt, then the inhabitant of the coffin (Criswell) sits up to deliver an opening narration. A lone Chevrolet Corvair drives down a California desert road. Its passengers, Bob and Shirley, are arguing over the decision to use this night to search for a cemetery. Bob is a horror writer who hopes that the scene of a cemetery at night will bring him inspiration. The conversation ends when Bob accidentally drives the car off the road and over a cliff. The next scene opens to a nocturnal image of a fog-shrouded cemetery. The lonely figure of the Emperor walks towards a marble altar, sits, and then summons his "Princess of the Night", the Black Ghoul, who appears and bows before him. The Emperor warns that if the night's entertainment fails to please him, he will banish the souls of the entertainers to eternal damnation, indicating that he is an all-powerful demonic being. As the full moon appears, the Black Ghoul summons the first dancer of the night, a Native American woman. The Black Ghoul explains that this woman loved flames, and that her lovers and she died in flames. The woman dances and strips before the flames of the cemetery. The Black Ghoul then introduces the second dancer of the night, a streetwalker in life. While the woman dances, Bob and Shirley make their way to the cemetery and start observing the dance from a distance. Shirley suspects that they are observing a college initiation, though Bob seriously doubts her theory. The Emperor himself summons the third dancer, a woman who worshiped gold above all else. The Golden Girl dances in her turn, and the Emperor instructs his loin-clothed servants to reward her with gold. The supposed reward is soon revealed to be a punishment, as the servants place her in a cauldron with liquid gold. What emerges from the cauldron is a golden statue of the living woman who entered. The servants transport the immobile statue to a nearby crypt. A werewolf and a mummy appear and seize the intruding young couple. They are brought before the Emperor, who postpones deciding their fate. The intruders are tied up, side by side, and allowed to continue watching the dances. The Black Ghoul next introduces the fourth dancer, a "Cat Woman" (Texas Starr). She is depicted as a woman dressed in a leopard costume, which exposes her chest area. As she dances, a servant follows her around and thrashes her with a bullwhip, offering a sadomasochistic show for the spectators. The Emperor next calls for a Slave Girl to be whipped for his amusement. The slave wears a tunic and is chained to a wall. Following her torture session, the Slave Girl breaks free and becomes the fifth dancer of the night.
Later, the Black Ghoul exhibits a fascination with Shirley and scratches a mark on her. She draws a knife and seems about to kill Shirley, when the Emperor decides it is not yet time for the intruders to properly join them. The female ghoul reluctantly obeys. The Emperor is puzzled when a human skull appears instead of the next dancer. The Black Ghoul explains it is the symbol of the sixth dancer, who loved bullfighting and matadors. She used to dance over their demise, and now it is time to dance over her own. The dancer of apparent Spanish/Mexican heritage (Stephanie Jones) appears to perform. The Emperor and Ghoul briefly discuss the past of the dancer, who came to them on the Day of the Dead. The seventh dancer appears dressed in Polynesian garments. The Black Ghoul describes her as a worshiper of snakes, smoke, and flames. A rattlesnake is depicted along with her dance. The camera shifts to the mummy and the werewolf. The mummy voices his dislike of snakes and recalls the death of Cleopatra. He informs his companion that ancient Egypt had many snakes and they were the stuff of nightmares. The Emperor next expresses his boredom and demands "unusual" entertainment, while the Black Ghoul notes that the night is almost over. She reminds her superior that they will be gone at the first sight of the morning sun. They proceed to argue over the fate of Shirley. The argument ends with the introduction of the eighth dancer, a woman who murdered her husband on their wedding night. She dances with the skeleton of her spouse. The argument over Shirley then resumes, as the Ghoul claims her for her own. The Emperor feels the need to assert his own authority over the Black Ghoul. The ninth dancer was a zombie in life and remains zombie-like in death. The tenth and final dancer is introduced as one who died for feathers, fur, and fluff. She starts her dance in clothing matching this style. When the final dance ends, the Emperor finally offers Shirley to the Ghoul. The Ghoul briefly dances herself as she prepares to claim her prize, but dawn arrives and with it, sunlight. The Emperor and all his undead are reduced to bones. The final scene portrays Bob and Shirley waking up at the scene of the accident, surrounded by paramedics, suggesting it was all a dream. Criswell appears in his coffin to offer parting words to the audience. Apostolof was attracted to the project, because the film was "relatively very inexpensive" to produce and direct. The film's graveyard prologue is a recreation of the opening scene from Ed Wood's then-unreleased 1958 film "Night of the Ghouls". Originally, Wood titled the film's script "Night of the Ghouls", as he did not expect the 1958 film to ever be released. The film also had a working title "Ghoulies". The action begins when a young couple, Bob (William Bates) and Shirley (sexploitation actress Pat Barrington, billed as Pat Barringer) survive a car crash only to find themselves tied to posts in a misty cemetery where they are forced to watch dead spirits dance for the Emperor of the Night played by Criswell (best known for "Plan 9 from Outer Space"). Criswell reprises his role from the earlier film. Wood convinced Apostolof to cast his friend Criswell in the film. His lines were written on cue cards, which he had difficulty reading because he wasn't wearing his glasses. Ten striptease performances by topless dancers outfitted in various motifs comprise most of this movie. 
The Wolf Man (wearing a very obvious mask, with the actor's bare neck visible below the bottom of the mask) and the Mummy are also tossed in for comic relief. Barrington doubles as the blond Golden Girl (inspired by Shirley Eaton in "Goldfinger") while her red-headed "Shirley" character watches her perform. The dancing has been described as awkward and wooden, probably exacerbated by Apostolof firing the dance coordinator during the shooting of the film. Criswell's undead consort, the sexy Black Ghoul, was allegedly written for Maila Nurmi, a.k.a. Vampira, but was instead played by Fawn Silver, who wore a black bouffant wig. The Black Ghoul appears to have "pasty white skin", with red fingernails and lipstick. She wears a black dress, suggesting a funerary garment. Black, red, and white are the main colors associated with her. Wood served as writer, production manager, casting agent, and even held up cue cards on the film, although he did not direct. Apostolof paid Wood $400 for the script. Several gaffes typical of Wood-associated projects are present, such as day-for-night issues, poor integration of stock footage, obviously fake props and absurd dialogue (for example, at one point, Criswell declares something to be "more than a fact"). Wood allegedly stole money from the film's budget to purchase alcohol for himself. The incident caused a falling-out between Apostolof and Wood, and the two men would not collaborate again until 1972. An article on the making of this film was published in "Femme Fatales", 7:1 (June 1998). Film journalists Andrew J. Rausch and Charles E. Pratt describe the film's attempt at eroticism as juvenile, with a puerile focus on jiggling female breasts. They write that the viewer "just has to sit back in awe and speculate how something like this could come from the mind of a grown man". TV Guide criticized the film's "boring" striptease performances and music, but praised Wood's dialog as "priceless". Allmovie critic Mark Deming stated that the film "moves like molasses on a cold morning", and that "Orgy of the Dead" "is that rare film that would have been improved if Ed Wood had directed it". In September 2017, the film was restored in 2K and released on DVD and Blu-ray by Vinegar Syndrome.
https://en.wikipedia.org/wiki?curid=22829
Ostwald process The Ostwald process is a chemical process used for making nitric acid (HNO3). Wilhelm Ostwald developed the process, and he patented it in 1902. The Ostwald process is a mainstay of the modern chemical industry, and it provides the main raw material for the most common type of fertilizer production. Historically and practically, the Ostwald process is closely associated with the Haber process, which provides the requisite raw material, ammonia (NH3). Ammonia is converted to nitric acid in two stages. It is oxidized by heating with oxygen in the presence of a catalyst such as platinum with 10% rhodium, platinum metal on fused silica wool, copper or nickel, to form nitric oxide (nitrogen(II) oxide) and water (as steam). This reaction is strongly exothermic, making it a useful heat source once initiated: 4 NH3 + 5 O2 → 4 NO + 6 H2O. Stage two encompasses two reactions and is carried out in an absorption apparatus containing water. Initially nitric oxide is oxidized again to yield nitrogen dioxide (nitrogen(IV) oxide): 2 NO + O2 → 2 NO2. This gas is then readily absorbed by the water, yielding the desired product (nitric acid, albeit in a dilute form), while reducing a portion of it back to nitric oxide: 3 NO2 + H2O → 2 HNO3 + NO. The NO is recycled, and the acid is concentrated to the required strength by distillation. Alternatively, if the last step is carried out in air: 4 NO2 + O2 + 2 H2O → 4 HNO3. Typical conditions for the first stage contribute to an overall yield of about 98%. A complication that needs to be taken into consideration involves a side-reaction in the first step that reverts the nitric oxide back to nitrogen: 4 NH3 + 6 NO → 5 N2 + 6 H2O. This is a secondary reaction that is minimised by reducing the time the gas mixtures are in contact with the catalyst. The overall reaction is the sum of the first equation, 3 times the second equation, and 2 times the last equation, all divided by 2: 2 NH3 + 4 O2 → 2 HNO3 + 2 H2O. Alternatively, if the last step is carried out in air, the overall reaction is the sum of equation 1, 2 times equation 2, and equation 4, all divided by 2. Without considering the state of water, the result is the same overall equation.
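Worked through explicitly, the stoichiometric bookkeeping behind that overall equation (using the three numbered reactions filled in above) is:

    (1)        4 NH3 + 5 O2  → 4 NO + 6 H2O
    3 × (2)    6 NO + 3 O2   → 6 NO2
    2 × (3)    6 NO2 + 2 H2O → 4 HNO3 + 2 NO
    sum ÷ 2    2 NH3 + 4 O2  → 2 HNO3 + 2 H2O

The intermediates cancel: 6 NO are consumed and 4 + 2 produced, the 6 NO2 produced are consumed, and water nets out to 4 molecules on the right, leaving only ammonia, oxygen, nitric acid, and water in the overall equation.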
https://en.wikipedia.org/wiki?curid=22830
Oliver Heaviside Oliver Heaviside FRS (18 May 1850 – 3 February 1925) was an English self-taught electrical engineer, mathematician, and physicist who adapted complex numbers to the study of electrical circuits, invented mathematical techniques for the solution of differential equations (equivalent to Laplace transforms), reformulated Maxwell's field equations in terms of electric and magnetic forces and energy flux, and independently co-formulated vector analysis. Although at odds with the scientific establishment for most of his life, Heaviside changed the face of telecommunications, mathematics, and science. Heaviside was born in Camden Town, London, at 55 Kings Street (now Plender Street). He was a short and red-headed child, and suffered from scarlet fever when young, which left him with a hearing impairment. A small legacy enabled the family to move to a better part of Camden when he was thirteen and he was sent to Camden House Grammar School. He was a good student, placed fifth out of five hundred students in 1865, but his parents could not keep him at school after he was 16, so he continued studying for a year by himself and had no further formal education. Heaviside's uncle by marriage was Sir Charles Wheatstone (1802–1875), an internationally celebrated expert in telegraphy and electromagnetism, and the original co-inventor of the first commercially successful telegraph in the mid-1830s. Wheatstone took a strong interest in his nephew's education and in 1867 sent him north to work with Oliver's older brother Arthur, who was managing one of Wheatstone's telegraph companies in Newcastle-upon-Tyne. Two years later he took a job as a telegraph operator with the Danish Great Northern Telegraph Company laying a cable from Newcastle to Denmark using British contractors. He soon became an electrician. Heaviside continued to study while working, and by the age of 22 he published an article in the prestigious "Philosophical Magazine" on 'The Best Arrangement of Wheatstone's Bridge for measuring a Given Resistance with a Given Galvanometer and Battery' which received positive comments from physicists who had unsuccessfully tried to solve this algebraic problem, including Sir William Thomson, to whom he gave a copy of the paper, and James Clerk Maxwell. When he published an article on the duplex method of using a telegraph cable, he poked fun at R. S. Culley, the engineer in chief of the Post Office telegraph system, who had been dismissing duplex as impractical. Later in 1873 his application to join the Society of Telegraph Engineers was turned down with the comment that "they didn't want telegraph clerks". This riled Heaviside, who asked Thomson to sponsor him, and along with support of the society's president he was admitted "despite the P.O. snobs". In 1873 Heaviside had encountered Maxwell's newly published, and later famous, two-volume "Treatise on Electricity and Magnetism". In his old age Heaviside recalled the deep impression the treatise had made on him. Undertaking research from home, he helped develop transmission line theory (also known as the "telegrapher's equations"). Heaviside showed mathematically that uniformly distributed inductance in a telegraph line would diminish both attenuation and distortion, and that, if the inductance were great enough and the insulation resistance not too high, the circuit would be distortionless in that currents of all frequencies would have equal speeds of propagation. Heaviside's equations helped further the implementation of the telegraph.
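For reference, the transmission line theory referred to here is summarized by the telegrapher's equations. Writing R, L, G, and C for the series resistance, series inductance, shunt conductance, and shunt capacitance per unit length, the line voltage V(x, t) and current I(x, t) obey:

    \begin{aligned}
    \frac{\partial V}{\partial x} &= -R\,I - L\,\frac{\partial I}{\partial t} \\
    \frac{\partial I}{\partial x} &= -G\,V - C\,\frac{\partial V}{\partial t}
    \end{aligned}

Heaviside's distortionless condition is R/L = G/C: when it holds, all frequencies propagate at the same speed and suffer equal attenuation, which is the property that loading coils were later used to approximate.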
From 1882 to 1902, except for three years, he contributed regular articles to the trade paper "The Electrician", which wished to improve its standing, for which he was paid £40 per year. This was hardly enough to live on, but his demands were very small and he was doing what he most wanted to do. Between 1883 and 1887 these averaged 2–3 articles per month and these articles later formed the bulk of his "Electromagnetic Theory" and "Electrical Papers". In 1880, Heaviside researched the skin effect in telegraph transmission lines. That same year he patented, in England, the coaxial cable. In 1884 he recast Maxwell's mathematical analysis from its original cumbersome form (the equations had already been recast in quaternion form) to its modern vector terminology, thereby reducing twelve of the original twenty equations in twenty unknowns down to the four differential equations in two unknowns we now know as Maxwell's equations. The four re-formulated Maxwell's equations describe the nature of electric charges (both static and moving), magnetic fields, and the relationship between the two, namely electromagnetic fields. Between 1880 and 1887, Heaviside developed the operational calculus using "p" for the differential operator (which Boole had previously denoted by "D"), giving a method of solving differential equations by direct solution as algebraic equations. This later caused a great deal of controversy, owing to its lack of rigour. He famously said, "Mathematics is an experimental science, and definitions do not come first, but later on. They make themselves, when the nature of the subject has developed itself." On another occasion he asked somewhat more defensively, "Shall I refuse my dinner because I do not fully understand the process of digestion?" In 1887, Heaviside worked with his brother Arthur on a paper entitled "The Bridge System of Telephony". However, the paper was blocked by Arthur's superior, William Henry Preece of the Post Office, because part of the proposal was that loading coils (inductors) should be added to telephone and telegraph lines to increase their self-induction and correct the distortion which they suffered. Preece had recently declared self-inductance to be the great enemy of clear transmission. Heaviside was also convinced that Preece was behind the sacking of the editor of "The Electrician", which brought his long-running series of articles to a halt (until 1891). There was a long history of animosity between Preece and Heaviside. Heaviside considered Preece to be mathematically incompetent, an assessment supported by the biographer Paul J. Nahin: "Preece was a powerful government official, enormously ambitious, and in some remarkable ways, an utter blockhead." Preece's motivations in suppressing Heaviside's work were more to do with protecting Preece's own reputation and avoiding having to admit error than any perceived faults in Heaviside's work. The importance of Heaviside's work remained undiscovered for some time after publication in "The Electrician", and so its rights lay in the public domain. In 1897, AT&T employed one of its own scientists, George A. Campbell, and an external investigator, Michael I. Pupin, to find some respect in which Heaviside's work was incomplete or incorrect. Campbell and Pupin extended Heaviside's work, and AT&T filed for patents covering not only their research, but also the technical method of constructing the coils previously invented by Heaviside.
AT&T later offered Heaviside money in exchange for his rights; it is possible that the Bell engineers' respect for Heaviside influenced this offer. However, Heaviside refused the offer, declining to accept any money unless the company were to give him full recognition. Heaviside was chronically poor, making his refusal of the offer even more striking. But this setback had the effect of turning Heaviside's attention towards electromagnetic radiation, and in two papers of 1888 and 1889, he calculated the deformations of electric and magnetic fields surrounding a moving charge, as well as the effects of its entering a denser medium. This included a prediction of what is now known as Cherenkov radiation, and inspired his friend George FitzGerald to suggest what now is known as the Lorentz–FitzGerald contraction. In 1889, Heaviside first published a correct derivation of the magnetic force on a moving charged particle, which is the magnetic component of what is now called the Lorentz force. In the late 1880s and early 1890s, Heaviside worked on the concept of electromagnetic mass. Heaviside treated this as material mass, capable of producing the same effects. Wilhelm Wien later verified Heaviside's expression (for low velocities). In 1891 the British Royal Society recognized Heaviside's contributions to the mathematical description of electromagnetic phenomena by naming him a Fellow of the Royal Society, and the following year devoted more than fifty pages of the "Philosophical Transactions" of the Society to his vector methods and electromagnetic theory. In 1905 Heaviside was given an honorary doctorate by the University of Göttingen. In 1896, FitzGerald and John Perry obtained a civil list pension of £120 per year for Heaviside, who was now living in Devon, and persuaded him to accept it, after he had rejected other charitable offers from the Royal Society. In 1902, Heaviside proposed the existence of what is now known as the Kennelly–Heaviside layer of the ionosphere. Heaviside's proposal included means by which radio signals are transmitted around the Earth's curvature. The existence of the ionosphere was confirmed in 1923. The predictions by Heaviside, combined with Planck's radiation theory, probably discouraged further attempts to detect radio waves from the Sun and other astronomical objects. For whatever reason, there seem to have been no attempts for 30 years, until Jansky's development of radio astronomy in 1932. In later years his behavior became quite eccentric. According to associate B. A. Behrend, he became a recluse who was so averse to meeting people that he delivered the manuscripts of his "Electrician" papers to a grocery store, where the editors picked them up. Though he had been an active cyclist in his youth, his health seriously declined in his sixth decade. During this time Heaviside would sign letters with the initials "W.O.R.M." after his name. Heaviside also reportedly started painting his fingernails pink and had granite blocks moved into his house for furniture. In 1922, he became the first recipient of the Faraday Medal, which was established that year. As for his religious views, Heaviside was a Unitarian, but not a devout one. He was even said to have made fun of people who put their faith in a supreme being. Heaviside died on 3 February 1925, at Torquay in Devon after falling from a ladder, and is buried near the eastern corner of Paignton cemetery. He is buried with his father, Thomas Heaviside (1813–1896), and his mother, Rachel Elizabeth Heaviside.
The gravestone was cleaned thanks to an anonymous donor sometime in 2005. Most of his recognition was gained posthumously. In July 2014, academics at Newcastle University, UK, and the Newcastle Electromagnetics Interest Group founded the Heaviside Memorial Project in a bid to fully restore the monument through public subscription. The restored memorial was ceremonially unveiled on 30 August 2014 by Alan Heather, a distant relative of Heaviside. The unveiling was attended by the Mayor of Torbay, the MP for Torbay, an ex-curator of the Science Museum (representing the Institution of Engineering and Technology), the Chairman of the Torbay Civic Society, and delegates from Newcastle University. A collection of Heaviside's notebooks, papers, correspondence, notes and annotated pamphlets on telegraphy is held at the Institution of Engineering and Technology (IET) Archive Centre. Heaviside did much to develop and advocate vector methods and vector calculus. Maxwell's formulation of electromagnetism consisted of 20 equations in 20 variables. Heaviside employed the curl and divergence operators of the vector calculus to reformulate 12 of these 20 equations into four equations in four variables (B, E, J, and ρ), the form by which they have been known ever since (see Maxwell's equations). Less well known is that Heaviside's equations and Maxwell's are not exactly the same, and in fact it is easier to modify the former to make them compatible with quantum physics. The possibility of gravitational waves was also discussed by Heaviside using the analogy between the inverse-square law in gravitation and electricity. He invented the Heaviside step function, using it to calculate the current when an electric circuit is switched on. He was the first to use the unit impulse function now usually known as the Dirac delta function. He invented his operational calculus method for solving linear differential equations. This resembles the currently used Laplace transform method based on the "Bromwich integral", named after Bromwich, who devised a rigorous mathematical justification for Heaviside's operator method using contour integration. Heaviside was familiar with the Laplace transform method but considered his own method more direct. Heaviside developed the transmission line theory (also known as the "telegrapher's equations"), which had the effect of increasing the transmission rate over transatlantic cables by a factor of ten. It originally took ten minutes to transmit each character, and this immediately improved to one character per minute. Closely related to this was his discovery that telephone transmission could be greatly improved by placing electrical inductance in series with the cable. Heaviside also independently discovered the Poynting vector. Heaviside advanced the idea that the Earth's uppermost atmosphere contained an ionized layer known as the ionosphere; in this regard, he predicted the existence of what later was dubbed the Kennelly–Heaviside layer. In 1947 Edward Victor Appleton received the Nobel Prize in Physics for proving that this layer really existed. Heaviside coined several terms of art in electromagnetic theory, including "admittance", "conductance", "electret", "impedance", "inductance", "permeability", "permittivity", and "reluctance". Heaviside is sometimes also credited with coining "susceptance" (the imaginary part of admittance, reciprocal of reactance), but this is actually due to Charles Proteus Steinmetz.
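For reference, the four equations in the variables named above, written in modern SI vector form, are:

    \begin{aligned}
    \nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}, &
    \nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, \\
    \nabla \cdot \mathbf{B} &= 0, &
    \nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t},
    \end{aligned}

and the step function that bears his name is

    H(t) = \begin{cases} 0, & t < 0 \\ 1, & t \ge 0 \end{cases}

(the value assigned at t = 0 is a matter of convention).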
https://en.wikipedia.org/wiki?curid=22831
Ozone layer The ozone layer or ozone shield is a region of Earth's stratosphere that absorbs most of the Sun's ultraviolet radiation. It contains a high concentration of ozone (O3) in relation to other parts of the atmosphere, although still small in relation to other gases in the stratosphere. The ozone layer contains less than 10 parts per million of ozone, while the average ozone concentration in Earth's atmosphere as a whole is about 0.3 parts per million. The ozone layer is mainly found in the lower portion of the stratosphere, from approximately 15 to 35 kilometres (9 to 22 mi) above Earth, although its thickness varies seasonally and geographically. The ozone layer was discovered in 1913 by the French physicists Charles Fabry and Henri Buisson. Measurements of the sun showed that the radiation sent out from its surface and reaching the ground on Earth is usually consistent with the spectrum of a black body with a temperature in the range of 5,500–6,000 K (5,227 to 5,727 °C), except that there was no radiation below a wavelength of about 310 nm at the ultraviolet end of the spectrum. It was deduced that the missing radiation was being absorbed by something in the atmosphere. Eventually the spectrum of the missing radiation was matched to only one known chemical, ozone. Its properties were explored in detail by the British meteorologist G. M. B. Dobson, who developed a simple spectrophotometer (the Dobsonmeter) that could be used to measure stratospheric ozone from the ground. Between 1928 and 1958, Dobson established a worldwide network of ozone monitoring stations, which continue to operate to this day. The "Dobson unit", a convenient measure of the amount of ozone overhead, is named in his honor. The ozone layer absorbs 97 to 99 percent of the Sun's medium-frequency ultraviolet light (from about 200 nm to 315 nm wavelength), which otherwise would potentially damage exposed life forms near the surface. In 1976, atmospheric research revealed that the ozone layer was being depleted by chemicals released by industry, mainly chlorofluorocarbons (CFCs). Concerns that increased UV radiation due to ozone depletion threatened life on Earth, including increased skin cancer in humans and other ecological problems, led to bans on the chemicals, and the latest evidence is that ozone depletion has slowed or stopped. The United Nations General Assembly has designated September 16 as the International Day for the Preservation of the Ozone Layer. Venus also has a thin ozone layer at an altitude of 100 kilometers above the planet's surface. The photochemical mechanisms that give rise to the ozone layer were discovered by the British physicist Sydney Chapman in 1930. Ozone in the Earth's stratosphere is created by ultraviolet light striking ordinary oxygen molecules containing two oxygen atoms (O2), splitting them into individual oxygen atoms (atomic oxygen); the atomic oxygen then combines with unbroken O2 to create ozone, O3. The ozone molecule is unstable (although, in the stratosphere, long-lived) and when ultraviolet light hits ozone it splits into a molecule of O2 and an individual atom of oxygen, a continuing process called the ozone-oxygen cycle. Chemically, this can be described as: O2 + ultraviolet photon → 2 O; O + O2 → O3; O3 + ultraviolet photon → O2 + O. About 90 percent of the ozone in the atmosphere is contained in the stratosphere. Ozone concentrations are greatest between about 20 and 40 kilometres (12 and 25 mi), where they range from about 2 to 8 parts per million. If all of the ozone were compressed to the pressure of the air at sea level, it would be only about 3 millimetres (0.12 in) thick.
Although the concentration of the ozone in the ozone layer is very small, it is vitally important to life because it absorbs biologically harmful ultraviolet (UV) radiation coming from the sun. Extremely short or vacuum UV (10–100 nm) is screened out by nitrogen. UV radiation capable of penetrating nitrogen is divided into three categories, based on its wavelength; these are referred to as UV-A (400–315 nm), UV-B (315–280 nm), and UV-C (280–100 nm). UV-C, which is very harmful to all living things, is entirely screened out by a combination of dioxygen (< 200 nm) and ozone (> about 200 nm) by around 35 kilometres (21 mi) altitude. UV-B radiation can be harmful to the skin and is the main cause of sunburn; excessive exposure can also cause cataracts, immune system suppression, and genetic damage, resulting in problems such as skin cancer. The ozone layer (which absorbs from about 200 nm to 310 nm with a maximal absorption at about 250 nm) is very effective at screening out UV-B; for radiation with a wavelength of 290 nm, the intensity at the top of the atmosphere is 350 million times stronger than at the Earth's surface. Nevertheless, some UV-B, particularly at its longest wavelengths, reaches the surface, and is important for the skin's production of vitamin D. Ozone is transparent to most UV-A, so most of this longer-wavelength UV radiation reaches the surface, and it constitutes most of the UV reaching the Earth. This type of UV radiation is significantly less harmful to DNA, although it may still potentially cause physical damage, premature aging of the skin, indirect genetic damage, and skin cancer. The thickness of the ozone layer varies worldwide and is generally thinner near the equator and thicker near the poles. Thickness refers to how much ozone is in a column over a given area and varies from season to season. These variations are due to atmospheric circulation patterns and solar intensity. The majority of ozone is produced over the tropics and is transported towards the poles by stratospheric wind patterns. In the northern hemisphere these patterns, known as the Brewer-Dobson circulation, make the ozone layer thickest in the spring and thinnest in the fall. Ozone is produced over the tropics when circulation lifts ozone-poor air out of the troposphere and into the stratosphere, where solar UV radiation photolyzes oxygen molecules and turns them into ozone. Then, the ozone-rich air is carried to higher latitudes and drops into lower layers of the atmosphere. Research has found that the ozone levels in the United States are highest in the spring months of April and May and lowest in October. While the total amount of ozone increases moving from the tropics to higher latitudes, the concentrations are greater in high northern latitudes than in high southern latitudes, due to the ozone hole phenomenon. The highest amounts of ozone are found over the Arctic during the spring months of March and April, while the Antarctic has its lowest amounts of ozone during its spring months of September and October. The ozone layer can be depleted by free radical catalysts, including nitric oxide (NO), nitrous oxide (N2O), hydroxyl (OH), atomic chlorine (Cl), and atomic bromine (Br). While there are natural sources for all of these species, the concentrations of chlorine and bromine increased markedly in recent decades because of the release of large quantities of man-made organohalogen compounds, especially chlorofluorocarbons (CFCs) and bromofluorocarbons.
These highly stable compounds are capable of surviving the rise to the stratosphere, where Cl and Br radicals are liberated by the action of ultraviolet light. Each radical is then free to initiate and catalyze a chain reaction capable of breaking down over 100,000 ozone molecules. By 2009, nitrous oxide was the largest ozone-depleting substance (ODS) emitted through human activities. The breakdown of ozone in the stratosphere results in reduced absorption of ultraviolet radiation. Consequently, unabsorbed and dangerous ultraviolet radiation is able to reach the Earth's surface at a higher intensity. Ozone levels have dropped by a worldwide average of about 4 percent since the late 1970s. For approximately 5 percent of the Earth's surface, around the north and south poles, much larger seasonal declines have been seen, and are described as "ozone holes". The discovery of the annual depletion of ozone above the Antarctic was first announced by Joe Farman, Brian Gardiner and Jonathan Shanklin, in a paper which appeared in "Nature" on May 16, 1985. To support successful regulation attempts, the ozone case was communicated to lay persons "with easy-to-understand bridging metaphors derived from the popular culture" and related to "immediate risks with everyday relevance". The specific metaphors used in the discussion (ozone shield, ozone hole) proved quite useful and, compared to global climate change, the ozone case was much more seen as a "hot issue" and imminent risk. Lay people were cautious about a depletion of the ozone layer and the risks of skin cancer. In 1978, the United States, Canada and Norway enacted bans on CFC-containing aerosol sprays that damage the ozone layer. The European Community rejected an analogous proposal to do the same. In the U.S., chlorofluorocarbons continued to be used in other applications, such as refrigeration and industrial cleaning, until after the discovery of the Antarctic ozone hole in 1985. After negotiation of an international treaty (the Montreal Protocol), CFC production was capped at 1986 levels with commitments to long-term reductions. This allowed for a ten-year phase-in for developing countries (identified in Article 5 of the protocol). Since that time, the treaty was amended to ban CFC production after 1995 in the developed countries, and later in developing countries. Today, all of the world's 197 countries have signed the treaty. Beginning January 1, 1996, only recycled and stockpiled CFCs were available for use in developed countries like the US. This production phaseout was possible because of efforts to ensure that there would be substitute chemicals and technologies for all ODS uses. On August 2, 2003, scientists announced that the global depletion of the ozone layer may be slowing down because of the international regulation of ozone-depleting substances. In a study organized by the American Geophysical Union, three satellites and three ground stations confirmed that the upper-atmosphere ozone-depletion rate slowed significantly during the previous decade. Some breakdown can be expected to continue because of ODSs used by nations which have not banned them, and because of gases which are already in the stratosphere. Some ODSs, including CFCs, have very long atmospheric lifetimes, ranging from 50 to over 100 years. It has been estimated that the ozone layer will recover to 1980 levels near the middle of the 21st century. A gradual trend toward "healing" was reported in 2016. 
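To make the catalytic chain reaction described earlier in this section concrete, the chlorine cycle can be written as:

    Cl + O3 → ClO + O2
    ClO + O → Cl + O2
    net:  O3 + O → 2 O2

Because the Cl atom emerges unchanged, the pair of reactions can repeat until the chlorine is sequestered in a reservoir species, which is why a single radical can destroy on the order of 100,000 ozone molecules.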
Compounds containing C–H bonds (such as hydrochlorofluorocarbons, or HCFCs) have been designed to replace CFCs in certain applications. These replacement compounds are more reactive and less likely to survive long enough in the atmosphere to reach the stratosphere where they could affect the ozone layer. While being less damaging than CFCs, HCFCs can have a negative impact on the ozone layer, so they are also being phased out. These in turn are being replaced by hydrofluorocarbons (HFCs) and other compounds that do not destroy stratospheric ozone at all. The residual effects of CFCs accumulating within the atmosphere lead to a concentration gradient between the atmosphere and the ocean. This organohalogen compound dissolves into the ocean's surface waters and acts as a time-dependent tracer. This tracer helps scientists study ocean circulation by tracing biological, physical and chemical pathways. As ozone in the atmosphere prevents most energetic ultraviolet radiation from reaching the surface of the Earth, astronomical data at these wavelengths have to be gathered from satellites orbiting above the atmosphere and ozone layer. Most of the light from young hot stars is in the ultraviolet and so study of these wavelengths is important for studying the origins of galaxies. The Galaxy Evolution Explorer, GALEX, is an orbiting ultraviolet space telescope launched on April 28, 2003, which operated until early 2012.
https://en.wikipedia.org/wiki?curid=22834
Public Enemy (band) Public Enemy Radio (also known as Public Enemy) is an American hip hop group consisting of Chuck D and DJ Lord. Formed in Long Island, New York, in 1985, they are famous for their music with a heavy political message alongside extreme criticism of the media of the United States, with many of their works also revolving around frustrations and concerns of the African American community. Public Enemy's debut album, "Yo! Bum Rush the Show", was released in 1987 to critical acclaim and their second album, "It Takes a Nation of Millions to Hold Us Back" (1988), was the first hip hop album to top "The Village Voice"'s Pazz & Jop critics' poll. Their next three albums, "Fear of a Black Planet" (1990), "Apocalypse 91... The Enemy Strikes Black" (1991) and "Muse Sick-n-Hour Mess Age" (1994), were also well-received by music critics. Public Enemy has gone through lineup changes over the years, with Chuck D being the only constant member of the group. Professor Griff left in 1989 but rejoined in 1998, though he parted ways with the group again some years later. DJ Lord also joined Public Enemy in 1998 as the replacement for the group's original DJ, Terminator X. Flavor Flav, one of the co-founders of Public Enemy, was a member of the group for three-and-a-half decades, until he was fired in 2020. Since the split with Flavor Flav, Public Enemy have toured and recorded under the name Public Enemy Radio, with a lineup of Chuck D, Jahi, DJ Lord and the S1Ws. Public Enemy's first four albums during the late 1980s and early 1990s were all certified either gold or platinum and were, according to music critic Robert Hilburn in 1998, "the most acclaimed body of work ever by a hip hop act". Critic Stephen Thomas Erlewine called them "the most influential and radical band of their time". They were inducted into the Rock and Roll Hall of Fame in 2013. They were honored with the Grammy Lifetime Achievement Award at the 62nd Grammy Awards. Public Enemy was formed in 1985 by Carlton Ridenhour (Chuck D) and William Drayton (Flavor Flav), who met at Long Island's Adelphi University in the mid-1980s. Developing his talents as an MC with Flav while delivering furniture for his father's business, Chuck D and Spectrum City, as the group was called, released the record "Check Out the Radio", backed by "Lies", a social commentary—both of which would influence RUSH Productions' Run–D.M.C. and Beastie Boys. Chuck D put out a tape to promote WBAU (the radio station where he was working at the time) and to fend off a local MC who wanted to battle him. He called the tape "Public Enemy #1" because he felt like he was being persecuted by people in the local scene. This was the first reference to the notion of a public enemy in any of Chuck D's songs. The single was created by Chuck D with a contribution by Flavor Flav, though this was before the group "Public Enemy" was officially assembled. Around 1986, Bill Stephney, the former Program Director at WBAU, was approached by Ali Hafezi and offered a position with the label. Stephney accepted, and his first assignment was to help fledgling producer Rick Rubin sign Chuck D, whose song "Public Enemy Number One" Rubin had heard from Andre "Doctor Dré" Brown. According to the book "The History of Rap Music" by Cookie Lommel, "Stephney thought it was time to mesh the hard-hitting style of Run DMC with politics that addressed black youth.
Chuck recruited Spectrum City, which included Hank Shocklee, his brother Keith Shocklee, and Eric "Vietnam" Sadler, collectively known as the Bomb Squad, to be his production team and added another Spectrum City partner, Professor Griff, to become the group's Minister of Information. With the addition of Flavor Flav and another local mobile DJ named Terminator X, the group Public Enemy was born." According to Chuck, the S1W, which stands for Security of the First World, "represents that the black man can be just as intelligent as he is strong. It stands for the fact that we're not third-world people, we're first-world people; we're the original people." Hank Shocklee came up with the name Public Enemy based on "underdog love and their developing politics" and the idea from Def Jam staffer Bill Stephney following the Howard Beach racial incident, Bernhard Goetz, and the death of Michael Stewart: "The Black man is definitely the public enemy." Public Enemy started out as opening act for the Beastie Boys during the latter's "Licensed to Ill" popularity, and in 1987 released their debut album, "Yo! Bum Rush the Show", to critical acclaim. The album was the group's first step toward stardom. In October 1987, critic Simon Reynolds dubbed Public Enemy "a superlative 'rock' band". They released their second album "It Takes a Nation of Millions to Hold Us Back" in 1988, which performed better in the charts than their previous release, and included the hit single "Don't Believe the Hype" in addition to "Bring the Noise". "Nation of Millions ... " was the first hip hop album to be voted album of the year in "The Village Voice"'s influential Pazz & Jop critics' poll. In 1989, the group returned to the studio to record "Fear of a Black Planet", which continued their politically charged themes. The album was supposed to be released in late 1989, but was pushed back to April 1990. It was the most successful of their albums and, in 2005, was selected for preservation in the National Recording Registry. It included the singles "Welcome To The Terrordome", written after the band was criticized for Professor Griff's antisemitic comments, "911 Is a Joke", which criticized emergency response units for taking longer to arrive at emergencies in the black community than those in the white community, and "Fight the Power". "Fight the Power" is regarded as one of the most popular and influential songs in hip hop. It was the theme song of Spike Lee's "Do the Right Thing". The group's next release, "Apocalypse 91... The Enemy Strikes Black", continued this trend, with songs like "Can't Truss It", which addressed slavery and how the black community can fight back against oppression; "I Don't Wanna be Called Yo Nigga", a track that takes issue with the use of the word "nigga" outside of its original derogatory context. The album also included the controversial song and video "By the Time I Get to Arizona", which chronicled the black community's frustration that some US states did not recognize Martin Luther King Jr.'s birthday as a national holiday. The video featured members of Public Enemy taking out their frustrations on politicians in the states not recognizing the holiday. In 1992, the group was one of the first rap acts to perform at the Reading Festival, in the UK, headlining the second day of the three-day festival.
After a 1994 motorcycle accident shattered his left leg and kept him in the hospital for a full month, Terminator X relocated to his 15-acre farm in Vance County, North Carolina. By 1998, he was ready to retire from the group and focus full-time on raising African black ostriches on his farm. In late 1998, the group started looking for Terminator X's permanent replacement. Following several months of searching for a DJ, Professor Griff saw DJ Lord at a Vestax Battle and approached him about becoming the DJ for Public Enemy. DJ Lord joined as the group's full-time DJ just in time for Public Enemy's 40th World Tour. Since 1999, he has been the official DJ for Public Enemy on albums and world tours while winning numerous turntablist competitions, including multiple DMC finals. In 2007, the group released an album entitled "How You Sell Soul to a Soulless People Who Sold Their Soul?". Public Enemy's single from the album was "Harder Than You Think". Four years after "How You Sell Soul ... ", in January 2011, Public Enemy released the album "Beats and Places", a compilation of remixes and "lost" tracks. On July 13, 2012, "Most of My Heroes Still Don't Appear on No Stamp" was released and was exclusively available on iTunes. In July 2012, an advert on UK television for the London 2012 Summer Paralympics featured a short remix of the song "Harder Than You Think". The advert caused the song to reach No. 4 in the UK Singles Chart on September 2, 2012. On July 30, 2012, Public Enemy performed a free concert with Salt-N-Pepa and Kid 'n Play at Wingate Park in Brooklyn, New York as part of the Martin Luther King Jr. Concert Series. On August 26, 2012, Public Enemy performed at South West Four music festival in Clapham Common in London. On October 1, 2012, "The Evil Empire of Everything" was released. On June 29, 2013, they performed at Glastonbury Festival 2013. On September 14, 2013, they performed at Riot Fest & Carnival 2013 in Chicago, Illinois. On September 20, 2013, they performed at Riot Fest & Side Show in Byers, Colorado. In 2014 Chuck D launched PE 2.0 with Oakland rapper Jahi as a spiritual successor and "next generation" of Public Enemy. Jahi met Chuck D backstage during a soundcheck at the 1999 Rock & Roll Hall of Fame and later appeared as a support act on Public Enemy's 20th Anniversary Tour in 2007. PE 2.0's task is twofold, Jahi says: to "take select songs from the PE catalog and cover or revisit them" as well as to record new material with members of the original Public Enemy, including DJ Lord, Davy DMX, Professor Griff and Chuck D. PE 2.0's first album "People Get Ready" was released on October 7, 2014. "InsPirEd", PE 2.0's second album and part two of a proposed trilogy, was released a year later on October 11, 2015. "Man Plans God Laughs", Public Enemy's thirteenth album, was released in July 2015. On June 29, 2017, Public Enemy released their fourteenth album, "Nothing Is Quick in the Desert". The album was available for free download through Bandcamp until July 4, 2017. In late February 2020 it was announced that Public Enemy (billed as Public Enemy Radio) would perform at a campaign rally in Los Angeles, CA on March 1, 2020 for Bernie Sanders, who was campaigning to be the nominee of the Democratic Party in the 2020 presidential election. Days following the announcement, Flavor Flav took issue with the group being associated with the Sanders campaign and issued a cease and desist letter asking the campaign to not use the group's name or logo.
"While Chuck is certainly free to express his political views as he sees fit -- his voice alone does not speak for Public Enemy. The planned performance will only be Chuck D of Public Enemy, it will not be a performance by Public Enemy. Those who truly know what Public Enemy stands for know what time it is, there is no Public Enemy without Flavor Flav." Flavor Flav's statement read. Chuck D responded to the statement by saying "Flavor chooses to dance for his money and not do benevolent work like this. He has a year to get his act together and get himself straight or he’s out.” A lawyer for Chuck D added "Chuck could perform as Public Enemy if he ever wanted to; he is the sole owner of the Public Enemy trademark. He originally drew the logo himself in the mid-80’s, is also the creative visionary and the group’s primary songwriter, having written Flavor’s most memorable lines.” Prior to the group's performance at the Sanders rally, Chuck D issued a statement saying Flavor Flav had been fired from the group. "Public Enemy and Public Enemy Radio will be moving forward without Flavor Flav. We thank him for his years of service and wish him well." According to reports, Chuck D and Flavor Flav had been at odds for a while. In 2017, Flavor Flav sued Chuck D over claims his earnings from Public Enemy “diminished to almost nothing". Flavor Flav issued a statement shortly before his firing saying "I don’t want our family and our movement broken up. I am a little worried about my partner Chuck, I hope he is ok and that Public Enemy can get back to doing the good works we have done for 30 years…not for money but for people like me who have been denied their rights to participate because of bullshit policies. I have nothing personal against Bernie but I have issues with how he and his people have handled this". Following his firing his lawyer released statement taking shots at Chuck D and claiming that "masses of clock wearing fans" left the Sanders rally when Public Enemy Radio performed. Following the news of Flavor Flav being fired, on March 2, 2020, it was announced that the group would be releasing their first album under the Public Enemy Radio name titled "Loud Is Not Enough" which is due for release in April 2020. The album will feature the lineup of Chuck D, DJ Lord, Jahi and the S1Ws and according to a statement from the group it will be “taking it back to hip hop’s original DJ-and-turntablist foundation.” A Reuters report dated April 1, 2020 claimed that the breakup was a hoax to generate publicity and provide a commentary on disinformation. According to the report, Chuck D said "he and Flav concocted a fake split to grab attention and highlight media bias towards reporting bad news about hip hop." However, Flavor Flav denied these claims, writing on his Twitter, "I am not a part of your hoax"; referring to the COVID-19 pandemic, he continued, "There are more serious things in the world right now than April Fool's jokes and dropping records. The world needs better than this." Flav finished his Twitter post with the sentence "you say we are leaders so act like one." On June 19, 2020, Public Enemy (with Flavor Flav), released the single and music video for their anti-Donald Trump song “State of the Union (STFU)”. “Our collective voices keep getting louder. The rest of the planet is on our side. But it’s not enough to talk about change. You have to show up and demand change. Folks gotta vote like their lives depend on it, cause it does.” Chuck D said. 
Public Enemy made contributions to the hip-hop world with sonic experimentation as well as political and cultural consciousness, infused into skilled and poetic rhymes. Critic Stephen Thomas Erlewine wrote that "PE brought in elements of free jazz, hard funk, even musique concrète, via [its] producing team the Bomb Squad, creating a dense, ferocious sound unlike anything that came before." Terminator X's innovative scratching tricks can be heard on the songs "Rebel Without a Pause", "Night of the Living Baseheads", and "Shut 'Em Down". Public Enemy held a strong, pro-black, political stance. Before PE, politically motivated hip-hop was defined by a few tracks by Ice-T, Grandmaster Flash and the Furious Five, Kurtis Blow, and Boogie Down Productions. Other politically motivated opinions were shared by prototypical artists Gil Scott-Heron and the Last Poets. PE was a revolutionary hip-hop act whose entire image rested on a specified political stance. With the successes of Public Enemy, many hip-hop artists began to celebrate Afrocentric themes, such as Kool Moe Dee, Gang Starr, X Clan, Eric B. & Rakim, Queen Latifah, the Jungle Brothers, and A Tribe Called Quest. Public Enemy was one of the first hip-hop groups to do well internationally. PE changed the Internet's music distribution capability by being one of the first groups to release MP3-only albums, a format virtually unknown at the time. Public Enemy helped to create and define "rap metal" by collaborating with Living Colour in 1988 ("Funny Vibe"), with Sonic Youth on the 1990 song "Kool Thing", and with New York thrash metal outfit Anthrax in 1991. The single "Bring the Noise" was a mix of semi-militant black power lyrics, grinding guitars, and sporadic humor. The two bands, cemented by a mutual respect and the personal friendship between Chuck D and Anthrax's Scott Ian, introduced a hitherto alien genre to rock fans, and the two seemingly disparate groups toured together. Flavor Flav's pronouncement on stage that "They said this tour would never happen" (as heard on an Anthrax CD) has become a legendary comment in both rock and hip-hop circles. Metal guitarist Vernon Reid (of Living Colour) contributed to Public Enemy's recordings, and PE sampled Slayer's "Angel of Death" half-time riff on "She Watch Channel Zero?!" Members of the Bomb Squad produced or remixed works for other acts, like Bell Biv DeVoe, Ice Cube, Vanessa Williams, Sinéad O'Connor, Blue Magic, Peter Gabriel, LL Cool J, Paula Abdul, Jasmine Guy, Jody Watley, Eric B. & Rakim, 3rd Bass, Big Daddy Kane, EPMD, and Chaka Khan. According to Chuck D, "We had tight dealings with MCA Records and were talking about taking three guys that were left over from New Edition and coming up with an album for them. The three happened to be Ricky Bell, Michael Bivins, and Ronnie DeVoe, later to become Bell Biv DeVoe. Ralph Tresvant had been slated to do a solo album for years, Bobby Brown had left New Edition and experienced some solo success beginning in 1988, and Johnny Gill had just been recruited to come in, but [he] had come off a solo career and could always go back to that. At MCA, Hiram Hicks, who was their manager, and Louil Silas, who was running the show, were like, 'Yo, these kids were left out in the cold. Can y'all come up with something for them?' It was a task that Hank, Keith, Eric, and I took on to try to put some kind of hip-hop-flavored R&B shit down for them.
Subsequently, what happened in the four weeks of December [1989] was that the Bomb Squad knocked out a large piece of the production and arrangement on Bell Biv DeVoe's three-million-selling album "Poison". In January [1990], they knocked out "Fear of a Black Planet" in four weeks, and PE knocked out Ice Cube's album "AmeriKKKa's Most Wanted" in four to five weeks in February." The Bomb Squad also produced local talent such as Son of Bazerk, Young Black Teenagers, Kings of Pressure, and True Mathematics, and gave producer Kip Collins his start in the business. Poet and hip-hop artist Saul Williams uses a sample from Public Enemy's "Welcome to the Terrordome" in his song "Tr[n]igger" on the "Niggy Tardust" album. He also used a line from the song in his poem "amethyst rocks". The Manic Street Preachers track "Repeat (Stars and Stripes)" is a remix of the band's own anti-monarchy tirade by Public Enemy's production team, the Bomb Squad, of whom James Dean Bradfield and Richey Edwards were big fans. The song samples "Countdown to Armageddon" from "It Takes a Nation of Millions to Hold Us Back". The band had previously sampled Public Enemy on their 1991 single "Motown Junk". The revolutionary influence of the band is seen throughout hip-hop and is recognized in society and politics. The band "rewrote the rules of hip-hop", changing the image, sound, and message forever. Pro-black lyrics brought political and social themes to hardcore hip hop, with stirring ideas of racial equality and retribution against police brutality, aimed at disenfranchised blacks but appealing to all the poor and underrepresented. Before Public Enemy, hip hop music was seen as "throwaway entertainment", with trite sexist and homophobic lyrics. Public Enemy brought social relevance and strength to hip hop. They also brought black activist Louis Farrakhan to greater popularity, and they gave impetus to the Million Man March in 1995. The band's influence also extends beyond hip-hop: the group has been cited as an influence by artists as diverse as Autechre (who selected them for the All Tomorrow's Parties festival in 2003), Nirvana (Kurt Cobain cited "It Takes a Nation of Millions to Hold Us Back" among his favorite albums), Nine Inch Nails (who mentioned the band in the "Pretty Hate Machine" credits), Björk (who included "Rebel Without a Pause" in her Breezeblock mix in July 2007), Tricky (who covered "Black Steel in the Hour of Chaos" and appears in the "Do You Wanna Go Our Way ???" video), The Prodigy (who included "Public Enemy No. 1" on "The Dirtchamber Sessions Volume One"), Ben Harper, Underground Resistance (cited by both Mad Mike and Jeff Mills), Orlando Voorn, M.I.A., Amon Tobin, Mathew Jonson, Aphex Twin (for whom "Welcome to the Terrordome" was the first track played after the introduction at the Coachella festival in April 2008), Rage Against the Machine (sampling the track in their song "Renegades of Funk"), and My Bloody Valentine, whose sound was influenced by the Bomb Squad's production. The 1991 song "By the Time I Get to Arizona" from "Apocalypse 91... The Enemy Strikes Black" referenced the controversy a year earlier when Arizona cancelled a state holiday for Martin Luther King Jr., and the NFL switched Super Bowl XXVII from Arizona to California, at an estimated cost to the state of over $100 million. A video of "By the Time I Get to Arizona", which was shown only once on MTV, depicted Chuck D killing Arizona officials with machine guns and a car bomb.
This violent imagery attracted negative media attention, and was described by one newspaper columnist as being the opposite of what King died for. In 1989, in an interview with Public Enemy for the "Washington Times", the interviewing journalist, David Mills, lifted some quotations from a UK magazine in which the band was asked its opinion on the Arab–Israeli conflict. Professor Griff's comments apparently sympathized with the Palestinians, and he was accused of anti-Semitism. According to "Rap Attack 2" (p. 177), he suggested that "Jews are responsible for the majority of the wickedness in the world", in turn a quote from "The International Jew". Shortly after, Ridenhour expressed an apology on Griff's behalf. At a June 21, 1989, press conference, Ridenhour announced Griff's dismissal from the group, and a June 28 statement by Russell Simmons, president of Def Jam Recordings and Rush Artists Management, stated that Chuck D had disbanded Public Enemy "for an indefinite period of time". By August 10, however, Ridenhour denied that he had disbanded the group, and stated that Griff had been re-hired as "Supreme Allied Chief of Community Relations" (in contrast to his previous position with the group as Minister of Information). Griff later denied holding anti-Semitic views and apologized for the remarks. Several people who had worked with Public Enemy expressed concern about Ridenhour's leadership abilities and role as a social spokesman. In his 2009 book, entitled "Analytixz", Griff criticized his 1989 statement: "to say the Jews are responsible for the majority of wickedness that went on around the globe I would have to know about the majority of wickedness that went on around the globe, which is impossible ... I'm not the best knower. Then, not only knowing that, I would have to know who is at the crux of all of the problems in the world and then blame Jewish people, which is not correct." Griff also said that not only were his words taken out of context, but that the recording has never been released to the public for an unbiased listen. The controversy and apologies on behalf of Griff spurred Chuck D to reference the negative press they were receiving. In 1990, Public Enemy issued the single "Welcome to the Terrordome", which contains the lyrics: "Crucifixion ain't no fiction / So-called chosen frozen / Apologies made to whoever pleases / Still they got me like Jesus". These lyrics have been cited by some in the media as anti-Semitic, making supposed references to the concept of the "chosen people" with the lyric "so-called chosen" and to Jewish deicide with the last line. In 1999, the group released an album entitled "There's a Poison Goin' On"; its last song is called "Swindler's Lust". The Anti-Defamation League (ADL) claimed that the title of the song was a word play on the title of the Steven Spielberg movie "Schindler's List", about the genocide of Jews in World War II. Similarly, in 2000, a Public Enemy spin-off group released an album under the name Confrontation Camp, which according to the ADL is a pun on the term "concentration camp". The group consisted of Kyle Jason, Chuck D (under the name Mistachuck), and Professor Griff. In a letter to the editor, Leo Haber alludes to criticism by "The New York Times" writer Peter Watrous of the group's supposed homophobia.
Zoe Williams defended Public Enemy against charges of homophobia. Public Enemy was inducted into the Rock and Roll Hall of Fame in 2013.
https://en.wikipedia.org/wiki?curid=22841
Closings and cancellations following the September 11 attacks Many closings and cancellations followed the September 11, 2001 attacks, including the closure of major landmarks and buildings, restrictions on access to Lower Manhattan, and the postponement or cancellation of major sporting and other events. Landmarks were closed primarily because of fears that they might be attacked. At some places, streets leading up to the institutions were also closed. When they reopened, there was heightened security. Many states declared a state of emergency. Speaking at a press conference at 11:02 a.m. on the morning of the attacks, Mayor Rudy Giuliani told New Yorkers: "If you are south of Canal Street, get out. Walk slowly and carefully. If you can't figure what else to do, just walk north." The neighborhood was covered in dust and debris, and electrical failures caused traffic light outages. Emergency vehicles were given priority to respond to ongoing fires, building collapses, and expected mass casualties. Over a million workers and residents south of Canal Street evacuated, and police stopped pedestrians from entering Lower Manhattan. With subways shut down, vehicle traffic restricted, and tunnels closed, they mainly fled on foot, pouring over bridges and onto ferries to Brooklyn and New Jersey. On September 12, vehicle traffic was banned south of 14th Street, subway stations south of Canal Street were bypassed, and pedestrians were not permitted below Chambers Street. Vehicle traffic below Canal Street was not allowed until October 13. The New York Stock Exchange did not open on September 11, even as CNBC showed futures numbers early in the day. As Wall Street was covered in debris from the World Trade Center and suffered infrastructure damage, it remained closed until September 17. For at least a full day after the attacks, bridges and tunnels to Manhattan were closed to non-emergency traffic in both directions. Among other things, this interrupted scheduled deliveries of food and other perishables, leading to shortages in restaurants. From September 27, 2001, one-occupant cars were banned from crossing into Lower Manhattan from Midtown on weekday mornings (with the morning rush hour defined as lasting from 5:30 a.m. to noon), in an effort to relieve some of the crush of traffic in the city caused largely by the increased security measures and the closure of major vehicle and transit crossings. The tracks and stations under the WTC were shut down within minutes of the first plane crash. All remaining New York City Subway service was suspended from 10:20 a.m. to 12:48 p.m. Immediately after the attacks, and more so after the collapses of the Twin Towers, many trains running in Lower Manhattan lost power and had to be evacuated through the tunnels. Some trains had power but the signals did not, requiring special operating procedures to ensure safety. The IRT Broadway–Seventh Avenue Line, which ran below the World Trade Center between Chambers Street and Rector Street, was the most severely crippled. This section of the tunnel, including Cortlandt Street station (located directly underneath the World Trade Center), was badly damaged and had to be rebuilt. Service was immediately suspended south of Chambers Street and then cut back to 14th Street. There was also subsequent flooding on the line south of 34th Street–Penn Station.
After the flood was cleaned up, express service was able to resume on September 17, with 1 trains running between Van Cortlandt Park–242nd Street and 14th Street, making local stops north of and express stops south of 96th Street, while 2 and 3 trains made all stops in Manhattan (but bypassed all stations between Canal Street and Fulton Street until October 1). 1/9 skip-stop service was suspended. After a few switching delays at 96th Street, service was changed on September 19. The 1 train resumed local service in Manhattan, but was extended to New Lots Avenue in Brooklyn (switching onto the express tracks at Chambers Street) to replace the 3, which now terminated at 14th Street as an express. The 2 train continued to make local stops in Manhattan, and service between Chambers Street and South Ferry as well as skip-stop service remained suspended. Normal service on all four trains was restored September 15, 2002, but Cortlandt Street remained closed until September 8, 2018. Service on the BMT Broadway Line was also disrupted because the tracks from the Montague Street Tunnel run adjacent to the World Trade Center, and there were concerns that train movements could cause unsafe settling of the debris pile. Cortlandt Street station, which sits under Church Street, sustained significant damage in the collapse of the towers. It was closed until September 15, 2002, for removal of debris, structural repairs, and restoration of the track beds, which had suffered flood damage in the aftermath of the collapse. Starting September 17, 2001, N and R service was suspended and respectively replaced by the W (which was extended to Coney Island–Stillwell Avenue via the BMT Montague Street Tunnel, BMT Fourth Avenue Line, and BMT Sea Beach Line) and the M (also extended via Fourth Avenue to Bay Ridge–95th Street). In Queens, the W replaced the N while the G replaced the R. All service on the BMT Broadway Line ran local north of Canal Street except for the Q, which ran normally from 57th Street to Brighton Beach via Broadway and Brighton Express. J/Z skip-stop service was suspended at this time. Normal service on all seven trains resumed on October 28. The only subway line running between Midtown and Lower Manhattan was the IRT Lexington Avenue Line, which was overcrowded before the attacks and at crush density until the BMT Broadway Line reopened. Wall Street station was closed until September 21. The IND Eighth Avenue Line, which had a stub terminal serving the E train under Five World Trade Center, was undamaged but covered in soot. E trains were extended to Euclid Avenue, Brooklyn, replacing the then-suspended C train (the A and D trains replaced it as the local north of 59th Street–Columbus Circle on nights and weekends, respectively; the B train, which ran normally from 145th Street or Bedford Park Boulevard to 34th Street–Herald Square via Central Park West Local, also replaced C trains on weekdays). E service was cut back to Canal Street when C service resumed on September 21, but Chambers Street and Broadway–Nassau Street remained closed until October 1. World Trade Center station remained closed until January 2002. There were no reported casualties on the subway or loss of train cars, but an MCI coach bus was destroyed. Another bus was damaged, but was repaired and is back in normal service with a special commemorative livery. PATH started evacuating passengers from its Manhattan trains and tracks within minutes of the first plane crash.
The PATH station at World Trade Center was heavily damaged (a train parked in the station was crushed by debris and was removed during the excavation process in January 2002), and all service there was suspended. For several hours, PATH did not run any trains to Manhattan, but it was able to restore service on the Uptown Hudson Tubes to 33rd Street by the afternoon. Exchange Place was unusable, since the switch configuration at the time required all trains to continue to World Trade Center. As a result, PATH ran a modified service: Hoboken–Journal Square, Hoboken–33rd Street, and Newark–33rd Street. Exchange Place reopened with modifications on June 29, 2003; a temporary station replacing World Trade Center opened on November 23 of that year. Liberty Water Taxi and NY Waterway had a ferry terminal at the World Financial Center. As the area around the terminal was in the restricted zone, NY Waterway suspended service to the terminal, with alternate service going to Midtown and Wall Street, and Liberty Water Taxi service was suspended entirely. Free ad-hoc ferry service to New Jersey, Brooklyn, and Queens began by evening, with about half a million evacuees transported by Circle Line Tours, NY Waterway, privately owned dining boats, tugboats, and at least one fireboat. MTA buses were temporarily suspended south of Canal Street, and MTA and NJ Transit buses were re-routed to serve passengers arriving in Brooklyn and New Jersey by walking and taking ferries out of Manhattan. The Port Authority Bus Terminal was closed until September 13. Amtrak suspended all of its rail service nationwide until 6 p.m. Greyhound Bus Lines cancelled its bus service in the Northeast, but was running normally by September 13. The entire airspace of the United States and Canada was closed ("ground stop") by order of FAA National Operations Manager Ben Sliney (who was working his first day in that position), except for military, police, and medical flights. The implementation of Security Control of Air Traffic and Air Navigation Aids (SCATANA) was the first unplanned closure of U.S. airspace; military exercises known as Operation Skyshield had temporarily closed the airspace in the early 1960s. Domestic planes were diverted to the nearest available airport. All non-military flights needed specific approval from the United States Air Force and the FAA. Only a few dozen private aircraft received approval in that time period. Civil Air Patrol's aerial photography unit flew the earliest non-military flight granted approval. United Airlines temporarily cancelled all flights worldwide. Grounded passengers and planes were searched for security threats. By September 13, Amtrak had increased capacity 30% to deal with an influx of stranded plane passengers. President George W. Bush was transported to a secure location via Air Force One. Many incoming international flights were diverted to Atlantic Canada to avoid proximity to potential targets in the US and large cities in Canada. Some international flights that departed from South America were diverted to Mexico, whose airspace was not shut down. On Thursday night, the New York area airports (JFK, LaGuardia, and Newark) were closed again and reopened the next morning. The only traffic from LaGuardia during the closure was a single C-9C government VIP jet, departing at approximately 5:15 p.m. on the 12th.
Civilian air traffic was allowed to resume on September 13, 2001, with stricter airport security checks that disallowed, for example, the box-cutting knives that were used by the hijackers. (Reinforcement of cockpit doors began in October 2001, and was required for larger airlines by 2003.) First, stranded planes were allowed to fly to their intended destinations; then limited service resumed. The backlog of delayed passengers took several days to clear. Due to a translation error, controllers believed Korean Air Flight 85 might have been hijacked. Canadian Prime Minister Jean Chrétien and U.S. authorities ordered the United States Air Force to surround the plane, force it to land in Whitehorse, Yukon, and shoot it down if the pilots did not cooperate. Alaska Governor Tony Knowles ordered the evacuation of large hotels and government buildings in Anchorage. Also in Alaska, at nearby Valdez, the U.S. Coast Guard ordered all tankers filling up with oil to head out to sea. Canadian officials evacuated all schools and large buildings in Whitehorse before the plane landed safely. Many businesses across the United States closed after the intentional nature of the events became clear, and many national landmarks and financial district skyscrapers were evacuated out of fear of further attacks. In an atmosphere reminiscent of the assassination of John F. Kennedy in 1963, everyday life around the world came to a standstill in the days after the September 11 attacks. For this reason, as well as for reasons of perceived threat associated with large gatherings, many events were postponed or cancelled, while others were modified.
https://en.wikipedia.org/wiki?curid=28046
Memorials and services for the September 11 attacks The first memorials to the victims of the September 11 attacks in 2001 began to take shape online, as hundreds of webmasters posted their own thoughts, links to the Red Cross and other rescue agencies, photos, and eyewitness accounts. Numerous online September 11 memorials began appearing a few hours after the attacks, although many of these memorials were only temporary. Around the world, U.S. embassies and consulates became makeshift memorials as people came out to pay their respects. The "Tribute in Light" was the first major physical memorial at the World Trade Center site. A permanent memorial and museum, the National September 11 Memorial & Museum at the World Trade Center, were built as part of the design for overall site redevelopment. The Memorial consists of two massive pools set within the original footprints of the Twin Towers, with waterfalls cascading down their sides. The names of the victims of the attacks are inscribed around the edges of the waterfalls. Other permanent memorials have been constructed around the world. One of the places that saw many memorials and candlelight vigils was Pier A in Hoboken, New Jersey. There was also a memorial service on March 11, 2002, at dusk on Pier A, when the "Tribute in Light" first turned on, marking the half-year anniversary of the terrorist attack. A permanent September 11 memorial for Hoboken, called Hoboken Island, was chosen in September 2004. Soon after the attacks, temporary memorials were set up in New York and elsewhere. In Europe, Nissoria was one of the first public places to dedicate a memorial to September 11, which it commemorates annually. Nissoria is a small town located in the Province of Enna in Sicily, Italy. Two people with family ties to the community, Vincenzo DiFazio and Salvatore Lopez, lost their lives on September 11 at the World Trade Center. The then-mayor, Dr. Marco Murgo, along with Benito Chiara Sr. and his son Mario, developed a project to dedicate a small plot of land, adjacent to a local school and museum, entitled "Parco 11 Settembre". The commanding officer of the nearby U.S. Naval Air Station Sigonella met with a delegation from Nissoria and embraced the initiative. Since its dedication, and under the present-day mayor of Nissoria, Armando Glorioso, and Alberto Lunetta, who continue the event, American and Italian military personnel from the nearby base NAS Sigonella have joined local Italian authorities, dignitaries, and citizens in an annual commemoration of the tragedy. The Raoul Wallenberg Award was given to New York City in 2001 "For all of its citizens who searched for the missing, cared for the injured, gave comfort to loved ones of the missing or lost, and provided sustenance and encouragement to those who searched through the rubble at Ground Zero." On February 3, 2002, during the halftime show of Super Bowl XXXVI, rock group U2 performed "Where the Streets Have No Name" while the names of the victims were projected onto banners. Bono opened his jacket to reveal a U.S. flag pattern sewn into the inside lining.
At the opening ceremony of the 2002 Winter Olympics in Salt Lake City on February 8, a tattered American flag recovered from the World Trade Center site was carried into the stadium by American athletes, members of the Port Authority police, and members of the New York City police and fire departments. On February 23, 2003, the 45th Annual Grammy Awards were held at Madison Square Garden and paid tribute to those who died during the 9/11 attacks, to whom the ceremony was dedicated. Ceremony host Bruce Springsteen performed "The Rising" at the awards. American country singer Darryl Worley paid tribute to the victims with his 2003 single "Have You Forgotten?" from the album of the same name. Newark International Airport was renamed "Newark Liberty International Airport". On September 11, 2002, representatives from over 90 countries came to Battery Park City as New York City Mayor Michael Bloomberg lit an eternal flame to mark the first anniversary of the attacks. Leading the dignitaries were Canadian Prime Minister Jean Chrétien, U.N. Secretary General Kofi Annan, Bloomberg, and Secretary of State Colin Powell. The same day, the Victims of Terrorist Attack on the Pentagon Memorial was dedicated at Arlington National Cemetery near the Pentagon. The memorial is dedicated to the five individuals at the Pentagon whose remains were never found, and the partial remains of another 25 victims are buried beneath it. The names of the 184 victims of the Pentagon attack are inscribed on the memorial's side. A 9/11 memorial public sculpture by Ingrid Lahti was on display at Bellevue Downtown Park in Bellevue, Washington; it was installed September 11, 2002, and displayed through October. Many organizations held memorial services and events for the 10th anniversary of the attacks. The National 9/11 Flag was made from the tattered remains of an American flag found by recovery workers in the early morning of September 12, 2001. It was hanging precariously from scaffolding at a construction site next to Ground Zero. For safety reasons, the flag could not be taken down until late October 2001. Charlie Vitchers, a construction superintendent for the Ground Zero cleanup effort, had a crew recover the flag. It was placed in storage for seven years. The flag has made a number of appearances across the country, including at a Boston Red Sox game, a New York Giants home opener, and the USS "New York" commissioning ceremony. It also appeared on the "CBS Evening News" and on ABC's "World News Tonight" as "Persons of the Week". The flag began a national tour on Flag Day, June 14, 2009. It will visit all 50 states, where service heroes, veterans, and other honorees will each add stitching and material from other retired American flags in order to restore the original 13 stripes of the flag. The flag will have a permanent home at the National September 11 Memorial and Museum. The 9-11 Remembrance Flag was created to be a permanent reminder of the thousands of people lost in the September 11 attacks. The purpose of keeping the memories of September 11 alive is not to be forever mourning, but "learning from the circumstances and making every effort to prevent similar tragedies in our future." The flag is also meant to be a reminder of how the people of this country came together to help each other after the attacks. The red background of the flag represents the blood shed by Americans for their country. The stars represent the lost airplanes and their passengers.
The blue rectangles stand for the Twin Towers and the white pentagon represents the Pentagon building. The blue circle symbolizes the unity of the country after the attacks. The 9/11 National Remembrance Flag was designed by Stephan and Joanne Galvin soon after September 11, 2001. They wanted to do something to help and were inspired by a neighbor's POW/MIA flag. They wanted to sell the flag so people would remember the September 11 attacks and to raise money for relief efforts. The blue represents the colors of the flags of the states that were involved in the attacks. The black represents sorrow for innocent lives lost. The four stars stand for the four planes that crashed and the lives lost, both in the crashes and in the rescue efforts, as well as the survivors. The blue star is a representation of American Airlines Flight 77 and the Pentagon. The two white stars represent American Airlines Flight 11 and United Airlines Flight 175, as well as the Twin Towers. The red star stands for United Flight 93, which crashed in Shanksville, Pennsylvania, and all those who sacrificed their lives to protect the innocent. The colors of the stars represent the American flag. The four stars touch each other and the blue parts of the flag to symbolize the unity of the people of the United States. The National Flag of Honor and the National Flag of Heroes were created by John Michelotti for three main reasons: (1) "To immortalize the individual victims that were killed in the terrorist attacks of September 11, 2001." (2) "To give comfort to the families left behind knowing that their loved one will be forever honored and remembered." (3) "To create an enduring symbol, recognized by the world, of the human sacrifice that occurred on September 11, 2001." The Flag of Honor and the Flag of Heroes are based on the American flag. Both have the names of all the innocent people who were killed in the September 11 attacks printed on the red and white stripes of the American flag. Both flags have a white space across the bottom with the name of the flag and a description printed in black. The Flag of Honor reads: "This flag contains the names of those killed in the terrorist attacks of September 11. Now and forever it will represent their immortality. We shall never forget them." The Flag of Heroes reads: "This flag contains the names of the emergency service personnel who gave their lives to save others in the terrorist attacks of September 11. Now and forever it will represent their immortality. We shall never forget them." The Flag of Honor and the Flag of Heroes were featured at the NYC 9/11 Memorial Field 5th Anniversary in Manhattan's Inwood Hill Park, September 8–12, 2006. There were 3,000 flags, representing those who died in the September 11 attacks. The flags were also featured on the MSNBC "Today Show" and on ABC 13 News, Norfolk, VA. The Remembrance Flag has a white background with large, black Roman numerals IX/XI in the center and four black stars across the top. IX/XI are the Roman numerals for 9/11. The four stars represent World Trade Center North, World Trade Center South, the Pentagon, and Shanksville, PA. The 10th Anniversary September 11 Memorial Flag was designed by Carrot-Top Industries, a privately owned company in Hillsborough, NC. The exclusive 9/11 memorial flag was designed with the two World Trade Towers set inside a pentagon, decorated with a ribbon to commemorate all of the Americans who lost their lives on September 11, 2001.
The growing popularity of virtual worlds such as Second Life has led to the construction of permanent virtual memorials and exhibits. One example is a virtual World Trade Center memorial presented on September 11, 2007, in Second Life, on an island named after the original design, Celestial Requiem NYC. Its builders said they created the memorial because, in their view, the world had needed it done years ago, and the two additional years of waiting for the completion of the "Reflecting Absence" memorial in New York City (by Michael Arad and Peter Walker) was two years too long.
https://en.wikipedia.org/wiki?curid=28047
Airport security repercussions due to the September 11 attacks After the September 11 attacks, questions were raised regarding the effectiveness of airport security at the time, as all 19 hijackers involved in 9/11 managed to pass existing checkpoints and board the airplanes without incident. In the months and years following September 11, 2001, security at many airports worldwide was escalated to deter similar terrorist plots. Prior to September 11, 2001, airport screening was provided in the U.S. by private security companies contracted by the airline or airport. In November 2001, the Transportation Security Administration (TSA) was introduced to take over all of the security functions of the FAA, the airlines, and the airports. Among other changes introduced by the TSA, bulletproof and locked cockpit doors became standard on commercial passenger aircraft. In some countries, such as Sweden, Norway, and Finland, there had been no security checks, or only random ones, for domestic flights in 2001 and earlier. On or quickly after September 11, decisions were made to introduce full security checks there as well. They were implemented immediately where possible, but took one to two years to implement everywhere, since terminals were often not built with room for them. Cockpit doors on many aircraft are now reinforced and bulletproof to prevent unauthorized access. Passengers are prohibited from entering the cockpit during flight. Some aircraft are also equipped with CCTV cameras, so the pilots can monitor cabin activity. Pilots are now allowed to carry firearms, but they must be trained and licensed. In the U.S., more air marshals have been placed on flights to improve security. On September 11, hijackers Khalid al-Mihdhar, Majed Moqed, Nawaf al-Hazmi, and Salem al-Hazmi all set off the metal detector. Despite being scanned with a hand-held detector, the hijackers were passed through. Security camera footage later showed some hijackers had what appeared to be box cutters clipped to their back pockets. Box cutters and similar small knives were allowed on board certain aircraft at the time. Airport checkpoint screening has been significantly tightened since 2001, and security personnel are more thoroughly trained to detect weapons or explosives. In addition to standard metal detectors, many U.S. airports now employ full-body scanning machines, in which passengers are screened with millimeter wave technology to check for potential hidden weapons or explosives on their persons. Early body scanners provoked considerable controversy because the images produced by the machines were deemed graphic and intrusive. Many considered this an invasion of personal privacy, as TSA screeners were essentially shown an image of each passenger's naked body. Newer body scanners have since been introduced which do not produce an image, but rather alert TSA screeners to areas on the body where an unknown item or substance may be hidden. A TSA security screener then inspects the indicated area(s) manually. On September 11, some hijackers lacked proper identification, yet they were allowed to board because they were taking domestic flights. Since 9/11, all passengers 18 years or older must have valid, government-issued identification in order to fly. Airports may check the ID of any passenger (and staff member) at any time to ensure the details on the ID match those on the printed boarding pass. Only under exceptional circumstances may an individual fly without a valid ID.
If approved for flying without an ID, the individual will be subject to extra screening of their person and their carry-on items. The TSA does not have the capability to conduct background checks on passengers at checkpoints. Sensitive areas in airports, including airport ramps and operational spaces, are restricted from the general public. Called a SIDA (Security Identification Display Area) in the U.S., these spaces require special qualifications to enter. A European Union regulation required airlines to make sure that the individual boarding the aircraft is the same individual who checked in his or her luggage; this was implemented by verifying an individual's identification both at luggage check-in and when boarding. Some countries also fingerprint travelers or use retina and iris scanning to help detect potential criminals. In the wake of the 2015 Germanwings Flight 9525 crash, a pilot suicide in which the captain was unable to regain access to the flight deck, some have stated that security features added to commercial airliners after 9/11 actually work against the safety of such planes. In 2003, John Gilmore sued United Airlines, Southwest Airlines, and then-U.S. Attorney General John Ashcroft, arguing that requiring passengers to show identification before boarding domestic flights is tantamount to an internal passport, and is unconstitutional. Gilmore lost the case, known as "Gilmore v. Gonzales", and an appeal to the U.S. Supreme Court was denied.
https://en.wikipedia.org/wiki?curid=28051
U.S. government response to the September 11 attacks After the September 11, 2001 attacks, the U.S. government responded with immediate action (including rescue operations at the site of the World Trade Center and the grounding of civilian aircraft) and long-term action, including investigations, legislative changes, military action, and restoration projects. Investigations into the motivations and execution of the attacks prompted the declaration of a War on Terrorism, which led to ongoing military engagements in Afghanistan and subsequently Iraq. Clean-up and restoration efforts led to the rebuilding of Lower Manhattan, and federal grants supported the development of the National September 11 Memorial & Museum. Within hours of the attacks in New York, a massive search and rescue (SAR) operation was launched, which included over 350 search and rescue dogs. Only a handful of wounded people were found at the site, and in the weeks that followed it became evident that no further survivors would be found; in total, only twenty survivors were recovered from the rubble. Rescue and recovery efforts took months to complete. It took several weeks to put out the fires burning in the rubble of the buildings, and the clean-up was not completed until May 2002. Temporary wooden "viewing platforms" were set up for tourists to view construction crews clearing out the gaping holes where the towers once stood. All of these platforms were closed on May 30, 2002. Many relief funds were immediately set up to assist victims of the attacks, with the task of providing financial assistance to the survivors and the families of victims. By the victims' compensation deadline of September 11, 2003, 2,833 applications had been received from the families of those killed. In the aftermath of the attacks, many U.S. citizens held the view that the attacks had "changed the world forever." The Bush administration announced a war on terrorism, with the goal of bringing Osama bin Laden and al-Qaeda to justice and preventing the emergence of other terrorist networks. These goals would be accomplished by means including economic and military sanctions against states perceived as harboring terrorists and increasing global surveillance and intelligence sharing. Immediately after the September 11 attacks, U.S. officials speculated on possible involvement by Saddam Hussein. Because the attacks on the United States were judged to be within the parameters of its charter, NATO declared on September 12, 2001, that Article 5 of the NATO agreement was satisfied, making the US war on terrorism the first time since its inception that NATO would actually participate in a "hot" war. Following the attacks, 762 suspects were taken into custody in the United States. On December 12, 2001, Fox News reported that some 60 Israelis were among them. Federal investigators were reported to have described them as part of a long-running effort to spy on American government officials. A "handful" of these Israelis were described as active Israeli military or intelligence operatives. In a letter to the editor, Ira Glasser, former head of the ACLU, claimed that none of those 762 detainees were charged with terrorism. "The Justice Department inspector general's report implies more than the violation of the civil liberties of 762 non-citizens. It also implies a dysfunctional and ineffective approach to protecting the public after Sept. 11, 2001... No one can be made safer by arresting the wrong people".
Immediately after opening the hunt for Osama bin Laden, President Bush also visited the Islamic Center of Washington and asked the public to view Arabs and Muslims living in the United States as American patriots. Congress passed and President Bush signed the Homeland Security Act of 2002, creating the Department of Homeland Security and representing the largest restructuring of the U.S. government in contemporary history. Congress also passed the USA PATRIOT Act, stating that it would help detect and prosecute terrorism and other crimes. Civil liberties groups have criticized the PATRIOT Act, saying that it allows law enforcement to invade the privacy of citizens and eliminates judicial oversight of law enforcement and domestic intelligence gathering. The Bush administration also invoked 9/11 as the reason to have the National Security Agency initiate a secret operation "to eavesdrop on telephone and e-mail communications between the United States and people overseas without a warrant." On June 6, 2002, Attorney General Ashcroft proposed regulations that would create a special registration program requiring males aged 16 to 64 who were citizens of designated foreign nations and resident in the U.S. to register with the Immigration and Naturalization Service (INS), have their identity verified, and be interviewed, photographed, and fingerprinted. Called the National Security Entry-Exit Registration System (NSEERS), it comprised two programs: the tracking of arrivals and departures on the one hand, and voluntary registrations of those already in the U.S., known as the "call-in" program, on the other. The DOJ acted under the authority of the Immigration and Nationality Act of 1952, which had authorized a registration system that was allowed to lapse in the 1980s because of budget concerns. Ashcroft identified those required to register as "individuals of elevated national security concern who stay in the country for more than 30 days." The processing of arrivals as part of their customs screening began in October 2002. It first focused on arrivals from Iran, Iraq, Libya, Sudan, and Syria.
https://en.wikipedia.org/wiki?curid=28061
Financial assistance following the September 11 attacks Charities and relief agencies raised over $657 million in the three weeks following the September 11, 2001 attacks, with the vast bulk going to immediate survivors and victims' families. On September 21, 2001, Congress approved a bill to aid the airline industry and establish a federal fund for victims. The cost of the mostly open-ended fund reached $7 billion. Victims of earlier terrorist attacks, including those linked to al-Qaeda, were not included in the fund, nor were those who would not surrender the right to hold the airlines legally responsible. From the donations to the Emergency Relief Fund, as of November 19, 2001, the American Red Cross had issued 3,165 checks to 2,776 families, totaling $54.3 million. 172,612 cases were referred to mental health contacts. The 866-GET-INFO number received 29,820 calls. As of 3:10 p.m. on November 20, 2001, there had been 1,592,295 blood donations since September 11. "Fire Donations" took charitable contributions on behalf of firefighters, EMS, and rescue workers. On September 14 and 15, 2001, various relief supplies for the World Trade Center relief effort were collected from the New York City area and dropped off at the Javits Convention Center or at a staging area at Union Square. By Saturday morning, enough supplies (and volunteers) had been collected. Many families and friends of victims have set up memorial funds and projects to give back to their communities and change the world in honor of their loved ones' lives.
https://en.wikipedia.org/wiki?curid=28064
Rescue and recovery effort after the September 11 attacks on the World Trade Center The local, state, federal, and global reaction to the September 11 attacks on the World Trade Center was considerable. The events of that day elicited a large response of local emergency and rescue personnel to assist in the evacuation of the two towers, resulting in large losses among those same personnel when the towers collapsed. After the attacks, the media termed the World Trade Center site "Ground Zero", while rescue personnel referred to it as "the Pile". In the ensuing recovery and cleanup efforts, personnel from the metalwork and construction professions descended on the site to offer their services and remained until the site was cleared in May 2002. In the years since, investigations and studies have examined the effects upon those who participated, noting a variety of afflictions attributed to the debris and stress. After American Airlines Flight 11 crashed into the North Tower (1 WTC) of the World Trade Center, a standard announcement was given to tenants in the South Tower (2 WTC) that the building was secure and that they should stay put. However, many defied those instructions and proceeded to evacuate the South Tower (most notably, Rick Rescorla, Morgan Stanley's security director, evacuated 2,687 of the 2,700 Morgan Stanley employees in the building). All people evacuating were directed through a door on the mezzanine level that led to a bridge to another building, and everyone was evacuated through the neighboring building. The firefighters in charge did not want anyone going through the front doors, at first due to falling debris, and then because of falling people who had jumped from the towers. Standard evacuation procedures for fires in the World Trade Center called for evacuating only the floors immediately above and below the fire, as simultaneous evacuation of up to 50,000 workers would be too chaotic. Firefighters from the New York City Fire Department rushed to the World Trade Center minutes after the first plane struck the North Tower. Chief Joseph Pfeifer and his crew with Battalion 1 were among the first on the scene. At 8:50 a.m., an incident command post was established in the lobby of the North Tower. By 9:00 a.m., shortly before United Airlines Flight 175 hit the South Tower, the FDNY chief had arrived and taken over command of the response operations. Due to falling debris and safety concerns, he moved the incident command center to a spot across West Street, but numerous fire chiefs remained in the lobby, which continued to serve as an operations post where alarms, elevators, communications systems, and other equipment were operated. The initial response by the FDNY focused on the rescue and evacuation of building occupants, which involved sending firefighters up to assist people who were trapped in elevators and elsewhere. Firefighters were also required to ensure all floors were completely evacuated. Numerous staging areas were set up near the World Trade Center, where responding fire units could report and get deployment instructions. However, many firefighters arrived at the World Trade Center without stopping at the staging areas. As a result, many chiefs could not keep track of the whereabouts of their units. Numerous firefighters reported directly to the building lobbies, and were ordered by those commanding the operating post to proceed into the building. Problems with radio communication caused commanders to lose contact with many of the firefighters who went into the buildings.
The repeater system in the World Trade Center, which was required for portable radio signals to transmit reliably, was malfunctioning after the impact of the planes. As a result, firefighters were unable to report to commanders on their progress, and were unable to hear evacuation orders. Also, many off-duty firefighters arrived to help without their radios. FDNY commanders lacked communication with the NYPD, which had helicopters at the scene, and with Emergency Medical Services (EMS) dispatchers. The firefighters on the scene also did not have access to television reports or other outside information, which could have helped in assessing the situation. When the South Tower collapsed at 9:59 a.m., firefighters in the North Tower were not aware of exactly what had happened. The battalion chief in the North Tower lobby immediately issued an order over the radio for firefighters in the tower to evacuate, but many did not hear the order due to the faulty radios. In all, 343 firefighters died in the collapse of the towers. The command post across West Street was destroyed when the South Tower collapsed, making command and control even more difficult and disorganized. When the North Tower collapsed, falling debris killed Peter Ganci, the FDNY chief. Following the collapse of the World Trade Center, a command post was set up at a firehouse in Greenwich Village. The FDNY deployed 200 units (half of all units) to the site, with more than 400 firefighters on the scene when the buildings collapsed. This included a total of 121 engine companies, 62 ladder companies, and other special units. The FDNY also received assistance from fire departments in Nassau, Suffolk, and Westchester counties and other neighboring jurisdictions, but with limited ability to manage and coordinate efforts. Besides assisting with recovery operations at Ground Zero, volunteer firefighters from Long Island and Westchester manned numerous firehouses throughout the city to assist with other fire and emergency calls. FDNY emergency medical technicians (EMTs) and paramedics, along with 9-1-1 system ambulances operated by voluntary hospitals and volunteer ambulance corps, began arriving at 8:53 a.m. and quickly set up a staging area outside the North Tower on West Street, which was soon moved to the corner of Vesey and West Streets. As more providers responded to the scene, five triage areas were set up around the World Trade Center site. EMS chiefs experienced difficulties communicating via their radios due to the overwhelming volume of radio traffic. At 9:45 a.m., an additional dispatch channel was set aside for use by chiefs and supervisors only, but many did not know about this and continued to operate on the other channel. The communication difficulties meant that commanders lacked good situational awareness. Dispatchers at the 9-1-1 call center, who coordinate EMS response and assign units, were overwhelmed with incoming calls, as well as communications over the radio system. Dispatchers were unable to process and make sense of all the incoming information, including information from people trapped in the towers about conditions on the upper floors. Overwhelmed dispatchers were unable to effectively give instructions and manage the situation. EMS personnel were in disarray after the collapse of the South Tower at 9:59 a.m. Following the collapse of the North Tower at 10:28 a.m., EMS commanders regrouped at the north end of Battery Park City, at the Embassy Suites Hotel.
Around 11:00 a.m., EMS triage centers were relocated and consolidated at the Chelsea Piers and the Staten Island Ferry Terminal. Throughout the early afternoon, the soundstages at the pier were separated into two areas, one for the more seriously injured and one for the walking wounded. On the acute side, multiple makeshift tables, each with a physician, nurse, other health care workers, and non-emergency service volunteers, were set up for the arrival of mass casualties. Supplies, including equipment for airway and vascular control, were obtained from neighboring hospitals. Throughout the afternoon, local merchants arrived to donate food. Despite this, few patients arrived for treatment; the earliest came at about 5 p.m., and none were seriously injured, their complaints limited to smoke inhalation. An announcement was made around 6–7 p.m. that a second shift of providers would cover the evening, and that an area was being set up for the day personnel to sleep. Soon after, when it was realized that few would have survived the collapse to be brought to the piers, many decided to leave and the area was closed down. The New York City Police Department quickly responded with its Emergency Service Units (ESU) and other responders after the crash of American Airlines Flight 11 into the North Tower. The NYPD set up its incident command center at Church Street and Vesey Street, on the opposite side of the World Trade Center from where the FDNY was commanding its operations. NYPD helicopters were soon at the scene, reporting on the status of the burning buildings. When the buildings collapsed, 23 NYPD officers were killed, along with 37 Port Authority Police Department officers. The NYPD helped facilitate the evacuation of civilians out of Lower Manhattan, including approximately 5,000 civilians evacuated by the Harbor Unit to Staten Island and to New Jersey. In the ensuing days, the police department worked alternating 12-hour shifts to help in the rescue and recovery efforts. Immediately after the first attack, the captains and crews of a large number of local boats steamed into the attack zone to assist in evacuation. These ships responded to a request from the U.S. Coast Guard to help evacuate those stranded on Manhattan Island. Others, such as the "John J. Harvey", provided supplies and water, which became urgently needed after the towers' collapse severed downtown water mains. The Coast Guard Auxiliary helped lead a massive maritime evacuation, with estimates of the number of people evacuated by water from Lower Manhattan in the eight-hour period following the attacks ranging from 500,000 to 1,000,000. Norman Mineta, Secretary of Transportation during the attacks, called the efforts "the largest maritime evacuation conducted in the United States". The evacuation was the largest maritime evacuation or "boatlift" in history by most estimates, surpassing the nine-day evacuation of Dunkirk during World War II. As many as 2,000 people injured in the attacks were evacuated by this means. Amateur radio played a role in the rescue and clean-up efforts. Amateur radio operators established communications, maintained emergency networks, and formed bucket brigades with hundreds of other volunteer personnel. Approximately 500 amateur radio operators volunteered their services during the disaster and recovery. The New Jersey Legislature honored the role of amateur radio operators in a proclamation on December 12, 2002.
On the day following the attacks, 11 people were rescued from the rubble, including six firefighters and three police officers. One woman was rescued from the rubble near where a West Side Highway pedestrian bridge had been. Two PAPD officers, John McLoughlin and Will Jimeno, were also rescued. Discovered by former U.S. Marines Jason Thomas and Dave Karnes, McLoughlin and Jimeno were pulled out alive after spending nearly 24 hours beneath 30 feet of rubble. Their rescue was later portrayed in the Oliver Stone film "World Trade Center". In total, only twenty survivors were pulled out of the rubble. The final survivor, Port Authority secretary Genelle Guzman-McMillan, was rescued 27 hours after the collapse of the North Tower. Some firefighters and civilians who survived made cell phone calls from voids beneath the rubble, though the amount of debris made it difficult for rescue workers to reach them. By Wednesday night, 82 deaths had been confirmed by officials in New York City. Rescue efforts were paused numerous times in the days after the attack due to concerns that nearby buildings, including One Liberty Plaza, were in danger of collapsing. The search and rescue effort in the immediate aftermath at the World Trade Center site involved ironworkers, structural engineers, heavy machinery operators, asbestos workers, boilermakers, carpenters, cement masons, construction managers, electricians, insulators, machinists, plumbers and pipefitters, riggers, sheet metal workers, steelworkers, truckers and teamsters, American Red Cross volunteers, and many others. Lower Manhattan, south of 14th Street, was off-limits except to rescue and recovery workers. There were also about 400 working dogs, the largest deployment of dogs in the nation's history. The New York City Office of Emergency Management was the agency responsible for coordinating the city's response to the attacks. Headed by then-director Richard Sheirer, the agency was forced to vacate its headquarters, located in 7 World Trade Center, within hours of the attack. The building later collapsed. OEM reestablished operations temporarily at the police academy, where Mayor Giuliani gave many press conferences throughout the afternoon and evening of September 11. By Friday, rescue and relief efforts were organized and administered from Pier 92 on the Hudson River. Volunteers quickly descended on Ground Zero to help in the rescue and recovery efforts. At the Jacob Javits Convention Center, thousands showed up to offer help, where they registered with authorities. Construction projects around the city came to a halt as workers walked off their jobs to help at Ground Zero. Ironworkers, welders, steel burners, and others with such skills were in high demand. By the end of the first week, over one thousand ironworkers from across North America had arrived to help, along with countless others. The New York City Department of Design & Construction oversaw the recovery efforts. Beginning on September 12, the Structural Engineers Association of New York (SEAoNY) became involved in the recovery efforts, bringing in experts to review the stability of the rubble, evaluate the safety of hundreds of buildings near the site, and design supports for the cranes brought in to clear the debris. The City of New York hired the engineering firm LZA-Thornton Tomasetti to oversee the structural engineering operations at the site. To make the effort more manageable, the World Trade Center site was divided into four quadrants or zones.
Each zone was assigned a lead contractor, along with a team of three structural engineers, subcontractors, and rescue workers. The Federal Emergency Management Agency (FEMA), the United States Army Corps of Engineers, the Occupational Safety and Health Administration (OSHA), and the New York City Office of Emergency Management (OEM) provided support. Forestry incident management teams (IMTs) also provided support beginning in the days after the attacks to help manage operations. A nearby Burger King restaurant was used as a center for police operations.
Because workers worked at the site, or "The Pile", for shifts as long as twelve hours, a specific culture developed there, and workers developed their own argot. "The Pile" was the term coined by the rescue workers to describe the 1.8 million tons of wreckage left from the collapse of the World Trade Center. They avoided the use of "ground zero", which describes the epicenter of a bomb explosion. Numerous volunteers organized to form "bucket brigades", which passed 5-gallon buckets full of debris down a line to investigators, who sifted through the debris in search of evidence and human remains. Ironworkers helped cut steel beams into more manageable sizes for removal. Much of the debris was hauled off to the Fresh Kills Landfill on Staten Island, where it was further searched and sorted. According to "The New York Times", by September 24, 2001, more than 100,000 tons of debris had been removed from the site.
Some structural engineers have criticized the decision to recycle the steel from the buildings before it could be analyzed as part of the post-collapse investigation. Some of the steel was reused for memorials. New York City firefighters donated a cross made of steel from the World Trade Center to the Shanksville Volunteer Fire Company in Shanksville, Pennsylvania. The beam, mounted atop a platform shaped like the Pentagon, was erected outside the Shanksville firehouse near the crash site of United Airlines Flight 93. Twenty-four tons of the steel used in the construction of USS "New York" (LPD-21) came from the small amount of World Trade Center rubble preserved for posterity.
In the days following the destruction of the towers, rescuers found scorch marks, likely made by a cutting torch, on a basement doorway underneath 4 WTC; this was thought to be the work of looters. Further exploration of the building's basement revealed that its vault contained large amounts of gold and silver in the form of coins, as well as gold and silver bars. Over the subsequent months, much of the bullion was recovered. Approximately $560,000 worth of the coins, which had been stored in the vault by the Bank of Nova Scotia prior to September 11, 2001, were purchased by Lee S. Minshull of Palos Verdes, California, who sent them to PCGS for grading in 2002. These coins were then sold to collectors. Coins salvaged from 4 WTC's vault included American Silver Eagles, Canadian Gold Maple Leafs, South African Krugerrands, and British Gold Britannias.
Hazards at the World Trade Center site included a diesel fuel tank buried seven stories below. Approximately 2,000 automobiles that had been in the parking garage also presented a risk, each containing, on average, at least five gallons of gasoline. Once recovery workers reached down to the parking garage level, they found some cars that had exploded and burned.
The United States Customs Service, which was housed in 6 World Trade Center, had 1.2 million rounds of ammunition and weapons stored in a third-floor vault to support its firing range. In the hours immediately after the attacks on the World Trade Center, three firefighters raised an American flag over the rubble. The flag was taken from a yacht, and the moment, captured in a well-known photograph, evoked comparisons to the iconic Iwo Jima photograph.
Morale among rescue workers was boosted on September 14, 2001, when President George W. Bush paid a visit to Ground Zero. Standing with retired firefighter Bob Beckwith, Bush addressed the firefighters and rescue workers with a bullhorn and thanked them. Bush later remarked, "I'm shocked at the size of the devastation. It's hard to describe what it's like to see the gnarled steel and broken glass and twisted buildings silhouetted against the smoke. I said that this was the first act of war on America in the 21st century, and I was right, particularly having seen the scene." After some workers shouted that they could not hear the President, Bush famously responded, "I can hear you! The rest of the world hears you. And the people who knocked these buildings down will hear all of us soon!"
At some point, rescue workers realized that they were not going to find any more survivors. After a couple of weeks, conditions at Ground Zero remained harsh, with lingering odors of decaying human remains and smoke. Morale among workers was boosted by letters they received from children around the United States and the world, as well as by support from thousands of neighbors in TriBeCa and other Lower Manhattan neighborhoods. This support continued to spread and eventually led to the founding of over 250 non-profit organizations, which raised almost $700 million within their first two years of operation. One of the nonprofits was One Day's Pay, later renamed MyGoodDeed, which championed the effort to designate September 11 as an official National Day of Service (9/11 Day). By 2012, many of the 250-plus organizations had disbanded for lack of funding. Of the ones that survived, a handful continued to serve those still in need. One of these organizations, Tuesday's Children, was founded the day after September 11 to support the children immediately affected by the attacks. The non-profit's founder, David Weild IV, now calls it one of the "last men standing", as it is one of the few remaining organizations that "provide direct services for what social-service groups and survivors of the attacks call the '9-11 Community.'" Other notable non-profits that are "still standing" include:
Immediately following the attacks, members of the Civil Air Patrol (CAP) were called up to help respond. Northeast Region placed its personnel and assets on alert mere moments after learning of the attack. With the exception of CAP, civilian flights were grounded by the Federal Aviation Administration. CAP flew aerial reconnaissance missions over Ground Zero to provide detailed analysis of the wreckage and to aid in recovery efforts, including transportation of blood donations. Elements of the New York Army National Guard's 1-101st Cavalry (Staten Island), 258th Field Artillery, 442nd Military Police Company, and 69th Infantry Regiment based in Manhattan were the first military force to secure Ground Zero on September 11.
The 69th Infantry's armory on Lexington Avenue became the Family Information Center to assist persons in locating missing family members. The National Guard supplemented the NYPD and FDNY, with 2,250 guard members on the scene by the next morning. Eventually thousands of New York Army and Air National Guardsmen participated in the rescue and recovery efforts. They conducted site security at the WTC and at other locations, provided the NYPD with support for traffic control, and participated directly in recovery operations, providing manpower for the "bucket brigades" sorting through the debris by hand. Additionally, service members provided security at a variety of locations throughout the city and New York State to deter further attacks and reassure the public. Members of the Air National Guard's 109th Airlift Wing out of Scotia and Syracuse's 174th Fighter Wing immediately responded to New York City, setting up camp at places such as Fort Hamilton. Mostly civil engineers, firefighters, and military police, they greatly aided in the clean-up effort. F-16s from the 174th Fighter Wing also ramped up their flying sorties and patrolled the skies. The New Jersey National Guard assisted the New York National Guard's efforts following the attacks.
U.S. Marines were also present to assist in the rescue efforts. No official count of the Marines who helped was released, but there is evidence that they were there; the 2006 docudrama "World Trade Center", for example, tells of two Marines who rescued two trapped police officers from the rubble. The Marines were headquartered at the Bloomberg News building at 340 West Side Highway. The commanding officer was Navy Commander Hardy, and the executive officer was Maj. Priester; the two oversaw 110 military personnel of various branches, along with members of various police departments and EMTs.
The U.S. Navy deployed the hospital ship USNS "Comfort" (T-AH-20) to Pier 92 in Manhattan. Crew members provided food and shelter for more than 10,000 relief workers, and the ship's 24-hour galley served some 30,000 meals. Its medical resources were used to provide first-aid and sick call services to nearly 600 people, and the ship's psychological response team saw more than 500 patients.
A May 14, 2007, "New York Times" article, "Ground Zero Illness Clouding Giuliani's Legacy", reported that thousands of workers at Ground Zero had become sick and that many "regard Mr. Giuliani's triumph of leadership as having come with a human cost". The article reported that the mayor seized control of the cleanup of Ground Zero, taking control away from established federal agencies such as the Federal Emergency Management Agency, the U.S. Army Corps of Engineers, and the Occupational Safety and Health Administration, and instead handed responsibility to the "largely unknown" city Department of Design and Construction. Documents indicate that the Giuliani administration never enforced federal requirements that respirators be worn; concurrently, the administration threatened companies with dismissal if cleanup work slowed. Workers at the Ground Zero pit worked without proper respirators, wearing painters' masks or no facial covering at all. Specialists claim that the only effective protection against toxins, such as airborne asbestos, is a special respirator. New York Committee for Occupational Safety and Health industrial hygienist David Newman said, "I was down there watching people working without respirators." He continued, "Others took off their respirators to eat.
It was a surreal, ridiculous, unacceptable situation." The local EPA office sidelined the regional EPA office. Dr. Cate Jenkins, a whistle-blower EPA scientist, said that on September 12, 2001, a regional EPA office offered to dispatch 30 to 40 electron microscopes to the WTC pit to test bulk dust samples for the presence of asbestos fibers. Jenkins alleged that the local office refused, choosing instead the less effective polarized light microscopy testing method, and said, "We don't want you fucking cowboys here. The best thing they could do is reassign you to Alaska."
There were many health problems caused by the toxins. Ninety-nine percent of exposed firefighters reported at least one respiratory problem, which they had not experienced before, while working at the World Trade Center site. Chronic airway disease is the main lung injury among firefighters who were exposed to toxins during 9/11. Six years after the attacks, among those who never smoked, approximately 13% of firefighters and 22% of EMS personnel had lungs that did not function as well as those of others around the same age. Steep declines in pulmonary function, first detected among firefighters and EMS personnel within a year of 9/11, have persisted. Increasing numbers of Ground Zero workers are contracting illnesses such as cancer. Between September 11, 2001, and 2008, there were 263 new cases of cancer found among the 8,927 male firefighters who responded to the 9/11 attacks, 25 more than would be expected among men of similar age and race. Overall, there is a 19% increase in cancer between firefighters who responded to the attacks and those who were not exposed to toxins by responding on September 11.
On January 30, 2007, Ground Zero workers and groups such as the Sierra Club and Unsung Heroes Helping Heroes met at the Ground Zero site and urged President George Bush to spend more money on aid for sick Ground Zero workers. They said that the $25 million that Bush promised for the ill workers was inadequate. A Long Island ironworker, John Sferazo, said at the protest rally, "Why has it taken you 5½ years to meet with us, Mr. President?"
Firefighters, police, and their unions have criticized Mayor Rudy Giuliani over the issue of protective equipment and illnesses after the attacks. An October study by the National Institute of Environmental Health Sciences said that cleanup workers lacked adequate protective gear. The Executive Director of the National Fraternal Order of Police reportedly said of Giuliani: "Everybody likes a Churchillian kind of leader who jumps up when the ashes are still falling and takes over. But two or three good days don't expunge an eight-year record." Sally Regenhard said, "There's a large and growing number of both FDNY families, FDNY members, former and current, and civilian families who want to expose the true failures of the Giuliani administration when it comes to 9/11." She told the "New York Daily News" that she intends to "Swift Boat" Giuliani.
Various health programs arose after the attacks to provide treatment for 9/11-related illnesses among responders, recovery workers, and other survivors. When the James Zadroga 9/11 Health and Compensation Act became federal law in January 2011, these programs were replaced by the World Trade Center Health Program.
Soon after the attacks, New York City commissioned McKinsey & Company to investigate the response of both the New York City Fire Department and the New York City Police Department and to recommend how to respond more effectively to such large-scale emergencies in the future. Officials with the International Association of Fire Fighters have also criticized Rudy Giuliani for failing to support modernized radios that might have spared the lives of more firefighters; some firefighters never heard the evacuation orders and died in the collapse of the towers.
Planning for rebuilding the World Trade Center, headed by the Lower Manhattan Development Corporation, started in July 2002. There were many proposals for how to rebuild the World Trade Center, though many lacked creativity. Several architects were chosen over the course of the construction process, but all of them ran into problems with the design. The 2008 financial crisis also forced the construction process over to the Port Authority; that construction did not go as smoothly as planned, and city officials looked for ways to reduce the problems and delays. Completion of the World Trade Center was scheduled for 2016; four of the seven planned buildings had been completed, as had the transportation hub, the 9/11 Memorial, and Liberty Park.
https://en.wikipedia.org/wiki?curid=28066
Communication during the September 11 attacks Communication problems and successes played an important role in the September 11, 2001, attacks and their aftermath. Systems were variously destroyed or overwhelmed by loads greater than they were designed to carry, or failed to operate as intended or desired. The organizers of the September 11, 2001, attacks apparently planned and coordinated their mission in face-to-face meetings and used little or no electronic communication. This "radio silence" made their plan more difficult to detect.
According to 9/11 Commission staff statement No. 17, there were several communications failures at the federal government level during and after the 9/11 attacks. Perhaps the most serious occurred in an "Air Threat Conference Call" initiated by the National Military Command Center (NMCC) after two planes had crashed into the World Trade Center, but shortly before The Pentagon was hit. The participants were unable to include the Federal Aviation Administration (FAA) air traffic control command center, which had the most information about the hijackings, in the call. According to the staff report: Operators worked feverishly to include the FAA in this teleconference, but they had equipment problems and difficulty finding secure phone numbers. NORAD asked three times before 10:03 to confirm the presence of FAA on the conference, to provide an update on hijackings. The FAA did not join the call until 10:17. The FAA representative who joined the call had no familiarity with or responsibility for a hijack situation, had no access to decision makers, and had none of the information available to senior FAA officials by that time. We found no evidence that, at this critical time, during the morning of September 11, NORAD's top commanders, in Florida or Cheyenne Mountain Complex, ever coordinated with their counterparts at FAA headquarters to improve situational awareness and organize a common response. Lower-level officials improvised—the FAA's Boston Center bypassing the chain of command to contact NEADS. But the highest level Defense Department officials relied on the NMCC's Air Threat Conference, in which FAA did not meaningfully participate.
After the 1993 World Trade Center bombing, radio repeaters for New York City Fire Department communication were installed in the tower complex. Because they were unaware that several controls needed to be operated to fully activate the repeater system, fire chiefs at their command post in the lobby of the North Tower thought the repeater was not functioning and did not use it, though it did work and was used by some firefighters. When police officials concluded the twin towers were in danger of collapsing and ordered police to leave the complex, fire officials were not notified. Fire officials on the scene were not monitoring broadcast news reports and did not immediately understand what had happened when the first (South) tower collapsed. There was little communication between the New York City Police Department and fire department commands even though an Office of Emergency Management (OEM) had been created in 1996 in part to provide such coordination. A primary reason for OEM's inability to coordinate communications and information-sharing in the early hours of the WTC response was the loss of its emergency operations center, located on the twenty-third floor of 7 World Trade Center, which had been evacuated after debris from the towers' collapse struck the building, igniting several fires.
Emergency relief efforts in both Lower Manhattan and at the Pentagon were augmented by volunteer amateur radio operators in the weeks after the attacks.
Cell phones and in-plane credit card phones played a major role during and after the attack, starting with hijacked passengers who called family or notified the authorities about what was happening. Passengers and crew who made calls include: Sandra Bradshaw, Todd Beamer, Tom Burnett, Mark Bingham, Peter Hanson, Jeremy Glick, Barbara K. Olson, Renee May, Madeline Amy Sweeney, Betty Ong, Robert Fangman, Brian David Sweeney, and Ed Felt. Passengers aboard United Airlines Flight 93 were able to assess their situation based on these conversations and plan a revolt that resulted in the aircraft crashing. According to the commission staff: "Their actions saved the lives of countless others, and may have saved either the U.S. Capitol or the White House from destruction." According to the 9/11 Commission Report, 13 passengers from Flight 93 made a total of over 30 calls to both family and emergency personnel (twenty-two confirmed air phone calls, two confirmed cell phone calls, and eight not specified in the report). Brenda Raney, a Verizon Wireless spokesperson, said that Flight 93 was supported by several cell sites. There were reportedly three phone calls from Flight 11, five from Flight 175, and three from Flight 77. Two calls from these flights were recorded, placed by flight attendants: Betty Ong on Flight 11 and CeeCee Lyles on Flight 93. Alexa Graf, an AT&T spokesperson, said it was almost a fluke that the calls reached their destinations. Marvin Sirbu, professor of Engineering and Public Policy at Carnegie Mellon University, said on September 14, 2001, that "The fact of the matter is that cell phones can work in almost all phases of a commercial flight." Other industry experts said that it is possible to use cell phones with varying degrees of success during the ascent and descent of commercial airline flights.
After each of the hijacked aircraft struck the World Trade Center, people inside the towers made calls to family and loved ones; for the victims, this was their last communication. Other callers directed their pleas for help to 9-1-1. Over nine hours of the 9-1-1 calls were eventually released after petitioning by "The New York Times" and families of the WTC victims. In 2001, U.S. cell phones did not yet have the photography capabilities that became widespread by the mid-2000s.
After the attack, the cell phone network of New York City was rapidly overloaded (a mass call event) as traffic doubled over normal levels. Cell phone traffic also overloaded across the East Coast, leading to crashes of the cell phone network. Verizon's downtown wireline phone service was interrupted for days and weeks because of cut subscriber cables and because the 140 West Street exchange was shut down for days. Capacity between Brooklyn and Manhattan was also diminished by cut trunk cables. The issues with the cell network were not resolved until 36 cellular COWs (cell towers on wheels) were deployed by September 14, 2001, in Lower Manhattan to support the U.S. Federal Emergency Management Agency (FEMA) and provide critical phone service to rescue and recovery workers.
Since three of the major television broadcast networks' owned-and-operated stations had their transmission towers atop the North Tower (One World Trade Center), coverage was limited after the collapse of the tower.
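To put the "mass call event" described above in rough numbers: trunked telephone capacity is conventionally sized with the Erlang B formula, and the arithmetic shows why a doubling of offered traffic fails far more than twice as many calls. The sketch below is illustrative only; the channel count and load are assumed figures, not measurements from the New York network.
```python
def erlang_b(offered_erlangs: float, channels: int) -> float:
    """Blocking probability for offered traffic on a trunk group (Erlang B).

    Uses the standard numerically stable recurrence:
    B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1)).
    """
    b = 1.0
    for n in range(1, channels + 1):
        b = (offered_erlangs * b) / (n + offered_erlangs * b)
    return b

# Hypothetical cell sector: 30 voice channels carrying 21.9 erlangs,
# i.e. engineered for roughly 2% blocking (assumed figures, not NYC data).
CHANNELS = 30
NORMAL_ERLANGS = 21.9

for label, load in (("normal", NORMAL_ERLANGS), ("doubled", 2 * NORMAL_ERLANGS)):
    print(f"{label:>8} load: {erlang_b(load, CHANNELS):.1%} of call attempts blocked")
# Doubling traffic pushes blocking from ~2% to ~30%: most of the extra
# attempts fail, which is what a "mass call event" looks like to users.
```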
The FM transmitter of National Public Radio station WNYC was also destroyed in the collapse of the North Tower, and its offices were evacuated. For an interim period, it continued broadcasting on its AM frequency and used NPR's New York offices to produce its programming. The satellite feed of one television station, WPIX, froze on the last image received from the WTC mast; the image (a remote-camera shot of the burning towers), viewable across North America (as WPIX is available on cable TV in many areas), remained on the screen for much of the day until WPIX was able to set up alternate transmission facilities. It shows the WTC at the moment power was cut to the WPIX transmitter, prior to the towers' collapse. During the September 11 attacks, WCBS-TV (channel 2) and WXTV (channel 41) stayed on the air. Unlike most other major New York television stations, WCBS-TV maintained a full-powered backup transmitter at the Empire State Building after moving its main transmitter to the North Tower of the World Trade Center. The station was also simulcast nationally that day on VH1, a cable network of Viacom (which at the time owned CBS). In the immediate aftermath of the attacks, the station lent transmission time to the other stations that had lost their transmitters, until they found suitable backup equipment and locations. The Emergency Alert System was never activated during the terrorist attacks, as the extensive media coverage made it unnecessary. AT&T eliminated all charges for domestic calls originating from the New York City area (phones using area codes 212, 718, 917, 646, and 347) in the days following 9/11.
Radio communications during the September 11 attacks served a vital role in coordinating rescue efforts by the New York Police Department, New York Fire Department, Port Authority Police Department, and emergency medical services. While radio communications had been modified to address problems discovered after the 1993 World Trade Center bombing, investigations into the radio communications during the September 11 attacks found that the distinct communication systems and protocols of each department were hampered by a lack of interoperability, damaged or failed network infrastructure during the attack, and overwhelming simultaneous communication between superiors and subordinates. A rough time line of the incident could include:
The scale of the incident was described in the National Commission report on the attacks as "unprecedented". In roughly seventeen minutes, from 8:46 to 9:03 am, over a thousand police, fire, and emergency medical services (EMS) staff arrived at the scene. During a large incident, any agency will eventually find its resources overrun by needs. For example, the Port Authority Police could not schedule staff as if a September 11 attack would occur every shift; there is always a balance struck between readiness and costs. There is conflicting data, but some sources suggest there may have been 2,000 to 3,000 workers involved in the rescue operation. It would be rare for most agencies to see an event with that many people to be rescued. There is some level of confusion present in any large incident. The National Institute of Standards and Technology (NIST) asserts commanders did not have adequate information and that interagency information sharing was inadequate.
For example, on September 11, persons in the New York City Police Department (NYPD) 9-1-1 center told callers from the World Trade Center to remain in place and wait for instruction from firefighters and police officers. This was the plan for managing a fire incident in the building, and the 9-1-1 center staff were following the plan. This was partly countered by public safety workers going floor-by-floor and telling people to evacuate. The Commission report suggests people in the NYPD 9-1-1 center and New York City Fire Department (FDNY) dispatch would have benefited from better situation awareness. The Commission described the call centers as not "fully integrated" with line personnel at the WTC. The report suggests the NYPD 9-1-1 center and FDNY dispatch were overrun by call volumes that had never been seen before. Adding to the confusion, radio coverage problems, radio traffic blocking, and building system problems occurred inside the burning towers. The facts show that much of the equipment worked as designed and that users made the best of what was available to them.
As is typical of any large fire, many 9-1-1 calls with conflicting information were received beginning at 8:46 am. In addition to reports that a plane had hit the World Trade Center, the EMS computer-aided dispatch (CAD) log shows reports of a helicopter crash, explosions, and a building fire. Throughout the incident, people at different locations had very different views of the situation. After the collapse of the first tower, many firefighters in the remaining tower had no idea the first tower had fallen.
One factor in the radio communications problems was that off-duty personnel self-dispatched to the incident scene. Some off-duty staff went into the towers without radios. According to the Commission report and news coverage, this was true of NYPD, Port Authority Police Department (PAPD), and FDNY personnel. Regardless of any radio coverage problems, these persons could not be commanded or informed by radio. In any incident of this scale, self-dispatched staff without radios would likely be a problem. Even if a cache of radios were brought to the scene to hand out, an incident of this scale would be likely to exhaust the cache.
NIST concluded that at the beginning of the incident radio traffic peaked at roughly five times the normal level, then settled to roughly three times normal for the remainder of the incident. FDNY recordings suggest the dispatch personnel were overloaded: both fire and EMS dispatch were often delayed in responding to radio calls. Many 9-1-1 telephone calls to dispatch were disconnected or routed to "all circuits are busy now" intercept recordings. NIST calculated that about one third of radio messages transmitted during the surge of communications were incomplete or unintelligible. Documentary footage suggests the tactical channels were also overloaded; some footage captured audio of two or three conversations occurring simultaneously on a particular channel.
In one review of WTC incident communications, radio systems used at the site were found to have problems but to be generally effective, in that users were able to communicate with one another. The 2002 video documentary "9/11" by Gedeon and Jules Naudet (referred to here as "the documentary") was reviewed. It captured audio from hand-held radios in use at the incident and showed users communicating over radios from the lobby command post in the North Tower.
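A toy occupancy model suggests why the three- to five-fold traffic surge NIST reported garbles so many transmissions. Treating push-to-talk traffic on one shared channel as Poisson arrivals is a simplification, and the baseline load below is an assumed figure, not a measurement.
```python
import math

def p_keyup_overlaps(offered_load_erlangs: float) -> float:
    """Toy model of one shared simplex voice channel.

    Treat push-to-talk transmissions as an M/G/infinity process with
    offered load G erlangs: the channel is idle with probability e**-G,
    so a user who keys up without hearing the traffic (hidden transmitter,
    in-building attenuation) lands on someone with probability 1 - e**-G.
    """
    return 1.0 - math.exp(-offered_load_erlangs)

NORMAL_LOAD = 0.10   # assumed: a tactical channel ~10% busy on a routine day

for factor in (1, 3, 5):   # the traffic multipliers NIST reported
    g = NORMAL_LOAD * factor
    print(f"{factor}x normal traffic: {p_keyup_overlaps(g):.0%} of keyups overlap")
# About 10% at normal load, 26% at 3x, and 39% at 5x: the same order of
# magnitude as NIST's estimate that a third of messages were incomplete,
# though the model is far too crude to treat as confirmation.
```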
26 Red Book audio CDs of New York City Fire Department radio transmissions, covering the incident's initial dispatch and the tower failures, were reviewed. These CDs were digitized versions of audio from the Fire Department's logging recorders. In addition, the text of an oral history CD with transcripts of fire personnel debriefed on the incident was reviewed.
In 2001, the NYPD used Ultra High Frequency (UHF) radios and divided the city into 35 radio zones. Most hand-held radios had at least 20 channels: while not all officers had all channels, all officers had the ability to communicate citywide. As a matter of physics, UHF signals penetrate buildings better than the lower Very High Frequency (VHF) signals used by the FDNY fire units, but generally cover shorter distances over open terrain. The Commission report did not cite any technical flaws with the NYPD radio system.
The PAPD had systems described as "low-power UHF". The Commission report says the systems were specific to a single site, with the exception of one channel which was Port-Authority-wide. It is unclear whether the PAPD systems were interstitial and limited to 2 watts output, used normal local-control channels but were limited in power output by the frequency coordinator, or used leaky cable systems solely intended to work inside the Port Authority buildings. The report says there were seven site-specific Port Authority Police channels. In 2001, officers at one site could not, in all cases, carry their radios to another site and use them. Not all radios had all channels.
Recordings of the Citywide, Brooklyn, and Manhattan channels for Fire, and of the Citywide, Brooklyn, and Manhattan channels for Emergency Medical Services, were reviewed. The systems generally performed well. The audio coupling point for the logging recorder on Manhattan Fire made the dispatcher's voice difficult to hear. An anonymous fire dispatcher who identified as "Dispatcher 416" is noteworthy. The Commission report says that, in 2001, FDNY used a system with five repeater channels: one each for the boroughs of Manhattan, Brooklyn, and Queens, one shared by the Bronx and Staten Island on a single frequency using different Private Line (PL) tones, and a citywide channel. There were also five simplex channels in FDNY radios.
Observation shows that, in 2001, the citywide EMS channel was voting more frequently than normal, signals were noisy, interfering signals were present, and some receiver sites had equalization differences. Some transmissions had choppy audio, possibly representative of interference from FSK paging or intermittent microwave radio paths to one or more receiver sites. For example, if a microwave radio path fails for half-second intervals, the voting comparator may vote out that receiver site until silence is detected. This can cause dropped syllables in the voted audio. Some transmissions were noisy, although the transactions show the dispatcher understood the radio traffic in spite of audio drop-outs in almost every case.
The Port Authority repeater, intended to allow communications inside the towers, did not appear to work as intended on September 11. The system, also called "Port Authority Channel 30", was installed after the 1993 World Trade Center attack. News accounts said the system had been turned off for unspecified technical reasons. The Commission report said it was customary to turn the system off because it somehow caused interference to radios in use at fire operations in other parts of the city.
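The voting behavior described above (receiver sites compared continuously, with a flapping site voted out mid-word) can be sketched in a few lines. This is a minimal illustration of the concept, not the logic of any actual comparator; the sites, quality metric, and threshold are invented for the example.
```python
from dataclasses import dataclass

@dataclass
class ReceiverSample:
    site: str
    audio: str         # one syllable-sized interval of recovered audio
    quality: float     # signal-quality metric: 0.0 (noise) to 1.0 (full quieting)
    path_failed: bool  # e.g. the site's microwave backhaul dropped out

def vote(samples: list[ReceiverSample], squelch: float = 0.2) -> str | None:
    """Select the best receiver site for one audio interval.

    A comparator continuously picks the site with the best quality. A site
    whose backhaul is flapping contributes nothing, and if it was the only
    usable site, the interval is lost: the dropped-syllable effect.
    """
    usable = [s for s in samples if not s.path_failed and s.quality >= squelch]
    if not usable:
        return None  # voted audio goes silent for this interval
    return max(usable, key=lambda s: s.quality).audio

# One transmission heard at two sites; the stronger site's microwave path
# fails for the middle interval (hypothetical data).
intervals = [
    [ReceiverSample("Hub", "dis-", 0.9, False), ReceiverSample("South", "dis-", 0.4, False)],
    [ReceiverSample("Hub", "-patch-", 0.9, True), ReceiverSample("South", "-patch-", 0.1, False)],
    [ReceiverSample("Hub", "-er", 0.9, False), ReceiverSample("South", "-er", 0.4, False)],
]
print([vote(iv) for iv in intervals])   # ['dis-', None, '-er'] -> choppy audio
```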
Regarding the Port Authority repeater, the documentary film gives different information, with a Fire Department member from Engine 7/Ladder 1 claiming that the aircraft's impact caused the system to fail. Evidence suggests the remote control console at the lobby command post was not working but the repeater was. The radio repeater was located in 5 World Trade Center. A remote control console was connected to the repeater, allowing staff at the North Tower lobby command post to communicate without using a hand-held radio. A review of the logging recorder track of the Port Authority repeater shows that someone arrived early in the incident and began to establish a command post. From the command post in the lobby of the North Tower (1 World Trade Center), the user can be heard trying to transmit using a remote control unit. After several failed attempts to communicate with a user on the channel, the user steps through every channel selection on the remote, trying each one. The recording contains the tone remote control console stepping through all of its eight function tones. Someone says, "... the wireline isn't working", over the Port Authority channel. Something that looks like a Motorola T-1380-series remote is shown in the documentary. The fact that users pressing buttons on the remote control can clearly be heard on the logging recorder shows the transmit audio path was working. The recording does not reveal whether or not the console function tones were keying the transmitter. Some users in the North Tower lobby interpreted the remote control unit not working as a failure of the entire channel. Other fire units, not knowing of the reported failure, arrived and began using the channel successfully. The recordings show at least some units were successfully using the repeater to communicate inside the South Tower until the moment it collapsed. The Commission report says the North Tower lobby console may not have worked because of a technical problem, because the volume control was turned all the way down, or because a button that must be pressed to enable it had not been pushed.
On the audio track, an outside agency, possibly in New Jersey and using a repeater, comes through the receive audio on the Port Authority Repeater 7 system. An ambulance being dispatched by the outside (non-FDNY) agency is heard. This may be what the FDNY had described as interference caused when the repeater was left enabled at all times. The distant user appears to be repeated through the system (possibly on the same CTCSS tone as was configured in Repeater 7). This appears to be a distant co-channel user on the same input frequency as Repeater 7. It is possible that, through the random button pressing, a user sent a function tone that temporarily put the base station in "monitor", and that this is what caused the outside agency's traffic to be heard. This is unlikely, however, because subsequent transmit function tones should have toggled the receiver from monitor back to CTCSS-enabled.
An oral history interview revealed that Port Authority UHF radios were normally used at incidents inside the World Trade Center. The interviewee said that on normal, day-to-day calls, the WTC staff handed Port Authority UHF radios to firefighters on their arrival and that these radios "worked all over." This implies, but does not prove, that it was common knowledge among department members that FDNY radios had coverage problems inside the buildings. The 9-11 Commission uses the phrase "performed poorly" to describe FDNY radios during the incident.
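The console behavior described above, a remote whose button presses are audible on the recording while the station may or may not act on them, can be sketched as a simple tone decoder. The frequencies follow the common EIA tone-remote convention (a 2175 Hz guard tone with function tones spaced 100 Hz apart); the actual function assignments on the Port Authority console are not documented here, so the table is assumed for illustration.
```python
# Illustrative EIA-style tone-remote function table (assumed assignments).
FUNCTION_TONES_HZ = {
    2050: "monitor: open receive squelch (disable CTCSS)",
    1950: "F1: key transmitter",
    1850: "F2: select alternate channel",
    1750: "F3: unassigned in this sketch",
}

def base_station_decode(tone_hz: int, control_path_ok: bool) -> str:
    """Model of what a remote base station does with a console function tone.

    The console always places the tone on the wireline pair, which is why
    the button presses were audible on the logging recorder; the station
    acts on the tone only if the control path is intact end to end.
    """
    action = FUNCTION_TONES_HZ.get(tone_hz, "unknown tone: ignored")
    if not control_path_ok:
        return f"tone heard on line, but '{action}' never executed"
    return action

# A user stepping through the function tones on a dead control path:
for tone in sorted(FUNCTION_TONES_HZ):
    print(f"{tone} Hz -> {base_station_decode(tone, control_path_ok=False)}")
```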
Oral history files show that at least four channels were employed at the WTC: one officer said a channel named "Command 3" was used for the North Tower. To those unfamiliar with the details of the FDNY system, it is unclear whether the interviewee meant Tactical 3 or a fifth channel. FDNY personnel are seen using radios in the documentary footage of the WTC lobby area. Analysis of these scenes showed the radios all appeared to be receiving properly. Oral history files confirm radio communications were at least partly functional.
A problem that shows up in these types of incidents is that receivers in hand-held radios are subjected to signal levels likely to overload them. Several radios may be transmitting within feet of one another on different channels. If overloading occurs, only very strong signals can be received, while weaker signals disappear and are not received. The hand-held radio receivers shown in the documentary appeared to work properly even though several other hand-held radios were transmitting only feet away. This is a hostile environment, and it might suggest the hand-held equipment used by FDNY had good-quality receivers; in this case, however, the suggestion is incorrect. Second-hand observation is hardly the proper way to "test" radio receivers or to distinguish "good quality" from "bad", and this is likely a source of continued misunderstanding, particularly when these same radios were operating at higher floors, in closer proximity to, and in direct line of sight of, digital cellular repeaters. Those repeaters were likely operating at unlicensed power levels, which was a common practice of cellular providers at the time and continues to this day. Footage reveals intelligible recovered audio coming out of the radios and shows radio users communicating with others. This may not have been true of the entire WTC complex but was true of radio users in the crowded lobby.
Analysis of the 26 FDNY audio CDs showed the radios seemed to transmit into the radio systems adequately. Radios calling dispatch got through. Calling units were intelligible. Users spoke with dispatchers, and dispatchers answered in ways that suggest they understood what was said. There were no noisy or truncated transmissions heard on any channel (the equivalent of a dropped cellular call). This suggests the Fire Department's radio backbone was soundly designed and working properly. It is possible that system coverage problems were present, problems that could have been mitigated had the command post radio (with greater transmit power) been used. It is also likely that some transmissions did not reach any of the receivers in the system and therefore would not be a detectable problem when listening to the recordings. At the same time those recordings were made, the cellular system was operating at or near full capacity, meaning every cellular repeater was transmitting. The dense RF interference environment created in New York City that day was essentially a "perfect storm", one with which a radio designed 25 years earlier could not possibly contend.
In some scenes, captured documentary audio showed the channels were busy. In some cases, two or more conversations were taking place over a single radio channel at the same time. Users on Tactical 1 may have been close enough to one another to communicate because signals in proximity to each other would overpower weaker signals. At any incident of this size, there is likely to be some overlapping radio traffic.
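A back-of-the-envelope path-loss comparison shows how severe the near-far effect described above is. The calculation uses idealized free-space loss with assumed distances, powers, and frequency; real in-building propagation is worse, but the ratio is the point.
```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (idealized; real in-building loss is higher)."""
    return 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_mhz) + 32.44

FREQ_MHZ = 460.0   # a typical UHF public-safety frequency (assumed)
TX_DBM = 37.0      # roughly a 5-watt hand-held

near = TX_DBM - fspl_db(3, FREQ_MHZ)      # another hand-held keyed 3 m away
far = TX_DBM - fspl_db(2000, FREQ_MHZ)    # a repeater site ~2 km away

print(f"nearby radio: {near:6.1f} dBm at the victim receiver")
print(f"distant site: {far:6.1f} dBm")
print(f"difference:   {near - far:6.1f} dB (~{10 ** ((near - far) / 10):,.0f}x power)")
# A spread around 56 dB: more than enough to drive a receiver front end
# into overload so that the weaker, distant signal effectively disappears.
```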
In the same way that large incidents exhaust all the firefighting vehicles and staff, the radio channel resources may become taxed to their limits. NIST says about one third of the fire department radio transmissions were incomplete or not understandable. Some radio users had selected the wrong channels. For example, on the Repeater 7 channel, a unit was heard to call "Manhattan" dispatch and "Citywide". Although the circumstances that led to the user selecting the wrong channel are not known, this can occur when the user is trapped in darkness or smoke and cannot see the radio. Users will typically try to count steps in a rotary-switch channel selector, starting from one end of the switch's travel.
A communications van operated by FDNY responded to the incident. Its radio identifier was "Field Comm". A backup van was in use on the day of the incident because the primary van was out of service. The backup van was destroyed, and audio recordings of tactical channels used at the incident site were lost.
One annoyance with the fire systems was the presence of "unit ID" data bursts; a rough estimate of their airtime cost appears below. These constant squawks, heard at the end of transmissions, are decoded at dispatch to identify the calling radio. The annoyance of the data bursts is a trade-off that could help find a firefighter who has been injured or needs help. The burst also automatically displays the unit ID at the dispatch console, and in most systems it saves dispatch personnel from typing the unit ID: they press one key and the calling unit's ID is inserted into the current CAD screen or command line. Recordings show radios were programmed to send unit ID on tactical channels. Radios accept unit ID settings on a per-channel basis. When mobile or hand-held radios are programmed, the unit ID encoders should be disabled on all channels where the feature is not used; this saves air time equal to about two to three syllables of speech per push-to-talk press. For example, unless the communications van or chiefs' vehicles had push-to-talk unit ID decoders, or the channels were recorded for later analysis where unit IDs were decoded from the recordings, the encoders should have been turned off for tactical channels to reduce the air time used. It also sounded as if some vehicle radios may have had "status buttons" using the data bursts. If true, the operator presses a button on the vehicle radio which sends a short data burst to dispatch, and dispatch gets the unit identity and the new status from a data decoder. These can cause interruptions in voice traffic but cut down on the total air time required to conduct business, because they occupy the channel for less time than it takes to say, "Engine fifty on scene."
Tactical 1 was the primary method of communication in the North Tower. It was a simplex channel, and a default channel for use at some fire scenes. Users complained it would only reach from the lobby to floors in the thirties. Some users who realized Repeater 7 was functional switched to that channel and were afforded better coverage than simplex users on Tactical 1. Audio recordings on the documentary film and NIST analysis show Tactical 1 was overloaded with heavy radio traffic. In contrast, the audio CD of Repeater 7 shows that channel was mostly idle. The 9-11 Commission report said a new portable repeater system had been developed to address the shortcomings of Tactical 1 at a large incident. The system, called "the post", is carried to an area near the incident and set up for the duration to augment weak signals.
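As promised above, a rough estimate of what the trailing unit-ID bursts cost in air time on a busy tactical channel. Both figures are assumptions: the burst length is chosen only to match the "two to three syllables" description, and the keyup rate is a plausible value for a major incident, not a measured one.
```python
# Rough airtime cost of trailing unit-ID data bursts on one busy channel.
BURST_SECONDS = 0.3      # assumed length of one post-transmission ID burst
KEYUPS_PER_HOUR = 600    # assumed push-to-talk presses per hour

burst_airtime = BURST_SECONDS * KEYUPS_PER_HOUR
share = burst_airtime / 3600
print(f"{burst_airtime:.0f} seconds of bursts per hour "
      f"({share:.0%} of the channel) with no on-scene decoder to use them")
```
With these assumed numbers, the bursts consume about five percent of the channel, which is why the text above suggests disabling the encoders on tactical channels that have no decoder listening.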
The command channel used by officers at the incident was called either "Channel 5" or "Command 5" in documentation. Documents suggest the channel had a repeater, but it is not clear whether the repeater was citywide, installed in the Field Comm van, or housed in a battalion chief's vehicle. Recordings of this channel were lost when the Field Comm van was destroyed. The documentary film and oral history records show the channel being used effectively.
The federal 9/11 Commission Report included recommendations on communications systems used by police, fire, and emergency medical services (EMS) at the WTC incident. In the report and in appearances on television news programs, commissioners said the communications systems lacked the ability to communicate across department lines. That is to say, police units could not communicate with fire units directly by radio, and ambulances could not talk with police units directly by radio. Commission member Lee Hamilton, in several television appearances related to a 2006 book on the topic of the WTC incident, reiterated this factually correct view. An example cited by Hamilton: during the incident, the Police Department helicopter was unable to communicate with Fire Department units in order to warn them of the towers' imminent collapse. The NIST document suggests the helicopter may have been able to offer several minutes' warning; several minutes may have been enough to get some people on the lower floors outside. This warning of imminent collapse went out over at least one police radio channel, but there is nothing showing it was relayed to other people or channels.
FDNY operates at least two communications vans, one of which was brought to the scene of the WTC incident. The Commission report reveals the primary FDNY van was equipped to talk to NYPD helicopters, but the backup van (which had no NYPD helicopter capability) was in use on September 11, 2001. In practice, many U.S. helicopters used in emergency services are equipped with radios that allow communications on nearly any conventional two-way radio system, so long as the aircrew know the frequency and associated signaling tones. The radios usually have presets, like a car's broadcast radio, that allow some channels to be configured ahead of need. There was no information in the Commission report suggesting NYPD helicopters had such a capability.
While it is technically possible to implement communications across departments, doing so introduces a host of new training and incident command problems; these would need to be managed in addition to the existing set of issues present at any large incident. The ability to maintain command of, and monitor the safety of, groups working at an incident is diminished if a group of firefighters cannot be reached because they have switched over to the EMS channel. This could cause people to be sent to rescue them when there was no need. Similarly, if the Manhattan EMS dispatcher cannot reach an ambulance because its crew is on one of the fire channels, patient care is affected. New York City Police Commissioner Raymond Kelly, appearing on the "Charlie Rose" show, expressed his view that the existing radio systems performed satisfactorily during the WTC incident; in his view, the interoperability desired by the 9-11 Commission was not needed. These problems are not new to the World Trade Center incident; cross-department and cross-discipline communication has been a hotly contested and long-identified issue.
For example, at the Oklahoma City federal building bombing incident, the inability to communicate among departments was also cited as a problem. Firefighters heard an evacuation order on their radio channel because of the reported presence of a second bomb; police and EMS workers reportedly did not know of the order. In Hurricane Katrina's wake, a sergeant in the Louisiana Department of Wildlife and Fisheries appeared on national television to describe not being able to reach persons from other agencies who were assisting with the recovery. She described seeing the people in a nearby boat but not being able to communicate with them.
Even if the technical problems are solved, the issue is more complicated than just adding radio channels or talk groups; it is also a cultural problem. In one local incident, a large number of officers from three police agencies were fielded to search for a violent criminal who had evaded officers from one of the agencies. The officers did not coordinate by switching to a shared radio channel. After the incident, one participant said the users thought their radios were incompatible and did not understand how the shared channel worked. This possibly reflects a training problem or a technology literacy problem; the problem seems to have been remedied since then. In another instance, a fire agency had thoroughly trained for interoperability scenarios, yet during an incident where two agencies with different radio channels responded, a decision-maker said personnel from his agency would stay on their own channel. Decision-makers may occasionally act in unpredictable ways, even if technology-literate and well-trained. It is not solely a technical problem, but an operational problem as well. Changes to the Incident Command System (ICS) command structure, or operational changes in how the command post for an incident is set up, may produce better results than buying equipment or adding channels.
Sometimes there are interoperability problems even where a structure for interoperability exists. One view of the Incident Command System is that units across department lines would communicate with their own representative at the command post or division level. That representative would relay any needs to another department. For example, a fire unit requesting five paramedic ambulances would identify the magnitude of a medical problem to their fire officer at the command post. This request would add to their commander's operational picture of the division or incident command as she called EMS to request the ambulances. Situation awareness is an important part of effective command and is easy to lose at a large incident. Bypassing incident commanders can contribute to an erosion of command.
One approach to cross-department netting is the capability of some modern trunked systems to provide a function called "dynamic regrouping", a feature that Motorola does not support in simplex (e.g., "fireground") operations; a disaster must therefore be near enough to the infrastructure to allow repeater access and operation. Many agencies with Motorola trunked systems already have this capability, but it is hardly ever used, even in a crisis. The difficulty of operating such a system is often too great for poorly trained dispatchers, who often have no college education, much less any particular training in computers or communications systems, other than the cursory training they receive in a three- or five-day class the vendors offer.
The feature allows dispatch center personnel to send units from different agencies who are responding to the same incident to a common talk group or virtual channel. This assumes the agencies all share the capability to operate over the same trunked radio system, which is rare. In an informal survey of three agencies whose trunked systems included this feature, users at two sites reported they did not think their system included it, and a representative from the third site said he "...thought they had the feature but never used it." In other words, of the three agencies with the feature, no one knew how to use it. This suggests that, in at least the three agencies contacted, dynamic regrouping was not valuable. Like other disaster readiness processes, users would have to practice using the feature in order for it to be useful during an incident.
Some agencies use commercial two-way radio as an adjunct to their own communications networks. One professional engineering evaluation of public safety radio systems explains that commercial systems such as Nextel's are not built to the same standards of coverage and non-blocking as public safety trunked systems. Like toy walkie-talkies marketed to children, they are usable and helpful for non-urgent communications but should not be considered reliable enough for life safety uses. It is also true that most trunked radio system users are likely to hear busy signals (error tones showing no channels are available) for the first time during a large disaster. All systems have a finite capacity.
"We don't want or need trunking" is what Chief Charles Dowd (NYPD) was heard to say at an APCO convention in Orlando in 2006. The NYPD operates a large conventional repeater network with many legacy channels in the UHF band; from this perspective, a technology developed "so a large number of users can share a small number of channels" (i.e., trunking) is unnecessary and a frivolous waste of money. With sufficient channels, there is no need for trunking. There are no busy tones in a conventional repeater system; a user who needs to chime in simply waits his turn, just as he would in a trunked system.
All 911 ambulances and other FDNY vehicles have data terminals, sometimes referred to by staff in recordings and transcripts as MDTs. These terminals are connected to the computer-aided dispatch (CAD) back end, or server. They can display text, page through screens describing jobs, and display lists of units assigned to a job. A thorough analysis of data communications is not possible; what the recordings show is that data terminals in at least some field units did not work properly during at least a portion of the incident. At 09:11:14, "Division 3" told Manhattan Fire dispatch, referring to the "summary" screen, "Summary is only giving me a few units. You're going to have to give it to me over the radio. I'm ready to write." This means the terminal was not displaying the entire list of units assigned to Division 3, as it would under normal conditions. The work-around: the chief had to hand-write the list of units responding. In this one instance, the dispatcher reading the list of about 29 units tied up the Manhattan Fire channel for 53 seconds. During the reading of the list, one can hear several FDNY units try to interrupt the dispatcher; their radio traffic was delayed until the entire list was read. This need to read lists of units because of slow or inoperable terminals occurred in at least three or four cases.
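The Division 3 episode above makes the voice-versus-data trade-off concrete. The 29 units and 53 seconds come from the recording; the bytes-per-unit and link speed below are assumed figures for comparison only.
```python
# The Division 3 work-around in airtime terms.
units, voice_seconds = 29, 53.0
print(f"{voice_seconds / units:.1f} s of air time per unit, read by voice")

list_bytes = units * 10        # assume ~10 bytes per unit identifier
link_bps = 4800                # assume a slow legacy mobile-data link
data_seconds = list_bytes * 8 / link_bps
print(f"{data_seconds:.1f} s to push the same list over a {link_bps} bps link")
# Roughly 1.8 s per unit by voice versus about half a second for the whole
# list as data: the operational benefit the terminals provide when working.
```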
It is unclear what caused the data delays and incomplete screens on the mobile data terminals. As evidenced by the dispatcher reading the list of units assigned to Division 3, the CAD system was working properly at dispatch positions; at least some field units, however, experienced problems. Possible causes of problems with data terminals in vehicles may have included:
Data terminals are purchased and installed partly to reduce the load on dispatch staff and to reduce traffic on voice channels. When they work properly, they have a significant operational benefit. A data outage during a period of high call traffic can quickly overrun dispatch and voice channel capacity in cases where a routine level of calls for service requires both data terminals and voice channels. New York City Council member Eric Gioia introduced a measure to have the Council investigate the issue of FDNY radio problems.
https://en.wikipedia.org/wiki?curid=28070
Strategic Air Command Strategic Air Command (SAC) was both a United States Department of Defense (DoD) Specified Command and a United States Air Force (USAF) Major Command (MAJCOM), responsible for Cold War command and control of two of the three components of the U.S. military's strategic nuclear strike forces, the so-called "nuclear triad": SAC controlled land-based strategic bomber aircraft and intercontinental ballistic missiles (ICBMs), while the third leg of the triad, the submarine-launched ballistic missiles (SLBMs), belonged to the U.S. Navy. SAC also operated all strategic reconnaissance aircraft, all strategic airborne command post aircraft, and all USAF aerial refueling aircraft, including those in the Air Force Reserve (AFRES) and Air National Guard (ANG). However, SAC did not operate the KB-50, WB-50, and WB-47 weather reconnaissance aircraft operated through the mid and late 1960s by the Air Weather Service; nor the HC-130 and MC-130 special operations aircraft capable of aerial refueling of helicopters, which were assigned to Tactical Air Command (TAC), then Military Airlift Command (MAC), and, from 1990 onward for the MC-130, to the Air Force Special Operations Command (AFSOC); nor any AFRES (now Air Force Reserve Command (AFRC)) or ANG tactical aerial refueling aircraft (e.g., HC-130, MC-130) operationally gained by TAC, MAC, or AFSOC.
SAC primarily consisted of the Second Air Force (2AF), Eighth Air Force (8AF), and Fifteenth Air Force (15AF), while SAC headquarters (HQ SAC) included directorates for Operations & Plans, Intelligence, Command & Control, Maintenance, Training, Communications, and Personnel. At a lower echelon, SAC headquarters divisions included Aircraft Engineering, Missile Concept, and Strategic Communications.
In 1992, as part of an overall post-Cold War reorganization of the U.S. Air Force, SAC was disestablished as both a Specified Command and a MAJCOM, and its personnel and equipment were redistributed among the Air Combat Command (ACC), Air Mobility Command (AMC), Pacific Air Forces (PACAF), United States Air Forces in Europe (USAFE), and Air Education and Training Command (AETC), while SAC's central headquarters complex at Offutt AFB, Nebraska was concurrently transferred to the newly created United States Strategic Command (USSTRATCOM), a joint Unified Combatant Command established to replace SAC's Specified Command role. In 2009, SAC's previous USAF MAJCOM role was reactivated and redesignated as the Air Force Global Strike Command (AFGSC), with AFGSC eventually acquiring claimancy and control of all USAF bomber aircraft and the USAF strategic ICBM force.
The Strategic Air Forces of the United States during World War II included General Carl Spaatz's European command, United States Strategic Air Forces in Europe (USSTAF), consisting of the 8AF and 15AF, and the United States Strategic Air Forces in the Pacific (USASTAF) with its Twentieth Air Force (20AF). The Operation Overlord air plan for the strategic bombing of Germany and of German military forces in continental Europe prior to the 1944 invasion of France used several air forces, primarily those of the USAAF and the Royal Air Force (RAF), with command of air operations transferring to the Supreme Commander of the Allied Expeditionary Force on 14 April 1944. Planning to reorganize for a separate and independent postwar U.S. Air Force had begun by the fall of 1945, with the Simpson Board tasked to plan "...the reorganization of the Army and the Air Force...".
In January 1946, Generals Eisenhower and Spaatz agreed on an Air Force organization composed of the Strategic Air Command, the Air Defense Command, the Tactical Air Command, the Air Transport Command and the supporting Air Technical Service Command, Air Training Command, the Air University, and the Air Force Center. Strategic Air Command was originally established in the U.S. Army Air Forces on 21 March 1946, acquiring part of the personnel and facilities of the Continental Air Forces (CAF), the World War II command tasked with the air defense of the continental United States (CONUS). At the time, CAF headquarters was located at Bolling Field (later Bolling AFB) in the District of Columbia, and SAC initially occupied those headquarters facilities before relocating SAC headquarters (HQ SAC) to nearby Andrews Field (later Andrews AFB), Maryland, where SAC remained a tenant activity until assuming control of Andrews Field in October 1946. SAC initially totaled 37,000 USAAF personnel. In addition to Bolling Field and, seven months later, Andrews Field, SAC also assumed responsibility for several other installations. Seven additional CAF bases were transferred to SAC on 21 March 1946 and remained in SAC through the 1947 establishment of the U.S. Air Force as an independent service, and one further installation was assigned to SAC on 31 March 1946. Under the first SAC Commander in Chief, General George C. Kenney, initial units reporting to the Strategic Air Command headquarters on 21 March 1946 included the Second Air Force, the IX Troop Carrier Command and the 73d Air Division. Fifteenth Air Force was assigned to SAC on 31 March (15th AF's 263rd Army Air Force Base Unit transferred the same date directly under HQ SAC), while the IX Troop Carrier Command was inactivated the same date and its assets redistributed within SAC. With postwar demobilization still underway, eight of the ten assigned bomb groups were inactivated before the Eighth Air Force was assigned to SAC on 7 June 1946. Despite the pressures of demobilization, SAC continued the training and evaluation of bomber crews and units still on active duty in the postwar Army Air Forces. Radar Bomb Scoring became the preferred method of evaluating bomber crews, with 888 simulated bomb runs scored against a bombing site near San Diego, California during 1946, increasing to 2,449 bomb runs by 1947. In the wake of the successful employment of air-dropped nuclear weapons against Hiroshima and Nagasaki to effectively end World War II, SAC became the focus of the nation's nuclear strike capability, to the extent that Joint Chiefs of Staff (JCS) Publication 1259/27 on 12 December 1946 identified that "...the 'air atomic' strategic air force should only come under the orders of the JCS." In addition to the strategic bombing mission, SAC also devoted significant resources to aerial reconnaissance. In 1946, SAC's reconnaissance aircraft inventory consisted of F-2 photo variants of the C-45 Expeditor support aircraft, but by 1947 SAC had acquired an F-9C squadron consisting of twelve photo-reconnaissance variants of the B-17G Flying Fortress. An F-13 squadron was also established, the F-13 later being re-designated the RB-29 Superfortress. SAC conducted routine aerial reconnaissance missions near the Soviet borders or near the 12-mile international waters limit, although some missions actually penetrated into Soviet airspace.
The flight profiles of these missions—above 30,000 feet and in excess of 300 knots—made interception by Soviet air forces difficult until the Soviets' 1948 introduction of the MiG-15 jet fighter. Project Nanook, the Cold War's first Top Secret reconnaissance effort, used the first RB-29 missions for mapping and visual reconnaissance in the Arctic and along the northern Soviet coast. Later missions were Project LEOPARD along the Chukchi Peninsula, followed by Projects RICKRACK, STONEWORK, and COVERALLS. In 1946, the US possessed only nine atomic bombs and twenty-seven B-29s capable at any one time of delivering them. Furthermore, it was later determined that an attack by the 509th Composite Bomb Group during the 1947 to 1948 time frame would have required at least five to six days just to transfer custody of the bombs from United States Atomic Energy Commission (AEC) sites to SAC and deploy the aircraft and weapons to forward operating bases before launching nuclear strikes. Postwar budget and personnel cuts had an insidious effect on SAC as its Deputy Commander, Major General Clements McMullen, implemented mandated force reductions. This continued to wear down SAC as a command, and morale plummeted. As a result, by the end of 1947, only two of SAC's eleven groups were combat ready. After the 1948 Bikini Atoll nuclear tests, the "Half Moon" Joint Emergency War Plan developed in May 1948 proposed dropping 50 atomic bombs on twenty Soviet cities, with President Harry S. Truman approving "Half Moon" during the June 1948 Berlin Blockade (Truman sent B-29s to Europe in July). SAC also ordered special ELINT RB-29s to detect improved Soviet radars and, in cooperation with the 51st Air Force Base Unit, SAC also monitored radioactive fallout from Soviet atomic testing on Novaya Zemlya. In terms of overall Air Force basing and infrastructure, SAC continued to acquire an ever-increasing share of USAF infrastructure and its associated budget. In 1947, before the USAF was established as an independent service, construction commenced on Limestone AAF, Maine (later renamed Loring AFB), a new SAC installation specifically designed to accommodate the B-36 Peacemaker. Fort Dix AAF, New Jersey (later McGuire AFB); Spokane AAF, Washington (later Fairchild AFB); and Wendover Field, Utah (later Wendover AFB) were also transferred to SAC between 30 April and 1 September 1947. Following establishment of the USAF as a separate service, SAC operated numerous bases in the United States, with still others subsequently added. In addition to bases under its operational control, SAC also maintained tenant wings at several bases under the control of other USAF MAJCOMs, and often maintained a tenant presence at former SAC bases that the command had subsequently transferred and relinquished to other MAJCOMs. SAC transferred to the United States Air Force on 26 September 1947, concurrent with the latter's establishment as a separate military service. Units directly under SAC HQ included the 8AF and 15AF, as well as the 311th Air Division, 4th Fighter Wing, 82nd Fighter Wing, 307th Bomb Wing, and two reconnaissance units, the 311th Reconnaissance Wing and the 46th Reconnaissance Squadron. The 56th Fighter Wing was subsequently assigned to SAC on 1 October 1947. Following the establishment of the U.S. Air Force, most SAC installations on U.S.
territory were renamed as "Air Force Base" during late 1947 and into 1948, while non-U.S. installations were renamed as "Air Base". In May 1948, in an exercise versus Air Defense Command's "Blue" force, a SAC "Red" strike force simulated attacks on Eastern Seaboard targets as far south as Virginia. After a "scathing" 1948 Lindbergh review of SAC operations in the air and at six SAC bases, General Kenney was removed as Commanding General on 15 October 1948 and replaced on 19 October 1948 by 8AF's commander, Lieutenant General Curtis LeMay. Upon LeMay's assumption of command, SAC had only 60 nuclear-capable aircraft, none of which possessed a realistic long-range capability against the Soviet Union. The B-29D, which had become the B-50 in December 1945, was first delivered to SAC in June 1948. This was followed by SAC's first Convair B-36 Peacemaker bomber arriving at Kirtland AFB, New Mexico in September 1948. In November 1948, LeMay had SAC's headquarters and its command post moved from Andrews AFB, Maryland to Offutt AFB, Nebraska. At Offutt, the command moved into the "A Building", a three-story facility which had previously been used by the Glenn L. Martin Company during World War II. Concurrent with establishment of this new headquarters facility, LeMay also increased SAC Radar Bomb Scoring (RBS) runs the same year to 12,084. SAC also enhanced its organic fighter escort capability by initiating replacement of its World War II vintage piston-engine F-51D Mustang and F-82E Twin Mustang fighter aircraft with F-84G Thunderjets. In January 1949, SAC conducted simulated raids on Wright-Patterson AFB, Ohio. The results of these simulated raids by "...LeMay's entire command...were appalling", despite the SAC deputy commander, Major General McMullen, having instructed all bomber units to improve their effectiveness. To motivate crews and improve operational effectiveness command-wide, SAC established a competition, the first so-called "Bomb Comp", in 1948. Winners of this inaugural event were the 43rd Bombardment Group and, for the aircrew award, a B-29 team from the 509th Bombardment Group. Given its global operating environment, SAC also opened its own survival school at Camp Carson, Colorado in 1949, later moving this school to Stead AFB, Nevada in 1952 before transferring the school to the Air Training Command in 1954. SAC also created Emergency War Plan 1–49 (EWP 1–49), which outlined the means for delivering 133 atomic bombs, "...the entire stockpile...in a single massive attack..." on 70 Soviet cities over a 30-day period. The first Soviet atomic bomb test occurred on 29 August 1949, and the Joint Chiefs of Staff (JCS) subsequently identified that SAC's primary objective was to damage or destroy the Soviet Union's ability to deliver nuclear weapons. The JCS further defined SAC's secondary objective as stopping any Soviet advances into Western Europe, and its tertiary objective as the previous EWP 1–49 industrial mission. In July 1950, in response to combat operations on the Korean peninsula, SAC dispatched ten nuclear-capable bombers to Guam and deployed four B-29 bomber wings in Korea for tactical operations, although this action caused SAC commander LeMay to comment "...too many splinters were being whittled off the [deterrence] stick". Initial SAC B-29 successes against North Korea in the summer of 1950 were countered by subsequent Soviet MiG-15 fighter-interceptors, and SAC's 27th Fighter Escort Wing began escorting the bombers with F–84 Thunderjets.
Ground-directed bombing (GDB) was subsequently used for close air support (CAS) missions after three SAC radar bomb scoring (RBS) squadron detachments (Dets C, K, & N) arrived at Pusan in September 1950. In 1951, SAC "began to eliminate its combat groups", transferring medium bombardment groups "to Far East Air Forces (FEAF) Bomber Command for combat." In 1951, LeMay convinced the Air Staff to allow SAC to approve nuclear targets, and he continued refusing to submit war plans for JCS review, which the JCS eventually came to accept (of 20,000 candidates in 1960, SAC designated 3,560 as bombing targets—mostly Soviet air defense: airfields and suspected missile sites). Although aerial refueling had been experimented with prior to World War II, SAC refined it to a fine art. SAC's in-flight refueling mission began in July 1952 when its 31st Fighter-Escort Wing flew sixty F-84G Thunderjets from Turner AFB, Georgia to Travis AFB, California non-stop with fuel from twenty-four KB-29P Superfortresses modified into aerial tankers. Exercise FOX PETER ONE followed, with 31st FEW fighters being refueled en route to Hickam AFB, Hawaii. On 15 March 1953, a 38th Strategic Reconnaissance Squadron RB-50 returned fire on a Soviet MiG-15; a 343d Strategic Reconnaissance Squadron RB-50 was shot down over the Sea of Japan two days after the Korean Armistice; and on 7 November 1954, an RB-29 was shot down near Hokkaido Island in northern Japan. By the time of the 27 July 1953 Korean War cease-fire, SAC B-29s had flown over 21,000 sorties and dropped nearly 167,000 tons of bombs, with thirty-four B-29s lost in combat and forty-eight lost to damage or crashes. SAC's first jet strategic bomber was the swept-wing B-47 medium bomber, which first entered service in 1951 and became operational within SAC in 1953. The B-47 was a component of the October 1953 "New Look" strategy, which articulated, in part, that "...to minimize the threat...the major purpose of air defense was not to shoot down enemy bombers—it was to allow SAC...to get into the air [and] not be destroyed on the ground [to allow] massive retaliation." Concern over a bomber gap grew after the 1955 Soviet Aviation Day, and the Soviets rejected the "Open Skies" Treaty proposed at the Geneva Summit on 21 July 1955. US bomber strength peaked with "over 2,500 bombers" after production "of over 2,000 B-47s and almost 750 B-52s" (circa 1956, 50% of SAC aircraft and 80% of SAC bombers were B-47s). In an effort to concurrently enhance its reconnaissance capabilities, SAC also received several RB-57D Canberra aircraft in April 1956, with the aircraft initially based at Turner AFB, Georgia. In 1957, these aircraft were forward deployed to Rhein-Main Air Base, West Germany, in order to conduct reconnaissance missions along the borders of the Soviet Union and other Warsaw Pact nations. However, an unintended consequence of this deployment was that Hawker Hunter fighters of the Royal Air Force stationed in the United Kingdom and in continental Europe often intercepted these classified RB-57 missions as they returned to Rhein-Main AB from over the Baltic. Since it was designed as a medium bomber, SAC's B-47 Stratojet traded range for speed. Because of this shorter range, and in order to better enable the B-47 fleet to reach its target sets in the Soviet Union, SAC routinely deployed its US-based B-47 wings to overseas forward operating bases in North Africa, Spain and Turkey.
This program, in effect from 1957 to 1966, was known as "Reflex", with Sixteenth Air Force (16AF), a SAC numbered air force permanently stationed in Europe, having tactical and administrative control of the forward deployed aircraft and units. Beginning in 1955, SAC also moved a portion of its bomber and aerial refueling aircraft to a 24-hour alert status, either on the ground or airborne. By 1960, fully one third of SAC's bombers and aerial refueling aircraft were on 24-hour alert, with those crews and aircraft not already airborne ready to take off from designated alert sites at their respective bases within fifteen minutes. Bomber aircraft on ground alert were armed with nuclear weapons, while aerial tanker aircraft were sufficiently fueled to provide maximum combat fuel offload to the bombers. Concurrent with this increased alert posture, and in order to better hone strategic bombing skillsets, the 1955 SAC Bombing and Navigation Competition was characterized by radar bomb scoring (RBS) runs on Amarillo, Denver, Salt Lake City, Kansas City, San Antonio and Phoenix; the 1957 competition (nicknamed "Operation Longshot") had three targets: Atlanta, Kansas City, and St. Louis. RBS was thus used to simulate target areas, utilizing mobile and fixed bomb scoring sites adjacent to major cities, industrial areas, military installations and dedicated bombing ranges throughout the United States. This format would continue through successive SAC Bombing and Navigation Competitions through the remainder of the 1950s, 1960s, 1970s and 1980s. Commencing in the late 1950s, in addition to representation from every SAC wing with a bombing and/or air refueling mission, later SAC competitions would also include participating bomber and aerial refueling units from the Royal Air Force's Bomber Command and (after 30 April 1968) its successor, RAF Strike Command. It was described as the "Western Pentagon," specifically a "...four-story, reinforced concrete and masonry office building..." above ground and a "...segregated, adjacent three-story below ground command post." This was the description of what would become Building 500 at Offutt AFB and the new headquarters complex built expressly for SAC, with construction commencing in 1955. SAC headquarters moved from the A Building at Offutt AFB to Building 500 in 1957. The underground nuclear bunker had 24-inch thick walls and base floor, 10-inch thick intermediate floors, and a 24-to-42-inch thick roof. It also contained a war room with six 16-foot data display screens and the capacity to sustain up to 800 people underground for two weeks. The below ground bunker portion of the headquarters complex also contained an IBM 704 computer, which was used to develop monthly weather forecasts at targets, as well as for computing fuel consumption and fallout cloud patterns for planning strike routes and egress routes (e.g., determining the timing as to which targets to bomb first). In 1957, SAC also constructed The Notch, a facility alternatively known as the 8th Air Force Combat Operations Center (COC) and the Westover Communications Annex, since it was a sub-post of nearby Westover AFB. A 3-story nuclear bunker located on Bare Mountain, Massachusetts, The Notch was built with three-foot thick walls and 1.5-foot thick steel blast doors, and was buried 20 feet underground to protect 350 people for 35 days. The Notch was shut down as a SAC facility in 1970 when 8th Air Force was relocated to Barksdale AFB, Louisiana.
Despite this investment in "hardened" headquarters and command and control facilities, the 1957 Gaither Commission identified "...little likelihood of SAC's bombers surviving [a Soviet first strike] since there was no way to detect an incoming attack until the first [Soviet nuclear weapon] warhead landed." As a result, SAC's bombers and tankers began standing armed ground alert at their respective bases on 1 October 1957. In another organizational change during this time period, SAC's fighter escort wings were transferred to Tactical Air Command (TAC) during 1957 and 1958. Finally, during January 1958's Exercise Fir Fly, SAC "faker" aircraft (twelve B-47s) simulated bombing strikes against metropolitan areas and military installations in the United States defended by Air Defense Command's 28th Air Division. After SAC's 1st Missile Division was activated on 18 March 1957, SAC HQ established the Office of Assistant CINCSAC (SAC MIKE) at the Air Force Ballistic Missile Division in California on 1 January 1958. SAC MIKE was responsible for missile development liaison, the intermediate-range Jupiter and Thor missiles having been transferred to SAC for alert in 1958. Beginning on 1 February 1958, a SAC Liaison Team was also located at the NORAD Command Post at Ent AFB, Colorado, and the two commands agreed that direct land line communications should connect SAC bases with NORAD's Air Defense Direction Centers. Also in the late 1950s, SAC continued to enhance its intelligence collection activities and develop innovative means of improving the survivability of its forces against surprise attack. Beginning in 1958, a SAC Detachment (TUSLOG Det 50) operated at Incirlik AB, Turkey, monitoring Soviet missile telemetry from the Kapustin Yar and Tyuratam launch complexes, while in 1959, SAC's Operation Big Star studied, prototyped and evaluated the potential of deploying Minuteman I ICBMs on civilian railroad tracks via USAF-operated locomotives and trains. President Eisenhower approved the first Atlas ICBM launch by a SAC crew for 9 September 1959 at Vandenberg AFB. While missile operations continued to ramp up, robust training for flight crews to ensure survivability for strike missions also continued. In some instances SAC bombers would oppose ADC fighter-interceptors simulating Soviet interceptors. Conversely, SAC assisted ADC readiness by simulating Soviet bomber threats to the continental United States that ADC fighters would respond to. However, following a mid-air collision between an ADC F-102 and a SAC B-47 during a 17 December 1959 Quick Kick exercise, simulated NORAD fighter attacks were prohibited against SAC bombers. On 18 March 1960, SAC intercontinental missiles began alert at Maine's Snark Missile Launch Complex adjacent to Presque Isle AFB. The following month, on 22 April 1960, SAC turned over the last British-based PGM-17 Thor IRBM to the Royal Air Force. This was soon followed by SAC's first Titan I ICBMs at Lowry AFB's Titan I Missile Complex 1A in Colorado being placed on alert that June. Beginning in November 1959, in order to counter Soviet surface-to-air missile threats, SAC began adding low-altitude bombing training for its manned bomber force as an adjunct to its legacy high-altitude training. Low-level flight route corridors known as "Oil Burner" routes (later renamed "Olive Branch" routes in the 1970s) were put into use, and the first of three SAC RBS trains was utilized starting in 1960.
On 30 June 1960, SAC had 696 aircraft on alert in the Zone of Interior, also known as the ZI (referred to today as the Continental United States, or CONUS), and at overseas bases. These 696 aircraft were 113 B-52s, 346 B-47s, 85 KC-135s, and 152 KC-97s. SAC's Emergency War Order (EWO) required the first aircraft to be airborne within 8 minutes and all aircraft to be airborne within 15 minutes after notification. During the mid-1950s, having recalled numerous World War II USAAF and Korean War USAF combat veteran pilots, navigators, bombardiers and aircrewmen from inactive reserve status back to various lengths of active duty, SAC took the lead in integrating the Air Force's reserve components into the overall SAC structure. By the beginning of the 1960s, SAC had also engineered the assignment of KC-97 Stratotanker aerial refueling aircraft to Air National Guard groups and wings, having them fall under SAC's operational claimancy. On 11 August 1960, President Eisenhower approved the creation of the Joint Strategic Target Planning Staff (JSTPS), co-located at SAC headquarters at Offutt AFB. JSTPS also included non-SAC agencies tasked with preparing the Single Integrated Operational Plan, or SIOP, and the National Strategic Target List for nuclear war. On 1 July 1960, a SAC RB-47 with a six-man crew was shot down in international airspace over the Barents Sea by a Soviet MiG-19. Four of the crewmen were killed and two surviving crewmen were captured and held in Lubyanka Prison in Moscow for seven months. On 3 February 1961, SAC's Boeing EC-135 Looking Glass began operations as the Airborne Command Post for the Nuclear Triad and the Post-Attack Command and Control System. From this date and for the next 29 1/2 years, until 24 July 1990, SAC would maintain at least one Looking Glass aircraft continuously aloft 24 hours a day, 365 days a year, with an embarked SAC general officer and battle staff, ready to assume command of all strategic nuclear strike forces in the event that SAC headquarters was destroyed in a Soviet first strike. SAC's airborne alerts during this period also included Operation Chrome Dome for the bomber and tanker force. Although ostensibly a peacetime mission, Chrome Dome placed heavy demands on flight crews, and five B-52 aircraft were lost to airborne mishaps during the operation's eight-year period. On 11 May 1961, SAC took delivery of its first B-58 Hustler supersonic medium bomber, assigning it to the 305th Bombardment Wing at Bunker Hill AFB. Optimized for high-altitude, high-speed penetration into Soviet territory prior to Soviet advancements in high-altitude surface-to-air missiles, the B-58 was expensive to operate and inefficient at lower altitudes. Its service in SAC would be comparatively short, the type eventually being replaced by the FB-111 by 1970. After an early 1961 development by SAC of a Radar Bomb Scoring (RBS) field kit for use in the U.S. Army's Nike surface-to-air missile systems, SAC aircraft flew several mock penetrations into Air Defense Command sectors in the 1961 SAGE/Missile Master test program, as well as the joint SAC-NORAD Sky Shield II exercise, followed by Sky Shield III on 2 September 1962. In 1961, following the Berlin Crisis, President John F. Kennedy increased the number of SAC aircraft on alert to 50 percent, and during periods of increased tensions SAC kept some B-52s airborne in the event of a surprise attack.
In 1962, SAC gained full control of the various "Q Areas" developed by Sandia Laboratories for nuclear weapon storage adjacent to Loring AFB (Site E (Maine)/Caribou AFS), Ellsworth AFB (Site F (South Dakota)/Rushmore AFS), Fairchild AFB (Site G (Washington)/Deep Creek AFS), Travis AFB (Site H (California)/Fairfield AFS), and Westover AFB (Site I (Massachusetts)/Stony Brook AFS). These adjunct sites were subsequently converted to USAF-operated and maintained weapon storage areas (WSAs) in the same manner as WSAs on other SAC bases. The solid-fuel LGM-30A Minuteman I was first deployed in 1962 and the LGM-25C Titan II reached operational service in 1963. Project Added Effort phased out all first-generation ICBMs beginning on 1 May 1964, when Atlas-D missiles were taken off alert at Vandenberg AFB's 576th SMS (the LGM-30F Minuteman II replaced Minuteman I in 1965). In October 1962, a SAC BRASS KNOB U-2 mission piloted by Major Richard S. Heyser detected Soviet intermediate range ballistic missiles in Cuba. BRASS KNOB operations involving multiple U-2 aircraft subsequently commenced at a forward operating location at McCoy AFB, Florida the same month. On the morning of 27 October, a SAC RB-47H of the 55th Strategic Reconnaissance Wing, forward deployed to Kindley AFB, Bermuda, crashed on takeoff, killing all four crewmembers, while later that afternoon, a 4028th Strategic Reconnaissance Squadron U-2 forward deployed to McCoy AFB for BRASS KNOB operations was shot down over Cuba by an SA-2 Guideline missile, killing the pilot, Major Rudolf Anderson. Throughout the early 1960s, the Kennedy Administration, under the aegis of Secretary of Defense McNamara, cancelled numerous SAC modernization programs. These included the Mach 3 North American B-70 Valkyrie in 1961, the GAM-87 Skybolt missile in 1962, and the Rocky Mountain Deep Underground Support Center in 1963. The B-70's demise came due to its design as a high-altitude bomber with very limited low-altitude performance, making it vulnerable to rapid advances in Soviet high altitude surface-to-air missile defense systems. The following year, Skybolt, an air-launched ballistic missile, was cancelled following numerous test failures and the perceived greater reliability of land-based and submarine-based ballistic missile systems. Although it had initially entered service in 1957, SAC's 2nd-generation aerial refueling aircraft, the KC-135 Stratotanker, had by this period reached sufficient inventory numbers to allow SAC to begin divestiture of its KC-97 Stratofreighter tankers, transferring them to SAC-gained Air Force Reserve and Air National Guard units. As the KC-135 became the primary aerial tanker in active service, SAC employed the aircraft for several non-stop B-52 and KC-135 flights around the world, demonstrating that SAC no longer needed to depend on Reflex stations at air bases in Spain and Britain. After the Secretary of Defense rejected LeMay's November 1964 proposal for a "...strategic air campaign against 94 targets in North Vietnam...", thirty SAC B-52Fs were deployed to Andersen AFB, Guam on 17 February 1965, representing the first increment of SAC aircraft forward deployed for the Vietnam War. The following month, in March 1965, the Strategic Air Command Advanced Echelon (SACADVON) was established as a "...liaison unit for CINCSAC...located at MACV Headquarters to assist with the B-52 effort." On 23 May 1965, SAC B-52Fs began unarmed missions for radar mapping "...and later to test bombing with the assistance of ground homing beacons..."
SAC began saturation bombing on 18 June 1965 (8,000 tons per month in 1966) and conducted Operation Arc Light missions from 1965 until the end of hostilities involving U.S. forces in 1973. All B-52F missions in 1965 were against targets in South Vietnam (RVN) except for the December "...Duck Flight mission [that] hit a suspected VC supply storage area [for which] part of the target box was in Laos." In April 1966, Vietnam operations began with the B-52D, a 1956 variant designed to use the AGM-28 Hound Dog cruise missile and the ADM-20 Quail aerial decoys for low altitude operations, modified in late 1965 by Project Big Belly to increase conventional bomb capacity. SAC's RBS Squadrons were discontinued when most detachment personnel transferred to Vietnam from 1966 to 1973 for Combat Skyspot ground-directed bombing operations. The first "Quick Reaction" bombing was the "Pink Lady" mission on 6 July 1966, using SAC B-52D/Fs to support the U.S. Army's 1st Air Cavalry Division. The 1972 Operation Linebacker II also used Skyspot for Hanoi/Haiphong bombings in North Vietnam, which resulted in the loss of 25 SAC aircrew members. By May 1967, SACADVON had moved to Seventh Air Force headquarters at Tan Son Nhut Air Base, South Vietnam to schedule and coordinate "...strikes for the 7th AF and MACV." From a level of 161,921 military and 20,215 civilian personnel assigned to SAC in June 1968, SAC lost 13,698 first-term airmen from November 1968 to May 1969 in a three-phase drawdown known as Project 693 to comply with Public Law 90-364. While conventional bombing, air refueling and strategic air reconnaissance operations in Southeast Asia increasingly occupied SAC's operational commitments, SAC's primary mission of nuclear deterrence remained its central focus. In 1969, "...SAC's B-52s and B-58s could carry B28, B41, B43, B53, and BA53 nuclear weapons" (SAC had 311 nuclear AGM-28 Hound Dog missiles at the end of the year). This also coincided with the B-58 Hustler's in-progress retirement from SAC's active inventory and its replacement with the FB-111. On 18 March 1969, SAC first bombed Cambodia along the South Vietnamese border (Operation Menu, which ran through 26 May 1970, was controlled by Skyspot). On 17 February 1970, SAC conducted the first "GOOD LOOK" bombing of Laos at the Plaine des Jarres after B-52 photorecon missions ("GOOD LOOK ALPHA" in August 1969 and "GOOD LOOK BRAVO") and the observations of a Skyspot installation in Thailand. SAC transferred "...HQ 8th AF...to Andersen AFB, Guam on 1 April 1970 to oversee B-52D/G operations and to complement SACADVON". 8th AF took over from Third Air Division the generation of "frag" orders based on daily strike requests and amendments from COMUSMACV. In 1970, SAC deployed the LGM-30G Minuteman III ICBM with multiple independently targetable reentry vehicles, or MIRVs, capable of striking three targets, while concurrently retiring the B-58 Hustler supersonic bomber. 1972 saw the commencement of Operation Linebacker II, a combined Seventh Air Force and U.S. Navy Task Force 77 aerial bombing campaign conducted against targets in North Vietnam during the final period of US involvement in the Vietnam War. Linebacker II was conducted from 18 December to 29 December 1972, leading to several informal names such as "The December Raids" and "The Christmas Bombings".
Unlike the previous Operation Rolling Thunder and Operation Linebacker interdiction operations, Linebacker II was to be a "maximum effort" bombing campaign to destroy major target complexes in the Hanoi and Haiphong areas, which could only be accomplished by SAC B-52D/Gs. It saw the largest heavy bomber strikes launched by the U.S. Air Force since the end of World War II. Linebacker II was a modified extension of the Operation Linebacker bombings conducted from May to October 1972, with the emphasis of the new campaign shifted to attacks by B-52 Stratofortress heavy bombers rather than smaller tactical fighter aircraft. During Linebacker II, a total of 741 B-52D/G sorties were dispatched from bases in Thailand and Guam to bomb North Vietnam, and 729 actually completed their missions. Overall SAC losses during Linebacker II numbered fifteen B-52s. The U.S. government claimed that the operation had succeeded in forcing North Vietnam's Politburo to return to the negotiating table, with the Paris Peace Accords signed shortly after the operation. By early 1973, offensive SAC air operations in Southeast Asia ceased and numerous SAC aircrewmen who had been shot down and captured as prisoners of war by North Vietnam were repatriated to the United States. SAC aircraft used during the Vietnam War included the B-52D, B-52F, B-52G, KC-135A, KC-135Q, various versions of the RC-135, SR-71, U-2, and EC-135. During the Vietnam War, due to the escalating costs of combat operations in Southeast Asia, SAC was required to close several SAC bases, consolidate other bases, or transfer several bases to other MAJCOMs, other services, or the Air Reserve Component in order to remain within budgetary constraints. With the Vietnam War draw-down following the Paris Peace Treaty in 1973, reduced defense budgets forced SAC to inactivate several more wings, close still more bases in CONUS and Puerto Rico, transfer still additional bases to other MAJCOMs or the Air Reserve Component, and retire older B-52B, B-52C, B-52E and B-52F aircraft. In 1973, the National Emergency Airborne Command Post, or NEACP, aircraft entered SAC's inventory. Consisting of four Boeing E-4 aircraft, these highly modified Boeing 747 airframes were assigned to the 55th Strategic Reconnaissance Wing at Offutt AFB and were forward deployed as necessary to support the National Command Authority. By 1975, SAC's manned bomber strength included several hundred B-52D, B-52G, B-52H and FB-111A aircraft; "...SAC's first major exercise in 23 years" was Exercise Global Shield 79. As for the ICBM force, SAC reached a peak strength of 1,000 Minuteman II and III and 54 Titan II ICBMs on active status before seeing reductions and retirements through a combination of obsolescing systems and various arms reduction treaties with the Soviet Union. By 1977, SAC had pinned its hopes for a new manned strategic bomber on the Rockwell B-1A. However, on 30 June 1977, President Jimmy Carter announced that the B-1A would be canceled in favor of ICBMs, submarine-launched ballistic missiles (SLBMs), and a fleet of modernized B-52s armed with air-launched cruise missiles (ALCMs). On 1 December 1979, SAC assumed control of the Ballistic Missile Early Warning System (BMEWS) and all Space Surveillance Network facilities from the inactivating Aerospace Defense Command (ADC). These activities would later be transferred to Air Force Space Command (AFSPC) when the latter was established in 1982.
SAC also continued to operate the Air Force's entire KC-135 aerial refueling fleet, its EC-135 LOOKING GLASS and E-4 NEACP command post aircraft, as well as the entire strategic reconnaissance aircraft fleet consisting of the U-2, SR-71, RC-135, and WC-135. In 1981, SAC received a new air refueling tanker aircraft to supplement the aging KC-135 Stratotanker force. Based on the McDonnell Douglas DC-10 commercial airliner, the KC-10A Extender was fielded with improved military avionics, aerial refueling equipment, and satellite communications equipment. That same year, President Ronald Reagan reversed the 1977 Carter administration decision regarding the B-1, directing that 100 examples of a refined version of the aircraft, now designated the B-1B Lancer, be procured as a long-range combat aircraft for SAC. The LGM-118A Peacekeeper ICBM reached SAC in 1986, and the 114 Peacekeepers had a total warhead yield of about 342 megatons. This also served to offset the retirement of the obsolescent and maintenance-intensive LGM-25C Titan II ICBM, the last example of which was deactivated in May 1987. An additional underground "16,000 square-foot, two-story reinforced concrete" command post for HQ SAC was also constructed at Offutt AFB from 1986 to 1989 from a design by Leo A. Daly, who had designed the adjoining 1957 bunker. The first Rockwell B-1B Lancer was also delivered to SAC in 1987. On 22 November 1988, the Northrop Grumman B-2 Spirit, under development as the Advanced Technology Bomber (ATB), a so-called "black program" since 1979, was officially acknowledged and rolled out for the first time for public display. The first "stealth bomber" designed for SAC, the aircraft made its first flight in May 1989. SAC reorganization at the end of the Cold War began as early as 1988, when the Carlucci Commission planned the closure of several SAC installations. These closures were the beginning of a post-Cold War process that would later become known as Base Realignment and Closure, or BRAC. Although Mather AFB's navigator training mission would relocate to Randolph AFB, Texas, the Mather bomber/tanker wing would inactivate and the AFRES tanker group would relocate to nearby McClellan AFB, relocating again four years later to Beale AFB when another BRAC process would close McClellan AFB. Concurrently, the Pease AFB bomber/tanker wing would lose its FB-111 aircraft and transfer to Whiteman AFB, Missouri in preparation for transition to the B-2 Spirit, while a portion of Pease would be transferred to the New Hampshire Air National Guard for its ANG air refueling wing and be renamed Pease Air National Guard Base. Additional closures and divestments of SAC bases would continue throughout the late 1980s and early 1990s, accelerating even more as a result of the START I Treaty's mandated elimination of both the entire B-52G fleet and all Minuteman II and Peacekeeper ICBMs, as well as the 1992 reorganization of the Air Force that disestablished SAC and dispersed its assets to other new or existing MAJCOMs, primarily ACC and AMC. In addition to the closures of Mather AFB and Pease AFB, this would eventually include numerous subsequent closure and realignment actions, primarily due to BRAC. On 1 July 1989, the 1st Combat Evaluation Group, reporting directly to SAC headquarters, was split, with most HQ 1CEVG organizations transferring to SAC HQ (e.g., the Command Instrument Flight Division) and the RBS personnel and equipment becoming the 1st Electronic Combat Range Group.
Airborne NEACP alerts ended in 1990, and during 1991's Operation Desert Storm to liberate Kuwait from Iraqi invasion and occupation, SAC bomber, tanker and reconnaissance aircraft flew operations (e.g., B-52s with conventional bombs and conventional-warhead AGM-86 ALCMs) near Iraq from bases in Great Britain, Turkey, Cyprus, Diego Garcia, Saudi Arabia, and the United Arab Emirates. Following Operation Desert Storm, the dissolution of the Soviet Union and the "de facto" end of the Cold War, President George H. W. Bush and Secretary of Defense Dick Cheney directed SAC to take all bomber and refueling aircraft and Minuteman II ICBMs off continuous nuclear alert on 27 September 1991, placing said aircraft instead on quick-reaction ground alert. The 31 May 1992 major reorganization of the USAF organizational structure subsequently disestablished SAC, moving its bomber, reconnaissance and aerial command post aircraft and all SAC ICBMs, along with all Tactical Air Command aircraft, to the newly established Air Combat Command (ACC). The newly established Air Mobility Command (AMC) inherited most of SAC's KC-135 Stratotanker aircraft and the entire KC-10 Extender aerial refueling tanker force, while some KC-135s were reassigned directly to USAFE and PACAF, with one additional air refueling wing assigned to the Air Education and Training Command (AETC) as the KC-135 formal training unit. Land-based ICBMs were later transferred from ACC to Air Force Space Command (AFSPC), while manned bombers remained in ACC. USAF nuclear forces in ACC and AFSPC were then combined with the United States Navy's Fleet Ballistic Missile submarine forces to form the United States Strategic Command (USSTRATCOM), which took over the SAC Headquarters complex at Offutt AFB. In 2009, the entire land-based USAF ICBM force and that portion of the USAF manned bomber force that was still nuclear-capable, i.e., the B-2 Spirit and B-52 Stratofortress, was transferred to the newly established Air Force Global Strike Command (AFGSC), while the B-1 Lancer conventional bomber force remained in ACC. In 2015, these B-1 units were also transferred to Air Force Global Strike Command, which assumed responsibility for all current and future USAF bomber forces. The SAC Museum located adjacent to Offutt AFB was moved in 1998 to a site near Ashland, Nebraska and renamed the Strategic Air and Space Museum in 2001. Organizations commemorating SAC include the Strategic Air Command Veterans Association, the SAC Society, the B-47 Stratojet Association, the B-52 Stratofortress Association, the FB-111 Association, the SAC Airborne Command Control Association, the Association of Air Force Missileers, the SAC Elite Guard Association and the Strategic Air Command Memorial Amateur Radio Club. After the Cold War, SAC histories included a 1996 almanac and a 2006 organizational history. In 2009, the Air Force Global Strike Command (AFGSC) was activated with the lineage of Strategic Air Command. AFGSC, headquartered at Barksdale AFB, Louisiana, is one of two USAF component commands assigned to United States Strategic Command (USSTRATCOM). AFGSC currently consists of Eighth Air Force (8AF), responsible for the nuclear-capable manned heavy bomber force, and Twentieth Air Force (20AF), responsible for the ICBM force. SAC's presence in the United Kingdom was among the command's largest overseas concentrations of forces, with additional forces under SAC's 16th Air Force at air bases in North Africa, Spain and Turkey during the 1950s and 1960s.
SAC "Provisional" wings were also located in Kadena AB, Okinawa and U-Tapao Royal Thai Navy Airfield / U-Tapao AB, Thailand during the Vietnam War SAC also maintained bomber, tanker, and/or reconnaissance aircraft assets at the former Ramey AFB, Puerto Rico in the 1950s, 1960s and 1970s, and at Andersen AFB, Guam; RAF Mildenhall, RAF Fairford and RAF Alconbury in the United Kingdom; Moron AB, Spain; Lajes Field, Azores (Portugal); Diego Garcia, BIOT; and the former NAS Keflavik, Iceland through the 1990s. SAC also conducted operations from RAF Fairford, RAF Alconbury and RAF Mildenhall in the United Kingdom, Moron AB in Spain, Lajes Field in the Azores (Portugal), RAF Akrotiri in Cyprus, Incirlik AB in Turkey, Diego Garcia in the British Indian Ocean Territory, and from multiple air bases in Egypt, Saudi Arabia, Oman, and the United Arab Emirates during the first Gulf War (Operations Desert Shield and Desert Storm) from 1990 to 1991.
https://en.wikipedia.org/wiki?curid=28118
Scheme (programming language) Scheme is a minimalist dialect of the Lisp family of programming languages. Scheme consists of a small standard core with powerful tools for language extension. Scheme was created during the 1970s at the MIT AI Lab and released by its developers, Guy L. Steele and Gerald Jay Sussman, via a series of memos now known as the Lambda Papers. It was the first dialect of Lisp to choose lexical scope and the first to require implementations to perform tail-call optimization, giving stronger support for functional programming and associated techniques such as recursive algorithms. It was also one of the first programming languages to support first-class continuations. It had a significant influence on the effort that led to the development of Common Lisp. The Scheme language is standardized in the official IEEE standard and a "de facto" standard called the "Revised Report on the Algorithmic Language Scheme" (R"n"RS). The most widely implemented standard is R5RS (1998); a new standard, R6RS, was ratified in 2007. Scheme has a diverse user base due to its compactness and elegance, but its minimalist philosophy has also caused wide divergence between practical implementations, so much so that the Scheme Steering Committee calls it "the world's most unportable programming language" and "a "family" of dialects" rather than a single language. Scheme started in the 1970s as an attempt to understand Carl Hewitt's Actor model, for which purpose Steele and Sussman wrote a "tiny Lisp interpreter" using Maclisp and then "added mechanisms for creating actors and sending messages". Scheme was originally called "Schemer", in the tradition of other Lisp-derived languages such as Planner or Conniver. The current name resulted from the authors' use of the ITS operating system, which limited filenames to two components of at most six characters each. Currently, "Schemer" is commonly used to refer to a Scheme programmer. A new language standardization process began at the 2003 Scheme workshop, with the goal of producing an R6RS standard in 2006. This process broke with the earlier R"n"RS approach of unanimity. R6RS features a standard module system, allowing a split between the core language and libraries. A number of drafts of the R6RS specification were released, the final version being R5.97RS. A successful vote resulted in the ratification of the new standard, announced on August 28, 2007. Currently the newest releases of various Scheme implementations support the R6RS standard. There is a portable reference implementation of the proposed implicitly phased libraries for R6RS, called psyntax, which loads and bootstraps itself properly on various older Scheme implementations. A feature of R6RS is the record-type descriptor (RTD). When an RTD is created and used, the record-type representation can show the memory layout. The RTD also provides an object field bit mask and a mutable field bit mask, which let the garbage collector know what to do with the fields without traversing the whole field list saved in the RTD. Users can extend the basic RTD to create new record systems. R6RS introduces numerous significant changes to the language. Source code is now specified in Unicode, and a large subset of Unicode characters may now appear in Scheme symbols and identifiers, and there are other minor changes to the lexical rules. Character data is also now specified in Unicode.
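A short sketch of the R6RS record facility described above (the point record and its field names are illustrative, not taken from the report):

(import (rnrs))
; defines make-point, point?, point-x, point-y and the mutator point-x-set!
(define-record-type point
  (fields (mutable x) (immutable y)))
(define p (make-point 1 2))
(point-x p) ===> 1
(point-x-set! p 5)
(point-x p) ===> 5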
Many standard procedures have been moved to the new standard libraries, which themselves form a large expansion of the standard, containing procedures and syntactic forms that were formerly not part of the standard. A new module system has been introduced, and systems for exception handling are now standardized. Syntax-rules has been replaced with a more expressive syntactic abstraction facility (syntax-case) which allows the use of all of Scheme at macro expansion time. Compliant implementations are now "required" to support Scheme's full numeric tower, and the semantics of numbers have been expanded, mainly in the direction of support for the IEEE 754 standard for floating point numerical representation. The R6RS standard has caused controversy because it is seen to have departed from the minimalist philosophy. In August 2009, the Scheme Steering Committee, which oversees the standardization process, announced its intention to recommend splitting Scheme into two languages: a large modern programming language for programmers, and a small version, a subset of the large version retaining the minimalism praised by educators and casual implementors. Two working groups were created to work on these two new versions of Scheme. The Scheme Reports Process site has links to the working groups' charters, public discussions and issue tracking system. The ninth draft of R7RS (small language) was made available on April 15, 2013. A vote ratifying this draft closed on May 20, 2013, and the final report has been available since August 6, 2013, describing "the 'small' language of that effort: therefore it cannot be considered in isolation as the successor to R6RS". Scheme is primarily a functional programming language. It shares many characteristics with other members of the Lisp programming language family. Scheme's very simple syntax is based on s-expressions, parenthesized lists in which a prefix operator is followed by its arguments. Scheme programs thus consist of sequences of nested lists. Lists are also the main data structure in Scheme, leading to a close equivalence between source code and data formats (homoiconicity). Scheme programs can easily create and evaluate pieces of Scheme code dynamically. The reliance on lists as data structures is shared by all Lisp dialects. Scheme inherits a rich set of list-processing primitives such as cons, car and cdr from its Lisp progenitors. Scheme uses strictly but dynamically typed variables and supports first-class procedures. Thus, procedures can be assigned as values to variables or passed as arguments to procedures. This section concentrates mainly on innovative features of the language, including those features that distinguish Scheme from other Lisps. Unless stated otherwise, descriptions of features relate to the R5RS standard. "In examples provided in this section, the notation "===> result" is used to indicate the result of evaluating the expression on the immediately preceding line. This is the same convention used in R5RS." This subsection describes those features of Scheme that have distinguished it from other programming languages from its earliest days. These are the aspects of Scheme that most strongly influence any product of the Scheme language, and they are the aspects that all versions of the Scheme programming language, from 1973 onward, share. Scheme is a very simple language, much easier to implement than many other languages of comparable expressive power.
This ease is attributable to the use of lambda calculus to derive much of the syntax of the language from more primitive forms. For instance, of the 23 s-expression-based syntactic constructs defined in the R5RS Scheme standard, 14 are classed as derived or library forms, which can be written as macros involving more fundamental forms, principally lambda. As R5RS says (R5RS sec. 3.1): "The most fundamental of the variable binding constructs is the lambda expression, because all other variable binding constructs can be explained in terms of lambda expressions." Example: a macro to implement let as an expression using lambda to perform the variable bindings.

(define-syntax let
  (syntax-rules ()
    ((let ((name val) ...) body1 body2 ...)
     ((lambda (name ...) body1 body2 ...)
      val ...))))

Thus, using let as defined above, a Scheme implementation would rewrite "(let ((a 1) (b 2)) (+ b a))" as "((lambda (a b) (+ b a)) 1 2)", which reduces the implementation's task to that of coding procedure instantiations. In 1998, Sussman and Steele remarked that the minimalism of Scheme was not a conscious design goal, but rather the unintended outcome of the design process. "We were actually trying to build something complicated and discovered, serendipitously, that we had accidentally designed something that met all our goals but was much simpler than we had intended...we realized that the lambda calculus—a small, simple formalism—could serve as the core of a powerful and expressive programming language." Like most modern programming languages and unlike earlier Lisps such as Maclisp, Scheme is lexically scoped: all possible variable bindings in a program unit can be analyzed by reading the text of the program unit without consideration of the contexts in which it may be called. This contrasts with dynamic scoping, which was characteristic of early Lisp dialects because of the processing costs associated with the primitive textual substitution methods used to implement lexical scoping algorithms in compilers and interpreters of the day. In those Lisps, it was perfectly possible for a reference to a free variable inside a procedure to refer to quite distinct bindings external to the procedure, depending on the context of the call. The impetus to incorporate lexical scoping, which was an unusual scoping model in the early 1970s, into their new version of Lisp came from Sussman's studies of ALGOL. He suggested that ALGOL-like lexical scoping mechanisms would help to realize their initial goal of implementing Hewitt's Actor model in Lisp. The key insights on how to introduce lexical scoping into a Lisp dialect were popularized in Sussman and Steele's 1975 Lambda Paper, "Scheme: An Interpreter for Extended Lambda Calculus", where they adopted the concept of the lexical closure (on page 21), which had been described in an AI Memo in 1970 by Joel Moses, who attributed the idea to Peter J. Landin. Alonzo Church's mathematical notation, the lambda calculus, has inspired Lisp's use of "lambda" as a keyword for introducing a procedure, as well as influencing the development of functional programming techniques involving the use of higher-order functions in Lisp. But early Lisps were not suitable expressions of the lambda calculus because of their treatment of free variables. A formal lambda calculus has axioms and complete calculation rules, and is amenable to analysis using mathematical logic and tools. In this system, calculation can be seen as directional deduction. The syntax of the lambda calculus follows the recursive expressions built from x, y, z, ..., parentheses, spaces, the period and the symbol λ.
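This notation corresponds directly to Scheme's lambda expression; for instance (an illustrative snippet, not drawn from the reports), the λ-term λx.(x·x) applied to 5 is written:

((lambda (x) (* x x)) 5)
===> 25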
The lambda calculus serves several functions. First, it serves as a starting point for powerful mathematical logic. Second, it can reduce the programmer's need to consider implementation details, because it can be used to imitate machine evaluation. Finally, the lambda calculus has created a substantial meta-theory. The introduction of lexical scope resolved the problem by making an equivalence between some forms of lambda notation and their practical expression in a working programming language. Sussman and Steele showed that the new language could be used to elegantly derive all the imperative and declarative semantics of other programming languages including ALGOL and Fortran, and the dynamic scope of other Lisps, by using lambda expressions not as simple procedure instantiations but as "control structures and environment modifiers". They introduced continuation-passing style along with their first description of Scheme in the first of the Lambda Papers, and in subsequent papers, they proceeded to demonstrate the raw power of this practical use of lambda calculus. Scheme inherits its block structure from earlier block structured languages, particularly ALGOL. In Scheme, blocks are implemented by three "binding constructs": let, let* and letrec. For instance, the following construct creates a block in which a symbol called var is bound to the number 10:

(let ((var 10))
  ; statements that use var go here
  var)

Blocks can be nested to create arbitrarily complex block structures according to the need of the programmer. The use of block structuring to create local bindings alleviates the risk of namespace collision that can otherwise occur. One variant of let, let*, permits bindings to refer to variables defined earlier in the same construct, thus:

(let* ((var1 10)
       (var2 (+ var1 12)))
  var2)

The other variant, letrec, is designed to enable mutually recursive procedures to be bound to one another, as in this definition of Hofstadter's female and male sequences:

(letrec ((female (lambda (n)
                   (if (= n 0) 1
                       (- n (male (female (- n 1)))))))
         (male (lambda (n)
                 (if (= n 0) 0
                     (- n (female (male (- n 1))))))))
  (let loop ((i 0))
    (if (> i 8) '()
        (cons (cons (female i) (male i))
              (loop (+ i 1))))))
===> ((1 . 0) (1 . 0) (2 . 1) (2 . 2) (3 . 2) (3 . 3) (4 . 4) (5 . 4) (5 . 5))

All procedures bound in a single letrec may refer to one another by name, as well as to values of variables defined earlier in the same letrec, but they may not refer to "values" defined later in the same letrec. A variant of let, the "named let" form, has an identifier after the let keyword. This binds the let variables to the arguments of a procedure whose name is the given identifier and whose body is the body of the let form. The body may be repeated as desired by calling the procedure. The named let is widely used to implement iteration. Example: a simple counter

(let loop ((n 1))
  (if (> n 10)
      '()
      (cons n (loop (+ n 1)))))
===> (1 2 3 4 5 6 7 8 9 10)

Like any procedure in Scheme, the procedure created in the named let is a first-class object. Scheme has an iteration construct, do, but it is more idiomatic in Scheme to use tail recursion to express iteration. Standard-conforming Scheme implementations are required to optimize tail calls so as to support an unbounded number of active tail calls (R5RS sec. 3.5)—a property the Scheme report describes as "proper tail recursion"—making it safe for Scheme programmers to write iterative algorithms using recursive structures, which are sometimes more intuitive. Tail recursive procedures and the "named let" form provide support for iteration using tail recursion:

(define (list-of-squares n)
  (let loop ((i n) (res '()))
    (if (< i 0)
        res
        (loop (- i 1) (cons (* i i) res)))))
(list-of-squares 9)
===> (0 1 4 9 16 25 36 49 64 81)

Continuations in Scheme are first-class objects. Scheme provides the procedure call-with-current-continuation (also known as call/cc) to capture the current continuation by packing it up as an escape procedure bound to a formal argument in a procedure provided by the programmer (R5RS sec. 6.4).
First-class continuations enable the programmer to create non-local control constructs such as iterators, coroutines, and backtracking. Continuations can be used to emulate the behavior of return statements in imperative programming languages. The following function find-first, given function func and list lst, returns the first element x in lst such that (func x) returns true (the sample calls shown are illustrative):

(define (find-first func lst)
  (call-with-current-continuation
   (lambda (return-immediately)
     (for-each (lambda (x)
                 (if (func x)
                     (return-immediately x)))
               lst)
     #f)))

(find-first integer? '(1/2 3/4 7 9/2))
===> 7
(find-first zero? '(1 2 3 4))
===> #f

The following example, a traditional programmer's puzzle, shows that Scheme can handle continuations as first-class objects, binding them to variables and passing them as arguments to procedures:

(let* ((yin
        ((lambda (cc) (display #\@) cc)
         (call-with-current-continuation (lambda (c) c))))
       (yang
        ((lambda (cc) (display #\*) cc)
         (call-with-current-continuation (lambda (c) c)))))
  (yin yang))

When executed this code displays a counting sequence: @*@**@***@****... In contrast to Common Lisp, all data and procedures in Scheme share a common namespace, whereas in Common Lisp functions and data have separate namespaces, making it possible for a function and a variable to have the same name and requiring special notation for referring to a function as a value. This is sometimes known as the "Lisp-1 vs. Lisp-2" distinction, referring to the unified namespace of Scheme and the separate namespaces of Common Lisp. In Scheme, the same primitives that are used to manipulate and bind data can be used to bind procedures. There is no equivalent of Common Lisp's defun and #' primitives.

(define f 10)
f
===> 10
(set! f (+ f f 6))
f
===> 26
(set! f (lambda (n) (+ n 12)))
(f 6)
===> 18
(set! f (f 1))
f
===> 13
(apply + '(1 2 3 4 5 6))
===> 21
(map (lambda (n) (+ n 100)) '(1 2 3))
===> (101 102 103)

This subsection documents design decisions that have been taken over the years which have given Scheme a particular character, but are not the direct outcomes of the original design. Scheme specifies a comparatively full set of numerical datatypes including complex and rational types, which is known in Scheme as the numerical tower (R5RS sec. 6.2). The standard treats these as abstractions, and does not commit the implementor to any particular internal representations. Numbers may have the quality of exactness. An exact number can only be produced by a sequence of exact operations involving other exact numbers—inexactness is thus contagious. The standard specifies that any two implementations must produce equivalent results for all operations resulting in exact numbers. The R5RS standard specifies procedures exact->inexact and inexact->exact which can be used to change the exactness of a number. inexact->exact produces "the exact number that is numerically closest to the argument". exact->inexact produces "the inexact number that is numerically closest to the argument". The R6RS standard omits these procedures from the main report, but specifies them as R5RS compatibility procedures in the standard library (rnrs r5rs (6)). In the R5RS standard, Scheme implementations are not required to implement the whole numerical tower, but they must implement "a coherent subset consistent with both the purposes of the implementation and the spirit of the Scheme language" (R5RS sec. 6.2.3). The new R6RS standard does require implementation of the whole tower, and "exact integer objects and exact rational number objects of practically unlimited size and precision, and to implement certain procedures...so they always return exact results when given exact arguments" (R6RS sec. 3.4, sec. 11.7.1). Example 1: exact arithmetic in an implementation that supports exact rational complex numbers.

(define x (+ 1/3 1/4 -1/5 -1/3i 405/50+2/3i))
x
===> 509/60+1/3i
(exact? x)
===> #t
Example 2: the same arithmetic in an implementation that supports neither exact rational numbers nor complex numbers, but does accept real numbers in rational notation.

(define xr (+ 1/3 1/4 -1/5 405/50))
(define xi (+ -1/3 2/3))
xr ===> 8.48333333333333
xi ===> 0.333333333333333
(exact? xr) ===> #f
(exact? xi) ===> #f

Both implementations conform to the R5RS standard, but the second does not conform to R6RS because it does not implement the full numerical tower. Scheme supports delayed evaluation through the delay form and the procedure force (an example is sketched after this passage). The lexical context of the original definition of a promise is preserved, and its value is also preserved after the first use of force. The promise is only ever evaluated once. These primitives, which produce or handle values known as promises, can be used to implement advanced lazy evaluation constructs such as streams. In the R6RS standard, these are no longer primitives, but instead are provided as part of the R5RS compatibility library (rnrs r5rs (6)). In R5RS, a suggested implementation of delay and force is given, implementing the promise as a procedure with no arguments (a thunk) and using memoization to ensure that it is only ever evaluated once, irrespective of the number of times force is called (R5RS sec. 6.4). SRFI 41 enables the expression of both finite and infinite sequences with extraordinary economy. For example, this is a definition of the Fibonacci sequence using the functions defined in SRFI 41:

(define fibs
  (stream-cons 0
    (stream-cons 1
      (stream-map + fibs (stream-cdr fibs)))))

(stream-ref fibs 99) ===> 218922995834555169026

Most Lisps specify an order of evaluation for procedure arguments. Scheme does not. Order of evaluation, including the order in which the expression in the operator position is evaluated, may be chosen by an implementation on a call-by-call basis, and the only constraint is that "the effect of any concurrent evaluation of the operator and operand expressions is constrained to be consistent with some sequential order of evaluation." (R5RS sec. 4.1.3)

(define (ev x)
  (display x)
  (newline)
  x)

((ev +) (ev 1) (ev 2)) ===> 3

Here ev is a procedure that describes the argument passed to it, then returns the value of the argument. In contrast with other Lisps, the appearance of an expression in the operator position (the first item) of a Scheme expression is quite legal, as long as the result of the expression in the operator position is a procedure. In calling the procedure "+" to add 1 and 2, the expressions (ev +), (ev 1) and (ev 2) may be evaluated in any order, as long as the effect is not as if they were evaluated in parallel. Thus, when the above example code is executed, the three lines it displays may appear in any order, although the text of one line may not be interleaved with another, because that would violate the sequential evaluation constraint. In the R5RS standard and also in later reports, the syntax of Scheme can easily be extended via the macro system. The R5RS standard introduced a powerful hygienic macro system that allows the programmer to add new syntactic constructs to the language using a simple pattern-matching sublanguage (R5RS sec 4.3). Prior to this, the hygienic macro system had been relegated to an appendix of the R4RS standard, as a "high level" system alongside a "low level" macro system, both of which were treated as extensions to Scheme rather than an essential part of the language. Implementations of the hygienic macro system, also called syntax-rules, are required to respect the lexical scoping of the rest of the language.
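A sketch of delay and force in action, illustrating the behavior described above; the variable names and values are illustrative assumptions:

(define a 10)
(define eval-aplus2 (delay (+ a 2)))
(set! a 20)
(force eval-aplus2) ===> 22        ; evaluated now, using the current value of a
(define eval-aplus50 (delay (+ a 50)))
(let ((a 8))
  (force eval-aplus50)) ===> 70    ; sees the lexical (global) a, not the let-bound a
(set! a 100)
(force eval-aplus2) ===> 22        ; memoized: the promise is only evaluated once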
This is assured by special naming and scoping rules for macro expansion, and it avoids common programming errors that can occur in the macro systems of other programming languages. R6RS specifies a more sophisticated transformation system, syntax-case, which has been available as a language extension to R5RS Scheme for some time. For example, the following defines when as new syntax:

(define-syntax when
  (syntax-rules ()
    ((when test expr ...)
     (if test (begin expr ...)))))

Invocations of macros and procedures bear a close resemblance (both are s-expressions), but they are treated differently. When the compiler encounters an s-expression in the program, it first checks to see if the symbol is defined as a syntactic keyword within the current lexical scope. If so, it then attempts to expand the macro, treating the items in the tail of the s-expression as arguments without compiling code to evaluate them, and this process is repeated recursively until no macro invocations remain. If it is not a syntactic keyword, the compiler compiles code to evaluate the arguments in the tail of the s-expression, then to evaluate the variable represented by the symbol at the head of the s-expression, and finally to call it as a procedure with the evaluated tail expressions passed as actual arguments. Most Scheme implementations also provide additional macro systems. Among popular ones are syntactic closures, explicit renaming macros and define-macro, a non-hygienic macro system similar to the defmacro system provided in Common Lisp. The inability to specify whether or not a macro is hygienic is one of the shortcomings of the macro system. Alternative models for expansion, such as scope sets, provide a potential solution. Prior to R5RS, Scheme had no standard equivalent of the eval procedure, which is ubiquitous in other Lisps, although the first Lambda Paper had described EVALUATE as "similar to the LISP function EVAL", and the first Revised Report in 1978 replaced this with enclose, which took two arguments. The second, third and fourth revised reports omitted any equivalent of eval. The reason for this confusion is that in Scheme, with its lexical scoping, the result of evaluating an expression depends on where it is evaluated. For instance, it is not clear whether the result of evaluating an expression such as the following should be 5 or 6:

(let ((+ *))
  (evaluate '(+ 2 3)))

If it is evaluated in the outer environment, where + is defined, the result is the sum of the operands. If it is evaluated in the inner environment, where the symbol "+" has been bound to the value of the procedure "*", the result is the product of the two operands. R5RS resolves this confusion by specifying three procedures that return environments, and by providing a procedure eval that takes an s-expression and an environment and evaluates the expression in the environment provided. (R5RS sec. 6.5) R6RS extends this by providing a procedure called environment by which the programmer can specify exactly which objects to import into the evaluation environment. With a modern Scheme (usually compatible with R5RS), evaluating this expression requires a definition of evaluate, which can look like this:

(define (evaluate expr)
  (eval expr (interaction-environment)))

(interaction-environment) is the global environment of the interpreter, which is why + in the evaluated expression still refers to the addition procedure. In most dialects of Lisp, including Common Lisp, by convention the value nil evaluates to the value false in a boolean expression. In Scheme, since the IEEE standard in 1991, all values except #f, including nil's equivalent in Scheme, which is written as '(), evaluate to the value true in a boolean expression. (R5RS sec. 6.3.1)
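As an illustrative sketch of what hygiene guarantees (not an example from the standards themselves), a swap! macro's temporary binding cannot capture, or be captured by, names at the use site:

(define-syntax swap!
  (syntax-rules ()
    ((_ a b)
     (let ((tmp a))    ; the expander renames this tmp behind the scenes
       (set! a b)
       (set! b tmp)))))

(define tmp 1)
(define x 2)
(swap! tmp x)          ; works even though the caller also uses the name tmp
tmp ===> 2
x ===> 1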
Where the constant representing the boolean value of true is t in most Lisps, in Scheme it is #t. In Scheme the primitive datatypes are disjoint. Only one of the following predicates can be true of any Scheme object: boolean?, pair?, symbol?, number?, char?, string?, vector?, port?, procedure?. (R5RS sec 3.2) Within the numerical datatype, by contrast, the numerical values overlap. For example, an integer value satisfies all of the integer?, rational?, real?, complex? and number? predicates at the same time. (R5RS sec 6.2) Scheme has three different types of equivalence between arbitrary objects, denoted by three different "equivalence predicates" (relational operators for testing equality): eq?, eqv? and equal?. Type-dependent equivalence operations also exist in Scheme: string=? and string-ci=? compare two strings (the latter performs a case-independent comparison); char=? and char-ci=? compare characters; = compares numbers. Up to the R5RS standard, the standard comment in Scheme was a semicolon, which makes the rest of the line invisible to Scheme. Numerous implementations have supported alternative conventions permitting comments to extend for more than a single line, and the R6RS standard permits two of them: an entire s-expression may be turned into a comment (or "commented out") by preceding it with #; (introduced in SRFI 62), and a multiline comment or "block comment" may be produced by surrounding text with #| and |#. Scheme's input and output is based on the "port" datatype. (R5RS sec 6.6) R5RS defines two default ports, accessible with the procedures current-input-port and current-output-port, which correspond to the Unix notions of standard input and standard output. Most implementations also provide current-error-port. Redirection of input and standard output is supported in the standard, by standard procedures such as with-input-from-file and with-output-to-file. Most implementations provide string ports with similar redirection capabilities, enabling many normal input-output operations to be performed on string buffers instead of files, using procedures described in SRFI 6. The R6RS standard specifies much more sophisticated and capable port procedures and many new types of port. The following examples are written in strict R5RS Scheme. Example 1: With output defaulting to (current-output-port):

(display "Hello world")
(newline)

Example 2: As 1, but using the optional port argument to the output procedures. Example 3: As 1, but output is redirected to a newly created file:

(let ((hello0 (lambda () (display "Hello world") (newline))))
  (with-output-to-file "helloworldoutputfile" hello0))

Example 4: As 2, but with explicit file open and port close to send output to a file. Example 5: As 2, but using call-with-output-file to send output to a file. (Examples 2, 4 and 5 are sketched after this passage.) Similar procedures are provided for input. R5RS Scheme provides the predicates input-port? and output-port?. For character input and output, write-char, read-char, peek-char and char-ready? are provided. For writing and reading Scheme expressions, Scheme provides read and write. On a read operation, the result returned is the end-of-file object if the input port has reached the end of the file, and this can be tested using the predicate eof-object?. In addition to the standard, SRFI 28 defines a basic formatting procedure resembling Common Lisp's format function, after which it is named. In Scheme, procedures are bound to variables.
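Sketches of examples 2, 4 and 5 in strict R5RS; the file name is an assumption carried over from example 3:

;; Example 2: passing the port explicitly.
(display "Hello world" (current-output-port))
(newline (current-output-port))

;; Example 4: explicit open and close of the output port.
(let ((port (open-output-file "helloworldoutputfile")))
  (display "Hello world" port)
  (newline port)
  (close-output-port port))

;; Example 5: call-with-output-file opens the file, passes the port to a
;; procedure, and closes the port when that procedure returns.
(call-with-output-file "helloworldoutputfile"
  (lambda (port)
    (display "Hello world" port)
    (newline port)))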
At R5RS the language standard formally mandated that programs may change the variable bindings of built-in procedures, effectively redefining them. (R5RS "Language changes") For example, one may extend + to accept strings as well as numbers by redefining it:

(set! +
      (let ((original+ +))
        (lambda args
          (apply (if (and (not (null? args)) (string? (car args)))
                     string-append
                     original+)
                 args))))

(+ 1 2 3) ===> 6
(+ "1" "2" "3") ===> "123"

In R6RS every binding, including the standard ones, belongs to some library, and all exported bindings are immutable. (R6RS sec 7.1) Because of this, redefinition of standard procedures by mutation is forbidden. Instead, it is possible to import a different procedure under the name of a standard one, which in effect is similar to redefinition. In Standard Scheme, procedures that convert from one datatype to another contain the character string "->" in their name, predicates end with a "?", and procedures that change the value of already-allocated data end with a "!". These conventions are often followed by Scheme programmers (a few representative names are sketched below). In formal contexts such as Scheme standards, the word "procedure" is used in preference to "function" to refer to a lambda expression or primitive procedure. In normal usage, the words "procedure" and "function" are used interchangeably. Procedure application is sometimes referred to formally as "combination". As in other Lisps, the term "thunk" is used in Scheme to refer to a procedure with no arguments. The term "proper tail recursion" refers to the property, required of all Scheme implementations, that they perform tail-call optimization so as to support an indefinite number of active tail calls. The form of the titles of the standards documents since R3RS, "Revisedⁿ Report on the Algorithmic Language Scheme", is a reference to the title of the ALGOL 60 standard document, "Revised Report on the Algorithmic Language Algol 60". The Summary page of R3RS is closely modeled on the Summary page of the ALGOL 60 Report. The language is formally defined in the standards R5RS (1998) and R6RS (2007). They describe standard "forms": keywords and accompanying syntax, which provide the control structure of the language, and standard procedures which perform common tasks. This table describes the standard forms in Scheme. Some forms appear in more than one row because they cannot easily be classified into a single function in the language. Forms marked "L" in this table are classed as derived "library" forms in the standard and are often implemented as macros using more fundamental forms in practice, making the task of implementation much easier than in other languages. Note that begin is defined as a library syntax in R5RS, but the expander needs to know about it to achieve the splicing functionality. In R6RS it is no longer a library syntax. The following two tables describe the standard procedures in R5RS Scheme. R6RS is far more extensive and a summary of this type would not be practical. Some procedures appear in more than one row because they cannot easily be classified into a single function in the language. String and character procedures that contain "-ci" in their names perform case-independent comparisons between their arguments: upper-case and lower-case versions of the same character are taken to be equal. Implementations of - and / that take more than two arguments are defined but left optional at R5RS. Because of Scheme's minimalism, many common procedures and syntactic forms are not defined by the standard.
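Representative standard names following these conventions (results shown for illustration):

(number->string 255 16) ===> "ff"    ; conversion procedures contain ->
(null? '()) ===> #t                  ; predicates end in ?
(define v (make-vector 3 0))
(vector-set! v 0 'a)                 ; mutators end in !
v ===> #(a 0 0)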
In order to keep the core language small but facilitate standardization of extensions, the Scheme community has a "Scheme Request for Implementation" (SRFI) process by which extension libraries are defined through careful discussion of extension proposals. This promotes code portability. Many of the SRFIs are supported by all or most Scheme implementations, and several enjoy fairly wide support across different implementations. The elegant, minimalist design has made Scheme a popular target for language designers, hobbyists, and educators, and because of the small size of a typical interpreter, it is also a popular choice for embedded systems and scripting. This has resulted in scores of implementations, most of which differ from each other so much that porting programs from one implementation to another is quite difficult, and the small size of the standard language means that writing a useful program of any great complexity in standard, portable Scheme is almost impossible. The R6RS standard specifies a much broader language in an attempt to broaden its appeal to programmers. Almost all implementations provide a traditional Lisp-style read–eval–print loop for development and debugging. Many also compile Scheme programs to executable binary. Support for embedding Scheme code in programs written in other languages is also common, as the relative simplicity of Scheme implementations makes it a popular choice for adding scripting capabilities to larger systems developed in languages such as C. The Gambit, Chicken, and Bigloo Scheme implementations compile Scheme to C, which makes embedding particularly easy. In addition, Bigloo's compiler can be configured to generate JVM bytecode, and it also features an experimental bytecode generator for .NET. Some implementations support additional features. For example, Kawa and JScheme provide integration with Java classes, and the Scheme-to-C compilers often make it easy to use external libraries written in C, up to allowing the embedding of actual C code in the Scheme source. Another example is Pvts, which offers a set of visual tools for supporting the learning of Scheme. Scheme is widely used by a number of schools; in particular, a number of introductory computer science courses use Scheme in conjunction with the textbook "Structure and Interpretation of Computer Programs" (SICP). For the past 12 years, PLT has run the ProgramByDesign (formerly TeachScheme!) project, which has exposed close to 600 high school teachers and thousands of high school students to rudimentary Scheme programming. MIT's old introductory programming class 6.001 was taught in Scheme. Although 6.001 has been replaced by more modern courses, SICP continues to be taught at MIT. Likewise, the introductory class at UC Berkeley, CS 61A, was until 2013 taught entirely in Scheme, save minor diversions into Logo to demonstrate dynamic scope. Today, like MIT, Berkeley has replaced the syllabus with a more modern version that is primarily taught in Python 3, but the current syllabus is still based on the old curriculum, and parts of the class are still taught in Scheme. The textbook "How to Design Programs" by Matthias Felleisen, currently at Northeastern University, is used by some institutes of higher education for their introductory computer science courses. Both Northeastern University and Worcester Polytechnic Institute use Scheme exclusively for their introductory courses Fundamentals of Computer Science (CS2500) and Introduction to Program Design (CS1101), respectively.
Rose-Hulman uses Scheme in its more advanced Programming Language Concepts course. Indiana University's introductory class, C211, is taught entirely in Scheme. A self-paced version of Berkeley's CS 61A, CS 61AS, continues to use Scheme. The introductory computer science courses at Yale and Grinnell College are also taught in Scheme. Programming Design Paradigms, a mandatory course for computer science graduate students at Northeastern University, also extensively uses Scheme. The former introductory computer science course at the University of Minnesota, Twin Cities, CSCI 1901, also used Scheme as its primary language, followed by a course that introduced students to the Java programming language; however, following the example of MIT, the department replaced 1901 with the Python-based CSCI 1133, while functional programming is covered in detail in the third-semester course CSCI 2041. In the software industry, Tata Consultancy Services, Asia's largest software consultancy firm, uses Scheme in its month-long training program for fresh college graduates. Scheme is or was also used for the following:
https://en.wikipedia.org/wiki?curid=28119
Society for Psychical Research The Society for Psychical Research (SPR) is a nonprofit organisation in the United Kingdom. Its stated purpose is to understand events and abilities commonly described as psychic or paranormal. It describes itself as the "first society to conduct organised scholarly research into human experiences that challenge contemporary scientific models." Since its inception in 1882, however, it has held no corporate opinions: SPR members assert a variety of beliefs with regard to the nature of the phenomena studied. The Society for Psychical Research (SPR) originated from a discussion between journalist Edmund Rogers and the physicist William F. Barrett in autumn 1881. This led to a conference on 5 and 6 January 1882 at the headquarters of the British National Association of Spiritualists, at which the foundation of the Society was proposed. The committee included Barrett, Rogers, Stainton Moses, Charles Massey, Edmund Gurney, Hensleigh Wedgwood and Frederic W. H. Myers. The SPR was formally constituted on 20 February 1882 with philosopher Henry Sidgwick as its first president. The SPR was the first organisation of its kind in the world, its stated purpose being "to approach these varied problems without prejudice or prepossession of any kind, and in the same spirit of exact and unimpassioned enquiry which has enabled science to solve so many problems, once not less obscure nor less hotly debated." Other early members included the author Jane Barlow, the renowned chemist Sir William Crookes, physicist Sir Oliver Lodge, Nobel laureate Charles Richet and psychologist William James. Members of the SPR initiated and organised the International Congresses of Physiological/Experimental Psychology. Areas of study included hypnotism, dissociation, thought-transference, mediumship, Reichenbach phenomena, apparitions and haunted houses, and the physical phenomena associated with séances. The SPR introduced a number of neologisms that have entered the English language, such as "telepathy", which was coined by Frederic Myers. The Society is run by a President and a Council of twenty members, and is open to interested members of the public to join. The organisation is based at 1 Vernon Mews, London, with a library and office open to members, and with large book and archival holdings in Cambridge University Library, Cambridgeshire, England. It publishes the peer-reviewed quarterly "Journal of the Society for Psychical Research" ("JSPR"), the irregular "Proceedings" and the magazine "Paranormal Review". It holds an annual conference, regular lectures and two study days per year, and supports the "LEXSCIEN" on-line library project. Among the first important works was the two-volume publication in 1886, "Phantasms of the Living", concerning telepathy and apparitions, co-authored by Gurney, Myers and Frank Podmore. This text, and subsequent research in this area, was received negatively by the scientific mainstream, though Gurney and Podmore provided a defense of the society's early work in this area in mainstream publications. The SPR "devised methodological innovations such as randomized study designs" and conducted "the first experiments investigating the psychology of eyewitness testimony (Hodgson and Davey, 1887), [and] empirical and conceptual studies illuminating mechanisms of dissociation and hypnotism". In 1894, the "Census of Hallucinations" was published, which sampled 17,000 people.
Out of these, 1,684 persons reported having experienced a hallucination of an apparition. Such efforts were claimed to have undermined "the notion of dissociation and hallucinations as intrinsically pathological phenomena". The SPR investigated many spiritualist mediums such as Eva Carrière and Eusapia Palladino. During the early twentieth century, the SPR studied a series of automatic scripts and trance utterances from a group of automatic writers, known as the cross-correspondences. Famous cases investigated by the Society include Borley Rectory and the Enfield Poltergeist. In 1912 the Society extended to Sigmund Freud a request for a contribution to a special medical edition of its Proceedings. According to Ronald W. Clark (1980), "Freud surmised, no doubt correctly, that the existence of any link between the founding fathers of psychoanalysis and investigation of the paranormal would hamper acceptance of psychoanalysis", as would any perceived involvement with the occult. Nonetheless, Freud did respond, contributing an essay titled "A Note on the Unconscious in Psycho-Analysis" to the Medical Supplement to the Proceedings of the Society for Psychical Research. Much of the early work involved investigating, exposing and in some cases duplicating fake phenomena. In the late 19th century, SPR investigations into séance phenomena led to the exposure of many fraudulent mediums. Richard Hodgson distinguished himself in that area. In 1884, Hodgson was sent by the SPR to India to investigate Helena Blavatsky and concluded that her claims of psychic power were fraudulent; however, the SPR reviewed and retracted these findings much later, in April 1986. In 1886 and 1887 a series of publications by S. J. Davey, Hodgson and Sidgwick in the SPR journal exposed the slate-writing tricks of the medium William Eglinton. Hodgson and his friend S. J. Davey had staged fake séances to educate the public (including SPR members). Davey gave sittings under an assumed name, duplicating the phenomena produced by Eglinton, and then proceeded to point out to the sitters the manner in which they had been deceived. Because of this, some spiritualist members such as Stainton Moses resigned from the SPR. In 1891, Alfred Russel Wallace requested that the Society properly investigate spirit photography. Eleanor Sidgwick responded with a critical paper in the SPR which cast doubt on the subject and discussed the fraudulent methods that spirit photographers such as Édouard Isidore Buguet, Frederic Hudson and William H. Mumler had utilised. Due to the exposure of William Hope and other fraudulent mediums, Arthur Conan Doyle led a mass resignation of eighty-four members of the Society for Psychical Research, as they believed the Society was opposed to spiritualism. Science historian William Hodson Brock has noted that "By the 1900s most avowed spiritualists had left the SPR and gone back to the BNAS (the London Spiritualist Alliance since 1884), having become upset by the sceptical tone of most of the SPR's investigations." The Society has been criticised by both spiritualists and sceptics. Prominent spiritualists at first welcomed the SPR and cooperated fully. But relations soured when spiritualists discovered that the SPR would not accept outside testimony as proof, and the society accused some prominent mediums of fraud. Spiritualist Arthur Conan Doyle resigned from the SPR in 1930 in protest at what he regarded as the SPR's overly restrictive standards of proof.
Psychic investigator and believer in spiritualism Nandor Fodor criticised the SPR for its "strong bias" against physical manifestations of spiritualism. Sceptics have criticised members of the SPR for having motives liable to impair scientific objectivity. According to SPR critics John Grant and Eric Dingwall (a member of the SPR), early SPR members such as Henry Sidgwick, Frederic W. H. Myers, and William Barrett hoped to cling to something spiritual through psychical research. Myers stated that "[T]he Society for Psychical Research was founded, with the establishment of thought-transference—already rising within measurable distance of proof—as its primary aim." Defenders of the SPR have stated in reply that "a 'will to believe' in post-mortem survival, telepathy and other scientifically unpopular notions, does not necessarily exclude a "will to know" and thus the capacity for thorough self-criticism, methodological rigour and relentless suspicion of errors." The sceptic and physicist Victor J. Stenger has also written critically of the Society. Ivor Lloyd Tuckett, an author of an early sceptical work on psychical research, wrote that although the SPR has collected some valuable work, most of its active members have had "no training in psychology fitting them for their task, and have been the victims of pronounced bias, as sometimes they themselves have admitted." Trevor H. Hall, an ex-member of the Society for Psychical Research, criticised SPR members for their "credulous and obsessive wish... to believe." Hall also claimed SPR members "lack knowledge of deceptive methods." Writer Edward Clodd asserted that the SPR members William F. Barrett and Oliver Lodge had insufficient competence for the detection of fraud, and suggested that their spiritualist beliefs were based on magical thinking and primitive superstition. Clodd described the SPR as offering "barbaric spiritual philosophy", and characterised the language of SPR members, with its terms such as "subliminal consciousness" and "telepathic energy", as a disguise for "bastard supernaturalism." In a 2004 psychological study, 174 members of the Society for Psychical Research completed a delusional ideation questionnaire and a deductive reasoning task. As predicted, the study showed that "individuals who reported a strong belief in the paranormal made more errors and displayed more delusional ideation than sceptical individuals". There was also a reasoning bias, which was limited to people who reported a belief in, rather than experience of, paranormal phenomena. The results suggested that reasoning abnormalities may have a causal role in the formation of paranormal belief. Some sceptical members have resigned from the SPR. Eric Dingwall resigned and wrote: "After sixty years' experience and personal acquaintance with most of the leading parapsychologists of that period I do not think I could name half a dozen whom I could call objective students who honestly wished to discover the truth. The great majority wanted to prove something or other: They wanted the phenomena into which they were inquiring to serve some purpose in supporting preconceived theories of their own." The following is a list of presidents: The Society publishes "Proceedings of the Society for Psychical Research", the "Journal of the Society for Psychical Research", and the "Paranormal Review", as well as the online Psi Encyclopedia.
First published in 1882 as a public record of the activities of the SPR, the "Proceedings" are now reserved for longer pieces of work, such as Presidential Addresses, and are only occasionally published. The current editor is Dr David Vernon. The "Journal of the Society for Psychical Research" has been published quarterly since 1884. It was introduced as a private, members-only periodical to supplement the "Proceedings". It now focuses on current laboratory and field research, but also includes theoretical, methodological and historical papers on parapsychology. It also publishes book reviews and correspondence. The current editor is Dr David Vernon. The "Paranormal Review" is the magazine of the Society for Psychical Research. Formerly known as the "Psi Researcher", it has been published since 1996. Previous editors have included Dr Nicola J. Holt. The current editor is Dr Leo Ruickbie. A number of other psychical research organisations use the term 'Society for Psychical Research' in their name.
https://en.wikipedia.org/wiki?curid=28122
Sniper A sniper is a military or paramilitary marksman who engages targets from positions of concealment or at distances exceeding the target's detection capabilities. Snipers generally have specialized training and are equipped with high-precision rifles and high-magnification optics, and often also serve as scouts or observers feeding tactical information back to their units or command headquarters. In addition to long-range and high-grade marksmanship, military snipers are trained in a variety of special operation techniques: detection, stalking, target range estimation methods, camouflage, field craft, infiltration, special reconnaissance and observation, surveillance and target acquisition. The verb "to snipe" originated in the 1770s among soldiers in British India, in reference to shooting snipe, which was considered an extremely challenging game bird for hunters due to its alertness, camouflaging color and erratic flight behavior. Snipe hunters therefore needed to be stealthy in addition to being good trackers and marksmen. The agent noun "sniper" appears by the 1820s. The term "sniper" was first attested militarily in 1824 in the sense of the somewhat older term "sharpshooter", an 18th-century calque of German "Scharfschütze", in use in British newspapers as early as 1801. Different countries use different military doctrines regarding snipers in military units, settings, and tactics. Generally, a sniper's primary function in modern warfare is to provide detailed surveillance from a concealed position and, if necessary, to reduce the enemy's combat ability by neutralizing high-value targets (especially officers and other key personnel), in the process pinning down and demoralizing the enemy. Typical sniper missions include managing intelligence information gathered during reconnaissance, providing target acquisition and impact feedback for air strikes and artillery, assisting employed combat forces with accurate fire support and counter-sniper tactics, killing enemy commanders, selecting targets of opportunity, and even destruction of military equipment, which tends to require use of anti-materiel rifles in the larger calibers such as the .50 BMG, like the Barrett M82, McMillan Tac-50, and Denel NTW-20. Soviet- and Russian-derived military doctrines include squad-level snipers. US and UK forces demonstrated the usefulness of snipers in the recent Iraq campaign, in a fire support role covering the movement of infantry, especially in urban areas. Military snipers from the US, UK and other countries that adopt their military doctrine are typically deployed in two-man sniper teams consisting of a shooter and a spotter. A common practice is for the shooter and the spotter to take turns in order to avoid eye fatigue. In most recent combat operations occurring in large, densely populated towns, such as Fallujah, Iraq, two teams would be deployed together to increase their security and effectiveness in an urban environment. A sniper team would be armed with a long-range weapon and a rapid-firing shorter-ranged weapon in case of close-quarters combat. The German doctrine of largely independent snipers and emphasis on concealment, developed during the Second World War, has been most influential on modern sniper tactics and is currently used throughout Western militaries (examples are specialized camouflage clothing, concealment in terrain and emphasis on coup d'œil). Sniper rifles are classified as crew-served, as the term is used in the United States military.
A sniper team (or sniper cell) consists of a combination of one or more "shooters" with force protection elements and support personnel, such as a "spotter" or a "flanker". Within the Table of Organization and Equipment for both the United States Army and the U.S. Marine Corps, the operator of the weapon has an assistant trained to fulfill multiple roles, in addition to being sniper-qualified in the operation of the weapon. The shooter fires the shot while the spotter assists in observation of targets and atmospheric conditions and handles ancillary tasks such as immediate security of their location and communication with other parties, including directing artillery fire and close air support. A flanker's task is to observe areas not immediately visible to the sniper or spotter and assist with the team's perimeter and rear security; flankers are therefore usually armed with an assault rifle or battle rifle. Both spotter and flanker carry additional ammunition and associated equipment. The spotter detects, observes, and assigns targets and watches for the results of the shot. Using a spotting scope or a rangefinder, the spotter will also read the wind by using physical indicators and the mirage caused by heat on the ground. Also, in conjunction with the shooter, the spotter will make calculations for distance, angle shooting (slant range), mil-dot-related calculations, correction for atmospheric conditions and leads for moving targets. It is not unusual for the spotter to be equipped with a notepad and a laptop computer specifically for performing these calculations. Law enforcement snipers, commonly called police snipers, and military snipers differ in many ways, including their areas of operation and tactics. A police sharpshooter is part of a police operation and usually takes part in relatively short missions. Police forces typically deploy such sharpshooters in hostage scenarios. This differs from a military sniper, who operates as part of a larger army, engaged in warfare. Sometimes, as part of a SWAT team, police snipers are deployed alongside negotiators and an assault team trained for close-quarters combat. As policemen, they are trained to shoot only as a last resort, when there is a direct threat to life; the police sharpshooter has a well-known rule: "Be prepared to take a life to save a life." Police snipers typically operate at much shorter ranges than military snipers, generally under and sometimes even less than . Both types of snipers make difficult shots under pressure, and often perform one-shot kills. Police units that are unequipped for tactical operations may rely on a specialized SWAT team, which may have a dedicated sniper. Some police sniper operations begin with military assistance. Police snipers placed in vantage points, such as high buildings, can provide security for events. In one high-profile incident, commonly referred to as "The Shot Seen Around the World" after it went viral online, Mike Plumb, a SWAT sniper in Columbus, Ohio, prevented a suicide by shooting a revolver out of the individual's hand, leaving him unharmed. The need for specialized training for police sharpshooters was made apparent in 1972 during the Munich massacre, when the German police could not deploy specialized personnel or equipment during the standoff at the airport in the closing phase of the crisis, and consequently all of the Israeli hostages were killed.
While the German army did have snipers in 1972, the use of army snipers in the scenario was impossible due to the German constitution's explicit prohibition of the use of the military in domestic matters. This lack of trained snipers who could be used in civilian roles was later addressed with the founding of the specialized police counter-terrorist unit GSG 9. The longest confirmed sniper kill in combat was achieved by an undisclosed member of the Canadian JTF2 special forces in June 2017 at a distance of . The previous record holder was Craig Harrison, a Corporal of Horse (CoH) in the Blues and Royals RHG/D of the British Army. In November 2009, Harrison struck two Taliban machine gunners consecutively south of Musa Qala in Helmand Province in Afghanistan at a range of or 1.54 miles using an L115A3 Long Range Rifle. The QTU Lapua external ballistics software, using continuous doppler drag coefficient (Cd) data provided by Lapua, predicts that such shots traveling would likely have struck their targets after nearly 6.0 seconds of flight time, having lost 93% of their kinetic energy, retaining of their original velocity, and having dropped or 2.8° from the original bore line. Due to the extreme distances and travel time involved, even a light cross-breeze of would have diverted such shots off target, which would have required compensation. The calculation assumes a "flat-fire scenario" (a situation where the shooting and target positions are at equal elevation), utilizing British military custom high-pressure .338 Lapua Magnum cartridges, loaded with 16.2 g (250 gr) Lapua LockBase B408 bullets, fired at 936 m/s (3,071 ft/s) muzzle velocity under the following on-site (average) atmospheric conditions: barometric pressure at sea-level equivalent or on-site, humidity 25.9%, and temperature in the region for November 2009, resulting in an air density ρ = 1.0854 kg/m³ at the elevation of Musa Qala. Harrison mentions in reports that the environmental conditions were perfect for long-range shooting: "... no wind, mild weather, clear visibility." In a BBC interview, Harrison reported it took about nine shots for him and his spotter to initially range the target successfully. Before the development of rifling, firearms were smoothbore and inaccurate over long distances. Barrel rifling was invented at the end of the fifteenth century, but was only employed in large cannons. Over time, rifling, along with other gunnery advances, has increased the performance of modern firearms. Early forms of sniping or marksmanship were used during the American Revolutionary War. For instance, in 1777 at the Battle of Saratoga the Colonists hid in the trees and used early model rifles to shoot British officers. Most notably, Timothy Murphy shot and killed General Simon Fraser of Balnain on 7 October 1777 at a distance of about 400 yards. During the Battle of Brandywine, Capt. Patrick Ferguson had a tall, distinguished American officer in his rifle's iron sights. Ferguson did not take the shot, as the officer had his back to Ferguson; only later did Ferguson learn that George Washington had been on the battlefield that day. A special unit of marksmen was established during the Napoleonic Wars in the British Army. While most troops at that time used inaccurate smoothbore muskets, the British "Green Jackets" (named for their distinctive green uniforms) used the famous Baker rifle.
Through the combination of a leather wad and tight grooves on the inside of the barrel (rifling), this weapon was far more accurate, though slower to load. These riflemen were the elite of the British Army, and served at the forefront of any engagement, most often in skirmish formation, scouting out and delaying the enemy. Another term, "sharp shooter", was in use in British newspapers as early as 1801. In the "Edinburgh Advertiser" of 23 June 1801 can be found the following quote, in a piece about the North British Militia: "This Regiment has several Field Pieces, and two companies of Sharp Shooters, which are very necessary in the modern Stile of War". The term appears even earlier, around 1781, in Continental Europe, translated from the German "Scharfschütze". The Whitworth rifle was arguably the first long-range sniper rifle in the world. A muzzleloader designed by Sir Joseph Whitworth, a prominent British engineer, it used polygonal rifling instead of conventional grooves, which meant that the projectile did not have to bite into the rifling as it would with a conventionally grooved barrel. The Whitworth rifle was far more accurate than the Pattern 1853 Enfield, which had shown some weaknesses during the recent Crimean War. At trials in 1857 which tested the accuracy and range of both weapons, Whitworth's design outperformed the Enfield at a rate of about three to one. The Whitworth rifle was capable of hitting the target at a range of 2,000 yards, whereas the Enfield could only manage it at 1,400 yards. During the Crimean War, the first optical sights were designed to fit onto rifles. Much of this pioneering work was the brainchild of Colonel D. Davidson, using optical sights produced by Chance Brothers of Birmingham. This allowed a marksman to observe and target objects more accurately at a greater distance than ever before. The telescopic sight, or scope, was originally fixed and could not be adjusted, which therefore limited its range. Despite its success at the trials, the rifle was not adopted by the British Army. However, the Whitworth Rifle Company was able to sell the weapon to the French army, and also to the Confederacy during the American Civil War, where both the Union and Confederate armies employed sharpshooters. The most notable incident was during the Battle of Spotsylvania Court House, where on 9 May 1864 Union General John Sedgwick was killed by a Confederate Whitworth sharpshooter at a range of about after saying the enemy "couldn't hit an elephant at this distance". During the Boer War the latest breech-loading rifled guns with magazines and smokeless powder were used by both sides. The British were equipped with the Lee–Metford rifle, while the Boers had received the latest Mauser rifles from Germany. In the open terrain of South Africa, marksmen were a crucial component of the outcome of battle. The first British sniper unit began life as the Lovat Scouts, a Scottish Highland regiment formed in 1899 that earned high praise during the Second Boer War (1899–1902). The unit was formed by Lord Lovat and reported to an American, Major Frederick Russell Burnham, the British Army Chief of Scouts under Lord Roberts. Burnham fittingly described these scouts as "half wolf and half jackrabbit". Just like their Boer scout opponents, these scouts were well practised in the arts of marksmanship, field craft, map reading, observation, and military tactics. They were skilled woodsmen and practitioners of discretion: "He who shoots and runs away, lives to shoot another day."
They were also the first known military unit to wear a ghillie suit. Hesketh Hesketh-Prichard said of them that "keener men never lived", and that "Burnham was the greatest scout of our time." Burnham distinguished himself in wars in South Africa, Rhodesia, and in Arizona fighting the Apaches, and his definitive work, "Scouting on Two Continents", provides a dramatic and enlightening picture of what a sniper was at the time and how he operated. After the war, this regiment went on to formally become the first official sniper unit, then better known as "sharpshooters". During World War I, snipers appeared as deadly sharpshooters in the trenches. At the start of the war, only Imperial Germany had troops that were issued scoped sniper rifles. Although sharpshooters existed on all sides, the Germans specially equipped some of their soldiers with scoped rifles that could pick off enemy soldiers showing their heads out of their trench. At first the French and British believed such hits to be coincidental, until the German scoped rifles were discovered. During World War I, the German army received a reputation for the deadliness and efficiency of its snipers, partly because of the high-quality lenses that German industry could manufacture. During the First World War, the static nature of trench warfare and a need for protection from snipers created a requirement for loopholes, both for discharging firearms and for observation. Often a steel plate was used with a "key hole", which had a rotating piece to cover the loophole when not in use. Soon the British army began to train its own snipers in specialized sniper schools. Major Hesketh Hesketh-Prichard was given formal permission to begin sniper training in 1915, and founded the First Army School of Sniping, Observation, and Scouting at Linghem in France in 1916. Starting with a first class of only six, in time he was able to lecture to large numbers of soldiers from different Allied nations, proudly proclaiming in a letter that his school was turning out snipers at three times the rate of any other such school in the world. He also devised a metal-armoured double loophole that would protect the sniper observer from enemy fire. The front loophole was fixed, but the rear was housed in a metal shutter sliding in grooves. Only when the two loopholes were lined up (a one-to-twenty chance) could an enemy shoot between them. Another innovation was the use of a dummy head to find the location of an enemy sniper. The papier-mâché figures were painted to resemble soldiers to draw sniper fire. Some were equipped with rubber surgical tubing so the dummy could "smoke" a cigarette and thus appear realistic. Holes punched in the dummy by enemy sniper bullets could then be used for triangulation to determine the position of the enemy sniper, who could then be attacked with artillery fire. He developed many of the modern techniques in sniping, including the use of spotting scopes and working in pairs, and using Kim's Game to train observational skills. In 1920, he wrote an account of his wartime activities in his book "Sniping in France", to which modern authors still refer regarding the subject.
The main sniper rifles used during the First World War were the German Mauser Gewehr 98; the British Pattern 1914 Enfield and Lee–Enfield SMLE Mk III; the Canadian Ross rifle; the American M1903 Springfield; the Italian M1891 Carcano; and the Russian M1891 Mosin–Nagant. During the interbellum, most nations dropped their specialized sniper units, notably the Germans. The effectiveness and dangers of snipers once again came to the fore during the Spanish Civil War. The only nation that had specially trained sniper units during the 1930s was the Soviet Union. Soviet snipers were trained in their skills as marksmen, in using the terrain to hide themselves from the enemy, and in the ability to work alongside regular forces. This made Soviet sniper training focus more on "normal" combat situations than that of other nations. Snipers reappeared as important factors on the battlefield from the first campaign of World War II. During Germany's 1940 campaigns, lone, well-hidden French and British snipers were able to halt the German advance for considerable amounts of time. For example, during the pursuit to Dunkirk, British snipers were able to significantly delay the German infantry's advance. This prompted the British once again to increase training of specialized sniper units. Apart from marksmanship, British snipers were trained to blend in with the environment, often by using special camouflage clothing for concealment. However, because the British Army offered sniper training exclusively to officers and non-commissioned officers, the resulting small number of trained snipers in combat units considerably reduced their overall effectiveness. During the Winter War, Finnish snipers took a heavy toll of the invading Soviet army. Simo Häyhä is credited with 505 confirmed kills, most with the Finnish version of the iron-sighted bolt-action Mosin–Nagant. One of the best-known battles involving snipers, and the battle that made the Germans reinstate their specialized sniper training, was the Battle of Stalingrad. Their defensive position inside a city filled with rubble meant that Soviet snipers were able to inflict significant casualties on the Wehrmacht troops. Because of the nature of fighting in city rubble, snipers were very hard to spot and seriously dented the morale of the German attackers. The best known of these snipers was probably Vasily Zaytsev, featured in the novel "War of the Rats" and the subsequent film "Enemy at the Gates". German "Scharfschützen" had been prepared before the war, equipped with Karabiner 98 and later Gewehr 43 rifles, but there were often not enough of these weapons available, so some were armed with captured scoped Mosin–Nagant 1891/30, SVT or Czech Mauser rifles. The Wehrmacht re-established its sniper training in 1942, drastically increasing the number of snipers per unit with the creation of an additional 31 sniper training companies by 1944. German snipers were at the time the only snipers in the world issued with purpose-manufactured sniping ammunition, known as the 'effect-firing' sS round. The 'effect-firing' sS round featured an extra-carefully measured propellant charge and seated a heavy 12.8 gram (198 gr) full-metal-jacketed boat-tail projectile of match-grade build quality, lacking usual features such as a seating ring, to further improve the already high ballistic coefficient of .584 (G1).
For aiming optics, German snipers used the Zeiss Zielvier 4x (ZF39) telescopic sight, which had bullet-drop compensation in 50 m increments for ranges from 100 m up to 800 m, or in some variations from 100 m up to 1000 m or 1200 m. There were also the ZF42, Zielfernrohr 43 (ZF 4), Zeiss Zielsechs 6x, Zeiss Zielacht 8x and other telescopic sights by various manufacturers, such as the Ajack 4x, Hensoldt Dialytan 4x and Kahles Heliavier 4x, with similar features employed on German sniper rifles. Several different mountings produced by various manufacturers were used for mounting aiming optics to the rifles. In February 1945 the Zielgerät 1229 active infrared aiming device was issued for night sniping with the StG 44 assault rifle. A total of 428,335 individuals received Red Army sniper training, including Soviet and non-Soviet partisans, with 9,534 receiving the sniping 'higher qualification'. During World War II, two six-month women-only training courses trained nearly 55,000 snipers, of whom more than two thousand later served in the army. On average there was at least one sniper in an infantry platoon and one in every reconnaissance platoon, including in tank and even artillery units. Some used the PTRD anti-tank rifle with an adapted scope as an early example of an anti-materiel rifle. In the United States Armed Forces, sniper training was only very elementary and was mainly concerned with being able to hit targets over long distances. Snipers were required to be able to hit a body over 400 meters away, and a head over 200 meters away. There was almost no instruction in blending into the environment. Sniper training varied from place to place, resulting in wide variation in the quality of snipers. The main reason the US did not extend sniper training beyond long-range shooting was the limited deployment of US soldiers until the Normandy Invasion. During the campaigns in North Africa and Italy, most fighting occurred in arid and mountainous regions where the potential for concealment was limited, in contrast to Western and Central Europe. The U.S. Army's lack of familiarity with sniping tactics proved disastrous in Normandy and the campaign in Western Europe, where they encountered well-trained German snipers. In Normandy, German snipers remained hidden in the dense vegetation and were able to encircle American units, firing at them from all sides. The American and British forces were surprised by how near the German snipers could approach in safety and attack them, as well as by their ability to hit targets at up to 1,000 m. A notable mistake made by inexperienced American soldiers was to lie down and wait when targeted by German snipers, allowing the snipers to pick them off one after another. German snipers often infiltrated Allied lines, and sometimes, when the front lines moved, they continued to fight from their sniping positions, refusing to surrender until their rations and munitions were exhausted. Those tactics were also a consequence of changes in German enlistment. After several years of war and heavy losses on the Eastern Front, the German army was forced to rely more heavily on enlisting teenage soldiers. Due to their lack of training in more complex group tactics, and thanks to rifle training provided by the Hitlerjugend, those soldiers were often used as autonomous left-behind snipers.
While an experienced sniper would take a few lethal shots and retreat to a safer position, those young boys, owing both to a disregard for their own safety and to a lack of tactical experience, would frequently remain in a concealed position and fight until they ran out of ammunition or were killed or wounded. While this tactic generally ended in the demise of the sniper, earning those soldiers the nickname "Suicide Boys", the behavior proved quite disruptive to the Allied forces' progress. After World War II, many elements of German sniper training and doctrine were copied by other countries. In the Pacific War, the Empire of Japan trained snipers. In the jungles of Asia and the Pacific Islands, snipers posed a serious threat to U.S., British, and Commonwealth troops. Japanese snipers were specially trained to use the environment to conceal themselves. Japanese snipers used foliage on their uniforms and dug well-concealed hide-outs that were often connected by small trenches. There was no need for long-range accuracy, because most combat in the jungle took place within a few hundred meters. Japanese snipers were known for their patience and ability to remain hidden for long periods. They almost never left their carefully camouflaged hiding spots, which meant that once a sniper had fired a few shots, his location could be determined. The Allies used their own snipers in the Pacific, notably the U.S. Marines, who used M1903 Springfield rifles. Common sniper rifles used during the Second World War include: the Soviet M1891/30 Mosin–Nagant and, to a lesser extent, the SVT-40; the German Mauser Karabiner 98k and Gewehr 43; the British Lee–Enfield No. 4 and Pattern 1914 Enfield; the Japanese Arisaka 97; and the American M1903A4 Springfield and M1C Garand. The Italians trained few snipers and supplied them with a scoped Carcano Model 1891. Military sniper training aims to teach a high degree of proficiency in camouflage and concealment, stalking, observation and map reading, as well as precision marksmanship under various operational conditions. Trainees typically shoot thousands of rounds over a number of weeks while learning these core skills. Snipers are trained to squeeze the trigger straight back with the ball of their finger, to avoid jerking the gun sideways. The most accurate position is prone, with a sandbag supporting the stock, and the stock's cheek-piece against the cheek. In the field, a bipod can be used instead. Sometimes a sling is wrapped around the weak arm (or both) to reduce stock movement. Some doctrines train a sniper to breathe deeply before shooting, then hold their lungs empty while they line up and take their shot. Some go further, teaching their snipers to shoot between heartbeats to minimize barrel motion. The key to sniping is accuracy, which applies to both the weapon and the shooter. The weapon should be able to consistently place shots within tight tolerances. The sniper in turn must utilize the weapon to accurately place shots under varying conditions. A sniper must be able to accurately estimate the various factors that influence a bullet's trajectory and point of impact, such as range to the target, wind direction, wind velocity, altitude and elevation of the sniper and the target, and ambient temperature. Mistakes in estimation compound over distance and can decrease lethality or cause a shot to miss completely. Snipers zero their weapons at a target range or in the field.
This is the process of adjusting the scope so that the bullet's point of impact coincides with the point of aim (the centre of the scope's cross-hairs) at a specific distance. A rifle and scope should retain their zero as long as possible under all conditions, to reduce the need to re-zero during missions.
A sandbag can serve as a useful platform for shooting a sniper rifle, although any soft surface, such as a rucksack, will steady the rifle and contribute to consistency. Bipods help in particular when firing from a prone position, and enable the firing position to be sustained for an extended period of time; many police and military sniper rifles come equipped with an adjustable bipod. Makeshift rests known as shooting sticks can be constructed from items such as tree branches or ski poles, and some military snipers use three-legged shooting sticks.
Range and accuracy vary depending on the cartridge and the specific ammunition type used; each common battlefield cartridge has its own typical effective range. Servicemen volunteer for the rigorous sniper training and are accepted on the basis of their aptitude, physical ability, marksmanship, patience and mental stability. Military snipers may be further trained as forward air controllers (FACs) to direct air strikes, or as forward observers (FOs) to direct artillery or mortar fire.
Since 2011, the Russian armed forces have run newly developed sniper courses in military district training centres. In place of the Soviet practice of relying mainly on squad sharpshooters, who were often designated during initial training (and of whom only a few became snipers "per se"), the "new" Army snipers are to be trained intensively for three months (for conscripts) or longer (for contract soldiers). The training program includes theory and practice of countersniper engagements, artillery spotting and coordination of air support. The first instructors are graduates of the Solnechnogorsk sniper training centre. The method of sniper deployment, according to the Ministry of Defence, is likely to be one three-platoon company at the brigade level, with one of the platoons acting independently and the other two supporting the battalions as needed.
The range to the target is measured or estimated as precisely as conditions permit, and correct range estimation becomes absolutely critical at long ranges, because a bullet travels in a curved trajectory and the sniper must compensate by aiming higher at longer distances. If the exact distance is not known, the sniper may compensate incorrectly and the bullet path may be too high or low. As an example, for a typical military sniping cartridge such as the 7.62×51mm NATO (.308 Winchester) M118 Special Ball round, this difference (or "drop") between 700 m and 800 m is about 200 mm: if the sniper estimated the distance as 700 meters when the target was in fact 800 meters away, the bullet would strike about 200 millimeters lower than expected by the time it reached the target. Laser rangefinders may be used, and range estimation is often the job of both members of a sniper team. One useful method of range finding without a laser rangefinder is comparing the height of the target (or nearby objects) to their size on the mil dot scope, or taking a known distance and using some sort of measure (utility poles, fence posts) to determine the additional distance. The average human head is about 150 mm in width, average human shoulders are about 500 mm apart, and the average distance from a person's pelvis to the top of their head is about one meter.
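To make the compounding effect of range misestimation concrete, here is a minimal Python sketch. The drop table is hypothetical (chosen only so that the 700 m to 800 m difference matches the roughly 200 mm figure above); real values come from measured firing tables for a specific rifle and load.
    # Illustrative only: hypothetical table, range (m) -> drop below line of sight (mm).
    DROP_TABLE_MM = {600: 750, 700: 1000, 800: 1200}

    def drop_at(range_m):
        """Linearly interpolate bullet drop between table entries."""
        ranges = sorted(DROP_TABLE_MM)
        range_m = max(ranges[0], min(range_m, ranges[-1]))  # clamp to table limits
        for lo, hi in zip(ranges, ranges[1:]):
            if lo <= range_m <= hi:
                frac = (range_m - lo) / (hi - lo)
                return DROP_TABLE_MM[lo] + frac * (DROP_TABLE_MM[hi] - DROP_TABLE_MM[lo])

    estimated, actual = 700, 800
    error_mm = drop_at(actual) - drop_at(estimated)
    print(f"Holding for {estimated} m on a {actual} m target strikes ~{error_mm:.0f} mm low.")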
To determine the range to a target without a laser rangefinder, the sniper may use the mil dot reticle on the scope. Mil dots are used like a slide rule to measure the apparent height of a target, and if the target's true height is known, the range can be computed: the height of the target in yards, multiplied by 1,000 and divided by its apparent height in mils, gives the range in yards. This is only a general rule, however, as both scope magnification (7×, 40×) and mil dot spacing vary. The USMC standard is that 1 mil (that is, 1 milliradian) equals 3.438 MOA (minutes of arc, or equivalently minutes of angle), while the US Army standard is 3.6 MOA, chosen so as to give a diameter of 1 yard at a distance of 1,000 yards (or equivalently, a diameter of 1 meter at a range of 1 kilometer). Many commercial manufacturers split the difference and use 3.5, since it is easier to work with. It is important to note that the angular mil ("mil") is only an approximation of a milliradian, and different organizations use different approximations.
At longer ranges, bullet drop plays a significant role in targeting. The effect can be estimated from a chart, which may be memorized or taped to the rifle, although some scopes come with Bullet Drop Compensator (BDC) systems that only require the range to be dialed in. These are tuned to both a specific class of rifle and specific ammunition, since every bullet type and load has different ballistics: for instance, one published table for the .308 Federal 175 grain (11.3 g) BTHP match load calls for a 16.2 MOA elevation adjustment to reach a given long-range target, while the corresponding 168 grain (10.9 g) load needs 17.1 MOA.
Shooting uphill or downhill is confusing to many shooters because gravity does not act perpendicular to the direction the bullet is traveling, so gravity must be divided into its component vectors. Only the fraction of gravity equal to the cosine of the angle of fire with respect to the horizon affects the rate of fall of the bullet, with the remainder adding or subtracting a negligible amount of velocity along the trajectory. To find the correct zero, the sniper multiplies the actual distance to the target by this fraction and aims as if the target were that distance away. For example, a sniper who observes a target 500 meters away at a 45-degree angle downhill would multiply the range by the cosine of 45 degrees, which is 0.707, giving about 353 meters. This number is the horizontal distance to the target. All other values, such as windage, time to target, impact velocity, and energy, are calculated based on the actual range of 500 meters. Recently, a small device known as a cosine indicator has been developed: clamped to the tubular body of the telescopic sight, it gives a numerical readout as the rifle is aimed up or down at the target, which is translated into a figure used to compute the horizontal range.
Windage also plays a significant role, with the effect increasing with wind speed and the distance of the shot. The slant of visible convections near the ground can be used to estimate crosswinds and correct the point of aim. All adjustments for range, wind, and elevation can be performed by aiming off the target, called "holding over" or Kentucky windage. Alternatively, the scope can be adjusted so that the point of aim is changed to compensate for these factors, sometimes referred to as "dialing in"; the shooter must then remember to return the scope to its zeroed position.
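The three calculations described above (mil-dot ranging, the differing mil-to-MOA conventions, and the slope-shooting cosine rule) are simple enough to express directly. A short sketch, with function names of my own invention:
    import math

    def range_from_mils(target_height_yd, apparent_mils):
        """Mil-dot ranging: height (yd) x 1000 / apparent size (mils) = range (yd)."""
        return target_height_yd * 1000 / apparent_mils

    def mil_to_moa(mils, convention="army"):
        """MOA per mil under the conventions quoted in the text."""
        per_mil = {"usmc": 3.438, "army": 3.6, "commercial": 3.5}
        return mils * per_mil[convention]

    def horizontal_equivalent_m(slant_range_m, angle_deg):
        """Slope shooting: aim as if the target were at slant range x cos(angle)."""
        return slant_range_m * math.cos(math.radians(angle_deg))

    print(range_from_mils(2, 4))                         # a 2 yd target spanning 4 mils -> 500.0 yd
    print(mil_to_moa(1, "usmc"), mil_to_moa(1, "army"))  # 3.438 vs 3.6
    print(horizontal_equivalent_m(500, 45))              # ~353.6 m, the example above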
Adjusting the scope allows for more accurate shots, because the cross-hairs can be aligned with the target more precisely, but the sniper must know exactly what effect the adjustments will have on the point of impact at each target range. For moving targets, the point of aim is placed ahead of the target in the direction of movement. Known as "leading" the target, the amount of "lead" depends on the speed and angle of the target's movement as well as the distance to the target (a rough numerical sketch is given at the end of this section). For this technique, holding over is the preferred method, and anticipating the behavior of the target is necessary to place the shot accurately.
The term "hide site" refers to a covered and concealed position from which a sniper and his team can conduct surveillance or fire at targets. A good hide conceals and camouflages the sniper effectively, provides cover from enemy fire and allows a wide view of the surrounding area. The main purpose of ghillie suits and hide sites is to break up the outline of a person with a rifle. Many snipers use ghillie suits to hide and stay hidden, varying them according to the terrain into which they wish to blend; in dry grassland, for example, a sniper will typically wear a ghillie suit covered in dead grass.
Shot placement, that is, where on the body the sniper aims, varies with the type of sniper. Military snipers, who generally do not shoot at targets at less than about 300 m, usually attempt body shots, aiming at the chest; these shots depend on tissue damage, organ trauma, and blood loss to kill the target, and are used because the chest is a larger target. Police snipers, who generally shoot at much shorter distances, may attempt a more precise shot at particular parts of the body or at particular devices: in one incident in Marseille in 2007, a GIPN sniper shot the pistol held by a police officer who was threatening to commit suicide, destroying the weapon and preventing the officer from killing himself. In a high-risk or hostage-taking situation where a suspect is imminently threatening to kill a hostage, police snipers may take head shots to ensure an instant kill, aiming for the medulla oblongata to sever the spine from the brain. While this is believed to prevent the target from reflexively firing his weapon, there is evidence that any brain hit is sufficient.
Snipers are trained in the detection, identification, and location of a targeted soldier in sufficient detail to permit the effective employment of lethal and non-lethal means. Since most kills in modern warfare are by crew-served weapons, reconnaissance is one of the most effective uses of snipers. They use their aerobic conditioning, infiltration skills, excellent long-distance observation equipment (optical scopes) and tactics to approach and observe the enemy. In this role, their rules of engagement typically let them shoot at high-value targets of opportunity, such as enemy officers. The targets may be personnel or high-value materiel (military equipment and weapons), but most often they are the most important enemy personnel, such as officers or specialists (e.g. communications operators), so as to cause maximum disruption to enemy operations. Other personnel snipers might target include those who pose an immediate threat to the sniper, such as dog handlers, who are often employed in searches for snipers.
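Returning to the technique of "leading" a moving target described at the start of this section: the required lead is roughly the target's crossing speed multiplied by the bullet's time of flight. The sketch below uses a crude range-over-average-velocity estimate of time of flight and entirely hypothetical numbers; in practice the figure comes from ballistic tables.
    import math

    def lead_m(target_speed_mps, crossing_angle_deg, range_m, avg_bullet_speed_mps):
        """Approximate lead: crossing-speed component x (crude) time of flight."""
        time_of_flight = range_m / avg_bullet_speed_mps
        crossing_speed = target_speed_mps * math.sin(math.radians(crossing_angle_deg))
        return crossing_speed * time_of_flight

    # Hypothetical: a target walking at 1.5 m/s, fully crossing (90 degrees),
    # 400 m away, with an average bullet speed of 700 m/s over that distance.
    print(f"{lead_m(1.5, 90, 400, 700):.2f} m of lead")  # about 0.86 m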
A sniper identifies officers by their appearance and behavior: symbols of rank, talking to radio operators, sitting as a passenger in a car or in a car with a large radio antenna, having military servants, carrying binoculars or map cases, or talking and moving position more frequently. If possible, snipers shoot in descending order of rank; if rank cannot be determined, they shoot to disrupt communications.
Some rifles, such as the Denel NTW-20 and Vidhwansak, are designed for a purely anti-materiel (AM) role, e.g. shooting the turbine disks of parked aircraft, missile guidance packages, expensive optics, and the bearings, tubes or wave guides of radar sets. A sniper equipped with the correct rifle can target radar dishes, water containers, vehicle engines, and any number of other targets. Other rifles, such as the .50 caliber rifles produced by Barrett and McMillan, are not designed exclusively as AM rifles but are often employed in that way, providing the range and power needed for AM applications in a lightweight package compared to most traditional AM rifles. Other calibers, such as the .408 Cheyenne Tactical and the .338 Lapua Magnum, are designed to be capable of limited AM application but are ideally suited as long-range anti-personnel rounds.
Often in situations with multiple targets, snipers use relocation: after firing a few shots from a certain position, they move unseen to another location before the enemy can determine where they are and mount a counter-attack. Snipers frequently use this tactic to their advantage, creating an atmosphere of chaos and confusion; in other, rarer situations, relocation is used to eliminate the factor of wind. As sniper rifles are often extremely powerful and consequently loud, it is common for snipers to use a technique known as sound masking. When employed by a highly skilled marksman, this tactic can serve as a substitute for a noise suppressor: very loud sounds in the environment, such as artillery shells bursting in the air or claps of thunder, can mask the sound of the shot. This technique is frequently used in clandestine operations, infiltration tactics, and guerrilla warfare.
Due to the surprise nature of sniper fire, the high lethality of aimed shots, and the frustration at being unable to locate and counterattack snipers, sniper tactics have a significant negative effect on morale. Extensive use of sniper tactics can induce constant stress and fear in opposing forces, making them afraid to move about or leave cover. In many ways, the psychological impact imposed by snipers is quite similar to that of landmines, booby-traps, and IEDs: a constant threat, high "per event" lethality, and the inability to strike back.
Historically, captured snipers have often been summarily executed. This happened during both World Wars; in the second Biscari massacre, for example, 36 suspected snipers were lined up and shot on 14 July 1943. As a result, a sniper in imminent danger of capture may discard any items (sniper rifle, laser rangefinder, etc.) which might indicate his status as a sniper.
The risk of captured snipers being summarily executed is explicitly referred to in Chapter 6 of the US Army doctrine document FM 3-060.11, entitled "SNIPER AND COUNTERSNIPER TACTICS, TECHNIQUES, AND PROCEDURES". The negative reputation and perception of snipers can be traced back to the American Revolution, when American "Marksmen" intentionally targeted British officers, an act considered uncivilized by the British Army at the time (this reputation was cemented during the Battle of Saratoga, when Benedict Arnold allegedly ordered his marksmen to target British General Simon Fraser, an act that won the battle and French support). The British side used specially selected sharpshooters as well, often German mercenaries.
To demoralize enemy troops, snipers can exploit predictable patterns. During the 26th of July Movement in the Cuban Revolution, the revolutionaries led by Fidel Castro always killed the foremost man in a group of President Batista's soldiers; realizing this, none of Batista's men would walk first, as it was suicidal. This effectively decreased the army's willingness to search for rebel bases in the mountains. An alternative approach is to kill the second man in the row, leading to the psychological effect of nobody wanting to follow the "leader".
The occurrence of sniper warfare has led to the evolution of many counter-sniper tactics in modern military strategies. These aim to reduce the damage caused by a sniper to an army, which can often be harmful to both combat capabilities and morale. The risk of damage to a chain of command can be reduced by removing or concealing features that would otherwise indicate an officer's rank: modern armies tend to avoid saluting officers in the field and to eliminate rank insignia on battle dress uniforms (BDUs), and officers can seek cover before carrying out actions that mark them as good candidates for elimination, such as reading maps or using radios. Friendly snipers can be used to hunt the enemy sniper. Besides direct observation, defending forces can use other techniques, including calculating the trajectory of a bullet by triangulation. Traditionally, triangulation of a sniper's position was done manually, though radar-based technology has recently become available. Once the sniper is located, the defenders can attempt to approach from cover and overwhelm him. The United States military is funding a project known as RedOwl (Robot Enhanced Detection Outpost With Lasers), which uses laser and acoustic sensors to determine the exact direction from which a sniper round has been fired.
The more rounds a sniper fires, the greater the chance the target has of locating him. Thus, attempts are often made to draw fire, sometimes by offering a helmet slightly out of concealment, a tactic known as "Kylmä-Kalle" (Cold Charlie) that the Finns employed successfully in the Winter War. They used a shop mannequin or other doll dressed as a tempting target, such as an officer, presented as if it were a real man sloppily covering himself. Soviet snipers were usually unable to resist the temptation of an apparently easy kill. Once the direction the bullet had come from was determined, a large-caliber gun, such as a Lahti L-39 "Norsupyssy" ("Elephant rifle") anti-tank rifle, was fired at the sniper to kill him.
Other tactics include directing artillery or mortar fire onto suspected sniper positions, using smoke screens, and placing tripwire-operated munitions, mines, or other booby-traps near suspected positions. Even dummy trip-wires can be placed to hamper sniper movement. If anti-personnel mines are unavailable, booby-traps can be improvised by connecting trip-wires to hand grenades, smoke grenades or flares; though these may not kill the sniper, they will reveal his location. Booby-trap devices can be placed near likely sniper hides or along the probable routes to and from positions, a task assisted by knowledge of sniper field-craft. The use of canine units proved very successful, especially during the Vietnam War.
The use of sniping (in the sense of shooting at relatively long range from a concealed position) to murder came to public attention in a number of sensational U.S. criminal cases, including the Austin sniper incident of 1966 (Charles Whitman), the John F. Kennedy assassination (Lee Harvey Oswald), and the Beltway sniper attacks of late 2002 (Lee Boyd Malvo). However, these incidents usually do not involve the range or skill of military snipers; in all three cases the perpetrators had U.S. military training, but in other specialties. News reports often (inaccurately) use the term sniper to describe anyone shooting at another person with a rifle.
Sniping has been used in asymmetric warfare situations, for example in the Northern Ireland Troubles, where in 1972, the bloodiest year of the conflict, the majority of the soldiers killed were shot by concealed IRA riflemen. There were some instances in the early 1990s of British soldiers and RUC personnel being shot with .50 caliber Barrett rifles by sniper teams collectively known as the South Armagh sniper. The sniper is particularly suited to combat environments where one side is at a disadvantage: a careful sniping strategy can use a few individuals and resources to thwart the movement or other progress of a much better equipped or larger force, and sniping enables a few persons to instill terror in a much larger regular force, regardless of the size of the force the snipers are attached to. It is widely accepted that sniping, while effective in specific instances, is much more effective as a broadly deployed psychological attack or force multiplier. Snipers are less likely than non-snipers to be treated mercifully if captured by the enemy; the rationale is that ordinary soldiers shoot at each other on an "equal opportunity" basis, whilst snipers take their time tracking and killing individual targets in a methodical fashion with a relatively low risk of retaliation.
In 2003, the U.S.-led multinational coalition, composed primarily of U.S. and UK troops, occupied Iraq and attempted to establish a new government in the country. However, shortly after the initial invasion, violence against coalition forces and among various sectarian groups led to asymmetric warfare with the Iraqi insurgency and civil war between many Sunni and Shia Iraqis. Through November 2005, the Army had attributed 28 of 2,100 U.S. deaths to enemy snipers. In 2006, it was claimed that one insurgent sniper, "Juba", had shot up to 37 American soldiers. Training materials obtained by U.S. intelligence included, among their tips for shooting U.S. troops, the advice that "killing doctors and chaplains is suggested as a means of psychological warfare", on the theory that those casualties would demoralize entire units.
Sniper activity was reported during the Arab Spring civil unrest in Libya in 2011, from both anti-government and pro-government forces, and in Syria at least from pro-government forces. Even before firearms were available, soldiers such as archers were specially trained as elite marksmen.
https://en.wikipedia.org/wiki?curid=28123
Sign A sign is an object, quality, event, or entity whose presence or occurrence indicates the probable presence or occurrence of something else. A natural sign bears a causal relation to its object—for instance, thunder is a sign of storm, and medical symptoms are signs of disease. A conventional sign signifies by agreement, as a full stop signifies the end of a sentence; similarly, the words and expressions of a language, as well as bodily gestures, can be regarded as signs, expressing particular meanings. The physical objects most commonly referred to as signs (notices, road signs, etc., collectively known as signage) generally inform or instruct using written text, symbols, pictures or a combination of these. The philosophical study of signs and symbols is called semiotics; this includes the study of semiosis, which is the way in which signs (in the semiotic sense) operate.
Semiotics, epistemology, logic, and philosophy of language are concerned with the nature of signs, what they are and how they signify. The nature of signs and symbols and significations, their definition, elements, and types, was mainly established by Aristotle, Augustine, and Aquinas. According to these classic sources, significance is a relationship between two sorts of things: signs and the kinds of things they signify (intend, express or mean), where one term necessarily causes something else to come to mind. Distinguishing natural signs from conventional signs, the traditional theory of signs (Augustine) sets out a threefold partition of things: there are things which are signs only occasionally (all sorts of indications, evidences, symptoms, and physical signals); there are signs which are "always" signs (the entities of the mind, such as ideas and images, thoughts and feelings, constructs and intentions); and there are signs that "have" to get their signification (linguistic entities and cultural symbols). So, while natural signs serve as the source of signification, the human mind is the agency through which signs signify naturally occurring things, such as objects, states, qualities, quantities, events, processes, or relationships. Human language and discourse, communication, philosophy, science, logic, mathematics, poetry, theology, and religion are only some of the fields of human study and activity where grasping the nature of signs and symbols and patterns of signification may have a decisive value. Communication also takes place without words, via the mind as a result of signs and symbols: these pass messages to the human mind through their pictorial representation. The word "sign" has a variety of meanings in English.
St. Augustine was the first to synthesize the classical and Hellenistic theories of signs. For him, a sign is a thing which is used to signify other things and to make them come to mind ("De Doctrina Christiana" (hereafter DDC) 1.2.2; 2.1.1). The most common signs are spoken and written words (DDC 1.2.2; 2.3.4-2.4.5). Although God cannot be fully expressed, Augustine gave emphasis to the possibility of God's communication with humans by signs in Scripture (DDC 1.6.6). Augustine endorsed and developed the classical and Hellenistic theories of signs. Of the mainstream theories of signs, i.e., that of Aristotle and that of the Stoics, the former filtered into the works of Cicero (106-43 BC, "De inventione rhetorica" 1.30.47-48) and Quintilian (circa 35-100, "Institutio Oratoria" 5.9.9-10), which regarded the sign as an instrument of inference.
In his commentary on Aristotle's "De Interpretatione", Ammonius said, "according to the division of the philosopher Theophrastus, the relation of speech is twofold, first in regard to the audience, to which speech signifies something, and secondly in regard to the things about which the speaker intends to persuade the audience." If we match DDC with this division, the first part belongs to DDC Book IV and the second part to DDC Books I-III. Augustine, although influenced by these theories, advanced his own theological theory of signs, with whose help one can infer the mind of God from the events and words of Scripture.
Books II and III of DDC enumerate all kinds of signs and explain how to interpret them. Signs are divided into natural ("naturalia") and conventional ("data"); the latter are divided into animal ("bestiae") and human ("homines"); the latter are divided into non-words ("cetera") and words ("verba"); the latter are divided into spoken words ("voces") and written words ("litterae"); the latter are divided into unknown signs ("signa ignota") and ambiguous signs ("signa ambigua"); both the former and the latter are divided respectively into particular signs ("signa propria") and figurative signs ("signa translata"), among which the unknown figurative signs belong to the pagans. In addition to exegetical knowledge (Quintilian, "Institutio Oratoria" 1.4.1-3 and 1.8.1-21), which follows the order of reading ("lectio"), textual criticism ("emendatio"), explanation ("enarratio"), and judgment ("iudicium"), one needs to know the original languages (Hebrew and Greek) and broad background information on Scripture (DDC 2.9.14-2.40.60).
Augustine's understanding of signs includes several hermeneutical presuppositions as important factors. First, the interpreter should proceed with humility, because only a humble person can grasp the truth of Scripture (DDC 2.41.62). Second, the interpreter must have a spirit of active inquiry and should not hesitate to learn and use pagan education for the purpose of leading to Christian learning, because all truth is God's truth (DDC 2.40.60-2.42.63). Third, the heart of the interpreter should be founded, rooted, and built up in love, which is the final goal of the entire Scriptures (DDC 2.42.63).
The sign does not function as its own goal; its purpose lies in its role as a signification ("res significans", DDC 3.9.13). God gave signs as a means to reveal himself; Christians need to exercise hermeneutical principles in order to understand that divine revelation. Even if the Scriptural text is obscure, it has meaningful benefits: the obscure text prevents us from falling into pride, triggers our intelligence (DDC 2.6.7), tempers our faith in the history of revelation (DDC 3.8.12), and refines our mind to be suitable to the holy mysteries (DDC 4.8.22). When interpreting signs, the literal meaning should first be sought, and then the figurative meaning (DDC 3.10.14-3.23.33). Augustine suggests the hermeneutical principle that an obscure Scriptural verse is interpreted with the help of plain and simple verses, which formed the doctrine of "scriptura scripturae interpres" (Scripture is the interpreter of Scripture) in the Reformation era. Moreover, he introduces the seven rules of Tyconius the Donatist for interpreting the obscure meaning of the Bible, which demonstrates his understanding that all truth belongs to God (DDC 3.3.42-3.37.56).
In order to apply Augustine's hermeneutics of the sign appropriately in modern times, every division of theology must be involved and interdisciplinary approaches must be taken.
https://en.wikipedia.org/wiki?curid=28130
Standard Alphabet by Lepsius The Standard Alphabet is a Latin-script alphabet developed by Karl Richard Lepsius. Lepsius initially used it to transcribe Egyptian hieroglyphs and then extended it to write African languages; it was published in 1854 and 1855, and in a revised edition in 1863. The alphabet was comprehensive but saw little use, as it contained many diacritics and was difficult to read and typeset at the time. It was, however, influential in later projects such as Ellis's Paleotype, and diacritics such as the acute accent for palatalization, the under-dot for retroflex, the underline for Arabic emphatics, and the click letters continue in modern use.
Vowel length is indicated by a macron ("ā") or a breve ("ă") for long and short vowels, respectively. Open vowels are marked by a line under the letter ("e̱"), while a dot below the letter makes it a close vowel ("ẹ"). Rounded front vowels are written with an umlaut ("ö" and "ü"), placed below the letter when the space above it is needed for vowel-length marks (thus "ṳ̄" or "ṳ̆"). Unrounded back vowels are indicated by a corner (˻) below "e" or "i". (Central vowels may be written as one of these series, or as reduced vowels.) Nasal vowels get a tilde ("ã"). A small circle below a letter is used to mark both the schwa ("e̥", also "ḁ" etc. for other reduced vowels) and syllabic consonants ("r̥" or "l̥", for instance). Diphthongs do not receive any special marking; they are simply juxtaposed ("ai"). A short sign can be used to distinguish which element of a diphthong is the on- or off-glide ("uĭ, ŭi"), and vowels in hiatus can be indicated with a diaeresis when necessary ("aï"). Other vowels include "a" with a subscript "e", "a" with a subscript "o", and "o̩". The English syllabic "r" is written "ṙ̥". Word stress is marked with an acute accent on a long vowel ("á") and with a grave accent on a short vowel ("à").
The alphabet has a core set of letters without predictable diacritics, from which other consonant sounds may be derived. For example, palatal and palatalized consonants are marked with an acute accent: "ḱ", "ǵ", "ń", "χ́", "š́", "γ́", "ž́", "ĺ", "‘ĺ", "ı́", "ṕ", etc.; these can also be written "ky, py" etc. Labialized velars are written with an over-dot: "ġ", "n̈", etc. (A dot on a non-velar letter, as in "ṅ" and "ṙ", indicates a guttural articulation.) Retroflex consonants are marked with an under-dot: "ṭ", "ḍ", "ṇ", "ṣ̌", "ẓ̌", "ṛ", "ḷ", and "ı̣". The Semitic "emphatic" consonants are marked with an underline: "ṯ", "ḏ", "s̱", "ẕ", "δ̱", "ḻ". Aspiration is typically marked by "h" ("kh"), but a turned apostrophe (Greek "spiritus asper") is also used: "k̒", "ģ". Either convention may be used for voiceless sonorants: "m̒", "‘l". Affricates are generally written as sequences, e.g. "tš", but the single letters "č", "ǰ", "c̀", "j̀", "ț", and "d̦" are also used. Implosives are written with a macron: "b̄", "d̄", "j̄", "ḡ". As with vowels, long (geminate) consonants may also be written with a macron, so this transcription can be ambiguous. Lepsius typically characterized ejective consonants as tenuis, as they are completely unaspirated, and wrote them with the Greek "spiritus lenis" ("p’", "t’", etc.), which may be the source of the modern convention for ejectives in the IPA. However, when his sources made it clear that there was some activity in the throat, he transcribed them as emphatics.
When transcribing consonant letters which are pronounced the same but are etymologically distinct, as in Armenian, diacritics from the original alphabet or its roman transliteration may be carried over. Similarly, unique sounds such as Czech "ř" may be carried over into Lepsius transcription. Lepsius used a diacritic "r" under "t᷊" and "d᷊" for some poorly described sounds in Dravidian languages. Standard capitalization is used; for example, when written in all caps, "γ" becomes "Γ" (as in "AFΓAN" "Afghan").
Tones are marked with acute and grave accent marks placed to the right of the vowel, near its top or bottom; the diacritic may be underlined for a lower pitch, distinguishing eight possible tones in all. Tone is not written directly, but rather needs to be established separately for each language. For example, the acute accent may indicate a high tone, a rising tone, or, in the case of Chinese, any tone called "rising" (上) for historical reasons. Low rising and falling tones can be distinguished from high rising and falling tones by underlining the accent mark. The underline also transcribes the Chinese "yin" tones, under the mistaken impression that these tones are actually lower. Two additional tone marks, without any defined phonetic value, are used for Chinese: "level" maˏ (平) and "checked" maˎ (入); these may also be underlined.
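Because Lepsius letters are base letters stacked with diacritics, most of them can be composed today with Unicode combining marks. The following sketch is my own illustration of that correspondence, not part of Lepsius's system; the combining code points themselves are standard Unicode.
    import unicodedata

    # Combining marks matching the conventions described above.
    MACRON, BREVE = "\u0304", "\u0306"          # long / short vowels
    DOT_BELOW, LINE_BELOW = "\u0323", "\u0331"  # retroflex; Semitic "emphatics"
    ACUTE, RING_BELOW = "\u0301", "\u0325"      # palatal(ized); schwa, syllabic consonants

    def lepsius(base, *marks):
        """Compose a base letter with combining marks, precomposing where Unicode allows."""
        return unicodedata.normalize("NFC", base + "".join(marks))

    print(lepsius("a", MACRON))      # ā  long vowel
    print(lepsius("t", DOT_BELOW))   # ṭ  retroflex t
    print(lepsius("d", LINE_BELOW))  # ḏ  "emphatic" d
    print(lepsius("k", ACUTE))       # ḱ  palatalized k
    print(lepsius("r", RING_BELOW))  # r̥  syllabic r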
https://en.wikipedia.org/wiki?curid=28131
Sidehill gouger In American folklore, a sidehill gouger is a fearsome critter adapted to living on hillsides by having the legs on one side of its body shorter than the legs on the opposite side. This peculiarity allows it to walk on steep hillsides, although only in one direction; when lured or chased onto the plain, it is trapped in an endless circular path. The creature is variously known as the Sidehill Dodger, Sidehill Hoofer, Sidehill Ousel, Sidehill Loper, Gyascutus, Sidewinder, Wampus, Gudaphro, Hunkus, Rickaboo Racker, Prock, Gwinter, or Cutter Cuss.
Sidehill gougers are mammals who dwell in hillside burrows, though they are occasionally depicted as laying eggs. There are usually six to eight pups to a litter. Since the gouger is footed for hillsides, it cannot stand up on level ground; if by accident a gouger falls from a hill, it can easily be captured or will starve to death. When a clockwise gouger meets a counter-clockwise gouger, they have to fight to the death, since each can only go in one direction. The formation of terracettes has been attributed to gouger activity. Gougers are said to have migrated west from New England, a feat accomplished by a pair of gougers who clung to each other in a fashion comparable to "a pair of drunks going home from town with their longer legs on the outer sides." A Vermont variation is known as the Wampahoofus; farmers reportedly crossbred them with their cows so that the offspring could graze easily on mountainsides.
Frank C. Whitmore and Nicholas Hotton, in their joint tongue-in-cheek response to an article in "Smithsonian Magazine", expounded the taxonomy of sidehill gougers ("Membriinequales declivitous"), noting in particular "the sidehill dodger, which inhabits the Driftless Area of Wisconsin; the dextrosinistral limb ratio approaches unity although the metapodials on the downhill side are noticeably stouter." A special award, the Order of the Sidehill Gouger, is given by the Alberta Group of the Royal Canadian Air Force Association to worthy members for hard and long-standing volunteer efforts.
https://en.wikipedia.org/wiki?curid=28132
Second Vatican Council The Second Ecumenical Council of the Vatican, commonly known as the Second Vatican Council or Vatican II, addressed relations between the Catholic Church and the modern world. The Council, through the Holy See, was formally opened under the pontificate of Pope John XXIII on 11 October 1962 and was closed under Pope Paul VI on the Solemnity of the Immaculate Conception on 8 December 1965.
Several changes resulted from the Council, including the renewal of consecrated life with a revised charism, ecumenical efforts towards dialogue with other religions, and the universal call to holiness, which according to Pope Paul VI was "the most characteristic and ultimate purpose of the teachings of the Council". According to Pope Benedict XVI, the most important and essential message of the Council is "the Paschal Mystery as the center of what it is to be Christian and therefore of the Christian life, the Christian year, the Christian seasons". Other changes which followed the Council included the widespread use of vernacular languages in the Mass instead of Latin, the subtle disuse of ornate clerical regalia, the revision of Eucharistic (liturgical) prayers, the abbreviation of the liturgical calendar, the ability to celebrate the Mass "versus populum" (with the officiant facing the congregation) as well as "ad orientem" (facing the "East" and the Crucifix), and modern aesthetic changes encompassing contemporary Catholic liturgical music and artwork. Many of these changes remain divisive among the Catholic faithful.
Of those who took part in the Council's opening session, four became popes: Cardinal Giovanni Battista Montini, who on succeeding John XXIII took the name Pope Paul VI; Bishop Albino Luciani, the future Pope John Paul I; Bishop Karol Wojtyła, who became Pope John Paul II; and Father Joseph Ratzinger, present as a theological consultant, who became Pope Benedict XVI.
In the 1950s, theological and biblical studies in the Catholic Church had begun to move away from the Neo-Scholasticism and biblical literalism which a reaction to Catholic modernism had enforced since the First Vatican Council. This shift could be seen in theologians such as Karl Rahner and John Courtney Murray who, following Pope John XXIII's call for "aggiornamento", looked to integrate modern human experience with church principles based on Jesus Christ, as well as in others such as Yves Congar, Henri de Lubac, and Joseph Ratzinger, who looked to an accurate understanding of scripture and the early Church Fathers as a source of renewal ("ressourcement"). At the same time, the world's bishops faced challenges driven by political, social, economic, and technological change, and some of them sought new ways of addressing those challenges.
The First Vatican Council had been held nearly a century before but had been cut short in 1870 when the Italian Army entered the city of Rome at the end of Italian unification. As a result, only deliberations on the role of the papacy and the congruent relationship of faith and reason were completed, with the role of the bishops and laity in the Church left unaddressed. Pope John XXIII gave notice of his intention to convene the Council on 25 January 1959, less than three months after his election in October 1958. This sudden announcement, which caught the Curia by surprise, caused little initial official comment from Church insiders.
Reaction to the announcement was widespread and largely positive from both religious and secular leaders outside the Catholic Church, and the Council was formally summoned by the apostolic constitution "Humanae Salutis" on 25 December 1961. In various discussions before the Council convened, John XXIII said that it was time to "open the windows [of the Church] and let in some fresh air". He invited other Christians outside the Catholic Church to send observers to the Council. Acceptances came from both the Eastern Orthodox Church and Protestant denominations; these internal observers did not cast votes in the approbation of the conciliar documents.
Pope John XXIII's announcement on 25 January 1959, in the chapter hall of the Benedictine monastery attached to the Basilica of Saint Paul Outside the Walls in Rome, of his intention to call a general council came as a surprise even to the cardinals present. The Pontiff pre-announced the council under a full moon, when the faithful had gathered with candles in St. Peter's Square, and jokingly remarked on the brightness of the moon. He had tested the idea only ten days before with one of the cardinals, his Cardinal Secretary of State Domenico Tardini, who gave enthusiastic support. Although the Pope later said the idea came to him in a flash in his conversation with Tardini, two cardinals had earlier attempted to interest him in it. They were two of the most conservative, Ernesto Ruffini and Alfredo Ottaviani, who had already proposed the idea to Pope Pius XII in 1948 and who put it before John XXIII on 27 October 1958.
Actual preparations for the Council took more than two years, and included work from ten specialised commissions, bodies for mass media and Christian unity, and a Central Preparatory Commission with 120 members, composed mostly of members of the Roman Curia, for overall coordination. At Vatican I, 737 attended, mostly from Europe; attendance at Vatican II varied in later sessions from 2,100 to over 2,300. In addition, a varying number of "periti" ("experts") were available for theological consultation, a group that turned out to have a major influence as the Council went forward. Seventeen Orthodox Churches and Protestant denominations sent observers. More than three dozen representatives of other Christian communities were present at the opening session, and the number grew to nearly 100 by the end of the fourth session.
Pope John XXIII opened the Council on 11 October 1962 in a public session at St. Peter's Basilica in Vatican City and read the declaration "Gaudet Mater Ecclesia" before the Council Fathers. What is needed at the present time is a new enthusiasm, a new joy and serenity of mind in the unreserved acceptance by all of the entire Christian faith, without forfeiting that accuracy and precision in its presentation which characterized the proceedings of the Council of Trent and the First Vatican Council. What is needed, and what everyone imbued with a truly Christian, Catholic and apostolic spirit craves today, is that this doctrine shall be more widely known, more deeply understood, and more penetrating in its effects on men's moral lives. What is needed is that this certain and immutable doctrine, to which the faithful owe obedience, be studied afresh and reformulated in contemporary terms.
For this deposit of faith, or the truths which are contained in our time-honored teaching, is one thing; the manner in which these truths are set forth (with their meaning preserved intact) is something else.
The first working session of the Council was on 13 October 1962. That day's agenda included the election of members of the ten conciliar commissions. Each commission would have sixteen elected and eight appointed members, and they were expected to do most of the work of the Council. It had been expected that the members of the preparatory commissions, where the Curia was heavily represented, would be confirmed as the majorities on the conciliar commissions. But the senior French Cardinal Achille Liénart addressed the Council, saying that the bishops could not intelligently vote for strangers, and asked that the vote be postponed to give all the bishops a chance to draw up their own lists. German Cardinal Josef Frings seconded that proposal, and the vote was postponed. The first meeting of the Council adjourned after only fifteen minutes.
The bishops met to discuss the membership of the commissions, along with other issues, both in national and regional groups and in more informal gatherings. The "schemata" (Latin for drafts) from the preparatory sessions were rejected and new ones were created. When the Council met on 16 October 1962, a new slate of commission members was presented and approved by the Council. One important change was a significant increase in membership from Central and Northern Europe, beyond countries such as Spain or Italy. More than 100 bishops from Africa, Asia, and Latin America were Dutch or Belgian and tended to associate with the bishops from those countries. These groups were led by Cardinals Bernardus Johannes Alfrink of the Netherlands and Leo Suenens of Belgium. Eleven commissions and three secretariats were established, each with its own president.
After adjournment on 8 December, work began on preparations for the sessions scheduled for 1963. These preparations, however, were halted upon the death of Pope John XXIII on 3 June 1963, since a Catholic ecumenical council is automatically interrupted and suspended upon the death of the pope who convened it, until the next pope orders the council to be continued or dissolved. Pope Paul VI was elected on 21 June 1963 and immediately announced that the Council would continue.
In the months prior to the second session, Pope Paul VI worked to correct some of the problems of organization and procedure that had been discovered during the first session. The changes included inviting additional lay Catholic and non-Catholic observers, reducing the number of proposed schemata to seventeen (which were made more general, in keeping with the pastoral nature of the Council) and later eliminating the requirement of secrecy surrounding general sessions. Pope Paul's opening address on 29 September 1963 stressed the pastoral nature of the Council and set out four purposes for it.
During this second session, the bishops approved the constitution on the liturgy, "Sacrosanctum Concilium", and the decree on social communication, "Inter mirifica". Work went forward on the schemata on the Church, on bishops and dioceses, and on ecumenism. On 8 November 1963, Josef Frings criticized the Holy Office and drew an articulate and impassioned defense from its Secretary, Alfredo Ottaviani, in one of the most dramatic exchanges of the Council.
(Cardinal Frings' theological adviser was the young Joseph Ratzinger, who as a cardinal would later head the same department of the Holy See, and who reigned as Pope Benedict XVI from 2005 to 2013.) The second session ended on 4 December.
In the time between the second and third sessions, the proposed schemata were further revised on the basis of comments from the Council Fathers. A number of topics were reduced to statements of fundamental propositions that could gain approval during the third session, with postconciliar commissions handling implementation of these measures. At the end of the second session, Cardinal Leo Joseph Suenens of Belgium had asked the other bishops: "Why are we even discussing the reality of the church when half of the church is not even represented here?", referring to women. In response, 15 women were appointed as auditors in September 1964. Eventually 23 women were auditors at the Second Vatican Council, including 10 women religious. The auditors had no official role in the deliberations, although they attended the meetings of subcommittees working on Council documents, particularly texts that dealt with the laity. They also met together weekly to read draft documents and comment on them.
During the third session, which began on 14 September 1964, the Council Fathers worked through a large volume of proposals. Schemata on ecumenism ("Unitatis redintegratio"), with the official view on Protestant and Eastern Orthodox "separated brethren"; on the Eastern Rite churches ("Orientalium Ecclesiarum"); and the Dogmatic Constitution on the Church ("Lumen gentium") were approved and promulgated by the Pope. Schemata on the life and ministry of priests and on the missionary activity of the Church were rejected and sent back to commissions for complete rewriting. Work continued on the remaining schemata, in particular those on the Church in the modern world and on religious freedom. There was controversy over revisions of the decree on religious freedom and the failure to vote on it during the third session, but Pope Paul promised that this schema would be the first to be reviewed in the next session.
Pope Paul closed the third session on 21 November by announcing a change in the Eucharistic fast and formally reaffirming Mary as "Mother of the Church". While some called for more dogmas about Mary, in a 2 February 1965 speech Paul VI referred to the "Christocentric and Church-centered direction which the Council intends to give to our doctrine and devotion to our Lady".
Going into the final session, Paul VI and most of the bishops wanted it to be the last. Cardinal Ritter observed that "we were stalled by the delaying tactics of a very small minority" in the Curia, who were more industrious in communicating with the pope than was the more progressive majority. Eleven schemata remained unfinished at the end of the third session, and commissions worked to give them their final form. Schema 13, on the Church in the modern world, was revised by a commission that worked with the assistance of laypersons. Pope Paul VI opened the last session of the Council on 14 September 1965 and on the following day promulgated the "motu proprio" establishing the Synod of Bishops, a more permanent structure intended to preserve close cooperation of the bishops with the Pope after the Council.
The first business of the fourth session was the consideration of the decree on religious freedom, "Dignitatis humanae", one of the more controversial of the conciliar documents, which passed on 21 September by a vote of 1,997 for to 224 against. The principal work of the rest of the session was on three documents, all of which were approved by the Council Fathers: the lengthened and revised pastoral constitution on the Church in the modern world, "Gaudium et spes", followed by decrees on missionary activity, "Ad gentes", and on the ministry and life of priests, "Presbyterorum ordinis". The Council also gave final approval to other documents that had been considered in earlier sessions. These included the Dogmatic Constitution on Divine Revelation ("Dei verbum") and the decrees on the pastoral office of bishops ("Christus Dominus"), on the life of persons in religious orders (expanded and modified from earlier sessions, finally titled "Perfectae caritatis"), on education for the priesthood ("Optatam totius"), on Christian education ("Gravissimum educationis"), and on the role of the laity ("Apostolicam actuositatem").
One of the more controversial documents was "Nostra aetate", which stated that the Jews of the time of Christ, taken indiscriminately, and all Jews today are no more responsible for the death of Christ than Christians are. Better Jewish-Catholic relations have been emphasized since the Council. A major event of the final days was the joint expression of regret by Pope Paul and Orthodox Patriarch Athenagoras for many of the past actions that had led up to the Great Schism between the western and eastern churches. "The old story of the Samaritan has been the model of the spirituality of the Council" (Paul VI, address, 7 December). On 8 December, the Council was formally closed, with the bishops professing their obedience to the Council's decrees. To help carry forward the work of the Council, Pope Paul took a number of further steps.
During the Second Vatican Council the bishops produced four major "constitutions" and twelve other documents. The first document passed by the Council was "Sacrosanctum Concilium" ("Most Sacred Council"), on the church's liturgy. Pope Benedict XVI explained that an essential idea of the Council itself is the "Paschal Mystery (Christ's passion, death and resurrection) as the center of what it is to be Christian and therefore of the Christian life, the Christian year, the Christian seasons, expressed in Eastertide and on Sunday which is always the day of the Resurrection." Thus the liturgy, especially the Eucharist, which makes the Paschal Mystery present, is "the summit toward which the activity of the Church is directed; at the same time it is the font from which all her power flows."
The matter that had the most immediate effect on the lives of individual Catholics was the revision of the liturgy. The central idea was that there ought to be lay participation in the liturgy, meaning that the laity "take part fully aware of what they are doing, actively engaged in the rite, and enriched by its effects" (SC 11). Since the mid-1960s, permission has been granted to celebrate the Mass in vernacular languages, with the emphasis that the language used should be known to the gathered people. The amount of Scripture read during Mass was greatly expanded through different annual cycles of readings. The revised version of the Latin text of the Mass remains the authoritative text on which translations are based.
The invitation to more active, conscious participation of the laity through Mass in the vernacular did not stop with the decree on the liturgy. It was taken up by the later documents of the Council that called for a more active participation of the laity in the life of the Church, a turn away from clericalism toward a new age of the laity.
The Dogmatic Constitution on the Church, "Lumen gentium" ("Light of the Nations"), gave direction to several of the documents that followed it, including those on ecumenism, on non-Christian religions, on religious freedom, and on the Church in the modern world (see below). One contentious conclusion that seems to follow from the bishops' teaching in the document is that while "in some sense other Christian communities are institutionally defective," these communities can "in some cases be more effective as vehicles of grace." Belgian Bishop Emil de Smedt, commenting on institutional defects that had crept into the Catholic Church, "contrasted the hierarchical model of the church that embodied the triad of 'clericalism, legalism, and triumphalism' with one that emphasized the 'people of God', filled with the gifts of the Holy Spirit and radically equal in grace," the model extolled in "Lumen Gentium".
According to Pope Paul VI, "the most characteristic and ultimate purpose of the teachings of the Council" is the universal call to holiness. John Paul II called this "an intrinsic and essential aspect of [the Council Fathers'] teaching on the Church", whereby "all the faithful of Christ of whatever rank or status, are called to the fullness of the Christian life and to the perfection of charity" ("Lumen gentium", 40). Pope Francis, in his apostolic letter "Evangelii Gaudium" (17), which laid out the program for his pontificate, said that "on the basis of the teaching of the Dogmatic Constitution "Lumen Gentium"" he would discuss the entire People of God which evangelizes, missionary outreach, the inclusion of the poor in society, and peace and dialogue within society. Francis has also followed the Council's call for a more collegial style of leadership, through synods of bishops and through his personal use of a worldwide advisory council of eight cardinals.
The Council's document "Dei Verbum" ("The Word of God") states the principle, active in the other Council documents, that "the study of the sacred page is, as it were, the soul of sacred theology". It is said of "Dei Verbum" that "arguably it is the most seminal of all the conciliar documents," with the fruits of a return to the Bible as the foundation of Christian life and teaching evident in the other Council documents. Joseph Ratzinger, who would become Pope Benedict XVI, said of the emphasis on the Bible in the Council that prior to Vatican II the theology manuals continued to confuse "propositions about revelation with the content of revelation. It represented not abiding truths of faith, but rather the peculiar characteristics of post-Reformation polemic." In spite of the guarded approval of biblical scholarship under Pius XII, scholars suspected of Modernism were silenced right up to Vatican II. The Council brought a definitive end to the Counter-Reformation and, in a spirit of "aggiornamento", reached back "behind St. Thomas himself and the Fathers, to the biblical theology which governs the first two chapters of the Constitution on the Church." "The documents of the Second Vatican Council are shot through with the language of the Bible.
...The church's historical journey away from its earlier focus upon these sources was reversed at Vatican II." For instance, the Council's document on the liturgy called for a broader use of liturgical texts, which would now be in the vernacular, along with more enlightened preaching on the Bible explaining "the love affair between God and humankind".
This document, named for its first words "Gaudium et Spes" ("Joy and Hope"), built on "Lumen Gentium"'s understanding of the Church as the “pilgrim people of God” and as “communion”, aware of the long history of the Church's teaching and in touch with what it calls the “signs of the times”. It reflects the understanding that Baptism confers on all the task that Jesus entrusted to the Church, to be on mission to the world in ways that the present age can understand, in cooperation with the ongoing work of the Spirit. For those who "draw a distinction between non-negotiable teachings on human sexuality and negotiable teachings on social justice, "Gaudium et Spes" is an insuperable obstacle and the pontificate of Pope Francis is making that obvious for all with eyes to see."
Opening declaration – "Gaudet Mater Ecclesia" ("Mother Church Rejoices") was the opening declaration of the Second Vatican Council, delivered by Pope John XXIII on 11 October 1962 before the bishops and representatives of 86 governments or international groups. He criticized the "prophets of doom who are always forecasting disaster" for the church or world. He spoke of the advantage of the separation of Church and state, but also of the challenge to integrate faith with public life. The Church "meets today's needs by explaining the validity of her doctrine more fully rather than by condemning," reformulating ancient doctrine for pastoral effectiveness. Also, the Church is "moved by mercy and goodness towards her separated children." Before his papacy, John XXIII had proven his gifts as a papal diplomat and as Apostolic Nuncio to France.
On the Means of Social Communication – The decree "Inter mirifica" ("Among the wonderful", 1963) addresses issues concerning the press, cinema, television, and other media of communication.
Ecumenism – The decree "Unitatis redintegratio" ("Reintegration of Unity", 1964) opens with the statement: "The restoration of unity among all Christians is one of the principal concerns of the Second Vatican Council."
Of the Eastern Catholic Churches – The decree "Orientalium Ecclesiarum" ("Of the Eastern Churches", 1964) recognizes the right of Eastern Catholics in communion with the Holy See to keep their distinct liturgical practices and avoid Latinisation, and encourages them to "take steps to return to their ancestral traditions."
Mission Activity – The decree "Ad gentes" ("To the Nations", 1965) treats evangelization as the fundamental mission of the Catholic Church, "to bring good news to the poor." It includes sections on training missionaries and on forming communities.
The Apostolate of the Laity – The decree "Apostolicam actuositatem" ("Apostolic Activity", 1965) declares that the apostolate of the laity is "not only to bring the message and grace of Christ to men but also to penetrate and perfect the temporal order with the spirit of the Gospel", in every field of life, together or through various groups, with respectful cooperation with the Church's hierarchy.
The Pastoral Office of Bishops – The decree "Christus Dominus" ("Christ the Lord", 1965) places renewed emphasis on collegiality and on strong conferences of bishops, while respecting the papacy. On Religious Freedom – The declaration "Dignitatis humanae" ("Of the Dignity of the Human Person", 1965) is "on the right of the person and of communities to social and civil freedom in matters religious". Non-Christian Religions – The declaration "Nostra aetate" ("In our time", 1965) reflects that people are being drawn closer together in our time. The Church "regards with sincere reverence those ways of conduct and of life, those precepts and teachings which, though differing in many aspects from the ones she holds and sets forth, nonetheless often reflect a ray of that Truth which enlightens all men." And Jews today "should not be presented as rejected or accursed by God" for what happened to Jesus. The Adaptation and Renewal of Religious Life – The decree "Perfectae Caritatis" ("Of perfect charity", 1965) calls for "adaptation and renewal of the religious life [that] includes both the constant return to the sources of all Christian life and to the original spirit of the institutes and their adaptation to the changed conditions of our time." On the Ministry and Life of Priests – The decree "Presbyterorum ordinis" ("The order of priests", 1965) describes priests as "father and teacher" but also "brothers among brothers with all those who have been reborn at the baptismal font." Priests must "promote the dignity" of the laity, "willingly listen" to them, acknowledge and diligently foster "exalted charisms of the laity", and "entrust to the laity duties in the service of the Church, allowing them freedom and room for action." Also, the human and spiritual needs of priests are discussed in detail. On Priestly Training – The decree "Optatam totius" ("Desired [renewal] of the whole", 1965). On Christian Education – The declaration "Gravissimum educationis" ("The Extreme Importance of Education", 1965). Closing Statement – On 12 January 1966, a month after the close of the Council, Pope Paul VI wrote the letter "Udienze Generale" on how the Council was to be interpreted. The questioning of the nature and even the validity of the Second Vatican Council continues to be a point of contention and conflict among various religious communities, some of which are not in communion with the Catholic Church. In particular, two schools of thought may be discerned: The most recent edition of the 1983 Code of Canon Law states that Catholics may not disregard the teaching of an ecumenical council even if it does not propose it as definitive. Accordingly, it also maintains the view that the present living Pope alone judges the criterion of membership for being "in communio" with the Church. The present canon law further articulates: In addition to general spiritual guidance, the Second Vatican Council produced very specific recommendations, such as in the document "Gaudium et Spes": "Any act of war aimed indiscriminately at the destruction of entire cities or extensive areas along with their population is a crime against God and man himself. It merits unequivocal and unhesitating condemnation." "Dignitatis humanae", authored largely by the United States theologian John Courtney Murray, challenged the Council fathers to find "reasons for religious freedom" in which they believed, and drew from the scripture scholar John L. 
McKenzie the comment: "The Church can survive the disorder of development better than she can stand the living death of organized immobility." By "the spirit of Vatican II" is often meant promoting teachings and intentions attributed to the Second Vatican Council in ways not limited to literal readings of its documents, spoken of as the "letter" of the Council (cf. Saint Paul's phrase, "the letter kills, but the Spirit gives life"). The spirit of Vatican II is invoked for a great variety of ideas and attitudes. Bishop John Tong Hon of Hong Kong used it with regard merely to an openness to dialogue with others, saying: "We are guided by the spirit of Vatican II: only dialogue and negotiation can solve conflicts." In contrast, Michael Novak described it as a spirit that: To mark the fiftieth anniversary of the beginning of Vatican II, in October 2011, Pope Benedict XVI declared the period from October 2012 to the Solemnity of Christ the King at the end of November 2013 a "Year of Faith", as: It has been suggested that the pontificate of Pope Francis will be looked upon as the "decisive moment in the history of the church in which the full force of the Second Vatican Council's reformist vision was finally realized." Francis returned to the Vatican II theme of "ressourcement", breaking with the Catholic philosophical tradition that had originated with Thomas Aquinas seven centuries before, and looked to original sources in the New Testament. In contrast to John Paul II, who emphasized continuity with the past in Vatican II's teachings, Francis' words and actions were noted from the start for their discontinuities, with an emphasis on Jesus himself and on mercy: a "church that is poor and for the poor", "disposal of the baroque trappings" in liturgical celebrations, and revision of the institutional aspects of the church. From his first gesture when elected Pope, calling himself simply Bishop of Rome, Francis connected with the thrust of the Council away from "legalism, triumphalism, and clericalism". He made greater use of church synods and instituted a more collegial manner of governance by constituting a Council of Cardinal Advisers from throughout the world to assist him, a step a church historian calls the "most important step in the history of the church for the past 10 centuries." His refocusing of the Church on “a moral theology that rests on scripture and Jesus’ command to love” is also seen as coming from the Council, as is his lifting up of the laity for mission and his calling for the presence of women in theologates. He has softened the "forbidding" image of the Church by applying Vatican II's views on respect for conscience to issues like atheism, homosexuality, and the sacraments. This has led to a struggle between "anti-Vatican II diehards and clerics who prefer John XXIII’s (and Francis’s) generosity of spirit." On the issue of liturgy, he has tried to advance the renewal initiated by Vatican II that would elicit more conscious, active participation by the people. And while his predecessors had taken a dim view of liberation theology, his more positive view is seen as flowing from a discernment of "the signs of the times" called for by "Gaudium et spes". He also appointed more cardinals from the southern hemisphere. 
Several of the Council Fathers and theologian-experts, as well as several popes and council observers, became canonized saints or are in the process of canonization. These include:
https://en.wikipedia.org/wiki?curid=28134
Slovene language Slovene ( or ), or alternatively Slovenian (; "slovenski jezik" or "slovenščina"), is a South Slavic language spoken by the Slovenes. It is spoken by about 2.5 million speakers worldwide, the majority of whom live in Slovenia, where it is the official language. As Slovenia is part of the European Union, Slovene is also one of its 24 official and working languages. Standard Slovene is the national standard language that was formed in the 18th and 19th centuries, based on the Upper and Lower Carniolan dialect groups, more specifically on the language of Ljubljana and its adjacent areas. The Lower Carniolan dialect group was the dialect used by Primož Trubar, though he also used the Slovene language as spoken in Ljubljana, since he lived in the city for more than 20 years. It was the speech of Ljubljana that Trubar took as the foundation of what later became standard Slovene, with a small admixture of his native speech, that is, the Lower Carniolan dialect. Trubar's choice was later adopted by other Protestant writers in the 16th century as well, and ultimately led to the formation of a more standard language. The Upper Carniolan dialect was also used by most authors during the language revival of the 18th and early 19th centuries, and was also the language spoken by France Prešeren, who, as was the case with most Slovene writers and poets, lived and worked in Ljubljana, whose speech ultimately grew closer to the Upper Carniolan dialect group. Unstandardized dialects are better preserved in regions of the Slovene Lands where compulsory schooling was in languages other than Standard Slovene, as was the case with the Carinthian Slovenes in Austria and the Slovene minority in Italy. For example, the Resian and Torre (Ter) dialects in the Italian Province of Udine differ most from other Slovene dialects. The distinctive characteristics of Slovene are dual grammatical number, two accentual norms (one characterized by pitch accent), and abundant inflection (a trait shared with many Slavic languages). Although Slovene is basically an SVO language, word order is very flexible, often adjusted for emphasis or stylistic reasons. Slovene has a T–V distinction: second-person plural forms are used for individuals as a sign of respect. Slovene is an Indo-European language belonging to the Western subgroup of the South Slavic branch of the Slavic languages, together with Serbo-Croatian. It is close to the Chakavian and especially Kajkavian dialects of Serbo-Croatian, but further from the Shtokavian dialect, the basis for the Bosnian, Croatian, Montenegrin, and Serbian standard languages. Furthermore, Slovene shares certain linguistic characteristics with all South Slavic languages, including those of the Eastern subgroup, such as Bulgarian. Mutual intelligibility with varieties of Serbo-Croatian is hindered by differences in vocabulary, grammar, and pronunciation, Kajkavian being the most mutually intelligible. The Slovene language also has many commonalities with the West Slavic languages. Like all Slavic languages, Slovene traces its roots to the same proto-Slavic group of languages that produced Old Church Slavonic. The earliest known examples of a distinct, written dialect possibly connected to Slovene are from the "Freising manuscripts", known in Slovene as "Brižinski spomeniki". The consensus estimate of their date of origin is between 972 and 1039 (most likely before 1000). These religious writings are among the oldest surviving manuscripts in any Slavic language. 
The "Freising manuscripts" are a record of a proto-Slovene language that was spoken in a more scattered territory than modern Slovene, which included most of the present-day Austrian states of Carinthia and Styria, as well as East Tyrol, the Val Pusteria in South Tyrol, and some areas of Upper and Lower Austria. By the 15th century, most of the northern areas were gradually Germanized: the northern border of the Slovene-speaking territory stabilized on the line going from north of Klagenfurt to south of Villach and east of Hermagor in Carinthia, while in Styria it was pretty much identical with the current Austrian-Slovenian border. This linguistic border remained almost unchanged until the late 19th century, when a second process of Germanization took place, mostly in Carinthia. Between the 9th and 12th century, proto-Slovene spread into northern Istria and in the areas around Trieste. During most of the Middle Ages, Slovene was a vernacular language of the peasantry, although it was also spoken in most of the towns on Slovenian territory, together with German or Italian. Although during this time, German emerged as the spoken language of the nobility, Slovene had some role in the courtly life of the Carinthian, Carniolan and Styrian nobility, as well. This is proved by the survival of certain ritual formulas in Slovene (such as the ritual installation of the Dukes of Carinthia). The words "Buge waz primi, gralva Venus!" ("God be With You, Queen Venus!"), with which Bernhard von Spanheim greeted the poet Ulrich von Liechtenstein, who was travelling around Europe in guise of Venus, upon his arrival in Carinthia in 1227 (or 1238), is another example of some level of Slovene knowledge among high nobility in the region. The first printed Slovene words, "stara pravda" (meaning 'old justice' or 'old laws'), appeared in 1515 in Vienna in a poem of the German mercenaries who suppressed the Slovene peasant revolt: the term was presented as the peasants' motto and battle cry. Standard Slovene emerged in the second half of the 16th century, thanks to the works of Slovene Lutheran authors, who were active during the Protestant Reformation. The most prominent authors from this period are Primož Trubar, who wrote the first books in Slovene; Adam Bohorič, the author of the first Slovene grammar; and Jurij Dalmatin, who translated the entire Bible into Slovene. From the high Middle Ages up to the dissolution of the Austro-Hungarian Empire in 1918, in the territory of present-day Slovenia, German was the language of the elite, and Slovene was the language of the common people. During this period, German had a strong influence on Slovene, and many Germanisms are preserved in contemporary colloquial Slovene. Many Slovene scientists before the 1920s also wrote in foreign languages, mostly German, which was the "lingua franca" of science throughout Central Europe at the time. During the rise of Romantic nationalism in the 19th century, the cultural movements of Illyrism and Pan-Slavism brought words from Serbo-Croatian, specifically Croatian dialects, and Czech into standard Slovene, mostly to replace words previously borrowed from German. Most of these innovations have remained, although some were dropped in later development. In the second half of the 19th century, many nationalist authors made an abundant use of Serbo-Croatian words: among them were Fran Levstik and Josip Jurčič, who wrote the first novel in Slovene in 1866. 
This tendency was reversed in the Fin de siècle period by the first generation of modernist Slovene authors (most notably the writer Ivan Cankar), who resorted to a more "pure" and simple language without excessive Serbo-Croatian borrowings. During the Kingdom of Yugoslavia in the 1920s and 1930s, the influence of Serbo-Croatian increased again. This was opposed by the younger generations of Slovene authors and intellectuals; among the fiercest opponents of an excessive Serbo-Croatian influence on Slovene were the intellectuals associated with the leftist journal "Sodobnost", as well as some younger Catholic activists and authors. After 1945, numerous Serbo-Croatian words that had been used in the previous decades were dropped. The result was that a Slovene text from the 1910s is frequently closer to modern Slovene than a text from the 1920s and 1930s. Between 1920 and 1941, the official language of the Kingdom of Yugoslavia was defined as "Serbian-Croatian-Slovene". In practice, Slovene was used in Slovenia, both in education and administration. Many state institutions used only Serbo-Croatian, and a Slovene–Serbo-Croatian bilingualism was applied in many spheres of public life in Slovenia. For example, at post offices, on the railways and in administrative offices, Serbo-Croatian was used together with Slovene. However, state employees were expected to be able to speak Slovene in Slovenia. During the same time, western Slovenia (the Slovenian Littoral and the western districts of Inner Carniola) was under Italian administration and subjected to a violent policy of Fascist Italianization; the same policy was applied to Slovene speakers in Venetian Slovenia, Gorizia and Trieste. Between 1923 and 1943, all public use of the Slovene language in these territories was strictly prohibited, and Slovene language activists were persecuted by the state. After the Carinthian Plebiscite of 1920, a less severe policy of Germanization took place in the Slovene-speaking areas of southern Carinthia which remained under Austrian administration. After the Anschluss of 1938, the use of Slovene was strictly forbidden in Carinthia as well. This accelerated a process of language shift in Carinthia, which continued throughout the second half of the 20th century: according to the Austro-Hungarian census of 1910, around 21% of the inhabitants of Carinthia spoke Slovene in their daily communication; by 1951, this figure had dropped to less than 10%, and by 2001 to a mere 2.8%. During World War II, Slovenia was divided among the Axis Powers of Fascist Italy, Nazi Germany, and Hungary. Each of the occupying powers tried to either discourage or entirely suppress the Slovene language. Following World War II, Slovenia became part of the Socialist Federal Republic of Yugoslavia. Slovene was one of the official languages of the federation. In the territory of Slovenia, it was commonly used in almost all areas of public life. One important exception was the Yugoslav army, where Serbo-Croatian was used exclusively, even in Slovenia. National independence has revitalized the language: since 1991, when Slovenia gained independence, Slovene has been used as an official language in all areas of public life. In 2004 it became one of the official languages of the European Union upon Slovenia's admission. Joža Mahnič, a literary historian and president of the publishing house "Slovenska matica", said in February 2008 that Slovene is a language rich enough to express everything, including the most sophisticated and specialised texts. 
In February 2010, Janez Dular, a prominent Slovenian linguist, commented that, although Slovene is not an endangered language, its scope has been shrinking, especially in science and higher education. The language is spoken by about 2.5 million people, mainly in Slovenia, but also by Slovene national minorities in Friuli-Venezia Giulia, Italy (around 90,000 in Venetian Slovenia, the Resia Valley, the Canale Valley, the Province of Trieste and in those municipalities of the Province of Gorizia bordering on Slovenia), and in southern Carinthia and some parts of Styria in Austria (25,000). It is also spoken in Croatia, especially in Istria, Rijeka and Zagreb (11,800–13,100), in southwestern Hungary (3,000–5,000), in Serbia (5,000), and by the Slovene diaspora throughout Europe and the rest of the world (around 300,000), particularly in the United States (most notably Ohio, home to an estimated 3,400 speakers), Canada, Argentina, Australia and South Africa. Slovene is sometimes characterized as the most diverse Slavic language in terms of dialects, with different degrees of mutual intelligibility. Accounts of the number of dialects range from as few as seven (often considered dialect groups or dialect bases) to as many as 50 dialects into which these groups are further subdivided. Other sources characterize the number of dialects as nine or eight. The Slovene proverb "Every village has its own voice" ("Vsaka vas ima svoj glas") depicts the differences in dialects. Although pronunciation differs greatly from area to area, those differences do not pose major obstacles to understanding. The standard language is mainly used in public presentations or on formal occasions. The Prekmurje dialect at one point had a written norm of its own. The Resian dialects have an independent written norm that is used by their regional state institutions. Speakers of those two dialects have considerable difficulty being understood by speakers of other varieties of Slovene and need to code-switch to Standard Slovene. Other dialects are mutually intelligible when speakers avoid the excessive usage of regionalisms. Regionalisms are mostly limited to culinary and agricultural expressions, although there are many exceptions. Some loanwords have become so deeply rooted in the local language that people have considerable difficulty finding a standard expression for the dialect term (for instance, "kovter", meaning blanket, is "prešita odeja" in Standard Slovene, but the latter term is "never" used in speech). Southwestern dialects incorporate a great deal of calques and loanwords from Italian, whereas eastern and northwestern dialects are replete with lexemes of German origin. Usage of such words hinders intelligibility between dialects and is greatly discouraged in formal situations. Slovene has a phoneme set consisting of 21 distinctive consonant phonemes and 8 vowels. All voiced obstruents are devoiced at the end of words unless immediately followed by a word beginning with a vowel or a voiced consonant. In consonant clusters, the voicing distinction is neutralized and all consonants assimilate the voicing of the rightmost segment; both rules are sketched in code below. In this context, , and may occur as voiced allophones of , and , respectively (e.g. "vŕh drevésa" ). The sequences , and occur only before a vowel. Before a consonant or word-finally, they are reduced to , and respectively. This is reflected in the spelling in the case of , but not for and . 
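A minimal sketch of the two voicing rules just described, assuming a toy string-based transcription and the standard voiced/voiceless obstruent pairings; the function names and data layout are illustrative, not taken from any Slovene NLP library:

# Toy model of Slovene final devoicing and regressive voicing assimilation.
VOICED_TO_VOICELESS = {"b": "p", "d": "t", "g": "k", "z": "s", "ž": "š"}
VOICELESS_TO_VOICED = {v: k for k, v in VOICED_TO_VOICELESS.items()}
OBSTRUENTS = set(VOICED_TO_VOICELESS) | set(VOICELESS_TO_VOICED)

def devoice_final(word):
    """Final devoicing: a word-final voiced obstruent loses its voicing
    (applies pre-pausally, i.e. not before a vowel- or voiced-initial word)."""
    if word and word[-1] in VOICED_TO_VOICELESS:
        return word[:-1] + VOICED_TO_VOICELESS[word[-1]]
    return word

def assimilate_cluster(cluster):
    """Regressive assimilation: every obstruent in a cluster takes on
    the voicing of the rightmost segment."""
    if not cluster or cluster[-1] not in OBSTRUENTS:
        return cluster
    make_voiced = cluster[-1] in VOICED_TO_VOICELESS  # rightmost is voiced
    table = VOICELESS_TO_VOICED if make_voiced else VOICED_TO_VOICELESS
    return "".join(table.get(c, c) for c in cluster)

print(devoice_final("grad"))     # -> "grat": final d devoiced
print(assimilate_cluster("sd"))  # -> "zd": s voiced before voiced d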
Under certain (somewhat unpredictable) circumstances, at the end of a syllable may become , merging with the allophone of in that position. Slovene has an eight-vowel (or, according to Peter Jurgec, nine-vowel) system, in comparison to the five-vowel system of Serbo-Croatian. Slovene nouns retain six of the seven Slavic noun cases: nominative, accusative, genitive, dative, locative and instrumental. There is no distinct vocative; the nominative is used in that role. Nouns, adjectives and pronouns have three numbers: singular, dual and plural. Nouns in Slovene are of masculine, feminine or neuter gender. In addition, there is a distinction between animate and inanimate nouns, although this is relevant only for masculine nouns and only in the singular. Animate nouns have an accusative singular form that is identical to the genitive, while for inanimate nouns the accusative singular is the same as the nominative; the rule is sketched in code below. Animacy is based mostly on semantics and is less rigid than gender. Generally speaking, a noun is animate if it refers to something that is generally thought to have free will or the ability to move of its own accord. This includes all nouns for people and animals. All other nouns are inanimate, including plants and other non-moving life forms, and also groups of people or animals. However, there are some nouns for inanimate objects that are grammatically animate, which mostly include inanimate objects named after people or animals. This includes: Slovene, like most other European languages, has a T–V distinction: two forms of 'you', one for informal and one for formal situations. Although informal address using the 2nd person singular "ti" form (known as "tikanje") is officially limited to friends and family, talk among children, and addressing animals, it is increasingly used among the middle generation to signal a relaxed attitude or lifestyle instead of its polite or formal counterpart using the 2nd person plural "vi" form (known as "vikanje"). An additional nonstandard but widespread use of a singular participle combined with a plural auxiliary verb (known as "polvikanje") signals a somewhat more friendly and less formal attitude while maintaining politeness: The use of nonstandard forms ("polvikanje") might be frowned upon by many people and would not likely be used in a formal setting. The use of the 3rd person plural "oni" ('they') form (known as "onikanje" in both direct address and indirect reference; this is similar to using "Sie" in German) as an ultra-polite form is now archaic or dialectal. It is associated with servant-master relationships in older literature, the child-parent relationship in certain conservative rural communities, and parishioner-priest relationships. Foreign words used in Slovene are of various types depending on the assimilation they have undergone. The types are: The loanwords are mostly from German and Italian, while the more recently borrowed and less assimilated words are typically from English. There are no definite or indefinite articles as in English ("a", "an", "the") or German ("der", "die", "das", "ein", "eine"). Verbs and nouns are used without articles, and grammatical gender is identified from the word's ending. It is enough to say "barka" ("a" or "the barge"), "Noetova barka" ('Noah's ark'). The gender is known in this case to be feminine. In declensions, endings are normally changed; see below. 
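A toy illustration of the animacy rule described above; the two sample nouns and their case forms are supplied here as assumptions for the example, not produced by a real declension engine:

# Masculine singular accusative: genitive form if animate, nominative if not.
SAMPLE_NOUNS = {
    # noun: (nominative sg., genitive sg., animate?)
    "brat": ("brat", "brata", True),   # 'brother': animate
    "grad": ("grad", "gradu", False),  # 'castle': inanimate
}

def accusative_singular_masculine(noun):
    nominative, genitive, animate = SAMPLE_NOUNS[noun]
    return genitive if animate else nominative

print(accusative_singular_masculine("brat"))  # -> "brata" (acc. = gen.)
print(accusative_singular_masculine("grad"))  # -> "grad" (acc. = nom.)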
To explicitly distinguish between definiteness and indefiniteness of a noun, one says "(prav/natanko/ravno) tista barka" ('that/precise/exact barge') for "the barge" and "neka/ena barka" ('some/a barge') for "a barge". Definiteness of a noun phrase can also be discerned through the ending of the accompanying adjective. One says "rdeči šotor" ([exactly that] red tent) or "rdeč šotor" ([a] red tent). This difference is observable only for masculine nouns in the nominative or accusative case. Because Slovene lacks articles and the difference between the masculine adjective forms is barely audible, most dialects do not distinguish between the definite and indefinite variants of the adjective, which leads to hypercorrection when speakers try to use Standard Slovene. This alphabet () was derived in the mid-1840s from the system created by the Croatian linguist Ljudevit Gaj. Intended for the Serbo-Croatian language (in all its varieties), it was patterned on the Czech alphabet of the 1830s. Before that was, for example, written as , or ; as , , or ; sometimes as as a relic from the now modern Russian yery character, usually transliterated as "y"; as ; as ; as ; as , or . The standard Slovenian orthography, used in almost all situations, uses only the letters of the ISO basic Latin alphabet plus , , and : The orthography thus underdifferentiates several phonemic distinctions: In the tonemic varieties of Slovene, the ambiguity is even worse: "e" in a final syllable can stand for any of (although is rare). The reader is expected to gather the interpretation of the word from the context, as in these examples: To compensate for the shortcomings of the standard orthography, Slovene also uses standardized diacritics or accent marks to denote stress, vowel length and pitch accent, much like the closely related Serbo-Croatian. However, as in Serbo-Croatian, use of such accent marks is restricted to dictionaries, language textbooks and linguistic publications. In normal writing, the diacritics are almost never used, except in a few minimal pairs where real ambiguity could arise. Two different and mutually incompatible systems of diacritics are used. The first is the simpler non-tonemic system, which can be applied to all Slovene dialects. It is more widely used and is the standard representation in dictionaries such as SSKJ. The tonemic system also includes tone as part of the representation. However, neither system reliably distinguishes schwa from the front mid-vowels, nor vocalised l from regular l. Some sources write these as "ə" and "ł", respectively, but this is not as common. In the non-tonemic system, the distinction between the two mid-vowels is indicated, as well as the placement of stress and length of vowels: The tonemic system uses the diacritics somewhat differently from the non-tonemic system. The high-mid vowels and are written "ẹ ọ" with a subscript dot, while the low-mid vowels and are written as plain "e o". Pitch accent and length are indicated by four diacritical marks: The schwa vowel is written ambiguously as "e", but its accentuation will sometimes distinguish it: a long vowel mark can never appear on a schwa, while a grave accent can appear only on a schwa. Thus, only "ȅ" and unstressed "e" are truly ambiguous. 
Standard Slovene spelling and grammar are defined by the Orthographic Committee and the Fran Ramovš Institute of the Slovenian Language, which are both part of the Slovenian Academy of Sciences and Arts ("Slovenska akademija znanosti in umetnosti", SAZU). The newest reference book of standard Slovene spelling (and to some extent also grammar) is the "Slovenski pravopis" ("SP2001"; Slovene Normative Guide). The latest printed edition was published in 2001 (reprinted in 2003 with some corrections) and contains more than 130,000 dictionary entries. In 2003, an electronic version was published. The official dictionary of modern Slovene, which was also prepared by SAZU, is "Slovar slovenskega knjižnega jezika" ("SSKJ"; Standard Slovene Dictionary). It was published in five volumes by Državna Založba Slovenije between 1970 and 1991 and contains more than 100,000 entries and subentries with accentuation, part-of-speech labels, common collocations, and various qualifiers. In the 1990s, an electronic version of the dictionary was published, and it is available online. The SAZU considers SP2001 to be the normative source on the Slovenian language. When dictionary entries in SP2001 and SSKJ differ, the SP2001 entry takes precedence. SP2001 is called a Spelling Dictionary by the European Network of e-Lexicography.
https://en.wikipedia.org/wiki?curid=28135
Slovak language Slovak (), or alternatively Slovakian, is a West Slavic language of the Czech–Slovak group. Spoken by approximately 5 million people as a native language, it serves as the official language of Slovakia and one of the 24 official languages of the European Union. Slovak is closely related to Czech, to the point of very high mutual intelligibility, as well as to Polish. Like other Slavic languages, Slovak is a fusional language with a complex system of morphology and relatively flexible word order. Its vocabulary has been extensively influenced by Latin, German, and other Slavic languages. The Czech–Slovak group developed within West Slavic in the high medieval period, and the standardization of Czech and Slovak within the Czech–Slovak dialect continuum emerged in the early modern period. In the mid-19th century, the modern Slovak alphabet and written standard were codified by Ľudovít Štúr and reformed by Martin Hattala. The Moravian dialects spoken in the western part of the country along the border with the Czech Republic are also sometimes classified as Slovak, although some of their western variants are closer to Czech; they nonetheless form the bridge dialects between the two languages. Slovak speakers are also found in the Slovak diaspora in the United States, the Czech Republic, Argentina, Serbia, Ireland, Romania, Poland (where Slovak is a recognised minority language), Canada, Hungary, Germany, Croatia, Israel, the United Kingdom, Australia, Austria, Ukraine, Norway, and in other countries to a lesser extent. Slovak uses the Latin script with small modifications that include the four diacritics (ˇ, ´, ¨, ˆ) placed above certain letters (). The primary principle of Slovak spelling is the phonemic principle. The secondary principle is the morphological principle: forms derived from the same stem are written in the same way even if they are pronounced differently. An example of this principle is the assimilation rule (see below). The tertiary principle is the etymological principle, which can be seen in the use of "i" after certain consonants and of "y" after other consonants, although both "i" and "y" are usually pronounced the same way. Finally, the rarely applied grammatical principle is present when, for example, the basic singular form and plural form of masculine adjectives are written differently with no difference in pronunciation (e.g. = nice – singular versus = nice – plural). In addition, the following rules are present: Most loanwords from foreign languages are respelt using Slovak principles either immediately or later. For example, "weekend" is spelled , "software" – , "gay" – (both not exclusively), and "quality" is spelled . Personal and geographical names from other languages using Latin alphabets keep their original spelling unless a fully Slovak form of the name exists (e.g. for "London"). Slovak features some heterophonic homographs (words with identical spelling but different pronunciation and meaning), the most common examples being (beautiful) versus (beautifully). The main features of Slovak syntax are as follows: Some examples include the following: Word order in Slovak is relatively free, since strong inflection enables the identification of grammatical roles (subject, object, predicate, etc.) regardless of word placement. This relatively free word order allows the use of word order to convey topic and emphasis. Some examples are as follows: The unmarked order is subject–verb–object. 
Variation in word order is generally possible, but word order is not completely free. In the above example, the noun phrase cannot be split up, so that the following combinations are not possible: And the following sentence is stylistically infelicitous: The regular variants are as follows: Slovak does not have articles. The demonstrative pronoun (fem: , neuter: ) may be used in front of the noun in situations where definiteness must be made explicit. Slovak nouns are inflected for case and number. There are six cases: nominative, genitive, dative, accusative, locative, and instrumental. The vocative is no longer morphologically marked. There are two numbers: singular and plural. Nouns have inherent gender. There are three genders: masculine, feminine, and neuter. Adjectives and pronouns must agree with nouns in case, number, and gender. The numerals 0–10 have unique forms, with numerals 1–4 requiring specific gendered representations. Numerals 11–19 are formed by adding to the end of each numeral. The suffix is used to create numerals 20, 30 and 40; for numerals 50, 60, 70, 80 and 90, is used. Compound numerals (21, 1054) are combinations of these words formed in the same order as their mathematical symbol is written (e.g. 21 = , literally "twenty-one"); this compounding pattern is sketched in code below. The numerals are as follows: Some higher numbers: (200) , (300) , (900) , (1,000) , (1,100) , (2,000) , (100,000) , (200,000) , (1,000,000) , (1,000,000,000) . Counted nouns have two forms. The most common form is the plural genitive (e.g. = five houses or = one hundred two women), while the plural form of the noun when counting the amounts of 2–4, etc., is usually the nominative form without counting (e.g. = two houses or = two women), but gender rules do apply in many cases. Verbs have three major conjugations. Three persons and two numbers (singular and plural) are distinguished. Several conjugation paradigms exist as follows: Adverbs are formed by replacing the adjectival ending with the ending - or - / -. Sometimes both - and - are possible. Examples include the following: The comparative/superlative of adverbs is formed by replacing the adjectival ending with a comparative/superlative ending - or -. Examples include the following: Each preposition is associated with one or more grammatical cases. The noun governed by a preposition must appear in the case required by the preposition in the given context (e.g. from friends = ). is the genitive case of . It must appear in this case because the preposition (= from) always calls for its objects to be in the genitive. The Slovak language is a descendant of Proto-Slavic, itself a descendant of Proto-Indo-European. It is closely related to the other West Slavic languages, primarily Czech and Polish. Czech also influenced the language in its later development. The highest number of borrowings in the old Slovak vocabulary comes from Latin, German, Czech, Hungarian, Polish and Greek (in that order). More recently, the language has also been influenced by English. Although most dialects of Czech and Slovak are mutually intelligible (see Comparison of Slovak and Czech), the eastern Slovak dialects are less intelligible to speakers of Czech and closer to Polish, Ruthenian and Ukrainian, and contact between speakers of Czech and speakers of the eastern dialects is limited. Since the dissolution of Czechoslovakia it has been permitted to use Czech in TV broadcasting and during court proceedings (Administration Procedure Act 99/1963 Zb.). 
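The numeral-compounding pattern referenced above can be sketched as follows; since the running text elides the concrete Slovak forms, the simple numerals and suffixed tens below are supplied as assumptions of this example:

# Toy speller for Slovak numerals 0-99; the word lists are assumptions.
UNITS = ["nula", "jeden", "dva", "tri", "štyri", "päť",
         "šesť", "sedem", "osem", "deväť", "desať"]
TEENS = {11: "jedenásť", 12: "dvanásť", 13: "trinásť", 14: "štrnásť",
         15: "pätnásť", 16: "šestnásť", 17: "sedemnásť",
         18: "osemnásť", 19: "devätnásť"}
TENS = {20: "dvadsať", 30: "tridsať", 40: "štyridsať", 50: "päťdesiat",
        60: "šesťdesiat", 70: "sedemdesiat", 80: "osemdesiat",
        90: "deväťdesiat"}

def slovak_numeral(n):
    """Spell out 0-99; compounds follow the written order of the digits."""
    if n <= 10:
        return UNITS[n]
    if n in TEENS:
        return TEENS[n]
    tens, unit = divmod(n, 10)
    word = TENS[tens * 10]
    return word + UNITS[unit] if unit else word

print(slovak_numeral(21))  # -> "dvadsaťjeden", literally "twenty-one"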
From 1999 to August 2009, the Minority Language Act 184/1999 Z.z., in its section (§) 6, contained an unclear and variously interpreted provision saying that "When applying this act, it holds that the use of the Czech language fulfills the requirement of fundamental intelligibility with the state language"; the state language is Slovak, and the Minority Language Act basically refers to municipalities with more than 20% ethnic minority population (no such Czech municipalities are found in Slovakia). Since 1 September 2009 (due to an amendment to the State Language Act 270/1995 Z.z.), a language "fundamentally intelligible with the state language" (i.e. the Czech language) may be used in contact with state offices and bodies by its native speakers, and documents written in it and issued by bodies in the Czech Republic are officially accepted. Regardless of its official status, Czech is used commonly both in Slovak mass media and in daily communication by Czech natives as an equal language. Czech and Slovak have a long history of interaction and mutual influence well before the creation of Czechoslovakia in 1918, a state which existed until 1993. Literary Slovak shares significant orthographic features with Czech, as well as technical and professional terminology dating from the Czechoslovak period, but phonetic, grammatical, and vocabulary differences do exist. Slavic language varieties are relatively closely related and have had a large degree of mutual influence, due to the complicated ethnopolitical history of their historic ranges. This is reflected in the many features Slovak shares with neighboring language varieties. Standard Slovak shares high degrees of mutual intelligibility with many Slavic varieties. Despite this closeness to other Slavic varieties, significant variation exists among Slovak dialects. In particular, eastern varieties differ significantly from the standard language, which is based on central and western varieties. Eastern Slovak dialects have the greatest degree of mutual intelligibility with Polish of all the Slovak dialects, followed by Rusyn, but both Eastern Slovak and Rusyn lack familiar technical terminology and upper-register expressions. Polish and Sorbian also differ quite considerably from Czech and Slovak in upper registers, but non-technical and lower-register speech is readily intelligible. Some mutual intelligibility occurs with spoken Rusyn, Ukrainian, and even Russian (in this order), although their orthographies are based on the Cyrillic script. Hungarians and Slovaks have had a language interaction ever since the settlement of Hungarians in the Carpathian area. Hungarians also adopted many words from various Slavic languages related to agriculture and administration, and a number of Hungarian loanwords are found in Slovak. Some examples are as follows: There are many Slovak dialects, which are divided into the following four basic groups: The fourth group of dialects is often not considered a separate group, but a subgroup of the Central and Western Slovak dialects (see e.g. Štolc, 1968), but it is currently undergoing changes due to contact with surrounding languages (Serbo-Croatian, Romanian, and Hungarian) and long-time geographical separation from Slovakia (see the studies in "Zborník Spolku vojvodinských slovakistov", e.g. Dudok, 1993). The dialect groups differ mostly in phonology, vocabulary, and tonal inflection. 
Syntactic differences are minor. Central Slovak forms the basis of the present-day standard language. Not all dialects are fully mutually intelligible. It may be difficult for an inhabitant of western Slovakia to understand a dialect from eastern Slovakia, and vice versa. The dialects are fragmented geographically, separated by numerous mountain ranges. The first three groups already existed in the 10th century. All of them are spoken by the Slovaks outside Slovakia (in the USA, Canada, Croatian Slavonia, and elsewhere), and the central and western dialects form the basis of the lowland dialects (see above). The western dialects contain features in common with the Moravian dialects in the Czech Republic, the southern central dialects contain a few features in common with South Slavic languages, and the eastern dialects a few features in common with Polish and the East Slavonic languages (cf. Štolc, 1994). Lowland dialects share some words and areal features with the languages surrounding them (Serbo-Croatian, Hungarian, and Romanian). Standard Slovak () is defined by an Act of Parliament on the State Language of the Slovak Republic (language law). According to this law, the Ministry of Culture approves and publishes the codified form of the Slovak language based on the judgment of specialised Slovakistic linguistic institutes and specialists in the area of the state language. This is traditionally the Ľudovít Štúr Institute of Linguistics, which is part of the Slovak Academy of Sciences. In practice, the Ministry of Culture publishes a document that specifies authoritative reference books for standard Slovak usage. There are four such publications:
https://en.wikipedia.org/wiki?curid=28136
Supercluster A supercluster is a large group of smaller galaxy clusters or galaxy groups; it is among the largest known structures of the universe. The Milky Way is part of the Local Group galaxy group (which contains more than 54 galaxies), which in turn is part of the Virgo Supercluster, which is part of the Laniakea Supercluster. The large size and low density of superclusters mean that they, unlike clusters, expand with the Hubble expansion (a worked example follows below). The number of superclusters in the observable universe is estimated to be 10 million. The existence of superclusters indicates that the galaxies in the Universe are not uniformly distributed; most of them are drawn together in groups and clusters, with groups containing up to some dozens of galaxies and clusters containing up to several thousand galaxies. Those groups and clusters and additional isolated galaxies in turn form even larger structures called superclusters. Their existence was first postulated by George Abell in his 1958 Abell catalogue of galaxy clusters. He called them "second-order clusters", or clusters of clusters. Superclusters form massive structures of galaxies, called "filaments", "supercluster complexes", "walls" or "sheets", that may span from several hundred million light-years to 10 billion light-years, covering more than 5% of the observable universe. These are the largest structures known to date. Observations of superclusters can give information about the initial conditions of the universe, when these superclusters were created. The directions of the rotational axes of galaxies within superclusters are studied because they may give insight into the early formation process of galaxies in the history of the Universe. Interspersed among superclusters are large voids of space where few galaxies exist. Within superclusters, galaxies are in turn organized into the smaller galaxy groups and clusters described above. Although superclusters are supposed to be the largest structures in the universe according to the cosmological principle, even larger structures, such as the Sloan Great Wall, have been observed in surveys.
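As a rough worked example of the Hubble-expansion point above (the Hubble constant of about 70 km/s/Mpc and the 100 Mpc separation are assumed round figures, not values from this article's sources): two parts of a supercluster separated by d = 100 Mpc recede from each other at roughly v = H0 × d ≈ (70 km/s/Mpc) × (100 Mpc) = 7,000 km/s. This is much larger than the internal motions of order 1,000 km/s within individual clusters, which is why clusters hold together gravitationally while superclusters as a whole do not.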
https://en.wikipedia.org/wiki?curid=28142
Salicylic acid Salicylic acid (from Latin "salix", 'willow tree') is a lipophilic monohydroxybenzoic acid, a type of phenolic acid, and a beta hydroxy acid (BHA). It has the formula C7H6O3. This colorless crystalline organic acid is widely used in organic synthesis and functions as a plant hormone. It is derived from the metabolism of salicin. In addition to serving as an important active metabolite of aspirin ("acetylsalicylic acid"), which acts in part as a prodrug to salicylic acid, it is probably best known for its use as a key ingredient in topical anti-acne products. The salts and esters of salicylic acid are known as salicylates. It is on the World Health Organization's List of Essential Medicines, the safest and most effective medicines needed in a health system. Salicylic acid as a medication is used most commonly to help remove the outer layer of the skin. As such, it is used to treat warts, psoriasis, acne, ringworm, dandruff, and ichthyosis. Similar to other hydroxy acids, salicylic acid is a key ingredient in many skincare products for the treatment of seborrhoeic dermatitis, acne, psoriasis, calluses, corns, keratosis pilaris, acanthosis nigricans, ichthyosis and warts. Salicylic acid is used in the production of other pharmaceuticals, including 4-aminosalicylic acid, sandulpiride, and landetimide (via salethamide). Salicylic acid was one of the original starting materials for making acetylsalicylic acid (aspirin) in 1897. Bismuth subsalicylate, a salt of bismuth and salicylic acid, is the active ingredient in stomach relief aids such as Pepto-Bismol, is the main ingredient of Kaopectate, and "displays anti-inflammatory action (due to salicylic acid) and also acts as an antacid and mild antibiotic". Other derivatives include methyl salicylate, used as a liniment to soothe joint and muscle pain, and choline salicylate, used topically to relieve the pain of mouth ulcers. Salicylic acid is used as a food preservative, a bactericide and an antiseptic. Sodium salicylate is a useful phosphor in the vacuum ultraviolet spectral range, with nearly flat quantum efficiency for wavelengths between 10 and 100 nm. It fluoresces in the blue at 420 nm. It is easily prepared on a clean surface by spraying a saturated solution of the salt in methanol followed by evaporation. Aspirin (acetylsalicylic acid or ASA) can be prepared by the esterification of the phenolic hydroxyl group of salicylic acid with the acetyl group from acetic anhydride or acetyl chloride; the reaction is summarized in the equation below. Salicylic acid modulates COX-2 gene expression to decrease the formation of pro-inflammatory prostaglandins. Salicylate may competitively inhibit prostaglandin formation. Salicylate's antirheumatic (nonsteroidal anti-inflammatory) actions are a result of its analgesic and anti-inflammatory mechanisms. Salicylic acid works by causing the cells of the epidermis to slough off more readily, preventing pores from clogging up, and allowing room for new cell growth. Salicylic acid inhibits the oxidation of uridine-5-diphosphoglucose (UDPG) competitively with nicotinamide adenine dinucleotide (NAD) and noncompetitively with UDPG. It also competitively inhibits the transfer of the glucuronyl group of uridine-5-diphosphoglucuronic acid (UDPGA) to the phenolic acceptor. The wound-healing retardation action of salicylates is probably due mainly to their inhibitory action on mucopolysaccharide synthesis. 
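In outline, the acetylation just mentioned balances as follows (a standard textbook equation, supplied here for clarity rather than taken from this article's sources):

C7H6O3 (salicylic acid) + (CH3CO)2O (acetic anhydride) → C9H8O4 (acetylsalicylic acid) + CH3COOH (acetic acid)

The same acetyl transfer with acetyl chloride instead of the anhydride releases HCl rather than acetic acid.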
Preparations of 17% to 27% salicylic acid in the form of a paint, and of 20% to 50% in plaster form, which are sold for wart and corn removal, should not be applied to the face and should not be used for acne treatment. Even for wart removal, such a solution should be applied once or twice a day – more frequent use may lead to an increase in side-effects without an increase in efficacy. If high concentrations of salicylic acid ointment are applied to a large percentage of the body surface, high levels of salicylic acid can enter the blood, requiring hemodialysis to avoid further complications. Salicylic acid has the formula C6H4(OH)COOH, where the OH group is "ortho" to the carboxyl group. It is also known as 2-hydroxybenzoic acid. It is poorly soluble in water (2 g/L at 20 °C). Salicylic acid is biosynthesized from the amino acid phenylalanine. In "Arabidopsis thaliana" it can be synthesized via a phenylalanine-independent pathway. Sodium salicylate is commercially prepared by treating sodium phenolate (the sodium salt of phenol) with carbon dioxide at high pressure (100 atm) and high temperature (390 K), a method known as the Kolbe-Schmitt reaction: C6H5ONa + CO2 → C6H4(OH)COONa. Acidification of the product with sulfuric acid gives salicylic acid: 2 C6H4(OH)COONa + H2SO4 → 2 C6H4(OH)COOH + Na2SO4. It can also be prepared by the hydrolysis of aspirin (acetylsalicylic acid) or methyl salicylate (oil of wintergreen) with a strong acid or base. Salicylic acid degrades to phenol and carbon dioxide at 200–230 °C: C6H4(OH)COOH → C6H5OH + CO2. Hippocrates, Galen, Pliny the Elder and others knew that willow bark could ease pain and reduce fevers. It was used in Europe and China to treat these conditions. This remedy is mentioned in texts from ancient Egypt, Sumer, and Assyria. The Cherokee and other Native Americans use an infusion of the bark for fever and other medicinal purposes. In 2014, archaeologists identified traces of salicylic acid on 7th-century pottery fragments found in east-central Colorado. The Reverend Edward Stone, a vicar from Chipping Norton, Oxfordshire, England, noted in 1763 that the bark of the willow was effective in reducing a fever. The active extract of the bark, called "salicin" after the Latin name for the white willow ("Salix alba"), was isolated and named by the German chemist Johann Andreas Buchner in 1828. A larger amount of the substance was isolated in 1829 by Henri Leroux, a French pharmacist.
https://en.wikipedia.org/wiki?curid=28143
Seaborgium Seaborgium is a synthetic chemical element with the symbol Sg and atomic number 106. It is named after the American nuclear chemist Glenn T. Seaborg. As a synthetic element, it can be created in a laboratory but is not found in nature. It is also radioactive; the most stable known isotope, 269Sg, has a half-life of approximately 14 minutes. In the periodic table of the elements, it is a d-block transactinide element. It is a member of the 7th period and belongs to the group 6 elements as the fourth member of the 6d series of transition metals. Chemistry experiments have confirmed that seaborgium behaves as the heavier homologue to tungsten in group 6. The chemical properties of seaborgium are characterized only partly, but they compare well with the chemistry of the other group 6 elements. In 1974, a few atoms of seaborgium were produced in laboratories in the Soviet Union and in the United States. The priority of the discovery and therefore the naming of the element was disputed between Soviet and American scientists, and it was not until 1997 that the International Union of Pure and Applied Chemistry (IUPAC) established seaborgium as the official name for the element. It is one of only two elements named after a living person at the time of naming, the other being oganesson, element 118. Following claims of the observation of elements 104 and 105 in 1970 by Albert Ghiorso et al. at the Lawrence Berkeley Laboratory, a search for element 106 using oxygen-18 projectiles and the previously used californium-249 target was conducted. Several 9.1 MeV alpha decays were reported and are now thought to originate from element 106, though this was not confirmed at the time. In 1972, the HILAC accelerator received equipment upgrades, preventing the team from repeating the experiment, and data analysis was not done during the shutdown. This reaction was tried again several years later, in 1974, and the Berkeley team realized that their new data agreed with their 1971 data, to the astonishment of Ghiorso. Hence, element 106 could actually have been discovered in 1971 if the original data had been analyzed more carefully. Two groups claimed discovery of the element. Unambiguous evidence of element 106 was first reported in 1974 by a Russian research team in Dubna led by Yuri Oganessian, in which targets of lead-208 and lead-207 were bombarded with accelerated ions of chromium-54. In total, fifty-one spontaneous fission events were observed with a half-life between four and ten milliseconds. After having ruled out nucleon transfer reactions as a cause for these activities, the team concluded that the most likely cause of the activities was the spontaneous fission of isotopes of element 106. The isotope in question was first suggested to be seaborgium-259, but was later corrected to seaborgium-260. A few months later in 1974, researchers including Glenn T. Seaborg, Carol Alonso and Albert Ghiorso at the University of California, Berkeley, and E. Kenneth Hulet from the Lawrence Livermore National Laboratory also synthesized the element by bombarding a californium-249 target with oxygen-18 ions, using equipment similar to that which had been used for the synthesis of element 104 five years earlier, observing at least seventy alpha decays, seemingly from the isotope seaborgium-263m with a half-life of seconds. 
The alpha daughter rutherfordium-259 and granddaughter nobelium-255 had previously been synthesised, and the properties observed here matched those previously known, as did the intensity of their production. The cross-section of the reaction observed, 0.3 nanobarns, also agreed well with theoretical predictions. These results bolstered the assignment of the alpha decay events to seaborgium-263m. A dispute thus arose from the initial competing claims of discovery, though unlike the case of the synthetic elements up to element 105, neither team of discoverers chose to announce proposed names for the new elements, thus averting an element naming controversy temporarily. The dispute on discovery, however, dragged on until 1992, when the IUPAC/IUPAP Transfermium Working Group (TWG), formed to put an end to the controversy by making conclusions regarding discovery claims for elements 101 to 112, concluded that the Soviet synthesis of seaborgium-260 was not convincing enough, "lacking as it is in yield curves and angular selection results", whereas the American synthesis of seaborgium-263 was convincing due to its being firmly anchored to known daughter nuclei. As such, the TWG recognised the Berkeley team as official discoverers in their 1993 report. Seaborg had previously suggested to the TWG that if Berkeley were recognised as the official discoverer of elements 104 and 105, they might propose the name "kurchatovium" (symbol Kt) for element 106 to honour the Dubna team, which had proposed this name for element 104 after Igor Kurchatov, the former head of the Soviet nuclear research programme. However, due to the worsening relations between the competing teams after the publication of the TWG report (because the Berkeley team vehemently disagreed with the TWG's conclusions, especially regarding element 104), this proposal was dropped from consideration by the Berkeley team. After being recognised as official discoverers, the Berkeley team started deciding on a name in earnest: Seaborg's son Eric remembered the naming process as follows: The name "seaborgium" and symbol "Sg" were announced at the 207th national meeting of the American Chemical Society in March 1994 by Kenneth Hulet, one of the co-discoverers. However, IUPAC resolved in August 1994 that an element could not be named after a living person, and Seaborg was still alive at the time. Thus, in September 1994, IUPAC recommended a set of names in which the names proposed by the three laboratories (the third being the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany) with competing claims to the discovery for elements 104 to 109 were shifted to various other elements, in which "rutherfordium" (Rf), the Berkeley proposal for element 104, was shifted to element 106, with "seaborgium" being dropped entirely as a name. This decision ignited a firestorm of worldwide protest for disregarding the historic discoverer's right to name new elements, and against the new retroactive rule against naming elements after living persons; the American Chemical Society stood firmly behind the name "seaborgium" for element 106, together with all the other American and German naming proposals for elements 104 to 109, approving these names for its journals in defiance of IUPAC. At first, IUPAC defended itself, with an American member of its committee writing: "Discoverers don't have a right to name an element. They have a right to suggest a name. And, of course, we didn't infringe on that at all." 
However, Seaborg responded: Bowing to public pressure, IUPAC proposed a different compromise in August 1995, in which the name "seaborgium" was reinstated for element 106 in exchange for the removal of all but one of the other American proposals, which met an even worse response. Finally, IUPAC rescinded these previous compromises and made a final, new recommendation in August 1997, in which the American and German proposals for elements 104 to 109 were all adopted, including "seaborgium" for element 106, with the single exception of element 105, named "dubnium" to recognise the contributions of the Dubna team to the experimental procedures of transactinide synthesis. This list was finally accepted by the American Chemical Society, which wrote: Seaborg commented regarding the naming: Seaborg died a year and a half later, on 25 February 1999, at the age of 86. Superheavy elements such as seaborgium are produced by bombarding lighter elements in particle accelerators so as to induce fusion reactions. Whereas most of the isotopes of seaborgium can be synthesized directly this way, some heavier ones have only been observed as decay products of elements with higher atomic numbers. Depending on the energies involved, fusion reactions that generate superheavy elements are separated into "hot" and "cold". In hot fusion reactions, very light, high-energy projectiles are accelerated toward very heavy targets (actinides), giving rise to compound nuclei at high excitation energy (~40–50 MeV) that may either fission or evaporate several (3 to 5) neutrons. In cold fusion reactions, the produced fused nuclei have a relatively low excitation energy (~10–20 MeV), which decreases the probability that these products will undergo fission reactions. As the fused nuclei cool to the ground state, they require emission of only one or two neutrons and thus allow for the generation of more neutron-rich products. The latter is a concept distinct from nuclear fusion claimed to be achieved at room-temperature conditions (see cold fusion). Seaborgium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Twelve different isotopes of seaborgium have been reported, with atomic masses 258–267, 269, and 271, three of which, seaborgium-261, 263, and 265, have known metastable states. All of these decay only through alpha decay and spontaneous fission, with the single exception of seaborgium-261, which can also undergo electron capture to dubnium-261. There is a trend toward increasing half-lives for the heavier isotopes; thus the heaviest three known isotopes, 267Sg, 269Sg, and 271Sg, are also the longest-lived, having half-lives in minutes. Some other isotopes in this region are predicted to have comparable or even longer half-lives. Additionally, 263Sg, 265Sg, and 265mSg have half-lives measured in seconds. All the remaining isotopes have half-lives measured in milliseconds, with the exception of the shortest-lived isotope, 261mSg, with a half-life of only 92 microseconds. The proton-rich isotopes from 258Sg to 261Sg were directly produced by cold fusion; all heavier isotopes were produced from the repeated alpha decay of the heavier elements hassium, darmstadtium, and flerovium, with the exceptions of the isotopes 263mSg, 264Sg, 265Sg, and 265mSg, which were directly produced by hot fusion through irradiation of actinide targets. 
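For concreteness, the two 1974 discovery reactions described earlier balance as follows; the evaporated-neutron counts (2 and 4) are inferred here from the mass numbers rather than quoted from this article's sources:

208Pb + 54Cr → 260Sg + 2 n (cold fusion, Dubna)
249Cf + 18O → 263mSg + 4 n (hot fusion, Berkeley)

The proton numbers balance in both cases: 82 + 24 = 106 and 98 + 8 = 106, the atomic number of seaborgium.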
The twelve isotopes of seaborgium have half-lives ranging from 92 microseconds for 261mSg to 14 minutes for 269Sg. Very few properties of seaborgium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that seaborgium (and its parents) decay very quickly. A few isolated chemistry-related properties have been measured, but the properties of seaborgium metal remain unknown and only predictions are available.

Seaborgium is expected to be a solid under normal conditions and to assume a body-centered cubic crystal structure, similar to its lighter congener tungsten. It should be a very heavy metal with a density of around 35.0 g/cm³, which would be the fourth-highest of any of the 118 known elements, lower only than bohrium (37.1 g/cm³), meitnerium (37.4 g/cm³), and hassium (41 g/cm³), the three following elements in the periodic table. In comparison, the densest element whose density has actually been measured, osmium, has a density of only 22.61 g/cm³. Seaborgium's high density results from its high atomic weight, the lanthanide and actinide contractions, and relativistic effects, although producing enough seaborgium to measure this quantity would be impractical, and the sample would quickly decay.

Seaborgium is the fourth member of the 6d series of transition metals and the heaviest member of group 6 in the periodic table, below chromium, molybdenum, and tungsten. All the members of the group form a diversity of oxoanions. They readily exhibit their group oxidation state of +6, although this is highly oxidising in the case of chromium, and this state becomes more and more stable to reduction as the group is descended: indeed, tungsten is the last of the 5d transition metals where all four 5d electrons participate in metallic bonding. As such, seaborgium should have +6 as its most stable oxidation state, both in the gas phase and in aqueous solution, and this is the only oxidation state that is experimentally known for it; the +5 and +4 states should be less stable, and the +3 state, the most common for chromium, would be the least stable for seaborgium. This stabilisation of the highest oxidation state occurs in the early 6d elements because of the similarity between the energies of the 6d and 7s orbitals, since the 7s orbitals are relativistically stabilised and the 6d orbitals are relativistically destabilised. This effect is so large in the seventh period that seaborgium is expected to lose its 6d electrons before its 7s electrons (Sg, [Rn]5f¹⁴6d⁴7s²; Sg⁺, [Rn]5f¹⁴6d³7s²; Sg²⁺, [Rn]5f¹⁴6d³7s¹; Sg⁴⁺, [Rn]5f¹⁴6d²; Sg⁶⁺, [Rn]5f¹⁴). Because of the great destabilisation of the 6d orbitals, Sg(IV) should be even more unstable than W(IV) and should be very readily oxidised to Sg(VI). The predicted ionic radius of the hexacoordinate Sg⁶⁺ ion is 65 pm, while the predicted atomic radius of seaborgium is 128 pm. Nevertheless, the stability of the highest oxidation state is still expected to decrease in the order Lr(III) > Rf(IV) > Db(V) > Sg(VI). Standard reduction potentials for seaborgium ions in aqueous acidic solution have also been predicted.

Seaborgium should form a very volatile hexafluoride (SgF6) as well as a moderately volatile hexachloride (SgCl6), pentachloride (SgCl5), and oxychlorides SgO2Cl2 and SgOCl4. SgO2Cl2 is expected to be the most stable of the seaborgium oxychlorides and the least volatile of the group 6 oxychlorides, with the sequence MoO2Cl2 > WO2Cl2 > SgO2Cl2.
The volatile seaborgium(VI) compounds SgCl6 and SgOCl4 are expected to be unstable to decomposition into seaborgium(V) compounds at high temperatures, analogous to MoCl6 and MoOCl4; this should not happen for SgO2Cl2, due to the much higher energy gap between its highest occupied and lowest unoccupied molecular orbitals, despite the similar Sg–Cl bond strengths (as for molybdenum and tungsten). Molybdenum and tungsten are very similar to each other and show important differences from the smaller chromium, and seaborgium is expected to follow the chemistry of tungsten and molybdenum quite closely, forming an even greater variety of oxoanions, the simplest among them being seaborgate, [SgO4]2−, which would form from the rapid hydrolysis of [Sg(H2O)6]6+, although this would take place less readily than with molybdenum and tungsten, as expected from seaborgium's greater size. Seaborgium should hydrolyse less readily than tungsten in hydrofluoric acid at low concentrations, but more readily at high concentrations, also forming complexes such as SgO3F−; complex formation competes with hydrolysis in hydrofluoric acid.

Experimental chemical investigation of seaborgium has been hampered by the need to produce it one atom at a time, by its short half-life, and by the resulting necessary harshness of the experimental conditions. The isotope 265Sg and its isomer 265mSg are advantageous for radiochemistry: they are produced in the 248Cm(22Ne,5n) reaction. In the first experimental chemical studies of seaborgium in 1995 and 1996, seaborgium atoms were produced in the reaction 248Cm(22Ne,4n)266Sg, thermalised, and reacted with an O2/HCl mixture. The adsorption properties of the resulting oxychloride were measured and compared with those of molybdenum and tungsten compounds. The results indicated that seaborgium formed a volatile oxychloride akin to those of the other group 6 elements, and confirmed the decreasing trend of oxychloride volatility down group 6. In 2001, a team continued the study of the gas-phase chemistry of seaborgium by reacting the element with O2 in a H2O environment. In a manner similar to the formation of the oxychloride, the results of the experiment indicated the formation of seaborgium oxide hydroxide, a reaction well known among the lighter group 6 homologues as well as the pseudohomologue uranium.

Predictions on the aqueous chemistry of seaborgium have largely been confirmed. In experiments conducted in 1997 and 1998, seaborgium was eluted from cation-exchange resin using a HNO3/HF solution, most likely as neutral SgO2F2 or the anionic complex ion [SgO2F3]−. In contrast, in 0.1 M nitric acid, seaborgium does not elute, unlike molybdenum and tungsten, indicating that the hydrolysis of [Sg(H2O)6]6+ proceeds only as far as the cationic complex [Sg(OH)4(H2O)]2+ or [Sg(OH)3(H2O)2]+, while that of molybdenum and tungsten proceeds to neutral [MO2(OH)2].

The only oxidation state known for seaborgium other than the group oxidation state of +6 is the zero oxidation state. Like its three lighter congeners, which form chromium hexacarbonyl, molybdenum hexacarbonyl, and tungsten hexacarbonyl, seaborgium was shown in 2014 to form seaborgium hexacarbonyl, Sg(CO)6. Like its molybdenum and tungsten homologues, seaborgium hexacarbonyl is a volatile compound that reacts readily with silicon dioxide.
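As a cross-check on the predictions quoted above, the figure of around 35 g/cm³ can be roughly reproduced from the predicted body-centered cubic structure and 128 pm atomic radius. In a bcc lattice the nearest-neighbour distance is (√3/4)·a, so a radius r implies a lattice constant a = 4r/√3 and two atoms per unit cell. A back-of-the-envelope sketch, assuming the 128 pm radius can be treated as the metallic radius and taking a molar mass of about 269 g/mol (the long-lived isotope 269Sg) as an illustrative value:

```python
import math

N_A = 6.022e23  # Avogadro's number, 1/mol

def bcc_density(radius_pm: float, molar_mass_g: float) -> float:
    """Density in g/cm^3 of a bcc metal, from metallic radius and molar mass."""
    a_cm = (4 * radius_pm / math.sqrt(3)) * 1e-10  # lattice constant, pm -> cm
    atoms_per_cell = 2                             # bcc: corner atoms + 1 body atom
    return atoms_per_cell * molar_mass_g / (N_A * a_cm ** 3)

# Predicted radius 128 pm (from the text); molar mass ~269 g/mol is an assumption.
print(f"{bcc_density(128, 269):.1f} g/cm^3")  # ~34.6, close to the predicted 35.0
```

The close agreement is partly circular, since the published radius and density come from the same relativistic calculations, but it shows how the two predictions hang together.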
https://en.wikipedia.org/wiki?curid=28144
Serpens Serpens ("the Serpent", Greek ) is a constellation of the northern hemisphere. One of the 48 constellations listed by the 2nd-century astronomer Ptolemy, it remains one of the 88 modern constellations defined by the International Astronomical Union. It is unique among the modern constellations in being split into two non-contiguous parts, Serpens Caput (Serpent Head) to the west and Serpens Cauda (Serpent Tail) to the east. Between these two halves lies the constellation of Ophiuchus, the "Serpent-Bearer". In figurative representations, the body of the serpent is represented as passing behind Ophiuchus between Mu Serpentis in "Serpens Caput" and Nu Serpentis in "Serpens Cauda". The brightest star in Serpens is the red giant star Alpha Serpentis, or Unukalhai, in Serpens Caput, with an apparent magnitude of 2.63. Also located in Serpens Caput are the naked-eye globular cluster Messier 5 and the naked-eye variables R Serpentis and Tau4 Serpentis. Notable extragalactic objects include Seyfert's Sextet, one of the densest galaxy clusters known; Arp 220, the prototypical ultraluminous infrared galaxy; and Hoag's Object, the most famous of the very rare class of galaxies known as ring galaxies. Part of the Milky Way's galactic plane passes through Serpens Cauda, which is therefore rich in galactic deep-sky objects, such as the Eagle Nebula (IC 4703) and its associated star cluster Messier 16. The nebula measures 70 light-years by 50 light-years and contains the Pillars of Creation, three dust clouds that became famous for the image taken by the Hubble Space Telescope. Other striking objects include the Red Square Nebula, one of the few objects in astronomy to take on a square shape; and Westerhout 40, a massive nearby star-forming region consisting of a molecular cloud and an H II region. In Greek mythology, Serpens represents a snake held by the healer Asclepius. Represented in the sky by the constellation Ophiuchus, Asclepius once killed a snake, but the animal was subsequently resurrected after a second snake placed a revival herb on it before its death. As snakes shed their skin every year, they were known as the symbol of rebirth in ancient Greek society, and legend says Asclepius would revive dead humans using the same technique he witnessed. Although this is likely the logic for Serpens' presence with Ophiuchus, the true reason is still not fully known. Sometimes, Serpens was depicted as coiling around Ophiuchus, but the majority of atlases showed Serpens passing either behind Ophiuchus' body or between his legs. In some ancient atlases, the constellations Serpens and Ophiuchus were depicted as two separate constellations, although more often they were shown as a single constellation. One notable figure to depict Serpens separately was Johann Bayer; thus, Serpens' stars are cataloged with separate Bayer designations from those of Ophiuchus. When Eugène Delporte established modern constellation boundaries in the 1920s, he elected to depict the two separately. However, this posed the problem of how to disentangle the two constellations, with Deporte deciding to split Serpens into two areas—the head and the tail—separated by the continuous Ophiuchus. These two areas became known as Serpens Caput and Serpens Cauda, "caput" being the Latin word for head and "cauda" the Latin word for tail. In Chinese astronomy, most of the stars of Serpens represented part of a wall surrounding a marketplace, known as Tianshi, which was in Ophiuchus and part of Hercules. 
Serpens also contains a few Chinese constellations. Two stars in the tail represented part of Shilou, the tower with the market office. Another star in the tail represented Liesi, jewel shops. One star in the head (Mu Serpentis) marked Tianru, the crown prince's wet nurse, or sometimes rain. There were two "serpent" constellations in Babylonian astronomy, known as Mušḫuššu and Bašmu. It appears that Mušḫuššu was depicted as a hybrid of a dragon, a lion, and a bird, and loosely corresponded to Hydra. Bašmu was a horned serpent (cf. Ningishzida) and roughly corresponds to the Ὄφις constellation of Eudoxus of Cnidus on which the Ὄφις ("Serpens") of Ptolemy is based.

Serpens is the only one of the 88 modern constellations to be split into two disconnected regions in the sky: "Serpens Caput" (the head) and "Serpens Cauda" (the tail). The constellation is also unusual in that it depends on another constellation for context; specifically, it is held by the Serpent-Bearer, Ophiuchus. Serpens Caput is bordered by Libra to the south, Virgo and Boötes to the east, Corona Borealis to the north, and Ophiuchus and Hercules to the west; Serpens Cauda is bordered by Sagittarius to the south, Scutum and Aquila to the east, and Ophiuchus to the north and west. Covering 636.9 square degrees in total, it ranks 23rd of the 88 constellations in size. It appears prominently in both the northern and southern skies during the Northern Hemisphere's summer. Its main asterism consists of 11 stars, and 108 stars in total are brighter than magnitude 6.5, the traditional limit for naked-eye visibility. Serpens Caput's boundaries, as set by Eugène Delporte in 1930, are defined by a 10-sided polygon, while Serpens Cauda's are defined by a 22-sided polygon. In the equatorial coordinate system, the right ascension coordinates of Serpens Caput's borders lie between and , while the declination coordinates are between and . Serpens Cauda's boundaries lie between right ascensions of and and declinations of and . The International Astronomical Union (IAU) adopted the three-letter abbreviation "Ser" for the constellation in 1922.

Marking the heart of the serpent is the constellation's brightest star, Alpha Serpentis. Traditionally called Unukalhai, it is a red giant of spectral type K2III located approximately 23 parsecs distant with a visual magnitude of 2.630 ± 0.009, meaning it can easily be seen with the naked eye even in areas with substantial light pollution. A faint companion is in orbit around the red giant, although it is not visible to the naked eye. Situated near Alpha is Lambda Serpentis, a magnitude 4.42 ± 0.05 star rather similar to the Sun, positioned only 12 parsecs away. Another solar analog in Serpens is the primary of Psi Serpentis, a binary star located slightly further away at approximately 14 parsecs. Beta, Gamma, and Iota Serpentis form a distinctive triangular shape marking the head of the snake, with Kappa Serpentis (proper name Gudja) roughly midway between Gamma and Iota. The brightest of the four, with an apparent magnitude of roughly 3.67, Beta Serpentis is a white main-sequence star roughly 160 parsecs distant. It is likely, though not certain, that a nearby 10th-magnitude star is physically associated with Beta. The Mira variable R Serpentis, situated between Beta and Gamma, is visible to the naked eye at its 5th-magnitude maximum, but, as is typical of Mira variables, it can fade to below magnitude 14.
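The apparent magnitudes and distances quoted throughout this section are tied together by the distance modulus, M = m − 5·log10(d / 10 pc). A minimal sketch, using the values quoted above for Alpha Serpentis (m = 2.630 at roughly 23 parsecs) and neglecting interstellar extinction:

```python
import math

def absolute_magnitude(apparent_mag: float, distance_pc: float) -> float:
    """Absolute magnitude from the distance modulus, neglecting extinction."""
    return apparent_mag - 5 * math.log10(distance_pc / 10)

# Alpha Serpentis: m = 2.630, d ~ 23 pc (values quoted in the text)
print(f"M = {absolute_magnitude(2.630, 23):+.2f}")  # about +0.82
```

An absolute magnitude near +0.8 corresponds to a few dozen times the Sun's luminosity (the Sun's absolute magnitude is about +4.83), consistent with a modest red giant.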
Gamma Serpentis itself is an F-type subgiant located only 11 parsecs distant and thus is quite bright, being of magnitude 3.84 ± 0.05. The star is known to show solar-like oscillations. Delta Serpentis, forming part of the body of the snake between the heart and the head, is a multiple star system positioned around 70 parsecs from Earth. Consisting of four stars, the system has a total apparent magnitude of 3.79 as viewed from Earth, although two of the stars, with a combined apparent magnitude of 3.80, provide nearly all the light. The primary, a white subgiant, is a Delta Scuti variable with an average apparent magnitude of 4.23. Positioned very near Delta, both in the night sky and likely in actual space at an estimated distance of around 70 parsecs, is the barium star 16 Serpentis. Another notable variable star visible to the naked eye is Chi Serpentis, an Alpha² Canum Venaticorum variable situated midway between Delta and Beta, which varies from its median brightness of 5.33 by 0.03 magnitudes over a period of approximately 1.5 days. The two stars in Serpens Caput that form part of the snake's body below the heart are Epsilon and Mu Serpentis, both third-magnitude A-type main-sequence stars. Each has a peculiarity: Epsilon is an Am star, while Mu is a binary. Located slightly northwest of Mu is 36 Serpentis, another A-type main-sequence star. This star also has a peculiarity; it is a binary whose primary component is a Lambda Boötis star, meaning that it has solar-like amounts of carbon, nitrogen, and oxygen, while containing very low amounts of iron-peak elements. 25 Serpentis, positioned a few degrees northeast of Mu Serpentis, is a spectroscopic binary consisting of a hot B-type giant and an A-type main-sequence star. The primary is a slowly pulsating B star, which causes the system to vary by 0.03 magnitudes.

Serpens Caput contains many RR Lyrae variables, although most are too faint to be seen without professional photography. The brightest is VY Serpentis, at only 10th magnitude. This star's period has been increasing by approximately 1.2 seconds per century. A variable star of a different kind is Tau4 Serpentis, a cool red giant that pulsates between magnitudes 5.89 and 7.07 over 87 days. This star has been found to display an inverse P Cygni profile, in which cold gas falling onto the star creates redshifted hydrogen absorption lines next to the normal emission lines.

Several stars in Serpens have been found to have planets. The brightest, Omega Serpentis, located between Epsilon and Mu, is an orange giant with a planet of at least 1.7 Jupiter masses. NN Serpentis, an eclipsing post-common-envelope binary consisting of a white dwarf and a red dwarf, is very likely to have two planets causing variations in the period of the eclipses. Although it does not have a planet, the solar analog HD 137510 has been found to have a brown dwarf companion within the brown-dwarf desert. PSR B1534+11 is a system consisting of two neutron stars orbiting each other, one of which is a pulsar with a period of 37.9 milliseconds. Situated approximately 1,000 parsecs distant, the system was used to test Albert Einstein's theory of general relativity, validating the system's relativistic parameters to within 0.2% of the values predicted by the theory. The X-ray emission from the system has been found to be present when the non-pulsar star intersects the pulsar's equatorial wind, and the system's orbit has been found to vary slightly.
The brightest star in the tail, Eta Serpentis, is similar to Alpha Serpentis' primary in that it is a red giant of spectral class K. This star, however, is known to exhibit solar-like oscillations over a period of approximately 2.16 hours. The other two stars in Serpens Cauda forming its asterism are Theta and Xi Serpentis. Xi, where the asterism crosses over to Mu Serpentis in the head, is a triple star system located approximately 105 parsecs away. Two of the stars, with a combined apparent magnitude of around 3.5, form a spectroscopic binary with an angular separation of only 2.2 milliarcseconds, and thus cannot be resolved with modern equipment. The primary is a white giant with an excess of strontium. Theta, forming the tip of the tail, is also a multiple system, consisting of two A-type main-sequence stars with a combined apparent magnitude of around 4.1, separated by almost half an arcminute. Lying near the boundary with Ophiuchus are Zeta, Nu, and Omicron Serpentis. All three are 4th-magnitude main-sequence stars, with Nu and Omicron being of spectral type A and Zeta being of spectral type F. Nu is a single star with a 9th-magnitude visual companion, while Omicron is a Delta Scuti variable with amplitude variations of 0.01 magnitudes. In 1909, the symbiotic nova RT Serpentis appeared near Omicron, although it only reached a maximum magnitude of 10.

The star system 59 Serpentis, also known as d Serpentis, is a triple star system consisting of a spectroscopic binary—an A-type star and an orange giant—together with a second orange giant. The system shows irregular variations in brightness between magnitudes 5.17 and 5.2. In 1970, the nova FH Serpentis appeared just slightly north of 59 Serpentis, reaching a maximum brightness of magnitude 4.5. Also near 59 Serpentis, in the Serpens Cloud, are several Orion variables. MWC 297 is a Herbig Be star that in 1994 exhibited a large X-ray flare, increasing fivefold in X-ray luminosity before returning to its quiescent state. The star also appears to possess a circumstellar disk. Another Orion variable in the region is VV Serpentis, a Herbig Ae star that has been found to exhibit Delta Scuti pulsations. Like MWC 297, VV Serpentis is surrounded by a dusty disk; it is also a UX Orionis star, meaning that it shows irregular variations in its brightness.

The star HR 6958, also known as MV Serpentis, is an Alpha² Canum Venaticorum variable that is faintly visible to the naked eye. The star's abundance of most iron-peak metals is around ten times that of the Sun, and up to 1,000 times for heavier elements. It has also been found to contain excess silicon. Barely visible to the naked eye is HD 172365, a likely post-blue straggler in the open cluster IC 4756 that contains a large excess of lithium. HD 172189, also located in IC 4756, is an Algol-type eclipsing binary with a 5.70-day period. The primary star in the system is also a Delta Scuti variable, pulsating at multiple frequencies, which, combined with the eclipses, causes the system to vary by around a tenth of a magnitude.

As the galactic plane passes through it, Serpens Cauda contains many massive OB stars. Several of these are visible to the naked eye, such as NW Serpentis, an early Be star that has been found to be somewhat variable. The variability is interesting: according to one study, it could be one of the first discovered hybrids between Beta Cephei variables and slowly pulsating B stars.
Although not visible to the naked eye, HD 167971 (MY Serpentis) is a Beta Lyrae variable triple system consisting of three very hot O-type stars. The system is a member of the cluster NGC 6604; the two eclipsing stars are both blue giants, one of the very early spectral type O7.5III. The remaining star is either a blue giant or a supergiant of late O or early B spectral type. Also an eclipsing binary, the HD 166734 system consists of two O-type blue supergiants in orbit around each other. Less extreme in mass and temperature is HD 161701, a spectroscopic binary consisting of a B-type primary and an Ap secondary; it is the only spectroscopic binary known to pair a mercury-manganese star with an Ap star. South of the Eagle Nebula, on the border with Sagittarius, is the eclipsing binary W Serpentis, whose primary is a white giant interacting with the secondary. The system has been found to contain an accretion disk, and was one of the first discovered Serpentids—eclipsing binaries showing exceptionally strong far-ultraviolet spectral lines. It is suspected that such Serpentids are at an earlier evolutionary phase, and will evolve first into double periodic variables and then into classical Algol variables. Also near the Eagle Nebula is the eclipsing Wolf–Rayet binary CV Serpentis, consisting of a Wolf–Rayet star and a hot O-type subgiant. The system is surrounded by a ring-shaped nebula, likely formed during the Wolf–Rayet phase of the primary. The eclipses of the system vary erratically, and although there are two theories as to why, neither is completely consistent with the current understanding of stars.

Serpens Cauda contains a few X-ray binaries. One of these, GX 17+2, is a low-mass X-ray binary consisting of a neutron star and, as in all low-mass X-ray binaries, a low-mass star. The system has been classified as a Sco-like Z source, meaning that its accretion rate is near the Eddington limit. The system has also been found to brighten by around 3.5 K-band magnitudes approximately every 3 days, possibly due to the presence of a synchrotron jet. Another low-mass X-ray binary, Serpens X-1, undergoes occasional X-ray bursts. One in particular lasted nearly four hours, possibly explained by the burning of carbon in "a heavy element ocean".

As the galactic plane does not pass through this part of Serpens, many galaxies beyond it can be seen. However, a few structures of the Milky Way Galaxy are present in Serpens Caput, such as Messier 5, a globular cluster positioned approximately 8° southwest of α Serpentis, next to the star 5 Serpentis. It is barely visible to the naked eye under good conditions and is located approximately 25,000 light-years away. Messier 5 contains a large number of known RR Lyrae variable stars and is receding from us at over 50 km/s. The cluster contains two millisecond pulsars, one of which is in a binary, allowing the proper motion of the cluster to be measured. The binary could help our understanding of neutron degenerate matter; the current median mass, if confirmed, would exclude any "soft" equation of state for such matter. The cluster has been used to test for magnetic dipole moments in neutrinos, which could shed light on some hypothetical particles such as the axion. Another globular cluster is Palomar 5, found just south of Messier 5. Many stars are leaving this globular cluster due to the Milky Way's gravity, forming a tidal tail over 30,000 light-years long.
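The roughly 3.5-magnitude K-band brightenings of GX 17+2 mentioned above correspond, via Pogson's relation, to a flux ratio of 10^(0.4·Δm). A one-function sketch (the 3.5-magnitude figure is the one quoted in the text):

```python
def flux_ratio(delta_mag: float) -> float:
    """Brightness ratio corresponding to a magnitude difference (Pogson's relation)."""
    return 10 ** (0.4 * delta_mag)

print(f"{flux_ratio(3.5):.0f}x")  # ~25x brighter during the ~3-day K-band events
```

So the infrared counterpart becomes roughly 25 times brighter during these events, a dramatic change for an accreting system.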
L134/L183 is a dark nebula complex that, along with a third cloud, likely formed from fragments of a single original cloud located 36 degrees away from the galactic plane, a large distance for dark nebulae. The entire complex is thought to be around 140 parsecs distant. L183, also referred to as L134N, is home to several infrared sources, indicating pre-stellar sources thought to represent the first known observation of the contraction phase between cloud cores and prestellar cores. The core is split into three regions, with a combined mass of around 25 solar masses.

Outside of the Milky Way, there are no bright deep-sky objects for amateur astronomers in Serpens Caput; nothing reaches 10th magnitude. The brightest is NGC 5962, a spiral galaxy positioned around 28 megaparsecs distant with an apparent magnitude of 11.34. Slightly fainter is NGC 5921, a barred spiral galaxy with a LINER-type active galactic nucleus situated somewhat closer, at a distance of 21 megaparsecs. A type II supernova was observed in this galaxy in 2001 and was designated SN 2001X. Fainter still are the spirals NGC 5964 and NGC 6118, the latter being host to the supernova SN 2004dk.

Hoag's Object, located 600 million light-years from Earth, is a member of the very rare class of galaxies known as ring galaxies. The outer ring is largely composed of young blue stars, while the core is made up of older yellow stars. The predominant theory regarding its formation is that the progenitor galaxy was a barred spiral galaxy whose arms had velocities too great to keep the galaxy coherent and therefore detached. Arp 220 is another unusual galaxy in Serpens. The prototypical ultraluminous infrared galaxy, Arp 220 is somewhat closer than Hoag's Object, at 250 million light-years from Earth. It consists of two large spiral galaxies in the process of colliding, their nuclei orbiting at a distance of 1,200 light-years and causing extensive star formation throughout both components. It possesses a large cluster of more than a billion stars, partially covered by thick dust clouds near one galaxy's core. Another interacting galaxy pair, albeit at an earlier stage, consists of the galaxies NGC 5953 and NGC 5954. In this case, both are active galaxies, the former a Seyfert 2 galaxy and the latter a LINER-type galaxy. Both are undergoing a burst of star formation triggered by the interaction.

Seyfert's Sextet is a group of six galaxies, four of which are interacting gravitationally and two of which simply appear to be part of the group despite their greater distance. The gravitationally bound cluster lies at a distance of 190 million light-years from Earth and is approximately 100,000 light-years across, making Seyfert's Sextet one of the densest galaxy groups known. Astronomers predict that the four interacting galaxies will eventually merge to form a large elliptical galaxy. The radio source 3C 326 was originally thought to emanate from a giant elliptical galaxy. However, in 1990, it was shown that the source is instead a brighter, smaller galaxy a few arcseconds north. This object, designated 3C 326 N, has enough gas for star formation, but star formation there is inhibited by energy from the radio galaxy nucleus. A much larger galaxy cluster is the redshift-0.0354 Abell 2063. The cluster is thought to be interacting with the nearby galaxy group MKW 3s, based on radial velocity measurements of galaxies and the positioning of the cD galaxy at the center of Abell 2063.
The active galaxy at the center of MKW 3s—NGC 5920—appears to be creating a bubble of hot gas through its radio activity. Near the 5th-magnitude star Pi Serpentis lies AWM 4, a cluster containing an excess of metals in the intracluster medium. The central galaxy, NGC 6051, is a radio galaxy that is probably responsible for this enrichment. Similar to AWM 4, the cluster Abell 2052 has a central cD radio galaxy, 3C 317. This radio galaxy is believed to have restarted after a period of inactivity less than 200 years ago. The galaxy has over 40,000 known globular clusters, the highest known total of any galaxy as of 2002.

Consisting of two quasars with a separation of less than 5 arcseconds, the quasar pair 4C 11.50 is one of the visually closest pairs of quasars in the sky. The two have markedly different redshifts, however, and are thus unrelated. The foreground member of the pair (4C 11.50 A) does not have enough mass to bend light from the background component (4C 11.50 B) strongly enough to produce a lensed image, although it does have a true companion of its own. An even stranger galaxy pair is 3C 321. Unlike the previous pair, the two galaxies making up 3C 321 are interacting with each other and are in the process of merging. Both members appear to be active galaxies; the primary radio galaxy may be responsible for the activity in the secondary by means of the former's jet driving material onto the latter's supermassive black hole. An example of gravitational lensing is found in the radio galaxy 3C 324. First thought to be a single overluminous radio galaxy with a redshift of "z" = 1.206, it was found in 1987 to actually be two galaxies, with the radio galaxy at the aforementioned redshift being lensed by another galaxy at redshift "z" = 0.845. The first example of a multiply-imaged radio galaxy to be discovered, the source appears to be an elliptical galaxy with a dust lane obscuring our view of the visual and ultraviolet emission from the nucleus. At even shorter wavelengths, the BL Lac object PG 1553+113 is a heavy emitter of gamma rays. As of 2007, this object was the most distant known to emit photons with energies in the TeV range. The spectrum is unique, with hard emission in some ranges of the gamma-ray spectrum in stark contrast to soft emission in others. In 2012, the object flared in the gamma-ray spectrum, tripling in luminosity for two nights and allowing the redshift to be accurately measured as "z" = 0.49.

Several gamma-ray bursts (GRBs) have been observed in Serpens Caput, such as GRB 970111, one of the brightest GRBs observed. An optical transient event associated with this GRB has not been found, despite its intensity. The host galaxy also initially proved elusive; however, it now appears to be a Seyfert I galaxy located at redshift "z" = 0.657. The GRB's X-ray afterglow was also much fainter than those of other, dimmer GRBs. More distant is GRB 060526 (redshift "z" = 3.221), from which X-ray and optical afterglows were detected. This GRB was very faint for a long-duration GRB.

Part of the galactic plane passes through the tail, and thus Serpens Cauda is rich in deep-sky objects within our own galaxy. The Eagle Nebula and its associated star cluster, Messier 16, lie 7,000 light-years from Earth in the direction of the galactic center. The nebula measures 70 light-years by 50 light-years and contains the Pillars of Creation, three dust clouds that became famous for the image taken by the Hubble Space Telescope.
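Given the 70-by-50-light-year extent and roughly 7,000-light-year distance just quoted, the Eagle Nebula's apparent size on the sky follows from the small-angle approximation θ ≈ size/distance. A quick sketch (all figures taken from the text):

```python
import math

def angular_size_arcmin(linear_size_ly: float, distance_ly: float) -> float:
    """Apparent angular size in arcminutes, via the small-angle approximation."""
    theta_rad = linear_size_ly / distance_ly
    return math.degrees(theta_rad) * 60

# Eagle Nebula: ~70 x 50 light-years at ~7,000 light-years
print(f"{angular_size_arcmin(70, 7000):.0f}' x {angular_size_arcmin(50, 7000):.0f}'")
# ~34' x 25' -- slightly larger than the full Moon's ~31' diameter
```

This is why the nebula, despite its low surface brightness, spans a field comparable to the full Moon in photographs.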
The stars being born in the Eagle Nebula, together with those already formed, which are approximately 5 million years old, have an average temperature of 45,000 kelvins and produce prodigious amounts of radiation that will eventually destroy the dust pillars. Despite its fame, the Eagle Nebula is fairly dim, with an integrated magnitude of approximately 6.0. The star-forming regions in the nebula are often evaporating gaseous globules; unlike Bok globules, each holds only one protostar.

North of Messier 16, at a distance of approximately 2,000 parsecs, is the OB association Serpens OB2, containing over 100 OB stars. Around 5 million years old, the association appears to still contain star-forming regions, and the light from its stars is illuminating the HII region S 54. Within this HII region is the open cluster NGC 6604, which is the same age as the surrounding OB association and is now thought to be simply its densest part. The cluster appears to be producing a thermal chimney of ionized gas, caused by the interaction of gas from the galactic disk with the galactic halo. Another open cluster in Serpens Cauda is IC 4756, containing at least one naked-eye star, HD 172365 (another naked-eye star in the vicinity, HD 171586, is most likely unrelated). Positioned approximately 440 parsecs distant, the cluster is estimated to be around 800 million years old, quite old for an open cluster. Despite the presence of the Milky Way in Serpens Cauda, one globular cluster can be found there: NGC 6535, although invisible to the naked eye, can be made out in small telescopes just north of Zeta Serpentis. Rather small and sparse, the cluster contains no known RR Lyrae variables, which is unusual for a globular cluster.

MWC 922 is a star surrounded by a planetary nebula. Dubbed the Red Square Nebula due to its similarities to the Red Rectangle Nebula, the planetary nebula appears to be a nearly perfect square with a dark band around the equatorial regions. The nebula contains concentric rings, similar to those seen around the supernova SN 1987A. MWC 922 itself is an FS Canis Majoris variable, meaning that it is a Be star showing exceptionally bright hydrogen emission lines as well as select forbidden lines, likely due to the presence of a close binary. East of Xi Serpentis is another planetary nebula, Abell 41, containing the binary star MT Serpentis at its center. The nebula appears to have a bipolar structure, and the axis of symmetry of the nebula has been found to lie within 5° of the line perpendicular to the orbital plane of the stars, strengthening the link between binary stars and bipolar planetary nebulae. At the other end of the stellar age spectrum is L483, a dark nebula that contains the protostar IRAS 18418-0440. Although classified as a class 0 protostar, it has some unusual features for such an object, such as a lack of high-velocity stellar winds, and it has been proposed that this object is in transition between class 0 and class I. A variable nebula exists around the protostar, although it is only visible in infrared light.

The Serpens cloud is a massive star-forming molecular cloud situated in the southern part of Serpens Cauda. Only two million years old and 420 parsecs distant, the cloud is known to contain many protostars, such as Serpens FIRS 1 and Serpens SVS 20. The Serpens South protocluster was uncovered by NASA's Spitzer Space Telescope in the southern portion of the cloud, and it appears that star formation is still continuing in the region.
Another site of star formation is the Westerhout 40 complex, consisting of a prominent HII region adjacent to a molecular cloud. Located around 500 parsecs distant, it is one of the nearest massive regions of star formation, but because the molecular cloud obscures the HII region, making it and its embedded cluster difficult to observe visually, the complex is not as well studied as others. The embedded cluster likely contains over 600 stars above 0.1 solar masses, with several massive stars, including at least one O-type star, responsible for illuminating the HII region and producing a bubble.

Despite the presence of the Milky Way, several active galaxies are visible in Serpens Cauda as well, such as PDS 456, found near Xi Serpentis. The most intrinsically luminous nearby active galaxy, this AGN has been found to be extremely variable in the X-ray spectrum, which has shed light on the nature of the supermassive black hole at its center, likely a Kerr black hole. It is possible that the quasar is undergoing a transition from an ultraluminous infrared galaxy to a classical radio-quiet quasar, but there are problems with this theory, and the object appears to be exceptional, not fitting completely within current classification systems. Nearby is NRAO 530, a blazar that has been known to flare in the X-rays occasionally. One of these flares lasted less than 2,000 seconds, making it the shortest blazar flare ever observed as of 2004. The blazar also appears to show periodic variability in its radio-wave output over two different periods, of six and ten years.

There are two daytime meteor showers that radiate from Serpens, the Omega Serpentids and the Sigma Serpentids. Both showers peak between December 18 and December 25.
https://en.wikipedia.org/wiki?curid=28149
Sculptor Group The Sculptor Group is a loose group of galaxies visible near the south galactic pole. The group is one of the closest groups of galaxies to the Local Group; the distance to the center of the group from the Milky Way is approximately . The Sculptor Galaxy (NGC 253) and a few other galaxies form a gravitationally bound core at the center of this group. A few other galaxies at the periphery may be associated with the group but may not be gravitationally bound. Because most of the galaxies in this group are only weakly gravitationally bound, the group may also be described as a filament. It is considered to be at an early stage of evolution, in which galaxies are still falling into the group along filamentary structures. The table below lists galaxies that have been identified as associated with the Sculptor Galaxy (and hence with the group) by I. D. Karachentsev and collaborators. The object names used in the above table differ from the names used by Karachentsev and collaborators; NGC, IC, UGC, and PGC numbers have been used when possible to allow for easier referencing. The irregular galaxy NGC 55, the spiral galaxy NGC 300, and their companion galaxies have been considered by many researchers to be part of this group. However, recent distance measurements to these and other galaxies in the same region of the sky show that NGC 55, NGC 300, and their companions may simply be foreground galaxies that are physically unassociated with the Sculptor Group. The galaxies NGC 24 and NGC 45 are located in the vicinity of the Sculptor Group, but are now considered background objects.
https://en.wikipedia.org/wiki?curid=28150
State (polity) A state is a polity under a system of governance. There is no undisputed definition of a state. A widely used definition, from the German sociologist Max Weber, is that a "state" is a polity that maintains a monopoly on the legitimate use of violence, although other definitions are not uncommon. Some states are sovereign (known as sovereign states), while others are subject to external sovereignty or hegemony, wherein supreme authority lies in another state. The term "state" also applies to federated states that are members of a federation, in which sovereignty is shared between member states and a federal body. Most of the human population has existed within a state system for millennia; however, for most of prehistory people lived in stateless societies. The first states arose about 5,500 years ago in conjunction with the rapid growth of cities, the invention of writing, and the codification of new forms of religion. Over time, a variety of different forms developed, employing a variety of justifications for their existence (such as divine right, the theory of the social contract, etc.). Today, the modern nation state is the predominant form of state to which people are subject.

The word "state" and its cognates in some other European languages ("stato" in Italian, "estado" in Spanish and Portuguese, "état" in French, "Staat" in German) ultimately derive from the Latin word "status", meaning "condition, circumstances". The English noun "state" in the generic sense "condition, circumstances" predates the political sense. It was introduced into Middle English c. 1200, both from Old French and directly from Latin. With the revival of Roman law in 14th-century Europe, the term came to refer to the legal standing of persons (such as the various "estates of the realm" – noble, common, and clerical), and in particular the special status of the king. The highest estates, generally those with the most wealth and social rank, were those that held power. The word also had associations with Roman ideas (dating back to Cicero) about the "status rei publicae", the "condition of public matters". In time, the word lost its reference to particular social groups and became associated with the legal order of the entire society and the apparatus of its enforcement. The early 16th-century works of Machiavelli (especially "The Prince") played a central role in popularizing the use of the word "state" in something similar to its modern sense. The contrasting of church and state dates to the 16th century. The North American colonies were called "states" as early as the 1630s. The expression "L'État, c'est moi" ("I am the State"), attributed to Louis XIV of France, is probably apocryphal; it was first recorded in the late 18th century.

There is no academic consensus on the most appropriate definition of the state. The term "state" refers to a set of different, but interrelated and often overlapping, theories about a certain range of political phenomena. The act of defining the term can be seen as part of an ideological conflict, because different definitions lead to different theories of state function and, as a result, validate different political strategies. According to Jeffrey and Painter, "if we define the 'essence' of the state in one place or era, we are liable to find that in another time or space something which is also understood to be a state has different 'essential' characteristics". Different definitions of the state often place an emphasis on either the 'means' or the 'ends' of states.
Means-related definitions include those of Max Weber and Charles Tilly, both of whom define the state by its violent means. For Weber, the state "is a human community that (successfully) claims the monopoly of the legitimate use of physical force within a given territory" (Politics as a Vocation), while Tilly characterises states as "coercion-wielding organisations" (Coercion, Capital, and European States). Ends-related definitions emphasise instead the teleological aims and purposes of the state. Marxist thought regards the ends of the state as the perpetuation of class domination in favour of the ruling class, which, under the capitalist mode of production, is the bourgeoisie. The state exists to defend the ruling class's claims to private property and its capture of surplus profits at the expense of the proletariat. Indeed, Marx claimed that "the executive of the modern state is nothing but a committee for managing the common affairs of the whole bourgeoisie" (Communist Manifesto). Liberal thought provides another possible teleology of the state. According to John Locke, the goal of the state/commonwealth was "the preservation of property" (Second Treatise of Government), with 'property' in Locke's work referring not only to personal possessions but also to one's life and liberty. On this account, the state provides the basis for social cohesion and productivity, creating incentives for wealth creation by providing guarantees of protection for one's life, liberty, and personal property.

The most commonly used definition is Max Weber's, which describes the state as a compulsory political organization with a centralized government that maintains a monopoly of the legitimate use of force within a certain territory. General categories of state institutions include administrative bureaucracies, legal systems, and military or religious organizations. Another commonly accepted definition of the state is the one given at the Montevideo Convention on Rights and Duties of States in 1933. It provides that "[t]he state as a person of international law should possess the following qualifications: (a) a permanent population; (b) a defined territory; (c) government; and (d) capacity to enter into relations with the other states." And that "[t]he federal state shall constitute a sole person in the eyes of international law." According to the "Oxford English Dictionary", a state is "a. an organized political community under one government; a commonwealth; a nation. b. such a community forming part of a federal republic, esp the United States of America".

Confounding the definition problem is that "state" and "government" are often used as synonyms in common conversation and even in some academic discourse. According to this definitional scheme, states are nonphysical persons of international law, while governments are organizations of people. The relationship between a government and its state is one of representation and authorized agency.

States may be classified by political philosophers as sovereign if they are not dependent on, or subject to, any other power or state. Other states are subject to external sovereignty or hegemony, where ultimate sovereignty lies in another state. Many states are federated states which participate in a federal union. A federated state is a territorial and constitutional community forming part of a federation. (Compare confederacies or confederations such as Switzerland.)
Such states differ from sovereign states in that they have transferred a portion of their sovereign powers to a federal government. One can commonly, and sometimes readily (but not necessarily usefully), classify states according to their apparent make-up or focus. The concept of the nation-state, theoretically or ideally co-terminous with a "nation", became very popular in Europe by the 20th century, but occurred rarely elsewhere or at other times. In contrast, some states have sought to make a virtue of their multi-ethnic or multi-national character (Habsburg Austria-Hungary, for example, or the Soviet Union), and have emphasised unifying characteristics such as autocracy, monarchical legitimacy, or ideology. Imperial states have sometimes promoted notions of racial superiority. Other states may bring ideas of commonality and inclusiveness to the fore: note the "res publica" of ancient Rome and the "Rzeczpospolita" of Poland-Lithuania, which find echoes in the modern-day republic. The concept of temple states centred on religious shrines occurs in some discussions of the ancient world. Relatively small city-states, once a common and often successful form of polity, have become rarer and comparatively less prominent in modern times, although a number of them survive as federated states, like the present-day German city-states, or as otherwise autonomous entities with limited sovereignty, like Hong Kong, Gibraltar, and Ceuta. To some extent, urban secession, the creation of a new city-state (sovereign or federated), continues to be discussed in the early 21st century in cities such as London.

A state can be distinguished from a government. The state is the organization, while the government is the particular group of people, the administrative bureaucracy, that controls the state apparatus at a given time. That is, governments are the means through which state power is employed. States are served by a continuous succession of different governments. States are immaterial and nonphysical social objects, whereas governments are groups of people with certain coercive powers. Each successive government is composed of a specialized and privileged body of individuals, who monopolize political decision-making and are separated by status and organization from the population as a whole.

States can also be distinguished from the concept of a "nation", where "nation" refers to a cultural-political community of people. A nation-state refers to a situation where a single ethnicity is associated with a specific state.

In classical thought, the state was identified with both political society and civil society as a form of political community, while modern thought distinguished the nation state as a political society from civil society as a form of economic society. Thus, in modern thought, the state is contrasted with civil society. Antonio Gramsci believed that civil society is the primary locus of political activity because it is where all forms of "identity formation, ideological struggle, the activities of intellectuals, and the construction of hegemony take place", and that civil society was the nexus connecting the economic and political spheres. Arising out of the collective actions of civil society is what Gramsci calls "political society", which Gramsci differentiates from the notion of the state as a polity.
He stated that politics was not a "one-way process of political management" but, rather, that the activities of civil organizations conditioned the activities of political parties and state institutions, and were conditioned by them in turn. Louis Althusser argued that civil organizations such as churches, schools, and the family are part of an "ideological state apparatus", which complements the "repressive state apparatus" (such as the police and military) in reproducing social relations. Jürgen Habermas spoke of a public sphere that was distinct from both the economic and the political sphere. Given the role that many social groups have in the development of public policy, and the extensive connections between state bureaucracies and other institutions, it has become increasingly difficult to identify the boundaries of the state. Privatization, nationalization, and the creation of new regulatory bodies also change the boundaries of the state in relation to society. Often the nature of quasi-autonomous organizations is unclear, generating debate among political scientists on whether they are part of the state or of civil society. Some political scientists thus prefer to speak of policy networks and decentralized governance in modern societies, rather than of state bureaucracies and direct state control over policy.

The earliest forms of the state emerged whenever it became possible to centralize power in a durable way. Agriculture and writing are almost everywhere associated with this process: agriculture because it allowed for the emergence of a social class of people who did not have to spend most of their time providing for their own subsistence, and writing (or an equivalent of writing, like Inca quipus) because it made possible the centralization of vital information. The first known states were created in the Fertile Crescent, India, China, Mesoamerica, the Andes, and elsewhere, but it is only in relatively modern times that states have almost completely displaced alternative "stateless" forms of political organization of societies all over the planet. Roving bands of hunter-gatherers and even fairly sizable and complex tribal societies based on herding or agriculture have existed without any full-time specialized state organization, and these "stateless" forms of political organization have in fact prevailed for all of the prehistory and much of the history of the human species and civilization.

Initially, states emerged over territories built by conquest, in which one culture, one set of ideals, and one set of laws were imposed by force or threat on diverse populations by a civilian and military bureaucracy. Currently, that is not always the case and there are multinational states, federated states, and autonomous areas within states. Since the late 19th century, virtually the entirety of the world's inhabitable land has been parcelled up into areas with more or less definite borders claimed by various states. Earlier, quite large land areas had been either unclaimed or uninhabited, or inhabited by nomadic peoples who were not organised as states. However, even within present-day states there are vast areas of wilderness, like the Amazon rainforest, which are uninhabited or inhabited solely or mostly by indigenous people (some of whom remain uncontacted). Also, there are states which do not hold de facto control over all of their claimed territory or where this control is challenged.
Currently, the international community comprises around 200 sovereign states, the vast majority of which are represented in the United Nations. For most of human history, people have lived in stateless societies, characterized by a lack of concentrated authority and by the absence of large inequalities in economic and political power; the anthropologist Tim Ingold, among others, has written on the distinctive character of such societies.

During the Neolithic period, human societies underwent major cultural and economic changes, including the development of agriculture, the formation of sedentary societies and fixed settlements, increasing population densities, and the use of pottery and more complex tools. Sedentary agriculture led to the development of property rights, the domestication of plants and animals, and larger family sizes. It also provided the basis for the centralized state form by producing a large surplus of food, which created a more complex division of labor by enabling people to specialize in tasks other than food production. Early states were characterized by highly stratified societies, with a privileged and wealthy ruling class that was subordinate to a monarch. The ruling classes began to differentiate themselves through forms of architecture and other cultural practices that were different from those of the subordinate laboring classes. In the past, it was suggested that the centralized state developed to administer large public works systems (such as irrigation systems) and to regulate complex economies. However, modern archaeological and anthropological evidence does not support this thesis, pointing to the existence of several non-stratified and politically decentralized complex societies.

Mesopotamia is generally considered to be the location of the earliest civilization or complex society, meaning that it contained cities, full-time division of labor, social concentration of wealth into capital, unequal distribution of wealth, ruling classes, community ties based on residency rather than kinship, long-distance trade, monumental architecture, standardized forms of art and culture, writing, and mathematics and science. It was the world's first literate civilization, and formed the first sets of written laws.

Although state-forms existed before the rise of ancient Greek civilization, the Greeks were the first people known to have explicitly formulated a political philosophy of the state, and to have rationally analyzed political institutions. Prior to this, states were described and justified in terms of religious myths. Several important political innovations of classical antiquity came from the Greek city-states and the Roman Republic. The Greek city-states before the 4th century granted citizenship rights to their free population, and in Athens these rights were combined with a directly democratic form of government that was to have a long afterlife in political thought and history.

During medieval times in Europe, the state was organized on the principle of feudalism, and the relationship between lord and vassal became central to social organization. Feudalism led to the development of greater social hierarchies. The formalization of the struggles over taxation between the monarch and other elements of society (especially the nobility and the cities) gave rise to what is now called the Ständestaat, or the state of Estates, characterized by parliaments in which key social groups negotiated with the king about legal and economic matters.
These estates of the realm sometimes evolved in the direction of fully fledged parliaments, but sometimes lost out in their struggles with the monarch, leading to greater centralization of lawmaking and military power in his hands. Beginning in the 15th century, this centralizing process gave rise to the absolutist state. Cultural and national homogenization figured prominently in the rise of the modern state system. Since the absolutist period, states have largely been organized on a national basis. The concept of a national state, however, is not synonymous with nation state. Even in the most ethnically homogeneous societies there is not always a complete correspondence between state and nation, hence the active role often taken by the state to promote nationalism through emphasis on shared symbols and national identity.

Most political theories of the state can roughly be classified into two categories. The first are known as "liberal" or "conservative" theories, which treat capitalism as a given and then concentrate on the function of states in capitalist society. These theories tend to see the state as a neutral entity separated from society and the economy. Marxist and anarchist theories, on the other hand, see politics as intimately tied in with economic relations, and emphasize the relation between economic power and political power. They see the state as a partisan instrument that primarily serves the interests of the upper class.

Anarchism is a political philosophy which considers the state and hierarchies to be immoral, unnecessary, and harmful, and instead promotes a stateless society, or anarchy: a self-managed, self-governed society based on voluntary, cooperative institutions. Anarchists believe that the state is inherently an instrument of domination and repression, no matter who is in control of it, and note that the state possesses the monopoly on the legal use of violence. Unlike Marxists, anarchists believe that the revolutionary seizure of state power should not be a political goal. They believe instead that the state apparatus should be completely dismantled and an alternative set of social relations created, which are not based on state power at all. Various Christian anarchists, such as Jacques Ellul, have identified the State and political power as the Beast in the Book of Revelation.

Marx and Engels were clear that the communist goal was a classless society in which the state would have "withered away", replaced only by the "administration of things". Their views are found throughout their Collected Works and address past or then-extant state forms from an analytical and tactical viewpoint, but not future social forms; speculation about the latter is generally antithetical to groups considering themselves Marxist who, not having conquered the existing state power(s), are not in the position of supplying the institutional form of an actual society. To that extent, there is no single "Marxist theory of the state"; rather, several different purportedly "Marxist" theories have been developed by adherents of Marxism.

Marx's early writings portrayed the bourgeois state as parasitic, built upon the superstructure of the economy and working against the public interest. He also wrote that the state mirrors class relations in society in general, acting as a regulator and repressor of class struggle, and as a tool of political power and domination for the ruling class.
The "Communist Manifesto" claimed that the state to be nothing more than "a committee for managing the common affairs of the "bourgeoisie". For Marxist theorists, the role of the modern bourgeois state is determined by its function in the global capitalist order. Ralph Miliband argued that the ruling class uses the state as its instrument to dominate society by virtue of the interpersonal ties between state officials and economic elites. For Miliband, the state is dominated by an elite that comes from the same background as the capitalist class. State officials therefore share the same interests as owners of capital and are linked to them through a wide array of social, economic, and political ties. Gramsci's theories of state emphasized that the state is only one of the institutions in society that helps maintain the hegemony of the ruling class, and that state power is bolstered by the ideological domination of the institutions of civil society, such as churches, schools, and mass media. Pluralists view society as a collection of individuals and groups, who are competing for political power. They then view the state as a neutral body that simply enacts the will of whichever groups dominate the electoral process. Within the pluralist tradition, Robert Dahl developed the theory of the state as a neutral arena for contending interests or its agencies as simply another set of interest groups. With power competitively arranged in society, state policy is a product of recurrent bargaining. Although pluralism recognizes the existence of inequality, it asserts that all groups have an opportunity to pressure the state. The pluralist approach suggests that the modern democratic state's actions are the result of pressures applied by a variety of organized interests. Dahl called this kind of state a polyarchy. Pluralism has been challenged on the ground that it is not supported by empirical evidence. Citing surveys showing that the large majority of people in high leadership positions are members of the wealthy upper class, critics of pluralism claim that the state serves the interests of the upper class rather than equitably serving the interests of all social groups. Jürgen Habermas believed that the base-superstructure framework, used by many Marxist theorists to describe the relation between the state and the economy, was overly simplistic. He felt that the modern state plays a large role in structuring the economy, by regulating economic activity and being a large-scale economic consumer/producer, and through its redistributive welfare state activities. Because of the way these activities structure the economic framework, Habermas felt that the state cannot be looked at as passively responding to economic class interests. Michel Foucault believed that modern political theory was too state-centric, saying "Maybe, after all, the state is no more than a composite reality and a mythologized abstraction, whose importance is a lot more limited than many of us think." He thought that political theory was focusing too much on abstract institutions, and not enough on the actual practices of government. In Foucault's opinion, the state had no essence. He believed that instead of trying to understand the activities of governments by analyzing the properties of the state (a reified abstraction), political theorists should be examining changes in the practice of government to understand changes in the nature of the state. 
Foucault argues that it is technology that has made the state so elusive and so successful, and that instead of seeing the state as something to be toppled, in the manner of the Marxist and anarchist understandings, we should look at it as a technological manifestation, a system with many heads. Every scientific and technological advance, Foucault argues, has come into the service of the state, and it is with the emergence of the mathematical sciences, and essentially with the formation of mathematical statistics, that one gets an understanding of the complex technology by which the modern state was so successfully created. Foucault insists that the nation state was not a historical accident but a deliberate production, in which the modern state had to manage, in step with the emerging practice of the police (cameral science), the 'allowing' of the population to 'come in' to "jus gentium" and "civitas" (civil society) after having been deliberately excluded for several millennia. Democracy (the newly formed voting franchise), Foucault insists, was not what political revolutionaries and political philosophers have always painted it as, a cry for political freedom or a desire to be accepted by the 'ruling elite'; it was part of a skilled endeavour to convert older technologies of power readily available from the medieval period, such as the "translatio imperii", the "plenitudo potestatis" and "extra Ecclesiam nulla salus", into instruments of mass persuasion over the future industrial 'political' population, in which that population was now asked to insist upon itself that "the president must be elected". These political symbol agents, represented by the pope and the president, were thereby democratized. Foucault calls these new forms of technology biopower, and they form part of the political inheritance he calls biopolitics. Heavily influenced by Gramsci, Nicos Poulantzas, a Greek neo-Marxist theorist, argued that capitalist states do not always act on behalf of the ruling class, and that when they do, it is not necessarily because state officials consciously strive to do so, but because the 'structural' position of the state is configured in such a way as to ensure that the long-term interests of capital are always dominant. Poulantzas' main contribution to the Marxist literature on the state was the concept of the 'relative autonomy' of the state. While Poulantzas' work on 'state autonomy' has served to sharpen and specify a great deal of Marxist literature on the state, his own framework came under criticism for its 'structural functionalism'. The state can also be considered a single structural universe: the historical reality that takes shape in societies characterized by a codified or crystallized law, with a power organized hierarchically and justified by the law that gives it authority, with a well-defined social and economic stratification, with an economic and social organization that gives the society precise organic characteristics, and with one or more religious organizations that justify the power expressed by such a society and support the religious beliefs of individuals, accepted by society as a whole. 
Such a structural universe evolves in a cyclical manner, presenting two different historical phases (a mercantile phase, or "open society", and a feudal phase, or "closed society") with characteristics so divergent that they can qualify as two different levels of civilization. These levels are never definitive, but alternate cyclically, and each of the two can be considered progressive (in a partisan way, totally independent of the real value of the well-being achieved, the degrees of freedom granted, the equality realized, and the concrete possibility of achieving further progress in the level of civilization), even by the most cultured, educated and intellectually equipped fractions of the various societies, in both historical phases. State autonomy theorists believe that the state is an entity that is impervious to external social and economic influence, and has interests of its own. "New institutionalist" writings on the state, such as the works of Theda Skocpol, suggest that state actors are to an important degree autonomous. In other words, state personnel have interests of their own, which they can and do pursue independently of (at times in conflict with) actors in society. Since the state controls the means of coercion, and given the dependence of many groups in civil society on the state for achieving any goals they may espouse, state personnel can to some extent impose their own preferences on civil society. States generally rely on a claim to some form of political legitimacy in order to maintain domination over their subjects. The rise of the modern-day state system was closely related to changes in political thought, especially concerning the changing understanding of legitimate state power and control. Early modern defenders of absolutism (absolute monarchy), such as Thomas Hobbes and Jean Bodin, undermined the doctrine of the divine right of kings by arguing that the power of kings should be justified by reference to the people. Hobbes in particular went further, arguing that political power should be justified with reference to the individual (Hobbes wrote in the time of the English Civil War), not just to the people understood collectively. Both Hobbes and Bodin thought they were defending the power of kings, not advocating democracy, but their arguments about the nature of sovereignty were fiercely resisted by more traditional defenders of the power of kings, such as Sir Robert Filmer in England, who thought that such defenses ultimately opened the way to more democratic claims. Max Weber identified three main sources of political legitimacy in his works. The first, legitimacy based on traditional grounds, is derived from a belief that things should be as they have been in the past, and that those who defend these traditions have a legitimate claim to power. The second, legitimacy based on charismatic leadership, is devotion to a leader or group that is viewed as exceptionally heroic or virtuous. The third is rational-legal authority, whereby legitimacy is derived from the belief that a certain group has been placed in power in a legal manner, and that its actions are justifiable according to a specific code of written laws. Weber believed that the modern state is characterized primarily by appeals to rational-legal authority. Some states are often labeled as "weak" or "failed". In David Samuels's words, "...a failed state occurs when sovereignty over claimed territory has collapsed or was never effective at all". Authors like Samuels and Joel S. 
Migdal have explored the emergence of weak states, how they differ from Western "strong" states, and the consequences for the economic development of developing countries.
Early state formation
To understand the formation of weak states, Samuels compares the formation of European states in the 1600s with the conditions under which more recent states were formed in the twentieth century. In this line of argument, the state allows a population to resolve a collective action problem, in which citizens recognize the authority of the state and the state in turn exercises the power of coercion over them. This kind of social organization required a decline in the legitimacy of traditional forms of ruling (like religious authorities) and their replacement by an increase in the legitimacy of depersonalized rule; an increase in the central government's sovereignty; and an increase in the organizational complexity of the central government (bureaucracy). The transition to this modern state was possible in Europe around 1600 thanks to the confluence of factors like the technological developments in warfare, which generated strong incentives to tax and to consolidate central structures of governance to respond to external threats. This was complemented by increases in food production (as a result of productivity improvements), which made it possible to sustain a larger population and so increased the complexity and centralization of states. Finally, cultural changes challenged the authority of monarchies and paved the way for the emergence of modern states.
Late state formation
The conditions that enabled the emergence of modern states in Europe were different for other countries that started this process later. As a result, many of these states lack effective capabilities to tax and extract revenue from their citizens, which results in problems like corruption, tax evasion and low economic growth. Unlike the European case, late state formation occurred in a context of limited international conflict, which diminished the incentives to tax and to increase military spending. Also, many of these states emerged from colonization in a state of poverty and with institutions designed to extract natural resources, both of which have made it more difficult to form states. European colonization also defined many arbitrary borders that mixed different cultural groups under the same national identities, which has made it difficult to build states with legitimacy among the whole population, since some states have to compete for it with other forms of political identity. As a complement to this argument, Migdal gives a historical account of how sudden social changes in the Third World during the Industrial Revolution contributed to the formation of weak states. The expansion of international trade that started around 1850 brought profound changes to Africa, Asia and Latin America, introduced with the objective of assuring the availability of raw materials for the European market. These changes consisted of: i) reforms to landownership laws with the objective of integrating more land into the international economy; ii) increases in the taxation of peasants and small landowners, and the collection of these taxes in cash instead of in kind, as had been usual up to that moment; and iii) the introduction of new and less costly modes of transportation, mainly railroads. 
As a result, the traditional forms of social control became obsolete, deteriorating the existing institutions and opening the way to the creation of new ones, which did not necessarily lead these countries to build strong states. This fragmentation of the social order induced a political logic in which these states were captured to some extent by "strongmen", who were capable of taking advantage of the above-mentioned changes and who challenged the sovereignty of the state. As a result, this decentralization of social control impeded the consolidation of strong states.
https://en.wikipedia.org/wiki?curid=28151
Stevia Stevia is a sweetener and sugar substitute derived from the leaves of the plant species "Stevia rebaudiana", native to Brazil and Paraguay. The active compounds are steviol glycosides (mainly stevioside and rebaudioside), which have 30 to 150 times the sweetness of sugar, are heat-stable, pH-stable, and not fermentable. The body does not metabolize the glycosides in stevia, so, like some artificial sweeteners, it contributes zero calories. Stevia's taste has a slower onset and longer duration than that of sugar, and some of its extracts may have a bitter or licorice-like aftertaste at high concentrations. The legal status of stevia as a food additive or dietary supplement varies from country to country. In the United States, high-purity stevia glycoside extracts have been generally recognized as safe (GRAS) since 2008, and are allowed in food products, but stevia leaf and crude extracts do not have GRAS or Food and Drug Administration (FDA) approval for use in food. The European Union approved stevia additives in 2011, while in Japan stevia has been widely used as a sweetener for decades. The plant "Stevia rebaudiana" has been used for more than 1,500 years by the Guaraní peoples of South America, who called it "ka'a he'ê" ("sweet herb"). The leaves have been used traditionally for hundreds of years in both Brazil and Paraguay to sweeten local teas and medicines, and as a "sweet treat". The genus was named for Spanish botanist and physician Petrus Jacobus Stevus (Pedro Jaime Esteve, 1500–1556), a professor of botany at the University of Valencia. In 1899, Swiss botanist Moisés Santiago Bertoni, while conducting research in eastern Paraguay, first described the plant and the sweet taste in detail. Only limited research was conducted on the topic until, in 1931, two French chemists isolated the glycosides that give stevia its sweet taste. During the 1990s, the United States Food and Drug Administration (FDA) received two petitions requesting that stevia be classified as generally recognized as safe (GRAS), but the FDA "disagreed with [the] conclusions [detailed in the petitions]". Stevia remained banned for all uses until the Dietary Supplement Health and Education Act of 1994, after which the FDA revised its stance and permitted stevia to be used as a dietary supplement, although still not as a food additive. In 1999, prompted by early studies, the European Commission banned stevia's use in food products within the European Union pending further research. In 2006, research data compiled in the safety evaluation released by the World Health Organization found no adverse effects. In December 2008, the FDA gave a "no objection" approval for GRAS status to Truvia (developed by Cargill and the Coca-Cola Company) and PureVia (developed by PepsiCo and the Whole Earth Sweetener Company, a subsidiary of Merisant), both of which use rebaudioside A derived from the stevia plant. However, the FDA said that these products are not stevia, but a highly purified product. In 2015, the FDA still regarded stevia as "not an approved food additive", and stated that it "has not been affirmed as GRAS in the United States due to inadequate toxicological information". In June 2016, the U.S. Customs and Border Protection issued an order of detention for stevia products made in China based on information that the products were made using prison labor. As of 2017, high-purity stevia glycosides are considered safe and allowable as ingredients in food products sold in the United States. 
In the early 1970s, sweeteners such as cyclamate and saccharin were gradually decreased or removed from a variant formulation of Coca-Cola. Consequently, use of stevia as an alternative began in Japan, with the aqueous extract of the leaves yielding purified steviosides developed as sweeteners. The first commercial stevia sweetener in Japan was produced by the Japanese firm Morita Kagaku Kogyo Co., Ltd. in 1971. The Japanese have been using stevia in food products and soft drinks (including Coca-Cola), and for table use. In 2006, Japan consumed more stevia than any other country, with stevia accounting for 40% of the sweetener market. In the mid-1980s, stevia became popular in U.S. natural foods and health food industries, as a noncaloric natural sweetener for teas and weight-loss blends. The makers of the synthetic sweetener NutraSweet (at the time Monsanto) asked the FDA to require testing of the herb. As of 2006, China was the world's largest exporter of stevioside products. In 2007, the Coca-Cola Company announced plans to obtain approval for its stevia-derived sweetener, Rebiana, for use as a food additive within the United States by 2009, as well as plans to market Rebiana-sweetened products in 12 countries that allow stevia's use as a food additive. In May 2008, Coca-Cola and Cargill announced the availability of Truvia, a consumer brand stevia sweetener containing erythritol and Rebiana, which the FDA permitted as a food additive in December 2008. Coca-Cola announced intentions to release stevia-sweetened beverages in late December 2008. From 2013 onwards, Coca-Cola Life, containing stevia as a sweetener, was launched in various countries around the world. Shortly afterward, PepsiCo and Pure Circle announced PureVia, their brand of stevia-based sweetener, but withheld release of beverages sweetened with rebaudioside A until receipt of FDA confirmation. Since the FDA permitted Truvia and PureVia, both Coca-Cola and PepsiCo have introduced products that contain their new sweeteners. Rebaudioside A has the least bitterness of all the steviol glycosides in the "Stevia rebaudiana" plant. To produce rebaudioside A commercially, stevia plants are dried and subjected to a water extraction process. This crude extract contains about 50% rebaudioside A. The various glycosides are separated and purified via crystallization techniques, typically using ethanol or methanol as solvent. Stevia extracts and derivatives are produced industrially and marketed under different trade names. Glycosides are molecules that contain glucose residues bound to other non-sugar substances called aglycones (molecules with other sugars are polysaccharides). Preliminary research indicates that the tongue's taste receptors react to the glycosides and transduce the sweet taste sensation and the lingering bitter aftertaste by direct activation of sweet and bitter receptors. According to basic research, steviol glycosides and steviol interact with a protein channel called TRPM5, potentiating the signal from the sweet or bitter receptors and amplifying the taste of other sweet, bitter and umami tastants. The synergistic effect of the glycosides on the sweet receptor and TRPM5 explains the sweetness sensation. Some steviol glycosides (rebaudioside A) are perceived as sweeter than others (stevioside). Steviol cannot be further digested in the digestive tract; it is taken up into the bloodstream, metabolised by the liver to steviol glucuronide, and excreted in the urine. 
A three-dimensional map of the proteins in stevia, showing the crystalline structures that produce both the sensation of sweetness and the bitter aftertaste in the sweetener, was reported in 2019. A 2011 review found that the use of stevia sweeteners as replacements for sugar might benefit people with diabetes, children, and those wishing to lower their intake of calories. Although both steviol and rebaudioside A have been found to be mutagenic in laboratory "in vitro" testing, these effects have not been demonstrated for the doses and routes of administration to which humans are exposed. Two 2010 review studies found no health concerns with stevia or its sweetening extracts. The WHO's Joint Expert Committee on Food Additives has approved, based on long-term studies, an acceptable daily intake of steviol glycosides of up to 4 mg/kg of body weight. In 2010, the European Food Safety Authority established an acceptable daily intake of 4 mg/kg/day of steviol, in the form of steviol glycosides. Meanwhile, the Memorial Sloan Kettering Cancer Center warns that "steviol at high dosages may have weak mutagenic activity," and a review conducted for the Center for Science in the Public Interest notes that there are no published carcinogenicity results for rebaudioside A (or stevioside). In August 2019, the US FDA placed an import alert on stevia leaves and crude extracts – which do not have GRAS status – and on foods or dietary supplements containing them, due to concerns about safety and potential for toxicity. The plant may be grown legally in most countries, although some countries restrict its use as a sweetener. The legally allowed uses and maximum dosage of the extracts and derived products vary widely from country to country.
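Because the acceptable daily intake figures above are expressed per kilogram of body weight and in steviol equivalents, a short calculation makes them concrete. The following is a minimal illustrative sketch, not regulatory guidance: the 4 mg/kg value is the ADI quoted above, while the glycoside-to-steviol conversion factor is an assumption based on the approximate molecular-mass ratio of steviol to rebaudioside A (roughly 0.33), not an official figure.

```python
# Illustrative calculation of the acceptable daily intake (ADI) for steviol
# glycosides, based on the 4 mg/kg body weight steviol-equivalents ADI quoted
# above. The glycoside-to-steviol conversion factor is an assumed value for
# rebaudioside A (~0.33, its approximate steviol mass fraction), not an
# official regulatory number.

ADI_STEVIOL_MG_PER_KG = 4.0      # JECFA/EFSA ADI, mg steviol equivalents/kg bw/day
STEVIOL_FRACTION_REB_A = 0.33    # assumed steviol mass fraction of rebaudioside A

def max_daily_intake(body_weight_kg: float) -> dict:
    """Return the ADI in steviol equivalents and the rough corresponding
    amount of rebaudioside A for a person of the given body weight."""
    steviol_mg = ADI_STEVIOL_MG_PER_KG * body_weight_kg
    reb_a_mg = steviol_mg / STEVIOL_FRACTION_REB_A
    return {"steviol_mg": steviol_mg, "reb_a_mg": reb_a_mg}

if __name__ == "__main__":
    for weight in (30, 70):  # a child and an adult, illustrative weights
        limits = max_daily_intake(weight)
        print(f"{weight} kg: {limits['steviol_mg']:.0f} mg steviol eq./day "
              f"(~{limits['reb_a_mg']:.0f} mg rebaudioside A)")
```

Under these assumptions, a 70 kg adult's limit works out to about 280 mg of steviol equivalents per day.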
https://en.wikipedia.org/wiki?curid=28152
Search for extraterrestrial intelligence The search for extraterrestrial intelligence (SETI) is a collective term for scientific searches for intelligent extraterrestrial life, for example, monitoring electromagnetic radiation for signs of transmissions from civilizations on other planets. Scientific investigation began shortly after the advent of radio in the early 1900s, and focused international efforts have been going on since the 1980s. In 2015, Stephen Hawking and Russian billionaire Yuri Milner announced a well-funded effort called Breakthrough Listen. There have been many earlier searches for extraterrestrial intelligence within the Solar System. In 1896, Nikola Tesla suggested that an extreme version of his wireless electrical transmission system could be used to contact beings on Mars. In 1899, while conducting experiments at his Colorado Springs experimental station, he thought he had detected a signal from that planet, since an odd repetitive static signal seemed to cut off when Mars set in the night sky. Analysis of Tesla's research has ranged from suggestions that Tesla detected nothing and simply misunderstood the new technology he was working with, to claims that Tesla may have been observing signals from Marconi's European radio experiments, and even that he could have picked up naturally occurring Jovian plasma torus signals. In the early 1900s, Guglielmo Marconi, Lord Kelvin and David Peck Todd also stated their belief that radio could be used to contact Martians, with Marconi stating that his stations had also picked up potential Martian signals. On August 21–23, 1924, Mars entered an opposition closer to Earth than at any time in the century before or the next 80 years. In the United States, a "National Radio Silence Day" was promoted during a 36-hour period from August 21–23, with all radios quiet for five minutes on the hour, every hour. At the United States Naval Observatory, a radio receiver was lifted above the ground in a dirigible, tuned to a wavelength between 8 and 9 km, using a "radio-camera" developed by Amherst College and Charles Francis Jenkins. The program was led by David Peck Todd with the military assistance of Admiral Edward W. Eberle (Chief of Naval Operations), with William F. Friedman (chief cryptographer of the United States Army) assigned to translate any potential Martian messages. A 1959 paper by Philip Morrison and Giuseppe Cocconi first pointed out the possibility of searching the microwave spectrum, and proposed frequencies and a set of initial targets. In 1960, Cornell University astronomer Frank Drake performed the first modern SETI experiment, named "Project Ozma" after the Queen of Oz in L. Frank Baum's fantasy books. Drake used a radio telescope 26 meters (85 feet) in diameter at Green Bank, West Virginia, to examine the stars Tau Ceti and Epsilon Eridani near the 1.420 gigahertz marker frequency, a region of the radio spectrum dubbed the "water hole" due to its proximity to the hydrogen and hydroxyl radical spectral lines. A 400 kilohertz band around the marker frequency was scanned, using a single-channel receiver with a bandwidth of 100 hertz. He found nothing of interest. Soviet scientists took a strong interest in SETI during the 1960s and performed a number of searches with omnidirectional antennas in the hope of picking up powerful radio signals. 
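The scale of Drake's single-channel search described above follows from simple arithmetic: stepping a 100-hertz receiver across a 400-kilohertz band implies about 4,000 channel positions. A minimal sketch of that arithmetic, in which the dwell time per channel is an invented illustrative parameter rather than a documented Project Ozma figure:

```python
# Rough arithmetic for a single-channel scan like Project Ozma's:
# a 400 kHz band examined with a 100 Hz receiver bandwidth.
# The dwell time per channel is a made-up illustrative value.

band_hz = 400_000        # band scanned around the 1.420 GHz marker frequency
channel_hz = 100         # single-channel receiver bandwidth
dwell_s = 60             # assumed integration time per channel (illustrative)

channels = band_hz // channel_hz
total_hours = channels * dwell_s / 3600

print(f"channel positions: {channels}")                      # 4000
print(f"time for one pass at {dwell_s} s/channel: {total_hours:.0f} h")
```

The same arithmetic explains why later projects pursued megachannel spectrum analyzers: covering wide bands one narrow channel at a time is prohibitively slow.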
Soviet astronomer Iosif Shklovsky wrote the pioneering book in the field, "Universe, Life, Intelligence" (1962), which was expanded upon by American astronomer Carl Sagan as the best-selling book "Intelligent Life in the Universe" (1966). In the March 1955 issue of "Scientific American", John D. Kraus described an idea to scan the cosmos for natural radio signals using a flat-plane radio telescope equipped with a parabolic reflector. Within two years, his concept was approved for construction by Ohio State University. With a total of US$71,000 in grants from the National Science Foundation, construction began on a plot in Delaware, Ohio. This Ohio State University Radio Observatory telescope was called "Big Ear". Later, it began the world's first continuous SETI program, called the Ohio State University SETI program. In 1971, NASA funded a SETI study that involved Drake, Bernard M. Oliver of Hewlett-Packard Corporation, and others. The resulting report proposed the construction of an Earth-based radio telescope array with 1,500 dishes, known as "Project Cyclops". The price tag for the Cyclops array was US$10 billion. Cyclops was not built, but the report formed the basis of much SETI work that followed. The Ohio State SETI program gained fame on August 15, 1977, when Jerry Ehman, a project volunteer, witnessed a startlingly strong signal received by the telescope. He quickly circled the indication on a printout and scribbled the exclamation "Wow!" in the margin. Dubbed the "Wow! signal", it is considered by some to be the best candidate for a radio signal from an artificial, extraterrestrial source ever discovered, but it has not been detected again in several additional searches. In 1980, Carl Sagan, Bruce Murray, and Louis Friedman founded the U.S. Planetary Society, partly as a vehicle for SETI studies. In the early 1980s, Harvard University physicist Paul Horowitz took the next step and proposed the design of a spectrum analyzer specifically intended to search for SETI transmissions. Traditional desktop spectrum analyzers were of little use for this job, as they sampled frequencies using banks of analog filters and so were restricted in the number of channels they could acquire. However, modern integrated-circuit digital signal processing (DSP) technology could be used to build autocorrelation receivers to check far more channels. This work led in 1981 to a portable spectrum analyzer named "Suitcase SETI" that had a capacity of 131,000 narrowband channels. After field tests that lasted into 1982, Suitcase SETI was put into use in 1983 with the Harvard/Smithsonian radio telescope at Oak Ridge Observatory in Harvard, Massachusetts. This project was named "Sentinel" and continued into 1985. Even 131,000 channels were not enough to search the sky in detail at a fast rate, so Suitcase SETI was followed in 1985 by Project "META", for "Megachannel Extra-Terrestrial Assay". The META spectrum analyzer had a capacity of 8.4 million channels and a channel resolution of 0.05 hertz. An important feature of META was its use of frequency Doppler shift to distinguish between signals of terrestrial and extraterrestrial origin. The project was led by Horowitz with the help of the Planetary Society, and was partly funded by movie maker Steven Spielberg. A second such effort, META II, was begun in Argentina in 1990, to search the southern sky. META II is still in operation, after an equipment upgrade in 1996. 
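META's Doppler-shift test for terrestrial interference rests on simple kinematics: a transmitter fixed to Earth's rotating surface is constantly accelerating, so its carrier drifts in frequency relative to an inertial frame, while a genuinely celestial narrowband signal corrected for that motion does not. A back-of-envelope sketch of the expected drift rate at the 1.42 GHz hydrogen line, using only standard Earth parameters (none of these values are META specifications):

```python
# Doppler drift of a narrowband signal caused by Earth's rotation, the
# effect META exploited to separate terrestrial from celestial signals.
import math

C = 2.998e8                  # speed of light, m/s
F0 = 1.420e9                 # observing frequency, Hz (hydrogen line)
R_EARTH = 6.371e6            # Earth radius, m
OMEGA = 2 * math.pi / 86164  # Earth sidereal rotation rate, rad/s

# Maximum centripetal acceleration of a transmitter on the equator.
a_max = R_EARTH * OMEGA**2          # ~0.034 m/s^2
drift_hz_per_s = F0 * a_max / C     # df/dt = f * a / c

print(f"acceleration: {a_max:.3f} m/s^2")
print(f"max drift at 1.42 GHz: {drift_hz_per_s:.2f} Hz/s")
# ~0.16 Hz/s: in META's 0.05 Hz channels, an uncorrected terrestrial carrier
# sweeps through several channels per second, while a signal chirp-corrected
# to an inertial reference frame stays put in one channel.
```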
The follow-on to META was named "BETA", for "Billion-channel Extraterrestrial Assay", and it commenced observation on October 30, 1995. The heart of BETA's processing capability consisted of 63 dedicated fast Fourier transform (FFT) engines, each capable of performing a 2^22-point (4.2-million-point) complex FFT in two seconds, and 21 general-purpose personal computers equipped with custom digital signal processing boards. This allowed BETA to receive 250 million simultaneous channels with a resolution of 0.5 hertz per channel. It scanned through the microwave spectrum from 1.400 to 1.720 gigahertz in eight hops, with two seconds of observation per hop. An important capability of the BETA search was rapid and automatic re-observation of candidate signals, achieved by observing the sky with two adjacent beams, one slightly to the east and the other slightly to the west. A successful candidate signal would first transit the east beam, and then the west beam, and do so with a speed consistent with Earth's sidereal rotation rate. A third receiver observed the horizon to veto signals of obvious terrestrial origin. On March 23, 1999, the 26-meter radio telescope on which Sentinel, META and BETA were based was blown over by strong winds and seriously damaged. This forced the BETA project to cease operation. In 1978, the NASA SETI program had been heavily criticized by Senator William Proxmire, and funding for SETI research was removed from the NASA budget by Congress in 1981; however, funding was restored in 1982, after Carl Sagan talked with Proxmire and convinced him of the program's value. In 1992, the U.S. government funded an operational SETI program, in the form of the NASA Microwave Observing Program (MOP). MOP was planned as a long-term effort to conduct a general survey of the sky and also carry out targeted searches of 800 specific nearby stars. MOP was to be performed by radio antennas associated with the NASA Deep Space Network, as well as the radio telescope of the National Radio Astronomy Observatory at Green Bank, West Virginia and the radio telescope at the Arecibo Observatory in Puerto Rico. The signals were to be analyzed by spectrum analyzers, each with a capacity of 15 million channels. These spectrum analyzers could be grouped together to obtain greater capacity. Those used in the targeted search had a bandwidth of 1 hertz per channel, while those used in the sky survey had a bandwidth of 30 hertz per channel. MOP drew the attention of the United States Congress, where the program was ridiculed and canceled one year after its start. SETI advocates continued without government funding, and in 1995 the nonprofit SETI Institute of Mountain View, California resurrected the MOP program under the name of Project "Phoenix", backed by private sources of funding. Project Phoenix, under the direction of Jill Tarter, is a continuation of the targeted search program from MOP and studies roughly 1,000 nearby Sun-like stars. From 1995 through March 2004, Phoenix conducted observations at the Parkes radio telescope in Australia, the radio telescope of the National Radio Astronomy Observatory in Green Bank, West Virginia, and the radio telescope at the Arecibo Observatory in Puerto Rico. The project observed the equivalent of 800 stars over the available channels in the frequency range from 1200 to 3000 MHz. The search was sensitive enough to pick up transmitters with 1 GW EIRP to a distance of about 200 light-years. According to Prof. 
Tarter, in 2012 it cost around "$2 million per year to keep SETI research going at the SETI Institute" and approximately 10 times that to support "all kinds of SETI activity around the world". Many radio frequencies penetrate Earth's atmosphere quite well, and this led to radio telescopes that investigate the cosmos using large radio antennas. Furthermore, human endeavors emit considerable electromagnetic radiation as a byproduct of communications such as television and radio. These signals would be easy to recognize as artificial due to their repetitive nature and narrow bandwidths. If this is typical, one way of discovering an extraterrestrial civilization might be to detect artificial radio emissions from a location outside the Solar System. Many international radio telescopes are currently being used for radio SETI searches, including the Low Frequency Array (LOFAR) in Europe, the Murchison Widefield Array (MWA) in Australia, and the Lovell Telescope in the United Kingdom. The SETI Institute collaborated with the Radio Astronomy Laboratory at the Berkeley SETI Research Center to develop a specialized radio telescope array for SETI studies, something like a mini-Cyclops array. Formerly known as the One Hectare Telescope (1HT), the concept was renamed the "Allen Telescope Array" (ATA) after the project's benefactor Paul Allen. If completed, its sensitivity would be equivalent to that of a single large dish more than 100 meters in diameter. Presently, the array under construction has 42 dishes at the Hat Creek Radio Observatory in rural northern California. The full array (ATA-350) is planned to consist of 350 or more offset-Gregorian radio dishes, each 6.1 meters in diameter. These dishes are the largest producible with commercially available satellite television dish technology. The ATA was planned for a 2007 completion date, at a cost of US$25 million. The SETI Institute provided money for building the ATA, while the University of California, Berkeley designed the telescope and provided operational funding. The first portion of the array (ATA-42) became operational in October 2007 with 42 antennas. The DSP system planned for ATA-350 is extremely ambitious. Completion of the full 350-element array will depend on funding and the technical results from ATA-42. ATA-42 is designed to allow multiple observers simultaneous access to the interferometer output. Typically, the ATA snapshot imager (used for astronomical surveys and SETI) is run in parallel with the beam-forming system (used primarily for SETI). The ATA also supports observations in multiple synthesized pencil beams at once, through a technique known as "multibeaming". Multibeaming provides an effective filter for identifying false positives in SETI, since a very distant transmitter must appear at only one point on the sky. The SETI Institute's Center for SETI Research (CSR) uses the ATA in the search for extraterrestrial intelligence, observing 12 hours a day, 7 days a week. From 2007 to 2015, the ATA identified hundreds of millions of technological signals. So far, all these signals have been assigned the status of noise or radio frequency interference because a) they appear to be generated by satellites or Earth-based transmitters, or b) they disappeared before the threshold time limit of ~1 hour. Researchers in CSR are presently working on ways to reduce the threshold time limit, and to expand the ATA's capabilities for detection of signals that may have embedded messages. 
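The Phoenix sensitivity figure quoted earlier (transmitters of 1 GW EIRP detectable out to about 200 light-years) is an application of the inverse-square law: received flux falls off as EIRP/(4πd²), so detection range grows as the square root of transmitter power. A minimal sketch of that arithmetic; the flux threshold is derived from the quoted figures, and the alternative powers are purely illustrative:

```python
# Inverse-square arithmetic behind EIRP detection ranges such as Project
# Phoenix's "1 GW EIRP at ~200 light-years".
import math

LY_M = 9.461e15  # meters per light-year

def flux(eirp_w: float, distance_ly: float) -> float:
    """Received flux in W/m^2 from an isotropic-equivalent transmitter."""
    d = distance_ly * LY_M
    return eirp_w / (4 * math.pi * d**2)

def max_range_ly(eirp_w: float, flux_limit: float) -> float:
    """Distance at which a transmitter of given EIRP falls to the flux limit."""
    return math.sqrt(eirp_w / (4 * math.pi * flux_limit)) / LY_M

# Flux threshold implied by the Phoenix figures quoted above.
threshold = flux(1e9, 200)
print(f"implied threshold: {threshold:.1e} W/m^2")

# Range scales as the square root of transmitter power.
for eirp in (1e9, 1e11, 1e13):  # illustrative transmitter powers, W
    print(f"EIRP {eirp:.0e} W -> ~{max_range_ly(eirp, threshold):.0f} ly")
```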
Berkeley astronomers used the ATA to pursue several science topics, some of which might have turned up transient SETI signals, until 2011, when the collaboration between the University of California, Berkeley and the SETI Institute was terminated. CNET published an article and pictures about the Allen Telescope Array (ATA) on December 12, 2008. In April 2011, the ATA was forced to enter an 8-month "hibernation" due to funding shortfalls. Regular operation of the ATA resumed on December 5, 2011. In 2012, new life was breathed into the ATA thanks to a $3.6M philanthropic donation by Franklin Antonio, Co-Founder and Chief Scientist of QUALCOMM Incorporated. This gift supports upgrades of all the receivers on the ATA dishes to have dramatically greater sensitivity than before (2x to 10x from 1–8 GHz), supporting sensitive observations over a wider frequency range from 1–18 GHz, though initially the radio frequency electronics go only to 12 GHz. As of July 2013, the first of these receivers was installed and proven. Full installation on all 42 antennas is expected in June 2014. The ATA is especially well suited to the search for extraterrestrial intelligence (SETI) and to the discovery of astronomical radio sources, such as heretofore unexplained non-repeating, possibly extragalactic, pulses known as fast radio bursts, or FRBs. SERENDIP (Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations) is a SETI program launched in 1979 by the Berkeley SETI Research Center. SERENDIP takes advantage of ongoing "mainstream" radio telescope observations as a "piggy-back" or "commensal" program, using large radio telescopes including the NRAO 90m telescope at Green Bank and the Arecibo 305m telescope. Rather than having its own observation program, SERENDIP analyzes deep space radio telescope data that it obtains while other astronomers are using the telescopes. The most recently deployed SERENDIP spectrometer, SERENDIP V.v, was installed at the Arecibo Observatory in June 2009 and is currently operational. The digital back-end instrument is an FPGA-based 128-million-channel digital spectrometer covering 200 MHz of bandwidth. It takes data commensally with the seven-beam Arecibo L-band Feed Array (ALFA). The program has found around 400 suspicious signals, but there is not enough data to prove that they originate from extraterrestrial intelligence. "Breakthrough Listen" is a ten-year initiative with $100 million in funding, begun in July 2015, to actively search for intelligent extraterrestrial communications in the universe, in a substantially expanded way, using resources that had not previously been extensively used for the purpose. It has been described as the most comprehensive search for alien communications to date. The science program for Breakthrough Listen is based at the Berkeley SETI Research Center, located in the Astronomy Department at the University of California, Berkeley. Announced in July 2015, the project is observing for thousands of hours every year on two major radio telescopes, the Green Bank Observatory in West Virginia and the Parkes Observatory in Australia. Previously, only about 24 to 36 hours of telescope time per year were used in the search for alien life. Furthermore, the Automated Planet Finder at Lick Observatory is searching for optical signals coming from laser transmissions. The massive data rates from the radio telescopes (24 GB/s at Green Bank) necessitated the construction of dedicated hardware at the telescopes to perform the bulk of the analysis. 
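Spectrometer headline numbers such as SERENDIP V.v's (128 million channels across 200 MHz of bandwidth) translate directly into channel width, and in FFT-based designs the resolution is set by the transform length: roughly the sample rate divided by the number of points. The sketch below illustrates both with a toy NumPy example; it is not the FPGA implementation used at Arecibo.

```python
# Channel-width arithmetic for FFT spectrometers like SERENDIP V.v, plus a
# toy demonstration that FFT length sets frequency resolution.
import numpy as np

# SERENDIP V.v headline numbers quoted above.
bandwidth_hz = 200e6
channels = 128e6
print(f"channel width: {bandwidth_hz / channels:.2f} Hz")   # ~1.56 Hz

# Toy example: a complex-sampled band analyzed with an N-point FFT has
# resolution fs/N, so doubling N halves the channel width.
fs = 4096.0                              # toy sample rate, Hz
n = 4096                                 # FFT length
t = np.arange(n) / fs
tone = np.exp(2j * np.pi * 1000.3 * t)   # narrowband "carrier" at 1000.3 Hz
spectrum = np.abs(np.fft.fft(tone))
peak_bin = int(np.argmax(spectrum))
print(f"resolution fs/N = {fs / n:.3f} Hz, peak at bin {peak_bin} "
      f"({peak_bin * fs / n:.1f} Hz)")
```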
Some of the data are also analyzed by volunteers in the SETI@home distributed computing network. Founder of modern SETI Frank Drake is one of the scientists on the project's advisory committee. In October 2019, Breakthrough Listen started a collaboration with scientists from the TESS team (Transiting Exoplanet Survey Satellite) to look for signs of advanced extraterrestrial life. Thousands of new planets found by TESS will be scanned for technosignatures by Breakthrough Listen partner facilities across the globe. Data from TESS monitoring of stars will also be searched for anomalies. China's Five-hundred-meter Aperture Spherical Telescope (FAST) lists "detecting interstellar communication signals" as part of its science mission. It is funded by the National Development and Reform Commission (NDRC) and managed by the National Astronomical Observatories (NAOC) of the Chinese Academy of Sciences (CAS). FAST is the first radio observatory built with SETI as a core scientific goal. FAST consists of a fixed 500-meter-diameter spherical dish constructed in a natural depression sinkhole caused by karst processes in the region. It is the world's largest filled-aperture radio telescope. According to its website, FAST could search out to 28 light-years and would be able to reach 1,400 stars. If the transmitter's radiated power were increased to 1,000,000 MW, FAST would be able to reach one million stars. This compares to the Arecibo 305-meter telescope's detection distance of 18 light-years. Since 2016, UCLA undergraduate and graduate students have been participating in radio searches for technosignatures with the Green Bank Telescope. Targets include the Kepler field, TRAPPIST-1, and solar-type stars. The search is sensitive to Arecibo-class transmitters located within 420 ly of Earth, and to transmitters that are 1,000 times more powerful than Arecibo located within 13,000 ly of Earth. The SETI@home project uses distributed computing to analyze signals acquired by the SERENDIP project. SETI@home was conceived by David Gedye along with Craig Kasnoff and is a popular volunteer distributed computing project that was launched by the Berkeley SETI Research Center at the University of California, Berkeley, in May 1999. It was originally funded by The Planetary Society and Paramount Pictures, and later by the state of California. The project is run by director David P. Anderson and chief scientist Dan Werthimer. Any individual can become involved with SETI research by downloading the Berkeley Open Infrastructure for Network Computing (BOINC) software program, attaching to the SETI@home project, and allowing the program to run as a background process that uses idle computer power. The SETI@home program itself runs signal analysis on a "work unit" of data recorded from the central 2.5 MHz-wide band of the SERENDIP IV instrument. After computation on the work unit is complete, the results are automatically reported back to SETI@home servers at the University of California, Berkeley. By June 28, 2009, the SETI@home project had over 180,000 active participants volunteering a total of over 290,000 computers. These computers give SETI@home an average computational power of 617 teraFLOPS. In 2004, radio source SHGb02+14a set off speculation in the media that a signal had been detected, but researchers noted that the frequency drifted rapidly and that the detection on three SETI@home computers fell within random chance. 
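The FAST detection ranges quoted above follow the same square-root scaling of range with transmitter power that governs any inverse-square-law search. A minimal sketch, in which the baseline transmitter power is an assumed illustrative value rather than a figure from FAST documentation:

```python
# Detection range scales as the square root of transmitter power
# (inverse-square law), the relationship behind the FAST figures above.
import math

BASE_RANGE_LY = 28.0    # FAST range quoted above for the baseline transmitter
BASE_POWER_MW = 1000.0  # assumed baseline radiated power, MW (illustrative)

def detection_range(power_mw: float) -> float:
    """Range in light-years at which the received flux equals the flux
    received from the baseline transmitter at BASE_RANGE_LY."""
    return BASE_RANGE_LY * math.sqrt(power_mw / BASE_POWER_MW)

for power in (BASE_POWER_MW, 10_000.0, 1_000_000.0):
    print(f"{power:>10,.0f} MW -> ~{detection_range(power):4.0f} ly")
```

Under these assumptions, raising the transmitter power by a factor of 1,000 extends the range by a factor of about 32, to roughly 900 light-years, a volume containing on the order of a million stars.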
As of 2010, after 10 years of data collection, SETI@home has listened to that one frequency at every point of over 67 percent of the sky observable from Arecibo with at least three scans (out of the goal of nine scans), which covers about 20 percent of the full celestial sphere. SETI Net is the only operational private search system. The SETI Net station consists of off-the-shelf, consumer-grade electronics to minimize cost and to allow the design to be replicated as simply as possible. It has a 3-meter parabolic antenna that can be directed in azimuth and elevation, an LNA that covers the 1420 MHz spectrum, a receiver to reproduce the wideband audio, and a standard personal computer as the control device and for deploying the detection algorithms. The antenna can be pointed and locked to one sky location, enabling the system to integrate on it for long periods. Currently the Wow! signal area is being monitored when it is above the horizon. All search data are collected and made available on the Internet archive. SETI Net started operation in the early 1980s as a way to learn about the science of the search, and has developed several software packages for the amateur SETI community. It has provided an astronomical clock, a file manager to keep track of SETI data files, a spectrum analyzer optimized for amateur SETI, remote control of the station from the Internet, and other packages. It can be reached at https://www.seti.net. Founded in 1994 in response to the United States Congress's cancellation of the NASA SETI program, The SETI League, Inc. is a membership-supported nonprofit organization with 1,500 members in 62 countries. This grass-roots alliance of amateur and professional radio astronomers is headed by executive director emeritus H. Paul Shuch, the engineer credited with developing the world's first commercial home satellite TV receiver. Many SETI League members are licensed radio amateurs and microwave experimenters. Others are digital signal processing experts and computer enthusiasts. The SETI League pioneered the conversion of backyard satellite TV dishes into research-grade radio telescopes of modest sensitivity. The organization concentrates on coordinating a global network of small, amateur-built radio telescopes under Project Argus, an all-sky survey seeking to achieve real-time coverage of the entire sky. Project Argus was conceived as a continuation of the all-sky survey component of the late NASA SETI program (the targeted search having been continued by the SETI Institute's Project Phoenix). There are currently 143 Project Argus radio telescopes operating in 27 countries. Project Argus instruments typically exhibit sensitivity on the order of 10^-23 W/m², roughly equivalent to that achieved by the Ohio State University Big Ear radio telescope in 1977, when it detected the landmark "Wow!" candidate signal. The name "Argus" derives from the mythical Greek guard-beast who had 100 eyes and could see in all directions at once. In the SETI context, the name has been used for radio telescopes in fiction (Arthur C. Clarke, "Imperial Earth"; Carl Sagan, "Contact"), was the name initially used for the NASA study ultimately known as "Cyclops", and is the name given to an omnidirectional radio telescope design being developed at the Ohio State University. 
While most SETI sky searches have studied the radio spectrum, some SETI researchers have considered the possibility that alien civilizations might be using powerful lasers for interstellar communications at optical wavelengths. The idea was first suggested by R. N. Schwartz and Charles Hard Townes in a 1961 paper published in the journal "Nature" titled "Interstellar and Interplanetary Communication by Optical Masers". However, the 1971 Cyclops study discounted the possibility of optical SETI, reasoning that construction of a laser system that could outshine the bright central star of a remote star system would be too difficult. In 1983, Townes published a detailed study of the idea in the United States journal "Proceedings of the National Academy of Sciences", which was met with widespread agreement by the SETI community. There are two problems with optical SETI. The first problem is that lasers are highly "monochromatic", that is, they emit light only on one frequency, making it troublesome to figure out what frequency to look for. However, emitting light in narrow pulses results in a broad spectrum of emission; the spread in frequency becomes higher as the pulse width becomes narrower, making it easier to detect an emission. The other problem is that while radio transmissions can be broadcast in all directions, lasers are highly directional. Interstellar gas and dust are almost transparent to near infrared, so these signals can be seen from greater distances, but extraterrestrial laser signals would need to be transmitted in the direction of Earth in order to be detected. Optical SETI supporters have conducted paper studies of the effectiveness of using contemporary high-energy lasers and a ten-meter-diameter mirror as an interstellar beacon. The analysis shows that an infrared pulse from a laser, focused into a narrow beam by such a mirror, would appear thousands of times brighter than the Sun to a distant civilization in the beam's line of fire. The Cyclops study thus proved incorrect in suggesting a laser beam would be inherently hard to see. Such a system could be made to automatically steer itself through a target list, sending a pulse to each target at a constant rate. This would allow targeting of all Sun-like stars within a distance of 100 light-years. The studies have also described an automatic laser pulse detector system with a low-cost, two-meter mirror made of carbon composite materials, focusing on an array of light detectors. This automatic detector system could perform sky surveys to detect laser flashes from civilizations attempting contact. Several optical SETI experiments are now in progress. A Harvard-Smithsonian group that includes Paul Horowitz designed a laser detector and mounted it on Harvard's optical telescope. This telescope is currently being used for a more conventional star survey, and the optical SETI survey is "piggybacking" on that effort. Between October 1998 and November 1999, the survey inspected about 2,500 stars. Nothing that resembled an intentional laser signal was detected, but efforts continue. The Harvard-Smithsonian group is now working with Princeton University to mount a similar detector system on Princeton's 91-centimeter (36-inch) telescope. The Harvard and Princeton telescopes will be "ganged" to track the same targets at the same time, with the intent being to detect the same signal in both locations as a means of reducing errors from detector noise. 
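The pulse-width argument above is the Fourier time-bandwidth tradeoff: a transform-limited pulse of duration Δt occupies a frequency spread of roughly 1/Δt, so nanosecond flashes are spread over the order of a gigahertz and a detector need not guess the exact laser line. A small sketch of the orders of magnitude (the pulse durations are chosen purely for illustration):

```python
# Fourier time-bandwidth tradeoff for pulsed optical SETI: shorter pulses
# spread over more bandwidth, relaxing the need to know the exact frequency.
pulse_widths_s = [1e-3, 1e-6, 1e-9]  # illustrative pulse durations

for dt in pulse_widths_s:
    dnu = 1.0 / dt  # approximate spectral spread, Hz (transform-limited pulse)
    print(f"pulse {dt:.0e} s -> spectral spread ~{dnu:.0e} Hz")
# A 1 ns pulse spans ~1 GHz, so a detector does not have to be tuned to a
# precise laser line; it instead looks for brief flashes that momentarily
# outshine the host star.
```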
The Harvard-Smithsonian SETI group, led by Professor Paul Horowitz, built a dedicated all-sky optical survey system along the lines of that described above, featuring a 1.8-meter (72-inch) telescope. The new optical SETI survey telescope is being set up at the Oak Ridge Observatory in Harvard, Massachusetts. The University of California, Berkeley, home of SERENDIP and SETI@home, is also conducting optical SETI searches and collaborates with the NIROSETI program. The optical SETI program at Breakthrough Listen is being directed by Geoffrey Marcy, an extrasolar planet hunter, and it involves examination of records of spectra taken during extrasolar planet hunts for a continuous, rather than pulsed, laser signal. This survey uses the Automated Planet Finder 2.4-m telescope at the Lick Observatory, situated on the summit of Mount Hamilton, east of San Jose, California, USA. The other Berkeley optical SETI effort is more like that being pursued by the Harvard-Smithsonian group; it is directed by Dan Werthimer of Berkeley, who built the laser detector for the Harvard-Smithsonian group. This survey uses a 76-centimeter (30-inch) automated telescope at Leuschner Observatory and an older laser detector built by Werthimer. In May 2017, astronomers reported studies related to laser light emissions from stars, as a way of detecting technology-related signals from an alien civilization. The reported studies included KIC 8462852, an oddly dimming star whose unusual starlight fluctuations may be the result of interference by an artificial megastructure, such as a Dyson swarm, made by such a civilization. No evidence was found for technology-related signals from KIC 8462852 in the studies. The possibility of using interstellar messenger probes in the search for extraterrestrial intelligence was first suggested by Ronald N. Bracewell in 1960 (see Bracewell probe), and the technical feasibility of this approach was demonstrated by the British Interplanetary Society's starship study Project Daedalus in 1978. Starting in 1979, Robert Freitas advanced arguments for the proposition that physical space-probes are a superior mode of interstellar communication to radio signals (see Voyager Golden Record). In recognition that any sufficiently advanced interstellar probe in the vicinity of Earth could easily monitor the terrestrial Internet, Invitation to ETI was established by Prof. Allen Tough in 1996 as a Web-based SETI experiment inviting such spacefaring probes to establish contact with humanity. The project's 100 signatories include prominent physical, biological, and social scientists, as well as artists, educators, entertainers, philosophers and futurists. Prof. H. Paul Shuch, executive director emeritus of The SETI League, serves as the project's Principal Investigator. Inscribing a message in matter and transporting it to an interstellar destination can be enormously more energy efficient than communication using electromagnetic waves, if delays larger than the light transit time can be tolerated. That said, for simple messages such as "hello," radio SETI could be far more efficient. If energy requirement is used as a proxy for technical difficulty, then a solarcentric Search for Extraterrestrial Artifacts (SETA) may be a useful supplement to traditional radio or optical searches. 
Much like the "preferred frequency" concept in SETI radio beacon theory, the Earth-Moon or Sun-Earth libration orbits might therefore constitute the most universally convenient parking places for automated extraterrestrial spacecraft exploring arbitrary stellar systems. A viable long-term SETI program may be founded upon a search for these objects. In 1979, Freitas and Valdes conducted a photographic search of the vicinity of the Earth-Moon triangular libration points L4 and L5, and of the solar-synchronized positions in the associated halo orbits, seeking possible orbiting extraterrestrial interstellar probes, but found nothing to a detection limit of about 14th magnitude. The authors conducted a second, more comprehensive photographic search for probes in 1982 that examined the five Earth-Moon Lagrangian positions and included the solar-synchronized positions in the stable L4/L5 libration orbits, the potentially stable nonplanar orbits near L1/L2, Earth-Moon L3, and also positions in the Sun-Earth system. Again no extraterrestrial probes were found, to limiting magnitudes of 17–19th magnitude near L3/L4/L5, 10–18th magnitude for L1/L2, and 14–16th magnitude for the Sun-Earth positions. In June 1983, Valdes and Freitas used the 26 m radiotelescope at Hat Creek Radio Observatory to search for the tritium hyperfine line at 1516 MHz from 108 assorted astronomical objects, with emphasis on 53 nearby stars including all visible stars within a 20 light-year radius. The tritium frequency was deemed highly attractive for SETI work because (1) the isotope is cosmically rare, (2) the tritium hyperfine line is centered in the SETI waterhole region of the terrestrial microwave window, and (3) in addition to beacon signals, tritium hyperfine emission may occur as a byproduct of extensive nuclear fusion energy production by extraterrestrial civilizations. The wideband- and narrowband-channel observations achieved sensitivities of 5–14 × 10^-21 W/m²/channel and 0.7–2 × 10^-24 W/m²/channel, respectively, but no detections were made. Technosignatures, including all signs of technology, are a recent avenue in the search for extraterrestrial intelligence. Technosignatures may originate from various sources, from megastructures such as Dyson spheres and space mirrors or space shaders to the atmospheric contamination created by an industrial civilization, or city lights on extrasolar planets, and may be detectable in the future with large hypertelescopes. Technosignatures can be divided into three broad categories: astroengineering projects, signals of planetary origin, and spacecraft within and outside the Solar System. An astroengineering installation such as a Dyson sphere, designed to convert all of the incident radiation of its host star into energy, could be detected through the observation of an infrared excess from a solar analog star, or by the star's apparent disappearance in the visible spectrum over several years. After examining some 100,000 nearby large galaxies, a team of researchers has concluded that none of them display any obvious signs of highly advanced technological civilizations. Another hypothetical form of astroengineering, the Shkadov thruster, moves its host star by reflecting some of the star's light back on itself; it would be detected by observing whether its transits across the star abruptly end with the thruster in front. Asteroid mining within the Solar System is also a detectable technosignature of the first kind. Individual extrasolar planets can be analyzed for signs of technology. 
Avi Loeb of the Harvard-Smithsonian Center for Astrophysics has proposed that persistent light signals on the night side of an exoplanet can be an indication of the presence of cities and an advanced civilization. In addition, the excess infrared radiation and chemicals produced by various industrial processes or terraforming efforts may point to intelligence. Light and heat detected from planets need to be distinguished from natural sources to conclusively prove the existence of civilization on a planet. However, as argued by the Colossus team, a civilization's heat signature should lie within a "comfortable" temperature range, like terrestrial urban heat islands, i.e., only a few degrees warmer than the planet itself. In contrast, natural sources such as wildfires and volcanoes are significantly hotter, so they would be well distinguished by their maximum flux at a different wavelength. Extraterrestrial craft are another target in the search for technosignatures. Magnetic sail interstellar spacecraft should be detectable over thousands of light-years of distance through the synchrotron radiation they would produce through interaction with the interstellar medium; other interstellar spacecraft designs may be detectable at more modest distances. In addition, robotic probes within the Solar System are also being sought out with optical and radio searches. For a sufficiently advanced civilization, hyper-energetic neutrinos from Planck-scale accelerators should be detectable at a distance of many Mpc. Italian physicist Enrico Fermi suggested in the 1950s that if technologically advanced civilizations are common in the universe, then they should be detectable in one way or another. (According to those who were there, Fermi either asked "Where are they?" or "Where is everybody?") The Fermi paradox is commonly understood as asking why extraterrestrials have not visited Earth, but the same reasoning applies to the question of why signals from extraterrestrials have not been heard. The SETI version of the question is sometimes referred to as "the Great Silence". Stated more completely, the Fermi paradox is this: the size and age of the universe suggest that many technologically advanced extraterrestrial civilizations ought to exist, yet that expectation is difficult to reconcile with the lack of any observational evidence for them. There are multiple explanations proposed for the Fermi paradox, ranging from analyses suggesting that intelligent life is rare (the "Rare Earth hypothesis") to analyses suggesting that although extraterrestrial civilizations may be common, they would not communicate with us, could not travel across interstellar distances, or destroy themselves before mastering the technology of either interstellar travel or communication. Science writer Timothy Ferris has posited that since galactic societies are most likely only transitory, an obvious solution is an interstellar communications network, or a type of library consisting mostly of automated systems. They would store the cumulative knowledge of vanished civilizations and communicate that knowledge through the galaxy. Ferris calls this the "Interstellar Internet", with the various automated systems acting as network "servers". If such an Interstellar Internet exists, the hypothesis states, communications between servers are mostly through narrow-band, highly directional radio or laser links. Intercepting such signals is, as discussed earlier, very difficult. However, the network could maintain some broadcast nodes in hopes of making contact with new civilizations.
Although somewhat dated in terms of "information culture" arguments, not to mention the obvious technological problems of a system that could work effectively for billions of years and would require multiple lifeforms to agree on certain basics of communications technologies, this hypothesis is actually testable (see below). A significant problem is the vastness of space. As Charles Stuart Bowyer noted, despite piggybacking on the world's most sensitive radio telescope, SERENDIP could not detect the random radio noise emanating from a civilization like ours, which has been leaking radio and TV signals for less than 100 years. For SERENDIP and most other SETI projects to detect a signal from an extraterrestrial civilization, the civilization would have to be beaming a powerful signal directly at us. It also means that Earth's civilization will only be detectable within a distance of 100 light-years. The International Academy of Astronautics (IAA) has a long-standing SETI Permanent Study Group (SPSG, formerly called the IAA SETI Committee), which addresses matters of SETI science, technology, and international policy. The SPSG meets in conjunction with the International Astronautical Congress (IAC), held annually at different locations around the world, and sponsors two SETI symposia at each IAC. In 2005, the IAA established the SETI: Post-Detection Science and Technology Taskgroup (chaired by Professor Paul Davies) "to act as a Standing Committee to be available to be called on at any time to advise and consult on questions stemming from the discovery of a putative signal of extraterrestrial intelligent (ETI) origin." The protocols mentioned, however, apply only to radio SETI rather than to METI (active SETI); the intention behind METI is covered by the SETI charter "Declaration of Principles Concerning Sending Communications with Extraterrestrial Intelligence". In October 2000, astronomers Iván Almár and Jill Tarter presented a paper to the SETI Permanent Study Group in Rio de Janeiro, Brazil, proposing the Rio scale, an ordinal scale from zero to ten (modelled after the Torino scale) that quantifies the impact of any public announcement regarding evidence of extraterrestrial intelligence. The Rio scale has since inspired the 2005 San Marino Scale (regarding the risks of transmissions from Earth) and the 2010 London Scale (regarding the detection of extraterrestrial life); the Rio Scale itself was revised in 2018. The SETI Institute does not officially recognize the Wow! signal as being of extraterrestrial origin, as it could not be verified. The SETI Institute has also publicly denied that the candidate signal Radio source SHGb02+14a is of extraterrestrial origin. Although other volunteer projects such as Zooniverse credit users for discoveries, there is currently no crediting or early notification by SETI@home following the discovery of a signal. Some people, including Steven M. Greer, have expressed cynicism that the general public might not be informed in the event of a genuine discovery of extraterrestrial intelligence, due to significant vested interests. Some, such as Bruce Jakosky, have also argued that the official disclosure of extraterrestrial life may have far-reaching and as yet undetermined implications for society, particularly for the world's religions. Active SETI, also known as messaging to extraterrestrial intelligence (METI), consists of sending signals into space in the hope that they will be picked up by an alien intelligence.
In November 1974, a largely symbolic attempt was made at the Arecibo Observatory to send a message to other worlds. Known as the Arecibo Message, it was sent towards the globular cluster M13, which is 25,000 light-years from Earth. Further interstellar radio messages (Cosmic Call, Teen Age Message, Cosmic Call 2, and A Message From Earth) were transmitted in 1999, 2001, 2003 and 2008 from the Evpatoria Planetary Radar. Physicist Stephen Hawking, in his book "A Brief History of Time", suggests that "alerting" extraterrestrial intelligences to our existence is foolhardy, citing humankind's history of treating its own kind harshly in meetings of civilizations with a significant technology gap, e.g., the extermination of the Tasmanian Aborigines. He suggests, in view of this history, that we "lay low". In a September 2016 response to Hawking, astronomer Seth Shostak sought to allay such concerns. Astronomer Jill Tarter also disagrees with Hawking, arguing that aliens developed and long-lived enough to communicate and travel across interstellar distances would have evolved a cooperative and less violent intelligence. She does think it is too soon for humans to attempt active SETI, and that humans should first become more technologically advanced, while keeping listening in the meantime. The concern over METI was raised by the science journal "Nature" in an editorial in October 2006, which commented on a recent meeting of the International Academy of Astronautics SETI study group. The editor said, "It is not obvious that all extraterrestrial civilizations will be benign, or that contact with even a benign one would not have serious repercussions" ("Nature", vol. 443, 12 October 2006, p. 606). Astronomer and science fiction author David Brin has expressed similar concerns. Richard Carrigan, a particle physicist at the Fermi National Accelerator Laboratory near Chicago, Illinois, suggested that passive SETI could also be dangerous and that a signal released onto the Internet could act as a computer virus. Computer security expert Bruce Schneier dismissed this possibility as a "bizarre movie-plot threat". To lend a quantitative basis to discussions of the risks of transmitting deliberate messages from Earth, the SETI Permanent Study Group of the International Academy of Astronautics adopted in 2007 a new analytical tool, the San Marino Scale. Developed by Prof. Iván Almár and Prof. H. Paul Shuch, the scale evaluates the significance of transmissions from Earth as a function of signal intensity and information content. Its adoption suggests that not all such transmissions are equal, and that each must be evaluated separately before blanket international policy regarding active SETI is established. However, some scientists consider these fears about the dangers of METI to be panic and irrational superstition; see, for example, Alexander L. Zaitsev's papers. Biologist João Pedro de Magalhães also proposed, in 2015, transmitting an invitation message to any extraterrestrial intelligences that, per the Zoo Hypothesis, might already be watching us, and inviting them to respond, arguing that this would put us in no more danger than we are already in if the Zoo Hypothesis is correct.
On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake, Elon Musk and David Brin) at a convention of the American Association for the Advancement of Science discussed active SETI and whether transmitting a message to possible intelligent extraterrestrials in the cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". On 28 March 2015, a related essay by Seth Shostak was published in "The New York Times". The Breakthrough Message program is an open competition, announced in July 2015, to design a digital message that could be transmitted from Earth to an extraterrestrial civilization, with a US$1,000,000 prize pool. The message should be "representative of humanity and planet Earth". The program pledges "not to transmit any message until there has been a wide-ranging debate at high levels of science and politics on the risks and rewards of contacting advanced civilizations". As various SETI projects have progressed, some have criticized early claims by researchers as being too "euphoric". For example, Peter Schenkel, while remaining a supporter of SETI projects, wrote in 2006 that decades of null results called for a more sober, down-to-earth assessment of SETI's prospects. SETI has also occasionally been the target of criticism by those who suggest that it is a form of pseudoscience. In particular, critics allege that no observed phenomena suggest the existence of extraterrestrial intelligence, and furthermore that the assertion of the existence of extraterrestrial intelligence has no good Popperian criteria for falsifiability, as explained in a 2009 editorial in "Nature". "Nature" added that SETI was "marked by a hope, bordering on faith" that aliens were aiming signals at us, that a hypothetical alien SETI project looking at Earth with "similar faith" would be "sorely disappointed" (despite our many untargeted radar and TV signals, and our few targeted active SETI radio signals denounced by those fearing aliens), and that it had difficulties attracting even sympathetic working scientists and government funding because it was "an effort so likely to turn up nothing". However, "Nature" also added that "Nonetheless, a small SETI effort is well worth supporting, especially given the enormous implications if it did succeed" and that "happily, a handful of wealthy technologists and other private donors have proved willing to provide that support". Supporters of the Rare Earth hypothesis argue that advanced lifeforms are likely to be very rare, and that, if that is so, then SETI efforts will be futile. However, the Rare Earth hypothesis itself faces many criticisms. In 1993, Roy Mash claimed that "Arguments favoring the existence of extraterrestrial intelligence nearly always contain an overt appeal to big numbers, often combined with a covert reliance on generalization from a single instance" and concluded that "the dispute between believers and skeptics is seen to boil down to a conflict of intuitions which can barely be engaged, let alone resolved, given our present state of knowledge". In 2012, Milan M. Ćirković (then research professor at the Astronomical Observatory of Belgrade and a research associate of the Future of Humanity Institute at the University of Oxford) argued that Mash was unrealistically over-reliant on excessive abstraction that ignored the empirical information available to modern SETI researchers.
George Basalla, Emeritus Professor of History at the University of Delaware, is a critic of SETI who argued in 2006 that "extraterrestrials discussed by scientists are as imaginary as the spirits and gods of religion or myth", and has in turn been criticized by Milan M. Ćirković for, among other things, being unable to distinguish between "SETI believers" and "scientists engaged in SETI", who are often sceptical (especially about quick detection), such as Freeman Dyson (and, at least in their later years, Iosif Shklovsky and Sebastian von Hoerner), and for ignoring the difference between the knowledge underlying the arguments of modern scientists and those of ancient Greek thinkers. Massimo Pigliucci, Professor of Philosophy at CUNY-City College, asked in 2010 whether SETI is "uncomfortably close to the status of pseudoscience" due to the lack of any clear point at which negative results cause the hypothesis of Extraterrestrial Intelligence to be abandoned, before eventually concluding that SETI is "almost-science", which is described by Milan M. Ćirković as Pigliucci putting SETI in "the illustrious company of string theory, interpretations of quantum mechanics, evolutionary psychology and history (of the 'synthetic' kind done recently by Jared Diamond)", while adding that his justification for doing so with SETI "is weak, outdated, and reflecting particular philosophical prejudices similar to the ones described above in Mash and Basalla".
https://en.wikipedia.org/wiki?curid=28153
Sextans Sextans is a minor equatorial constellation which was introduced in 1687 by Johannes Hevelius. Its name is Latin for the astronomical sextant, an instrument that Hevelius made frequent use of in his observations. Sextans covers a rather dim, sparse region of the sky. It has only one star brighter than the fifth magnitude, namely α Sextantis at magnitude 4.49. The constellation contains a few double stars, including γ, 35, and 40 Sextantis, and a few notable variable stars, including β, 25, and 23 Sextantis and LHS 292. NGC 3115, an edge-on lenticular galaxy, is the only noteworthy deep-sky object. Sextans also lies near the ecliptic, which causes the Moon and some of the planets to pass through it occasionally for brief periods of time. The constellation is the location of the field studied by the COSMOS project, undertaken by the Hubble Space Telescope. Sextans B is a fairly bright dwarf irregular galaxy at magnitude 6.6, 4.3 million light-years from Earth; it is part of the Local Group of galaxies. CL J1001+0220 is, as of 2016, the most distant known galaxy cluster, at redshift z = 2.506, 11.1 billion light-years from Earth. In June 2015, astronomers reported evidence for Population III stars in the Cosmos Redshift 7 galaxy (at "z" = 6.60) found in Sextans. Such stars are likely to have existed in the very early universe (i.e., at high redshift), and may have started the production of chemical elements heavier than hydrogen that are needed for the later formation of planets and life as we know it.
https://en.wikipedia.org/wiki?curid=28154
List of nearest stars and brown dwarfs Some 52 stellar systems beyond our own, the Solar System, currently lie within 5.0 parsecs (about 16.3 light-years) of the Sun. These systems contain a total of 63 stars, of which 50 are red dwarfs, by far the most common type of star in the Milky Way. More massive stars, such as our own, make up the remaining 13. In addition to these "true" stars, scientists have identified 11 brown dwarfs (objects not quite massive enough to fuse hydrogen) and four white dwarfs (extremely dense collapsed cores that remain after stars such as our Sun have exhausted all fusable hydrogen in their cores and have slowly shed their outer layers). Despite the relative proximity of these 78 objects to Earth, only nine are bright enough in visible light to reach or exceed the limiting naked-eye brightness as seen from Earth, apparent magnitude 6.5. All of these objects are currently moving in the Local Bubble, a region within the Orion–Cygnus Arm of the Milky Way. Based on results from the Gaia telescope's second data release of April 2018, an estimated 694 stars will possibly approach the Solar System to less than 5 parsecs in the next 15 million years. Of these, 26 have a good probability of coming within and another 7 within . The true number is likely much higher, given the sheer number of stars that would need to be surveyed; a star approaching the Solar System 10 million years ago, moving at a typical Sun-relative velocity of 20–200 kilometers per second, would today be 600–6,000 light-years from the Sun, with millions of stars closer to the Sun. The closest encounter to the Sun so far predicted is that of the low-mass orange dwarf star Gliese 710 / HIP 89825, with roughly 60% the mass of the Sun. It is currently predicted to pass from the Sun in million years from the present, close enough to significantly disturb our Solar System's Oort cloud. The easiest way to determine the distance of objects this near to the Sun is parallax, which measures how much stars appear to move against background objects over the course of Earth's orbit around the Sun. As a parsec (parallax-second) is defined as the distance at which an object would appear to move exactly one second of arc against background objects, stars less than 5 parsecs away will have measured parallaxes of over 0.2 arcseconds, or 200 milliarcseconds. Determining past and future positions relies on accurate astrometric measurements of parallax and total proper motion (how far a star moves across the sky due to its actual velocity relative to the Sun), along with spectroscopically determined radial velocity (its speed directly towards or away from us, which combined with proper motion defines its true movement through space relative to the Sun). Both of these measurements are subject to increasing and significant errors over very long time spans, especially over the several-thousand-year time spans it takes for stars to noticeably move relative to each other. The classes of the stars and brown dwarfs are shown in the color of their spectral types (these colors are derived from conventional names for the spectral types and do not represent the stars' observed colors). Many brown dwarfs are not listed by visual magnitude but by near-infrared J-band apparent magnitude, because of how dim (and often invisible) they are in visible color bands (U, B or V).
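In symbols, the parallax-distance relation described above is the standard textbook one (not specific to this list): the distance in parsecs is simply the reciprocal of the parallax in arcseconds,

d\,[\mathrm{pc}] = \frac{1}{p\,[\mathrm{arcsec}]}, \qquad p = 0.2'' \;\Rightarrow\; d = \frac{1}{0.2} = 5\ \mathrm{pc} \approx 16.3\ \text{light-years},

which is exactly the 5-parsec cutoff used for this list.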
Absolute magnitude (with the electromagnetic 'light' band denoted in subscript) is the magnitude an object would have at a standard distance of 10 parsecs, across imaginary empty space devoid of dust and gas; the relation is given by the distance modulus below. Some of the parallaxes and resultant distances are rough measurements. Over long periods of time, the slow independent motions of stars change both their relative positions and their distances from the observer. This can cause other, currently more distant stars to fall within a stated range, which may be readily calculated and predicted using accurate astrometric measurements of parallax and total proper motions, along with spectroscopically determined radial velocities. Although predictions can be extrapolated back into the past or forward into the future, they are subject to increasingly significant cumulative errors over very long periods. Inaccuracies in these measured parameters make determining the true minimum distances of any encountering stars or brown dwarfs fairly difficult. One of the first stars known to approach the Sun particularly closely is Gliese 710. The star, whose mass is roughly half that of the Sun, is currently 62 light-years from the Solar System. It was first noticed in 1999 using data from the Hipparcos satellite, and was estimated to pass less than from the Sun in 1.4 million years. With the release of "Gaia"'s observations of the star, this has since been refined to a much closer , close enough to significantly disturb objects in the Oort cloud, which extends out to from the Sun. The second-closest object known to approach the Sun, 2MASS J0610-4246, was only discovered in 2018, after "Gaia"'s second data release. Its approach has not been fully described, because it is a distant binary with a red dwarf, but it almost certainly passed less than 1 light-year from the Solar System roughly 1.16 million years ago.
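For reference, the absolute magnitude M defined at the start of this section is tied to the apparent magnitude m and the distance d by the standard distance modulus (a textbook relation, not specific to this article):

M = m - 5\log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right).

For example, a star of apparent magnitude m = 11 at d = 5 pc has M = 11 − 5 log₁₀(0.5) ≈ 12.5; moved out to 10 pc, its apparent and absolute magnitudes would, by construction, be equal.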
https://en.wikipedia.org/wiki?curid=28162
Sagitta Sagitta is a dim but distinctive constellation in the northern sky. Its name is Latin for "arrow", and it should not be confused with the significantly larger constellation Sagittarius, the archer. Although Sagitta is an ancient constellation, it has no star brighter than 3rd magnitude and has the third-smallest area of all constellations. It was included among the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations defined by the International Astronomical Union. Located to the north of the celestial equator, Sagitta can be seen from every location on Earth except within the Antarctic Circle. The red giant Gamma Sagittae is the constellation's brightest star, with an apparent magnitude of 3.47. Delta, Epsilon, Zeta and Theta Sagittae are multiple stars whose components can be seen in small telescopes. V Sagittae is a cataclysmic variable binary star system that is expected to go nova around the year 2083 and briefly become the most luminous star in the Milky Way and one of the brightest stars in our sky. Two star systems have been found to have Jupiter-like planets, while a third, 15 Sagittae, has a brown dwarf companion. The Ancient Greeks called this constellation "Oistos", "the arrow". It was regarded as the weapon that Hercules used to kill the eagle (Aquila) of Jove that perpetually gnawed Prometheus' liver. The Arrow is located beyond the north border of Aquila, the Eagle. Richard Hinckley Allen proposed that the Arrow could be the one shot by Hercules towards the adjacent Stymphalian birds (his sixth labor), which had claws, beaks and wings of iron and lived on human flesh in the marshes of Arcadia; the birds are represented by the neighboring constellations Aquila the Eagle, Cygnus the Swan, and Lyra (the Vulture), with the Arrow still lying between them, whence the title Herculea. Eratosthenes claimed it as the arrow with which Apollo exterminated the Cyclopes. The Romans named it Sagitta. In Arabic, it became "al-sahm", "arrow", though this name became "Sham" and was transferred to Alpha Sagittae only. The Greek name has also been mistranslated as "ὁ istos", "the loom", and thus in Arabic "al-nawl". It was also called "al-'anaza", "pike/javelin". Four stars make up an arrow-shaped asterism located due north of the bright star Altair. Covering 79.9 square degrees and hence 0.194% of the sky, Sagitta ranks 86th of the 88 modern constellations by area; only Equuleus and Crux are smaller. Sagitta is most readily observed from late spring to early autumn by northern hemisphere observers, with midnight culmination occurring on 17 July. Its position in the Northern Celestial Hemisphere means that the whole constellation is visible to observers north of 69°S. Sagitta is bordered by Vulpecula to the north, Hercules to the west, Aquila to the south, and Delphinus to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Sge"; American astronomer Henry Norris Russell, who devised the code, had to resort to using the genitive form of the name to come up with a letter to include ('e') that was not in the name of the constellation Sagittarius. The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of twelve segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 16.08° and 21.64°. Johann Bayer gave Bayer designations to eight stars, labelling them Alpha to Theta.
What was viewed by Bayer, Friedrich Wilhelm Argelander and Heis as a single star, Theta, was in fact three stars, and is now equated to HR 7705. John Flamsteed added the letters x (mistaken as Chi), y and z to 13, 14 and 15 Sagittae in his "Catalogus Britannicus"; all three were dropped by later astronomers John Bevis and Francis Baily. In his "Uranometria", Bayer depicted Alpha, Beta and Epsilon Sagittae as the fins of the arrow. Also known as Sham, Alpha Sagittae is a yellow bright giant of spectral class G1 II with an apparent magnitude of 4.38, lying at a distance of 382 ± 8 light-years from Earth. Originally four times as massive as the Sun, it has swollen and brightened to 20 times the Sun's diameter and 340 times its luminosity. Also of magnitude 4.38, Beta Sagittae is a G-type giant located 420 ± 10 light-years from Earth. Estimated to be around 129 million years old, it is 4.33 times as massive as the Sun and has expanded to roughly 27 times its diameter. Epsilon Sagittae is a double star whose component stars can be seen in a small telescope. With an apparent magnitude of 5.77, the main star is a 331-million-year-old yellow giant of spectral type G8 III, around 3.09 times as massive as the Sun, that has swollen to its diameter. It is 580 ± 10 light-years distant. Its companion, of magnitude 8.35, is 87.4 arcseconds away, but is actually an unrelated blue supergiant around 7,000 light-years distant from Earth. Ptolemy saw the constellation's brightest star, Gamma Sagittae, as marking the arrow's head, while Bayer saw Gamma, Eta and Theta as depicting the arrow's shaft. Gamma Sagittae is a red giant of spectral type M0III and magnitude 3.47, lying at a distance of 258 ± 4 light-years from Earth. It has around 90% of the Sun's mass yet has a radius 54 times that of the Sun and is 575 times as bright. It is most likely on the red-giant branch of its evolutionary lifespan, having exhausted its core hydrogen and now burning it in a shell. Eta Sagittae is an orange giant of spectral class K2 III with a magnitude of 5.1. Located 155.9 ± 0.9 light-years from Earth, it has a 61.1% chance of being a member of the Hyades-Pleiades stream of stars that share a common motion through space. Theta Sagittae is a multiple star system, with two components 12 arcseconds apart visible in a small telescope. According to Bayer, Delta and Zeta depicted the arrow's spike; both of these stars are separable into multiple component stars in a small telescope. Delta Sagittae is a suspected visual double (M2 II + A0 V, probably a single image with a composite spectrum) of magnitude 3.82. Zeta Sagittae is a triple system approximately 326 light-years from Earth, the primary being an A-type star. FG Sagittae is a remote, highly luminous star around 4,000 light-years distant from Earth. WR 124 is a Wolf–Rayet star moving at great speed, surrounded by a nebula of ejected gas. HD 183143 is a remote blue-white star around 6,500 light-years away that has been described as a blue hypergiant; it is an Alpha Cygni variable. R Sagittae is a member of the rare RV Tauri class of variable star, ranging in magnitude from 8.2 to 10.4. It is around 8,100 light-years distant. It has a diameter times that of the Sun and is as luminous, yet is most likely less massive than the Sun. An ageing star, it has moved on from the asymptotic giant branch of stellar evolution and is on its way to becoming a planetary nebula. S Sagittae is a classical Cepheid that varies from magnitude 5.24 to 6.04 every 8.38 days.
It is a yellow-white supergiant that pulsates between spectral types F6Ib and G5Ib. Around 6 or 7 times as massive and 3,500 times as luminous as the Sun, it is located around 5,100 light-years from Earth. U Sagittae is an eclipsing binary that varies in brightness between magnitudes 6.6 and 9.2 over 3.4 days, making it a suitable target for enthusiasts with small telescopes. The system is 900 ± 10 light-years distant. X Sagittae is a semiregular variable ranging between magnitudes 7.9 and 8.4 over 196 days. A carbon star, it has a surface temperature of 2,576 K. Located near 18 Sagittae is V Sagittae, the prototype of the V Sagittae variables, cataclysmic variables that are also super soft X-ray sources. It is expected to become a luminous red nova when its two stars merge around the year 2083, briefly becoming the most luminous star in the Milky Way and one of the brightest stars in our sky. WZ Sagittae is another cataclysmic variable, composed of a white dwarf with about 85% of the Sun's mass and a low-mass companion that has been calculated to be a brown dwarf of spectral class L2, only 8% as massive as the Sun. Normally a faint object dimmer than magnitude 15, it flared up in 1913, 1946 and 1978 to become visible in binoculars. The black widow pulsar (B1957+20) is the second millisecond pulsar ever discovered. It is a massive neutron star that is ablating its brown-dwarf-sized companion, which causes the pulsar's radio signals to attenuate as they pass through the outflowing material. HD 231701 is a yellow-white main sequence star hotter and larger than the Sun, with a Jupiter-like planet that was discovered in 2007 by the radial velocity technique. The planet orbits at a distance of from the star with a period of 141.6 days. HAT-P-34 is a star 1.392 ± 0.047 times as massive as the Sun, with 1.535 (+0.135/−0.102) times its radius and 3.63 (+0.75/−0.51) times its luminosity. With an apparent magnitude of 10.4, it is 819 ± 9 light-years distant. A planet 3.328 ± 0.211 times as massive as Jupiter was discovered transiting it in 2012. With a period of 5.45 days and a distance of 0.06 astronomical units from its star, the planet has an estimated surface temperature of 1,520 ± 60 K. 15 Sagittae is a solar analog, a star similar to the Sun, with 1.08 ± 0.04 times its mass, 1.115 ± 0.021 times its radius and 1.338 ± 0.03 times its luminosity. It has a brown dwarf substellar companion that is around the same size as Jupiter but 69 times as massive, with a surface temperature between 1,510 and 1,850 K, taking around 73.3 years to complete an orbit around the star. The system is estimated to be 2.5 ± 1.8 billion years old. The band of the Milky Way and the Great Rift within it pass through Sagitta, with Alpha, Beta and Epsilon Sagittae marking the Rift's border. Located between Beta and Gamma Sagittae is Messier 71, a very loose globular cluster long mistaken for a dense open cluster. It lies at a distance of about 13,000 light-years from Earth and was first discovered by the French astronomer Philippe Loys de Chéseaux in 1745 or 1746. There are two notable planetary nebulae in Sagitta: NGC 6886, composed of a hot central post-AGB star with 55% of the Sun's mass yet 2,700 ± 850 times its luminosity and a surface temperature of 142,000 K, with a surrounding nebula estimated to have been expanding for between 1,280 and 1,600 years; and the Necklace Nebula, originally a close binary, one component of which swallowed the other as it expanded to become a giant star.
The smaller star remained in orbit inside the larger one, whose rotation speed increased greatly as a result, flinging its outer layers off into space and forming a ring studded with knots of bright gas made from clumps of stellar material. Both nebulae are around 15,000 light-years distant from Earth.
https://en.wikipedia.org/wiki?curid=28163
Sharable Content Object Reference Model Sharable Content Object Reference Model (SCORM) is a collection of standards and specifications for web-based electronic educational technology (also called e-learning). It defines communications between client-side content and a host system (called "the run-time environment"), which is commonly supported by a learning management system. SCORM also defines how content may be packaged into a transferable ZIP file called a "Package Interchange Format." SCORM is a specification of the Advanced Distributed Learning (ADL) Initiative from the Office of the United States Secretary of Defense. SCORM 2004 introduced a complex idea called sequencing, which is a set of rules that specifies the order in which a learner may experience content objects. In simple terms, the rules constrain a learner to a fixed set of paths through the training material, permit the learner to "bookmark" their progress when taking breaks, and assure the acceptability of test scores achieved by the learner. The standard uses XML, and it is based on the results of work done by AICC, IEEE LTSC, and Ariadne. SCORM was designed to be web-based and uses JavaScript to facilitate communication between the client-side content and the run-time environment. Each SCORM version specifies the methods that the run-time environment should support and how those methods should behave; content launched by the run-time environment can then call those methods from JavaScript (a sketch of this exchange is given below). SCORM 1.1 was the first production version. It used a Course Structure Format XML file based on the AICC specifications to describe content structure, but lacked a robust packaging manifest and support for metadata, and was quickly abandoned in favor of SCORM 1.2. SCORM 1.2 was the first version to be widely used; it is still widely used and is supported by most learning management systems. SCORM 2004 is the current version. It is based on new standards for the API and for content object-to-runtime environment communication, with many ambiguities of previous versions resolved. It adds the ability to specify adaptive sequencing of the activities that use the content objects, as well as the ability to share and use information about success status for multiple learning objectives or competencies across content objects and across courses for the same learner within the same learning management system. A more robust test suite helps ensure good interoperability. The Experience API (also known as xAPI or Tin Can API) was finalized at version 1.0 in April 2013. The Experience API solves many of the problems inherent in older versions of SCORM; just as with SCORM, ADL is the steward of the Experience API. AICC planned to use xAPI as the transport standard for its cmi5 specification, but the AICC membership decided to dissolve the organization and transferred cmi5 to ADL. The Experience API (Tin Can API) is a web service that allows software clients to read and write experiential data in the form of "statement" objects. In their simplest form, statements take the form "I did this", or more generally "actor verb object"; more complex statement forms can be used. There is also a built-in query API to help filter recorded statements, and a state API that provides a sort of "scratch space" for consuming applications. Experience API statements are stored in a data store called a Learning Record Store, which can exist on its own or within a learning management system.
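The JavaScript exchange described above follows a fixed pattern: the LMS publishes an API object on a parent (or opener) window, and the content discovers it and calls the standard run-time methods. The following is a minimal sketch in TypeScript, assuming the SCORM 1.2 method and data-model names (LMSInitialize, cmi.core.*, and so on, which are defined by the specification); the window-walking helper and the score values are illustrative only.

interface Scorm12API {
  LMSInitialize(arg: ""): string;               // returns "true" or "false"
  LMSGetValue(element: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: ""): string;
  LMSFinish(arg: ""): string;
  LMSGetLastError(): string;
}

// Walk up the window hierarchy looking for the LMS-provided "API" object.
function findAPI(win: Window): Scorm12API | null {
  let w: Window | null = win;
  for (let hops = 0; w && hops < 10; hops++) {
    const candidate = (w as any).API as Scorm12API | undefined;
    if (candidate) return candidate;
    if (w.parent === w) break;
    w = w.parent;
  }
  return null;
}

const api = findAPI(window);
if (api && api.LMSInitialize("") === "true") {
  api.LMSSetValue("cmi.core.score.raw", "87");         // report a raw score
  api.LMSSetValue("cmi.core.lesson_status", "passed"); // report completion
  api.LMSCommit("");                                   // ask the LMS to persist the data
  api.LMSFinish("");                                   // end the communication session
}

An Experience API statement, by contrast, is a free-standing JSON object rather than a call into a host-provided API. A minimal "actor verb object" statement might look like the following sketch (the learner, course IRI and endpoint are invented; the "completed" verb IRI shown is one of the commonly used ADL-hosted verbs):

const statement = {
  actor: { mbox: "mailto:learner@example.com", name: "Example Learner" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/completed",
    display: { "en-US": "completed" },
  },
  object: { id: "http://example.com/courses/scorm-overview" },
};
// Statements are POSTed to a Learning Record Store's /statements
// resource with an X-Experience-API-Version header.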
https://en.wikipedia.org/wiki?curid=28165
Sejm The Sejm, officially known as the Sejm of the Republic of Poland, is the lower house of the bicameral parliament of Poland. The Sejm has been the highest governing body of the Third Polish Republic since the transition of government in 1989. Along with the upper house of parliament, the Senate, it forms the national legislature of Poland. The Sejm is composed of 460 deputies (singular "deputowany" or "poseł", "envoy") elected every four years by universal ballot, and is presided over by a speaker called the "Marshal of the Sejm" ("Marszałek Sejmu"). The first true lower house of parliament was founded in 1493 as an assembly of Polish nobles and their representatives. In the Kingdom of Poland, the term ""Sejm"" referred to an entire two-chamber parliament, comprising the Chamber of Deputies, the Senate and the King. It was thus a three-estate parliament, and the absolute authority of the monarch was diminished by parliament's ruling under the "nihil novi" act of 1505. The Henrician Articles of 1573 further strengthened the assembly's jurisdiction, making Poland a constitutional elective monarchy. Since the Second Polish Republic (1918–1939), ""Sejm"" has referred only to the larger house of the parliament. The origin of the contemporary "Sejm" (meaning "gathering") is traced back to the King's Councils, the "wiece", which gained considerable authority during the time of Poland's fragmentation between 1146 and 1295. The 1180 Sejm in Łęczyca (known as the 'First Polish parliament') was the most notable of these councils, in that for the first time in Poland's history it established laws constraining the power of the ruler. It forbade arbitrary sequestration of supplies in the countryside and the takeover of bishopric lands after the death of a bishop. These early "Sejm"s were not regular events; they convened at the King's behest. Following the 1493 "Sejm" in Piotrków, it became a regularly convening body, to which indirect elections were held every two years. The bicameral system was also established then: the "Sejm" comprised two chambers, the "Senat" (Senate) of 81 bishops and other dignitaries, and the Chamber of Deputies, made up of 54 envoys elected by the smaller local "sejmiks" (assemblies of landed nobility) in each of the Kingdom's provinces. At the time, Poland's nobility, which accounted for around 10% of the state's population (then the highest proportion in Europe), was becoming particularly influential, and with the eventual development of the Golden Liberty the "Sejm"'s powers increased dramatically. Over time, the envoys in the lower chamber grew in number and power as they pressed the king for more privileges. The "Sejm" eventually became even more active in supporting the goals of the privileged classes when the King ordered that the landed nobility and their estates (peasants) be drafted into military service. The Union of Lublin in 1569 united the Kingdom of Poland and the Grand Duchy of Lithuania into a single state, the Polish–Lithuanian Commonwealth, and thus the "Sejm" was supplemented with new envoys from among the Lithuanian nobility. The Commonwealth preserved the three-estates system, with the "Sejm", the Senate and the King forming the estates and the supreme deliberating body of the state.
In the first few decades of the 16th century, the Senate had established its precedence over the "Sejm"; from the mid-1500s onwards, however, the "Sejm" became a very powerful representative body of the "szlachta" ("middle nobility"). Its chambers reserved the final decisions in legislation, taxation, budget, and treasury matters (including military funding), foreign policy, and the conferment of nobility. The 1573 Warsaw Confederation saw the nobles of the "Sejm" officially sanction and guarantee religious tolerance in Commonwealth territory, ensuring a refuge for those fleeing the ongoing Reformation and Counter-Reformation wars in Europe. Until the end of the 16th century, unanimity was not required, and majority voting was the most commonly used system. Later, with the rise of the Polish magnates and their increasing power, the unanimity principle was re-introduced with the institution of the nobility's right of "liberum veto" (Latin: "I freely forbid"). Additionally, if the envoys were unable to reach a unanimous decision within six weeks (the time limit of a single session), deliberations were declared void and all previous acts passed by that "Sejm" were annulled. From the mid-17th century onward, any objection to a "Sejm" resolution, by either an envoy or a senator, automatically caused the rejection of other, previously approved resolutions. This was because all resolutions passed by a given session of the "Sejm" formed a single whole and were published as the annual "constituent act" of the "Sejm", e.g. the ""Anno Domini" 1667" act. In the 16th century, no single person or small group dared to hold up proceedings, but, from the second half of the 17th century, the "liberum veto" was used to virtually paralyze the "Sejm", and it brought the Commonwealth to the brink of collapse. The "liberum veto" was abolished with the adoption of Poland's 3rd May Constitution of 1791, legislation passed as the "Government Act", which the "Sejm" required four years to propagate and adopt. The constitution's acceptance, and the possible long-term consequences it may have had, are arguably the reason why the powers of Habsburg Austria, Russia and Prussia then decided to partition the Polish–Lithuanian Commonwealth, thus putting an end to over 300 years of Polish parliamentary continuity. It is estimated that between 1493 and 1793 a "Sejm" was held 240 times, for a total debate time of 44 years. After the fall of the Duchy of Warsaw, which existed as a Napoleonic client state between 1807 and 1815, and its short-lived "Sejm" of the Duchy of Warsaw, the "Sejm" of Congress Poland was established in Congress Poland of the Russian Empire; it was composed of the king (the Russian emperor), the upper house (Senate), and the lower house (Chamber of Deputies). Overall, during the period from 1795 until the re-establishment of Poland's sovereignty in 1918, little power was actually held by any Polish legislative body, and the occupying powers of Russia, Prussia (later united Germany) and Austria legislated for their own respective formerly Polish territories at a national level. The Chamber of Deputies, despite its name, consisted not only of 77 envoys (sent by local assemblies) from the hereditary nobility, but also of 51 deputies elected by the non-noble population.
All deputies were covered by parliamentary immunity, with each serving a six-year term of office and a third of the deputies being elected every two years. Candidates for deputy had to be able to read and write and to have a certain amount of wealth. The legal voting age was 21, except for citizens serving in the military, who were not allowed to vote. Parliamentary sessions were initially convened every two years and lasted for (at least) 30 days. However, after many clashes between liberal deputies and conservative government officials, sessions were later called only four times (1818, 1820, 1826, and 1830, with the last two sessions held in secret). The "Sejm" had the right to call for votes on civil and administrative legal issues, and, with permission from the king, it could also vote on matters related to fiscal policy and the military. It had the right to exercise control over government officials and to file petitions. The 64-member Senate, on the other hand, was composed of "voivodes" and "kasztelans" (both types of provincial governors), Russian envoys, diplomats or princes, and nine bishops. It acted as the Parliamentary Court, had the right to control "citizens' books", and had legislative rights similar to those of the Chamber of Deputies. In the Free City of Cracow (1815–1846), a unicameral Assembly of Representatives was established, and from 1827 a unicameral provincial "sejm" existed in the Grand Duchy of Poznań. Poles were elected to, and represented the majority in, both of these legislatures; however, these were largely powerless institutions that exercised only very limited authority. After numerous failures to secure legislative sovereignty in the early 19th century, many Poles simply gave up trying to attain a degree of independence from their foreign master-states. In the Austrian partition, a relatively powerless "Sejm" of the Estates operated until the time of the Spring of Nations. After this, in the mid-to-late 19th century, only in autonomous Galicia (1861–1914) was there a unicameral and functional national "Sejm", the "Sejm" of the Land, recognised today as having played a major and overwhelmingly positive role in the development of Polish national institutions. In the second half of the 19th century, Poles were able to become members of the parliaments of Austria, Prussia and Russia, where they formed Polish Clubs. Deputies of Polish nationality were elected to the Prussian "Landtag" from 1848, and then to the German Empire's "Reichstag" from 1871. Polish deputies were members of the Austrian State Council (from 1867), and from 1906 were also elected to the Russian Imperial State "Duma" (lower chamber) and to the State Council (upper chamber). After the First World War and the re-establishment of Polish independence, the convocation of parliament under the democratic electoral law of 1918 became an enduring symbol of the new state's wish to demonstrate and establish continuity with the 300-year Polish parliamentary tradition established before the partitions. Maciej Rataj emphatically paid tribute to this with the phrase: "There is Poland there, and so is the "Sejm"". During the interwar period of Poland's independence, the first Legislative "Sejm" of 1919, a constituent assembly, passed the Small Constitution of 1919, which introduced a parliamentary republic and proclaimed the principle of the "Sejm"'s sovereignty.
This was then strengthened, in 1921, by the March Constitution, one of the most democratic European constitutions enacted after the end of World War I. The constitution established a political system based on Montesquieu's doctrine of the separation of powers, and restored the bicameral "Sejm", consisting of a chamber of deputies (to which alone the name ""Sejm"" was from then on applied) and the Senate. In 1919, Roza Pomerantz-Meltzer, a member of the Zionist party, became the first woman ever elected to the "Sejm". The legal content of the March Constitution allowed for "Sejm" supremacy in the system of state institutions at the expense of the executive, thus making the Polish state a parliamentary republic. An attempt to strengthen executive powers in 1926 (through the August Amendment) proved too limited and largely failed to avert the legislative gridlock which had ensued as a result of excessive parliamentary power in a state whose legislature seated numerous diametrically opposed political parties. In 1935, the parliamentary republic was weakened further when, by way of Józef Piłsudski's May Coup, the president was forced to sign the April Constitution of 1935, an act through which the head of state assumed the dominant position in legislating for the state and the Senate increased its power at the expense of the "Sejm". On 2 September 1939, the "Sejm" held its final pre-war session, during which it declared Poland's readiness to defend itself against the invading German forces. On 2 November 1939, the President dissolved the "Sejm" and the Senate, which were then, according to plan, to resume their activity within two months of the end of the Second World War; this, however, never happened. During wartime, the National Council (1939–1945) was established to represent the legislature as part of the Polish Government in Exile. Meanwhile, in Nazi-occupied Poland, the Council of National Unity was set up; this body functioned from 1944 to 1945 as the parliament of the Polish Underground State. With the cessation of hostilities in 1945 and the subsequent rise to power of the Communist-backed Provisional Government of National Unity, the Second Polish Republic legally ceased to exist. The "Sejm" in the Polish People's Republic had 460 deputies throughout most of its history. At first, this number was declared to represent one deputy per 60,000 citizens (425 were elected in 1952); in 1960, as the population grew, the declaration was changed and the number of deputies was fixed. The constitution stated that the deputies were representative "of" the people and could be recalled "by" the people, but this article was never used, and, instead of the "five-point electoral law", a non-proportional, "four-point" version was used. Legislation was passed by majority vote. Under the 1952 Constitution, the Sejm was defined as "the highest organ of State authority" in Poland, as well as "the highest spokesman of the will of the people in town and country." On paper, it was vested with great lawmaking and oversight powers; for instance, it was empowered with control over "the functioning of other organs of State authority and administration," and ministers were required to answer questions posed by deputies within seven days. In practice, it did little more than rubber-stamp decisions already made by the communist Polish United Workers' Party and its executive bodies. This was standard practice in nearly all communist regimes, due to the principle of democratic centralism.
The "Sejm" voted on the budget and on the periodic national plans that were a fixture of communist economies. The "Sejm" deliberated in sessions that were ordered to convene by the State Council. The "Sejm" also chose a "Prezydium" ("presiding body") from among its members. The "Prezydium" was headed by the speaker, or Marshal, who was always a member of the United People's Party. In its preliminary session, the "Sejm" also nominated the Prime Minister, the Council of Ministers of Poland, and members of the State Council. It also chose many other government officials, including the head of the Supreme Chamber of Control and members of the State Tribunal and the Constitutional Tribunal, as well as the Ombudsman (the last three bodies of which were created in the 1980s). When the Sejm was not in session, the State Council had the power to issue decrees that had the force of law. However, those decrees had to be approved by the Sejm at its next session. In practice, the principles of democratic centralism meant that such approval was only a formality. The Senate of Poland was abolished by the Polish people's referendum in 1946, after which the "Sejm" became the sole legislative body in Poland. Even though the "Sejm" was largely subservient to the Communist party, one deputy, Romuald Bukowski (an independent) voted against the imposition of martial law in 1982. After the end of communism in 1989, the Senate was reinstated as the second house of a bicameral national assembly, while the "Sejm" remained the first house. The "Sejm" is now composed of 460 deputies elected by proportional representation every four years. Between 7 and 19 deputies are elected from each constituency using the d'Hondt method (with one exception, in 2001, when the Sainte-Laguë method was used), their number being proportional to their constituency's population. Additionally, a threshold is used, so that candidates are chosen only from parties that gained at least 5% of the nationwide vote (candidates from ethnic-minority parties are exempt from this threshold).
https://en.wikipedia.org/wiki?curid=28167
Stock exchange A stock exchange, securities exchange, or bourse is a facility where stockbrokers and traders can buy and sell securities, such as shares of stock, bonds, and other financial instruments. Stock exchanges may also provide facilities for the issue and redemption of such securities and instruments, and for capital events including the payment of income and dividends. Securities traded on a stock exchange include stock issued by listed companies, unit trusts, derivatives, pooled investment products and bonds. Stock exchanges often function as "continuous auction" markets, with buyers and sellers consummating transactions via open outcry at a central location such as the floor of the exchange, or by using an electronic trading platform. To be able to trade a security on a certain stock exchange, the security must be listed there. Usually, there is a central location at least for record keeping, but trade is increasingly less linked to a physical place, as modern markets use electronic communication networks, which give them the advantages of increased speed and reduced transaction costs. Trade on an exchange is restricted to brokers who are members of the exchange. In recent years, various other trading venues, such as electronic communication networks, alternative trading systems and "dark pools", have taken much of the trading activity away from traditional stock exchanges. Initial public offerings of stocks and bonds to investors are done in the primary market, and subsequent trading is done in the secondary market. A stock exchange is often the most important component of a stock market. Supply and demand in stock markets are driven by various factors that, as in all free markets, affect the price of stocks (see stock valuation). There is usually no obligation for stock to be issued through the stock exchange itself, nor must stock be subsequently traded on an exchange; such trading may be "off exchange" or over-the-counter, which is the usual way that derivatives and bonds are traded. Increasingly, stock exchanges are part of a global securities market. Stock exchanges also serve an economic function in providing liquidity to shareholders, by giving them an efficient means of disposing of shares. There is little consensus among scholars as to when corporate stock was first traded. Some see the key event as the Dutch East India Company's founding in 1602, while others point to earlier developments (in Bruges, in Antwerp in 1531, and in Lyon in 1548). The first book in the history of securities exchange, the "Confusion of Confusions", was written by the Dutch-Jewish trader Joseph de la Vega, and the Amsterdam Stock Exchange is often considered the oldest "modern" securities market in the world. On the other hand, economist Ulrike Malmendier of the University of California at Berkeley argues that a share market existed as far back as ancient Rome, deriving from the Etruscan "Argentari". In the Roman Republic, which existed for centuries before the Empire was founded, there were "societates publicanorum", organizations of contractors or leaseholders who performed temple-building and other services for the government. One such service was the feeding of geese on the Capitoline Hill, as a reward to the birds after their honking warned of a Gallic invasion in 390 B.C. Participants in such organizations had "partes", or shares, a concept mentioned various times by the statesman and orator Cicero. In one speech, Cicero mentions "shares that had a very high price at the time".
Such evidence, in Malmendier's view, suggests that the instruments were tradable, with fluctuating values based on an organization's success. The "societates" declined into obscurity in the time of the emperors, as most of their services were taken over by direct agents of the state. Tradable bonds as a commonly used type of security were a more recent innovation, spearheaded by the Italian city-states of the late medieval and early Renaissance periods. While the Italian city-states produced the first transferable government bonds, they did not develop the other ingredient necessary to produce a fully fledged capital market: a stock market in its modern sense. In the early 1600s, the Dutch East India Company (VOC) became the first company in history to issue bonds and shares of stock to the general public. As Edward Stringham (2015) notes, "companies with transferable shares date back to classical Rome, but these were usually not enduring endeavors and no considerable secondary market existed (Neal, 1997, p. 61)." The VOC, formed to build up the spice trade, operated as a colonial ruler in what is now Indonesia and beyond, a purview that included conducting military operations against the wishes of the exploited natives and of competing colonial powers. Control of the company was held tightly by its directors, with ordinary shareholders having little influence on management and not even access to the company's accounting statements. However, shareholders were rewarded well for their investment: the company paid an average dividend of over 16% per year from 1602 to 1650. Financial innovation in Amsterdam took many forms. In 1609, investors led by Isaac Le Maire formed history's first bear-market syndicate, but their coordinated trading had only a modest impact in driving down share prices, which tended to remain robust throughout the 17th century. By the 1620s, the company was expanding its securities issuance with the first use of corporate bonds. Joseph de la Vega, also known as Joseph Penso de la Vega and by other variations of his name, was a trader from a Spanish-Jewish family and a prolific writer, as well as a successful businessman, in 17th-century Amsterdam. His 1688 book "Confusion of Confusions" explained the workings of the city's stock market. It was the earliest book about stock trading and the inner workings of a stock market. Taking the form of a dialogue between a merchant, a shareholder and a philosopher, the book described a market that was sophisticated but also prone to excesses, and de la Vega offered advice to his readers on such topics as the unpredictability of market shifts and the importance of patience in investment. In England, King William III sought to modernize the kingdom's finances to pay for its wars; thus the first government bonds were issued in 1693, and the Bank of England was set up the following year. Soon thereafter, English joint-stock companies began going public. London's first stockbrokers, however, were barred from the old commercial center known as the Royal Exchange, reportedly because of their rude manners. Instead, the new trade was conducted from coffee houses along Exchange Alley. By 1698, a broker named John Castaing, operating out of Jonathan's Coffee House, was posting regular lists of stock and commodity prices. Those lists mark the beginning of the London Stock Exchange. One of history's greatest financial bubbles occurred around 1720.
At the center of it were the South Sea Company, set up in 1711 to conduct English trade with South America, and the Mississippi Company, focused on commerce with France's Louisiana colony and touted by transplanted Scottish financier John Law, who was acting in effect as France's central banker. Investors snapped up shares in both, and whatever else was available. In 1720, at the height of the mania, there was even an offering of "a company for carrying on an undertaking of great advantage, but nobody to know what it is". By the end of that same year, share prices had started collapsing, as it became clear that expectations of imminent wealth from the Americas were overblown. In London, Parliament passed the Bubble Act, which stated that only royally chartered companies could issue public shares. In Paris, Law was stripped of office and fled the country. Stock trading was more limited and subdued in subsequent decades. Yet the market survived, and by the 1790s shares were being traded in the young United States. On May 17, 1792, the New York Stock Exchange opened under a platanus occidentalis (buttonwood tree) in New York City, as 24 stockbrokers signed the Buttonwood Agreement, agreeing to trade five securities under that buttonwood tree. Stock exchanges have multiple roles in the economy; these may include the following. Besides the borrowing capacity provided to an individual or firm by the banking system, in the form of credit or a loan, a stock exchange provides companies with the facility to raise capital for expansion through selling shares to the investing public. Capital-intensive companies, particularly high-tech companies, always need to raise high volumes of capital in their early stages. For this reason, the public market provided by the stock exchanges has been one of the most important funding sources for many capital-intensive startups. More recently, it has become much more demanding for the high-tech entrepreneur to take his or her company public, unless either the company is already generating sales and earnings, or the company has demonstrated credibility and potential from successful outcomes: clinical trials, market research, patent registrations, etc. This is quite different from the situation of the 1990s to early-2000s period, when a number of companies (particularly Internet boom and biotechnology companies) went public in the most prominent stock exchanges around the world in the total absence of sales, earnings, or any type of well-documented promising outcome. Though it is less common, it still happens that highly speculative and financially unpredictable hi-tech startups are listed for the first time on a major stock exchange. Additionally, there are smaller, specialized entry markets for these kinds of companies with stock indexes tracking their performance (examples include the Alternext, CAC Small, SDAX, TecDAX). Companies have also raised significant amounts of capital through R&D limited partnerships. Tax law changes that were enacted in 1987 in the United States changed the tax deductibility of investments in R&D limited partnerships. For a partnership to be of interest to investors today, the cash-on-cash return must be high enough to entice them. A general source of capital for startup companies has been venture capital. 
This source remains largely available today, but the amount that venture capital firms in aggregate will invest in any one company is limited (approximately $15 million in 2001 for a biotechnology company). Another alternative source of cash for a private company is a corporate partner, usually an established multinational company, which provides capital for the smaller company in return for marketing rights, patent rights, or equity. Corporate partnerships have been used successfully in a large number of cases. When people draw on their savings and invest in shares (through an initial public offering or the seasoned equity offering of an already listed company), it usually leads to a more rational allocation of resources, because funds that could have been consumed, or kept in idle deposits with banks, are mobilized and redirected to help companies' management boards finance their organizations. This may promote business activity with benefits for several economic sectors such as agriculture, commerce and industry, resulting in stronger economic growth and higher productivity levels of firms. Companies view acquisitions as an opportunity to expand product lines, increase distribution channels, hedge against volatility, increase their market share, or acquire other necessary business assets. A takeover bid or mergers and acquisitions through the stock market is one of the simplest and most common ways for a company to grow by acquisition or fusion. Both casual and professional stock investors, from large institutional investors to ordinary middle-class families, share in the wealth of profitable businesses through dividends and stock price increases that may result in capital gains. Unprofitable and troubled businesses may result in capital losses for shareholders. By having a wide and varied scope of owners, companies generally tend to improve management standards and efficiency to satisfy the demands of these shareholders and the more stringent rules for public corporations imposed by public stock exchanges and the government. This improvement can be attributed in some cases to the price mechanism exerted through shares of stock, wherein the price of the stock falls when management is considered poor (making the firm vulnerable to a takeover by new management) or rises when management is doing well (making the firm less vulnerable to a takeover). In addition, publicly listed shares are subject to greater transparency so that investors can make informed decisions about a purchase. Consequently, it is alleged that public companies (companies that are owned by shareholders who are members of the general public and trade shares on public exchanges) tend to have better management records than privately held companies (those companies where shares are not publicly traded, often owned by the company founders, their families and heirs, or otherwise by a small group of investors). Despite this claim, some well-documented cases are known where it is alleged that there has been considerable slippage in corporate governance on the part of some public companies, particularly in the cases of accounting scandals. The policies that led to the dot-com bubble in the late 1990s and the subprime mortgage crisis in 2007–08 are also examples of corporate mismanagement. 
The mismanagement of companies such as Pets.com (2000), Enron (2001), One.Tel (2001), Sunbeam Products (2001), Webvan (2001), Adelphia Communications Corporation (2002), MCI WorldCom (2002), Parmalat (2003), American International Group (2008), Bear Stearns (2008), Lehman Brothers (2008), General Motors (2009) and Satyam Computer Services (2009) all received plenty of media attention. Many banks and companies worldwide use International Securities Identification Numbers (ISINs) to uniquely identify their stocks, bonds and other securities. Adding an ISIN code helps to distinctly identify securities, and the ISIN system is used worldwide by funds, companies, and governments. However, when poor financial, ethical or managerial records become public, stock investors tend to lose money as the stock and the company tend to lose value. In the stock exchanges, shareholders of underperforming firms are often penalized by significant share price decline, and they tend as well to dismiss incompetent management teams. As opposed to other businesses that require huge capital outlay, investing in shares is open to both large and small stock investors, as minimum investment amounts are small. Therefore, the stock exchange provides the opportunity for small investors to own shares of the same companies as large investors. Governments at various levels may decide to borrow money to finance infrastructure projects such as sewage and water treatment works or housing estates by selling another category of securities known as bonds. These bonds can be raised through the stock exchange whereby members of the public buy them, thus loaning money to the government. The issuance of such bonds can obviate, in the short term, direct taxation of citizens to finance development—though by securing such bonds with the full faith and credit of the government instead of with collateral, the government must eventually tax citizens or otherwise raise additional funds to make any regular coupon payments and refund the principal when the bonds mature. At the stock exchange, share prices rise and fall depending, largely, on economic forces. Share prices tend to rise or remain stable when companies and the economy in general show signs of stability and growth. A recession, depression, or financial crisis could eventually lead to a stock market crash. Therefore, the movement of share prices and in general of the stock indexes can be an indicator of the general trend in the economy. Each stock exchange imposes its own listing requirements upon companies that want to be listed on that exchange. Such conditions may include a minimum number of shares outstanding, a minimum market capitalization, and a minimum annual income. Stock exchanges originated as mutual organizations, owned by their member stockbrokers. However, the major stock exchanges have "demutualized", whereby the members sell their shares in an initial public offering. In this way the mutual organization becomes a corporation, with shares that are listed on a stock exchange. Examples are the Australian Securities Exchange (1998), Euronext (merged with the New York Stock Exchange), NASDAQ (2002), Bursa Malaysia (2004), the New York Stock Exchange (2005), Bolsas y Mercados Españoles, and the São Paulo Stock Exchange (2007). 
The Shenzhen Stock Exchange and Shanghai Stock Exchange can be characterized as quasi-state institutions insofar as they were created by government bodies in China and their leading personnel are directly appointed by the China Securities Regulatory Commission. Another example is the Tashkent Stock Exchange, established in 1994, three years after the collapse of the Soviet Union; it is mainly state-owned but has the form of a public corporation (joint-stock company). Korea Exchange (KRX) owns 25% less one share of the Tashkent Stock Exchange. In 2018, there were 15 licensed stock exchanges in the United States, of which 13 actively traded securities. Except for one, IEX, all of these exchanges were owned by three publicly traded multinational companies: Intercontinental Exchange, Nasdaq, Inc., and Cboe Global Markets. In 2019, a group of financial corporations announced plans to open a member-owned exchange, MEMX, an ownership structure similar to the mutual organizations of earlier exchanges. In the 19th century, exchanges were opened to trade forward contracts on commodities. Exchange-traded forward contracts are called futures contracts. These "commodity markets" later started offering futures contracts on other products, such as interest rates and shares, as well as options contracts. They are now generally known as futures exchanges.
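The "continuous auction" mechanism mentioned at the start of this article can be made concrete with a toy matching engine. The sketch below is illustrative only: it assumes plain limit orders and the common price-time-priority convention in which the resting order's price sets the trade price. The OrderBook class and its method names are invented for the example; real exchange engines add many more order types, validation and cancellation logic.

```python
import heapq
from itertools import count

class OrderBook:
    """Toy continuous-auction limit order book with price-time priority."""

    def __init__(self):
        self.bids = []      # max-heap of (-price, seq, qty)
        self.asks = []      # min-heap of (price, seq, qty)
        self.seq = count()  # arrival counter: earlier orders trade first at equal price

    def submit(self, side, price, qty):
        """Match an incoming limit order; any unfilled remainder rests in the book."""
        trades = []
        opposite = self.asks if side == "buy" else self.bids
        while qty and opposite:
            key, seq, rest_qty = opposite[0]
            rest_price = key if side == "buy" else -key
            # Stop when the best opposite quote no longer crosses our limit.
            if (side == "buy" and rest_price > price) or \
               (side == "sell" and rest_price < price):
                break
            fill = min(qty, rest_qty)
            trades.append((rest_price, fill))  # trade at the resting order's price
            qty -= fill
            if fill == rest_qty:
                heapq.heappop(opposite)
            else:
                # Only the quantity changes, so the heap ordering is preserved.
                opposite[0] = (key, seq, rest_qty - fill)
        if qty:  # leftover becomes a new resting quote
            book = self.bids if side == "buy" else self.asks
            heapq.heappush(book, (-price if side == "buy" else price, next(self.seq), qty))
        return trades

book = OrderBook()
book.submit("sell", 101.0, 50)
book.submit("sell", 100.5, 30)
print(book.submit("buy", 101.0, 60))  # [(100.5, 30), (101.0, 30)]: best price trades first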
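The ISIN system mentioned above is self-checking: an ISIN is twelve characters (a two-letter country code, a nine-character alphanumeric identifier, and a final check digit), and the check digit is verified by expanding letters to two-digit numbers (A=10 ... Z=35) and applying the Luhn algorithm. A minimal validation sketch follows; the function name is invented for illustration.

```python
def isin_is_valid(isin: str) -> bool:
    """Check an ISIN's structure and Luhn check digit (ISO 6166 convention)."""
    if len(isin) != 12 or not isin[:2].isalpha() or not isin.isalnum():
        return False
    # Expand characters: digits stay as-is, letters become two digits (A=10..Z=35).
    expanded = "".join(str(int(c, 36)) for c in isin.upper())
    total = 0
    # Luhn: starting from the rightmost digit, double every second digit,
    # subtracting 9 from any doubled value above 9, then sum everything.
    for i, ch in enumerate(reversed(expanded)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(isin_is_valid("US0378331005"))  # True: Apple Inc. common stock
print(isin_is_valid("US0378331006"))  # False: corrupted check digit
```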
https://en.wikipedia.org/wiki?curid=28168
Saint Boniface Boniface (675 – 5 June 754 AD), born Winfrid (also spelled Winifred, Wynfrith, Winfrith or Wynfryth) in the Devon town of Crediton in Anglo-Saxon England, was a leading figure in the Anglo-Saxon mission to the Germanic parts of the Frankish Empire during the 8th century. He organised significant foundations of the church in Germany and was made archbishop of Mainz by Pope Gregory III. He was martyred in Frisia in 754, along with 52 others, and his remains were returned to Fulda, where they rest in a sarcophagus which became a site of pilgrimage. Boniface's life and death as well as his work became widely known, there being a wealth of material available—a number of "vitae", especially the near-contemporary "Vita Bonifatii auctore Willibaldi", legal documents, possibly some sermons, and above all his correspondence. He is venerated as a saint in the Christian church and became the patron saint of Germania, known as the "Apostle of the Germans". Norman F. Cantor notes the three roles Boniface played that made him "one of the truly outstanding creators of the first Europe, as the apostle of Germania, the reformer of the Frankish church, and the chief fomentor of the alliance between the papacy and the Carolingian family." Through his efforts to reorganize and regulate the church of the Franks, he helped shape the Latin Church in Europe, and many of the dioceses he proposed remain today. After his martyrdom, he was quickly hailed as a saint in Fulda and other areas in Germania and in England. He is still venerated strongly today by German Catholics. Boniface is celebrated as a missionary; he is regarded as a unifier of Europe, and he is regarded by German Catholics as a national figure. In 2019, Devon County Council, with the support of the Anglican and Catholic churches in Exeter and Plymouth, officially recognised St Boniface as the Patron Saint of Devon. The earliest Bonifacian "vita", Willibald's, does not mention his place of birth but says that at an early age he attended a monastery ruled by Abbot Wulfhard in "escancastre", or "Examchester", which seems to denote Exeter, and may have been one of many "monasteriola" built by local landowners and churchmen; nothing else is known of it outside the Bonifacian "vitae". This monastery is believed to have occupied the site of the Church of St Mary Major in the City of Exeter, demolished in 1971, next to which was later built Exeter Cathedral. Later tradition places his birth at Crediton, but the earliest mention of Crediton in connection to Boniface is from the early fourteenth century, in John Grandisson's "Legenda Sanctorum: The Proper Lessons for Saints' Days according to the use of Exeter". In one of his letters Boniface mentions he was "born and reared...[in] the synod of London", but he may have been speaking metaphorically. According to the "vitae", Winfrid was of a respected and prosperous family. Against his father's wishes he devoted himself at an early age to the monastic life. He received further theological training in the Benedictine monastery and minster of Nhutscelle (Nursling), not far from Winchester, which under the direction of abbot Winbert had grown into an industrious centre of learning in the tradition of Aldhelm. Winfrid taught in the abbey school and at the age of 30 became a priest; in this time, he wrote a Latin grammar, the "Ars Grammatica", besides a treatise on verse and some Aldhelm-inspired riddles. 
While little is known about Nursling outside of Boniface's "vitae", it seems clear that the library there was significant. In order to supply Boniface with the materials he needed, it would have contained works by Donatus, Priscian, Isidore, and many others. Around 716, when his abbot Wynberth of Nursling died, he was invited (or expected) to assume his position—it is possible that they were related, and the practice of hereditary right among the early Anglo-Saxons would affirm this. Winfrid, however, declined the position and in 716 set out on a missionary expedition to Frisia. Boniface first left for the continent in 716. He traveled to Utrecht, where Willibrord, the "Apostle of the Frisians," had been working since the 690s. He spent a year with Willibrord, preaching in the countryside, but their efforts were frustrated by the war then being carried on between Charles Martel and Radbod, King of the Frisians. Willibrord fled to the abbey he had founded in Echternach (in modern-day Luxembourg) while Boniface returned to Nursling. Boniface returned to the continent the next year and went straight to Rome, where Pope Gregory II renamed him "Boniface", after the (legendary) fourth-century martyr Boniface of Tarsus, and appointed him missionary bishop for Germania—he became a bishop without a diocese for an area that lacked any church organization. He would never return to England, though he remained in correspondence with his countrymen and kinfolk throughout his life. According to the "vitae" Boniface felled the Donar Oak, Latinized by Willibald as "Jupiter's oak," near the present-day town of Fritzlar in northern Hesse. According to his early biographer Willibald, Boniface started to chop the oak down, when suddenly a great wind, as if by miracle, blew the ancient oak over. When the god did not strike him down, the people were amazed and converted to Christianity. He built a chapel dedicated to Saint Peter from its wood at the site—the chapel was the beginning of the monastery in Fritzlar. This account from the "vita" is stylized to portray Boniface as a singular character who alone acts to root out paganism. Lutz von Padberg and others point out that what the "vitae" leave out is that the action was most likely well-prepared and widely publicized in advance for maximum effect, and that Boniface had little reason to fear for his personal safety since the Frankish fortified settlement of Büraburg was nearby. According to Willibald, Boniface later had a church with an attached monastery built in Fritzlar; tradition places it on the site of the previously built chapel. The support of the Frankish mayors of the palace (maiores domus), and later the early Pippinid and Carolingian rulers, was essential for Boniface's work. Boniface had been under the protection of Charles Martel from 723 on. The Christian Frankish leaders desired to defeat their rival power, the pagan Saxons, and to incorporate the Saxon lands into their own growing empire. Boniface's campaign of destruction of indigenous Germanic pagan sites may have benefited the Franks in their campaign against the Saxons. In 732, Boniface traveled again to Rome to report, and Pope Gregory III conferred upon him the pallium as archbishop with jurisdiction over what is now Germany. Boniface again set out for the German lands and continued his mission, but also used his authority to work on the relations between the papacy and the Frankish church. 
Rome wanted more control over that church, which it felt was much too independent and which, in the eyes of Boniface, was subject to worldly corruption. Charles Martel, after having defeated the forces of the Umayyad Caliphate during the Battle of Poitiers (732), had rewarded many churches and monasteries with lands, but typically his supporters who held church offices were allowed to benefit from those possessions. Boniface would have to wait until the 740s before he could try to address this situation, in which Frankish church offices were essentially sinecures, and the church itself paid little heed to Rome. During his third visit to Rome in 737–38, he was made papal legate for Germany. After Boniface's third trip to Rome, Charles Martel established four dioceses in Bavaria (Salzburg, Regensburg, Freising, and Passau) and gave them to Boniface as archbishop and metropolitan over all Germany east of the Rhine. In 745, he was granted Mainz as metropolitan see. In 742, one of his disciples, Sturm (also known as Sturmi, or Sturmius), founded the abbey of Fulda not far from Boniface's earlier missionary outpost at Fritzlar. Although Sturm was the founding abbot of Fulda, Boniface was very involved in the foundation. The initial grant for the abbey was signed by Carloman, the son of Charles Martel and a supporter of Boniface's reform efforts in the Frankish church. Boniface himself explained to his old friend, Daniel of Winchester, that without the protection of Charles Martel he could "neither administer his church, defend his clergy, nor prevent idolatry". According to German historian Gunther Wolf, the high point of Boniface's career was the Concilium Germanicum, organized by Carloman in an unknown location in April 743. Although Boniface was not able to safeguard the church from property seizures by the local nobility, he did achieve one goal, the adoption of stricter guidelines for the Frankish clergy, who often hailed directly from the nobility. After Carloman's resignation in 747, Boniface maintained a sometimes turbulent relationship with the king of the Franks, Pepin; the claim that he crowned Pepin at Soissons in 751 is now generally discredited. Boniface balanced this support and attempted to maintain some independence, however, by attaining the support of the papacy and of the Agilolfing rulers of Bavaria. In Frankish, Hessian, and Thuringian territory, he established the dioceses of Würzburg and Erfurt. By appointing his own followers as bishops, he was able to retain some independence from the Carolingians, who most likely were content to give him leeway as long as Christianity was imposed on the Saxons and other Germanic tribes. According to the "vitae", Boniface had never relinquished his hope of converting the Frisians, and in 754 he set out with a retinue for Frisia. He baptized a great number and summoned a general meeting for confirmation at a place not far from Dokkum, between Franeker and Groningen. Instead of his converts, however, a group of armed robbers appeared who slew the aged archbishop. The "vitae" mention that Boniface persuaded his (armed) comrades to lay down their arms: "Cease fighting. Lay down your arms, for we are told in Scripture not to render evil for good but to overcome evil by good." 
Having killed Boniface and his company, the Frisian bandits ransacked their possessions but found that the company's luggage did not contain the riches they had hoped for: "they broke open the chests containing the books and found, to their dismay, that they held manuscripts instead of gold vessels, pages of sacred texts instead of silver plates." According to the earliest "vita", they then attempted to destroy these books, and this account underlies the status of the Ragyndrudis Codex, now held as a Bonifacian relic in Fulda, and supposedly one of three books found on the field by the Christians who inspected the field afterward. Of those three books, the Ragyndrudis Codex shows incisions that could have been made by sword or axe; its story appears confirmed in the Utrecht hagiography, the "Vita altera", which reports that an eye-witness saw that the saint at the moment of death held up a gospel as spiritual protection. The story was later repeated by Otloh's "vita"; at that time, the Ragyndrudis Codex seems to have been firmly connected to the martyrdom. Boniface's remains were moved from the Frisian countryside to Utrecht, and then to Mainz, where sources contradict each other regarding the behavior of Lullus, Boniface's successor as archbishop of Mainz. According to Willibald's "vita", Lullus allowed the body to be moved to Fulda, while according to the (later) "Vita Sturmi", a hagiography of Sturm by Eigil of Fulda, Lullus attempted to block the move and keep the body in Mainz. His remains were eventually buried in the abbey church of Fulda after resting for some time in Utrecht, and they are entombed within a shrine beneath the high altar of Fulda Cathedral, previously the abbey church. Veneration of Boniface in Fulda began immediately after his death; his grave was equipped with a decorative tomb around ten years after his burial, and the grave and relics became the center of the abbey. Fulda monks prayed for newly elected abbots at the grave site before greeting them, and every Monday the saint was remembered in prayer, the monks prostrating themselves and reciting Psalm 50. After the abbey church was rebuilt to become the Ratgar Basilica (dedicated 791), Boniface's remains were translated to a new grave: since the church had been enlarged, his grave, originally in the west, was now in the middle; his relics were moved to a new apse in 819. From then on Boniface, as patron of the abbey, was regarded as both spiritual intercessor for the monks and legal owner of the abbey and its possessions, and all donations to the abbey were done in his name. He was honored on the date of his martyrdom, 5 June (with a mass written by Alcuin), and (around the year 1000) with a mass dedicated to his appointment as bishop, on 1 December. Willibald's "vita" describes how a visitor on horseback came to the site of the martyrdom, and a hoof of his horse got stuck in the mire. When it was pulled loose, a well sprang up. By the time of the "Vita altera Bonifatii" (9th century), there was a church on the site, and the well had become a "fountain of sweet water" used to sanctify people. The "Vita Liudgeri", a hagiographical account of the work of Ludger, describes how Ludger himself had built the church, sharing duties with two other priests. According to James Palmer, the well was of great importance since the saint's body was hundreds of miles away; the physicality of the well allowed for an ongoing connection with the saint. 
In addition, Boniface signified Dokkum's and Frisia's "connect[ion] to the rest of (Frankish) Christendom". Saint Boniface's feast day is celebrated on 5 June in the Roman Catholic Church, the Lutheran Church, the Anglican Communion and the Eastern Orthodox Church. A famous statue of Saint Boniface stands on the grounds of Mainz Cathedral, seat of the archbishop of Mainz. A more modern rendition stands facing St. Peter's Church of Fritzlar. The UK National Shrine is located at the Catholic church at Crediton, Devon, which has a bas-relief of the felling of Thor's Oak, by sculptor Kenneth Carter. The sculpture, located in Newcombes Meadow Park in his native Crediton, was unveiled by Princess Margaret. There is also a series of paintings there by Timothy Moore. There are quite a few churches dedicated to St. Boniface in the United Kingdom: Bunbury, Cheshire; Chandler's Ford and Southampton, Hampshire; Adler Street, London; Papa Westray, Orkney; St Budeaux, Plymouth (now demolished); Bonchurch, Isle of Wight; Cullompton, Devon. Bishop George Errington founded St Boniface's Catholic College, Plymouth, in 1856. The school celebrates Saint Boniface on 5 June each year. In 1818, Father Norbert Provencher founded a mission on the east bank of the Red River in what was then Rupert's Land, building a log church and naming it after St. Boniface. The log church was consecrated as Saint Boniface Cathedral after Provencher was himself consecrated as a bishop and the diocese was formed. The community that grew around the cathedral eventually became the city of Saint Boniface, which merged into the city of Winnipeg in 1971. In 1844, four Grey Nuns arrived by canoe in Manitoba, and in 1871 built Western Canada's first hospital: St. Boniface Hospital, where the Assiniboine and Red Rivers meet. Today, St. Boniface Hospital is the second-largest hospital in Manitoba. Some traditions credit Saint Boniface with the invention of the Christmas tree. The "vitae" mention nothing of the sort. However, it is mentioned on a BBC-Devon website, in an account which places Geismar in Bavaria, and in a number of educational books, including "St. Boniface and the Little Fir Tree", "The Brightest Star of All: Christmas Stories for the Family", "The American normal readers", and a short story by Henry van Dyke, "The First Christmas Tree". The earliest "Life" of Boniface was written by a certain Willibald, an Anglo-Saxon priest who came to Mainz after Boniface's death, around 765. Willibald's biography was widely dispersed; Levison lists some forty manuscripts. According to his stemma, a group of four manuscripts, including Codex Monacensis 1086, are direct copies of the original. Listed second in Levison's edition is the entry from a late ninth-century Fulda document. Boniface's status as a martyr is attested by his inclusion in the "Fulda Martyrology", which also lists, for instance, the date (1 November) of his translation in 819, when the Fulda Cathedral had been rebuilt. A "Vita Bonifacii" was written in Fulda in the ninth century, possibly by Candidus of Fulda, but is now lost. The next "vita", chronologically, is the "Vita altera Bonifatii auctore Radbodo", which originates in the Bishopric of Utrecht, and was probably revised by Radboud of Utrecht (899–917). Mainly agreeing with Willibald, it adds an eye-witness who presumably saw the martyrdom at Dokkum. The "Vita tertia Bonifatii" likewise originates in Utrecht. 
It is dated between 917 (Radboud's death) and 1075, the year Adam of Bremen wrote his "Gesta Hammaburgensis ecclesiae pontificum", which used the "Vita tertia". A later "vita", written by Otloh of St. Emmeram (1062–1066), is based on Willibald's and a number of other "vitae" as well as the correspondence, and also includes information from local traditions. Boniface engaged in regular correspondence with fellow churchmen all over Western Europe, including the three popes he worked with, and with some of his kinsmen back in England. Many of these letters contain questions about church reform and liturgical or doctrinal matters. In most cases, what remains is one half of the conversation, either the question or the answer. The correspondence as a whole gives evidence of Boniface's widespread connections; some of the letters also prove an intimate relationship especially with female correspondents. There are 150 letters in what is generally called the Bonifatian correspondence, though not all of them are by Boniface or addressed to him. They were assembled by order of archbishop Lullus, Boniface's successor in Mainz, and were initially organized into two parts, a section containing the papal correspondence and another with his private letters. They were reorganized in the eighth century, in a roughly chronological ordering. Otloh of St. Emmeram, who worked on a new "vita" of Boniface in the eleventh century, is credited with compiling the complete correspondence as we have it. The correspondence was edited and published as early as the seventeenth century, by Nicolaus Serarius. Stephan Alexander Würdtwein's 1789 edition, "Epistolae S. Bonifacii Archiepiscopi Magontini", was the basis for a number of (partial) translations in the nineteenth century. The first version to be published by Monumenta Germaniae Historica (MGH) was the edition by Ernst Dümmler (1892); the most authoritative version to date is Michael Tangl's 1912 "Die Briefe des Heiligen Bonifatius, Nach der Ausgabe in den Monumenta Germaniae Historica", published by MGH in 1916. This edition is the basis of Ephraim Emerton's selection and translation in English, "The Letters of Saint Boniface", first published in New York in 1940; it was republished most recently with a new introduction by Thomas F.X. Noble in 2000. Included among his letters and dated to 716 is one to Abbess Edburga of Minster-in-Thanet containing the "Vision of the Monk of Wenlock". This otherworld vision describes how a violently ill monk is freed from his body and guided by angels to a place of judgment, where angels and devils fight over his soul as his sins and virtues come alive to accuse and defend him. He sees a hell of purgation full of pits vomiting flames. There is a bridge over a pitch-black boiling river. Souls either fall from it or safely reach the other side cleansed of their sins. This monk even sees some of his contemporary monks and is told to warn them to repent before they die. This vision bears signs of influence by the Apocalypse of Paul, the visions from the "Dialogues" of Gregory the Great, and the visions recorded by Bede. Some fifteen preserved sermons are traditionally associated with Boniface, but their attribution to him is not generally accepted. Early in his career, before he left for the continent, Boniface wrote the "Ars Bonifacii", a grammatical treatise presumably for his students in Nursling. 
Helmut Gneuss reports that one manuscript copy of the treatise originates from (the south of) England, mid-eighth century; it is now held in Marburg, in the Hessisches Staatsarchiv. He also wrote a treatise on verse, the "Caesurae uersuum", and a collection of twenty acrostic riddles, the "Enigmata", influenced greatly by Aldhelm and containing many references to works of Vergil (the "Aeneid", the "Georgics", and the "Eclogues"). The riddles fall into two sequences of ten poems. The first, "De virtutibus" ('on the virtues'), comprises: 1. "de ueritate"/truth; 2. "de fide catholica"/the Catholic faith; 3. "de spe"/hope; 4. "de misericordia"/compassion; 5. "de caritate"/love; 6. "de iustitia"/justice; 7. "de patientia"/patience; 8. "de pace uera, cristiana"/true, Christian peace; 9. "de humilitate cristiania"/Christian humility; 10. "de uirginitate"/virginity. The second sequence, "De vitiis" ('on the vices'), comprises: 1. "de neglegentia"/carelessness; 2. "de iracundia"/hot temper; 3. "de cupiditate"/greed; 4. "de superbia"/pride; 5. "de crapula"/intemperance; 6. "de ebrietate"/drunkenness; 7. "de luxoria"/fornication; 8. "de inuidia"/envy; 9. "de ignorantia"/ignorance; 10. "de uana gloria"/vainglory. Three octosyllabic poems written in clearly Aldhelmian fashion (according to Andy Orchard) are preserved in his correspondence, all composed before he left for the continent. A letter by Boniface charging Aldebert and Clement with heresy is preserved in the records of the Roman Council of 745 that condemned the two. Boniface had an interest in the Irish canon law collection known as "Collectio canonum Hibernensis", and a late 8th/early 9th-century manuscript in Würzburg contains, besides a selection from the "Hibernensis", a list of rubrics that mention the heresies of Clemens and Aldebert. The relevant folios containing these rubrics were most likely copied in Mainz, Würzburg, or Fulda—all places associated with Boniface. Michael Glatthaar suggested that the rubrics should be seen as Boniface's contribution to the agenda for a synod. Boniface's death (and birth) has given rise to a number of noteworthy celebrations. The dates for some of these celebrations have undergone some changes: in 1805, 1855, and 1905 (and in England in 1955) anniversaries were calculated with Boniface's death dated in 755, according to the "Mainz tradition"; in Mainz, Michael Tangl's dating of the martyrdom in 754 was not accepted until after 1955. Celebrations in Germany centered on Fulda and Mainz, in the Netherlands on Dokkum and Utrecht, and in England on Crediton and Exeter. The first German celebration on a fairly large scale was held in 1805 (the 1,050th anniversary of his death), followed by a similar celebration in a number of towns in 1855; both of these were predominantly Catholic affairs emphasizing the role of Boniface in German history. But if the celebrations were mostly Catholic, in the first part of the 19th century the respect for Boniface in general was an ecumenical affair, with both Protestants and Catholics praising Boniface as a founder of the German nation, in response to the German nationalism that arose after the Napoleonic era came to an end. The second part of the 19th century saw increased tension between Catholics and Protestants; for the latter, Martin Luther had become the model German, the founder of the modern nation, and he and Boniface were in direct competition for the honor. 
In 1905, when strife between Catholic and Protestant factions had eased (one Protestant church published a celebratory pamphlet, Gerhard Ficker's "Bonifatius, der "Apostel der Deutschen""), there were modest celebrations and a publication for the occasion on historical aspects of Boniface and his work, the 1905 "Festgabe" by Gregor Richter and Carl Scherer. In all, the content of these early celebrations showed evidence of the continuing question about the meaning of Boniface for Germany, though the importance of Boniface in cities associated with him was without question. In 1954, celebrations were widespread in England, Germany, and the Netherlands, and a number of these celebrations were international affairs. Especially in Germany, these celebrations had a distinctly political note to them and often stressed Boniface as a kind of founder of Europe, such as when Konrad Adenauer, the (Catholic) German chancellor, addressed a crowd of 60,000 in Fulda, celebrating the feast day of the saint in a European context: "Das, was wir in Europa gemeinsam haben, [ist] gemeinsamen Ursprungs" ("What we have in common in Europe comes from the same source"). When Pope John Paul II visited Germany in November 1980, he spent two days in Fulda (17 and 18 November). He celebrated Mass in Fulda Cathedral with 30,000 gathered on the square in front of the building, and met with the German Bishops' Conference (held in Fulda since 1867). The pope next celebrated Mass outside the cathedral, in front of an estimated crowd of 100,000, and hailed the importance of Boniface for German Christianity: "Der heilige Bonifatius, Bischof und Märtyrer, "bedeutet" den 'Anfang' des Evangeliums und der Kirche in Eurem Land" ("The holy Boniface, bishop and martyr, "signifies" the beginning of the gospel and the church in your country"). A photograph of the pope praying at Boniface's grave became the centerpiece of a prayer card distributed from the cathedral. In 2004, anniversary celebrations were held throughout northwestern Germany and in Utrecht, Fulda, and Mainz, generating a great amount of academic and popular interest. The event occasioned a number of scholarly studies, especially biographies (for instance, by Auke Jelsma in Dutch, Lutz von Padberg in German, and Klaas Bruinsma in Frisian), and a fictional completion of the Boniface correspondence (Lutterbach, "Mit Axt und Evangelium"). A German musical proved a great commercial success, and in the Netherlands an opera was staged. There is an extensive body of literature on the saint and his work. At the time of the various anniversaries, edited collections were published containing essays by some of the best-known scholars of the time, such as the 1954 collection "Sankt Bonifatius: Gedenkgabe zum Zwölfhundertsten Todestag" and the 2004 collection "Bonifatius—Vom Angelsächsischen Missionar zum Apostel der Deutschen". In the modern era, a number of biographies and articles on the saint have been published focusing on his missionary praxis and his relics. The most authoritative biography remains Theodor Schieffer's "Winfrid-Bonifatius und die Christliche Grundlegung Europas" (1954).
https://en.wikipedia.org/wiki?curid=28172
Data storage Data storage is the recording (storing) of information (data) in a storage medium. DNA and RNA, handwriting, phonographic recording, magnetic tape, and optical discs are all examples of storage media. Recording is accomplished by virtually any form of energy. Electronic data storage requires electrical power to store and retrieve data. Data storage in a digital, machine-readable medium is sometimes called "digital data". Computer data storage is one of the core functions of a general purpose computer. Electronic documents can be stored in much less space than paper documents. Barcodes and magnetic ink character recognition (MICR) are two ways of recording machine-readable data on paper. A recording medium is a physical material that holds information. Newly created information is distributed and can be stored in four storage media (print, film, magnetic, and optical) and seen or heard in four information flows (telephone, radio and TV, and the Internet), as well as being observed directly. Digital information is stored on electronic media in many different recording formats. With electronic media, the data and the recording media are sometimes referred to as "software", despite the more common use of the word to describe computer software. With static media such as traditional art, materials like crayons may be considered both equipment and medium, as the wax, charcoal or chalk from the equipment becomes part of the surface of the medium. Some recording media may be temporary either by design or by nature. Volatile organic compounds may be used to preserve the environment or to purposely make data expire over time. Data such as smoke signals or skywriting are temporary by nature. Depending on its volatility, a gas (e.g. the atmosphere, smoke) or a liquid surface such as a lake would be considered a temporary recording medium, if a recording medium at all. A 2003 UC Berkeley report estimated that about five exabytes of new information were produced in 2002 and that 92% of this data was stored on hard disk drives. This was about twice the data produced in 2000. The amount of data transmitted over telecommunication systems in 2002 was nearly 18 exabytes—three and a half times more than was recorded on non-volatile storage. Telephone calls constituted 98% of the telecommunicated information in 2002. The researchers' highest estimate for the growth rate of newly stored information (uncompressed) was more than 30% per year. It has been estimated that the year 2002 was the beginning of the digital age for information storage: an age in which more information is stored on digital storage devices than on analog storage devices. In 1986, approximately 1% of the world's capacity to store information was in digital format; this grew to 3% by 1993, to 25% by 2000, and to 97% by 2007. These figures correspond to less than three compressed exabytes in 1986, and 295 compressed exabytes in 2007. The quantity of digital storage doubled roughly every three years. In a more limited study, the International Data Corporation estimated that the total amount of digital data in 2007 was 281 exabytes, and that the total amount of digital data produced exceeded the global storage capacity for the first time. A study published in 2011 estimated that the world's technological capacity to store information in analog and digital devices grew from less than three (optimally compressed) exabytes in 1986, to 295 (optimally compressed) exabytes in 2007, and doubles roughly every three years.
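Those growth figures are internally consistent: going from roughly 3 optimally compressed exabytes in 1986 to 295 in 2007 implies a doubling time of about three years. A quick sketch of the arithmetic (approximate, since the 1986 figure is stated only as "less than three exabytes"):

```python
import math

eb_1986, eb_2007 = 3.0, 295.0      # optimally compressed exabytes, per the passage
years = 2007 - 1986                 # 21 years between the two estimates

annual_growth = (eb_2007 / eb_1986) ** (1 / years)       # ~1.24x per year
doubling_time = math.log(2) / math.log(annual_growth)    # ~3.2 years

print(f"growth: {annual_growth:.2f}x/year, doubling time: {doubling_time:.1f} years")
```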
https://en.wikipedia.org/wiki?curid=28174
Sinn Féin Sinn Féin is a centre-left to left-wing Irish republican political party active in both the Republic of Ireland and Northern Ireland. The original Sinn Féin organisation was founded in 1905 by Arthur Griffith, but has split substantially on a number of occasions since then—notably giving rise to the two traditionally dominant parties of southern Irish politics, Fianna Fáil and Fine Gael, in the aftermath of the Irish Civil War. The party took its current form in 1970 after another split (with the other faction eventually becoming the Workers' Party of Ireland). It has historically been associated with the Provisional Irish Republican Army (IRA). Mary Lou McDonald became party president in February 2018. Sinn Féin is one of the two largest parties in the Northern Ireland Assembly, winning just one seat fewer than the Democratic Unionist Party (DUP) at the 2017 Northern Ireland Assembly election. In that assembly it is the largest Irish nationalist party, and it holds four ministerial posts in the power-sharing Northern Ireland Executive. In the UK House of Commons, Sinn Féin holds seven of Northern Ireland's 18 seats, making it the second-largest bloc after the DUP; there it follows a policy of abstentionism, refusing to sit in parliament or vote on bills. In the Oireachtas (the parliament of the Republic of Ireland), Sinn Féin won the largest share of first-preference votes at the 2020 Irish general election. The phrase "Sinn Féin" is Irish for "Ourselves" or "We Ourselves", although it is frequently mistranslated as "ourselves alone" (from "Sinn Féin Amháin", an early-20th-century slogan; see also Sinn Féin (slogan)). The name is an assertion of Irish national sovereignty and self-determination; i.e., the Irish people governing themselves, rather than being part of a political union with Great Britain (England, Scotland and Wales) under the Westminster Parliament. A split in January 1970, mirroring a split in the IRA, led to the emergence of two groups calling themselves Sinn Féin. One, under the continued leadership of Tomás Mac Giolla, became known as "Sinn Féin (Gardiner Place)", or "Official Sinn Féin"; the other, led by Ruairí Ó Brádaigh, became known as "Sinn Féin (Kevin Street)", or "Provisional Sinn Féin". As the "Officials" dropped all mention of Sinn Féin from their name in 1982, instead calling themselves the Workers' Party of Ireland, the term "Provisional Sinn Féin" has fallen out of use, and the party is now known simply as "Sinn Féin". Sinn Féin members have been referred to colloquially as "Shinners", a term intended as a pejorative. Sinn Féin was founded on 28 November 1905, when, at the first annual Convention of the National Council, Arthur Griffith outlined the Sinn Féin policy, "to establish in Ireland's capital a national legislature endowed with the moral authority of the Irish nation". The party contested the 1908 North Leitrim by-election, where it secured 27% of the vote. Thereafter, both support and membership fell. At the 1910 "Ard Fheis" (party conference) the attendance was poor, and there was difficulty finding members willing to take seats on the executive. In 1914, Sinn Féin members, including Griffith, joined the anti-Redmond Irish Volunteers, which were referred to by Redmondites and others as the "Sinn Féin Volunteers". Although Griffith himself did not take part in the Easter Rising of 1916, many Sinn Féin members did, as they were also members of both the Volunteers and the Irish Republican Brotherhood. 
The government and newspapers dubbed the Rising "the Sinn Féin Rising". After the Rising, republicans came together under the banner of Sinn Féin, and at the 1917 "Ard Fheis" the party committed itself for the first time to the establishment of an Irish Republic. In the 1918 general election, Sinn Féin won 73 of Ireland's 105 seats, and in January 1919, its MPs assembled in Dublin and proclaimed themselves Dáil Éireann, the parliament of Ireland. The party supported the Irish Republican Army during the War of Independence, and members of the Dáil government negotiated the Anglo-Irish Treaty with the British government in 1921. In the Dáil debates that followed, the party divided on the Treaty. Anti-Treaty members led by Éamon de Valera walked out, and pro- and anti-Treaty members took opposite sides in the ensuing Civil War. Pro-Treaty Dáil deputies and other Treaty supporters formed a new party, Cumann na nGaedheal, on 27 April 1923 at a meeting in Dublin, where delegates agreed on a constitution and political programme. Cumann na nGaedheal went on to govern the new Irish Free State for nine years. (It merged with two other organisations to form Fine Gael in 1933.) Anti-Treaty Sinn Féin members continued to boycott the Dáil. At a special "Ard Fheis" in March 1926, de Valera proposed that elected members be allowed to take their seats in the Dáil if and when the controversial Oath of Allegiance was removed. When his motion was defeated, de Valera resigned from Sinn Féin; on 16 May 1926, he founded his own party, Fianna Fáil, which was dedicated to republicanising the Free State from within its political structures. He took most Sinn Féin Teachtaí Dála (TDs) with him. De Valera's resignation also meant the loss of financial support from America. The rump Sinn Féin party could field no more than fifteen candidates, and won only six seats in the June 1927 general election, a level of support not seen since before 1916. Vice-President and "de facto" leader Mary MacSwiney announced that the party simply did not have the funds to contest the second election called that year, declaring "no true Irish citizen can vote for any of the other parties". Fianna Fáil came to power at the 1932 general election (to begin what would be an unbroken 16-year spell in government) and went on to dominate politics in the independent Irish state for decades. An attempt in the 1940s to access funds that had been put in the care of the High Court led to the Sinn Féin Funds case, which the party lost and in which the judge ruled that it was not the legal successor to the Sinn Féin of 1917. At the 1955 United Kingdom general election, two Sinn Féin candidates were elected to Westminster, but the party's vote decreased at the following election in 1959, during the IRA's Border Campaign. Through the 1960s, some leading figures in the movement, such as Cathal Goulding, Seán Garland, Liam McMillen and Tomás Mac Giolla, moved steadily to the left, even to Marxism, as a result of their own reading and thinking and contacts with the Irish and international left. This angered more traditional republicans, who wanted to stick to the national question and armed struggle. The Garland Commission was set up in 1967 to investigate the possibility of ending abstentionism. Its report angered the already disaffected traditional republican element within the party, notably Seán Mac Stíofáin and Ruairí Ó Brádaigh, who viewed such a policy as treason against the Irish Republic. The Sinn Féin party split in two at the beginning of 1970. 
At the party's "Ard Fheis" on 11 January, the proposal to end abstentionism and take seats, if elected, in the Dáil, the Parliament of Northern Ireland and the Parliament of the United Kingdom was put before the members. A similar motion had been adopted at an IRA convention the previous month, leading to the formation of a Provisional Army Council by Mac Stíofáin and other members opposed to the leadership. When the motion was put to the "Ard Fheis", it failed to achieve the necessary two-thirds majority. The Executive attempted to circumvent this by introducing a motion in support of IRA policy, at which point the dissenting delegates walked out of the meeting. These members reconvened at another venue, appointed a Caretaker Executive and pledged allegiance to the Provisional Army Council. The Caretaker Executive declared itself opposed to the ending of abstentionism, the drift towards "extreme forms of socialism", the failure of the leadership to defend the nationalist people of Belfast during the 1969 Northern Ireland riots, and the expulsion of traditional republicans by the leadership during the 1960s. At its October 1970 "Ard Fheis", delegates were informed that an IRA convention had been held and had regularised its structure, bringing to an end the 'provisional' period. By then, however, the label "Provisional" or "Provo" was already being applied to them by the media. The opposing, anti-abstentionist party became known as "Official Sinn Féin". It changed its name in 1977 to "Sinn Féin – The Workers' Party", and in 1982 to "The Workers' Party". Because the "Provisionals" were committed to military rather than political action, Sinn Féin's initial membership was largely confined to those who were, in Danny Morrison's words, "over military age or women". A Sinn Féin organiser of the time in Belfast described the party's role as "agitation and publicity". New "cumainn" (branches) were established in Belfast, and a new newspaper, "Republican News", was published. Sinn Féin took off as a protest movement after the introduction of internment in August 1971, organising marches and pickets. The party launched its platform, "Éire Nua" ("a New Ireland"), at the 1971 "Ard Fheis". In general, however, the party lacked a distinct political philosophy. In the words of Brian Feeney, "Ó Brádaigh would use Sinn Féin "ard fheiseanna" (party conferences) to announce republican policy, which was, in effect, IRA policy, namely that Britain should leave the North or the 'war' would continue". Sinn Féin was given a concrete presence in the community when the IRA declared a ceasefire in 1975. 'Incident centres' were set up to communicate potential confrontations to the British authorities. They were manned by Sinn Féin, which had been legalised the previous year by Merlyn Rees, Secretary of State for Northern Ireland. Political status for prisoners became an issue after the ending of the truce. Rees released the last of the internees but introduced the Diplock courts, and ended 'Special Category Status' for all prisoners convicted after 1 March 1976. This led first to the blanket protest, and then to the dirty protest. Around the same time, Gerry Adams began writing for "Republican News", calling for Sinn Féin to become more involved politically. Over the next few years, Adams and those aligned with him would extend their influence throughout the republican movement and slowly marginalise Ó Brádaigh, part of a general trend of power in both Sinn Féin and the IRA shifting north. 
In particular, Ó Brádaigh's part in the 1975 IRA ceasefire had damaged his reputation in the eyes of Ulster republicans. The prisoners' protest climaxed with the 1981 hunger strike, during which striker Bobby Sands was elected Member of Parliament for Fermanagh and South Tyrone as an Anti H-Block candidate. After his death on hunger strike, his seat was held, with an increased vote, by his election agent, Owen Carron. Two other Anti H-Block candidates were elected to Dáil Éireann in the general election in the Republic. These successes convinced republicans that they should contest every election. Danny Morrison expressed the mood at the 1981 "Ard Fheis" with his famous speech asking whether anyone would object to republicans taking power in Ireland with a ballot paper in one hand and an Armalite in the other. This was the origin of what became known as the Armalite and ballot box strategy. "Éire Nua" was dropped in 1982, and the following year Ó Brádaigh stepped down as leader and was replaced by Adams. Under Adams' leadership electoral politics became increasingly important. In 1983 Alex Maskey was elected to Belfast City Council, the first Sinn Féin member to sit on that body. Sinn Féin polled over 100,000 votes in the Westminster elections that year, and Adams won the West Belfast seat that had been held by the Social Democratic and Labour Party (SDLP). By 1985 it had fifty-nine seats on seventeen of the twenty-six Northern Ireland councils, including seven on Belfast City Council. The party began a reappraisal of the policy of abstention from the Dáil. At the 1983 "Ard Fheis" the constitution was amended to remove the ban on the discussion of abstentionism, to allow Sinn Féin to run a candidate in the forthcoming European elections. However, in his address, Adams said, "We are an abstentionist party. It is not my intention to advocate change in this situation." A motion to permit entry into the Dáil was allowed at the 1985 "Ard Fheis", but did not have the active support of the leadership, and it failed narrowly. By October of the following year an IRA Convention had indicated its support for elected Sinn Féin TDs taking their seats. Thus, when the motion to end abstention was put to the "Ard Fheis" on 1 November 1986, it was clear that there would not be a split in the IRA as there had been in 1970. The motion was passed with a two-thirds majority. Ó Brádaigh and about twenty other delegates walked out, and met in a Dublin hotel with hundreds of supporters to re-organise as Republican Sinn Féin. Tentative negotiations between Sinn Féin and the British government led to more substantive discussions with the SDLP in the 1990s. Multi-party negotiations began in 1994 in Northern Ireland, without Sinn Féin. The Provisional IRA declared a ceasefire in the autumn of 1994. Sinn Féin then joined the talks, but the Conservative government under John Major soon came to depend on unionist votes to remain in power. It suspended Sinn Féin from the talks, and began to insist that the IRA decommission all of their weapons before Sinn Féin be re-admitted to the talks; this led to the IRA calling off its ceasefire. The new Labour government of Tony Blair was not reliant on unionist votes and re-admitted Sinn Féin, leading to another, permanent, ceasefire. The talks led to the Good Friday Agreement of 10 April 1998 (officially known as the "Belfast Agreement"), which set up an inclusive devolved government in the North, and altered the Dublin government's constitutional claim to the whole island in Articles 2 and 3 of the Constitution of Ireland. 
Republicans opposed to the direction taken by Sinn Féin in the peace process formed the 32 County Sovereignty Movement in the late 1990s. The party expelled Denis Donaldson, a party official, in December 2005, after he stated publicly that he had been in the employ of the British government as an agent since the 1980s. Donaldson told reporters that the British security agencies who employed him were behind the collapse of the Assembly and set up Sinn Féin to take the blame for it, a claim disputed by the British Government. Donaldson was found fatally shot in his home in County Donegal on 4 April 2006, and a murder inquiry was launched. In April 2009, the Real IRA released a statement taking responsibility for the killing. When Sinn Féin and the Democratic Unionist Party (DUP) became the largest parties, by the terms of the Good Friday Agreement no deal could be made without the support of both parties. They nearly reached a deal in November 2004, but the DUP insisted on photographic and/or video evidence that decommissioning had been carried out, which was unacceptable to Sinn Féin. On 2 September 2006, Martin McGuinness publicly stated that Sinn Féin would refuse to participate in a shadow assembly at Stormont, asserting that his party would only take part in negotiations that were aimed at restoring a power-sharing government. This development followed a decision by Sinn Féin members to refrain from participating in debates since the Assembly's recall the previous May. The relevant parties to these talks were given a deadline of 24 November 2006 to decide whether they would ultimately form the executive. The 86-year Sinn Féin boycott of policing in Northern Ireland ended on 28 January 2007, when the "Ard Fheis" voted overwhelmingly to support the Police Service of Northern Ireland (PSNI). Sinn Féin members began to sit on Policing Boards and join District Policing Partnerships. There was opposition to this decision within Sinn Féin, and some members left, including elected representatives. The best-known opponent was former IRA prisoner Gerry McGeough, who stood in the 2007 Assembly election against Sinn Féin in the constituency of Fermanagh and South Tyrone as an Independent Republican. He polled 1.8% of the vote. Others who opposed this development left to found the Republican Network for Unity. Immediately after the June 2017 UK general election, in which the Conservatives won 49% of the seats but fell short of an overall majority, leaving smaller parties with potentially significant influence, Gerry Adams announced for Sinn Féin that their elected MPs would continue the policy of not swearing allegiance to the Queen, as would be required for them to take their seats in the Westminster Parliament. In 2017 and 2018 there were allegations of bullying within the party, leading to a number of resignations and expulsions of elected members. At the "Ard Fheis" on 18 November 2017, Gerry Adams announced he would stand down as president of Sinn Féin in 2018, and would not stand for re-election as TD for Louth. On 10 February 2018, Mary Lou McDonald was announced as the new president of Sinn Féin at a special Ard Fheis in Dublin. Michelle O'Neill was also elected as Vice President of the party. 
McDonald has made clear that, as President of Sinn Féin, her ambition is to be in government north and south, and that she is willing to work in coalition as the major or minor party of government in the southern jurisdiction, a shift in policy compared to Adams' ambition to govern as a minority government in the Oireachtas. Sinn Féin is opposed to Northern Ireland leaving the European Union together with the rest of the United Kingdom; Martin McGuinness suggested a referendum on the reunification of Ireland immediately after the results of the 2016 United Kingdom European Union membership referendum were announced, a stance later reiterated by Mary Lou McDonald as a way of resolving the border issues raised by Brexit. Sinn Féin is the largest Irish republican political party, historically associated with the IRA and, in the party's modern incarnation, with the Provisional IRA. The Irish government alleged that senior members of Sinn Féin have held posts on the IRA Army Council. However, the Sinn Féin leadership has denied these claims. The United States government has made similar allegations. A republican document of the early 1980s stated: "Both Sinn Féin and the IRA play different but converging roles in the war of national liberation. The Irish Republican Army wages an armed campaign... Sinn Féin maintains the propaganda war and is the public and political voice of the movement". Robert White states that at that time Sinn Féin was the junior partner in the relationship with the IRA, and that the two were separate organisations despite some overlapping membership. The British government stated in 2005 that "we had always said all the way through we believed that Sinn Féin and the IRA were inextricably linked and that had obvious implications at leadership level". The Northern Bank robbery of £26.5 million in Belfast in December 2004 further delayed a political deal in Northern Ireland. The IRA was widely blamed for the robbery, although Sinn Féin denied this and stated that party officials had neither known of the robbery nor sanctioned it. Because of its timing, it is considered that the plans for the robbery must have been laid while Sinn Féin was engaged in talks about a possible peace settlement. This undermined confidence among unionists about the sincerity of republicans towards reaching agreement. In the aftermath of the row over the robbery, a further controversy erupted when, on RTÉ's "Questions and Answers" programme, the chairman of Sinn Féin, Mitchel McLaughlin, insisted that the IRA's controversial killing of Jean McConville, a mother of ten young children, in the early 1970s, though "wrong", was not a crime, as it had taken place in the context of the political conflict. Politicians from the Republic, along with the Irish media, strongly attacked McLaughlin's comments. On 10 February 2005, the government-appointed Independent Monitoring Commission reported that it firmly supported the PSNI and Garda Síochána assessments that the IRA was responsible for the Northern Bank robbery and that certain senior members of Sinn Féin were also senior members of the IRA and would have had knowledge of and given approval to the carrying out of the robbery. Sinn Féin has argued that the IMC is not independent, and that the inclusion of former Alliance Party leader John Alderdice and a British security head was proof of this. The IMC recommended further financial sanctions against Sinn Féin members of the Northern Ireland Assembly.
The British government responded by saying it would ask MPs to vote to withdraw the parliamentary allowances of the four Sinn Féin MPs elected in 2001. Gerry Adams responded to the IMC report by challenging the Irish government to have him arrested for IRA membership—a crime in both jurisdictions—and for conspiracy. On 20 February 2005, Irish Minister for Justice, Equality and Law Reform Michael McDowell publicly accused three of the Sinn Féin leadership, Gerry Adams, Martin McGuinness and Martin Ferris (TD for Kerry North), of being on the seven-man IRA Army Council; they later denied this. On 27 February 2005, a demonstration against the murder of Robert McCartney on 30 January 2005 was held in east Belfast. Alex Maskey, a former Sinn Féin Lord Mayor of Belfast, was told by relatives of McCartney to "hand over the 12" IRA members involved. The McCartney family, although formerly Sinn Féin voters themselves, urged witnesses to the crime to contact the PSNI. Three IRA men were expelled from the organisation, and a man was charged with McCartney's murder. Irish Taoiseach Bertie Ahern subsequently called Sinn Féin and the IRA "both sides of the same coin". The official ostracism of Sinn Féin was shown in February 2005 when Dáil Éireann passed a motion condemning the party's alleged involvement in illegal activity. US President George W. Bush and Senator Edward Kennedy refused to meet Gerry Adams while meeting the family of Robert McCartney. On 10 March 2005, the House of Commons in London passed without significant opposition a motion, introduced by the British government, to withdraw the allowances of the four Sinn Féin MPs for one year, in response to the Northern Bank robbery. This measure cost the party approximately £400,000. However, the debate prior to the vote mainly surrounded the more recent events connected with the murder of Robert McCartney. Conservatives and unionists put down amendments to have the Sinn Féin MPs evicted from their offices at the House of Commons, but these were defeated. In March 2005, Mitchell Reiss, the United States Special Envoy for Northern Ireland, condemned the party's links to the IRA, saying "it is hard to understand how a European country in the year 2005 can have a private army associated with a political party". The October 2015 Assessment on Paramilitary Groups in Northern Ireland concluded that the Provisional IRA still existed "in a much reduced form", and that some IRA members believed its Army Council oversaw both the IRA and Sinn Féin, although it believed that the leadership "remains committed to the peace process and its aim of achieving a united Ireland by political means". Most of the party's policies are intended to be implemented on an "all-Ireland" basis, which further emphasises the party's central aim of creating a united Ireland. Sinn Féin is a democratic socialist and left-wing party. In the European Parliament, the party aligns itself with the European United Left–Nordic Green Left (GUE/NGL) parliamentary group. The party pledges support for minority rights, migrants' rights, and eradicating poverty. Although it is not in favour of the extension of legalised abortion (the British 1967 Act) to Northern Ireland, Sinn Féin states that it is opposed to the attitudes in society which "pressurise women" to have abortions and "criminalise" women who make this decision.
The party does state that in cases of incest, rape, sexual abuse, "fatal foetal abnormalities", or when a woman's life or health is at risk, the final decision must rest with the woman. In the 2018 Irish abortion referendum, the party campaigned for a 'Yes' vote, but remained opposed to abortions up to 12 weeks. The party has been categorised as "populist socialist" in the literature, and in 2014 leading party strategist and ideologue Eoin Ó Broin described Sinn Féin's entire political project as unashamedly populist. Sinn Féin has been considered to be Eurosceptic. The party campaigned for a "No" vote in the Irish referendum on joining the European Economic Community in 1972. Sinn Féin was on the same side of the debate as the DUP and most of the UUP in wanting to pull out when the UK had its referendum in 1975. The party was critical of the supposed need for an EU constitution as proposed in 2002, and urged a "No" vote in the 2008 referendum on the Lisbon Treaty, although Mary Lou McDonald said that there was "no contradiction in being pro-Europe, but anti-treaty". In its manifesto for the 2015 UK general election, Sinn Féin pledged that the party would campaign for the UK to stay within the European Union (EU), with Martin McGuinness saying that an exit "would be absolutely economically disastrous". Gerry Adams said that, if there were to be a referendum on the question, there ought to be a separate and binding referendum for Northern Ireland. Its policy of a "Europe of Equals" and its critical engagement after 2001, together with its engagement with the European Parliament, mark a change from the party's previous opposition to the EU. The party expresses, on one hand, "support for Europe-wide measures that promote and enhance human rights, equality and the all-Ireland agenda", and on the other a "principled opposition" to a European superstate. This has led political commentators to describe the party as soft Eurosceptic since the turn of the 21st century. Sinn Féin's main political goal is a united Ireland. Among the other key policies in its most recent election manifesto, Sinn Féin supports the creation of a "Minister for Europe", the right to self-determination regarding the independence of the Basque Country from Spain and France, the Palestinians in the Israeli–Palestinian conflict, and the independence of Catalonia from Spain. Sinn Féin supports a policy of "critical engagement with the EU" and has a "principled opposition" to a European superstate. It opposes an EU constitution because it would reduce the sovereignty of the member-states. It also criticises the EU on grounds of neoliberalism. Sinn Féin MEP Matt Carthy says that the "European Union must become a cooperative union of nation states committed to working together on issues such as climate change, migration, trade, and using our common strengths to improve the lives of citizens. If it does not, EU disintegration becomes a real possibility." The party did, however, support continued UK membership of the European Union in the UK's 2016 EU referendum. Sinn Féin is organised throughout Ireland, and membership is open to all Irish residents over the age of 16. The party is organised hierarchically into "cumainn" (branches), "comhairle ceantair" (district executives), and "cúigí" (regional executives). At national level, the "Coiste Seasta" (Standing Committee) oversees the day-to-day running of Sinn Féin.
It is an eight-member body nominated by the Sinn Féin "Ard Chomhairle" (National Executive) and also includes the chairperson of each "cúige". The Sinn Féin "Ard Chomhairle" meets at least once a month. It directs the overall implementation of Sinn Féin policy and the activities of the party. The "Ard Chomhairle" also oversees the operation of various departments of Sinn Féin, viz. Administration, Finance, National Organiser, Campaigns, Sinn Féin Republican Youth, Women's Forum, Culture, Publicity and International Affairs. It is made up of the Officer Board and nine other members, all of whom are elected by delegates to the "Ard Fheis", together with fifteen members representing the five "Cúige" regions (three delegates each). The "Ard Chomhairle" can co-opt eight members for specific posts, and additional members can be co-opted, if necessary, to ensure that at least thirty per cent of "Ard Chomhairle" members are women. The "Ard Fheis" (national delegate conference) is the ultimate policy-making body of the party, where delegates, directly elected by members of "cumainn", can decide on and implement policy. It is held at least once a year, but a special "Ard Fheis" can be called by the "Ard Chomhairle" or the membership under special circumstances. Sinn Féin returned to Northern Ireland elections at the 1982 Assembly elections, winning five seats with 64,191 votes (10.1%). The party narrowly missed winning additional seats in Belfast North and Fermanagh and South Tyrone. In the 1983 UK general election eight months later, Sinn Féin increased its support, breaking the six-figure vote barrier in Northern Ireland for the first time by polling 102,701 votes (13.4%). Gerry Adams won the Belfast West constituency, and Danny Morrison fell only 78 votes short of victory in Mid Ulster. The 1984 European elections proved to be a disappointment, with Sinn Féin's candidate Danny Morrison polling 91,476 votes (13.3%) and falling well behind the SDLP candidate John Hume. By the beginning of 1985, Sinn Féin had won its first representation on local councils, owing to three by-election wins in Omagh (Seamus Kerr, May 1983) and Belfast (Alex Maskey in June 1983 and Sean McKnight in March 1984). Three sitting councillors also defected to Sinn Féin in Dungannon, Fermanagh and Derry (the last defecting from the SDLP). Sinn Féin succeeded in winning 59 seats in the 1985 local government elections, after it had predicted winning only 40 seats. However, the results continued to show a decline from the peak of 1983, as the party won 75,686 votes (11.8%). The party failed to gain any seats in the 1986 by-elections caused by the resignation of unionist MPs in protest at the Anglo-Irish Agreement. While this was partly due to an electoral pact between unionist candidates, the Sinn Féin vote fell in the four constituencies it contested. In the 1987 general election, Gerry Adams held his Belfast West seat, but the party failed to make breakthroughs elsewhere and overall polled 83,389 votes (11.4%). The same year saw the party contest the Dáil election in the Republic of Ireland; however, it failed to win any seats and polled less than 2%. The 1989 local government elections saw a drop in support for Sinn Féin. Defending 58 seats (the 59 won in 1985, plus two 1987 by-election gains in West Belfast, minus three councillors who had defected to Republican Sinn Féin in 1986), the party lost 15 seats. In the aftermath of the election, Mitchell McLaughlin admitted that recent IRA activity had affected the Sinn Féin vote.
In the 1989 European election, Danny Morrison again failed to win a seat, polling 48,914 votes (9%). The nadir for Sinn Féin in this period came in 1992, when Gerry Adams lost his Belfast West seat to the SDLP and the party's vote fell, relative to 1987, in the other constituencies it contested. In the 1997 UK general election, Adams regained Belfast West. Martin McGuinness also won a seat in Mid Ulster. In the Irish general election the same year, the party won its first seat since 1957, with Caoimhghín Ó Caoláin gaining a seat in the Cavan–Monaghan constituency. In the Irish local elections of 1999 the party increased its number of councillors from 7 to 23. The party overtook its nationalist rival, the Social Democratic and Labour Party, as the largest nationalist party in the local elections and UK general election of 2001, winning four Westminster seats to the SDLP's three. The party continues to subscribe, however, to an abstentionist policy towards the British parliament at Westminster, on account of its opposition to that parliament's jurisdiction in Northern Ireland, as well as to its oath to the Queen. Sinn Féin increased its share of the nationalist vote in the 2003, 2007, and 2011 Assembly elections, with Martin McGuinness, former Minister for Education, taking the post of deputy First Minister in the Northern Ireland power-sharing Executive Committee. The party has three ministers in the Executive Committee. In the 2010 general election, the party retained its five seats, and for the first time topped the poll at a Westminster election in Northern Ireland, winning 25.5% of the vote. All Sinn Féin MPs increased their share of the vote and, with the exception of Fermanagh and South Tyrone, increased their majorities. In Fermanagh and South Tyrone, the unionist parties agreed on a joint candidate; this resulted in the closest contest of the election, with Sinn Féin MP Michelle Gildernew holding her seat by four votes after three recounts and an election petition challenging the result. Sinn Féin lost some ground in the 2016 Assembly election, dropping one seat to finish with 28, ten behind the DUP. In the snap election eight months later, caused by the resignation of McGuinness as deputy First Minister, however, the party surged, winning 27.9% of the popular vote to 28.1% for the DUP, and 27 seats to the DUP's 28 in an Assembly reduced by 18 seats. The withdrawal of the DUP party whip from Jim Wells in May 2018 meant that Sinn Féin became the joint-largest party in the Assembly alongside the DUP, with 27 seats each. The party had five TDs elected in the 2002 Irish general election, an increase of four from the previous election. At the general election in 2007 the party had expectations of substantial gains, with poll predictions that it would gain five to ten seats. However, the party lost one of its seats to Fine Gael. Seán Crowe, who had topped the poll in Dublin South-West, fell to fifth place, with his first preference vote reduced from 20.28% to 12.16%. On 26 November 2010, Pearse Doherty won a seat in the Donegal South-West by-election. It was the party's first by-election victory in the Republic of Ireland since 1925. After negotiations with the left-wing Independent TDs Finian McGrath and Maureen O'Sullivan, a Technical Group was formed in the Dáil to give its members more speaking time. In the 2011 Irish general election the party made significant gains. All its sitting TDs were returned, with Seán Crowe regaining the seat he had lost in 2007 in Dublin South-West.
In addition to winning long-targeted seats such as Dublin Central and Dublin North-West, the party gained unexpected seats in Cork East and Sligo–North Leitrim. It ultimately won 14 seats, the best performance at the time for the party's current incarnation. The party went on to win three seats in the Seanad election which followed its success at the general election. In the 2016 election it made further gains, finishing with 23 seats and overtaking the Labour Party as the third-largest party in the Dáil. It ran seven candidates in the Seanad election, all of whom were successful. The party achieved its greatest contemporary result in the 2020 Irish general election, topping the first-preference votes with 24.5% and winning 37 seats. Due to poor results in the 2019 local elections and elections to the European Parliament, the party ran only 42 candidates and did not compete in Cork North-West. The party achieved unexpected success in the early counting, with 27 candidates being elected on the first count. Party leader Mary Lou McDonald called the result a "revolution" and announced she would pursue the formation of a government including Sinn Féin. Sinn Féin is represented on most county and city councils. It made large gains in the local elections of 2004, increasing its number of councillors from 21 to 54, and replacing the Progressive Democrats as the fourth-largest party in local government. At the local elections of June 2009, the party's vote fell by 0.95% to 7.34%, with no change in the number of seats. Losses in Dublin and urban areas were balanced by gains in areas such as Limerick, Wicklow, Cork, Tipperary and Kilkenny, and in the border counties. However, three of Sinn Féin's seven representatives on Dublin City Council resigned within six months of the June 2009 elections, one of them defecting to the Labour Party. In the 2004 European Parliament election, Bairbre de Brún won Sinn Féin's first seat in the European Parliament, at the expense of the Social Democratic and Labour Party (SDLP). She came in second behind Jim Allister, then of the Democratic Unionist Party (DUP). In the 2009 election, de Brún was re-elected with 126,184 first preference votes, the only candidate to reach the quota on the first count. This was the first time since elections began in 1979 that the DUP failed to take the first seat, and the first occasion on which Sinn Féin topped a poll in any Northern Ireland election. Sinn Féin made a breakthrough in the Dublin constituency in 2004. The party's candidate, Mary Lou McDonald, was elected on the sixth count as one of four MEPs for Dublin. In the 2009 election, when Dublin's representation was reduced to three MEPs, she failed to hold her seat. In the South constituency the party's candidate, Councillor Toiréasa Ferris, nearly doubled her number of first preference votes, lying third after the first count, but failed to attract enough transfers to win a seat. In the 2014 election, Martina Anderson topped the poll in Northern Ireland, as did Lynn Boylan in Dublin. Liadh Ní Riada was elected in the South constituency, and Matt Carthy in Midlands–North-West. In the 2019 election, Carthy was re-elected, but Boylan and Ní Riada lost their seats. Anderson also held her Northern Ireland seat until early 2020, when her term was cut short by Brexit.
https://en.wikipedia.org/wiki?curid=28175
Willis Tower The Willis Tower (formerly and still informally the Sears Tower, its name for 36 years) is a 110-story, 1,450-foot (442 m) skyscraper in Chicago, Illinois. At completion in 1973, it surpassed the World Trade Center in New York City to become the tallest building in the world, a title that it held for nearly 25 years; it was also the tallest building in the Western Hemisphere for 41 years, until the new One World Trade Center surpassed it in 2014. While it held the title of tallest office building until 2014, it lost the title of tallest man-made structure after only three years, when the CN Tower in Toronto, which serves as a communications tower, took over that title in 1976. The Willis Tower is considered a seminal achievement for engineer Fazlur Rahman Khan. It is currently the third-tallest building in the United States and the Western Hemisphere, and the 23rd-tallest in the world. Each year, more than one million people visit its observation deck, the highest in the United States, making it one of Chicago's most popular tourist destinations. The structure was renamed in 2009 by the Willis Group as a term of its lease. The building's largest tenant is United Airlines, which moved its corporate headquarters from 77 West Wacker Drive (then the United Building) in 2012, occupying around 20 floors. Other major tenants include the building's namesake Willis Towers Watson and law firms Schiff Hardin and Seyfarth Shaw. Morgan Stanley plans to move to the building in 2019 and become its fourth-largest tenant by 2020. In 1969, Sears, Roebuck & Co. was the largest retailer in the world, with about 350,000 employees. Sears executives decided to consolidate the thousands of employees in offices distributed throughout the Chicago area into one building on the western edge of Chicago's Loop. Sears asked its outside counsel, Arnstein, Gluck, Weitzenfeld & Minow (now known as Arnstein & Lehr, LLP), to suggest a location. The firm consulted with local and federal authorities and the applicable law, then offered Sears two options: the Goose Island area northwest of downtown, and a two-block area bounded by Franklin Street on the east, Jackson Boulevard on the south, Wacker Drive on the west and Adams Street on the north, with Quincy Street running through the middle from east to west. After selection of the latter site, permits to vacate Quincy Street were obtained. Attorneys from the Arnstein firm, headed by Andrew Adsit, began buying the properties parcel by parcel. Sears purchased 15 old buildings from 100 owners and paid $2.7 million to the City of Chicago for the portion of Quincy Street the project absorbed. Sears, which needed about 3,000,000 square feet (280,000 m2) of office space for its planned consolidation and predicted growth, commissioned architects Skidmore, Owings & Merrill (SOM). Their team of Colombian-Peruvian architect Bruce Graham and Bangladeshi-Pakistani structural engineer Fazlur Rahman Khan designed the building as nine square "tubes" (each essentially a separate building), clustered in a 3×3 matrix forming a square base with 225-foot (69 m) sides. All nine tubes would rise up to the 50th floor of the building, where the northwest and southeast tubes terminate. The northeast and southwest tubes reach the 66th floor; the north, east, and south tubes end at the 90th. The remaining west and center tubes reach 108 floors. The Sears Tower was the first building to use this innovative design. It was both structurally efficient and economical: at 1,450 feet, it provided more space and rose higher than the Empire State Building and cost much less per unit area.
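The setback geometry described above lends itself to a quick back-of-the-envelope check. The following sketch (in Python) counts how many tubes are still rising at a given floor and the resulting gross floor plate; the 75-foot tube width is inferred from the 225-foot base divided into a 3×3 grid, and the figures are illustrative rather than taken from the building's actual plans:

# Sketch of the bundled-tube setbacks, assuming nine 75 ft x 75 ft tubes.
# Termination floors per the text: two tubes end at floor 50, two at 66,
# three at 90, and the remaining two rise to floor 108.
TUBE_SIDE_FT = 75
tube_top_floors = [50, 50, 66, 66, 90, 90, 90, 108, 108]

def tubes_active(floor: int) -> int:
    """Number of tubes that still enclose space at the given floor."""
    return sum(1 for top in tube_top_floors if floor <= top)

def gross_area_sqft(floor: int) -> int:
    """Approximate gross floor plate area at the given floor."""
    return tubes_active(floor) * TUBE_SIDE_FT ** 2

for floor in (1, 51, 67, 91, 108):
    print(f"floor {floor:>3}: {tubes_active(floor)} tubes, "
          f"~{gross_area_sqft(floor):,} sq ft")

Under these assumptions the base floors come out at about 50,600 square feet, shrinking to roughly 11,250 square feet for the top two tubes, which matches the tapering floorplates described below.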
The system would prove highly influential in skyscraper construction and has been used in most supertall buildings since, including the world's current tallest building, the Burj Khalifa. To honor Khan's contributions, the Structural Engineers Association of Illinois commissioned a sculpture of him for the lobby of the Willis Tower. Sears decided to focus its initial occupancy on housing its merchandise group, renting out the remaining space to other tenants until needed. The latter floor areas had to be designed to a smaller footprint with a high window-space to floor-space ratio to be attractive to prospective lessees. Smaller floorplates required a taller structure to yield sufficient square footage. Skidmore's architects therefore proposed a tower with large floors in the lower part of the building and gradually tapering floorplates in a series of setbacks, which would give the tower its distinctive look. As Sears continued to offer optimistic projections for growth, the tower's proposed floor count increased rapidly into the low hundreds, surpassing the height of New York's unfinished World Trade Center to become the world's tallest building. The height was restricted by a limit imposed by the Federal Aviation Administration (FAA) to protect air traffic. The financing of the tower was provided by Sears. It was topped with two antennas for television and radio broadcasting. Sears and the City of Chicago approved the design, and the first steel was put in place in April 1971. The structure was completed in May 1973. The construction cost about US$150 million, equivalent to roughly $875 million in 2020 dollars. By comparison, Taipei 101, built in 2004, cost the equivalent of US$2.21 billion in 2018 dollars. Black bands appear on the tower around the 29th–32nd, 64th–65th, 88th–89th, and 104th–108th floors. These elements are louvres that ventilate the building's environmental support systems and obscure its belted trusses. Even though regulations did not require a fire sprinkler system, the building was equipped with one from the beginning. There are around 40,000 sprinkler heads in the building, installed at a cost of $4 million. In February 1982, two television antennas were added to the structure, increasing its total height to 1,707 feet (520 m). The western antenna was later extended, bringing the overall height to 1,730 feet (527 m) on June 5, 2000, to improve reception of local NBC station WMAQ-TV. As the construction of the building neared the 50th floor, lawsuits for an injunction were filed seeking to stop the building from exceeding 67 floors. The suits alleged that above that point television reception would deteriorate and cause property values to plummet. The first suit was filed by the state attorney in neighboring Lake County on March 17, 1972. A second suit was filed on March 28 in Cook County Circuit Court by the villages of Skokie, Northbrook, and Deerfield, Illinois. Sears filed motions to dismiss the Lake and Cook County lawsuits, and on May 17, 1972, Judge LaVerne Dickson, Chief of the Lake County Circuit Court, dismissed the suit, saying, "I find nothing that gives television viewers the right to reception without interference. They will have to find some other means of ensuring reception such as taller antennas." The Lake County state's attorney filed a notice of appeal to the Illinois Supreme Court, which ultimately decided in favor of Sears. In his decision on June 12, Judge Charles R. Barrett contended the plaintiffs did not have a right to undistorted television reception.
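The inflation figure quoted above can be reproduced with a simple CPI ratio. A minimal sketch in Python, assuming CPI-U annual averages of roughly 44.4 for 1973 and 258.8 for 2020 (these index values are assumptions for illustration, not figures from this article):

# CPI-based inflation adjustment for the quoted construction cost.
CPI_1973 = 44.4    # assumed CPI-U annual average, 1973
CPI_2020 = 258.8   # assumed CPI-U annual average, 2020

cost_1973 = 150_000_000  # construction cost in 1973 dollars
cost_2020 = cost_1973 * CPI_2020 / CPI_1973
print(f"${cost_2020 / 1e6:,.0f} million in 2020 dollars")  # ~ $874 million with these index values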
Meanwhile, the Illinois Citizens' Committee for Broadcasting requested that the Federal Communications Commission (FCC) halt construction so the building would not interfere with television reception. On May 26, 1972, the Commission declined to take action on the grounds that it did not have jurisdiction. On June 30, 1972, the Illinois Supreme Court affirmed the previous rulings by the Lake and Cook County Circuit Courts, by a letter order with a written opinion to follow. On September 8, 1972, the United States Court of Appeals for the Seventh Circuit upheld the FCC decision. The court's written opinion was filed on September 20, 1972. In affirming the lower court rulings, it held that "absent legislation to the contrary, defendant has a proprietary right to construct a building to its desired height and that completion of the project would not constitute a nuisance under the circumstances of this case." Sears' optimistic growth projections were not realized. Competition beyond its traditional rivals such as Montgomery Ward arose from emerging retail giants including Kmart, Kohl's, and Walmart. As a result of a surplus of office space that emerged in the 1980s, the tower did not draw as many tenants as projected and so stood half-vacant for a decade. In 1984, Sears decided to improve the appeal of the lower floors of the tower to pedestrians. Their solution resulted in the addition of a new entryway dubbed by many the "Lunchbox Entrance." Sears looked into selling the Sears Tower in the late 1980s. In July 1990, with no potential buyer apparent, Sears took out a mortgage loan on the tower for $850 million from MetLife and AEW Capital Management, with MetLife as the holder of the mortgage note. This loan would mature in 2005. In 1990, the law firm of Keck, Mahin & Cate decided to move into a development that would become 77 West Wacker Drive, rebuffing Sears' attempts to entice the firm to stay. Just two years later, Sears began a move of its own offices out of the building to a new campus in Hoffman Estates, Illinois, which was completed in 1995. As the maturation of the mortgage approached, Sears renegotiated the loan in 1994. The negotiations resulted in an agreement under which Sears would no longer be liable for the $850 million loan, although it would only nominally own the building, while AEW and MetLife effectively had total control. As part of the 1994 agreement, AEW and MetLife would be able to take official ownership of the building in 2003. However, in 1997, Toronto-based TrizecHahn Corporation (at the time the lessee of the CN Tower) purchased AEW's holdings in the building for $110 million, assuming $4 million in liabilities and a $734 million mortgage. Trizec projected that the Sears Tower would quickly reach a value of $1 billion. These projections were not met, with the tower facing the same vacancy and other problems it saw under Sears, although Trizec made somewhat successful efforts to attract new tenants. Following the September 11 attacks, two of the largest tenants, Goldman Sachs and Merrill Lynch, immediately announced plans to vacate 300,000 square feet of space. In 2003, Trizec sold its holdings of the tower to MetLife for $9 million. In 2004, MetLife sold the building to a group of investors including New York-based Joseph Chetrit, Joseph Moinian, Lloyd Goldman, Joseph Cayre and Jeffrey Feil, and Skokie, Illinois-based American Landmark Properties. The quoted price was $840 million, with $825 million held in a mortgage.
In June 2006, seven men were arrested by the FBI and charged with plotting to destroy the tower. Deputy FBI Director John Pistole described their plot as "more aspirational than operational". The case went to court in October 2007. After three trials, five of the suspects were convicted and two acquitted. The alleged leader of the group, Narseal Batiste, was sentenced to 13½ years in prison. In response to the perceived threat of an attack, the building's largest tenant at this time, Ernst & Young, moved to North Wacker Drive in early 2009. Since 2007, the owners had considered plans for the construction of a hotel on the north side of Jackson Boulevard, between Wacker Drive and Franklin Street, close to the entrance of the observation deck, above the tower's underground parking garage. According to the tower's owners, the second building was considered in the original design. The plan was eventually cancelled, as city zoning did not permit construction of such a tall building in that location. In February 2009, the owners announced they were considering a plan to paint the structure silver, an idea that was later abandoned. It was hoped that a new silver paint job would "rebrand" the building and highlight its advances in energy efficiency, for an estimated cost of $50 million. Although Sears' naming rights expired in 2003, the building continued to be called the Sears Tower for several years, despite multiple changes in ownership. In March 2009, London-based insurance broker Willis Group Holdings agreed to lease a portion of the building and obtained the naming rights. On July 16, 2009, the building was officially renamed Willis Tower. On August 13, 2012, United Airlines announced it would move its corporate headquarters from 77 West Wacker Drive to Willis Tower. In 2015, the Blackstone Group purchased the tower for a reported $1.3 billion, the highest price ever paid for a U.S. property outside of New York City. In 2017, Blackstone announced a $500 million "facelift" for the property, which would include the construction of a six-story commercial complex in the tower's plaza area. In May 2020, heavy rains caused three of the basement levels to flood, knocking out power to the building. This also resulted in many TV and radio stations going off the air. The Willis Tower observation deck, called the Skydeck, opened on June 22, 1974. Located on the 103rd floor at an elevation of 1,353 feet (412 m), it is the highest observation deck in the United States and one of Chicago's most famous tourist attractions. Tourists can experience how the building sways in wind and, in clear conditions, see far over the plains of Illinois and across Lake Michigan to Indiana, Michigan, and Wisconsin. Elevators reach the top in about 60 seconds, allowing occupants to feel the change in pressure as they ascend. The Skydeck competes with the John Hancock Center's observation floor a mile and a half away, which sits at a lower elevation. Some 1.7 million tourists visit annually. A second observation deck on the 99th floor serves as a backup. The tourist entrance can be found on the south side of the building along Jackson Boulevard. In January 2009, a major renovation of the Skydeck began, including the installation of retractable glass balconies which extend approximately 4.3 feet (1.3 m) from the facade of the 103rd floor, overlooking South Wacker Drive. The all-glass boxes, informally dubbed "The Ledge", allow visitors to see the street below. The boxes opened to the public on July 2, 2009.
On May 29, 2014, the laminated glass flooring of one of the boxes shattered while visitors were inside, but there were no injuries. The flooring on that same box shattered again on June 12, 2019. The Willis Tower remains the third-tallest building in the Americas (after One World Trade Center and Central Park Tower) and the Western Hemisphere. With a pinnacle height of 1,730 feet (527 m), it is the third-tallest freestanding structure in the Americas, shorter than Toronto's CN Tower. It is the eighth-tallest freestanding structure in the world by pinnacle height. At 1,483 feet (452 m) tall, including decorative spires, the Petronas Twin Towers in Kuala Lumpur, Malaysia, controversially claimed to be the tallest buildings in the world in 1998. In the ensuing controversy, four categories of "tallest building" were created. Of these, Petronas was the tallest in the category of height to the top of architectural elements, meaning spires but not antennas. Taipei 101 in Taiwan claimed the record in three of the four categories in 2004 to become recognized as the tallest building in the world. Taipei 101 surpassed the Petronas Twin Towers in spire height and the Sears Tower in roof height and highest occupied floor. The tower retained one record: its antenna exceeded Taipei 101's spire in height. In 2008, the Shanghai World Financial Center claimed the records of tallest building by roof and highest occupied floor. On August 12, 2007, the Burj Khalifa in Dubai was reported by its developers to have surpassed the tower in all height categories. Upon completion, One World Trade Center in New York City surpassed the Willis Tower in structural and pinnacle heights, but not by roof, observation deck elevation, or highest occupied floor. Until 2000, the tower did not hold the record for being the tallest building by pinnacle height. From 1969 to 1978, this record was held by the John Hancock Center, whose antenna reached a height of 1,500 feet (457 m), taller than the Sears Tower's original height. The original One World Trade Center then became taller by pinnacle height with the addition of a 359-foot (109.4-meter) antenna, bringing its total height to 1,727 feet (526 m). In 1982, two antennas were installed on the Sears Tower which brought its total height to 1,707 feet (520 m), making it taller than the John Hancock Center but not One World Trade Center. However, the extension of the tower's western antenna in June 2000 to 1,730 feet (527 m) allowed it to just barely claim the title of tallest building by pinnacle height. On May 25, 1981, Dan Goodwin, wearing a homemade Spider-Man suit while using suction cups, camming devices, and sky hooks, and despite several attempts by the Chicago Fire Department to stop him, made the first successful outside ascent of the tower. Goodwin was arrested at the top after the seven-hour climb and was later charged with trespassing. Goodwin stated that the reason he made the climb was to call attention to shortcomings in high-rise rescue and firefighting techniques. After a lengthy interrogation by Chicago's District Attorney and Fire Commissioner, Goodwin was officially released from jail. In August 1999, French urban climber Alain "Spiderman" Robert, using only his bare hands and bare feet, scaled the building's exterior glass and steel wall all the way to the top. A thick fog settled in near the end of his climb, making the last 20 stories of the building's glass and steel exterior slippery.
Annually since 2009, the Willis Tower has hosted SkyRise Chicago, the world's tallest indoor stair climb, a charity event benefiting the Shirley Ryan AbilityLab in which participants can (legally) climb the Willis Tower's 103-story staircase. Although Sears sold the tower in 1994 and had completely vacated it by 1995, the company retained the naming rights to the building through 2003. The new owners were rebuffed in renaming deals with CDW Corp in 2005 and the U.S. Olympic Committee in 2008. London-based insurance broker Willis Group Holdings Ltd leased more than 140,000 square feet (13,000 m2) of space on three floors in 2009. A Willis spokesman said the naming rights were obtained as part of the negotiations at no cost to Willis, and the building was renamed Willis Tower on July 16, 2009. The naming rights are valid for 15 years, so it is possible that the building's name could change again as soon as 2024. The "Chicago Tribune" joked that the building's new name reminded them of the oft-repeated "What you talkin' 'bout, Willis?" catchphrase from the American television sitcom "Diff'rent Strokes" and considered the name change ill-advised in "a city with a deep appreciation of tradition and a healthy ego, where some Chicagoans still mourn the switch from Marshall Field's to Macy's". This feeling was confirmed in a July 16, 2009 CNN article in which some Chicago area residents expressed reluctance to accept the Willis Tower name, and in an article in the October 2010 issue of "Chicago" magazine that ranked the building among Chicago's 40 most important, in which the author pointedly refused to acknowledge the name change and referred to the building as the "Sears Tower". "Time" magazine called the name change one of the top 10 worst corporate name changes and pointed to negative press coverage by local news outlets and online petitions from angry residents. The naming rights issue continued into 2013, when Eric Zorn noted in the "Chicago Tribune" that "We're stubborn about such things. This month marked four years since the former Sears Tower was re-christened Willis Tower, and the new name has yet to stick." Many broadcast station transmitters are located at the top of Willis Tower; stations transmitting at the same height on the same mast use a diplexer to feed the same shared antenna. Due to its extreme height, FM stations (all class B) are very limited in power output. NOAA Weather Radio station KWO39 transmits off the tower at 162.550 MHz. Programmed by the National Weather Service Weather Forecast Office in Chicago, it is equipped with Specific Area Message Encoding (SAME), which triggers an alarm on specially programmed weather radios to warn of an impending hazard. The building has appeared in numerous films and television shows set in Chicago, such as "Ferris Bueller's Day Off", where Ferris and company visit the observation deck. "Late Night with Conan O'Brien" introduced a character called The Sears Tower Dressed In Sears Clothing when the show visited Chicago in 2006. The building is also featured in the History Channel's "Life After People", in which it and other human-made landmarks suffer from neglect without humans around, collapsing two hundred years after people are gone. In an episode of the television series "Monk", Adrian Monk tries to conquer his fear of heights by imagining that he is on top of the tower.
In an episode of "Kenan and Kel", Kenan Rockmore and Kel Kimble decide to climb to the top of the tower so Kenan can declare his love for a girl, but they end up getting stuck on a window washer's platform 110 stories up. The Chicago Franchise frequently features locations where the tower is visible in the distance. In the movie "", it is damaged by a tornado. In the 1996 film "Michael", by Nora Ephron, the tower, then the tallest building in the world, is the last Earthly wonder the archangel Michael gets to see before vanishing from Earth. In "1969", a Season 2 episode of the science-fiction series "Stargate SG-1", the SG-1 team accidentally travels back in time to the titular year. At one point the team travels through Chicago and the tower is shown (erroneously, since construction did not begin on the tower until two years later in 1971). In the 2004 film "I, Robot", the tower is shown updated in the year 2035 with new triangular antennas. It is portrayed as having its height surpassed by the fictional USR (United States Robotics) Building. In the 2008 film "The Dark Knight", it is part of Gotham City. In the 2011 film "", it is featured in a number of scenes. The most notable one is when the N.E.S.T team tries to enter the city using V-22 Osprey helicopters. They use Willis Tower for cover before using wing suits to descend into the city streets. In the 2013 film "Man of Steel", the tower is the location of the offices of the "Daily Planet". In the 2014 film "Divergent", it is shown abandoned and decayed in a future Chicago. In the 2015 film "Jupiter Ascending", the tower is featured prominently as the place where Caine and Jupiter await a spaceship to lift them off the planet. In the 2018 film "Rampage", the Energyne corporation is headquartered in the building and uses the tower's antenna to broadcast an echolocation signal that would attract three mutated monsters. After the antenna is destroyed, the entire building collapses to the ground due to being heavily damaged by a giant mutated ape. In the 2019 film "Captive State", which takes place a decade after an extraterrestrial race has taken over the Earth in an alien invasion, the aliens in Chicago have replaced the tower's antennas with their own jamming device to shut down all digital communication worldwide. The modified tower was also featured on one of the film's posters. In Sufjan Stevens' 2005 album "Illinois", the tower is referenced in the track "Seer's Tower." In the 1987 VHS video game Captain Power set in 2147, you can fly around the tower which is at that time called "The Tower of the Seer".
https://en.wikipedia.org/wiki?curid=28176
Samaritans The Samaritans (Samaritan Hebrew: "Shamerim", 'Guardians/Keepers/Watchers (of the Torah)'; Arabic: "al-Sāmiriyyūn") are an ethnoreligious group originating from the Israelites (or Hebrews) of the Ancient Near East. Ancestrally, Samaritans claim descent from the tribe of Ephraim and the tribe of Manasseh (two sons of Joseph) as well as from the Levites, and trace their links to ancient Samaria (now constituting the majority of the territory known as the West Bank) to the period of the Israelite entry into Canaan, while some Orthodox Jews suggest that those links date only from the beginning of the Babylonian captivity up to the Samaritan polity under the rule of Baba Rabba. Samaritans used to include descendants whose ancestry was ascribed to the tribe of Benjamin, but this line became extinct in the 1960s. According to Samaritan tradition, the split between them and the Judean-led Southern Israelites began during the biblical time of the priest Eli, when the Southern Israelites split off from the central Israelite tradition, as the Samaritans perceive it. In the Talmud, a central post-exilic religious text of Rabbinic Judaism, the Samaritans are called "Cuthites" or Cutheans (Hebrew: "Kutim"), referring to the ancient city of Kutha, geographically located in what is today Iraq. Josephus's "Wars of the Jews" also refers to the Samaritans as the Cuthites. In the biblical account, however, Kuthah was one of several cities from which people were brought to Samaria, and they worshiped Nergal. Modern genetics partially supports both the claims of the Samaritans and the account in the Hebrew Bible (and Talmud), suggesting that the genealogy of the Samaritans lies in some combination of these two accounts; it also suggests that the Samaritans have remained a genetically isolated population. The Samaritans are adherents of Samaritanism, a religion closely related to Judaism. Samaritans believe that their worship, which is based on the Samaritan Pentateuch, is the true religion of the ancient Israelites from before the Babylonian captivity, preserved by those who remained in the Land of Israel, as opposed to Judaism, which they see as a related but altered and amended religion, brought back by those returning from the Babylonian captivity. The Samaritans believe that Mount Gerizim was the original Holy Place of Israel from the time that Joshua conquered Canaan. The major issue between Jews and Samaritans has always been the location of the Chosen Place to worship God: the Temple Mount of Moriah in Jerusalem according to Judaism, or Mount Gerizim according to Samaritanism. Once a large community, the Samaritan population appears to have shrunk significantly in the wake of the bloody suppression of the Samaritan Revolts (mainly in 529 CE and 555 CE) against the Byzantine Empire. Conversion to Christianity under the Byzantines also reduced their numbers. Conversions to Islam took place as well, and by the mid–Middle Ages, Benjamin of Tudela estimated only around 1,900 Samaritans remained in Palestine and Syria. The present-day population has been consistently divided between Qiryat Luza on Mount Gerizim and the city of Holon, just outside Tel Aviv. Most Samaritans in Holon and Qiryat Luza today speak Hebrew and Arabic. For liturgical purposes, Samaritan Hebrew, Samaritan Aramaic, and Arabic are used, all written with the Samaritan alphabet, a variant of the Paleo-Hebrew alphabet, which is distinct from the Jewish Hebrew alphabet, itself a stylized form of the Imperial Aramaic script.
Hebrew and later Aramaic were the languages in use by the Jewish and Samaritan inhabitants of Judea (the name by which Israel was known during part of the Second Temple era) before the Roman exile. Samaritans have a stand-alone religious status in Israel, and there are occasional conversions from Judaism to Samaritanism and vice versa due to marriages. While the Israeli Rabbinic authorities consider Samaritanism to be a branch of Judaism, the Chief Rabbinate of Israel requires Samaritans to go through a formal conversion to Judaism in order to be recognized as Halakhic Jews. One example is Israeli TV personality Sofi Tsedaka, who formally converted to Rabbinic Judaism at the age of 18. Samaritans with Israeli citizenship are obligated to undertake mandatory service in the Israel Defense Forces, while those with dual Israeli-Palestinian citizenship (living in Qiryat Luza) are generally exempted. There is conflict over the etymology of the name for the Samaritans in Hebrew, stemming from the fact that they are referred to differently in different dialects of Hebrew. This has been accompanied by controversy over whether the Samaritans are named after the geographic area of Samaria (the northern part of what is now globally known as the West Bank), or whether the area received its name from the group. This distinction is controversial in part because different interpretations can be used to justify or deny claims of ancestry over this region, which has been deeply contested in modern times. In Samaritan Hebrew, the Samaritans call themselves "Shamerim" (שַמֶרִים), which, according to the Anchor Bible Dictionary, is derived from the Ancient Hebrew term meaning 'Guardians/Keepers/Watchers [of the Torah/Law]'. Biblical Hebrew "Šomerim" 'Guardians' (singular "Šomer") comes from the Semitic root שמר, which means 'to watch, guard'. Historically, Samaria was the key geographical concentration of the Samaritan community. This may suggest that the region of Samaria is named after the Samaritans, rather than the Samaritans being named after the region. In Jewish tradition, however, it is sometimes claimed that Mount Samaria, meaning 'Watch Mountain', is actually so named because watchers used to watch from those mountains for armies approaching from Egypt in ancient times. In Modern Hebrew, the Samaritans are called "Shomronim", which would appear to simply mean 'inhabitants of Samaria'. This is a politically sensitive distinction. The derivation of the Samaritans' ethnonym in Samaritan Hebrew from 'Guardians/Keepers/Watchers [of the Law/Torah]', as opposed to their being named after the region of Samaria, has historically been supported by a number of Christian Church Fathers, including Epiphanius of Salamis in the "Panarion", Jerome and Eusebius in the "Chronicon", and Origen in "The Commentary on Saint John's Gospel", and by some rabbinic commentary, such as Tanhuma on Genesis 31 and Pirke De-Rabbi Eliezer 38, p. 21. According to Samaritan tradition, Mount Gerizim was the original Holy Place of the Israelites from the time that Joshua conquered Canaan and the tribes of Israel settled the land. The reference to Mount Gerizim derives from the biblical story of Moses ordering Joshua to take the Twelve Tribes of Israel to the mountains by Shekhem (Nablus) and place half of the tribes, six in number, on Mount Gerizim, the Mount of the Blessing, and the other half on Mount Ebal, the Mount of the Curse.
The two mountains were used to symbolize the significance of the commandments and to serve as a warning to whoever disobeyed them (Deut. 11:29; 27:12; Josh. 8:33). Samaritans claim they are Israelite descendants of the Northern Israelite tribes of Ephraim and Manasseh, who survived the destruction of the Kingdom of Israel (Samaria) by the Assyrians in 722 BCE. Samaritan historiography places the basic schism from the remaining part of Israel after the tribes of Israel, led by Joshua, conquered and returned to the land of Canaan. In its account, after Joshua's death, Eli the priest left the Tabernacle which Moses had erected in the desert and which had been established on Mount Gerizim, and built another one under his own rule in the hills of Shiloh. Abu l-Fath, who in the 14th century wrote a major work of Samaritan history, comments on Samaritan origins in similar terms, as does the "Samaritan Chronicle Adler", or New Chronicle, believed to have been composed in the 18th century using earlier chronicles as sources. The emergence of the Samaritans as an ethnic and religious community distinct from other Levant peoples appears to have occurred at some point after the Assyrian conquest of the Israelite Kingdom of Israel in approximately 721 BCE. The records of Sargon II of Assyria indicate that he deported 27,290 inhabitants of the former kingdom. Jewish tradition affirms the Assyrian deportations and the replacement of the previous inhabitants by forced resettlement of other peoples, but claims a different ethnic origin for the Samaritans. The Talmud accounts for a people called "Cuthim" on a number of occasions, mentioning their arrival by the hands of the Assyrians. According to 2 Kings and Josephus, the people of Israel were removed by the king of the Assyrians (Sargon II) to Halah, to Gozan on the Khabur River and to the towns of the Medes. The king of the Assyrians then brought people from Babylon, Cuthah, Avah, Emath, and Sepharvaim to place in Samaria. Because God sent lions among them to kill them, the king of the Assyrians sent one of the priests from Bethel to teach the new settlers about God's ordinances. The eventual result was that the new settlers worshiped both the God of the land and their own gods from the countries from which they came. In Chronicles, following Samaria's destruction, King Hezekiah is depicted as endeavouring to draw the Ephraimites and Manassites closer to Judah. Temple repairs at the time of Josiah were financed by money from all "the remnant of Israel" in Samaria, including from Manasseh, Ephraim, and Benjamin. Jeremiah likewise speaks of people from Shekhem, Shiloh, and Samaria who brought offerings of frankincense and grain to the House of YHWH. Chronicles makes no mention of an Assyrian resettlement. Yitzhak Magen argues that the version of Chronicles is perhaps closer to the historical truth and that the Assyrian settlement was unsuccessful, a notable Israelite population remaining in Samaria, part of which, following the conquest of Judah, fled south and settled there as refugees. A Midrash (Genesis Rabbah Sect. 94) relates an encounter between Rabbi Meir and a Samaritan, including the following dialogue: Rabbi Meir: What tribe are you from? The Samaritan: From Joseph. Rabbi Meir: No! The Samaritan: From which one then? Rabbi Meir: From Issachar. The Samaritan: How do you figure? Rabbi Meir: For it is written (Gen 46:13): The sons of Issachar: Tola, Puvah, Iob, and Shimron. These are the Samaritans (shamray).
Zertal dates the Assyrian onslaught to the period from 721 BCE to 647 BCE and discusses three waves of imported settlers. He shows that finds of Mesopotamian pottery in Samaritan territory cluster around the lands of Menasheh and that the type of pottery found was produced around 689 BCE. Some date the Samaritans' split with the Jews to the time of Nehemiah, Ezra, and the building of the Second Temple in Jerusalem after the Babylonian exile. Returning exiles considered the Samaritans to be non-Israelites and, thus, not fit for this religious work. The "Encyclopaedia Judaica" (under "Samaritans") summarizes both past and present views on the Samaritans' origins. To this day, the Samaritans claim descent from the tribe of Joseph. The Dead Sea scroll 4Q372 expresses the hope that the northern tribes will return to the land of Joseph. The current dwellers in the north are referred to as fools, an enemy people. However, they are not referred to as foreigners. It goes on to say that the Samaritans mocked Jerusalem and built a temple on a high place to provoke Israel. The account of the Assyrian kings, which was among the archaeological discoveries in Babylon, differs from the Samaritan account, and confirms much of the Jewish biblical account but may differ in regard to the ethnicity of the foreigners settled in Samaria by the Assyrians. At one point, it is simply said that they were from Arabia, while at another, that they were brought from a number of countries conquered by Sargon II. The narratives in Genesis about the rivalries among the twelve sons of Jacob are viewed by some as describing tensions between north and south. They were temporarily united in the United Monarchy, but after the death of Solomon, the kingdom split in two, the Kingdom of Israel with its last capital city Samaria and the Kingdom of Judah with its capital Jerusalem. The Deuteronomistic history, written in Judah, portrayed Israel as a sinful kingdom, divinely punished for its idolatry and iniquity by being destroyed by the Assyrians in 720 BCE. The tensions continued in the postexilic period. The Books of Kings are more inclusive than Ezra–Nehemiah, since their ideal is of one Israel with twelve tribes, whereas the Books of Chronicles concentrate on the Kingdom of Judah and ignore the Kingdom of Israel (Samaria). The Samaritans claimed that they were the true Israel, descendants of the "Ten Lost Tribes" taken into Assyrian captivity. They had their own sacred precinct on Mount Gerizim and claimed that it was the original sanctuary. Moreover, they claimed that their version of the Pentateuch was the original and that the Jews had a falsified text produced by Ezra during the Babylonian exile. Both Jewish and Samaritan religious leaders taught that it was wrong to have any contact with the opposite group, and neither was to enter the other's territories or even to speak to the other. During the New Testament period, the tensions were exploited by Roman authorities, as they had likewise done between rival tribal factions elsewhere, and Josephus reports numerous violent confrontations between Jews and Samaritans throughout the first half of the first century. According to historian Lawrence Schiffman, throughout the Persian period, Judeans and Samaritans fought periodically with one another. On this view, the Samaritans were a blend of peoples: Israelites who were not exiled when the Northern Kingdom was destroyed in 722 BCE, together with settlers of various nationalities whom the Assyrians had resettled in the area.
The Assyrians did this in an attempt to ensure that Israel's national dream could not come true. According to the Jewish version of events, when the Judean exile ended in 539 BCE and the exiles began returning home from Babylon, they found their former homeland populated by other people who claimed the land as their own, and Jerusalem, their former glorious capital, in ruins. The inhabitants worshiped pagan gods, but when the then-sparsely populated areas became infested with dangerous wild beasts, they appealed to the king of Assyria for Israelite priests to instruct them on how to worship the "God of that country". The result was a syncretistic religion, in which national groups worshiped the Israelite God but also served their own gods in accordance with the customs of the nations from which they had been brought. According to 2 Chronicles 36:22–23, the Persian emperor Cyrus the Great (reigned 559–530 BCE) permitted the return of the exiles to their homeland and ordered the rebuilding of the Temple (Zion). The prophet Isaiah identified Cyrus as "the Lord's Messiah". The word "Messiah" refers to an anointed individual, such as a king or priest. During the First Temple period, it was possible for foreigners to help the Jewish people in an informal way, until tension grew between the Samaritans and Judeans; foreigners could physically move into Judean land and abide by its laws and religion. Ezra 4 says that the local inhabitants of the land offered to assist with the building of the new Temple during the time of Zerubbabel, but their offer was rejected. According to Ezra, this rejection precipitated further interference not only with the rebuilding of the Temple but also with the reconstruction of Jerusalem. The issue surrounding the Samaritans' offer to help rebuild the temple was a complicated one that the Judeans weighed at length. There had long been a division between north and south, and this episode illustrates it: following Solomon's death, sectionalism formed and inevitably led to the division of the kingdom, and that longstanding division contributed to the Judeans' rejection of the Samaritans' offer to help centralise worship at the Temple. The text is not clear on this matter, but one possibility is that these "people of the land" were thought of as Samaritans. We do know that Samaritan and Jewish alienation increased, and that the Samaritans eventually built their own temple on Mount Gerizim, near Shechem. The rebuilding of the Jewish Temple in Jerusalem took several decades. The project was first led by Sheshbazzar (ca. 538 BCE), later by Zerubbabel and Jeshua, and later still by Haggai and Zechariah (520–515 BCE). The work was completed in 515 BCE. The term "Kuthim" applied by Jews to the Samaritans had clear pejorative connotations, implying that they were interlopers brought in from Kutha in Mesopotamia and rejecting their claim of descent from the ancient Tribes of Israel. According to many scholars, archaeological excavations at Mount Gerizim indicate that a Samaritan temple was built there in the first half of the 5th century BCE. The date of the schism between Samaritans and Jews is unknown, but by the early 4th century BCE the communities seem to have had distinctive practices and communal separation. Antiochus IV Epiphanes was on the throne of the Seleucid Empire from 175 to 163 BCE. His policy was to Hellenize his entire kingdom and standardize religious observance. 
According to 1 Maccabees 1:41-50 he proclaimed himself the incarnation of the Greek god Zeus and mandated death to anyone who refused to worship him. In the 2nd century BCE, a series of events led to a revolution by a faction of Judeans against Antiochus IV. The universal peril led the Samaritans, eager for safety, to repudiate all connection and kinship with the Jews, and their request to be treated as a distinct people was granted. This was put forth as the final breach between the two groups, alleged at a much later date in the Christian Bible (John 4:9), "For Jews have no dealings with Samaritans" (or not "alleged", if the Greek sunchrasthai merely refers to not sharing utensils, as in the NABRE). Anderson notes the pressure on the community during the reign of Antiochus IV (175–164 BCE), and Josephus (Book 12, Chapter 5) quotes the Samaritans as disclaiming any kinship with the Jews. During the Hellenistic period, Samaria was largely divided between a Hellenizing faction based in Samaria (Sebastaea) and a pious faction in Shechem and surrounding rural areas, led by the High Priest. Samaria was a largely autonomous state nominally dependent on the Seleucid Empire until around 113 BCE, when the Jewish Hasmonean ruler John Hyrcanus destroyed the Hellenized Samaritan temple at Mount Gerizim, which had existed for about 200 years, and devastated Samaria; only a few stone remnants of the temple exist today. Under the Roman Empire, Samaria became a part of the Herodian Kingdom and then the Herodian Tetrarchy, and with the deposition of the Herodian ethnarch Herod Archelaus in the early 1st century CE, Samaria became a part of the province of Judaea. Samaritans appear briefly in the Christian gospels, most notably in the account of the Samaritan woman at the well and the parable of the Good Samaritan. In the latter, it is only the Samaritan who helped the man stripped of clothing, beaten, and left on the road half dead, his Abrahamic covenantal circumcision implicitly evident; the priest and the Levite walked past. The Samaritan helped the naked man regardless of his nakedness (itself religiously offensive to the priest and Levite), his self-evident poverty, or whichever Hebrew sect he belonged to. The Temple of Gerizim was rebuilt after the Bar Kokhba revolt against the Romans, around 136 CE. A building dated to the second century BCE, the Delos Synagogue, is commonly identified as a Samaritan synagogue, which would make it the oldest known Jewish or Samaritan synagogue; on the other hand, Matassa argues that, although there is evidence of Samaritans on Delos, there is no evidence the building was a synagogue. Much of the Samaritan liturgy was set by the high priest Baba Rabba in the 4th century. There were some Samaritans in the Sasanian Empire, where they served in the army. This period is considered something of a golden age for the Samaritan community, whose population is thought to have numbered up to a million. According to Samaritan sources, Eastern Roman emperor Zeno (who ruled 474–491 and whom the sources call "Zait the King of Edom") persecuted the Samaritans. The Emperor went to Neapolis (Shechem), gathered the elders and asked them to convert; when they refused, Zeno had many Samaritans killed and rebuilt the synagogue as a church. Zeno then took for himself Mount Gerizim, where the Samaritans worshiped God, and built several edifices, among them a tomb for his recently deceased son, on which he put a cross, so that the Samaritans, worshiping God, would prostrate in front of the tomb. Later, in 484, the Samaritans revolted. 
The rebels attacked Shechem, burned five churches built on Samaritan holy places and cut off the fingers of the bishop Terebinthus, who was officiating at the ceremony of Pentecost. They elected Justa (or Justasa/Justasus) as their king and moved to Caesarea, where a noteworthy Samaritan community lived. Here several Christians were killed and the church of St. Sebastian was destroyed. Justa celebrated the victory with games in the circus. According to John Malalas, the "dux Palaestinae" Asclepiades, whose troops were reinforced by the Caesarea-based Arcadiani of Rheges, defeated Justa, killed him and sent his head to Zeno. According to Procopius, Terebinthus went to Zeno to ask for revenge, and the Emperor personally went to Samaria to quell the rebellion. Some modern historians believe that the order of the facts preserved by Samaritan sources should be inverted, as the persecution of Zeno was a consequence of the rebellion rather than its cause, and would have happened after 484, around 489. Zeno rebuilt the church of St. Procopius in Neapolis (Shechem) and the Samaritans were banned from Mount Gerizim, on whose summit a signalling tower was built to give the alert in case of civil unrest. Under a charismatic, messianic figure named Julianus ben Sabar (or ben Sahir), the Samaritans launched a war to create their own independent state in 529. With the help of the Ghassanids, Emperor Justinian I crushed the revolt; tens of thousands of Samaritans died or were enslaved. The Samaritan faith, which had previously enjoyed the status of "religio licita", was virtually outlawed thereafter by the Christian Byzantine Empire; from a population once at least in the hundreds of thousands, the Samaritan community dwindled to tens of thousands. Though initially guaranteed religious freedom after the Muslim conquest of Palestine, Samaritan numbers dropped further as a result of massacres and conversions. By the time of the Arab conquests, small dispersed communities of Samaritans were living, apart from Palestine, in Egypt, Syria, and Iran. Like other non-Muslims in the empire, such as Jews, Samaritans were often considered to be People of the Book. Their minority status was protected by the Muslim rulers, and they had the right to practice their religion, but, as dhimmi, adult males had to pay the jizya or "protection tax". This, however, changed during the late Abbasid period, when persecution of the Samaritan community increased and they came to be regarded as infidels who had to convert to Islam. The tradition of men wearing a red tarboosh may go back to an order by the Abbasid Caliph al-Mutawakkil (847-861 CE) that required non-Muslims to be distinguished from Muslims. During the Crusades, Samaritans, like the non-Latin Christian inhabitants of the Kingdom of Jerusalem, were second-class citizens, but they were tolerated and perhaps favored because they were docile and had been mentioned positively in the Christian New Testament. While the majority of the Samaritan population in Damascus was massacred or converted during the reign of the Ottoman Pasha Mardam Beq in the early 17th century, the remainder of the Samaritan community there, in particular the Danafi family, which is still influential today, moved back to Nablus in the 17th century. The Nablus community endured because most of the surviving diaspora returned, and they have maintained a tiny presence there to this day. 
In 1624, the last Samaritan High Priest of the line of Eleazar son of Aaron died without issue, but according to Samaritan tradition, descendants of Aaron's other son, Ithamar, remained and took over the office. By the late Ottoman period, the Samaritan community had dwindled to its lowest point. In the 19th century, under the pressure of conversion, persecution by local rulers, and occasional natural disasters, the community fell to just over 100 persons. The situation of the Samaritan community improved significantly during the British Mandate of Palestine. At that time, they began to work in the public sector, like many other groups. The censuses of 1922 and 1931 recorded 163 and 182 Samaritans in Palestine, respectively. The majority of them lived in Nablus. After the end of the British Mandate of Palestine and the subsequent establishment of the State of Israel, some of the Samaritans who were living in Jaffa emigrated to Samaria and lived in Nablus. By the late 1950s, around 100 Samaritans left the West Bank for Israel under an agreement with the Jordanian authorities in the West Bank. In 1954, Israeli President Yitzhak Ben-Zvi fostered a Samaritan enclave in Holon, Israel. Until the 1990s, most of the Samaritans resided in the West Bank city of Nablus below Mount Gerizim. They relocated to the mountain itself near the Israeli settlement Har Brakha as a result of violence during the First Intifada (1987–1990). Consequently, all that is left of the Samaritan community in Nablus itself is an abandoned synagogue. The Israeli army maintains a presence in the area. Demographic investigations of the Samaritan community were carried out in the 1960s. Detailed pedigrees of the last 13 generations show that the Samaritans comprise four lineages: the Cohen, Tsedakah, Danafi, and Joshua-Marhiv families. Several genetic studies of the Samaritan population have been made in recent years using haplogroup comparisons as well as wide-genome genetic studies. Of the 12 Samaritan males used in the analysis, 10 (83%) had Y chromosomes belonging to haplogroup J, which includes three of the four Samaritan families. The Joshua-Marhiv family belongs to haplogroup J-M267 (formerly "J1"), while the Danafi and Tsedakah families belong to haplogroup J-M172 (formerly "J2"), and can be further distinguished by the M67 SNP (the derived allele of which has been found in the Danafi family) and the PF5169 SNP (found in the Tsedakah family). However, the largest and most important Samaritan family, the Cohen family (traditionally of the Tribe of Levi), was found to belong to haplogroup E. This article predated the change of the classification of haplogroup E3b1-M78 to E3b1a-M78 and the further subdivision of E3b1a-M78 into six subclades based on the research of Cruciani, et al. A 2004 article on the genetic ancestry of the Samaritans by Shen "et al." concluded from a sample comparing Samaritans to several Jewish populations, all currently living in Israel—representing the Beta Israel, Ashkenazi Jews, Iraqi Jews, Libyan Jews, Moroccan Jews, and Yemenite Jews, as well as Israeli Druze and Palestinians—that "the principal components analysis suggested a common ancestry of Samaritan and Jewish patrilineages. Most of the former may be traced back to a common ancestor in what is today identified as the paternally inherited Israelite high priesthood (Cohanim) with a common ancestor projected to the time of the Assyrian conquest of the kingdom of Israel." Archaeologists Aharoni, et al., estimated that this "exile of peoples to and from Israel under the Assyrians" took place during ca. 734–712 BCE. 
The authors speculated that when the Assyrians conquered the Northern Kingdom of Israel, resulting in the exile of many of the Israelites, a subgroup of the Israelites that remained in the Land of Israel "married Assyrian and female exiles relocated from other conquered lands, which was a typical Assyrian policy to obliterate national identities." The study goes on to say that "Such a scenario could explain why Samaritan Y chromosome lineages cluster tightly with Jewish Y lineages, while their mitochondrial lineages are closest to Iraqi Jewish and Israeli Arab mtDNA sequences." Non-Jewish Iraqis were not sampled in this study; however, mitochondrial lineages of Jewish communities tend to correlate with their non-Jewish host populations, unlike paternal lineages, which almost always correspond to Israelite lineages. The Samaritans may have numbered as many as a million in antiquity, but in recent times their numbers have been far smaller: there were 100 in 1786 and 141 in 1919, then 150 in 1967. This grew to 745 in 2011, 751 in 2012, 756 in 2013, 760 in 2014, 777 in 2015, 785 in 2016, 796 in 2017, 810 in 2018 and 820 in 2019. Half reside in modern homes at Kiryat Luza on Mount Gerizim, which is sacred to them, and the rest in the city of Holon, just outside Tel Aviv. There are also four Samaritan families residing in Binyamina-Giv'at Ada, Matan, and Ashdod. As a small community physically divided between neighbors in a hostile region, Samaritans have been hesitant to overtly take sides in the Arab–Israeli conflict, fearing that doing so could lead to negative repercussions. While the Samaritan communities in both the West Bank's Nablus and Israeli Holon have assimilated to the surrounding respective cultures, Hebrew has become the primary domestic language for Samaritans. Samaritans who are Israeli citizens are drafted into the military, along with the Jewish citizens of Israel. Relations of Samaritans with Jewish Israelis and with Muslim and Christian Palestinians in neighboring areas have been mixed. Samaritans living in both Israel and the West Bank enjoy Israeli citizenship. Samaritans in the Palestinian Authority-ruled territories are a minority in the midst of a Muslim majority. They had a reserved seat in the Palestinian Legislative Council in the election of 1996, but they no longer have one. Samaritans living in Samaria have been granted passports by both Israel and the Palestinian Authority. One of the biggest problems facing the community today is the issue of continuity. With such a small population, divided into only four families (Cohen, Tsedakah, Danafi, and Marhiv, with the Matar family dying out in 1968) and a general refusal to accept converts, there has been a history of genetic disorders within the group due to the small gene pool. To counter this, the Samaritan community has recently agreed that men from the community may marry non-Samaritan (primarily Israeli Jewish) women, provided that the women agree to follow Samaritan religious practices. There is a six-month trial period before officially joining the Samaritan community, to see whether the woman wishes to take on this commitment. This often poses a problem for the women, who are typically less than eager to adopt the strict interpretation of biblical (Levitical) laws regarding menstruation, by which they must live in a separate dwelling during their periods and after childbirth. There have been a few instances of intermarriage. 
In addition, all marriages within the Samaritan community are first approved by a geneticist at Tel HaShomer Hospital, in order to prevent the spread of genetic disorders. In meetings arranged by "international marriage agencies", a small number of Ukrainian women have recently been allowed to marry into the community in an effort to expand the gene pool. The Samaritan community in Israel also faces demographic challenges as some young people leave the community and convert to Judaism. A notable example is Israeli television presenter Sofi Tsedaka, who has made a documentary about leaving the community at age 18. The head of the community is the Samaritan High Priest, who is selected by age from the priestly family and resides on Mount Gerizim. The current high priest is Aabed-El ben Asher ben Matzliach, who assumed the office in 2013. Much of the local Palestinian population of Nablus is believed to be descended from Samaritans who converted to Islam. According to the historian Fayyad Altif, large numbers of Samaritans converted due to persecution under various Muslim rulers, and because the monotheistic nature of Islam made it easy for them to accept it. The Samaritans themselves describe the Ottoman period as the worst period in their modern history, as many Samaritan families were forced to convert to Islam during that time. Even today, certain Nabulsi family names such as Al-Amad, Al-Samri, Maslamani, Yaish, and Shaksheer, among others, are associated with Samaritan ancestry. For the Samaritans in particular, the passing of the al-Hakim Edict by the Fatimid Caliphate in 1021, under which all Jews and Christians in the Fatimid-ruled southern Levant were ordered to either convert to Islam or leave, along with another notable forced conversion to Islam imposed at the hands of the rebel ibn Firāsa, contributed to their rapid and unprecedented decrease, and ultimately to their near-complete extinction as a separate religious community. As a result, they decreased from nearly a million and a half in late Roman (Byzantine) times to 146 people by the end of the Ottoman era. In 1940, the future Israeli president and historian Yitzhak Ben-Zvi wrote an article in which he stated that two thirds of the residents of Nablus and the surrounding neighboring villages were of Samaritan origin. He mentioned the names of several Palestinian Muslim families as having Samaritan origins, including the Al-Amad, Al-Samri, Buwarda and Kasem families, who protected Samaritans from Muslim persecution in the 1850s. He further claimed that these families had written records testifying to their Samaritan ancestry, which were maintained by their priests and elders. According to "The Economist", "most ethnic Samaritans are now pious Muslims." The Samaritan religion is based on some of the same books used as the basis of Judaism, but differs from the latter. Samaritan religious works include the Samaritan version of the Torah, the Memar Markah, the Samaritan liturgy, and Samaritan law codes and biblical commentaries. Many claim the Samaritans appear to have a text of the Torah as old as the Masoretic Text; scholars have various theories concerning the actual relationships between these texts (the Samaritan Pentateuch, the Masoretic Text, and the Greek Septuagint). According to Samaritans, it was on Mount Gerizim that Abraham was commanded by God to offer Isaac, his son, as a sacrifice. In both narratives, God then causes the sacrifice to be interrupted, explaining that this was the ultimate test of Abraham's obedience, as a result of which all the world would receive blessing. 
The Torah mentions the place where God chooses to establish His name (Deut 12:5), and Judaism takes this to refer to Jerusalem. However, the Samaritan text speaks of the place where God "has chosen" to establish His name, and Samaritans identify it as Mount Gerizim, making it the focus of their spiritual values. The legitimacy of the Samaritan temple was attacked by Jewish scholars including Andronicus ben Meshullam. In the Christian Bible, the Gospel of John relates an encounter between a Samaritan woman and Jesus in which she says that the mountain was the center of their worship. She poses the question to Jesus when she realizes that he is the Messiah. Jesus affirms the Jewish position, saying "You (that is, the Samaritans) worship what you do not know". The Samaritans have retained an offshoot of the Ancient Hebrew script, a High Priesthood, the slaughtering and eating of lambs on Passover eve, and the celebration of the first month's beginning around springtime as the New Year. Yom Teru`ah (the biblical name for "Rosh Hashanah"), at the beginning of Tishrei, is not considered a New Year as it is in Rabbinic Judaism. The Samaritan Pentateuch differs from the Jewish Masoretic Text as well. Some differences are doctrinal: for example, the Samaritan Torah explicitly states that Mount Gerizim is "the place that God "has chosen"" to establish His name, as opposed to the Jewish Torah, which refers to "the place that God "chooses"". Other differences are minor and seem more or less accidental. Samaritans refer to themselves as "Benai Yisrael" ("Children of Israel"), a term used by all Jewish denominations as a name for the Jewish people as a whole. They do not, however, refer to themselves as "Yehudim" (Jews), the standard Hebrew name for Jews. The Talmudic attitude expressed in tractate Kutim is that they are to be treated as Jews in matters where their practice coincides with Rabbinic Judaism, but as non-Jews where their practice differs. Some claim that since the 19th century, Rabbinic Judaism has regarded the Samaritans as a Jewish sect, and the term "Samaritan Jews" has been used for them. Samaritan law is not the same as Halakha (Rabbinic Jewish law); the Samaritans have several groups of religious texts which correspond to Jewish Halakha. Samaria or Samaritans are mentioned in the New Testament books of Matthew, Luke, John and Acts; the Gospel of Mark contains no mention of Samaritans or Samaria, and the rest of the New Testament makes no specific mention of them. The best-known reference to the Samaritans is the Parable of the Good Samaritan, found in the Gospel of Luke. "The Samaritan News", a monthly magazine started in 1969, is written in Samaritan Aramaic, Hebrew, Arabic, and English, and deals with current and historical issues with which the Samaritan community is concerned. The "Samaritan Update" is a bi-monthly e-newsletter for Samaritan Studies. A 2018 documentary film, "How to Save a Tribe", presented by traveller and author Leon McCarron, focuses on the population crisis among the Samaritans. In the CBS science-fiction television series "Person of Interest", Samaritan is the code name of an antagonistic artificial-intelligence surveillance system that operates against the show's central AI, code-named "The Machine".
https://en.wikipedia.org/wiki?curid=28179
Seneca Lake (New York) Seneca Lake is the largest of the glacial Finger Lakes of the U.S. state of New York, and the deepest lake entirely within the state. It is promoted as being the lake trout capital of the world, and hosts the National Lake Trout Derby. Because of its depth and relative ease of access, the US Navy uses Seneca Lake to perform test and evaluation of equipment ranging from single-element transducers to complex sonar arrays and systems. The lake takes its name from the Seneca nation of Native Americans. At the north end of Seneca Lake is the city of Geneva, New York, home of Hobart and William Smith Colleges and the New York State Agricultural Experiment Station, a division of Cornell University. At the south end of the lake is the village of Watkins Glen, New York, famed for auto racing and waterfalls. Due to Seneca Lake's unique macroclimate, it is home to over 50 wineries, many of them farm wineries, and is the location of the Seneca Lake AVA (see Seneca Lake wine trail). At long, it is the second longest of the Finger Lakes and has the largest volume, estimated at , roughly half of the water in all the Finger Lakes. It has an average depth of , a maximum depth of , and a surface area of . For comparison, Scotland's famous Loch Ness is long, wide, has a surface area of , an average depth of , a maximum depth of , and total volume of of water. Seneca's two main inlets are Catharine Creek at the southern end and the Keuka Lake Outlet. Seneca Lake lets out into the Seneca River/Cayuga-Seneca Canal, which joins Seneca and Cayuga Lakes at their northern ends. It is fed by underground springs and replenished at a rate of 328,000 gallons (1240 m³) per minute. These springs keep the water moving in a circular motion, giving it little chance to freeze over. Because of Seneca Lake's great depth, its temperature remains a near-constant . In summer the top warms to . Seneca Lake has an aquatic population typical of large, deep lakes in the northeast, with coldwater fish such as lake trout and Atlantic salmon inhabiting the deeper waters, and warmwater fish such as smallmouth bass and yellow perch inhabiting the shallower areas. The lake is also home to a robust population of "sawbellies", the local term for alewife shad. Seneca Lake was formed at least two million years ago by glacial carving of existing streams and valleys. Originally it was part of a series of rivers that flowed northward. Around this time many continental glaciers moved into the area and started the Pleistocene glaciation, also known as the Ice Age. It is presumed that the Finger Lakes were created by many advances and retreats of massive glaciers up to 2 miles thick. Over 200 years ago, there were Iroquois villages on Seneca Lake's surrounding hillsides. During the American Revolutionary War, their villages, including Kanadaseaga ("Seneca Castle"), were wiped out during the 1779 Sullivan Expedition by Continental troops, ordered by General George Washington (in retaliation for the Wyoming Massacre) to invade their homeland, destroy their dwellings and crops, and end their threat to the patriots. They destroyed nearly 50 Seneca and Cayuga villages. Today roadside signs trace Sullivan's route along the east side of Seneca Lake, where the burning of villages and crops occurred. After Britain's defeat in the war, the Iroquois were forced to cede their land. Their millions of acres were sold, and some lands in this area were granted to veterans of the army in payment for their military service. 
A slow stream of European-American settlers began to arrive circa 1790. Initially the settlers had no nearby market and no way to get their crops to one. The settlers' isolation ended in 1825 with the opening of the Erie Canal, which linked the Finger Lakes region to the outside world. Steamships, barges and ferries quickly became Seneca Lake's ambassadors of commerce and trade. The short Crooked Lake Canal formerly linked Seneca Lake to Keuka Lake. Numerous canal barges sank during operations and rest on the bottom of the lake. A collection of barges at the southwest end of the lake, near the village of Watkins Glen, is being preserved and made accessible for scuba diving by the Finger Lakes Underwater Preserve Association. The lake is a popular fishing destination. Fish species in the lake include lake trout, rainbow trout, brown trout, landlocked salmon, largemouth bass, smallmouth bass, northern pike, pickerel, and yellow perch. In July 1900, newspapers carried reports that on the evening of July 14, 1899, the steamboat Otetiani, carrying several dozen passengers, had encountered a 25-foot-long sea monster with "two rows of sharp, white teeth." The steamer is said to have given chase to the creature and deliberately rammed it at full speed. The creature was struck by the ship's paddle wheel midway between head and tail, breaking its spine. It raised its four-foot-long head, then gave a gasp as it died. The ship attempted to rope the monster and tow it back to shore, but it sank to the bottom of Seneca Lake. A report sometime later in the "Geneva Gazette" suggested that the incident was a hoax. The painted rocks located at the southern end of the lake on the eastern cliff face depict an American flag, a teepee, and several Native Americans. The older paintings, located on the bottom of the cliff, were said to have been drawn in 1779 after the Senecas escaped men from John Sullivan's campaign. However, this account is questioned by historian Barbara Bell, who argues that it is unlikely that the Senecas would have returned to make the paintings having just escaped from Sullivan's men. She suggests instead that these paintings may have been made much later, for tourists on Seneca Lake boat tours. It is known that the more visible and prominent paintings of the Native Americans, American flag, and teepee were added in 1929 during the Sullivan Sesquicentennial. There are two mistakes in these 1929 additions: firstly, the Native Americans in the Seneca region used longhouses, not teepees; and secondly, the flag is displayed pointing to the left, which is never to be done on a horizontal surface. Seneca Lake is also the site of strange and currently unexplained cannon-like booms and shakes that are heard and felt in the surrounding area. They are known locally as the Seneca Guns, Lake Drums, or Lake Guns, and these types of phenomena are known elsewhere as skyquakes. The term Lake Guns originated in the short story "The Lake Gun" by James Fenimore Cooper in 1851. There is no explanation that takes into account sounds the Iroquois heard before Cooper's time; it is possible that sonic booms have been mistaken for natural sounds in modern times. The east side of Seneca Lake was once home to a military training ground called Sampson Naval Base, primarily used during World War II. It became Sampson Air Force Base during the Korean War and was used for basic training. After Sampson AFB closed, the airfield remained as Seneca Army Airfield but was closed in 2000. 
The training grounds of Sampson have since been converted to a civilian picnic area called Sampson State Park. There is still a naval facility at Seneca Lake: the Naval Undersea Warfare Center (NUWC) sonar test facility. A scale model of the sonar section of the nuclear submarine USS Seawolf (SSN 21) was tested there during the development of that ship, which was launched in June 1995. A YSI EMM-2500 buoy platform is located in the north end of Seneca Lake, roughly in the center, at latitude 42°41'49.99"N, longitude 76°55'29.93"W. The buoy has cellular modem communications and measures wind speed and direction, relative humidity, air temperature, barometric pressure, light intensity, and the water's depth and temperature, conductivity, turbidity, and chlorophyll-a levels. The buoy was initially deployed in June 2006. The water depth where it is located is about . Viticulture and winemaking in the area date back to the 19th century, with the foundation of the Seneca Lake Wine Company in 1866 marking the first major winery in the area. The modern era of wine production began in the 1970s with the establishment of several wineries and the passage of the New York Farm Winery Act of 1976. The region was established as an American Viticultural Area in 1988. The Seneca Lake Wine Trail hosts many events on and around the lake, including the annual winter "Deck the Halls" event, at which local wineries showcase their vintages. The Elmira & Seneca Lake Railway opened for operation on June 19, 1900, running from Horseheads, New York, to Seneca Lake.
https://en.wikipedia.org/wiki?curid=28180
Strait of Gibraltar The Strait of Gibraltar, also known as the Straits of Gibraltar, is a narrow strait that connects the Atlantic Ocean to the Mediterranean Sea and separates Gibraltar and Peninsular Spain in Europe from Morocco in Africa. The two continents are separated by of ocean at the Strait's narrowest point. The Strait's depth ranges between , which possibly interacted with the lower mean sea level of the last major glaciation 20,000 years ago, when the level of the sea is believed to have been lower by . Ferries cross between the two continents every day in as little as 35 minutes. The Spanish side of the Strait is protected under El Estrecho Natural Park. The name comes from the Rock of Gibraltar, which in turn originates from an Arabic name meaning "Tariq's Mount", after Tariq ibn Ziyad. It is also known as the Gut of Gibraltar (although this is mostly archaic), the STROG (STRait Of Gibraltar) in naval use, and the "Gate of Morocco" or "Gate of the West". In the Middle Ages, Muslims called it "the Passage", the Romans called it the Strait of Cadiz, and in the ancient world it was known as the "Pillars of Hercules". On the northern side of the Strait are Spain and Gibraltar (a British overseas territory in the Iberian Peninsula), while on the southern side are Morocco and Ceuta (a Spanish autonomous city in northern Africa). Due to its location, the Strait is commonly used for illegal immigration from Africa to Europe. The International Hydrographic Organization defines the limits of the Strait of Gibraltar. The seabed of the Strait is composed of synorogenic Betic-Rif clayey flysch covered by Pliocene and/or Quaternary calcareous sediments, sourced from thriving cold water coral communities. Exposed bedrock surfaces, coarse sediments and local sand dunes attest to the strong bottom current conditions at the present time. Around 5.9 million years ago, the connection between the Mediterranean Sea and the Atlantic Ocean along the Betic and Rifan Corridor was progressively restricted until its total closure, effectively causing the salinity of the Mediterranean to rise periodically within the gypsum and salt deposition range, during what is known as the Messinian salinity crisis. In this water-chemistry environment, dissolved mineral concentrations, temperature and stilled water currents combined regularly to precipitate many mineral salts in layers on the seabed. The resultant accumulation of various huge salt and mineral deposits about the Mediterranean basin is directly linked to this era. It is believed that this process took a short time, by geological standards, lasting between 500,000 and 600,000 years. It is estimated that, were the Strait closed even at today's higher sea level, most water in the Mediterranean basin would evaporate within only a thousand years, as it is believed to have done then, and such an event would lay down mineral deposits like the salt deposits now found under the sea floor all over the Mediterranean. After a lengthy period of restricted intermittent or no water exchange between the Atlantic Ocean and the Mediterranean basin, approximately 5.33 million years ago the Atlantic-Mediterranean connection was completely reestablished through the Strait of Gibraltar by the Zanclean flood, and has remained open ever since. 
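The thousand-year evaporation figure can be sanity-checked with a rough order-of-magnitude calculation (a back-of-envelope sketch; the volume, area, and evaporation values below are commonly cited round numbers assumed for illustration, not figures from this article):

```latex
% Assumed round numbers:
%   V ~ Mediterranean volume        ~ 3.7 x 10^6 km^3
%   A ~ Mediterranean surface area  ~ 2.5 x 10^6 km^2
%   E ~ net evaporative loss        ~ 1 m/yr = 10^{-3} km/yr (with inflow cut off)
\[
  t \approx \frac{V}{A \, E}
    = \frac{3.7 \times 10^{6}\ \text{km}^{3}}
           {(2.5 \times 10^{6}\ \text{km}^{2})\,(10^{-3}\ \text{km/yr})}
    \approx 1.5 \times 10^{3}\ \text{yr}
\]
```

This lands on the same order of magnitude as the estimate quoted above; in practice the rate would taper as the shrinking surface area and rising salinity slowed evaporation.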
The erosion produced by the incoming waters of the Zanclean flood seems to be the main cause for the present depth of the Strait ( at the narrows, at the Camarinal Sill). The Strait is expected to close again as the African Plate moves northward relative to the Eurasian Plate, but on geological rather than human timescales. The Strait has been identified as an Important Bird Area by BirdLife International because of the hundreds of thousands of seabirds which use it every year to migrate between the Mediterranean and the Atlantic, including significant numbers of Scopoli's and Balearic shearwaters, Audouin's and lesser black-backed gulls, razorbills, and Atlantic puffins. A resident killer whale pod of some 36 individuals lives around the Strait, one of the few that are left in Western European waters. The pod may be facing extinction in the coming decades due to the long-term effects of PCB pollution. Evidence of the first human habitation of the area by Neanderthals dates back to 125,000 years ago. It is believed that the Rock of Gibraltar may have been one of the last outposts of Neanderthal habitation in the world, with evidence of their presence there dating to as recently as 24,000 years ago. Archaeological evidence of Homo sapiens habitation of the area dates back years. The relatively short distance between the two shores has served as a quick crossing point for various groups and civilizations throughout history, including Carthaginians campaigning against Rome, Romans travelling between the provinces of Hispania and Mauritania, Vandals raiding south from Germania through Western Rome and into North Africa in the 5th century, Moors and Berbers in the 8th–11th centuries, and Spain and Portugal in the 16th century. Beginning in 1492, the Strait began to play a certain cultural role as a barrier against cross-channel conquest and the flow of culture and language that would naturally follow such a conquest. In that year, the last Muslim government north of the Strait was overthrown by a Spanish force. Since that time, the Strait has come to foster the development of two very distinct and varied cultures on either side of it, after both sides had shared much the same culture for over 300 years, from the 8th century to the early 13th century. On the northern side, Christian-European culture has remained dominant since the expulsion of the last Muslim kingdom in 1492, along with the Romance Spanish language, while on the southern side, Muslim and Arabic/Mediterranean culture has been dominant since the spread of Islam into North Africa in the 700s, along with the Arabic language. For the last 500 years, religious and cultural intolerance, more than the small travel barrier that the Strait presents, has acted as a powerful enforcing agent of the cultural separation between these two groups. The small British enclave of the city of Gibraltar presents a third cultural group in the Strait. The enclave was first established in 1704 and has since been used by Britain as a surety for control of the sea lanes into and out of the Mediterranean. Following the Spanish coup of July 1936, the Spanish Republican Navy tried to blockade the Strait of Gibraltar to hamper the transport of Army of Africa troops from Spanish Morocco to Peninsular Spain. On 5 August 1936, the so-called Convoy de la victoria was able to bring at least 2,500 men across the Strait, breaking the republican blockade. The Strait is an important shipping route from the Mediterranean to the Atlantic. 
There are ferries that operate between Spain and Morocco across the Strait, as well as between Spain and Ceuta, and between Gibraltar and Tangier. In December 2003, Spain and Morocco agreed to explore the construction of an undersea rail tunnel to connect their rail systems across the Strait. The gauge of the rail would be to match the proposed construction and conversion of significant parts of the existing broad-gauge system to standard gauge. While the project remains in a planning phase, Spanish and Moroccan officials have met to discuss it as recently as 2012, and proposals predict it could be completed by 2025. The Strait of Gibraltar links the Atlantic Ocean directly to the Mediterranean Sea. This direct linkage creates certain unique flow and wave patterns, produced by the interaction of various regional and global evaporative forces, tidal forces, and wind forces. Through the Strait, water generally flows more or less continually in both an eastward and a westward direction. A smaller amount of deeper, saltier and therefore denser water continually works its way westwards (the Mediterranean outflow), while a larger amount of surface water with lower salinity and density continually works its way eastwards (the Mediterranean inflow). These general flow tendencies may be occasionally interrupted for brief periods by temporary tidal flows, depending on various lunar and solar alignments. Still, on the whole and over time, the balance of the water flow is eastwards, due to an evaporation rate within the Mediterranean basin higher than the combined inflow of all the rivers that empty into it (see the salt-and-volume budget sketch below). At the Strait's far western end is the Camarinal Sill, the Strait's shallowest point, which limits mixing between the cold, less saline Atlantic water and the warm Mediterranean waters. The Mediterranean waters are so much saltier than the Atlantic waters that they sink below the constantly incoming water and form a highly saline ("thermohaline", both warm and salty) layer of bottom water. This layer of bottom water constantly works its way out into the Atlantic as the Mediterranean outflow. On the Atlantic side of the Strait, a density boundary separates the Mediterranean outflow waters from the rest at about depth. These waters flow out and down the continental slope, losing salinity, until they begin to mix and equilibrate more rapidly, much further out at a depth of about . The Mediterranean outflow water layer can be traced for thousands of kilometres west of the Strait, before completely losing its identity. During the Second World War, German U-boats used the currents to pass into the Mediterranean Sea without detection, by maintaining silence with engines off. From September 1941 to May 1944, Germany managed to send 62 U-boats into the Mediterranean. All these boats had to navigate the British-controlled Strait of Gibraltar, where nine U-boats were sunk while attempting passage and ten more had to break off their run due to damage. No U-boats ever made it back into the Atlantic, and all were either sunk in battle or scuttled by their own crews. Internal waves (waves at the density boundary layer) are often produced by the Strait. Like traffic merging on a highway, the water flow is constricted in both directions because it must pass over the Camarinal Sill. When large tidal flows enter the Strait and the high tide relaxes, internal waves are generated at the Camarinal Sill and proceed eastwards. 
Even though the waves may occur down to great depths, they are occasionally almost imperceptible at the surface; at other times they can be seen clearly in satellite imagery. These internal waves continue to flow eastward and to refract around coastal features. They can sometimes be traced for as much as , and sometimes create interference patterns with refracted waves. Except for its far eastern end, the Strait lies within the territorial waters of Spain and Morocco. The United Kingdom claims around Gibraltar on the northern side of the Strait, putting part of it inside British territorial waters. As this is less than the maximum, it means, according to the British claim, that part of the Strait lies in international waters. The ownership of Gibraltar and its territorial waters is disputed by Spain. Similarly, Morocco disputes Spanish sovereignty over Ceuta on the southern coast. There are several islets, such as the disputed Isla Perejil, that are claimed by both Morocco and Spain. Under the United Nations Convention on the Law of the Sea, vessels passing through the Strait do so under the regime of transit passage, rather than the more limited innocent passage allowed in most territorial waters. Some studies have proposed the possibility of erecting tidal power generating stations within the Strait, powered by its predictable currents. In the 1920s and 1930s, the Atlantropa project proposed damming the Strait to generate large amounts of electricity and lower the sea level of the Mediterranean by several hundred metres to create large new lands for settlement. This proposal would, however, have had devastating effects on the local climate and ecology, and would have dramatically changed the strength of the West African Monsoon.
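The inflow-outflow balance described above can be summarized with a standard two-layer salt-and-volume budget, often called the Knudsen relations (a textbook sketch; the salinity values below are illustrative assumptions, not measurements from this article):

```latex
% Volume conservation:  Q_in - Q_out = E        (net evaporative loss from the basin)
% Salt conservation:    Q_in S_in = Q_out S_out (no net salt accumulation over time)
\[
  Q_{\mathrm{in}} - Q_{\mathrm{out}} = E,
  \qquad
  Q_{\mathrm{in}} S_{\mathrm{in}} = Q_{\mathrm{out}} S_{\mathrm{out}}
  \quad\Longrightarrow\quad
  Q_{\mathrm{in}} = \frac{S_{\mathrm{out}}}{S_{\mathrm{out}} - S_{\mathrm{in}}}\,E
\]
```

With illustrative salinities of roughly 36.5 for the Atlantic inflow and 38 for the Mediterranean outflow, the prefactor is about 25, showing why the two exchange flows are far larger than the small eastward surplus that replaces what evaporates.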
https://en.wikipedia.org/wiki?curid=28181
Social epistemology Social epistemology refers to a broad set of approaches that can be taken in epistemology (the study of knowledge) that construe human knowledge as a collective achievement. Another way of characterizing social epistemology is as the evaluation of the social dimensions of knowledge or information. As a field of inquiry in analytic philosophy, social epistemology deals with questions about knowledge in social contexts, meaning those in which knowledge attributions cannot be explained by examining individuals in isolation from one another. The most common topics discussed in contemporary social epistemology are testimony (e.g. "When does a belief that x is true which resulted from being told 'x is true' constitute knowledge?"), peer disagreement (e.g. "When and how should I revise my beliefs in light of other people holding beliefs that contradict mine?"), and group epistemology (e.g. "What does it mean to attribute knowledge to groups rather than individuals, and when are such knowledge attributions appropriate?"). Social epistemology also examines the social justification of belief. One of the enduring difficulties with defining "social epistemology" is determining what the word "knowledge" means in this context. There is also a challenge in arriving at a definition of "social" which satisfies academics from different disciplines. Social epistemologists may be found working in many of the disciplines of the humanities and social sciences, most commonly in philosophy and sociology. In addition to marking a distinct movement in traditional and analytic epistemology, social epistemology is associated with the interdisciplinary field of science and technology studies (STS). The consideration of social dimensions of knowledge in relation to philosophy started around 380 BCE with Plato's dialogue "Charmides", in which he questions the degree of certainty a non-specialist in a field can have about a person's claim to be a specialist in that same field. Since reliance on authoritative figures is part of what social epistemology studies, the dialogue shows that such questions were being asked long before the field was given its label. In 1936, Karl Mannheim turned Karl Marx's theory of ideology (which interpreted the "social" aspect in epistemology to be of a political or sociological nature) into an analysis of how human society develops and functions in this respect. The term "social epistemology" was first coined by the library scientists Margaret Egan and Jesse Shera in the 1950s, and it was also used by Steven Shapin in 1979. However, it was not until the 1980s that the current sense of "social epistemology" began to emerge. In that decade there was a powerful growth of interest amongst philosophers in topics such as the epistemic value of testimony, the nature and function of expertise, the proper distribution of cognitive labor and resources among individuals in communities, and the status of group reasoning and knowledge. In 1987, the philosophical journal "Synthese" published a special issue on social epistemology which included two authors who have since taken the branch of epistemology in two divergent directions: Alvin Goldman and Steve Fuller. Fuller founded a journal called "Social Epistemology: A journal of knowledge, culture, and policy" in 1987 and published his first book, "Social Epistemology", in 1988. Goldman's "Knowledge in a Social World" came out in 1999. 
Goldman advocates for a type of epistemology which is sometimes called "veritistic epistemology" because of its large emphasis on truth. This type of epistemology is sometimes seen to side with "essentialism" as opposed to "multiculturalism", but Goldman has argued that this association between veritistic epistemology and essentialism is not necessary. He describes social epistemology as knowledge derived from one's interactions with another person, group or society. Goldman pursues one of the two strategies for socializing epistemology: evaluating the social factors that affect the formation of knowledge, understood as true belief. Fuller, in contrast, prefers the second strategy, which defines knowledge influenced by social factors as collectively accepted belief. The difference between the two can be illustrated with an example: under the first strategy, one analyzes how a social factor such as one's degree of wealth influences which beliefs a person forms and whether they turn out to be true; under the second, one evaluates how wealth shapes the beliefs that a given society collectively accepts as knowledge. In 2012, on the occasion of the 25th anniversary of "Social Epistemology", Fuller reflected upon the history and the prospects of the field, including the need for social epistemology to re-connect with the larger issues of knowledge production first identified by Charles Sanders Peirce as "cognitive economy" and nowadays often pursued by library and information science. As for the "analytic social epistemology", to which Goldman has been a significant contributor, Fuller concludes that it has "failed to make significant progress owing, in part, to a minimal understanding of actual knowledge practices, a minimised role for philosophers in ongoing inquiry, and a focus on maintaining the status quo of epistemology as a field." The basic view of knowledge that motivated the emergence of social epistemology as it is perceived today can be traced to the work of Thomas Kuhn and Michel Foucault, which gained acknowledgment at the end of the 1960s. Both brought historical concerns directly to bear on problems long associated with the philosophy of science. Perhaps the most notable issue here was the nature of truth, which both Kuhn and Foucault described as a relative and contingent notion. Against this background, ongoing work in the sociology of scientific knowledge (SSK) and the history and philosophy of science (HPS) was able to assert its epistemological consequences, leading most notably to the establishment of the strong programme at the University of Edinburgh. In terms of the two strands of social epistemology, Fuller is more sensitive and receptive to this historical trajectory (if not always in agreement) than Goldman, whose "veritistic" social epistemology can reasonably be read as a systematic rejection of the more extreme claims associated with Kuhn and Foucault. In the standard sense of the term today, social epistemology is a field within analytic philosophy that focuses on the social aspects of how knowledge is created and disseminated. What precisely these social aspects are, and whether they have beneficial or detrimental effects on the possibilities of creating, acquiring and spreading knowledge, is a subject of continuing debate. 
"When does a belief that x is true which resulted from being told 'x is true' constitute knowledge?"), peer disagreement (e.g. "When and how should I revise my beliefs in light of other people holding beliefs that contradict mine?", and group epistemology (e.g. "What does it mean to attribute knowledge to groups rather than individuals, and when are such knowledge attributions appropriate?"). Within the field, "the social" is approached in two complementary and not mutually exclusive ways: "the social" character of knowledge can either be approached through inquiries in "inter-individual" epistemic relations or through inquiries focusing on epistemic "communities". The inter-individual approach typically focuses on issues such as testimony, epistemic trust as a form of trust placed by one individual in another, epistemic dependence, epistemic authority, etc. The community approach typically focuses on issues such as community standards of justification, community procedures of critique, diversity, epistemic justice, and collective knowledge. Social epistemology as a field within analytic philosophy has close ties to, and often overlaps with feminist epistemology and philosophy of science. While parts of the field engage in abstract, normative considerations of knowledge creation and dissemination, other parts of the field are "naturalized epistemology" in the sense that they draw on empirically gained insights---which could mean natural science research from, e.g., cognitive psychology, be that qualitative or quantitative social science research. (For the notion of "naturalized epistemology" see Willard Van Orman Quine.) And while parts of the field are concerned with analytic considerations of rather general character, case-based and domain-specific inquiries in, e.g., knowledge creation in collaborative scientific practice, knowledge exchange on online platforms or knowledge gained in learning institutions play an increasing role. Important academic journals for social epistemology as a field within analytic philosophy are, e.g., "Episteme", "Hypatia", "Social Epistemology", and "Synthese". However, major works within this field are also published in journals that predominantly address philosophers of science and psychology or in interdisciplinary journals which focus on particular domains of inquiry (such as, e.g., "Ethics and Information Technology"). In both stages, both varieties of social epistemology remain largely "academic" or "theoretical" projects. Yet both emphasize the social significance of knowledge and therefore the cultural value of social epistemology itself. A range of journals publishing social epistemology welcome papers that include a policy dimension. More practical applications of social epistemology can be found in the areas of library science, academic publishing, guidelines for scientific authorship and collaboration, knowledge policy and debates over the role of the Internet in knowledge transmission and creation. Social epistemology is still considered a relatively new addition to philosophy, with its problems and theories still fresh and in rapid movement. Of increasing importance is social epistemology developments within transdisciplinarity as manifested by media ecology.
https://en.wikipedia.org/wiki?curid=28182
Sound card A sound card (also known as an audio card) is an internal expansion card that provides input and output of audio signals to and from a computer under control of computer programs. The term "sound card" is also applied to external audio interfaces used for professional audio applications. Sound functionality can also be integrated onto the motherboard, using components similar to those found on plug-in cards. The integrated sound system is often still referred to as a "sound card". Sound processing hardware is also present on modern video cards with HDMI to output sound along with the video using that connector; previously they used an S/PDIF connection to the motherboard or sound card. Typical uses of sound cards or sound card functionality include providing the audio component for multimedia applications such as music composition, editing video or audio, presentation, education and entertainment (games) and video projection. Sound cards are also used for computer-based communication such as voice over IP and teleconferencing. Sound cards use a digital-to-analog converter (DAC), which converts recorded or generated digital signal data into an analog format. The output signal is connected to an amplifier, headphones, or external device using standard interconnects, such as a TRS phone connector. A common external connector is the microphone connector. Input through a microphone connector can be used, for example, by speech recognition or voice over IP applications. Most sound cards have a line in connector for an analog input from a sound source that has higher voltage levels than a microphone. In either case, the sound card uses an analog-to-digital converter to digitize this signal. Some cards include a sound chip to support production of synthesized sounds, usually for real-time generation of music and sound effects using minimal data and CPU time. The card may use direct memory access to transfer the samples to and from main memory, from where recording and playback software may read and write them to the hard disk for storage, editing, or further processing. An important sound card characteristic is polyphony, which refers to its ability to process and output multiple independent voices or sounds simultaneously. These distinct channels are seen as the number of audio outputs, which may correspond to a speaker configuration such as 2.0 (stereo), 2.1 (stereo and subwoofer), 5.1 (surround), or another configuration. Sometimes, the terms "voice" and "channel" are used interchangeably to indicate the degree of polyphony, not the output speaker configuration. For example, many older sound chips could accommodate three voices, but only one output audio channel (i.e., a single mono output), requiring all voices to be mixed together. Later cards, such as the AdLib sound card, had 9-voice polyphony combined into one mono output channel. Early PC sound cards had multiple FM synthesis voices (typically 9 or 16) which were used for MIDI music. The full capabilities of advanced cards are often not fully used; only one (mono) or two (stereo) voices and channels are usually dedicated to playback of digital sound samples, and playing back more than one digital sound sample usually requires a software downmix at a fixed sampling rate (a minimal sketch of such a downmix follows). Modern low-cost integrated sound devices (i.e., those built into motherboards), such as audio codecs meeting the AC'97 standard, and even some lower-cost expansion sound cards, still work this way. 
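Here is a minimal sketch of that kind of software downmix (illustrative only: the function names, the equal-power panning choice, and the sine-wave voices are assumptions made for the example, not a description of any particular driver or codec). Several mono voices at one fixed sample rate are summed into a single stereo buffer, then converted to the 16-bit PCM typically handed to the DAC:

```python
import numpy as np

SAMPLE_RATE = 44100  # fixed output sample rate in Hz


def synth_voice(freq_hz: float, duration_s: float) -> np.ndarray:
    """Generate one mono voice as a float32 sine wave in [-1.0, 1.0]."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t).astype(np.float32)


def downmix_to_stereo(voices, pans):
    """Mix several equal-length mono voices into one stereo buffer.

    pans: per-voice pan positions, 0.0 = hard left, 1.0 = hard right.
    """
    n = len(voices[0])
    left = np.zeros(n, dtype=np.float32)
    right = np.zeros(n, dtype=np.float32)
    for voice, pan in zip(voices, pans):
        # Equal-power panning keeps perceived loudness roughly constant.
        left += voice * np.cos(pan * np.pi / 2)
        right += voice * np.sin(pan * np.pi / 2)
    # Scale by the voice count to avoid overflow, then clip defensively.
    k = len(voices)
    stereo = np.stack([left / k, right / k], axis=1)
    return np.clip(stereo, -1.0, 1.0)


def to_pcm16(stereo: np.ndarray) -> np.ndarray:
    """Convert float samples in [-1.0, 1.0] to interleaved 16-bit PCM."""
    return (stereo * 32767.0).astype(np.int16)


# Three "voices" mixed down to the card's two output channels.
voices = [synth_voice(f, 0.5) for f in (261.63, 329.63, 392.00)]  # C major triad
pcm = to_pcm16(downmix_to_stereo(voices, pans=[0.2, 0.5, 0.8]))
print(pcm.shape)  # (22050, 2): 0.5 s of stereo samples ready for the DAC
```

Cards with true hardware polyphony performed the equivalent of this mixing loop in dedicated circuitry; on AC'97-class codecs it runs on the host CPU, much as softmodems moved modem signal processing into software.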
These devices may provide more than two sound output channels (typically 5.1 or 7.1 surround sound), but they usually have no actual hardware polyphony for either sound effects or MIDI reproduction; these tasks are performed entirely in software. This is similar to the way inexpensive softmodems perform modem tasks in software rather than in hardware. In the early days of wavetable synthesis, some sound card manufacturers advertised polyphony based solely on MIDI capabilities. In this case, typically, the card is only capable of two channels of digital sound and the polyphony specification applies solely to the number of MIDI instruments the sound card is capable of producing at once. Modern sound cards may provide more flexible "audio accelerator" capabilities which can be used in support of higher levels of polyphony or other purposes such as hardware acceleration of 3D sound, positional audio, and real-time DSP effects. Connectors on sound cards are color-coded as per the PC System Design Guide. They will also have symbols with arrows, holes, and soundwaves that are associated with each jack position; the meaning of each is given below: Sound cards for IBM PC compatible computers were very uncommon until 1988. For the majority of IBM PC users, the internal PC speaker was the only way for early PC software to produce sound and music. The speaker hardware was typically limited to square waves. The resulting sound was generally described as "beeps and boops", which resulted in the common nickname "beeper". Several companies, most notably Access Software, developed techniques for digital sound reproduction over the PC speaker, such as RealSound. The resulting audio, while functional, suffered from heavily distorted output and low volume, and usually required all other processing to be stopped while sounds were played. Other home computers of the 1980s like the Commodore 64 included hardware support for digital sound playback and/or music synthesis, leaving the IBM PC at a disadvantage when it came to multimedia applications. Early sound cards for the IBM PC platform were not designed for gaming or multimedia applications, but rather for specific audio applications, such as music composition with the AdLib Personal Music System, IBM Music Feature Card, and Creative Music System, or for speech synthesis, like the Digispeech "DS201", Covox Speech Thing, and Street Electronics "Echo". In 1988, a panel of computer-game CEOs stated at the Consumer Electronics Show that the PC's limited sound capability prevented it from becoming the leading home computer, that it needed a $49–79 sound card with better capability than current products, and that once such hardware was widely installed their companies would support it. Sierra On-Line, which had pioneered supporting EGA and VGA video, and 3 1/2" disks, promised that year to support the AdLib, IBM Music Feature, and Roland MT-32 sound cards in its games. A 1989 "Computer Gaming World" survey found that 18 of 25 game companies planned to support AdLib, six Roland and Covox, and seven Creative Music System/Game Blaster. One of the first manufacturers of sound cards for the IBM PC was AdLib, which produced a card based on the Yamaha YM3812 sound chip, also known as the OPL2. The AdLib had two modes: a 9-voice mode where each voice could be fully programmed, and a less frequently used "percussion" mode in which 3 regular voices were repurposed to produce 5 independent percussion-only voices, for a total of 11.
(The percussion mode was considered inflexible by most developers; it was used mostly by AdLib's own composition software.) Creative Labs also marketed a sound card about the same time called the Creative Music System. Although the "C/MS" had twelve voices to AdLib's nine, and was a stereo card while the AdLib was mono, it was based on the Philips SAA1099 chip, which was essentially a square-wave generator. It sounded much like twelve simultaneous PC speakers would have except for each channel having amplitude control, and failed to sell well, even after Creative renamed it the Game Blaster a year later, and marketed it through RadioShack in the US. The Game Blaster retailed for under $100 and was compatible with many popular games, such as Silpheed. A large change in the IBM PC compatible sound card market happened when Creative Labs introduced the Sound Blaster card. Recommended by Microsoft to developers creating software based on the Multimedia PC standard, the Sound Blaster cloned the AdLib and added a sound coprocessor for recording and playback of digital audio (likely an Intel microcontroller relabeled by Creative, which was incorrectly called a "DSP" to suggest it was a digital signal processor), a game port for adding a joystick, and the capability to interface with MIDI equipment (using the game port and a special cable). With more features at nearly the same price, and compatibility as well, most buyers chose the Sound Blaster. It eventually outsold the AdLib and dominated the market. Roland also made sound cards in the late 1980s, most of them being high quality "prosumer" cards, such as the MT-32 and LAPC-I. Roland cards often sold for hundreds of dollars, and sometimes over a thousand. Many games had music written for their cards, such as Silpheed and Police Quest II. The cards were often poor at sound effects such as laughs, but for music were by far the best sound cards available until the mid-1990s. Some Roland cards, such as the SCC and later versions of the MT-32, were made to be less expensive, but their quality was usually drastically poorer than the other Roland cards. By 1992 one sound card vendor advertised that its product was "Sound Blaster, AdLib, Disney Sound Source and Covox Speech Thing Compatible!". Responding to readers complaining about an article on sound cards that unfavorably mentioned the Gravis Ultrasound, "Computer Gaming World" stated in January 1994 that "The de facto standard in the gaming world is Sound Blaster compatibility ... It would have been unfair to have recommended anything else". The magazine that year stated that "Wing Commander II" was "Probably the game responsible" for making it the standard card. The Sound Blaster line of cards, together with the first inexpensive CD-ROM drives and evolving video technology, ushered in a new era of multimedia computer applications that could play back CD audio, add recorded dialogue to video games, or even reproduce full motion video (albeit at much lower resolutions and quality in the early days). The widespread decision to support the Sound Blaster design in multimedia and entertainment titles meant that future sound cards such as Media Vision's Pro Audio Spectrum and the Gravis Ultrasound had to be Sound Blaster compatible if they were to sell well.
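The two-operator FM synthesis that the OPL2 on these cards performed in hardware can be sketched in a few lines. The fragment below is a conceptual Python/NumPy illustration of the principle – a carrier oscillator whose phase is modulated by a second oscillator – and not an emulation of the YM3812's envelopes, waveform selection, or register interface; all parameter values are arbitrary.

```python
# Conceptual two-operator FM synthesis: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)),
# where fc is the carrier frequency, fm the modulator frequency, I the index.
import numpy as np

SAMPLE_RATE = 44100

def fm_voice(carrier_hz, ratio, index, duration_s):
    """One FM voice; the modulator runs at ratio * carrier_hz."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    modulator = np.sin(2 * np.pi * (ratio * carrier_hz) * t)
    return np.sin(2 * np.pi * carrier_hz * t + index * modulator)

# A larger modulation index yields brighter, more harmonically rich timbres.
tone = fm_voice(carrier_hz=440.0, ratio=2.0, index=3.0, duration_s=0.5)
print(tone[:4])
```

Varying only the frequency ratio and the modulation index over time is what gives FM chips their characteristic range of timbres from a very small amount of data, which is why the technique suited early cards with little memory.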
Until the early 2000s (by which time the AC'97 audio standard had become more widespread and eventually usurped the Sound Blaster as a standard, due to its low cost and integration into many motherboards), Sound Blaster compatibility was a standard that many other sound cards supported to maintain compatibility with the many games and applications that had been released. When game company Sierra On-Line opted to support add-on music hardware in addition to built-in hardware such as the PC speaker and built-in sound capabilities of the IBM PCjr and Tandy 1000, what could be done with sound and music on the IBM PC changed dramatically. Two of the companies Sierra partnered with were Roland and AdLib, opting to produce in-game music for King's Quest 4 that supported the MT-32 and AdLib Music Synthesizer. The MT-32 had superior output quality, due in part to its method of sound synthesis as well as built-in reverb. Since it was the most sophisticated synthesizer they supported, Sierra chose to use most of the MT-32's custom features and unconventional instrument patches, producing background sound effects (e.g., chirping birds, clopping horse hooves, etc.) before the Sound Blaster brought playing real audio clips to the PC entertainment world. Many game companies also supported the MT-32, but supported the AdLib card as an alternative because of the latter's higher market base. The adoption of the MT-32 led the way for the creation of the MPU-401/Roland Sound Canvas and General MIDI standards as the most common means of playing in-game music until the mid-1990s. Early ISA bus sound cards were half-duplex, meaning they could not record and play digitized sound simultaneously, mostly due to inferior card hardware (e.g., DSPs). Later, ISA cards like the Sound Blaster AWE series and plug-and-play Sound Blaster clones eventually became full-duplex and supported simultaneous recording and playback, but at the expense of using up two IRQ and DMA channels instead of one, making them no different from having two half-duplex sound cards in terms of configuration. Towards the end of the ISA bus' life, ISA sound cards started taking advantage of IRQ sharing, thus reducing the IRQs needed to one, but still needed two DMA channels. Many conventional PCI bus cards do not have these limitations and are mostly full-duplex. Many modern PCI bus cards also do not require free DMA channels to operate. Over the years, sound cards have also evolved in terms of digital audio resolution and sampling rate, from the 8-bit audio of early cards to the 32-bit audio that the latest solutions support. Along the way, some cards started offering 'wavetable' sample-based synthesis, which provides superior MIDI synthesis quality relative to the earlier OPL-based solutions, which used FM synthesis. Some higher-end cards started having their own RAM and processor for user-definable sound samples and MIDI instruments as well as to offload audio processing from the CPU. For years, sound cards had only one or two channels of digital sound (most notably the Sound Blaster series and their compatibles) with the exception of the E-MU card family, the Gravis GF-1 and AMD InterWave, which had hardware support for up to 32 independent channels of digital audio. Early games and MOD-players needing more channels than a card could support had to resort to mixing multiple channels in software.
Even today, the tendency is still to mix multiple sound streams in software, except in products specifically intended for gamers or professional musicians, with an appreciable difference in price from "software-based" products. Also, in the early era of 'wavetable' sample-based synthesis, sound card companies would sometimes boast about the card's polyphony capabilities in terms of MIDI synthesis. In this case, polyphony refers solely to the number of MIDI notes the card is capable of synthesizing simultaneously, not the number of digital audio streams it can handle. With regard to physical sound output, the number of physical sound channels has also increased. The first sound card solutions were mono. Stereo sound was introduced in the early 1980s, and quadraphonic sound came in 1989. This was shortly followed by 5.1 channel audio. The latest sound cards support up to eight audio channels in the 7.1 speaker setup. Most new sound cards no longer have the audio loopback device commonly called "Stereo Mix"/"Wave out mix"/"Mono Mix"/"What U Hear" that was once very prevalent and that allows users to digitally record speaker output to the microphone input. Some manufacturers, such as Lenovo, do not implement the chipset feature in hardware, while others disable it at the driver level. In some cases loopback can be reinstated with driver updates (as in the case of some Dell computers); alternatively, software such as Total Recorder or Virtual Audio Cable can be purchased to enable the functionality. According to Microsoft, the functionality was hidden by default in Windows Vista (to reduce user confusion), but is still available, as long as the underlying sound card drivers and hardware support it. Ultimately, the user can connect the line out directly to the line in (the analog hole). In laptops, manufacturers have gradually moved from providing three separate jacks with TRS connectors – usually for line in, line out/headphone out, and microphone in – to a single combo jack with a TRRS connector that combines microphone in and line out. Professional sound cards are special sound cards optimized for low-latency multichannel sound recording and playback, including studio-grade fidelity. Their drivers usually follow the Audio Stream Input/Output (ASIO) protocol for use with professional sound engineering and music software, although ASIO drivers are also available for a range of consumer-grade sound cards. Professional sound cards are usually described as "audio interfaces", and sometimes have the form of external rack-mountable units using USB, FireWire, or an optical interface, to offer sufficient data rates. The emphasis in these products is, in general, on multiple input and output connectors, direct hardware support for multiple input and output sound channels, as well as higher sampling rates and fidelity as compared to the usual consumer sound card. In that respect, their role and intended purpose are more similar to a specialized multi-channel data recorder and real-time audio mixer and processor, roles which are possible only to a limited degree with typical consumer sound cards. On the other hand, certain features of consumer sound cards such as support for environmental audio extensions (EAX), optimization for hardware acceleration in video games, or real-time ambience effects are secondary, nonexistent or even undesirable in professional sound cards, and as such audio interfaces are not recommended for the typical home user.
The typical "consumer-grade" sound card is intended for generic home, office, and entertainment purposes with an emphasis on playback and casual use, rather than catering to the needs of audio professionals. In response to this, Steinberg (the creators of audio recording and sequencing software, Cubase and Nuendo) developed a protocol that specified the handling of multiple audio inputs and outputs. In general, consumer grade sound cards impose several restrictions and inconveniences that would be unacceptable to an audio professional. One of a modern sound card's purposes is to provide an AD/DA converter (analog to digital/digital to analog). However, in professional applications, there is usually a need for enhanced recording (analog to digital) conversion capabilities. One of the limitations of consumer sound cards is their comparatively large sampling latency; this is the time it takes for the AD Converter to complete conversion of a sound sample and transfer it to the computer's main memory. Consumer sound cards are also limited in the "effective" sampling rates and bit depths they can actually manage (compare analog versus digital sound) and have lower numbers of less flexible input channels: professional studio recording use typically requires more than the two channels that consumer sound cards provide, and more accessible connectors, unlike the variable mixture of internal—and sometimes virtual—and external connectors found in consumer-grade sound cards. In 1984, the first IBM PCjr had a rudimentary 3-voice sound synthesis chip (the SN76489) which was capable of generating three square-wave tones with variable amplitude, and a pseudo-white noise channel that could generate primitive percussion sounds. The Tandy 1000, initially a clone of the PCjr, duplicated this functionality, with the Tandy TL/SL/RL models adding digital sound recording and playback capabilities. Many games during the 1980s that supported the PCjr's video standard (described as "Tandy-compatible", "Tandy graphics", or "TGA") also supported PCjr/Tandy 1000 audio. In the late 1990s many computer manufacturers began to replace plug-in sound cards with a "codec" chip (actually a combined audio AD/DA-converter) integrated into the motherboard. Many of these used Intel's AC'97 specification. Others used inexpensive ACR slot accessory cards. From around 2001 many motherboards incorporated integrated "real" (non-codec) sound cards, usually in the form of a custom chipset providing something akin to full Sound Blaster compatibility, providing relatively high-quality sound. However, these features were dropped when AC'97 was superseded by Intel's HD Audio standard, which was released in 2004, again specified the use of a codec chip, and slowly gained acceptance. As of 2011, most motherboards have returned to using a codec chip, albeit an HD Audio compatible one, and the requirement for Sound Blaster compatibility relegated to history. Various non-IBM PC compatible computers, such as early home computers like the Commodore 64 (1982) and Amiga (1985), NEC's PC-88 and PC-98, Fujitsu's FM-7 and FM Towns, the MSX, Apple's Macintosh, and workstations from manufacturers like Sun, have had their own motherboard integrated sound devices. In some cases, most notably in those of the Macintosh, Amiga, C64, PC-98, MSX, FM-7, and FM towns, they provide very advanced capabilities (as of the time of manufacture), in others they are only minimal capabilities. 
Some of these platforms have also had sound cards designed for their bus architectures that cannot be used in a standard PC. Several Japanese computer platforms, including the PC-88, PC-98, MSX, and FM-7, featured built-in FM synthesis sound from Yamaha by the mid-1980s. By 1989, the FM Towns computer platform featured built-in PCM sample-based sound and supported the CD-ROM format. The custom sound chip on the Amiga, named Paula, had four digital sound channels (2 for the left speaker and 2 for the right) with 8-bit resolution (although with patches, 14/15-bit was achievable at the cost of high CPU usage) for each channel and a 6-bit volume control per channel. Sound playback on the Amiga was done by reading directly from the chip-RAM without using the main CPU. Most arcade games have integrated sound chips, the most popular being the Yamaha OPL chip for background music (BGM) coupled with a variety of DACs for sampled audio and sound effects. The earliest known sound card used by computers was the Gooch Synthetic Woodwind, a music device for PLATO terminals; invented in 1972, it is widely hailed as the precursor to sound cards and MIDI. Certain early arcade machines made use of sound cards to achieve playback of complex audio waveforms and digital music, despite being already equipped with onboard audio. An example of a sound card used in arcade machines is the Digital Compression System card, used in games from Midway, for example Mortal Kombat II on the Midway T-Unit hardware. The T-Unit hardware already had an onboard YM2151 OPL chip coupled with an OKI 6295 DAC, but that game used an add-on DCS card instead. The card was also used in the arcade version of Midway and Aerosmith's Revolution X for complex looping BGM and speech playback (Revolution X used fully sampled songs from the band's album that transparently looped – an impressive feature at the time of the game's release). MSX computers, while equipped with built-in sound capabilities, also relied on sound cards to produce better quality audio. The card, known as Moonsound, uses a Yamaha OPL4 sound chip. Prior to the Moonsound, there were also sound cards called "MSX Music" and "MSX Audio", which used OPL2 and OPL3 chipsets, for the system. The Apple II series of computers, which did not have sound capabilities beyond a beep until the IIGS, could use plug-in sound cards from a variety of manufacturers. The first, in 1978, was ALF's Apple Music Synthesizer, with 3 voices; two or three cards could be used to create 6 or 9 voices in stereo. Later ALF created the Apple Music II, a 9-voice model. The most widely supported card, however, was the Mockingboard. Sweet Micro Systems sold the Mockingboard in various models. Early Mockingboard models had 3 voices in mono, while some later designs had 6 voices in stereo. Some software supported use of two Mockingboard cards, which allowed 12-voice music and sound. A 12-voice, single card clone of the Mockingboard called the Phasor was made by Applied Engineering. In late 2005 a company called ReactiveMicro.com produced a 6-voice clone called the Mockingboard v1 and also had plans to clone the Phasor and produce a hybrid card user-selectable between Mockingboard and Phasor modes, and to support either the SC-01 or SC-02 speech synthesizer. The Sinclair ZX Spectrum, which initially had only a beeper, had some sound cards made for it. One example is the TurboSound. Other examples are the Fuller Box, Melodik for the Didaktik Gamma, AY-Magic, etc.
The Zon X-81 for the ZX81 could also be used on the ZX Spectrum with an adapter. Devices such as the Covox Speech Thing could be attached to the parallel port of an IBM PC and feed 6- or 8-bit PCM sample data to produce audio. Also, many types of professional sound cards (audio interfaces) have the form of an external FireWire or USB unit, usually for convenience and improved fidelity. Sound cards using the PCMCIA CardBus interface were available before laptop and notebook computers routinely had onboard sound. CardBus audio may still be used if onboard sound quality is poor. When CardBus interfaces were superseded by ExpressCard from about 2005, manufacturers followed suit. Most of these units are designed for mobile DJs, providing separate outputs to allow both playback and monitoring from one system; however, some also target mobile gamers, providing high-end sound to gaming laptops, which are usually well equipped when it comes to graphics and processing power, but tend to have audio codecs that are no better than the ones found on regular laptops. USB sound "cards" are external devices that plug into the computer via USB. They are often used in studios and on stage by electronic musicians, including live PA performers and DJs. DJs who use DJ software typically use sound cards integrated into DJ controllers or specialized DJ sound cards. DJ sound cards sometimes have inputs with phono preamplifiers to allow turntables to be connected to the computer to control the software's playback of music files with timecode vinyl. The USB specification defines a standard interface, the USB audio device class, allowing a single driver to work with the various USB sound devices and interfaces on the market. Mac OS X, Windows, and Linux support this standard. However, many USB sound cards do not conform to the standard and require proprietary drivers from the manufacturer. Even cards meeting the older, slower USB 1.1 specification are capable of high quality sound with a limited number of channels, or limited sampling frequency or bit depth, but USB 2.0 or later is more capable. "USB audio interface" may also describe a device that allows a computer which has a sound card but lacks a standard audio socket to be connected, via its USB socket, to an external device that requires such a socket. The main function of a sound card is to play audio, usually music, with varying formats (monophonic, stereophonic, various multiple speaker setups) and degrees of control. The source may be a CD or DVD, a file, streamed audio, or any external source connected to a sound card input. Audio may be recorded. Sometimes sound card hardware and drivers do not support recording a source that is being played. A card can also be used, in conjunction with software, to generate arbitrary waveforms, acting as an audio-frequency function generator. Free and commercial software is available for this purpose; there are also online services that generate audio files for any desired waveforms, playable through a sound card. A card can be used, again in conjunction with free or commercial software, to analyse input waveforms. For example, a very-low-distortion sine-wave oscillator can be used as input to equipment under test; the output is sent to a sound card's line input and run through Fourier transform software to find the amplitude of each harmonic of the added distortion.
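The harmonic analysis just described can be sketched as follows. This Python/NumPy fragment synthesizes a stand-in for the captured signal (in practice the samples would come from the card's line input via a recording API) and reports each harmonic's level relative to the fundamental; the test frequency, window choice, and helper names are illustrative assumptions, not a reference measurement procedure.

```python
# Estimate harmonic levels relative to the fundamental from captured samples.
import numpy as np

SAMPLE_RATE = 48000
FUNDAMENTAL = 1000.0  # Hz, the test oscillator frequency

# Stand-in for a captured signal: a 1 kHz tone plus a small 2nd harmonic
# (-80 dB), simulating distortion added by the equipment under test.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of samples
signal = (np.sin(2 * np.pi * FUNDAMENTAL * t)
          + 1e-4 * np.sin(2 * np.pi * 2 * FUNDAMENTAL * t))

# Window to reduce spectral leakage, then take the magnitude spectrum.
spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)

def level_at(f_hz):
    """Peak magnitude near f_hz (search a small neighborhood of bins)."""
    idx = int(np.argmin(np.abs(freqs - f_hz)))
    return spectrum[max(idx - 2, 0):idx + 3].max()

fundamental = level_at(FUNDAMENTAL)
for n in range(2, 6):  # report each harmonic in dB relative to the fundamental
    ratio = level_at(n * FUNDAMENTAL) / fundamental
    print(f"H{n}: {20 * np.log10(ratio):6.1f} dB")
```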
Alternatively, a less pure signal source may be used, with circuitry to subtract the input from the output, attenuated and phase-corrected; the result is distortion and noise only, which can be analysed. There are programs which allow a sound card to be used as an audio-frequency oscilloscope. For all measurement purposes, a sound card with good audio properties must be chosen. It must itself contribute as little distortion and noise as possible, and attention must be paid to bandwidth and sampling. A typical integrated sound card, the Realtek ALC887, according to its data sheet has distortion of about 80 dB below the fundamental; cards are available with distortion better than −100 dB. Sound cards with a sampling rate of 192 kHz can be used to synchronize the computer's clock with a time signal transmitter working on frequencies below 96 kHz, such as DCF77, using special software and a coil at the sound card's input acting as an antenna. To use a sound card, the operating system (OS) typically requires a specific device driver, a low-level program that handles the data connections between the physical hardware and the operating system. Some operating systems include the drivers for many cards; for cards not so supported, drivers are supplied with the card, or available for download.
https://en.wikipedia.org/wiki?curid=28184
Symmetry group In group theory, the symmetry group of a geometric object is the group of all transformations under which the object is invariant, endowed with the group operation of composition. Such a transformation is an invertible mapping of the ambient space which takes the object to itself, and which preserves all the relevant structure of the object. A frequent notation for the symmetry group of an object "X" is "G" = Sym("X"). For an object in a metric space, its symmetries form a subgroup of the isometry group of the ambient space. This article mainly considers symmetry groups in Euclidean geometry, but the concept may also be studied for more general types of geometric structure. We consider the "objects" possessing symmetry to be geometric figures, images, and patterns, such as a wallpaper pattern. For symmetry of physical objects, one may also take their physical composition as part of the pattern. (A pattern may be specified formally as a scalar field, a function of position with values in a set of colors or substances; as a vector field; or as a more general function on the object.) The group of isometries of space induces a group action on objects in it, and the symmetry group Sym("X") consists of those isometries which map "X" to itself (as well as mapping any further pattern to itself). We say "X" is "invariant" under such a mapping, and the mapping is a "symmetry" of "X". The above is sometimes called the full symmetry group of "X" to emphasize that it includes orientation-reversing isometries (reflections, glide reflections and improper rotations), as long as those isometries map this particular "X" to itself. The subgroup of orientation-preserving symmetries (translations, rotations, and compositions of these) is called its proper symmetry group. An object is chiral when it has no orientation-reversing symmetries, so that its proper symmetry group is equal to its full symmetry group. Any symmetry group whose elements have a common fixed point, which is true if the group is finite or the figure is bounded, can be represented as a subgroup of the orthogonal group O("n") by choosing the origin to be a fixed point. The proper symmetry group is then a subgroup of the special orthogonal group SO("n"), and is called the rotation group of the figure. In a discrete symmetry group, the points symmetric to a given point do not accumulate toward a limit point. That is, every orbit of the group (the images of a given point under all group elements) forms a discrete set. All finite symmetry groups are discrete. Discrete symmetry groups come in three types: (1) finite point groups, which include only rotations, reflections, inversions and rotoinversions – i.e., the finite subgroups of O("n"); (2) infinite lattice groups, which include only translations; and (3) infinite space groups containing elements of both previous types, and perhaps also extra transformations like screw displacements and glide reflections. There are also continuous symmetry groups (Lie groups), which contain rotations of arbitrarily small angles or translations of arbitrarily small distances. An example is O(3), the symmetry group of a sphere. Symmetry groups of Euclidean objects may be completely classified as the subgroups of the Euclidean group E("n") (the isometry group of R"n"). Two geometric figures have the same "symmetry type" when their symmetry groups are "conjugate" subgroups of the Euclidean group: that is, when the subgroups "H"1, "H"2 are related by "H"2 = "g""H"1"g"−1 for some "g" in E("n").
For example: In the following sections, we only consider isometry groups whose orbits are topologically closed, including all discrete and continuous isometry groups. However, this excludes for example the 1D group of translations by a rational number; such a non-closed figure cannot be drawn with reasonable accuracy due to its arbitrarily fine detail. The isometry groups in one dimension are: See also symmetry groups in one dimension. Up to conjugacy the discrete point groups in two-dimensional space are the following classes: C1 is the trivial group containing only the identity operation, which occurs when the figure is asymmetric, for example the letter "F". C2 is the symmetry group of the letter "Z", C3 that of a triskelion, C4 of a swastika, and C5, C6, etc. are the symmetry groups of similar swastika-like figures with five, six, etc. arms instead of four. D1 is the 2-element group containing the identity operation and a single reflection, which occurs when the figure has only a single axis of bilateral symmetry, for example the letter "A". D2, which is isomorphic to the Klein four-group, is the symmetry group of a non-equilateral rectangle. This figure has four symmetry operations: the identity operation, one twofold axis of rotation, and two nonequivalent mirror planes. D3, D4 etc. are the symmetry groups of the regular polygons. Within each of these symmetry types, there are two degrees of freedom for the center of rotation, and in the case of the dihedral groups, one more for the positions of the mirrors. The remaining isometry groups in two dimensions with a fixed point are: Non-bounded figures may have isometry groups including translations; these are: Up to conjugacy the set of three-dimensional point groups consists of 7 infinite series, and 7 other individual groups. In crystallography, only those point groups are considered which preserve some crystal lattice (so their rotations may only have order 1, 2, 3, 4, or 6). This crystallographic restriction of the infinite families of general point groups results in 32 crystallographic point groups (27 individual groups from the 7 series, and 5 of the 7 other individuals). The continuous symmetry groups with a fixed point include those of: For objects with scalar field patterns, the cylindrical symmetry implies vertical reflection symmetry as well. However, this is not true for vector field patterns: for example, in cylindrical coordinates with respect to some axis, the vector field "A"ρ ρ̂ + "A"φ φ̂ + "A"z ẑ has cylindrical symmetry with respect to the axis whenever "A"ρ, "A"φ, and "A"z have this symmetry (no dependence on the angle φ); and it has reflectional symmetry only when "A"φ = 0. For spherical symmetry, there is no such distinction: any patterned object has planes of reflection symmetry. The continuous symmetry groups without a fixed point include those with a screw axis, such as an infinite helix. See also subgroups of the Euclidean group.
(Some are depicted in drawings of Escher.) Similarly, automorphism groups of finite geometries preserve families of point-sets (discrete subspaces) rather than Euclidean subspaces, distances, or inner products. Just as for Euclidean figures, objects in any geometric space have symmetry groups which are subgroups of the symmetries of the ambient space. Another example of a symmetry group is that of a combinatorial graph: a graph symmetry is a permutation of the vertices which takes edges to edges. Any finitely presented group is the symmetry group of its Cayley graph; the free group is the symmetry group of an infinite tree graph. Cayley's theorem states that any abstract group is a subgroup of the permutations of some set "X", and so can be considered as the symmetry group of "X" with some extra structure. In addition, many abstract features of the group (defined purely in terms of the group operation) can be interpreted in terms of symmetries. For example, let "G" = Sym("X") be the finite symmetry group of a figure "X" in a Euclidean space, and let "H" ⊂ "G" be a subgroup. Then "H" can be interpreted as the symmetry group of "X"+, a "decorated" version of "X". Such a decoration may be constructed as follows. Add some patterns such as arrows or colors to "X" so as to break all symmetry, obtaining a figure "X"# with Sym("X"#) = {1}, the trivial subgroup; that is, "gX"# ≠ "X"# for all non-trivial "g" ∈ "G". Now we get "X"+ as the union of the copies "hX"# for all "h" ∈ "H", so that Sym("X"+) = "H". Normal subgroups may also be characterized in this framework. The symmetry group of the transformed copy "gX"+ is the conjugate subgroup "gHg"−1. Thus "H" is normal whenever "gHg"−1 = "H" for all "g" ∈ "G": that is, whenever the decoration of "X"+ may be drawn in any orientation, with respect to any side or feature of "X", and still yield the same symmetry group "gHg"−1 = "H". As an example, consider the dihedral group "G" = "D"3 = Sym("X"), where "X" is an equilateral triangle. We may decorate this with an arrow on one edge, obtaining an asymmetric figure "X"#. Letting τ ∈ "G" be the reflection of the arrowed edge, the composite figure "X"+ = "X"# ∪ τ"X"# has a bidirectional arrow on that edge, and its symmetry group is "H" = {1, τ}. This subgroup is not normal, since "gX"+ may have the bi-arrow on a different edge, giving a different reflection symmetry group. However, letting "H" = {1, ρ, ρ2} ⊂ "D"3 be the cyclic subgroup generated by a rotation, the decorated figure "X"+ consists of a 3-cycle of arrows with consistent orientation. Then "H" is normal, since drawing such a cycle with either orientation yields the same symmetry group "H".
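The normality argument in this example can be checked directly by representing each symmetry of the equilateral triangle as a permutation of its vertices. The following Python sketch (the vertex labels and helper names are chosen here for illustration, not standard notation) confirms that the rotation subgroup {1, ρ, ρ2} is normal in "D"3 while the single-reflection subgroup {1, τ} is not:

```python
# Each symmetry of the triangle is a permutation of the vertices {0, 1, 2},
# written as a tuple p where vertex i maps to p[i].
from itertools import permutations

def compose(p, q):  # (p o q)(i) = p[q[i]], i.e. apply q first, then p
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

D3 = list(permutations(range(3)))  # all 6 symmetries of the triangle
identity = (0, 1, 2)
rho = (1, 2, 0)   # rotation by 120 degrees
tau = (1, 0, 2)   # a reflection

def is_normal(H, G):
    """H is normal in G iff g H g^-1 = H for every g in G."""
    H_set = set(H)
    return all({compose(compose(g, h), inverse(g)) for h in H} == H_set
               for g in G)

print(is_normal([identity, rho, compose(rho, rho)], D3))  # rotations: True
print(is_normal([identity, tau], D3))                     # one reflection: False
```

Running the sketch prints True for the rotation subgroup and False for the reflection subgroup, matching the geometric argument about redrawing the decorated figure.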
https://en.wikipedia.org/wiki?curid=28186
Singular they Singular "they" is the use in English of the pronoun "they" or its inflected or derivative forms, "them", "their", "theirs", and "themselves" (or "themself"), as an epicene (gender-neutral) singular pronoun. It typically occurs with an unspecified antecedent, as in sentences such as: The singular "they" emerged by the 14th century, about a century after plural "they". It has been commonly employed in everyday English ever since then and has gained currency in official contexts. Singular "they" has been criticised since the mid-18th century by prescriptive commentators who consider it an error. Its continued use in modern standard English has become more common and formally accepted with the change toward gender-neutral language, though many style guides continue to describe it as colloquial and less appropriate in formal writing. In the early 21st century, use of singular "they" with known individuals emerged for people who do not identify as male or female, as in the following example: "They" in this context was named "Word of the Year" for 2015 by the American Dialect Society, and for 2019 by Merriam-Webster. In 2020, the American Dialect Society also selected it as "Word of the Decade" for the 2010s. The "singular "they"" permits a singular antecedent, but is used with the same (plural) verb forms as plural "they", and has the same inflected forms as plural "they" (i.e. "them", "their", and "theirs"), except that in the reflexive form, "themself" is sometimes used instead of "themselves". "Themself" is attested from the 14th to 16th centuries. Its use has been increasing since the 1970s or 1980s, though it is sometimes still classified as "a minority form". In 2002, Payne and Huddleston, in "The Cambridge Grammar of the English Language", called its use in standard dialect "rare and acceptable only to a minority of speakers" but "likely to increase with the growing acceptance of "they" as a singular pronoun". It is useful when referring to a single person of indeterminate gender, where the plural form "themselves" might seem incongruous, as in: The Canadian government recommends "themselves" as the reflexive form of singular "they" for use in Canadian federal legislative texts and advises against using "themself", but "themself" is also found: "They" with a singular antecedent goes back to the Middle English of the 14th century (slightly younger than "they" with a plural antecedent, which was borrowed from Old Norse in the 13th century), and has remained in common use for centuries in spite of its proscription by traditional grammarians beginning in the mid 18th century. Informal spoken English exhibits nearly universal use of the singular "they". An examination by Jürgen Gerner of the British National Corpus published in 1998 found that British speakers, regardless of social status, age, sex, or region, used the singular "they" overwhelmingly more often than the gender-neutral "he" or other options. Singular "they" is found in the writings of many respected authors. Here are some examples, arranged chronologically: Alongside "they", it was acceptable to use the pronoun "he" to refer to an indefinite person of any gender, as in the following: Such usage is still occasionally found but has lost acceptability in most contexts, due to not being gender-neutral. 
The earliest known explicit recommendation by a grammarian to use the generic "he" rather than "they" in formal English is Ann Fisher's mid-18th century "A New Grammar" assertion that "The "Masculine Person" answers to the "general Name", which comprehends both "Male" and "Female"; as, "any Person who knows what he says."" (Ann Fisher as quoted by Ostade) Nineteenth-century grammarians insisted on "he" as a gender-neutral pronoun on the grounds of number agreement, while rejecting "he or she" as clumsy, and this was widely adopted: e.g. in 1850, the British Parliament passed an act which provided that, when used in acts of Parliament "words importing the masculine gender shall be deemed and taken to include females". Baskervill and Sewell mention the common use of the singular "they" in their "An English Grammar for the Use of High School, Academy and College Class" of 1895, but prefer the generic "he" on the basis of number agreement: Baskervill gives a number of examples of recognized authors using the singular "they", including: It has been argued that the real motivation for promoting the "generic" "he" was an androcentric world view, with the default sex of humans being male – and the default gender therefore being masculine. There is some evidence for this: Wilson wrote in 1560: And Poole wrote in 1646: In spite of continuous attempts on the part of educationalists to proscribe singular "they" in favour of "he", this advice was largely ignored; even writers of the period continued to use "they" (though the proscription may have been observed more by American writers). Use of the purportedly gender-neutral "he" remained acceptable until at least the 1960s, though some uses of "he" were later criticized as being awkward or silly, for instance when referring to: "He" is still sometimes found in contemporary writing when referring to a generic or indeterminate antecedent. In some cases it is clear from the situation that the persons potentially referred to are likely to be male, as in: In some cases the antecedent may refer to persons who are only "probably" male or to occupations traditionally thought of as male: In other situations, the antecedent may refer to: In 2010, Choy and Clark still recommend the use of generic "he" "in formal speech or writing": In 2015, "Fowler's Dictionary of Modern English Usage" calls this "the now outmoded use of "he" to mean 'anyone'", stating: In 2016, "Garner's Modern English Usage" calls the generic use of masculine pronouns "the traditional view, now widely assailed as sexist". The earliest known attempt to create gender-neutral pronouns dates back to 1792, when Scottish economist James Anderson advocated for an indeterminate pronoun "ou". In 1808, poet Samuel Taylor Coleridge suggested "it" and "which" as neutral pronouns for the word "Person": In the second half of the 20th century, people expressed more widespread concern at the use of sexist and male-oriented language. This included criticism of the use of "man" as a generic term to include men and women and of the use of "he" to refer to any human, regardless of sex (social gender). It was argued that "he" could not sensibly be used as a generic pronoun understood to include men and women. William Safire in his "On Language" column in "The New York Times" approved of the use of generic "he", mentioning the mnemonic phrase "the male embraces the female". C.
Badendyck from Brooklyn wrote to the "New York Times" in a reply: By 1980, the movement toward gender-neutral language had gained wide support, and many organizations, including most publishers, had issued guidelines on the use of gender-neutral language, but stopped short of recommending "they" to be third-person singular with a non-indeterminate, singular antecedent. The use of masculine generic nouns and pronouns in written and spoken language has decreased since the 1970s. In a corpus of spontaneous speech collected in Australia in the 1990s, singular "they" had become the most frequently used generic pronoun (rather than generic "he" or "he or she"). Similarly, a study from 2002 looking at a corpus of American and British newspapers showed a preference for "they" to be used as a singular epicene pronoun. The increased use of singular "they" may owe in part to an increasing desire for gender-neutral language. A solution in formal writing has often been to write "he or she", or something similar, but this is often considered awkward or overly politically correct, particularly when used excessively. In 2016, the journal "American Speech" published a study by Darren K. LaScotte investigating the pronouns used by native English speakers in informal written responses to questions concerning a subject of unspecified gender, finding that 68% of study participants chose singular "they" to refer to such an antecedent. Some participants noted that they found constructions such as "he or she" inadequate as they do not include people who do not identify as either male or female. "They" in this context was named Word of the Year for 2019 by Merriam-Webster and for 2015 by the American Dialect Society. On January 4, 2020, the American Dialect Society announced they had crowned "they", again in this context, Word of the Decade for the 2010s. The singular antecedent can be a pronoun such as "someone", "anybody", or "everybody", or an interrogative pronoun such as "who": Although the pronouns "everybody", "everyone", "nobody", and "no one" are singular in form and are used with a singular verb, these pronouns have an "implied plurality" that is somewhat similar to the implied plurality of collective or group nouns such as "crowd" or "team", and in some sentences where the antecedent is one of these "implied plural" pronouns, the word "they" cannot be replaced by generic "he", suggesting a "notional plural" rather than a "bound variable" interpretation. This is in contrast to sentences that involve multiple pairwise relationships and singular "they", such as: There are examples where the antecedent pronoun (such as "everyone") may refer to a collective, with no necessary implication of pairwise relationships. These are examples of plural "they", as is apparent because they do not work with a generic "he" or "he or she". In addition, for these "notional plural" cases, it would not be appropriate to use "themself" instead of "themselves" as in: The singular antecedent can also be a noun such as "person", "patient", or "student": Known individuals may be referred to as "they" if the individual's gender is unknown to the speaker. A known individual may also be referred to as "they" if the individual is non-binary or genderqueer, regards male or female pronouns as inappropriate, and prefers "they" instead.
Several social media applications permit account holders to choose to identify their gender using one of a variety of non-binary or genderqueer options, such as "gender fluid", "agender", or "bigender", and to designate a pronoun, including "they"/"them", which they wish to be used when referring to them. Though "singular "they"" has long been used with antecedents such as "everybody" or generic persons of unknown gender, this use, which may be chosen by an individual, is recent. The singular "they" in the meaning "gender-neutral singular pronoun for a known person, as a non-binary identifier" was chosen by the American Dialect Society as their "Word of the Year" for 2015. In 2016, the American Dialect Society wrote: The vote followed the previous year's approval of this use by "The Washington Post" style guide, when Bill Walsh, the "Post"'s copy editor, said that the singular "they" is "the only sensible solution to English's lack of a gender-neutral third-person singular personal pronoun". In 2019, the non-binary "they" was added to Merriam-Webster's dictionary. The first non-binary main character on North American television appeared on the Showtime drama series "Billions" in 2017, with Asia Kate Dillon playing Taylor Mason. Both actor and character use singular "they". Though both generic "he" and generic "they" have long histories of use, and both are still used, both are also systematically avoided by particular groups. Style guides that avoid expressing a preference for either approach sometimes recommend recasting a problem sentence, for instance replacing generic expressions with plurals to avoid the criticisms of either party. The use of singular "they" may be more accepted in British English than in American English, or vice versa. "Garner's Modern American Usage" (2nd ed., 2003) recommends cautious use of singular "they", and avoidance where possible because its use is stigmatized. Garner suggests that use of singular "they" is more acceptable in British English: and apparently regrets the resistance by the American language community: He regards the trend toward using singular "they" with antecedents like "everybody", "anyone" and "somebody" as inevitable: In the 14th edition (1993) of "The Chicago Manual of Style", the University of Chicago Press explicitly recommended using singular "they" and "their", noting a "revival" of this usage and citing "its venerable use by such writers as Addison, Austen, Chesterfield, Fielding, Ruskin, Scott, and Shakespeare." From the 15th edition (2003), this was changed. In Chapter 5 of the 16th edition (2010), now written by Bryan A. Garner, the recommendations are: and: According to "The American Heritage Book of English Usage" and its usage panel of selected writers, journalism professors, linguists, and other experts, many Americans avoid use of "they" to refer to a singular antecedent out of respect for a "traditional" grammatical rule, despite use of singular "they" by modern writers of note and mainstream publications: The 6th edition of the American Psychological Association's "Publication Manual" rejects most use of singular "they" and gives the following example as "incorrect" usage: However, APA style endorses using "they" if it is someone's (for example, a non-binary person's) preferred pronoun. The upcoming 7th edition of the APA Publication Manual, scheduled to be released in October 2019, will include guidelines on using singular "they" for bias-free writing. William Strunk Jr. & E. B.
White, the original authors of "The Elements of Style", found use of "they" with a singular antecedent unacceptable and advised use of the singular pronoun ("he"). In the 3rd edition (1979), the recommendation was still: The assessment, in 1979, was: In the 4th edition (2000), use of singular "they" was still proscribed, but use of generic "he" was no longer recommended. Joseph M. Williams, who wrote a number of books on writing with "style", discusses the advantages and disadvantages of various solutions when faced with the problem of referring to an antecedent such as "someone", "everyone", "no one", or a noun that does not indicate gender, and suggests that this will continue to be a problem for some time. He "suspect[s] that eventually we will accept the plural "they" as a correct singular" but states that currently "formal usage requires a singular pronoun". According to "The Little, Brown Handbook", most experts – and some teachers and employers – find use of singular "they" unacceptable: It recommends using "he or she" or avoiding the problem by rewriting the sentence to use a plural or omit the pronoun. The "Purdue Online Writing Lab" ("OWL") states that "grammar shifts and changes over time", that the use of singular "they" is acceptable, and that singular "they" as a replacement for "he" or "she" is more inclusive: "The Washington Post"'s stylebook, as of 2015, recommends trying to "write around the problem, perhaps by changing singulars to plurals, before using the singular they as a last resort" and specifically permits use of "they" for a "gender-nonconforming person". The "Associated Press Stylebook", as of 2017, recommends: "They/them/their is acceptable in limited cases as a singular and-or gender-neutral pronoun, when alternative wording is overly awkward or clumsy. However, rewording usually is possible and always is preferable." In "The Handbook of Nonsexist Writing", Casey Miller and Kate Swift accept or recommend singular uses of "they" in cases where there is an element of semantic plurality expressed by a word such as "everyone" or where an indeterminate "person" is referred to, citing examples of such usage in formal speech. They also suggest rewriting sentences to use a plural "they", eliminating pronouns, or recasting sentences to use "one" or (for babies) "it". In the first edition of "A Dictionary of Modern English Usage" (published in 1926) use of the generic "he" is recommended. It is stated that singular "they" is disapproved of by grammarians. Numerous examples of its use by eminent writers in the past are given, but it is stated that "few good modern writers would flout [grammarians] so conspicuously as Fielding and Thackeray", whose sentences are described as having an "old-fashioned sound". The second edition, "Fowler's Modern English Usage" (edited by Sir Ernest Gowers and published in 1965) continues to recommend use of the generic "he"; use of the singular "they" is called "the popular solution", which "sets the literary man's teeth on edge". It is stated that singular "they" is disapproved of by grammarians but common in colloquial speech. Numerous examples of its use by eminent writers are given, but it is stated that "few good modern writers would flout [grammarians] so conspicuously as Fielding and Thackeray".
According to the third edition, "The New Fowler's Modern English Usage" (edited by Burchfield and published in 1996), singular "they" has not only been widely used by good writers for centuries, but is now generally accepted, except by some conservative grammarians, including the Fowler of 1926, who, it is argued, ignored the evidence. "The Complete Plain Words" was originally written in 1948 by Ernest Gowers, a civil servant, in an attempt by the British civil service to improve "official English". A second edition, edited by Sir Bruce Fraser, was published in 1973. It refers to "they" or "them" as the "equivalent of a singular pronoun of common sex" as "common in speech and not unknown in serious writing" but "stigmatized by grammarians as usage grammatically indefensible". The book's advice for "official writers" (civil servants) is to avoid its use and not to be tempted by its "greater convenience", though "necessity may eventually force it into the category of accepted idiom". A new edition of "Plain Words", revised and updated by Gowers's great-granddaughter, Rebecca Gowers, was published in 2014. It notes that singular "they" and "them" have become much more widespread since Gowers's original comments, but still finds it "safer" to treat a sentence like 'The reader may toss their book aside' as incorrect "in formal English", while rejecting other constructions even more strongly. "The Times Style and Usage Guide" (first published in 2003 by "The Times" of London) recommends avoiding such sentences by using a plural construction. "The Cambridge Guide to English Usage" (2004, Cambridge University Press) finds singular "they" "unremarkable" and expresses several preferences. "The Economist Style Guide" refers to the use of "they" in such sentences as "scrambled syntax that people adopt because they cannot bring themselves to use a singular pronoun". "New Hart's Rules" (Oxford University Press, 2012) is aimed at those engaged in copy editing, and the emphasis is on the formal elements of presentation including punctuation and typeface, rather than on linguistic style, although – like "The Chicago Manual of Style" – it makes occasional forays into matters of usage. It advises against use of the purportedly gender-neutral "he", and suggests cautious use of "they" where "he or she" presents problems. The 2011 edition of the "New International Version Bible" uses singular "they" instead of the traditional "he" when translating pronouns that apply to both genders in the original Greek or Hebrew. This decision was based on research by a commission that studied modern English usage and determined that singular "they" ("them"/"their") was by far the most common way that English-language speakers and writers today refer back to singular antecedents such as "whoever", "anyone", "somebody", "a person", "no one", and the like." The British edition of "The Handbook of Nonsexist Writing", modified in some respects from the original US edition to conform to differences in culture and vocabulary, preserved the same recommendations, allowing singular "they" with semantically plural terms like "everyone" and indeterminate ones like "person", but recommending a rewrite to avoid. The Australian "Federation Press Style Guide for Use in Preparation of Book Manuscripts" recommends "gender-neutral language should be used", stating that use of "they" and "their" as singular pronouns is acceptable.
According to "A Comprehensive Grammar of the English Language" (1985): "The Cambridge Grammar of the English Language" discusses the prescriptivist argument that "they" is a plural pronoun and that the use of "they" with a singular "antecedent" therefore violates the rule of agreement between antecedent and pronoun, but takes the view that "they", though "primarily" plural, can also be singular in a secondary "extended" sense, comparable to the purportedly extended sense of "he" to include female gender. Use of singular "they" is stated to be "particularly common", even "stylistically neutral" with antecedents such as "everyone", "someone", and "no one", but more restricted when referring to common nouns as antecedents, as in Use of the pronoun "themself" is described as being "rare" and "acceptable only to a minority of speakers", while use of the morphologically plural "themselves" is considered problematic when referring to "someone" rather than "everyone" (since only the latter implies a plural set). There are also issues of grammatical acceptability when reflexive pronouns refer to singular noun phrases joined by "or", the following all being problematic: On the motivation for using singular "they", "A Student's Introduction to English Grammar" states: The alternative "he or she" can be "far too cumbersome", as in: or even "flatly ungrammatical", as in "Among younger speakers", use of singular "they" even with definite noun-phrase antecedents finds increasing acceptance, "sidestepping any presumption about the sex of the person referred to", as in: One explanation given for some uses of "they" referring to a singular antecedent is "notional agreement", when the antecedent is seen as semantically plural: In other words, in the Shakespeare quotation "a mother" is syntactically singular, but stands for all mothers; and in the Shaw quotation, "no man" is syntactically singular (demonstrated by taking the singular form "goes"), but is semantically plural ("all" go [to kill] not to be killed), hence idiomatically requiring "they". Such use, which goes back a long way, includes examples where the sex is known, as in the above examples. Distributive constructions apply a "single" idea to "multiple" members of a group. They are typically marked in English by words like "each", "every" and "any". The simplest examples are applied to groups of two, and use words like "either" and "or" – "Would you like tea or coffee?". Since distributive constructions apply an idea relevant to each individual in the group, rather than to the group as a whole, they are most often conceived of as singular, and a singular pronoun is used: However, many languages, including English, show ambivalence in this regard. Because distribution also requires a group with more than one member, plural forms are sometimes used. The singular "they", which uses the same verb form that plurals do, is typically used to refer to an indeterminate antecedent, for example: In some sentences, typically those including words like "every" or "any", the morphologically singular antecedent does not refer to a single entity but is "anaphorically linked" to the associated pronoun to indicate a set of pairwise relationships, as in the sentence: Linguists like Steven Pinker and Rodney Huddleston explain sentences like this (and others) in terms of bound variables, a term borrowed from logic. Pinker prefers the terms "quantifier" and "bound variable" to "antecedent" and " pronoun". 
Pinker suggests that pronouns used as "variables" in this way are more appropriately regarded as homonyms of the equivalent referential pronouns. Various pronouns, including "they", can thus exhibit different types of anaphoric reference. A study of whether singular "they" is more "difficult" to understand than gendered pronouns ("In Search of Gender Neutrality: Is Singular "They" a Cognitively Efficient Substitute for Generic "He"?" by Foertsch and Gernsbacher) found that "singular "they" is a cognitively efficient substitute for generic "he" or "she", particularly when the antecedent is nonreferential" (e.g. "anybody", "a nurse", or "a truck driver") rather than referring to a specific person (e.g. "a runner I knew" or "my nurse"). Clauses with singular "they" were read "just as quickly as clauses containing a gendered pronoun that matched the stereotype of the antecedent" (e.g. "she" for a nurse and "he" for a truck driver) and "much more quickly than clauses containing a gendered pronoun that went against the gender stereotype of the antecedent". On the other hand, when the pronoun "they" was used to refer to known individuals ("referential antecedents, for which the gender was presumably known", e.g. "my nurse", "that truck driver", "a runner I knew"), reading was slowed when compared with use of a gendered pronoun consistent with the "stereotypic gender" (e.g. "he" for a specific truck driver). The study concluded that "the increased use of singular "they" is not problematic for the majority of readers".
The singular and plural use of "they" can be compared with the pronoun "you", which had been both a plural and a polite singular, but by about 1700 replaced "thou" for singular referents. For "you", the singular reflexive pronoun ("yourself") is different from its plural reflexive pronoun ("yourselves"); with "they" one can hear either "themself" or "themselves" for the singular reflexive pronoun. Singular "they" has also been compared to the "royal we" (also termed the "editorial we"), when a single person uses first-person plural in place of first-person singular pronouns. As with singular "you", its singular reflexive pronoun ("ourself") is different from the plural reflexive pronoun ("ourselves").
While the pronoun "it", which is used for inanimate objects, can be used for infants of unspecified gender, it tends to be dehumanizing, and is therefore more likely in a clinical context. In a more personal context, the use of "it" to refer to a person might indicate antipathy or other negative emotions. "It" can also be used for non-human animals of unspecified sex, though "they" is common for pets and other domesticated animals of unspecified sex, especially when referred to by a proper name (e.g. "Rags", "Snuggles"). Normally, birds and mammals with a known sex are referred to by the respective male or female pronoun ("he" and "she"; "him" and "her"). It is uncommon to use singular "they" instead of "it" for something other than a life form.
https://en.wikipedia.org/wiki?curid=28187
Space Shuttle The Space Shuttle was a partially reusable low Earth orbital spacecraft system that was operated from 1981 to 2011 by the National Aeronautics and Space Administration (NASA) as part of the Space Shuttle program. Its official program name was Space Transportation System (STS), taken from a 1969 plan for a system of reusable spacecraft of which it was the only item funded for development. The first of four orbital test flights occurred in 1981, leading to operational flights beginning in 1982. Five complete Space Shuttle orbiter vehicles were built and flown on a total of 135 missions from 1981 to 2011, launched from the Kennedy Space Center (KSC) in Florida. Operational missions launched numerous satellites, interplanetary probes, and the Hubble Space Telescope (HST); conducted science experiments in orbit; and participated in construction and servicing of the International Space Station. The Space Shuttle fleet's total mission time was 1322 days, 19 hours, 21 minutes and 23 seconds. Space Shuttle components include the Orbiter Vehicle (OV) with three clustered Rocketdyne RS-25 main engines, a pair of recoverable solid rocket boosters (SRBs), and the expendable external tank (ET) containing liquid hydrogen and liquid oxygen. The Space Shuttle was launched vertically, like a conventional rocket, with the two SRBs operating in parallel with the orbiter's three main engines, which were fueled from the ET. The SRBs were jettisoned before the vehicle reached orbit, and the ET was jettisoned just before orbit insertion, which used the orbiter's two Orbital Maneuvering System (OMS) engines. At the conclusion of the mission, the orbiter fired its OMS to deorbit and reenter the atmosphere. The orbiter was protected during reentry by its thermal protection system tiles, and it glided as a spaceplane to a runway landing, usually at the Shuttle Landing Facility at KSC in Florida or on Rogers Dry Lake at Edwards Air Force Base, California. After landing at Edwards, the orbiter was flown back to the KSC on the Shuttle Carrier Aircraft, a specially modified Boeing 747. The first orbiter, "Enterprise", was built in 1976 and used in Approach and Landing Tests; it had no orbital capability. Four fully operational orbiters were initially built: "Columbia", "Challenger", "Discovery", and "Atlantis". Of these, two were lost in mission accidents: "Challenger" in 1986 and "Columbia" in 2003, with a total of fourteen astronauts killed. A fifth operational (and sixth in total) orbiter, "Endeavour", was built in 1991 to replace "Challenger". The Space Shuttle was retired from service upon the conclusion of the final flight of "Atlantis" on July 21, 2011. The U.S. relied on the Russian Soyuz spacecraft to transport astronauts to the International Space Station from the last Shuttle flight until the first Commercial Crew Development launch on May 30, 2020. During the 1950s, the United States Air Force proposed using a reusable piloted glider to perform military operations such as reconnaissance, satellite attack, and employing air-to-ground weapons. In the late 1950s, the Air Force began developing the partially reusable X-20 Dyna-Soar. The Air Force collaborated with NASA on the Dyna-Soar, and began training six pilots in June 1961. The rising costs of development and the prioritization of Project Gemini led to the cancellation of the Dyna-Soar program in December 1963. In addition to the Dyna-Soar, the Air Force had conducted a study in 1957 to test the feasibility of reusable boosters. 
This became the basis for the aerospaceplane, a fully reusable spacecraft that was never developed beyond the initial design phase in 1962–1963. Beginning in the early 1950s, NASA and the Air Force collaborated on developing lifting bodies to test aircraft that primarily generated lift from their fuselages instead of wings, and tested the M2-F1, M2-F2, M2-F3, HL-10, X-24A, and X-24B. The program tested aerodynamic characteristics that would later be incorporated in the design of the Space Shuttle, including unpowered landing from a high altitude and speed. In September 1966, NASA and the Air Force released a joint study concluding that a new vehicle was required to satisfy their respective future demands, and that a partially reusable system would be the most cost-effective solution. The head of the NASA Office of Manned Space Flight, George Mueller, announced the plan for a reusable shuttle on August 10, 1968. NASA issued a request for proposal (RFP) for designs of the Integrated Launch and Re-entry Vehicle (ILRV), which would later become the Space Shuttle. Rather than award a contract based upon initial proposals, NASA announced a phased approach for the Space Shuttle contracting and development; Phase A was a request for studies completed by competing aerospace companies, Phase B was a competition between two contractors for a specific contract, Phase C involved designing the details of the spacecraft components, and Phase D was the production of the spacecraft. In December 1968, NASA created the Space Shuttle Task Group to determine the optimal design for a reusable spacecraft, and issued study contracts to General Dynamics, Lockheed, McDonnell Douglas, and North American Rockwell. In July 1969, the Space Shuttle Task Group issued a report that determined that the Shuttle would support a space station; launch, service, and retrieve satellites; and support short-duration crewed missions. The report also defined three classes of a future reusable shuttle: Class I would have a reusable orbiter mounted on expendable boosters, Class II would use multiple expendable rocket engines and a single propellant tank (stage-and-a-half), and Class III would have both a reusable orbiter and a reusable booster. In September 1969, the Space Task Group, under the leadership of Vice President Spiro Agnew, issued a report calling for the development of a space shuttle to bring people and cargo to low Earth orbit (LEO), as well as a space tug for transfers between orbits and the Moon, and a reusable nuclear stage for deep space travel. After the release of the Space Shuttle Task Group report, many aerospace engineers favored the Class III, fully reusable design because of perceived savings in hardware costs. Max Faget, a NASA engineer who had worked to design the Mercury capsule, patented a design for a two-stage fully recoverable system with a straight-winged orbiter mounted on a larger straight-winged booster. The Air Force Flight Dynamics Laboratory argued that a straight-wing design would not be able to withstand the high thermal and aerodynamic stresses during reentry, and would not provide the required cross-range capability. Additionally, the Air Force required a larger payload capacity than Faget's design allowed. In January 1971, NASA and Air Force leadership decided that a reusable delta-wing orbiter mounted on an expendable propellant tank would be the optimal design for the Space Shuttle. 
After establishing the need for a reusable, heavy-lift spacecraft, NASA and the Air Force began determining the design requirements of their respective services. The Air Force expected to use the Space Shuttle to launch large satellites, and required it to be capable of lifting them to an eastward LEO or into a polar orbit. The satellite designs also required that the Space Shuttle have a payload bay. NASA evaluated the F-1 and J-2 engines from the Saturn rockets, and determined that they were insufficient for the requirements of the Space Shuttle; in July 1971, it issued a contract to Rocketdyne to begin development on the RS-25 engine. NASA reviewed 29 potential designs for the Space Shuttle, and determined that a design with two side boosters should be used, and that the boosters should be reusable to reduce costs. NASA and the Air Force elected to use solid-propellant boosters because of the lower costs and the ease of refurbishing them for reuse after they landed in the ocean. In January 1972, President Richard Nixon approved the Shuttle, and NASA decided on its final design in March. That August, NASA awarded the contract to build the orbiter to North American Rockwell, the solid-rocket booster contract to Morton Thiokol, and the external tank contract to Martin Marietta. On June 4, 1974, Rockwell began construction on the first orbiter, OV-101, which would later be named "Enterprise". "Enterprise" was designed as a test vehicle, and did not include engines or heat shielding. Construction was completed on September 17, 1976, and "Enterprise" was moved to Edwards Air Force Base to begin testing. Rockwell constructed the Main Propulsion Test Article (MPTA)-098, which was a structural truss mounted to the ET with three RS-25 engines attached. It was tested at the National Space Technology Laboratory (NSTL) to ensure that the engines could safely run through the launch profile. Rockwell conducted mechanical and thermal stress tests on Structural Test Article (STA)-099 to determine the effects of aerodynamic and thermal stresses during launch and reentry. The beginning of the development of the RS-25 Space Shuttle Main Engine was delayed for nine months while Pratt & Whitney challenged the contract that had been issued to Rocketdyne. The first engine was completed in March 1975, after issues with developing the first throttlable, reusable engine. During engine testing, the RS-25 experienced multiple nozzle failures, as well as broken turbine blades. Despite the problems during testing, in May 1978 NASA ordered the nine RS-25 engines needed for its three orbiters then under construction. NASA experienced significant delays in the development of the Space Shuttle's thermal protection system. Previous NASA spacecraft had used ablative heat shields, but those could not be reused. NASA chose to use ceramic tiles for thermal protection, as the shuttle could then be constructed of lightweight aluminum, and the tiles could be individually replaced as needed. Construction began on "Columbia" on March 27, 1975, and it was delivered to the KSC on March 25, 1979. At the time of its arrival at the KSC, "Columbia" still had 6,000 of its 30,000 tiles remaining to be installed. However, many of the tiles that had been originally installed had to be replaced, requiring two years of installation before "Columbia" could fly. On January 5, 1979, NASA commissioned a second orbiter. Later that month, Rockwell began converting STA-099 to OV-099, later named "Challenger". 
On January 29, 1979, NASA ordered two additional orbiters, OV-103 and OV-104, which were named "Discovery" and "Atlantis". Construction of OV-105, later named "Endeavour", began in February 1982, but in 1983 NASA decided to limit the Space Shuttle fleet to four orbiters. After the loss of "Challenger", NASA resumed production of "Endeavour" in September 1987. After it arrived at Edwards AFB, "Enterprise" underwent flight testing with the Shuttle Carrier Aircraft, a Boeing 747 that had been modified to carry the orbiter. In February 1977, "Enterprise" began the Approach and Landing Tests and underwent captive flights, where it remained attached to the Shuttle Carrier Aircraft for the duration of the flight. On August 12, 1977, "Enterprise" conducted its first glide test, where it detached from the Shuttle Carrier Aircraft and landed at Edwards AFB. After four additional flights, "Enterprise" was moved to the Marshall Space Flight Center (MSFC) on March 13, 1978. "Enterprise" underwent shake tests in the Mated Vertical Ground Vibration Test, where it was attached to an external tank and solid rocket boosters, and underwent vibrations to simulate the stresses of launch. In April 1979, "Enterprise" was taken to the KSC, where it was attached to an external tank and solid rocket boosters, and moved to LC-39. Once installed at the launch pad, the Space Shuttle was used to verify the proper positioning of launch complex hardware. "Enterprise" was taken back to California in August 1979, and later served in the development of the SLC-6 at Vandenberg AFB in 1984. On November 24, 1980, "Columbia" was mated with its external tank and solid-rocket boosters, and was moved to LC-39 on December 29. The first Space Shuttle mission, STS-1, would be the first time NASA performed a crewed first flight of a spacecraft. On April 12, 1981, the Space Shuttle launched for the first time, crewed by John Young and Robert Crippen. During the two-day mission, Young and Crippen tested equipment on board the shuttle, and found that several of the ceramic tiles had fallen off the top side of "Columbia". NASA coordinated with the Air Force to use satellites to image the underside of "Columbia", and determined there was no damage. "Columbia" reentered the atmosphere and landed at Edwards AFB on April 14. NASA conducted three additional test flights with "Columbia" in 1981 and 1982. On July 4, 1982, STS-4, flown by Ken Mattingly and Henry Hartsfield, landed on a concrete runway at Edwards AFB. President Ronald Reagan and his wife Nancy met the crew and delivered a speech. After STS-4, NASA declared its Space Transportation System (STS) operational. The Space Shuttle was the first operational orbital spacecraft designed for reuse. Each Space Shuttle orbiter was designed for a projected lifespan of 100 launches or ten years of operational life, although this was later extended. At launch, it consisted of the orbiter, which contained the crew and payload, the external tank (ET), and the two solid rocket boosters (SRBs). Responsibility for the Shuttle components was spread among multiple NASA field centers. The KSC was responsible for launch, landing and turnaround operations for equatorial orbits (the only orbit profile actually used in the program); the U.S. 
Air Force at the Vandenberg Air Force Base was responsible for launch, landing and turnaround operations for polar orbits (though this capability was never used); the Johnson Space Center (JSC) served as the central point for all Shuttle operations; the MSFC was responsible for the main engines, external tank, and solid rocket boosters; the John C. Stennis Space Center handled main engine testing; and the Goddard Space Flight Center managed the global tracking network. The orbiter had design elements and capabilities of both a rocket and an aircraft to allow it to launch vertically and then land as a glider. Its three-part fuselage provided support for the crew compartment, cargo bay, flight surfaces, and engines. The rear of the orbiter contained the Space Shuttle Main Engines (SSME), which provided thrust during launch, as well as the Orbital Maneuvering System (OMS), which allowed the orbiter to achieve, alter, and exit its orbit once in space. Its double-delta wings were long, and were swept 81° at the inner leading edge and 45° at the outer leading edge. Each wing had an inboard and outboard elevon to provide flight control during reentry, along with a flap located between the wings, below the engines, to control pitch. The orbiter's vertical stabilizer was swept backwards at 45°, and contained a rudder that could split to act as a speed brake. The vertical stabilizer also contained a two-part drag parachute system to slow the orbiter after landing. The orbiter used retractable landing gear with a nose landing gear and two main landing gear, each containing two tires. The main landing gear contained two brake assemblies each, and the nose landing gear contained an electro-hydraulic steering mechanism. The Space Shuttle crew varied by mission. The test flights had only two members each, the commander and the pilot, both qualified pilots who could fly and land the orbiter. The on-orbit operations, such as experiments, payload deployment, and EVAs, were conducted primarily by the mission specialists who were specifically trained for their intended missions and systems. Early in the Space Shuttle program, NASA flew with payload specialists, who were typically systems specialists who worked for the company paying for the payload's deployment or operations. The final payload specialist, Gregory B. Jarvis, flew on STS-51L, and future non-pilots were designated as mission specialists. An astronaut flew as a crewed spaceflight engineer on both STS-51C and STS-51J to serve as a military representative for a National Reconnaissance Office payload. A Space Shuttle crew typically had seven astronauts, with STS-61A flying with eight. The crew compartment comprised three decks, and was the pressurized, habitable area on all Space Shuttle missions. The cockpit consisted of two seats for the commander and pilot, as well as an additional two to four seats for crew members. The mid-deck was located below the cockpit, and was where the galley and crew bunks were set up, as well as three or four crew member seats. The mid-deck contained the airlock, which could support two astronauts on an extravehicular activity (EVA), as well as access to pressurized research modules. An equipment bay was below the mid-deck, which stored environmental control and waste management systems. On the first four Shuttle missions, astronauts wore modified U.S. Air Force high-altitude full-pressure suits, which included a full-pressure helmet during ascent and descent. 
From the fifth flight, STS-5, until the loss of "Challenger", the crew wore one-piece light blue Nomex flight suits and partial-pressure helmets. After the "Challenger" disaster, the crew members wore the Launch Entry Suit (LES), a partial-pressure version of the high-altitude pressure suits with a helmet. In 1994, the LES was replaced by the full-pressure Advanced Crew Escape Suit (ACES), which improved the safety of the astronauts in an emergency situation. "Columbia" originally had modified SR-71 zero-zero ejection seats installed for the ALT and first four missions, but these were disabled after STS-4 and removed after STS-9. The flight deck was the top level of the crew compartment, and contained the flight controls for the orbiter. The commander sat in the front left seat, and the pilot sat in the front right seat, with two to four additional seats set up for additional crew members. The instrument panels contained over 2,100 displays and controls, and the commander and pilot were both equipped with a heads-up display (HUD) and a Rotational Hand Controller (RHC) to gimbal the engines during powered flight and fly the orbiter during unpowered flight. Both seats also had rudder controls, to allow rudder movement in flight and nose-wheel steering on the ground. The orbiter vehicles were originally installed with the Multifunction CRT Display System (MCDS) to display and control flight information. The MCDS displayed the flight information at the commander and pilot seats, as well as at the aft seating location, and also controlled the data on the HUD. In 1998, "Atlantis" was upgraded with the Multifunction Electronic Display System (MEDS), which was a glass cockpit upgrade to the flight instruments that replaced the eight MCDS display units with 11 multifunction colored digital screens. MEDS was flown for the first time in May 2000 on STS-101, and the other orbiter vehicles were later upgraded to it. The aft section of the flight deck contained windows looking into the payload bay, as well as an RHC to control the Remote Manipulator System during cargo operations. Additionally, the aft flight deck had monitors for a closed-circuit television to view the cargo bay. The mid-deck contained the crew equipment storage, sleeping area, galley, medical equipment, and hygiene stations for the crew. The crew used modular lockers to store equipment that could be scaled depending on their needs, as well as permanently installed floor compartments. The mid-deck contained a port-side hatch that crew used for entry and exit while on Earth. Additionally, each orbiter was originally installed with an internal airlock in the mid-deck. The internal airlock was replaced with an external airlock in the payload bay on "Discovery", "Atlantis", and "Endeavour" to improve docking with Mir and the ISS, along with the Orbiter Docking System. The orbiter was equipped with an avionics system to provide information and control during atmospheric flight. Its avionics suite contained three microwave scanning beam landing systems, three gyroscopes, three TACANs, three accelerometers, two radar altimeters, two barometric altimeters, three attitude indicators, two Mach indicators, and two Mode C transponders. During reentry, the crew deployed two air data probes once they were travelling slower than Mach 5. The orbiter had three inertial measurement units (IMUs) that it used for guidance and navigation during all phases of flight. 
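With three redundant units, a standard way to tolerate one faulty sensor is mid-value selection: the middle of the three readings is always produced by a healthy unit. Whether the Shuttle's redundancy management used exactly this scheme is not stated above, so the following sketch is purely illustrative:
    def mid_value_select(a, b, c):
        # Return the middle of three redundant sensor readings.
        # A single faulty unit can never supply the middle value,
        # so one failure cannot corrupt the selected output.
        return sorted((a, b, c))[1]

    # Illustrative readings: the third IMU has drifted badly.
    print(mid_value_select(9.98, 10.01, 42.0))  # 10.01 -- the outlier is ignored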
The orbiter also carried two star trackers to align the IMUs while in orbit. The star trackers were deployed while in orbit, and could automatically or manually align on a star. In 1991, NASA began upgrading the inertial measurement units with an inertial navigation system (INS), which provided more accurate location information. In 1993, NASA flew a GPS receiver for the first time aboard STS-51. In 1997, Honeywell began developing an integrated GPS/INS to replace the IMU, INS, and TACAN systems, which first flew on STS-118 in August 2007. While in orbit, the crew primarily communicated using one of four S band radios, which provided both voice and data communications. Two of the S band radios were phase modulation transceivers, and could transmit and receive information. The other two S band radios were frequency modulation transmitters, and were used to transmit data to NASA. As S band radios can operate only within their line of sight, NASA used the Tracking and Data Relay Satellite System and the Spacecraft Tracking and Data Acquisition Network ground stations to communicate with the orbiter throughout its orbit. Additionally, the orbiter deployed a high-bandwidth Ku-band radio out of the cargo bay, which could also be utilized as a rendezvous radar. The orbiter was also equipped with two UHF radios for communications with air traffic control and astronauts conducting EVA. The Space Shuttle's fly-by-wire control system was entirely reliant on its main computer, the Data Processing System (DPS). The DPS controlled the flight controls and thrusters on the orbiter, as well as the ET and SRBs during launch. The DPS consisted of five general purpose computers (GPC), two magnetic tape mass memory units (MMUs), and the associated sensors to monitor the Space Shuttle components. The original GPC used was the IBM AP-101B, which used a separate central processing unit (CPU) and input/output processor (IOP), and non-volatile solid-state memory. From 1991 to 1993, the orbiter vehicles were upgraded to the AP-101S, which improved the memory and processing capabilities, and reduced the volume and weight of the computers by combining the CPU and IOP into a single unit. Four of the GPCs were loaded with the Primary Avionics Software System (PASS), which was Space Shuttle-specific software that provided control through all phases of flight. During ascent, maneuvering, reentry, and landing, the four PASS GPCs functioned identically to produce quadruple redundancy, and would error check their results. In case of a software error that would cause erroneous reports from the four PASS GPCs, a fifth GPC ran the Backup Flight System, which used a different program and could control the Space Shuttle through ascent, orbit, and reentry, but could not support an entire mission. The five GPCs were housed in three separate bays within the mid-deck to provide redundancy in the event of a cooling fan failure. After achieving orbit, the crew would switch some of the GPCs' functions from guidance, navigation, and control (GNC) to systems management (SM) and payload (PL) to support the operational mission. The Space Shuttle was not launched if its flight would run from December to January, as its flight software would have required the orbiter vehicle's computers to be reset at the year change. In 2007, NASA engineers devised a solution so Space Shuttle flights could cross the year-end boundary. 
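The four-computer redundant set described above can be illustrated with a deliberately simplified majority vote. The function and values below are illustrative assumptions, not flight software; the real PASS voted on many parameters per cycle and handled ties and failures with far more machinery:
    from collections import Counter

    def vote(outputs):
        # Majority-vote the outputs of the four PASS computers.
        # Returns the agreed command plus the indices of any
        # computers that disagreed with the majority.
        tally = Counter(outputs)
        command, _ = tally.most_common(1)[0]
        dissenters = [i for i, out in enumerate(outputs) if out != command]
        return command, dissenters

    # Illustrative cycle: GPC 3 produces an erroneous value and is outvoted.
    command, dissenters = vote([12.5, 12.5, 12.5, 99.0])
    print(command)     # 12.5
    print(dissenters)  # [3] -- flagged; the fifth (BFS) computer remains available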
Space Shuttle missions typically brought a portable general support computer (PGSC) that could integrate with the orbiter vehicle's computers and communication suite, as well as monitor scientific and payload data. Early missions brought the Grid Compass, one of the first laptop computers, as the PGSC, but later missions brought Apple and Intel laptops. The payload bay comprised most of the orbiter vehicle's fuselage, and provided the cargo-carrying space for the Space Shuttle's payloads. It was long and wide, and could accommodate cylindrical payloads up to in diameter. Two payload bay doors hinged on either side of the bay, and provided a relatively airtight seal to protect payloads from heating during launch and reentry. Payloads were secured in the payload bay to the attachment points on the longerons. The payload bay doors served an additional function as radiators for the orbiter vehicle's heat, and were opened upon reaching orbit for heat rejection. The orbiter could be used in conjunction with a variety of add-on components depending on the mission. This included orbital laboratories, boosters for launching payloads farther into space, the Remote Manipulator System (RMS), and the Extended Duration Orbiter pallet to extend the mission duration. To limit the fuel consumption while the orbiter was docked at the ISS, the Station-to-Shuttle Power Transfer System (SSPTS) was developed to convert and transfer station power to the orbiter. The SSPTS was first used on STS-118, and was installed on "Discovery" and "Endeavour". The RMS, also known as Canadarm, was a mechanical arm attached to the cargo bay. It could be used to grasp and manipulate payloads, as well as serve as a mobile platform for astronauts conducting an EVA. The RMS was built by the Canadian company Spar Aerospace, and was controlled by an astronaut inside the orbiter's flight deck using their windows and closed-circuit television. The RMS allowed for six degrees of freedom, and had six joints located at three points along the arm. The original RMS could deploy or retrieve payloads up to , which was later improved to . The Spacelab module was a European-funded pressurized laboratory that was carried within the payload bay and allowed for scientific research while in orbit. The Spacelab module contained two segments that were mounted in the aft end of the payload bay to maintain the center of gravity during flight. Astronauts entered the Spacelab module through a tunnel that connected to the airlock. The Spacelab equipment was primarily stored in pallets, which provided storage for both experiments as well as computer and power equipment. Spacelab hardware was flown on 28 missions through 1999, and studied subjects including astronomy, microgravity, radar, and life sciences. Spacelab hardware also supported missions such as Hubble Space Telescope (HST) servicing and space station resupply. The Spacelab module was tested on STS-2 and STS-3, and the first full mission was on STS-9. Three RS-25 engines, also known as the Space Shuttle Main Engines (SSME), were mounted on the orbiter's aft fuselage in a triangular pattern. The engine nozzles could gimbal ±10.5° in pitch, and ±8.5° in yaw during ascent to change the direction of their thrust to steer the Shuttle. The titanium alloy reusable engines were independent from the orbiter vehicle, and would be removed and replaced in between flights. The RS-25 is a staged-combustion cycle cryogenic engine that used liquid oxygen and hydrogen, and had a higher chamber pressure than any previous liquid rocket. 
The original main combustion chamber operated at a maximum pressure of . The engine nozzle is tall and has an interior diameter of . The nozzle is cooled by 1,080 interior lines carrying liquid hydrogen, and is thermally protected by insulative and ablative material. The RS-25 engines received several improvements to enhance reliability and power. During the development program, Rocketdyne determined that the engine was capable of safe, reliable operation at 104% of the originally specified thrust. To keep the engine thrust values consistent with previous documentation and software, NASA kept the original specified thrust as 100%, but had the RS-25 operate at higher thrust. RS-25 upgrade versions were denoted as Block I and Block II. The 109% thrust level was achieved with the Block II engines in 2001, which reduced the chamber pressure to , as it had a larger throat area. The normal maximum throttle was 104 percent, with 106% or 109% used for mission aborts (a short numerical sketch of this percentage convention is given below). The Orbital Maneuvering System (OMS) consisted of two aft-mounted AJ10-190 engines and the associated propellant tanks. The AJ10 engines used monomethylhydrazine (MMH) oxidized by dinitrogen tetroxide (N2O4). The pods carried maximum loads of MMH and N2O4. The OMS engines were used after main engine cut-off (MECO) for orbital insertion. Throughout the flight, they were used for orbit changes, as well as the deorbit burn prior to reentry. Each OMS engine produced of thrust, and the entire system could provide of velocity change. The orbiter was protected from heat during reentry by the thermal protection system (TPS), a thermal soaking protective layer around the orbiter. In contrast with previous US spacecraft, which had used ablative heat shields, the reusability of the orbiter required a multi-use heat shield. During reentry, the TPS experienced temperatures up to , but had to keep the orbiter vehicle's aluminum skin temperature below . The TPS primarily consisted of four types of tiles. The nose cone and leading edges of the wings experienced temperatures above , and were protected by reinforced carbon-carbon tiles (RCC). Thicker RCC tiles were developed and installed in 1998 to prevent damage from micrometeoroid and orbital debris, and were further improved after RCC damage caused the "Columbia" disaster. Beginning with STS-114, the orbiter vehicles were equipped with the wing leading edge impact detection system to alert the crew to any potential damage. The entire underside of the orbiter vehicle, as well as the other hottest surfaces, were protected with high-temperature reusable surface insulation. Areas on the upper parts of the orbiter vehicle were coated in a white low-temperature reusable surface insulation, which provided protection for temperatures below . The payload bay doors and parts of the upper wing surfaces were coated in reusable felt surface insulation, as the temperature there remained below . 
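The 100%/104%/109% throttle convention is simple arithmetic once a rated thrust value is fixed. The 2,100 kN figure below is an assumed round number chosen only for illustration, not an official specification:
    ILLUSTRATIVE_RATED_KN = 2100  # assumed "100%" vacuum thrust, for illustration only

    def thrust_kn(percent):
        # Convert a commanded power level (percent of the original
        # rated thrust) into thrust in kilonewtons.
        return ILLUSTRATIVE_RATED_KN * percent / 100.0

    for level in (100, 104, 109):
        print(f"{level}% -> {thrust_kn(level):.0f} kN")
The point of the convention is that "104%" never meant overdriving the engine beyond its qualified limit; it meant the qualified limit was higher than the figure frozen as "100%" in documentation and software.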
The exterior of the ET was covered in orange spray-on foam to allow it to survive the heat of ascent. The ET provided propellant to the Space Shuttle Main Engines from liftoff until main engine cutoff. The ET separated from the orbiter vehicle 18 seconds after engine cutoff; separation could be triggered automatically or manually. At the time of separation, the orbiter vehicle retracted its umbilical plates, and the umbilical cords were sealed to prevent excess propellant from venting into the orbiter vehicle. After the bolts attached at the structural attachments were sheared, the ET separated from the orbiter vehicle. At the time of separation, gaseous oxygen was vented from the nose to cause the ET to tumble, ensuring that it would break up upon reentry. The ET was the only major component of the Space Shuttle system that was not reused, and it would travel along a ballistic trajectory into the Indian or Pacific Ocean. For the first two missions, STS-1 and STS-2, the ET was covered in white fire-retardant latex paint to provide protection against damage from ultraviolet radiation. Further research determined that the foam itself was sufficiently protected, and the ET was no longer covered in latex paint beginning on STS-3. A light-weight tank (LWT) was first flown on STS-6, which reduced tank weight by . The LWT's weight was reduced by removing components from the LH2 tank and reducing the thickness of some skin panels. In 1998, a super light-weight ET (SLWT) first flew on STS-91. The SLWT used the 2195 aluminum-lithium alloy, which was 40% stronger and 10% less dense than its predecessor, 2219 aluminum-lithium alloy. The SLWT weighed less than the LWT, which allowed the Space Shuttle to deliver heavy elements to the ISS's high-inclination orbit. The Solid Rocket Boosters (SRB) provided 71.4% of the Space Shuttle's thrust during liftoff and ascent, and were the largest solid-propellant motors ever flown. Each SRB was tall and wide, weighed , and had a steel exterior approximately thick. The SRB's subcomponents were the solid-propellant motor, nose cone, and rocket nozzle. The solid-propellant motor comprised the majority of the SRB's structure. Its casing consisted of 11 steel sections which made up its four main segments. The nose cone housed the forward separation motors and the parachute systems that were used during recovery. The rocket nozzles could gimbal up to 8° to allow for in-flight adjustments. The rocket motors were each filled with a total of solid rocket propellant, and joined together in the Vehicle Assembly Building (VAB) at KSC. In addition to providing thrust during the first stage of launch, the SRBs provided structural support for the orbiter vehicle and ET, as they were the only system that was connected to the mobile launcher platform (MLP). At the time of launch, the SRBs were armed at T-5 minutes, and could only be electrically ignited once the RS-25 engines had ignited and were without issue. They each provided of thrust, which was later improved to beginning on STS-8. After expending their fuel, the SRBs were jettisoned approximately two minutes after launch at an altitude of approximately . Following separation, they deployed drogue and main parachutes, landed in the ocean, and were recovered by the crews aboard the ships MV Freedom Star and MV Liberty Star. Once they were returned to Cape Canaveral, they were cleaned and disassembled. The rocket motor, igniter, and nozzle were then shipped to Thiokol to be refurbished and reused on subsequent flights. 
The SRBs underwent several redesigns throughout the program's lifetime. STS-6 and STS-7 used SRBs that were lighter than the standard-weight cases due to walls that were thinner, but were determined to be too thin. Subsequent flights until STS-26 used cases that were thinner than the standard-weight cases, which saved . After the "Challenger" disaster, caused by an O-ring that failed in cold temperatures, the SRBs were redesigned to provide a constant seal regardless of the ambient temperature. The Space Shuttle's operations were supported by vehicles and infrastructure that facilitated its transportation, construction, and crew access. The crawler-transporters carried the MLP and the Space Shuttle from the VAB to the launch site. The Shuttle Carrier Aircraft (SCA) were two modified Boeing 747s that could carry an orbiter on their backs. The original SCA (N905NA) was first flown in 1975, and was used for the ALT and ferrying the orbiter from Edwards AFB to the KSC on all missions prior to 1991. A second SCA (N911NA) was acquired in 1988, and was first used to transport "Endeavour" from the factory to the KSC. Following the retirement of the Space Shuttle, N905NA was put on display at the JSC, and N911NA was put on display at the Joe Davis Heritage Airpark in Palmdale, California. The Crew Transport Vehicle (CTV) was a modified airport jet bridge that was used to assist astronauts to egress from the orbiter after landing, where they would undergo their post-mission medical checkups. The Astrovan transported astronauts from the crew quarters in the Operations and Checkout Building to the launch pad on launch day. The NASA Railroad comprised three locomotives that transported SRB segments from the Florida East Coast Railway in Titusville to the KSC. The Space Shuttle was prepared for launch primarily in the VAB at the KSC. The SRBs were assembled and attached to the external tank on the MLP. The orbiter vehicle was prepared at the Orbiter Processing Facility (OPF) and transferred to the VAB, where a crane was used to rotate it to the vertical orientation and mate it to the external tank. Once the entire stack was assembled, the MLP was carried to Launch Complex 39 by one of the crawler-transporters. After the Space Shuttle arrived at one of the two launchpads, it would connect to the Fixed and Rotating Service Structures, which provided servicing capabilities, payload insertion, and crew transportation. The crew was transported to the launch pad at T−3 hours and entered the orbiter vehicle, which was closed at T−2 hours. Loading of LOX and LH2 into the external tank, via umbilicals that attached to the orbiter vehicle, began at T−5 hours 35 minutes. At T−3 hours 45 minutes, the LH2 fast-fill was complete, followed 15 minutes later by the LOX. Both tanks were slowly topped off until launch as the oxygen and hydrogen boiled off. The launch commit criteria considered precipitation, temperatures, cloud cover, lightning forecast, wind, and humidity. The Space Shuttle was not launched under conditions where it could have been struck by lightning, as its exhaust plume could have triggered lightning by providing a current path to ground after launch, as occurred on Apollo 12. The NASA Anvil Rule for a Shuttle launch stated that an anvil cloud could not appear within a distance of . The Shuttle Launch Weather Officer monitored conditions until the final decision to scrub a launch was announced. 
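Launch commit criteria of this kind amount to a checklist evaluated against observations. The following is a schematic rule-checker; every field name and threshold is an invented placeholder (the real criteria were far more detailed, and the anvil-cloud distance is left as a parameter because the figure is not given above):
    def weather_is_go(obs, anvil_limit):
        # Evaluate a simplified set of launch commit criteria.
        # `obs` is a dict of observations; all keys are placeholders,
        # not real Launch Control Center telemetry names.
        violations = []
        if obs["precipitation"]:
            violations.append("precipitation at the pad")
        if obs["lightning_forecast"]:
            violations.append("lightning in the forecast")
        if obs["anvil_cloud_distance"] < anvil_limit:
            violations.append("anvil cloud inside the Anvil Rule limit")
        return (not violations, violations)

    go, why = weather_is_go(
        {"precipitation": False, "lightning_forecast": True, "anvil_cloud_distance": 25.0},
        anvil_limit=10.0,  # placeholder value; the actual limit is not stated above
    )
    print(go, why)  # False ['lightning in the forecast']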
In addition to the weather at the launch site, conditions had to be acceptable at one of the Transatlantic Abort Landing sites and the SRB recovery area. The mission crew and the Launch Control Center (LCC) personnel completed systems checks throughout the countdown. Two built-in holds at T−20 minutes and T−9 minutes provided scheduled breaks to address any issues and additional preparation. After the built-in hold at T−9 minutes, the countdown was automatically controlled by the Ground Launch Sequencer (GLS) at the LCC, which stopped the countdown if it sensed a critical problem with any of the Space Shuttle's onboard systems. At T−3 minutes 45 seconds, the engines began conducting gimbal tests, which were concluded at T−2 minutes 15 seconds. The ground launch processing system handed off control to the orbiter vehicle's GPCs at T−31 seconds. At T−16 seconds, the GPCs armed the SRBs, and the sound suppression system (SPS) began to drench the MLP and SRB trenches with water to protect the orbiter vehicle from damage by acoustical energy and rocket exhaust reflected from the flame trench and MLP during lift off. At T−10 seconds, hydrogen igniters were activated under each engine bell to quell the stagnant gas inside the cones before ignition. Failure to burn these gases could trip the onboard sensors and create the possibility of an overpressure and explosion of the vehicle during the firing phase. The LH2 prevalves were opened at T−9.5 seconds in preparation for engine start. Beginning at T−6.6 seconds, the main engines were ignited sequentially at 120-millisecond intervals. All three RS-25 engines were required to reach 90% rated thrust by T−3 seconds, otherwise the GPCs would initiate an RSLS abort (a toy version of this check is sketched below). If all three engines indicated nominal performance by T−3 seconds, they were commanded to gimbal to liftoff configuration and the command would be issued to arm the SRBs for ignition at T−0. Between T−6.6 seconds and T−3 seconds, while the RS-25 engines were firing but the SRBs were still bolted to the pad, the offset thrust caused the Space Shuttle to pitch down, as measured at the tip of the external tank; the three-second delay allowed the stack to return to nearly vertical before SRB ignition. At T−0 seconds, the eight frangible nuts holding the SRBs to the pad were detonated, the final umbilicals were disconnected, the SSMEs were commanded to 100% throttle, and the SRBs were ignited. By T+0.23 seconds, the SRBs built up enough thrust for liftoff to commence, and reached maximum chamber pressure by T+0.6 seconds. At T−0, the JSC Mission Control Center assumed control of the flight from the LCC. At T+4 seconds, when the Space Shuttle reached an altitude of , the RS-25 engines were throttled up to 104.5%. At approximately T+7 seconds, the Space Shuttle rolled to a heads-down orientation at an altitude of , which reduced aerodynamic stress and provided an improved communication and navigation orientation. Approximately 20–30 seconds into ascent and at an altitude of , the RS-25 engines were throttled down to 65–72% to reduce the maximum aerodynamic forces at Max Q. Additionally, the shape of the SRB propellant was designed to cause thrust to decrease at the time of Max Q. The GPCs could dynamically control the throttle of the RS-25 engines based upon the performance of the SRBs. At approximately T+123 seconds and at an altitude of , pyrotechnic fasteners released the SRBs, which reached an apogee of before parachuting into the Atlantic Ocean. 
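The T−3 second engine check noted above amounts to a single go/no-go rule: all three main engines at or above 90% rated thrust, or shut them down while the vehicle is still bolted to the pad. The following is a toy reconstruction of that rule, not actual Ground Launch Sequencer code:
    def rsls_decision(engine_thrust_percent):
        # `engine_thrust_percent`: readings from the three RS-25s at
        # T-3 s, as a percentage of rated power. SRB ignition is armed
        # only if all three have reached 90%; otherwise an RSLS abort
        # shuts the engines down before the SRBs (which cannot be shut
        # off once lit) are ignited.
        if all(p >= 90.0 for p in engine_thrust_percent):
            return "arm SRB ignition at T-0"
        return "RSLS abort: main engine shutdown"

    print(rsls_decision([100.2, 99.8, 100.1]))  # arm SRB ignition at T-0
    print(rsls_decision([100.2, 62.0, 100.1]))  # RSLS abort: main engine shutdown
The ordering matters because the solid boosters could not be throttled or extinguished; every engine-health decision had to be made before they were lit.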
The Space Shuttle continued its ascent using only the RS-25 engines. On earlier missions the Space Shuttle remained in the heads-down orientation to maintain communications with the tracking station in Bermuda, but later missions, beginning with STS-87, rolled to a heads-up orientation at T+6 minutes for communication with the tracking and data relay satellite constellation. The RS-25 engines were throttled down at T+7 minutes 30 seconds to limit vehicle acceleration to 3 "g". At 6 seconds prior to main engine cutoff (MECO), which occurred at T+8 minutes 30 seconds, the RS-25 engines were throttled down to 67%. The GPCs controlled ET separation, and dumped the remaining LOX and LH2 to prevent outgassing while in orbit. The ET continued on a ballistic trajectory and broke up during reentry, with some small pieces landing in the Indian or Pacific Ocean. Early missions used two firings of the OMS to achieve orbit; the first firing raised the apogee while the second circularized the orbit (the arithmetic of a circularization burn is sketched below). Missions after STS-38 used the RS-25 engines to achieve the optimal apogee, and used the OMS engines to circularize the orbit. The orbital altitude and inclination were mission dependent, and the Space Shuttle's orbits varied from to . The type of mission that the Space Shuttle was assigned to dictated the type of orbit that it entered. The initial design of the reusable Space Shuttle envisioned an increasingly cheap launch platform to deploy commercial and government satellites. Early missions routinely ferried satellites, which determined the type of orbit that the orbiter vehicle would enter. Following the "Challenger" disaster, many commercial payloads were moved to expendable commercial rockets, such as the Delta II. While later missions still launched commercial payloads, Space Shuttle assignments were routinely directed towards scientific payloads, such as the Hubble Space Telescope, Spacelab, and the Galileo spacecraft. Beginning with STS-74, the orbiter vehicle conducted dockings with the Mir space station. In its final decade of operation, the Space Shuttle was used for the construction of the International Space Station. Most missions involved staying in orbit from several days to two weeks, although longer missions were possible with the Extended Duration Orbiter pallet. The 17-day 15-hour STS-80 mission was the longest Space Shuttle mission. Approximately four hours prior to deorbit, the crew began preparing the orbiter vehicle for reentry by closing the payload bay doors, radiating excess heat, and retracting the Ku-band antenna. The orbiter vehicle maneuvered to an upside-down, tail-first orientation and began a 2–4 minute OMS burn approximately 20 minutes before it reentered the atmosphere. The orbiter vehicle reoriented itself to a nose-forward position with a 40° angle-of-attack, and the forward RCS jets were emptied of fuel and disabled prior to reentry. The orbiter vehicle's reentry was defined as starting at an altitude of , when it was traveling approximately Mach 25. The orbiter vehicle's reentry was controlled by the GPCs, which followed a preset angle-of-attack plan to prevent unsafe heating of the TPS. The GPCs also controlled the multiple aerobraking S-turns, using only the roll axis, to dissipate excess speed without changing the angle-of-attack. The orbiter vehicle's aft RCS jets were disabled as it descended and its ailerons, elevators, and rudder became effective in the lower atmosphere. At an altitude of , the orbiter vehicle opened its speed brake on the vertical stabilizer. 
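The two-burn insertion profile mentioned above (raise the apogee, then circularize at apogee) follows from the vis-viva equation, v^2 = mu * (2/r - 1/a). The altitudes in this sketch are illustrative round numbers, not figures from any particular mission:
    import math

    MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6_371_000.0  # mean Earth radius, m

    def circularization_dv(perigee_alt_m, apogee_alt_m):
        # Delta-v of the second burn: the difference between circular
        # speed at apogee radius and the transfer ellipse's apogee speed.
        r_p = R_EARTH + perigee_alt_m
        r_a = R_EARTH + apogee_alt_m
        a = (r_p + r_a) / 2.0                              # semi-major axis of the ellipse
        v_apogee = math.sqrt(MU * (2.0 / r_a - 1.0 / a))   # vis-viva at apogee
        v_circ = math.sqrt(MU / r_a)                       # circular speed at that radius
        return v_circ - v_apogee

    # Illustrative: circularize a 60 km x 300 km insertion ellipse at 300 km.
    print(round(circularization_dv(60_000, 300_000), 1), "m/s")  # roughly 120 m/s
This also shows why the first burn (or, on later missions, the main engines) did most of the work: once the apogee is at the target altitude, only a modest OMS burn is needed to finish the job.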
At 8 minutes 44 seconds prior to landing, the crew deployed the air data probes, and began lowering the angle-of-attack to 36°. The orbiter's maximum glide ratio/lift-to-drag ratio varied considerably with speed, ranging from 1.3 at hypersonic speeds to 4.9 at subsonic speeds (the relation between glideslope and descent rate is sketched below). The orbiter vehicle flew to one of the two Heading Alignment Cones, located away from each end of the runway's centerline, where it made its final turns to dissipate excess energy prior to its approach and landing. Once the orbiter vehicle was traveling subsonically, the crew took over manual control of the flight. The approach and landing phase began when the orbiter vehicle was at an altitude of and traveling at . The orbiter vehicle followed either a -20° or -18° glideslope and descended at approximately . The speed brake was used to hold a constant speed, and the crew initiated a pre-flare maneuver to a -1.5° glideslope at an altitude of . The landing gear was deployed 10 seconds prior to touchdown, when the orbiter was at an altitude of and traveling . A final flare maneuver reduced the orbiter vehicle's descent rate to , with touchdown occurring at , depending on the weight of the orbiter vehicle. After the landing gear touched down, the crew deployed a drag chute out of the vertical stabilizer, and began wheel braking when the orbiter vehicle was traveling slower than . After the wheels stopped, the crew deactivated the flight components and prepared to exit. The primary Space Shuttle landing site was the Shuttle Landing Facility at KSC. In the event of unfavorable landing conditions, the Shuttle could delay its landing or land at an alternate location. The primary alternate was Edwards AFB, which was used for 54 landings. STS-3 landed at the White Sands Space Harbor in New Mexico and required extensive post-processing after exposure to the gypsum-rich sand, some of which was found in "Columbia" debris after STS-107. Landings at alternate airfields required the Shuttle Carrier Aircraft to transport the orbiter back to Cape Canaveral. In addition to the pre-planned landing airfields, there were 85 agreed-upon emergency landing sites to be used in different abort scenarios, with 58 located in other countries. The landing locations were chosen based upon political relationships, favorable weather, a runway at least long, and TACAN or DME equipment. Additionally, as the orbiter vehicle only had UHF radios, international sites with only VHF radios would have been unable to communicate directly with the crew. Facilities on the east coast of the US were planned for East Coast Abort Landings, while several sites in Europe and Africa were planned in the event of a Transoceanic Abort Landing. The facilities were prepared with equipment and personnel in the event of an emergency shuttle landing, but were never used. After the landing, ground crews approached the orbiter to conduct safety checks. Teams wearing self-contained breathing gear tested for the presence of hydrogen, hydrazine, monomethylhydrazine, nitrogen tetroxide, and ammonia to ensure the landing area was safe. Air conditioning and Freon lines were connected to cool the crew and equipment and dissipate excess heat from reentry. A flight surgeon boarded the orbiter and performed medical checks of the crew before they disembarked. Once the orbiter was secured, it was towed to the OPF to be inspected, repaired, and prepared for the next mission. 
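The steepness of the approach described above can be put in rough numbers: on a straight glideslope, descent rate is airspeed times the sine of the glideslope angle. The 150 m/s airspeed below is an assumed illustrative value, since the exact approach speeds are not given above:
    import math

    def descent_rate(airspeed_ms, glideslope_deg):
        # Vertical speed (m/s, positive down) on a straight glideslope.
        return airspeed_ms * math.sin(math.radians(abs(glideslope_deg)))

    v = 150.0  # assumed approach airspeed in m/s, for illustration only
    for angle in (-20.0, -1.5):  # outer glideslope vs. post-flare glideslope
        print(f"{angle}°: {descent_rate(v, angle):.1f} m/s down")
At the assumed airspeed, the -20° outer glideslope gives a descent rate on the order of 50 m/s, while the -1.5° post-flare slope cuts that to a few meters per second, which is why the pre-flare and flare maneuvers were essential to a survivable touchdown.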
The Space Shuttle flew from April 12, 1981, until July 21, 2011. Throughout the program, it flew 135 missions, of which 133 returned safely. Throughout its lifetime, the Space Shuttle was used to conduct scientific research and deploy commercial, military, and scientific payloads, and was involved in the construction and operation of Mir and the ISS. During its tenure, the Space Shuttle served as the only U.S. vehicle to launch astronauts, and had no replacement until the launch of Crew Dragon Demo-2 on May 30, 2020. The overall NASA budget of the Space Shuttle program has been estimated to be $221 billion (in 2012 dollars). The developers of the Space Shuttle advocated for reusability as a cost-saving measure, which resulted in higher development costs for presumed lower costs-per-launch. During the design of the Space Shuttle, the Phase B proposals were not as cheap as the initial Phase A estimates indicated; Space Shuttle program manager Robert Thompson acknowledged that reducing cost-per-pound was not the primary objective of the further design phases, as other technical requirements could not be met with the reduced costs. Development estimates made in 1972 projected a payload cost as low as $1,109 per pound (in 2012 dollars), but the actual payload costs, not including the costs for the research and development of the Space Shuttle, were $37,207 per pound (in 2012 dollars). Per-launch costs varied throughout the program, and were dependent on the rate of flights as well as research, development, and investigation proceedings throughout the Space Shuttle program. In 1982, NASA published an estimate of $260 million (in 2012 dollars) per flight, which was based on the prediction of 24 flights per year for a decade. The per-launch cost from 1995–2002, when the orbiters and ISS were not being constructed and there was no recovery work following a loss of crew, was $806 million. NASA published a study in 1999 that concluded that costs were $576 million (in 2012 dollars) if there were seven launches per year. In 2009, NASA determined that the cost of adding a single launch per year was $252 million (in 2012 dollars), which indicated that much of the Space Shuttle program's costs were for year-round personnel and operations that continued regardless of the launch rate. Accounting for the entire Space Shuttle program budget, the per-launch cost was $1.642 billion (in 2012 dollars). On January 28, 1986, STS-51-L disintegrated 73 seconds after launch due to the failure of the right SRB, killing all seven astronauts on board "Challenger". The disaster was caused by low-temperature impairment of an O-ring, a mission-critical seal used between segments of the SRB casing. Failure of the O-ring allowed hot combustion gases to escape from between the booster sections and burn through the adjacent ET, leading to a sequence of events which caused the orbiter to disintegrate. Repeated warnings from design engineers voicing concerns about the lack of evidence of the O-rings' safety when the temperature was below 53 °F (12 °C) had been ignored by NASA managers. On February 1, 2003, "Columbia" disintegrated during re-entry, killing all seven of the STS-107 crew, because of damage to the carbon-carbon leading edge of the wing caused during launch. Ground control engineers had made three separate requests for high-resolution images taken by the Department of Defense that would have provided an understanding of the extent of the damage, while NASA's chief TPS engineer requested that astronauts on board "Columbia" be allowed to leave the vehicle to inspect the damage. 
NASA managers intervened to stop the Department of Defense's imaging of the orbiter and refused the request for the spacewalk, and thus the feasibility of scenarios for astronaut repair or rescue by "Atlantis" was not considered by NASA management at the time. The partial reusability of the Space Shuttle was one of the primary design requirements during its initial development. The technical decisions that dictated the orbiter's return and reuse reduced the per-launch payload capabilities with the intention of lowering the per-launch costs and resulting in a high launch rate. The actual costs of a Space Shuttle launch were higher than initial predictions, and the Space Shuttle did not fly the intended 24 missions per year as initially predicted by NASA. The Space Shuttle was originally intended as a launch vehicle to deploy satellites, which it was primarily used for on the missions prior to the "Challenger" disaster. NASA's pricing, which was below cost, was lower than that of expendable launch vehicles; the intention was that the high volume of Space Shuttle missions would compensate for early financial losses. The improvement of expendable launch vehicles and the transition away from commercial payloads on the Space Shuttle resulted in expendable launch vehicles becoming the primary deployment option for satellites. The fatal "Challenger" and "Columbia" disasters demonstrated the safety risks of the Space Shuttle that could result in the loss of the crew. The spaceplane design of the orbiter limited the abort options, as the abort scenarios required either controlled flight of the orbiter to a runway or individual crew egress, unlike the launch escape options on the Apollo and Soyuz space capsules. Early safety analyses advertised by NASA engineers and management predicted the chance of a catastrophic failure resulting in the death of the crew as ranging from 1 in 100 launches to as rare as 1 in 100,000. Following the loss of two Space Shuttle missions, the risks for the initial missions were reevaluated, and the chance for a catastrophic loss of the vehicle and crew was found to be as high as 1 in 9. NASA management was criticized afterward for accepting increased risk to the crew in exchange for higher mission rates. Both the "Challenger" and "Columbia" reports explained that NASA culture had failed to keep the crew safe by not objectively evaluating the potential risks of the missions. The Space Shuttle retirement was announced in January 2004. To ensure the ISS was properly assembled, the contributing partners determined the need for 16 remaining assembly missions in March 2006. One additional Hubble Space Telescope servicing mission was approved in October 2006. Originally, STS-134 was to be the final Space Shuttle mission. However, the "Columbia" disaster resulted in additional orbiters being prepared for launch on need in the event of a rescue mission. As "Atlantis" was prepared for the final launch-on-need mission, the decision was made in September 2010 that it would fly as STS-135 with a four-person crew that could remain at the ISS in the event of an emergency. STS-135 launched on July 8, 2011, and landed at the KSC on July 21, 2011, at 5:57 a.m. EDT (09:57 UTC). Following each orbiter's final flight, it was safed in preparation for display. The OMS and RCS systems presented the primary dangers due to their toxic hypergolic propellants, and most of their components were permanently removed to prevent any dangerous outgassing. 
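The spread between the per-flight risk estimates quoted above is easier to grasp when compounded over the 135 flights actually flown. The sketch below assumes a constant per-flight loss probability, which is a simplification (the 1-in-9 figure applied only to a reassessment of the earliest missions):
    def prob_at_least_one_loss(p_per_flight, n_flights):
        # Chance of at least one vehicle loss in n flights,
        # assuming (unrealistically) a constant per-flight risk.
        return 1.0 - (1.0 - p_per_flight) ** n_flights

    for label, p in [("1 in 100,000", 1 / 100_000), ("1 in 100", 1 / 100), ("1 in 9", 1 / 9)]:
        print(label, round(prob_at_least_one_loss(p, 135), 3))
    # 1 in 100,000 -> 0.001; 1 in 100 -> 0.742; 1 in 9 -> ~1.0
Even the comparatively pessimistic 1-in-100 estimate implies better-than-even odds of losing an orbiter over the program's lifetime, which is consistent with the two losses actually experienced.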
Components from the orbiters were transferred to the US Air Force, the ISS program, and the Russian and Canadian governments. The engines were removed to be used on the Space Launch System, and spare RS-25 nozzles were attached for display purposes. The Space Shuttle, and fictional variants of it, have been featured in numerous movies. The 1979 James Bond film "Moonraker" featured a Space Shuttle that was stolen while on loan to the United Kingdom. The 1986 film "SpaceCamp" portrayed "Atlantis" accidentally launching into space with a group of U.S. Space Camp participants as its crew. The 2013 film "Gravity" features the fictional Space Shuttle "Explorer" during STS-157, whose crew are killed or left stranded after it is destroyed by a shower of high-speed orbital debris. The Space Shuttle has been featured as a Lego model, and it also appears in flight simulator and space flight simulator games such as "Microsoft Space Simulator", "Orbiter", and "Space Shuttle Mission 2007". The U.S. Postal Service has released several postage issues that depict the Space Shuttle. The first such stamps were issued in 1981, and are on display at the National Postal Museum.
https://en.wikipedia.org/wiki?curid=28189
Snow Snow comprises individual ice crystals that grow while suspended in the atmosphere—usually within clouds—and then fall, accumulating on the ground where they undergo further changes. It consists of frozen crystalline water throughout its life cycle, starting when, under suitable conditions, the ice crystals form in the atmosphere, increase to millimeter size, precipitate and accumulate on surfaces, then metamorphose in place, and ultimately melt, slide or sublimate away. Snowstorms organize and develop by feeding on sources of atmospheric moisture and cold air. Snowflakes nucleate around particles in the atmosphere by attracting supercooled water droplets, which freeze in hexagonal-shaped crystals. Snowflakes take on a variety of shapes; basic among these are platelets, needles, columns and rime. As snow accumulates into a snowpack, it may blow into drifts. Over time, accumulated snow metamorphoses by sintering, sublimation and freeze-thaw. Where the climate is cold enough for year-to-year accumulation, a glacier may form. Otherwise, snow typically melts seasonally, causing runoff into streams and rivers and recharging groundwater. Major snow-prone areas include the polar regions, the northernmost half of the Northern Hemisphere and mountainous regions worldwide with sufficient moisture and cold temperatures. In the Southern Hemisphere, snow is confined primarily to mountainous areas, apart from Antarctica. Snow affects such human activities as transportation (creating the need for keeping roadways, wings, and windows clear), agriculture (providing water to crops and safeguarding livestock), sports such as skiing, snowboarding, and snowmobiling, and warfare. Snow affects ecosystems as well, by providing an insulating layer during winter under which plants and animals are able to survive the cold. Snow develops in clouds that are themselves part of a larger weather system. The physics of snow crystal development in clouds results from a complex set of variables that include moisture content and temperatures. The resulting shapes of the falling and fallen crystals can be classified into a number of basic shapes and combinations thereof. Occasionally, some plate-like, dendritic and stellar-shaped snowflakes can form under a clear sky with a very cold temperature inversion present. Snow clouds usually occur in the context of larger weather systems, the most important of which is the low-pressure area, which typically incorporates warm and cold fronts as part of its circulation. Two additional and locally productive sources of snow are lake-effect (also sea-effect) storms and elevation effects, especially in mountains. Mid-latitude cyclones are low-pressure areas which are capable of producing anything from cloudiness and mild snowstorms to heavy blizzards. During a hemisphere's fall, winter, and spring, the atmosphere over continents can be cold enough through the depth of the troposphere to cause snowfall. In the Northern Hemisphere, the northern side of the low-pressure area produces the most snow. For the southern mid-latitudes, the side of a cyclone that produces the most snow is the southern side. A cold front, the leading edge of a cooler mass of air, can produce frontal snowsqualls—an intense frontal convective line (similar to a rainband)—when the temperature is near freezing at the surface. The strong convection that develops has enough moisture to produce whiteout conditions at places the line passes over, as the wind causes intense blowing snow.
This type of snowsquall generally lasts less than 30 minutes at any point along its path, but the motion of the line can cover large distances. Frontal squalls may form a short distance ahead of the surface cold front or behind the cold front, where there may be a deepening low-pressure system or a series of trough lines which act similarly to a traditional cold-frontal passage. In situations where squalls develop post-frontally, it is not unusual for two or three linear squall bands to pass in rapid succession, separated by only 25 miles (40 kilometers), with each passing the same point roughly 30 minutes apart. In cases where there is a large amount of vertical growth and mixing, the squall may develop embedded cumulonimbus clouds, resulting in lightning and thunder, a combination dubbed thundersnow. A warm front can produce snow for a period, as warm, moist air overrides below-freezing air and creates precipitation at the boundary. Often, snow transitions to rain in the warm sector behind the front. Lake-effect snow is produced during cooler atmospheric conditions when a cold air mass moves across long expanses of warmer lake water, warming the lower layer of air, which picks up water vapor from the lake, rises up through the colder air above, freezes, and is deposited on the leeward (downwind) shores. The same effect also occurs over bodies of salt water, when it is termed "ocean-effect" or "bay-effect" snow. The effect is enhanced when the moving air mass is uplifted by the orographic influence of higher elevations on the downwind shores. This uplifting can produce narrow but very intense bands of precipitation, which deposit at a rate of many inches of snow each hour, often resulting in a large amount of total snowfall. The areas affected by lake-effect snow are called snowbelts. These include areas east of the Great Lakes, the west coasts of northern Japan, the Kamchatka Peninsula in Russia, and areas near the Great Salt Lake, Black Sea, Caspian Sea, Baltic Sea, and parts of the northern Atlantic Ocean. Orographic or relief snowfall is created when moist air is forced up the windward side of mountain ranges by the large-scale wind flow. The lifting of moist air up the side of a mountain range results in adiabatic cooling, and ultimately condensation and precipitation. Moisture is gradually removed from the air by this process, leaving drier and warmer air on the descending, or leeward, side. The resulting enhanced snowfall, along with the decrease in temperature with elevation, combine to increase snow depth and the seasonal persistence of snowpack in snow-prone areas. Mountain waves have also been found to help enhance precipitation amounts downwind of mountain ranges by enhancing the lift needed for condensation and precipitation. A snowflake consists of roughly 10^19 water molecules, which are added to its core at different rates and in different patterns, depending on the changing temperature and humidity within the atmosphere that the snowflake falls through on its way to the ground. As a result, snowflakes vary among themselves, while following similar patterns. Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. These droplets are able to remain liquid at temperatures well below the nominal freezing point, because to freeze, a few molecules in the droplet need to get together by chance to form an arrangement similar to that in an ice lattice. Then the droplet freezes around this "nucleus".
In warmer clouds, an aerosol particle or "ice nucleus" must be present in (or in contact with) the droplet to act as a nucleus. Ice nuclei are very rare compared to the cloud condensation nuclei on which liquid droplets form. Clays, desert dust and biological particles can be nuclei. Artificial nuclei include particles of silver iodide and dry ice, and these are used to stimulate precipitation in cloud seeding. Once a droplet has frozen, it grows in the supersaturated environment—one where air is saturated with respect to ice when the temperature is below the freezing point. The droplet then grows by diffusion of water molecules in the air (vapor) onto the ice crystal surface, where they are collected. Because water droplets are far more numerous than the ice crystals, the crystals are able to grow to hundreds of micrometers or millimeters in size at the expense of the water droplets by the Wegener–Bergeron–Findeisen process: the growing crystals deplete the water vapor, so the droplets evaporate while the ice crystals continue to grow at the droplets' expense. These large crystals are an efficient source of precipitation, since they fall through the atmosphere due to their mass, and may collide and stick together in clusters, or aggregates. These aggregates are snowflakes, and are usually the type of ice particle that falls to the ground. Although the ice is clear, scattering of light by the crystal facets and hollows/imperfections means that the crystals often appear white, due to diffuse reflection of the whole spectrum of light by the small ice particles. Micrography of thousands of snowflakes from 1885 onward, starting with Wilson Alwyn Bentley, revealed the wide diversity of snowflakes within a classifiable set of patterns; closely matching snow crystals have nevertheless been observed. Ukichiro Nakaya developed a crystal morphology diagram relating crystal shapes to the temperature and moisture conditions under which they formed. Nakaya discovered that shape is also a function of whether the prevalent moisture is above or below saturation. Forms below the saturation line trend more towards the solid and compact, while crystals formed in supersaturated air trend more towards the lacy, delicate and ornate. Many more complex growth patterns also form, such as side-planes, bullet rosettes and planar types, depending on the conditions and the ice nuclei. If a crystal has started forming in a column growth regime and then falls into the warmer plate-like regime, plate or dendritic crystals sprout at the ends of the column, producing so-called "capped columns". Magono and Lee devised a classification of freshly formed snow crystals that includes 80 distinct shapes, documenting each with micrographs.
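As a rough illustration of the kind of temperature-to-habit mapping Nakaya's diagram encodes, the sketch below hard-codes a few commonly cited bands. The band boundaries and labels are approximate assumptions for illustration only, and the real diagram also depends on supersaturation, which this toy lookup ignores.

```python
# Toy lookup in the spirit of Nakaya's crystal morphology diagram.
# Band boundaries are approximate illustrative assumptions; real habit
# also depends strongly on supersaturation, which is ignored here.

HABIT_BANDS = [
    # (warm limit degC, cold limit degC, typical habit)
    (0.0,   -3.0,  "thin plates"),
    (-3.0,  -8.0,  "needles and hollow columns"),
    (-8.0,  -22.0, "plates, sector plates and dendrites"),
    (-22.0, -40.0, "columns and plates"),
]

def crystal_habit(temp_c: float) -> str:
    """Return the assumed growth habit for a given cloud temperature."""
    for warm, cold, habit in HABIT_BANDS:
        if cold < temp_c <= warm:
            return habit
    return "outside the tabulated growth range"

# A "capped column" story: column growth aloft, then plate growth after
# the crystal falls into a warmer layer.
print(crystal_habit(-6))   # needles and hollow columns
print(crystal_habit(-15))  # plates, sector plates and dendrites
```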
Snow accumulates from a series of snow events, punctuated by freezing and thawing, over areas that are cold enough to retain snow seasonally or perennially. Major snow-prone areas include the Arctic and Antarctic, the Northern Hemisphere, and alpine regions. The liquid equivalent of snowfall may be evaluated using a snow gauge or a standard rain gauge adjusted for winter by removing the funnel and inner cylinder. Both types of gauges melt the accumulated snow and report the amount of water collected. At some automatic weather stations, an ultrasonic snow depth sensor may be used to augment the precipitation gauge. Snow flurry, snow shower, snowstorm and blizzard describe snow events of progressively greater duration and intensity. A blizzard is a weather condition involving snow, and it has varying definitions in different parts of the world. In the United States, a blizzard occurs when two conditions are met for a period of three hours or more: sustained winds or frequent gusts of 35 mph (56 km/h), and sufficient snow in the air to reduce visibility to less than 0.25 mi (400 m). In Canada and the United Kingdom, the criteria are similar. While heavy snowfall often occurs during blizzard conditions, falling snow is not a requirement, as blowing snow can create a ground blizzard. Snowstorm intensity may be categorized by visibility and depth of accumulation; the poorer the visibility, the more intense the snowfall is classified as being. The "International Classification for Seasonal Snow on the Ground" defines "height of new snow" as the depth of freshly fallen snow, in centimeters as measured with a ruler, that accumulated on a snowboard during an observation period of 24 hours, or other observation interval. After the measurement, the snow is cleared from the board and the board is placed flush with the snow surface to provide an accurate measurement at the end of the next interval. Melting, compacting, blowing and drifting contribute to the difficulty of measuring snowfall. Glaciers with their permanent snowpacks cover about 10% of the Earth's surface, while seasonal snow covers about 9%, mostly in the Northern Hemisphere, according to a 1987 estimate. A 2007 estimate of snow cover over the Northern Hemisphere suggested that, on average, snow cover there ranges from a minimum extent each August to a maximum extent each January, when it covers nearly half of the land surface in that hemisphere. A study of Northern Hemisphere snow cover extent for the period 1972–2006 suggests a reduction in snow cover over the 35-year period. After deposition, snow progresses on one of two paths that determine its fate: either "ablation" (mostly by melting) or transitioning from firn (multi-year snow) into "glacier ice". During this transition, snow "is a highly porous, sintered material made up of a continuous ice structure and a continuously connected pore space, forming together the snow microstructure". Almost always near its melting temperature, a snowpack is continually transforming these properties in a process known as "metamorphism", wherein all three phases of water may coexist, including liquid water partially filling the pore space. Starting as a powdery deposition, snow becomes more granular as it begins to compact under its own weight, is blown by the wind, sinters particles together and commences the cycle of melting and refreezing. Water vapor plays a role as it deposits ice crystals, known as hoar frost, during cold, still conditions. Over the course of time, a snowpack may settle under its own weight until its density is approximately 30% of that of water. Increases in density above this initial compression occur primarily by melting and refreezing, caused by temperatures above freezing or by direct solar radiation. In colder climates, snow lies on the ground all winter. By late spring, snow densities typically reach a maximum of 50% of that of water. Snow that persists into summer evolves into névé, granular snow which has been partially melted, refrozen and compacted. Névé has a minimum density of about 500 kg/m³, roughly half the density of liquid water.
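The density figures above translate directly into liquid equivalent: snow water equivalent is just snow depth times the ratio of snow density to water density. A minimal sketch (the 30% and 50% benchmarks are the article's; the function itself is generic):

```python
# Snow water equivalent (SWE) of a snow layer from its depth and bulk density.
WATER_DENSITY = 1000.0  # kg/m^3

def swe_mm(depth_cm: float, snow_density_kg_m3: float) -> float:
    """Liquid-water depth (mm) equivalent to the given snow layer."""
    depth_mm = depth_cm * 10.0
    return depth_mm * snow_density_kg_m3 / WATER_DENSITY

print(swe_mm(100, 300))  # settled pack at ~30% of water: 1 m of snow ~ 300 mm of water
print(swe_mm(100, 500))  # late-spring pack at ~50%: ~500 mm of water
```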
Firn is snow that has persisted for multiple years and has been recrystallized into a substance denser than névé, yet less dense and hard than glacial ice. Firn resembles caked sugar and is very resistant to shovelling. Its density generally lies between that of névé and that of glacial ice, and it can often be found underneath the snow that accumulates at the head of a glacier. The minimum altitude at which firn accumulates on a glacier is called the "firn limit", "firn line" or "snowline". There are four main mechanisms for movement of deposited snow: "drifting" of unsintered snow, "avalanches" of accumulated snow on steep slopes, "snowmelt" during thaw conditions, and the "movement of glaciers" after snow has persisted for multiple years and metamorphosed into glacier ice. When powdery, snow drifts with the wind from the location where it originally fell, forming deposits with a depth of several meters in isolated locations. After attaching to hillsides, blown snow can evolve into a snow slab, which is an avalanche hazard on steep slopes. An avalanche (also called a snowslide or snowslip) is a rapid flow of snow down a sloping surface. Avalanches are typically triggered in a starting zone by a mechanical failure in the snowpack (a slab avalanche) when the forces on the snow exceed its strength, but sometimes only by gradual widening (a loose snow avalanche). After initiation, avalanches usually accelerate rapidly and grow in mass and volume as they entrain more snow. If the avalanche moves fast enough, some of the snow may mix with the air, forming a powder snow avalanche, which is a type of gravity current. Many rivers originating in mountainous or high-latitude regions receive a significant portion of their flow from snowmelt. This often makes the river's flow highly seasonal, resulting in periodic flooding during the spring months and, at least in dry mountainous regions like the mountain West of the US or most of Iran and Afghanistan, very low flow for the rest of the year. In contrast, if much of the melt is from glaciated or nearly glaciated areas, the melt continues through the warm season, with peak flows occurring in mid to late summer. Glaciers form where the accumulation of snow and ice exceeds ablation. The area in which an alpine glacier forms is called a cirque (corrie or cwm), a typically armchair-shaped geological feature which collects snow and where the snowpack compacts under the weight of successive layers of accumulating snow, forming névé. Further crushing of the individual snow crystals and reduction of entrapped air in the snow turns it into glacial ice. This glacial ice will fill the cirque until it overflows through a geological weakness or an escape route, such as the gap between two mountains. When the mass of snow and ice is sufficiently thick, it begins to move due to a combination of surface slope, gravity and pressure. On steeper slopes, this can occur with as little as 15 m (50 ft) of snow-ice. Scientists study snow at a wide variety of scales that include the physics of chemical bonds and clouds; the distribution, accumulation, metamorphosis, and ablation of snowpacks; and the contribution of snowmelt to river hydraulics and ground hydrology. In doing so, they employ a variety of instruments to observe and measure the phenomena studied.
Their findings contribute to knowledge applied by engineers, who adapt vehicles and structures to snow; by agronomists, who address the availability of snowmelt to agriculture; and by those who design equipment for sporting activities on snow. Scientists develop, and others employ, snow classification systems that describe its physical properties at scales ranging from the individual crystal to the aggregated snowpack. A sub-specialty is avalanches, which are of concern to engineers and outdoor sportspeople alike. Snow science addresses how snow forms, its distribution, and the processes affecting how snowpacks change over time. Snow scientists work to improve storm forecasting and study global snow cover and its effect on climate, glaciers, and water supplies around the world. The study includes the physical properties of the material as it changes, the bulk properties of in-place snowpacks, and the aggregate properties of regions with snow cover. In doing so, scientists employ on-the-ground physical measurement techniques to establish ground truth, and remote sensing techniques to develop an understanding of snow-related processes over large areas. In the field, snow scientists often excavate a snow pit within which to make basic measurements and observations. Observations can describe features caused by wind, water percolation, or snow unloading from trees. Water percolation into a snowpack can create flow fingers and ponding, or flow along capillary barriers, which can refreeze into horizontal and vertical solid ice formations within the snowpack. Among the measurements of snowpack properties that the "International Classification for Seasonal Snow on the Ground" includes are snow height, snow water equivalent, snow strength, and extent of snow cover. Each has a designation with a code and a detailed description. The classification extends the prior classifications of Nakaya and his successors to related types of precipitation, noting: "All are formed in cloud, except for rime, which forms on objects exposed to supercooled moisture." It also has a more extensive classification of deposited snow than those that pertain to airborne snow. The categories include both natural and man-made snow types, descriptions of snow crystals as they metamorphose and melt, the development of hoar frost in the snowpack, and the formation of ice therein. Each such layer of a snowpack differs from the adjacent layers by one or more characteristics that describe its microstructure or density, which together define the snow type and other physical properties. Thus, at any one time, the type and state of the snow forming a layer have to be defined, because its physical and mechanical properties depend on them. Physical properties include microstructure, grain size and shape, snow density, liquid water content, and temperature. Remote sensing of snowpacks with satellites and other platforms typically includes multi-spectral collection of imagery. Multi-faceted interpretation of the data obtained allows inferences about what is observed. The science behind these remote observations has been verified with ground-truth studies of the actual conditions. Satellite observations record a decrease in snow-covered areas since the 1960s, when satellite observations began. In some regions, such as China, a trend of increasing snow cover was observed from 1978 to 2006. These changes are attributed to global climate change, which may lead to earlier melting and less snow coverage.
However, in some areas there may be an increase in snow depth because of higher temperatures for latitudes north of 40°. For the Northern Hemisphere as a whole, the mean monthly snow-cover extent has been decreasing by 1.3% per decade. The most frequently used methods to map and measure snow extent, snow depth and snow water equivalent employ multiple inputs on the visible–infrared spectrum to deduce the presence and properties of snow. The National Snow and Ice Data Center (NSIDC) uses the reflectance of visible and infrared radiation to calculate a normalized difference snow index, a ratio of radiation parameters that can distinguish between clouds and snow. Other researchers have developed decision trees, employing the available data to make more accurate assessments. One challenge to this assessment is where snow cover is patchy, for example during periods of accumulation or ablation and also in forested areas. Cloud cover inhibits optical sensing of surface reflectance, which has led to other methods for estimating ground conditions underneath clouds. For hydrological models, it is important to have continuous information about the snow cover. Passive microwave sensors are especially valuable for temporal and spatial continuity because they can map the surface beneath clouds and in darkness. When combined with reflective measurements, passive microwave sensing greatly extends the inferences possible about the snowpack. Snow science often leads to predictive models that include snow deposition, snowmelt, and snow hydrology—elements of the Earth's water cycle—which help describe global climate change. Global climate models (GCMs) incorporate snow as a factor in their calculations. Some important aspects of snow cover include its albedo (reflectivity of incident radiation, including light) and its insulating qualities, which slow the rate of seasonal melting of sea ice. As of 2011, the melt phase of GCM snow models was thought to perform poorly in regions with complex factors that regulate snowmelt, such as vegetation cover and terrain. These models typically derive snow water equivalent (SWE) in some manner from satellite observations of snow cover. The "International Classification for Seasonal Snow on the Ground" defines SWE as "the depth of water that would result if the mass of snow melted completely". Given the importance of snowmelt to agriculture, hydrological runoff models that include snow in their predictions address the phases of accumulating snowpack, melting processes, and the distribution of the meltwater through stream networks and into the groundwater. Key to describing the melting processes are solar heat flux, ambient temperature, wind, and precipitation. Initial snowmelt models used a degree-day approach that emphasized the temperature difference between the air and the snowpack to compute snow water equivalent. More recent models use an energy balance approach that takes such factors into account to compute "Qm", the energy available for melt. This requires measurement of an array of snowpack and environmental factors to compute the six heat flow mechanisms that contribute to "Qm".
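To make the two approaches above concrete, here is a minimal sketch of a normalized-difference snow index and a one-day degree-day melt step. The band reflectances and the melt factor are illustrative assumptions, not values from the text or from NSIDC.

```python
# Illustrative sketches: a normalized difference snow index (NDSI) and a
# degree-day melt update. Inputs and coefficients are assumed examples.

def ndsi(green_reflectance: float, swir_reflectance: float) -> float:
    """Snow is bright in visible green but dark in shortwave infrared,
    while clouds stay bright in both; a high index therefore flags snow."""
    return ((green_reflectance - swir_reflectance)
            / (green_reflectance + swir_reflectance))

def degree_day_step(swe_mm: float, t_air_c: float,
                    melt_factor: float = 3.0, t_base_c: float = 0.0) -> float:
    """Advance snow water equivalent (mm) by one day of degree-day melt:
    melt = melt_factor * max(T_air - T_base, 0). The 3.0 mm/degC/day
    melt factor is an assumed, typical-order value."""
    melt = melt_factor * max(t_air_c - t_base_c, 0.0)
    return max(swe_mm - melt, 0.0)

print(round(ndsi(0.7, 0.1), 2))     # 0.75 -> likely snow rather than cloud
print(degree_day_step(100.0, 5.0))  # a +5 degC day melts 15 mm: 85.0 remains
```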
Snow affects human activity in four major areas: transportation, agriculture, structures, and sports. Most transportation modes are impeded by snow on the travel surface. Agriculture often relies on snow as a source of seasonal moisture. Structures may fail under snow loads. Humans find a wide variety of recreational activities in snowy landscapes. Snow affects the rights of way of highways, airfields and railroads. They share a common tool for clearing snow, the snowplow; however, the application is different in each case—whereas roadways employ anti-icing chemicals to prevent the bonding of ice, airfields may not, and railroads rely on abrasives to enhance traction on tracks. In the late 20th century, an estimated $2 billion was spent annually in North America on roadway winter maintenance, owing to snow and other winter weather events, according to a 1994 report by Kuemmel. The study surveyed the practices of jurisdictions within 44 US states and nine Canadian provinces. It assessed the policies, practices, and equipment used for winter maintenance, and it found similar practices and progress to be prevalent in Europe. The dominant effect of snow on vehicle contact with the road is diminished friction. This can be improved with the use of snow tires, which have a tread designed to compact snow in a manner that enhances traction. However, the key to maintaining a roadway that can accommodate traffic during and after a snow event is an effective anti-icing program that employs both chemicals and plowing. The FHWA "Manual of Practice for an Effective Anti-icing Program" emphasizes "anti-icing" procedures that prevent the bonding of snow and ice to the road. Key aspects of the practice include: understanding anti-icing in light of the level of service to be achieved on a given roadway, the climatic conditions to be encountered, and the different roles of deicing, anti-icing, and abrasive materials and applications; and employing anti-icing "toolboxes", one for operations, one for decision-making and another for personnel. The manual offers matrices that address different types of snow and rates of snowfall, to tailor applications appropriately and efficiently. Snow fences, constructed upwind of roadways, control snow drifting by causing windblown, drifting snow to accumulate in a desired place. They are also used on railways. Additionally, farmers and ranchers use snow fences to create drifts in basins for a ready supply of water in the spring. In order to keep airports open during winter storms, runways and taxiways require snow removal. Unlike roadways, where chloride chemical treatment is common to prevent snow from bonding to the pavement surface, such chemicals are typically banned from airports because of their strong corrosive effect on aluminum aircraft. Consequently, mechanical brushes are often used to complement the action of snowplows. Given the width of runways on airfields that handle large aircraft, vehicles with large plow blades, echelons of plow vehicles, or rotary snowplows are used to clear snow on runways and taxiways. Terminal aprons may require extensive additional areas to be cleared. Properly equipped aircraft are able to fly through snowstorms under instrument flight rules. Prior to takeoff during snowstorms, they require deicing fluid to prevent the accumulation and freezing of snow and other precipitation on wings and fuselages, which may compromise the safety of the aircraft and its occupants. In flight, aircraft rely on a variety of mechanisms to avoid rime and other types of icing in clouds; these include pulsing pneumatic boots, electro-thermal areas that generate heat, and fluid deicers that bleed onto the surface.
Railroads have traditionally employed two types of snowplows for clearing track: the wedge plow, which casts snow to both sides, and the rotary snowplow, which is suited to addressing heavy snowfall and casting snow far to one side or the other. Prior to the invention of the rotary snowplow (ca. 1865), it required multiple locomotives to drive a wedge plow through deep snow. After the track has been cleared with such plows, a "flanger" is used to clear the snow between the rails that lies below the reach of the other types of plow. Where icing may affect the steel-to-steel contact of locomotive wheels on the track, abrasives (typically sand) have been used to provide traction on steeper grades. Railroads employ snow sheds—structures that cover the track—to prevent heavy snow accumulations or avalanches from covering tracks in snowy mountainous areas, such as the Alps and the Rocky Mountains. Snow can be compacted to form a snow road and be part of a winter road route for vehicles to access isolated communities or construction projects during the winter. Snow can also be used to provide the supporting structure and surface for a runway, as with the Phoenix Airfield in Antarctica. The snow-compacted runway is designed to withstand approximately 60 wheeled flights of heavy-lift military aircraft a year. Snowfall can be beneficial to agriculture by serving as a thermal insulator, conserving the heat of the Earth and protecting crops from subfreezing weather. Some agricultural areas depend on an accumulation of snow during winter that will melt gradually in spring, providing water for crop growth, both directly and via runoff through streams and rivers, which supply irrigation canals. The following are examples of rivers that rely on meltwater from glaciers or seasonal snowpack as an important part of their flow, on which irrigation depends: the Ganges, many of whose tributaries rise in the Himalayas and which provides much irrigation in northeast India; the Indus River, which rises in Tibet and provides irrigation water to Pakistan from rapidly retreating Tibetan glaciers; and the Colorado River, which receives much of its water from seasonal snowpack in the Rocky Mountains and provides irrigation water to some 4 million acres (1.6 million hectares). Snow is an important consideration for loads on structures. To address these, European countries employ "Eurocode 1: Actions on structures – Part 1-3: General actions – Snow loads". In North America, the ASCE "Minimum Design Loads for Buildings and Other Structures" gives guidance on snow loads. Both standards employ methods that translate maximum expected ground snow loads into design loads for roofs. Snow loads and icings are the two principal issues snow poses for roofs. Snow loads are related to the climate in which a structure is sited; icings are usually a result of the building or structure generating heat that melts the snow that is on it. "Snow loads" – The "Minimum Design Loads for Buildings and Other Structures" gives guidance on how to translate ground snow conditions and building characteristics into roof snow loads. It gives tables for ground snow loads by region and a methodology for computing ground snow loads that may vary with elevation from nearby measured values. The "Eurocode 1" uses similar methodologies, starting with ground snow loads that are tabulated for portions of Europe.
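As a sketch of how such standards translate ground loads into roof loads, the function below follows the general shape of the ASCE flat-roof relation (a 0.7 basic factor scaled by exposure, thermal, and importance factors). The factor values shown are illustrative assumptions, not tabulated code values.

```python
# Sketch of a ground-to-roof snow load translation in the style of the
# ASCE flat-roof relation p_f = 0.7 * C_e * C_t * I_s * p_g.
# Factor values here are illustrative assumptions, not code tables.

def flat_roof_snow_load(ground_load: float, exposure: float = 1.0,
                        thermal: float = 1.0, importance: float = 1.0) -> float:
    """Design flat-roof snow load, in the same units as ground_load
    (e.g. lb/ft^2 or kN/m^2)."""
    return 0.7 * exposure * thermal * importance * ground_load

# An ordinary heated building with a 30 lb/ft^2 ground snow load:
print(flat_roof_snow_load(30.0))  # 21.0 lb/ft^2
```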
"Icings" – Roofs must also be designed to avoid ice dams, which result from meltwater running under the snow on the roof and freezing at the eave. Ice dams on roofs form when accumulated snow on a sloping roof melts and flows down the roof, under the insulating blanket of snow, until it reaches below-freezing air, typically at the eaves. When the meltwater reaches the freezing air, ice accumulates, forming a dam, and snow that melts later cannot drain properly through the dam. Ice dams may damage building materials, or cause damage or injury when the dam falls off or during attempts to remove it. The melting results from heat passing through the roof under the highly insulating layer of snow. In areas with trees, utility distribution lines on poles are less susceptible to snow loads than to damage from trees felled onto them by heavy, wet snow. Elsewhere, snow can accrete on power lines as "sleeves" of rime ice. Engineers design for such loads, which are measured in kg/m (lb/ft), and power companies have forecasting systems that anticipate the types of weather that may cause such accretions. Rime ice may be removed manually, or by creating a sufficient short circuit in the affected segment of power lines to melt the accretions. Snow figures into many winter sports and forms of recreation, including skiing and sledding. Common examples include cross-country skiing, Alpine skiing, snowboarding, snowshoeing, and snowmobiling. The design of the equipment used typically relies on the bearing strength of snow, as with skis or snowboards, and contends with the coefficient of friction of snow to allow sliding, often enhanced by ski waxes. Skiing is by far the largest form of winter recreation. As of 1994, of the estimated 65–75 million skiers worldwide, approximately 55 million engaged in Alpine skiing and the rest in cross-country skiing. Approximately 30 million skiers (of all kinds) were in Europe, 15 million in the US, and 14 million in Japan. As of 1996, there were reportedly 4,500 ski areas, operating 26,000 ski lifts and enjoying 390 million skier visits per year. The preponderant region for downhill skiing was Europe, followed by Japan and the US. Increasingly, ski resorts are relying on snowmaking: the production of snow by forcing water and pressurized air through a snow gun on ski slopes. Snowmaking is mainly used to supplement natural snow at ski resorts. This allows them to improve the reliability of their snow cover and to extend their ski seasons from late autumn to early spring. The production of snow requires low temperatures, and the threshold temperature for snowmaking increases as humidity decreases. Wet-bulb temperature is used as a metric, since it takes air temperature and relative humidity into account. Snowmaking is a relatively expensive process in its energy consumption, which limits its use. Ski wax enhances the ability of a ski or other runner to slide over snow; this sliding depends on the properties of both the snow and the ski, so as to yield an optimum amount of lubrication from melting the snow by friction with the ski—too little and the ski interacts with solid snow crystals, too much and capillary attraction of meltwater retards the ski. Before a ski can slide, it must overcome the maximum value of static friction. Kinetic (or dynamic) friction occurs when the ski is moving over the snow.
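As an illustration of the wet-bulb criterion for snowmaking discussed above, the sketch below uses Stull's (2011) empirical approximation of wet-bulb temperature from air temperature and relative humidity; the −2.5 °C go/no-go threshold is an assumed rule of thumb, not a figure from the text.

```python
import math

# Snowmaking go/no-go sketch using Stull's (2011) empirical wet-bulb
# approximation (valid roughly for RH 5-99% and T -20..50 degC).
# The -2.5 degC threshold below is an assumed rule of thumb.

def wet_bulb_c(t_c: float, rh_pct: float) -> float:
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

def can_make_snow(t_c: float, rh_pct: float, threshold_c: float = -2.5) -> bool:
    """Drier air lowers the wet-bulb temperature, so snow guns can run at
    warmer air temperatures when humidity is low."""
    return wet_bulb_c(t_c, rh_pct) <= threshold_c

print(round(wet_bulb_c(20, 50), 1))  # ~13.7 degC (Stull's published check case)
print(can_make_snow(1.0, 30.0))      # True: +1 degC air, but low humidity
print(can_make_snow(1.0, 95.0))      # False: same air temperature, humid air
```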
Snow affects warfare conducted in winter, alpine environments or at high latitudes. The main factors are "impaired visibility" for acquiring targets during falling snow, "enhanced visibility" of targets against snowy backgrounds for targeting, and mobility for both mechanized and infantry troops. Snowfall can severely inhibit the logistics of supplying troops, as well. Snow can also provide cover and fortification against small-arms fire. Snow and other winter conditions have affected the operations of many noted winter warfare campaigns. Both plant and animal life endemic to snow-bound areas develop ways to adapt. Among the adaptive mechanisms for plants are dormancy, seasonal dieback and the survival of seeds; for animals, they include hibernation, insulation, anti-freeze chemistry, storing food, drawing on reserves from within the body, and clustering for mutual heat. Snow interacts with vegetation in two principal ways: vegetation can influence the deposition and retention of snow and, conversely, the presence of snow can affect the distribution and growth of vegetation. Tree branches, especially those of conifers, intercept falling snow and prevent accumulation on the ground. Snow suspended in trees ablates more rapidly than that on the ground, owing to its greater exposure to sun and air movement. Trees and other plants can also promote snow retention on the ground, where it would otherwise be blown elsewhere or melted by the sun. Snow affects vegetation in several ways: the presence of stored water can promote growth, yet the annual onset of growth is dependent on the departure of the snowpack for those plants that are buried beneath it. Furthermore, avalanches and erosion from snowmelt can scour terrain of vegetation. Snow supports a wide variety of animals both on the surface and beneath it. Many invertebrates thrive in snow, including spiders, wasps, beetles, snow scorpionflies and springtails. Such arthropods are typically active at temperatures well below freezing. Regarding survival of subfreezing temperatures, invertebrates fall into two groups: those that are freezing-resistant and those that avoid freezing because they are freeze-sensitive. The first group may be cold-hardy owing to the ability to produce antifreeze agents in their body fluids, allowing survival of long exposure to sub-freezing conditions. Some organisms fast during the winter, which expels freezing-sensitive contents from their digestive tracts. The ability to survive the absence of oxygen in ice is an additional survival mechanism. Small vertebrates are active beneath the snow. Among vertebrates, alpine salamanders are active in snow at temperatures well below freezing; they burrow to the surface in springtime and lay their eggs in melt ponds. Among mammals, those that remain active in winter are typically small. Omnivores are more likely to enter a torpor or be hibernators, whereas herbivores are more likely to maintain food caches beneath the snow. Voles and pikas store caches of food, and voles also huddle in communal nests to benefit from one another's warmth. On the surface, wolves, coyotes, foxes, lynx, and weasels rely on these subsurface dwellers for food and often dive into the snowpack to find them. Extraterrestrial "snow" includes water-based precipitation, but also precipitation of other compounds prevalent on other planets and moons in the Solar System.
https://en.wikipedia.org/wiki?curid=28191
Symbolics Symbolics refers to two companies: a defunct computer manufacturer, Symbolics, Inc., and a privately held company that acquired the assets of the former company and continues to sell and maintain the Open Genera Lisp system and the Macsyma computer algebra system. The symbolics.com domain was originally registered on March 15, 1985, making it the first .com domain in the world. In August 2009, it was sold to napkin.com (formerly XF.com) Investments. Symbolics, Inc. was a computer manufacturer headquartered in Cambridge, Massachusetts, and later in Concord, Massachusetts, with manufacturing facilities in Chatsworth, California (a suburban section of Los Angeles). Its first CEO, chairman, and founder was Russell Noftsker. Symbolics designed and manufactured a line of Lisp machines, single-user computers optimized to run the programming language Lisp. Symbolics also made significant advances in software technology, and offered one of the premier software development environments of the 1980s and 1990s, now sold commercially as Open Genera for Tru64 UNIX on the Hewlett-Packard (HP) Alpha. The Lisp machine was the first commercially available "workstation", although that word had not yet been coined. Symbolics was a spinoff from the MIT AI Lab, one of two companies founded by AI Lab staffers and associated hackers for the purpose of manufacturing Lisp machines. The other was Lisp Machines, Inc. (LMI), although Symbolics attracted most of the hackers, and more funding. Symbolics' initial product, the LM-2, introduced in 1981, was a repackaged version of the MIT CADR Lisp machine design. The operating system and software development environment, comprising over 500,000 lines of code, was written in Lisp from the microcode up, based on MIT's Lisp Machine Lisp. The software bundle was later renamed ZetaLisp, to distinguish the Symbolics product from those of other vendors who had also licensed the MIT software. Symbolics' Zmacs text editor, a variant of Emacs, was implemented in a text-processing package named "ZWEI", an acronym for "Zwei was Eine initially", with "Eine" being an acronym for "Eine Is Not Emacs". Both are recursive acronyms and puns on the German words for "one" ("eins", "eine") and "two" ("zwei"). The Lisp Machine system software was then copyrighted by MIT, and was licensed to both Symbolics and LMI. Until 1981, Symbolics shared all its copyrighted enhancements to the source code with MIT and kept them on an MIT server. According to Richard Stallman, Symbolics engaged in a business tactic in which it forced MIT to make all of Symbolics' copyrighted fixes and improvements to the Lisp Machine OS available only to Symbolics (and to MIT, but not to Symbolics' competitors), thereby choking off its competitor LMI, which at that time had insufficient resources to independently maintain or develop the OS and environment. Symbolics felt that it no longer had sufficient control over its product. At that point, Symbolics began using its own copy of the software, located on its company servers; Stallman says that Symbolics did this to prevent its Lisp improvements from flowing to Lisp Machines, Inc. From that base, Symbolics made extensive improvements to every part of the software, and continued to deliver almost all the source code to its customers (including MIT). However, the policy prohibited MIT staff from distributing the Symbolics version of the software to others. With the end of open collaboration came the end of the MIT hacker community.
As a reaction to this, Stallman initiated the GNU Project to build a new community. Eventually, copyleft and the GNU General Public License would ensure that a hacker's software could remain free software. In this way, Symbolics played a key, albeit adversarial, role in instigating the free software movement. In 1983, a year later than planned, Symbolics introduced the 3600 family of Lisp machines. Code-named the "L-machine" internally, the 3600 family was an innovative new design, inspired by the CADR architecture but sharing few of its implementation details. The main processor had a 36-bit word (divided up as 4 or 8 bits of tags, and 32 bits of data or 28 bits of memory address). Memory words were 44 bits, the additional 8 bits being used for an error-correcting code (ECC). The instruction set was that of a stack machine. The 3600 architecture provided 4,096 hardware registers, of which half were used as a cache for the top of the control stack; the rest were used by the microcode and the time-critical routines of the operating system and Lisp run-time environment. Hardware support was provided for virtual memory, which was common for machines in its class, and for garbage collection, which was unique. The original 3600 processor was a microprogrammed design like the CADR, and was built on several large circuit boards from standard TTL integrated circuits, both features being common for commercial computers in its class at the time. Central processing unit (CPU) clock speed varied depending on which instruction was being executed, but was typically around 5 MHz. Many Lisp primitives could be executed in a single clock cycle. Disk input/output (I/O) was handled by multitasking at the microcode level. A 68000 processor, termed the front-end processor (FEP), started the main computer up and handled the slower peripherals during normal operation. An Ethernet interface was standard equipment, replacing the Chaosnet interface of the LM-2. The 3600 was roughly the size of a household refrigerator. This was partly due to the size of the processor (the cards were widely spaced to allow wire-wrap prototype cards to fit without interference) and partly due to the size of disk drive technology in the early 1980s. At the 3600's introduction, the smallest disk that could support the ZetaLisp software was a large-format drive (most 3600s shipped with the 10½-inch Fujitsu Eagle). The 3670 and 3675 were slightly shorter in height, but were essentially the same machine packed a little tighter. The advent of smaller disk drives that could hold hundreds of megabytes led to the introduction of the 3640 and 3645, which were roughly the size of a two-drawer file cabinet. Later versions of the 3600 architecture were implemented on custom integrated circuits, reducing the five cards of the original processor design to two, at a large manufacturing cost savings and with performance slightly better than the old design. The 3650, the first of the "G machines", as they were known within the company, was housed in a cabinet derived from that of the 3640. Denser memory and smaller disk drives enabled the introduction of the 3620, about the size of a modern full-size tower PC. The 3630 was a "fat 3620" with room for more memory and video interface cards. The 3610 was a lower-priced variant of the 3620, essentially identical in every way except that it was licensed for application deployment rather than general development.
The various models of the 3600 family were popular for artificial intelligence (AI) research and commercial applications throughout the 1980s. The AI commercialization boom of the 1980s led directly to Symbolics' success during the decade, as Symbolics computers were widely believed to be the best platform available for developing AI software. The LM-2 used a Symbolics-branded version of the complex space-cadet keyboard, while later models used a simplified version, known simply as the Symbolics keyboard. The Symbolics keyboard featured the many modifier keys used in Zmacs, notably Control/Meta/Super/Hyper in a block, but did not feature the complex symbol set of the space-cadet keyboard. Also contributing to the 3600 series' success was a line of bit-mapped color graphics video interfaces, combined with extremely powerful animation software. Symbolics' Graphics Division, headquartered in Westwood, Los Angeles, California, near the major Hollywood movie and television studios, made its S-Render and S-Paint software into industry leaders in the animation business. Symbolics developed the first workstations able to process high-definition television (HDTV) quality video, which enjoyed a popular following in Japan. A 3600, with the standard black-and-white monitor, made a cameo appearance in the movie "Real Genius". The company was also referenced in Michael Crichton's novel "Jurassic Park". Symbolics' Graphics Division was sold to Nichimen Trading Company in the early 1990s, and the S-Graphics software suite (S-Paint, S-Geometry, S-Dynamics, S-Render) was ported to Franz Allegro Common Lisp on Silicon Graphics (SGI) and PC computers running Windows NT. Today it is sold as Mirai by Izware LLC, and continues to be used in major motion pictures (most famously in New Line Cinema's "The Lord of the Rings"), video games, and military simulations. Symbolics' 3600-series computers were also used as the first front-end "controller" computers for the Connection Machine massively parallel computers manufactured by Thinking Machines Corporation, another MIT spinoff based in Cambridge, Massachusetts. The Connection Machine ran a parallel variant of Lisp and, initially, was used primarily by the AI community, so the Symbolics Lisp machine was a particularly good fit as a front-end machine. For a long time, the operating system did not have a name, but it was finally named "Genera" around 1984. The system included several advanced dialects of Lisp. Its heritage was Maclisp on the PDP-10, but it included more data types and multiple-inheritance object-oriented programming features. This Lisp dialect was called Lisp Machine Lisp at MIT; Symbolics used the name ZetaLisp. Symbolics later wrote new software in "Symbolics Common Lisp", its version of the Common Lisp standard. In the late 1980s (two years later than planned), the Ivory family of single-chip Lisp machine processors superseded the G-machine 3650, 3620, and 3630 systems. The Ivory, a 390,000-transistor VLSI implementation designed in Symbolics Common Lisp using NS, a custom Symbolics hardware design language (HDL), addressed a 40-bit word (8 bits of tag, 32 bits of data/address). Since it addressed only full words and not bytes or half-words, this allowed the addressing of 4 gigawords (GW), or 16 gigabytes (GB), of memory; the increase in address space reflected the growth of programs and data as semiconductor memory and disk space became cheaper.
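To make the 40-bit word layout concrete, here is a toy decoder. Treating the 8 tag bits as the high-order bits is an assumption for illustration; the source does not specify the bit ordering.

```python
# Toy decoder for a 40-bit Ivory-style tagged word (8-bit tag plus 32-bit
# data/address). Putting the tag in the high-order bits is an assumption
# for illustration; the actual hardware layout may have differed.

TAG_BITS, DATA_BITS = 8, 32

def decode_word(word: int) -> tuple[int, int]:
    """Split a 40-bit tagged word into (tag, data)."""
    assert 0 <= word < 1 << (TAG_BITS + DATA_BITS), "not a 40-bit word"
    return word >> DATA_BITS, word & ((1 << DATA_BITS) - 1)

# The 32-bit data/address field yields 2**32 word addresses, i.e. the
# 4 gigawords of word-addressed memory described above.
tag, data = decode_word((0x2A << 32) | 0xDEADBEEF)
print(hex(tag), hex(data))  # 0x2a 0xdeadbeef
print(2 ** DATA_BITS)       # 4294967296 addressable words (4 GW)
```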
The Ivory processor had 8 bits of ECC attached to each word, so each word fetched from external memory to the chip was actually 48 bits wide. Each Ivory instruction was 18 bits wide, and two instructions, plus a 2-bit CDR code and a 2-bit data type, fit in each instruction word fetched from memory. Fetching two instructions at a time from memory enhanced the Ivory's performance. The Ivory instruction set was still microcoded, but unlike the 3600's, the microcode was stored in a 1200 × 180-bit ROM inside the Ivory chip. The initial Ivory processors were fabricated by VLSI Technology Inc. in San Jose, California, on a 2 µm CMOS process, with later generations fabricated by Hewlett-Packard in Corvallis, Oregon, on 1.25 µm and 1 µm CMOS processes. The Ivory had a stack architecture and operated a 4-stage pipeline: fetch, decode, execute and write back. Ivory processors were marketed in stand-alone Lisp machines (the XL400, XL1200, and XL1201), in headless Lisp machines (the NXP1000), and on add-in cards for Sun Microsystems (UX400, UX1200) and Apple Macintosh (MacIvory I, II, III) computers. The Lisp machines with Ivory processors operated at speeds between two and six times faster than a 3600, depending on the model and the revision of the Ivory chip. The Ivory instruction set was later emulated in software for microprocessors implementing the 64-bit Alpha architecture. The "Virtual Lisp Machine" emulator, combined with the operating system and software development environment from the XL machines, is sold as Open Genera. Sunstone was a RISC-like processor that was to be released shortly after the Ivory. It was designed by Ron Lebel's group at the Symbolics Westwood office; however, the project was canceled the day it was supposed to tape out. As quickly as the commercial AI boom of the mid-1980s had propelled Symbolics to success, the "AI winter" of the late 1980s and early 1990s severely damaged the company, compounded by the slowdown of the Reagan administration's Strategic Defense Initiative ("Star Wars") missile defense program, for which the Defense Advanced Research Projects Agency (DARPA) had invested heavily in AI solutions. Sales plummeted amid an internal war between Noftsker and Brian Sear, the CEO the board had hired in 1986, over whether to follow Sun's suggested lead and focus on selling Symbolics' software or to re-emphasize its superior hardware, and amid the ensuing lack of focus when both Noftsker and Sear were fired from the company. This, combined with some ill-advised real estate deals by company management during the boom years (they had entered into large long-term lease obligations in California), drove Symbolics into bankruptcy. Rapid evolution in mass-market microprocessor technology (the "PC revolution"), advances in Lisp compiler technology, and the economics of manufacturing custom microprocessors severely diminished the commercial advantages of purpose-built Lisp machines. By 1995, the Lisp machine era had ended, and with it Symbolics' hopes for success. Symbolics continued as an enterprise with very limited revenues, supported mainly by service contracts on the remaining MacIvory, UX-1200, UX-1201, and other machines still used by commercial customers. Symbolics also sold Virtual Lisp Machine (VLM) software for DEC, Compaq, and HP Alpha-based workstations (AlphaStation) and servers (AlphaServer), refurbished MacIvory IIs, and Symbolics keyboards.
In July 2005, Symbolics closed its Chatsworth, California, maintenance facility. The reclusive owner of the company, Andrew Topping, died that same year. The current legal status of Symbolics software is uncertain. An assortment of Symbolics hardware remained available for purchase for some time afterward, and the United States Department of Defense (US DoD) still pays Symbolics for regular maintenance work. On March 15, 1985, symbolics.com became the first registered .com domain of the Internet (and, since it is still registered, currently the oldest). The symbolics.com domain was purchased by XF.com in 2009. Genera also featured the most extensive networking interoperability software seen to that point. A local area network system called Chaosnet had been invented for the Lisp machine (predating the commercial availability of Ethernet). The Symbolics system supported Chaosnet, but also had one of the first TCP/IP implementations. It also supported DECnet and IBM's SNA network protocols, and a Dialnet protocol that used phone lines and modems. Genera would, using hints from its distributed "namespace" database (somewhat similar to the Domain Name System (DNS), but more comprehensive, like parts of Xerox's Grapevine), automatically select the best protocol combination to use when connecting to a network service. An application program (or a user command) would specify only the name of the host and the desired service. For example, a host name and a request for "Terminal Connection" might yield a connection over TCP/IP using the Telnet protocol (although there were many other possibilities). Likewise, requesting a file operation (such as a Copy File command) might pick NFS, FTP, NFILE (the Symbolics network file access protocol), or one of several others, and it might execute the request over TCP/IP, Chaosnet, or whatever other network was most suitable. The most popular application program for the Symbolics Lisp machine was the ICAD computer-aided engineering system. One of the first networked multi-player video games, a version of Spacewar, was developed for the Symbolics Lisp machine in 1983. Electronic CAD software on the Symbolics Lisp machine was used to develop the first implementation of the Hewlett-Packard Precision Architecture (PA-RISC). Symbolics' research and development staff (first at MIT, and later at the company) produced several major innovations in software technology. The Symbolics Graphics Division (SGD, founded in 1982 and sold to Nichimen Graphics in 1992) developed the S-Graphics software suite (S-Paint, S-Geometry, S-Dynamics, S-Render) for Symbolics Genera. This software was used to create a few computer-animated films and contributed to some popular movies.
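The Genera service selection described earlier can be pictured as a preference-ordered lookup keyed by host and service. The sketch below is purely illustrative; the data and names are hypothetical, not Symbolics' actual namespace schema or API.

```python
# Illustrative sketch (hypothetical data and names, not Symbolics' actual
# namespace schema) of choosing a (protocol, medium) pair for a requested
# service on a named host, in preference order.

NAMESPACE = {
    "acme-lispm": {
        "terminal-connection": [("telnet", "tcp-ip"), ("supdup", "chaosnet")],
        "file-access": [("nfile", "chaosnet"), ("ftp", "tcp-ip")],
    },
}

def resolve(host: str, service: str) -> tuple[str, str]:
    """Return the most-preferred (protocol, medium) the host supports."""
    options = NAMESPACE.get(host, {}).get(service)
    if not options:
        raise LookupError(f"no way to reach service {service!r} on {host!r}")
    return options[0]

print(resolve("acme-lispm", "terminal-connection"))  # ('telnet', 'tcp-ip')
print(resolve("acme-lispm", "file-access"))          # ('nfile', 'chaosnet')
```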
https://en.wikipedia.org/wiki?curid=28195
Pessimism Pessimism is a negative mental attitude in which an undesirable outcome is anticipated from a given situation. Pessimists tend to focus on the negatives of life in general. A common question asked to test for pessimism is "Is the glass half empty or half full?"; in this situation, a pessimist is said to see the glass as half empty, while an optimist is said to see the glass as half full. Throughout history, the pessimistic disposition has had effects on all major areas of thinking. Philosophical pessimism is a related idea that views the world in a strictly anti-optimistic fashion. This form of pessimism is not an emotional disposition, as the term commonly connotes. Instead, it is a philosophy or worldview that directly challenges the notion of progress and what may be considered the faith-based claims of optimism. Philosophical pessimists are often existential nihilists, believing that life has no intrinsic meaning or value. Their responses to this condition, however, are widely varied and often life-affirming. The term pessimism derives from the Latin word "pessimus", meaning 'the worst'. It was first used by Jesuit critics of Voltaire's 1759 novel "Candide, ou l'Optimisme". Voltaire was satirizing the philosophy of Leibniz, who maintained that this was the 'best (optimum) of all possible worlds'. In their attacks on Voltaire, the Jesuits of the "Revue de Trévoux" accused him of "pessimisme". Philosophical pessimism is not a state of mind or a psychological disposition, but rather a worldview or ethic that seeks to face up to perceived distasteful realities of the world and eliminate irrational hopes and expectations (such as the idea of progress and religious faith) which may lead to undesirable outcomes. Ideas which prefigure philosophical pessimism can be seen in ancient texts such as the Dialogue of Pessimism and Ecclesiastes (which maintains that everything is "hevel", literally 'vapor' or 'breath', though the word could also mean 'senseless' or 'absurd'). In Western philosophy, philosophical pessimism is not a single coherent movement, but rather a loosely associated group of thinkers with similar ideas and a family resemblance to each other. In "Pessimism: Philosophy, Ethic, Spirit", Joshua Foa Dienstag outlines the main propositions shared by most philosophical pessimists as "that time is a burden; that the course of history is in some sense ironic; that freedom and happiness are incompatible; and that human existence is absurd." Philosophical pessimists see the self-consciousness of man as bound up with his consciousness of time, and hold that this leads to greater suffering than mere physical pain. While many organisms live in the present, humans and certain species of animals can contemplate the past and future, and this is an important difference. Human beings have foreknowledge of their own eventual fate, and this "terror" is present in every moment of our lives as a reminder of the impermanent nature of life and of our inability to control this change. The philosophical pessimist's view of historical progress tends to be more negative than positive. The philosophical pessimist does not deny that certain areas like science can "progress", but denies that this has resulted in an overall improvement of the human condition. In this sense, it could be said that the pessimist views history as ironic: while seemingly getting better, it is mostly in fact not improving at all, or is getting worse.
This is most clearly seen in Rousseau's critique of enlightenment civil society and his preference for man in the primitive and natural state. For Rousseau, "our souls have become corrupted to the extent that our sciences and our arts have advanced towards perfection". The pessimistic view of the human condition is that it is in a sense "absurd". Absurdity is seen as an ontological mismatch between our desire for meaning and fulfillment and our inability to find or sustain those things in the world, or as Camus puts it: "a divorce between man and his life, the actor and his setting". The idea that rational thought would lead to human flourishing can be traced to Socrates and is at the root of most forms of Western optimistic philosophy. Pessimism turns the idea on its head: it faults the human freedom to reason as the feature that misaligned humanity with the world, and sees it as the root of human unhappiness. The responses to this predicament of the human condition by pessimists are varied. Some philosophers, such as Schopenhauer and Mainländer, recommend a form of resignation and self-denial (which they saw exemplified in Indian religions and Christian monasticism). Some followers tend to believe that "expecting the worst leads to the best." René Descartes even believed that life was better if emotional reactions to "negative" events were removed. Karl Robert Eduard von Hartmann asserted that with cultural and technological progress, the world and its inhabitants will reach a state in which they will voluntarily embrace nothingness. Others like Nietzsche, Leopardi, Julius Bahnsen and Camus respond with a more life-affirming view, what Nietzsche called a "Dionysian pessimism", an embrace of life as it is in all of its constant change and suffering, without appeal to progress or hedonistic calculus. Albert Camus indicated that the common responses to the absurdity of life are often: suicide, a leap of faith (as per Kierkegaard's knight of faith), or recognition/rebellion. Camus rejected all but the last option as unacceptable and inauthentic responses. Philosophical pessimism has often been tied to the arts and literature. Schopenhauer's philosophy was very popular with composers (Wagner, Brahms and Mahler). While there are earlier examples of literary pessimism, such as in the work of Miguel de Cervantes, several philosophical pessimists also wrote novels or poetry (Camus and Leopardi, respectively). A distinctive literary form which has been associated with pessimism is aphoristic writing, as can be seen in Leopardi, Nietzsche and Cioran. Nineteenth- and twentieth-century writers who could be said to express pessimistic views in their works, or to be influenced by pessimistic philosophers, include Charles Baudelaire, Samuel Beckett, Gottfried Benn, Jorge Luis Borges, Charles Bukowski, Dino Buzzati, Lord Byron, Louis-Ferdinand Céline, Joseph Conrad, Fyodor Dostoevsky, Mihai Eminescu, Sigmund Freud, Thomas Hardy, Sadegh Hedayat, H. P. Lovecraft, Thomas Mann, Camilo Pessanha, Edgar Saltus and James Thomson. Late-twentieth and twenty-first century authors who could be said to express or explore philosophical pessimism include David Benatar, Thomas Bernhard, Friedrich Dürrenmatt, John Gray, Michel Houellebecq, Alexander Kluge, Thomas Ligotti, Cormac McCarthy, Eugene Thacker, and Peter Wessel Zapffe. 
In "Philosophy in the Tragic Age of the Greeks", Friedrich Nietzsche argued that the pre-Socratic philosophers such as Anaximander, Heraclitus (called "the Weeping Philosopher") and Parmenides represented a classical form of pessimism. Nietzsche saw Anaximander's philosophy as the "enigmatic proclamation of a true pessimist". Similarly, of Heraclitus' philosophy of flux and strife he wrote: Heraclitus denied the duality of totally diverse worlds—a position which Anaximander had been compelled to assume. He no longer distinguished a physical world from a metaphysical one, a realm of definite qualities from an undefinable "indefinite." And after this first step, nothing could hold him back from a second, far bolder negation: he altogether denied being. For this one world which he retained [...] nowhere shows a tarrying, an indestructibility, a bulwark in the stream. Louder than Anaximander, Heraclitus proclaimed: "I see nothing other than becoming. Be not deceived. It is the fault of your short-sightedness, not of the essence of things, if you believe you see land somewhere in the ocean of becoming and passing-away. You use names for things as though they rigidly, persistently endured; yet even the stream into which you step a second time is not the one you stepped into before." "The Birth of Tragedy. 5, pp. 51–52" Another Greek expressed a form of pessimism in his philosophy: the ancient Cyrenaic philosopher Hegesias (290 BCE). Like later pessimists, Hegesias argued that lasting happiness is impossible to achieve and that all we can do is to try to avoid pain as much as possible. Complete happiness cannot possibly exist; for that the body is full of many sensations, and that the mind sympathizes with the body, and is troubled when that is troubled, and also that fortune prevents many things which we cherished in anticipation; so that for all these reasons, perfect happiness eludes our grasp. Hegesias held that all external objects, events and actions are indifferent to the wise man, even death: "for the foolish person it is expedient to live, but to the wise person it is a matter of indifference". According to Cicero, Hegesias wrote a book called "Death by Starvation", which supposedly persuaded many people that death was more desirable than life. Because of this, Ptolemy II Philadelphus banned Hegesias from teaching in Alexandria. From the 3rd century BCE, Stoicism propounded as an exercise "the premeditation of evils"—concentration on worst possible outcomes. Schopenhauer engaged extensively with the works of Baltasar Gracián (1601–1658) and considered Gracián's novel "El Criticón" "Absolutely unique... a book made for constant use...a companion for life" for "those who wish to prosper in the great world." Schopenhauer's pessimistic outlook was influenced by Gracián, and he translated Gracián's "The Pocket Oracle and Art of Prudence" into German. He praised Gracián for his aphoristic writing style (conceptismo) and often quoted him in his works. Gracian's novel "El Criticón" (The Critic) is an extended allegory of the human search for happiness which turns out to be fruitless on this Earth. "The Critic" paints a bleak and desolate picture of the human condition. His "Pocket Oracle" was a book of aphorisms on how to live in what he saw as a world filled with deception, duplicity and disillusionment. 
Voltaire was the first European to be labeled as a pessimist, due to his critique of Alexander Pope's optimistic "An Essay on Man" and of Leibniz's affirmation that "we live in the best of all possible worlds." Voltaire's novel "Candide" is an extended criticism of theistic optimism, and his "Poem on the Lisbon Disaster" is especially pessimistic about the state of mankind and the nature of God. Though himself a Deist, Voltaire argued against the existence of a compassionate personal God through his interpretation of the problem of evil. The major themes of philosophical pessimism were first presented by Rousseau, and he has been called "the patriarch of pessimism". For Rousseau, humans in their "natural goodness" have no sense of self-consciousness in time and thus are happier than humans corrupted by society. Rousseau saw the movement out of the state of nature as the origin of inequality and of mankind's lack of freedom. The wholesome qualities of man in his natural state, a non-destructive love of self and compassion, are gradually replaced by "amour propre", a self-love driven by pride and jealousy of his fellow man. Because of this, modern man lives "always outside himself", concerned with other men, the future and external objects. Rousseau also blames the human faculty of "perfectibility" and human language for tearing us away from our natural state by allowing us to imagine a future in which we are different from what we are now, and therefore making us appear inadequate to ourselves (and thus 'perfectible'). Rousseau saw the evolution of modern society as the replacement of natural egalitarianism by alienation and class distinction enforced by institutions of power. Thus "The Social Contract" opens with the famous phrase "Man is born free, and everywhere he is in chains." Even the ruling classes are not free; in fact, for Rousseau they are "greater slaves", because they require more esteem from others to rule and must therefore constantly live "outside themselves". Though a lesser-known figure outside Italy, Giacomo Leopardi was highly influential in the 19th century, especially for Schopenhauer and Nietzsche. In Leopardi's darkly comic essays, aphorisms, fables and parables, life is often described as a sort of divine joke or mistake. According to Leopardi, because of our conscious sense of time and our endless search for truth, the human desire for happiness can never be truly satiated and joy cannot last. Leopardi claims that "Therefore they greatly deceive themselves, [those] who declare and preach that the perfection of man consists in knowledge of the truth and that all his woes proceed from false opinions and ignorance, and that the human race will at last be happy, when all or most people come to know the truth, and solely on the grounds of that arrange and govern their lives." Furthermore, Leopardi believes that it is not possible for man to forget truth, and that "it is easier to rid oneself of any habit before that of philosophizing." Leopardi's response to this condition is to face up to these realities and try to live a vibrant and great life, to be risky and take up uncertain tasks. This uncertainty makes life valuable and exciting, but it does not free us from suffering; it is rather an abandonment of the futile pursuit of happiness. He uses the example of Christopher Columbus, who went on a dangerous and uncertain voyage and because of this grew to appreciate life more fully. 
Leopardi also sees the capacity of humans to laugh at their condition as a laudable quality that can help us deal with our predicament. For Leopardi: "He who has the courage to laugh is master of the world, much like him who is prepared to die." Arthur Schopenhauer's pessimism comes from his elevating of Will above reason as the mainspring of human thought and behavior. The Will is the ultimate metaphysical animating noumenon, and it is futile, illogical and directionless striving. Schopenhauer sees reason as weak and insignificant compared to Will; in one metaphor, Schopenhauer compares the human intellect to a lame man who can see, but who rides on the shoulders of the blind giant of Will. Schopenhauer saw human desires as impossible to satisfy. He pointed to motivators such as hunger, thirst and sexuality as the fundamental features of the Will in action, which are always by nature unsatisfactory. All satisfaction, or what is commonly called happiness, is really and essentially always "negative" only, and never positive. It is not a gratification which comes to us originally and of itself, but it must always be the satisfaction of a wish. For desire, that is to say, want, is the precedent condition of every pleasure; but with the satisfaction, the desire and therefore the pleasure cease; and so the satisfaction or gratification can never be more than deliverance from a pain, from a want. Such is not only every actual and evident suffering, but also every desire whose importunity disturbs our peace, and indeed even the deadening boredom that makes existence a burden to us. Schopenhauer notes that once satiated, the feeling of satisfaction rarely lasts, and we spend most of our lives in a state of endless striving; in this sense we are, deep down, nothing but Will. Even the moments of satisfaction, when repeated often enough, only lead to boredom, and thus human existence is constantly swinging "like a pendulum to and fro between pain and boredom, and these two are in fact its ultimate constituents". This ironic cycle eventually allows us to see the inherent vanity at the truth of existence ("Nichtigkeit") and to realize that "the purpose of our existence is not to be happy". Moreover, the business of biological life is a war of all against all, filled with constant physical pain and distress, not merely unsatisfied desires. There is also the constant dread of death on the horizon to consider, which makes human life worse than that of animals. Reason only compounds our suffering by allowing us to realize that biology's agenda is not something we would have chosen had we been given a choice, but it is ultimately helpless to prevent us from serving it. Schopenhauer saw in artistic contemplation a temporary escape from the act of willing. He believed that through "losing yourself" in art one could sublimate the Will. However, he believed that only a resignation from the pointless striving of the will to life, through a form of asceticism (such as that practiced by Eastern monastics and by "saintly persons"), could free oneself from the Will altogether. Schopenhauer never used the term "pessimism" to describe his philosophy, but he also did not object when others called it that. Other common terms used to describe his thought were voluntarism and irrationalism, which he likewise never used. During the last years of Schopenhauer's life and the years following his death, post-Schopenhauerian pessimism became a rather popular "trend" in 19th-century Germany. 
Nevertheless, it was viewed with disdain by the other popular philosophies at the time, such as Hegelianism, materialism, neo-Kantianism and the emerging positivism. In an age of upcoming revolutions and exciting new discoveries in science, the resigned and a-progressive nature of the typical pessimist was seen as a detriment to social development. To respond to this growing criticism, a group of philosophers greatly influenced by Schopenhauer (indeed, some even being his personal acquaintances) developed their own brand of pessimism, each in their own unique way. Thinkers such as Julius Bahnsen, Karl Robert Eduard von Hartmann, Philipp Mainländer and others cultivated the ever-increasing threat of pessimism by converting Schopenhauer's transcendental idealism into what Frederick C. Beiser calls transcendental realism. The transcendental idealist thesis is that we know only the appearances of things (not things-in-themselves); the transcendental realist thesis is that "the knowledge we have of how things appear to us in experience gives us knowledge of things-in-themselves." By espousing transcendental realism, Schopenhauer's own dark observations about the nature of the world would become completely knowable and objective, and in this way they would attain certainty: the certainty of pessimism being that non-existence is preferable to existence. That certainty, along with the metaphysical reality of the will, formed the premises which the "post-Schopenhauerian" thinkers inherited from Schopenhauer's teachings. After this common starting point, each philosopher developed his own negative view of being in his respective philosophy. Some pessimists would "assuage" the critics by accepting the validity of their criticisms and embracing historicism, as was the case with Schopenhauer's literary executor Julius Frauenstädt and with Karl Robert Eduard von Hartmann (who gave transcendental realism a unique twist). Julius Bahnsen would reshape the understanding of pessimism overall, while Philipp Mainländer set out to reinterpret and elucidate the nature of the will, presenting it as a self-mortifying will-to-death. Friedrich Nietzsche could be said to be a philosophical pessimist even though, unlike Schopenhauer (whom he read avidly), his response to the 'tragic' pessimistic view is neither resigned nor self-denying, but a life-affirming form of pessimism. For Nietzsche this was a "pessimism of the future", a "Dionysian pessimism." Nietzsche identified his Dionysian pessimism with what he saw as the pessimism of the Greek pre-Socratics and also saw it at the core of ancient Greek tragedy. He saw tragedy as laying bare the terrible nature of human existence, bound by constant flux. In contrast to this, Nietzsche saw Socratic philosophy as an optimistic refuge for those who could not bear the tragic any longer. Since Socrates posited that wisdom could lead to happiness, Nietzsche saw this as "morally speaking, a sort of cowardice... amorally, a ruse". Nietzsche was also critical of Schopenhauer's pessimism because, in judging the world negatively, it turned to moral judgements about the world and therefore led to weakness and nihilism. Nietzsche's response was a total embracing of the nature of the world, a "great liberation" through a "pessimism of strength" which "does not sit in judgement of this condition". Nietzsche believed that the task of the philosopher was to wield this pessimism like a hammer, to first attack the basis of old moralities and beliefs and then to "make oneself a new pair of wings", i.e. 
to re-evaluate all values and create new ones. A key feature of this Dionysian pessimism was 'saying yes' to the changing nature of the world; this entailed embracing destruction and suffering joyfully, forever (hence the ideas of amor fati and eternal recurrence). Pessimism for Nietzsche is an art of living that is "good for one's health" as a "remedy and an aid in the service of growing and struggling life". In a 1945 article, Albert Camus wrote "the idea that a pessimistic philosophy is necessarily one of discouragement is a puerile idea." Camus helped popularize the idea of "the absurd", a key term in his famous essay "The Myth of Sisyphus". Like previous philosophical pessimists, Camus sees human consciousness and reason as that which "sets me in opposition to all creation". For Camus, this clash between a reasoning mind which craves meaning and a 'silent' world is what produces the most important philosophical problem, the 'problem of suicide'. Camus believed that people often escape facing the absurd through "eluding" ("l'esquive"), a 'trickery' for "those who live not for life itself but some great idea that will transcend it, refine it, give it a meaning, and betray it". He considered suicide and religion inauthentic forms of eluding or escaping the problem of existence. For Camus, the only choice was to rebelliously accept and live with the absurd, for "there is no fate that cannot be surmounted by scorn." Camus' response to the absurd problem is illustrated through the Greek mythic character of Sisyphus, who was condemned by the gods to push a boulder up a hill for eternity. Camus imagines Sisyphus while pushing the rock, realizing the futility of his task, but doing it anyway out of rebellion: "One must imagine Sisyphus happy." There are several theories of epistemology which could arguably be said to be pessimistic in the sense that they consider it difficult or even impossible to obtain knowledge about the world. These ideas are generally related to nihilism, philosophical skepticism and relativism. Friedrich Heinrich Jacobi (1743–1819) analyzed rationalism, and in particular Immanuel Kant's "critical" philosophy, in order to carry out a reductio ad absurdum according to which all rationalism reduces to nihilism, and thus it should be avoided and replaced with a return to some type of faith and revelation. Richard Rorty, Michel Foucault, and Ludwig Wittgenstein questioned whether our particular concepts could relate to the world in any absolute way and whether we can justify our ways of describing the world as compared with other ways. In general, these philosophers argue that truth is not about getting it right or representing reality, but is part of subjective social relations of power, or of language-games that serve our purposes in a particular time. Therefore, these forms of anti-foundationalism, while not being pessimistic per se, reject as valid any definition that claims to have discovered absolute 'truths' or foundational facts about the world. Philosophical pessimism stands opposed to the optimism or even utopianism of Hegelian philosophies. Emil Cioran claimed "Hegel is chiefly responsible for modern optimism. How could he have failed to see that consciousness changes only its forms and modalities, but never progresses?" Philosophical pessimism is differentiated from other political philosophies by having no ideal governmental structure or political project; rather, pessimism generally tends to be an anti-systematic philosophy of individual action. 
This is because philosophical pessimists tend to be skeptical that any politics of social progress can actually improve the human condition. As Cioran states, "every step forward is followed by a step back: this is the unfruitful oscillation of history". Cioran also attacks political optimism because it creates an "idolatry of tomorrow" which can be used to authorize anything in its name. This does not mean, however, that the pessimist cannot be politically involved, as Camus argued in "The Rebel". There is another strain of thought generally associated with a pessimistic worldview: the pessimism of cultural criticism and social decline seen in Oswald Spengler's "The Decline of the West". Spengler promoted a cyclic model of history similar to the theories of Giambattista Vico. Spengler believed modern western civilization was in the 'winter' age of decline ("untergang"). Spenglerian theory was immensely influential in interwar Europe, especially in Weimar Germany. Similarly, traditionalist Julius Evola thought that the world was in the Kali Yuga, a dark age of moral decline. Intellectuals like Oliver James correlate economic progress with economic inequality, the stimulation of artificial needs, and affluenza. Anti-consumerists identify rising trends of conspicuous consumption and self-interested, image-conscious behavior in culture. Post-modernists like Jean Baudrillard have even argued that culture (and therefore our lives) now has no basis in reality whatsoever. Conservative thinkers, especially social conservatives, often perceive politics in a generally pessimistic way. William F. Buckley famously remarked that he was "standing athwart history yelling 'stop!'" and Whittaker Chambers was convinced that capitalism was bound to fall to communism, though he was himself vehemently anti-communist. Social conservatives often see the West as a decadent and nihilistic civilization which has abandoned its roots in Christianity and/or Greek philosophy, leaving it doomed to fall into moral and political decay. Robert Bork's "Slouching Toward Gomorrah" and Allan Bloom's "The Closing of the American Mind" are famous expressions of this point of view. Many economic conservatives and libertarians believe that the expansion of the state and the role of government in society is inevitable, and that they are at best fighting a holding action against it. They hold that the natural tendency of people is to be ruled and that freedom is an exceptional state of affairs which is now being abandoned in favor of the social and economic security provided by the welfare state. Political pessimism has sometimes found expression in dystopian novels such as George Orwell's "Nineteen Eighty-Four". Political pessimism about one's country often correlates with a desire to emigrate. During the financial crisis of 2007–08 in the United States, the neologism "pessimism porn" was coined to describe the alleged eschatological and survivalist thrill some people derive from predicting, reading and fantasizing about the collapse of civil society through the destruction of the world's economic system. Technological pessimism is the belief that advances in science and technology do not lead to an improvement in the human condition. Technological pessimism can be said to have originated during the industrial revolution with the Luddite movement. Luddites blamed the rise of industrial mills and advanced factory machinery for the loss of their jobs and set out to destroy them. 
The Romantic movement was also pessimistic towards the rise of technology and longed for simpler and more natural times. Poets like William Wordsworth and William Blake believed that industrialization was polluting the purity of nature. Some social critics and environmentalists believe that globalization, overpopulation and the economic practices of modern capitalist states over-stress the planet's ecological equilibrium. They warn that unless something is done to slow this, climate change will worsen, eventually leading to some form of social and ecological collapse. James Lovelock believes that the ecology of the Earth has already been irretrievably damaged, and even an unrealistic shift in politics would not be enough to save it. According to Lovelock, the Earth's climate regulation system is being overwhelmed by pollution and the Earth will soon jump from its current state into a dramatically hotter climate. Lovelock blames this state of affairs on what he calls "polyanthroponemia", which occurs when "humans overpopulate until they do more harm than good." Lovelock states: The presence of 7 billion people aiming for first-world comforts…is clearly incompatible with the homeostasis of climate but also with chemistry, biological diversity and the economy of the system. Some radical environmentalists, anti-globalization activists, and Neo-luddites can be said to hold to this type of pessimism about the effects of modern "progress". A more radical form of environmental pessimism is anarcho-primitivism, which faults the agricultural revolution for giving rise to social stratification, coercion, and alienation. Some anarcho-primitivists promote deindustrialization, abandonment of modern technology and rewilding. An infamous anarcho-primitivist is Theodore Kaczynski, also known as the Unabomber, who engaged in a nationwide mail bombing campaign. In his 1995 manifesto, "Industrial Society and Its Future", he called attention to the erosion of human freedom by the rise of the modern "industrial-technological system". The manifesto begins thus: The Industrial Revolution and its consequences have been a disaster for the human race. They have greatly increased the life-expectancy of those of us who live in "advanced" countries, but they have destabilized society, have made life unfulfilling, have subjected human beings to indignities, have led to widespread psychological suffering (in the Third World to physical suffering as well) and have inflicted severe damage on the natural world. The continued development of technology will worsen the situation. It will certainly subject human beings to greater indignities and inflict greater damage on the natural world, it will probably lead to greater social disruption and psychological suffering, and it may lead to increased physical suffering even in "advanced" countries. One of the most radical pessimist organizations is the voluntary human extinction movement, which argues for the extinction of the human race through antinatalism. Pope Francis' controversial 2015 encyclical on ecological issues is rife with pessimistic assessments of the role of technology in the modern world. 'Entropy pessimism' represents a special case of technological and environmental pessimism, based on thermodynamic principles. According to the first law of thermodynamics, matter and energy are neither created nor destroyed in the economy. 
According to the second law of thermodynamics — also known as the entropy law — what happens in the economy is that all matter and energy are transformed from states available for human purposes (valuable natural resources) to states unavailable for human purposes (valueless waste and pollution). In effect, all of man's technologies and activities are only speeding up the general march toward a future planetary 'heat death' of degraded energy, exhausted natural resources and a deteriorated environment — a state of maximum entropy locally on earth; 'locally' on earth, that is, when compared to the heat death of the universe, taken as a whole. The term 'entropy pessimism' was coined to describe the work of Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and the paradigm founder of ecological economics. Georgescu-Roegen made extensive use of the entropy concept in his magnum opus, "The Entropy Law and the Economic Process". Since the 1990s, leading ecological economist and steady-state theorist Herman Daly — a student of Georgescu-Roegen — has been the economics profession's most influential proponent of entropy pessimism. Among other matters, the entropy pessimism position is concerned with the existential impossibility of allocating Earth's finite stock of mineral resources evenly among an unknown number of present and future generations. This number of generations is likely to remain unknown to us, as there is little or no way of knowing in advance if or when mankind will ultimately face extinction. In effect, "any" conceivable intertemporal allocation of the stock will inevitably end up with universal economic decline at some future point. Entropy pessimism is a widespread view in ecological economics and in the degrowth movement. The legal scholar Stephanos Bibas writes that some criminal defense attorneys prefer to err on the side of pessimism: "Optimistic forecasts risk being proven disastrously wrong at trial, an embarrassing result that makes clients angry. On the other hand, if clients plead based on their lawyers' overly pessimistic advice, the cases do not go to trial and the clients are none the wiser." In the ancient world, psychological pessimism was associated with melancholy, and was believed to be caused by an excess of black bile in the body. The study of pessimism has parallels with the study of depression. Psychologists trace pessimistic attitudes to emotional pain or even biology. Aaron Beck argues that depression is due to unrealistic negative views about the world. Beck starts treatment by engaging in conversation with clients about their unhelpful thoughts. Pessimists, however, are often able to provide arguments that suggest that their understanding of reality is justified, as in depressive realism (or pessimistic realism). Deflection is a common method used by those who are depressed: they let people assume they are revealing everything, which proves to be an effective way of hiding. The pessimism item on the Beck Depression Inventory has been judged useful in predicting suicides. The Beck Hopelessness Scale has also been described as a measurement of pessimism. Wender and Klein point out that pessimism can be useful in some circumstances: "If one is subject to a series of defeats, it pays to adopt a conservative game plan of sitting back and waiting and letting others take the risks. Such waiting would be fostered by a pessimistic outlook. 
Similarly if one is raking in the chips of life, it pays to adopt an expansive risk taking approach, and thus maximize access to scarce resources." Through history, some have concluded that a pessimistic attitude, although justified, must be avoided in order to endure. Optimistic attitudes tend to be favored for the sake of emotional well-being. Al-Ghazali and William James rejected their pessimism after suffering psychological or even psychosomatic illness. Criticisms of this sort, however, assume that pessimism leads inevitably to a mood of darkness and utter depression. Many philosophers would disagree, claiming that the term "pessimism" is being abused. The link between pessimism and nihilism is present, but the former does not necessarily lead to the latter, as philosophers such as Albert Camus believed. Happiness is not inextricably linked to optimism, nor is pessimism inextricably linked to unhappiness. One could easily imagine an unhappy optimist, and a happy pessimist. Accusations of pessimism may be used to silence legitimate criticism. In 2006, the economist Nouriel Roubini was largely dismissed as a pessimist for his dire but accurate predictions of a coming global financial crisis. "Personality Plus" opines that pessimistic temperaments (e.g. melancholy and phlegmatic) can be useful inasmuch as pessimists' focus on the negative helps them spot problems that people with more optimistic temperaments (e.g. choleric and sanguine) miss.
https://en.wikipedia.org/wiki?curid=24743
Peter Wessel Zapffe Peter Wessel Zapffe (December 18, 1899 – October 12, 1990) was a Norwegian metaphysician, author, lawyer and mountaineer. He is often noted for his philosophically pessimistic and fatalistic view of human existence—his system of philosophy in line with the work of the earlier philosopher Arthur Schopenhauer, by whom he was inspired—as well as his firm advocacy of antinatalism. His thoughts regarding the error of human life are presented in the essay "The Last Messiah" (Norwegian: "Den sidste Messias", 1933). This essay is a shorter version of his best-known and yet-to-be-translated work, the philosophical treatise "On the Tragic" ("Om det tragiske", 1941). Zapffe's view is that humans are born with an overdeveloped skill (understanding, self-knowledge) which does not fit into nature's design. The human craving for justification on matters such as life and death cannot be satisfied, hence humanity has a need that nature cannot satisfy. The tragedy, following this theory, is that humans spend all their time trying not to be human. The human being, therefore, is a paradox. In "The Last Messiah" Zapffe described four principal defense mechanisms that humankind uses to avoid facing this paradox: isolation, anchoring, distraction, and sublimation. Zapffe was a prolific mountaineer and took a very early interest in environmentalism. This form of nature conservationism sprang from the intent not of protecting nature, but of avoiding the human culturalization of nature. Zapffe was the author of many humorous short stories about climbing and other adventures in nature. Zapffe married twice. He remained married to his second wife Berit Zapffe until his death in 1990. Berit died in May 2008. Zapffe remained childless by choice.
https://en.wikipedia.org/wiki?curid=24744
Franc Poincaré The Franc Poincaré is a unit of account that was used in the international regulation of liability. It is defined as 65.5 milligrams of gold of millesimal fineness .900. Formerly it was identical to the French franc, although it has not been so since the 1920s. Practice on its conversion to national currencies varies from state to state; in most states the conversion factor is based not on the market price of gold, but on an official price (a remnant of the gold standard, frequently far below its market price today). The Franc Poincaré has been replaced for most purposes by special drawing rights. Conventions which used the Franc Poincaré included the Convention for the Unification of Certain Rules Relating to International Carriage by Air, the International Convention on Civil Liability for Oil Pollution Damage and the International Convention on the Establishment of an International Fund for Compensation for Oil Pollution Damage.
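As a worked illustration of the conversion arithmetic (a minimal sketch; the official gold price below is a made-up placeholder, since each state sets its own official price):

```python
# One Franc Poincaré is defined as 65.5 mg of gold at millesimal fineness .900,
# i.e. 65.5 mg * 0.900 = 58.95 mg of fine gold per franc.
FINE_GOLD_GRAMS = 0.0655 * 0.900  # 0.05895 g of fine gold

def francs_to_currency(francs, official_price_per_gram):
    """Convert Francs Poincaré using a state's official (not market) gold price."""
    return francs * FINE_GOLD_GRAMS * official_price_per_gram

# With a hypothetical official price of 5.00 currency units per gram of gold,
# a liability limit of 1,000 francs converts to 294.75 units.
print(francs_to_currency(1000, 5.00))
```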
https://en.wikipedia.org/wiki?curid=24745
PCX PCX, standing for "PiCture eXchange", is an image file format developed by the now-defunct ZSoft Corporation of Marietta, Georgia, United States. It was the native file format for PC Paintbrush and became one of the first widely accepted DOS imaging standards, although it has since been succeeded by more sophisticated image formats, such as BMP, JPEG, and PNG. PCX files commonly stored palette-indexed images ranging from 2 or 4 colors to 16 and 256 colors, although the format has been extended to record true-color (24-bit) images as well. PCX was designed during the early development of PC display hardware, and most of the formats it supported are no longer used; Table A lists the most commonly used PCX formats. Contemporary image editing programs may not read PCX files written for older hardware. PCX is supported by common image processing software including ACDSee, GIMP, ImageMagick, IrfanView, LView, Netpbm, PaintShop Pro, Photoshop, Visio, PMview, XnView and GraphicConverter. In version 2.1.4 FFmpeg could encode and decode the PCX pixel formats "rgb24, rgb8, bgr8, rgb4_byte, bgr4_byte, gray, pal8," and "monob". There is a multi-page version of PCX, used by some computer fax and document management programs, with file extension .dcx. A DCX file consists of a header introducing a set of following PCX files. PCX files were designed for use on IBM-compatible PCs and always use little-endian byte ordering. A PCX file has three main sections, in the following order: the header, the image data, and an optional 256-color palette. The PCX file header contains an identifier byte (value 10), a version number, image dimensions, 16 palette colors, the number of color planes, the bit depth of each plane, and a value for the compression method. PCX version numbers range from 0 to 5; this originally denoted the version of the PC Paintbrush program used to create the PCX file. The header always has space for 16 colors, though the number of colors used depends upon the bit depth of the image. The header is 74 bytes long and the image data begins 128 bytes after the start of the file; the 54 bytes between are not used. The header is composed of 18 fields. All PCX files use the same compression scheme and the compression value is always 1. No other values have been defined and there are no uncompressed PCX files. One source claims that 0 (uncompressed) is "allowed, but not much software supports it". PCX image data is stored in rows or scan lines in top-down order. If the image has multiple planes, these are stored by plane within row, such that all the red data for row 0 are followed by all the green data for row 0, then all the blue data, then alpha data. This pattern is repeated for each line as shown in Table B. When an image is less than 8 bits per pixel, each line is padded to the next byte boundary. For example, if an image has 1 plane of 1-bit data (monochrome) with a width of 22 pixels, each row will be 3 bytes long, having 24 bits per row with 2 bits unused. PCX image data are compressed using run-length encoding (RLE), a simple lossless compression algorithm that collapses a series of three or more consecutive bytes with identical values into a two-byte pair. 
The two most-significant bits of a byte are used to determine whether the given data represent a single pixel of a given palette index or color value, or an RLE pair representing a series of several pixels of a single value: if both bits are set, the lower six bits of the byte give the run length and the following byte gives the pixel value. Compared to the maximum run length of 128, possible with TGA RLE compression, the PCX run-length encoding offers a larger single-pixel value range, while the maximum run length is restricted to 63. Due to the use of the two most-significant bits as flags, pixel values from 192 to 255 (with their most-significant bit already set) must be stored in an RLE byte pair, even when they occur only one or two pixels in succession, whereas color indexes 0 to 191 can be stored directly "or" in RLE byte pairs (whichever is more space-efficient); therefore, the actual compression ratio could be optimized with proper sorting of palette entries, though this is not feasible where the file must share its color palette with other images. For example, a palette could be optimized with the most commonly used colors occurring in palette positions 0 to 191 and the least common colors allocated to the remaining quarter of the palette. Another inefficiency of the RLE algorithm is that it is possible to store chunks with a length of 0, which introduces filler bytes that carry no pixel data. This allowed PCX files to be decompressed slightly faster on the processors the format was originally intended for. This quirk could be used for steganography. The PCX compression algorithm requires very little processor power or memory to apply, a significant concern on the computer systems for which it was designed. As computers and display hardware grew more sophisticated, the PCX algorithm became less space-efficient. Compression algorithms used by newer image formats are more efficient when compressing images such as photographs, and dithered or otherwise complex graphics. A PCX file has space in its header for a 16-color palette. When 256-color VGA hardware became available, there was not enough space for the palette in a PCX file; even the 54 unused bytes after the header would not be enough. The solution chosen was to put the palette at the end of the file, along with a marker byte to confirm its existence. If a PCX file has a 256-color palette, it is found 768 bytes from the end of the file. In this case the value in the byte preceding the palette should be 12 (0x0C). The palette is stored as a sequence of RGB triples; its usable length is defined by the number of colors in the image. Color values in a PCX palette always use 8 bits, regardless of the bit depth of the image.
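The pieces described above (the 128-byte offset to the image data, the RLE byte pairs, and the trailing 768-byte palette with its 0x0C marker) fit together as in the following minimal Python sketch. The header offsets used for the plane count and bytes-per-line fields follow the commonly published PCX layout and should be treated as illustrative rather than authoritative:

```python
# Minimal PCX reader sketch: parse a few header fields, RLE-decode the image
# data, and pick up the optional 256-color palette at the end of the file.
import struct

def load_pcx(path):
    with open(path, "rb") as f:
        data = f.read()

    # Header: identifier byte (always 10) and compression value (always 1).
    if data[0] != 10 or data[2] != 1:
        raise ValueError("not a (compressed) PCX file")
    xmin, ymin, xmax, ymax = struct.unpack_from("<4H", data, 4)
    nplanes = data[65]                                 # number of color planes
    bytes_per_line = struct.unpack_from("<H", data, 66)[0]
    width, height = xmax - xmin + 1, ymax - ymin + 1

    # RLE decode: if the two most-significant bits of a byte are set, the low
    # six bits are a run count (0..63) and the next byte is the pixel value;
    # otherwise the byte itself is a single literal pixel value (0..191).
    out = bytearray()
    i = 128                      # image data begins 128 bytes into the file
    need = bytes_per_line * nplanes * height
    while len(out) < need and i < len(data):
        b = data[i]; i += 1
        if (b & 0xC0) == 0xC0:
            count = b & 0x3F     # a count of 0 is legal and emits nothing
            out.extend(data[i:i + 1] * count)
            i += 1
        else:
            out.append(b)

    # Optional 256-color palette: 768 RGB bytes at the end of the file,
    # preceded by a marker byte of 12 (0x0C).
    palette = None
    if len(data) >= 769 and data[-769] == 0x0C:
        pal = data[-768:]
        palette = [tuple(pal[3 * k:3 * k + 3]) for k in range(256)]

    return width, height, bytes(out), palette
```

For the 22-pixel monochrome example given earlier, bytes_per_line would be 3, so the decoder expects three bytes per scan line even though the last two bits of each line carry no pixels.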
https://en.wikipedia.org/wiki?curid=24746
Permian–Triassic extinction event The Permian–Triassic extinction event, also known as the P–Tr extinction, the P–T extinction, the End-Permian Extinction, and colloquially as the Great Dying, formed the boundary between the Permian and Triassic geologic periods, as well as between the Paleozoic and Mesozoic eras, approximately 252 million years ago. It is the Earth's most severe known extinction event, with up to 96% of all marine species and 70% of terrestrial vertebrate species becoming extinct. It was the largest known mass extinction of insects. Some 57% of all biological families and 83% of all genera became extinct. There is evidence for one to three distinct pulses, or phases, of extinction. Potential causes for those pulses include one or more large meteor impact events, massive volcanic eruptions (such as the Siberian Traps), and climate change brought on by large releases of underwater methane or methane-producing microbes. The speed of the recovery from the extinction is disputed. Some scientists estimate that it took 10 million years (until the Middle Triassic), due both to the severity of the extinction and because grim conditions returned periodically for another 5 million years. However, studies in Bear Lake County, near Paris, Idaho, showed a relatively quick rebound in a localized Early Triassic marine ecosystem, taking around 2 million years to recover, suggesting that the impact of the extinction may have been felt less severely in some areas than others. Until 2000, it was thought that rock sequences spanning the Permian–Triassic boundary were too few and contained too many gaps for scientists to reliably determine its details. However, it is now possible to date the extinction with millennial precision. U–Pb zircon dates from five volcanic ash beds from the Global Stratotype Section and Point for the Permian–Triassic boundary at Meishan, China, establish a high-resolution age model for the extinction – allowing exploration of the links between global environmental perturbation, carbon cycle disruption, mass extinction, and recovery at millennial timescales. The extinction occurred between 251.941 ± 0.037 and 251.880 ± 0.031 Ma ago, a duration of 60 ± 48 ka (the difference between the two mean dates, with the uncertainties combined in quadrature: sqrt(37^2 + 31^2) ≈ 48). A large (approximately 0.9%), abrupt global decrease in the ratio of the stable isotope 13C to that of 12C coincides with this extinction, and is sometimes used to identify the Permian–Triassic boundary in rocks that are unsuitable for radiometric dating. Further evidence for environmental change around the P–Tr boundary suggests a substantial rise in temperature and an increase in CO2 levels (for comparison, the CO2 concentration immediately before the industrial revolution was about 280 ppm, and the amount today is about 410 ppm). There is also evidence of increased ultraviolet radiation reaching the Earth, causing the mutation of plant spores. It has been suggested that the Permian–Triassic boundary is associated with a sharp increase in the abundance of marine and terrestrial fungi, caused by the sharp increase in the amount of dead plants and animals fed upon by the fungi. For a while this "fungal spike" was used by some paleontologists to identify the Permian–Triassic boundary in rocks that are unsuitable for radiometric dating or lack suitable index fossils, but even the proposers of the fungal spike hypothesis pointed out that "fungal spikes" may have been a repeating phenomenon created by the post-extinction ecosystem in the earliest Triassic. 
The very idea of a fungal spike has been criticized on several grounds, including: "Reduviasporonites", the most common supposed fungal spore, may be a fossilized alga; the spike did not appear worldwide; and in many places it did not fall on the Permian–Triassic boundary. The reduviasporonites may even represent a transition to a lake-dominated Triassic world rather than an earliest Triassic zone of death and decay in some terrestrial fossil beds. Newer chemical evidence agrees better with a fungal origin for "Reduviasporonites", diluting these critiques. Uncertainty exists regarding the duration of the overall extinction and about the timing and duration of various groups' extinctions within the greater process. Some evidence suggests that there were multiple extinction pulses or that the extinction was spread out over a few million years, with a sharp peak in the last million years of the Permian. Statistical analyses of some highly fossiliferous strata in Meishan, Zhejiang Province in southeastern China, suggest that the main extinction was clustered around one peak. Recent research shows that different groups became extinct at different times; for example, while difficult to date absolutely, ostracod and brachiopod extinctions were separated by 670,000 to 1.17 million years. In a well-preserved sequence in east Greenland, the decline of animals is concentrated in a period 10,000 to 60,000 years long, with plants taking an additional several hundred thousand years to show the full impact of the event. An older theory, still supported in some recent papers, is that there were two major extinction pulses 9.4 million years apart, separated by a period of extinctions well above the background level, and that the final extinction killed off only about 80% of marine species alive at that time while the other losses occurred during the first pulse or the interval between pulses. According to this theory one of these extinction pulses occurred at the end of the Guadalupian epoch of the Permian. For example, all but one of the surviving dinocephalian genera died out at the end of the Guadalupian, as did the Verbeekinidae, a family of large-size fusuline foraminifera. The impact of the end-Guadalupian extinction on marine organisms appears to have varied between locations and between taxonomic groups — brachiopods and corals had severe losses. Marine invertebrates suffered the greatest losses during the P–Tr extinction. Evidence of this was found in samples from south China sections at the P–Tr boundary. Here, 286 out of 329 marine invertebrate genera disappear within the final two sedimentary zones containing conodonts from the Permian. The decrease in diversity was probably caused by a sharp increase in extinctions, rather than a decrease in speciation. The extinction primarily affected organisms with calcium carbonate skeletons, especially those reliant on stable CO2 levels to produce their skeletons. These organisms were susceptible to the effects of the ocean acidification that resulted from increased atmospheric CO2. Among benthic organisms the extinction event multiplied background extinction rates, and therefore caused maximum species loss to taxa that had a high background extinction rate (by implication, taxa with a high turnover). The extinction rate of marine organisms was catastrophic. 
Surviving marine invertebrate groups included articulate brachiopods (those with a hinge), which have undergone a slow decline in numbers since the P–Tr extinction; the Ceratitida order of ammonites; and crinoids ("sea lilies"), which very nearly became extinct but later became abundant and diverse. The groups with the highest survival rates generally had active control of circulation, elaborate gas exchange mechanisms, and light calcification; more heavily calcified organisms with simpler breathing apparatuses suffered the greatest loss of species diversity. In the case of the brachiopods, at least, surviving taxa were generally small, rare members of a formerly diverse community. The ammonoids, which had been in a long-term decline for the 30 million years since the Roadian (middle Permian), suffered a selective extinction pulse 10 million years before the main event, at the end of the Capitanian stage. In this preliminary extinction, which greatly reduced disparity (the range of different ecological guilds), environmental factors were apparently responsible. Diversity and disparity fell further until the P–Tr boundary; the extinction here (P–Tr) was non-selective, consistent with a catastrophic initiator. During the Triassic, diversity rose rapidly, but disparity remained low. The range of morphospace occupied by the ammonoids, that is, their range of possible forms, shapes or structures, became more restricted as the Permian progressed. A few million years into the Triassic, the original range of ammonoid structures was once again reoccupied, but the parameters were now shared differently among clades. The Permian had great diversity in insect and other invertebrate species, including the largest insects ever to have existed. The end-Permian is the largest known mass extinction of insects; according to some sources, it is the "only" insect mass extinction. Eight or nine insect orders became extinct and ten more were greatly reduced in diversity. Palaeodictyopteroids (insects with piercing and sucking mouthparts) began to decline during the mid-Permian; these extinctions have been linked to a change in flora. The greatest decline occurred in the Late Permian and was probably not directly caused by weather-related floral transitions. Most fossil insect groups found after the Permian–Triassic boundary differ significantly from those before: of Paleozoic insect groups, only the Glosselytrodea, Miomoptera, and Protorthoptera have been discovered in deposits from after the extinction. The caloneurodeans, monurans, paleodictyopteroids, protelytropterans, and protodonates became extinct by the end of the Permian. In well-documented Late Triassic deposits, fossils overwhelmingly consist of modern fossil insect groups. The geological record of terrestrial plants is sparse and based mostly on pollen and spore studies. Plants are relatively immune to mass extinction, with the impact of all the major mass extinctions "insignificant" at a family level. Even the reduction observed in species diversity (of 50%) may be mostly due to taphonomic processes. However, a massive rearrangement of ecosystems does occur, with plant abundances and distributions changing profoundly and all the forests virtually disappearing; the Palaeozoic flora scarcely survived this extinction. At the P–Tr boundary, the dominant floral groups changed, with many groups of land plants entering abrupt decline, such as "Cordaites" (gymnosperms) and "Glossopteris" (seed ferns). 
Dominant gymnosperm genera were replaced post-boundary by lycophytes—extant lycophytes are recolonizers of disturbed areas. Palynological or pollen studies from East Greenland of sedimentary rock strata laid down during the extinction period indicate dense gymnosperm woodlands before the event. At the same time that the marine invertebrate macrofauna declined, these large woodlands died out and were followed by a rise in diversity of smaller herbaceous plants including Lycopodiophyta, both Selaginellales and Isoetales. Later, other groups of gymnosperms again became dominant but again suffered major die-offs. These cyclical flora shifts occurred a few times over the course of the extinction period and afterwards. These fluctuations of the dominant flora between woody and herbaceous taxa indicate chronic environmental stress resulting in a loss of most large woodland plant species. The successions and extinctions of plant communities did not coincide with the shift in 13C/12C values, but occurred many years after. The recovery of gymnosperm forests took 4–5 million years. No coal deposits are known from the Early Triassic, and those in the Middle Triassic are thin and low-grade. This "coal gap" has been explained in many ways. It has been suggested that new, more aggressive fungi, insects and vertebrates evolved and killed vast numbers of trees. These decomposers themselves suffered heavy losses of species during the extinction and are not considered a likely cause of the coal gap. It could simply be that all coal-forming plants were rendered extinct by the P–Tr extinction, and that it took 10 million years for a new suite of plants to adapt to the moist, acid conditions of peat bogs. Abiotic factors (factors not caused by organisms), such as decreased rainfall or increased input of clastic sediments, may also be to blame. On the other hand, the lack of coal may simply reflect the scarcity of all known sediments from the Early Triassic. Coal-producing ecosystems, rather than disappearing, may have moved to areas where we have no sedimentary record for the Early Triassic. For example, in eastern Australia a cold climate had been the norm for a long period, with a peat mire ecosystem adapted to these conditions. Approximately 95% of these peat-producing plants went "locally" extinct at the P–Tr boundary; coal deposits in Australia and Antarctica disappear significantly "before" the P–Tr boundary. There is enough evidence to indicate that over two thirds of terrestrial labyrinthodont amphibian, sauropsid ("reptile") and therapsid ("proto-mammal") families became extinct. Large herbivores suffered the heaviest losses. All Permian anapsid reptiles died out except the procolophonids (although testudines have "morphologically"-anapsid skulls, they are now thought to have evolved separately from diapsid ancestors). Pelycosaurs died out before the end of the Permian. Too few Permian diapsid fossils have been found to support any conclusion about the effect of the Permian extinction on diapsids (the "reptile" group from which lizards, snakes, crocodilians, and dinosaurs (including birds) evolved). The groups that survived suffered extremely heavy losses of species, and some terrestrial vertebrate groups very nearly became extinct at the end of the Permian. Some of the surviving groups did not persist for long past this period, but others that barely survived went on to produce diverse and long-lasting lineages. 
However, it took 30 million years for the terrestrial vertebrate fauna to fully recover both numerically and ecologically. An analysis of marine fossils from the Permian's final Changhsingian stage found that marine organisms with low tolerance for hypercapnia (high concentration of carbon dioxide) had high extinction rates, and the most tolerant organisms had very slight losses. The most vulnerable marine organisms were those that produced calcareous hard parts (from calcium carbonate) and had low metabolic rates and weak respiratory systems, notably calcareous sponges, rugose and tabulate corals, calcite-depositing brachiopods, bryozoans, and echinoderms; about 81% of such genera became extinct. Close relatives without calcareous hard parts suffered only minor losses, such as sea anemones, from which modern corals evolved. Animals with high metabolic rates, well-developed respiratory systems, and non-calcareous hard parts had negligible losses except for conodonts, in which 33% of genera died out. This pattern is consistent with what is known about the effects of hypoxia, a shortage but not total absence of oxygen. However, hypoxia cannot have been the only killing mechanism for marine organisms. Nearly all of the continental shelf waters would have had to become severely hypoxic to account for the magnitude of the extinction, but such a catastrophe would make it difficult to explain the very selective pattern of the extinction. Mathematical models of the Late Permian and Early Triassic atmospheres show a significant but protracted decline in atmospheric oxygen levels, with no acceleration near the P–Tr boundary. Minimum atmospheric oxygen levels in the Early Triassic are never less than present-day levels and so the decline in oxygen levels does not match the temporal pattern of the extinction. Marine organisms are more sensitive to changes in CO2 (carbon dioxide) levels than terrestrial organisms are for a variety of reasons. CO2 is 28 times more soluble in water than is oxygen. Marine animals normally function with lower concentrations of CO2 in their bodies than land animals, as the removal of CO2 in air-breathing animals is impeded by the need for the gas to pass through the respiratory system's membranes (the lungs' alveoli, tracheae, and the like), even though CO2 diffuses more easily than oxygen. In marine organisms, relatively modest but sustained increases in CO2 concentrations hamper the synthesis of proteins, reduce fertilization rates, and produce deformities in calcareous hard parts. In addition, an increase in CO2 concentration is inevitably linked to ocean acidification, consistent with the preferential extinction of heavily calcified taxa and other signals in the rock record that suggest a more acidic ocean. The decrease in ocean pH is calculated to be up to 0.7 units. It is difficult to analyze extinction and survival rates of land organisms in detail because few terrestrial fossil beds span the Permian–Triassic boundary. Triassic insects are very different from those of the Permian, but a gap in the insect fossil record spans approximately 15 million years from the late Permian to early Triassic. The best-known record of vertebrate changes across the Permian–Triassic boundary occurs in the Karoo Supergroup of South Africa, but statistical analyses have so far not produced clear conclusions. However, analysis of the fossil river deposits of the floodplains indicates a shift from meandering to braided river patterns, indicating an abrupt drying of the climate. 
The climate change may have taken as little as 100,000 years, prompting the extinction of the unique "Glossopteris" flora and its herbivores, followed by the carnivorous guild. End-Permian extinctions did not occur at an instantaneous time horizon; in particular, floral extinction was delayed in time. In the wake of the extinction event, the ecological structure of the present-day biosphere evolved from the stock of surviving taxa. In the sea, the "Modern Evolutionary Fauna" became dominant over elements of the "Palaeozoic Evolutionary Fauna". Typical taxa of shelly benthic faunas were now bivalves, snails, sea urchins and Malacostraca, whereas bony fishes and marine reptiles diversified in the pelagic zone. On land, dinosaurs and mammals arose in the course of the Triassic. The profound change in taxonomic composition was partly a result of the selectivity of the extinction event, which affected some taxa (e.g., brachiopods) more severely than others (e.g., bivalves). However, recovery was also differential between taxa. Some survivors became extinct millions of years after the extinction event without having rediversified (dead clade walking), e.g. the snail family Bellerophontidae, whereas others rose to dominance over geologic time (e.g., bivalves). Marine post-extinction faunas were mostly species-poor and dominated by a few disaster species such as the bivalves "Claraia" and "Unionites". Seafloor communities maintained a comparatively low diversity until the end of the Early Triassic, approximately 4 million years after the extinction event. This slow recovery stands in remarkable contrast with the quick recovery seen in nektonic organisms such as ammonoids, which exceeded pre-extinction diversities as early as two million years after the crisis. The relative delay in the recovery of benthic organisms has been attributed to widespread anoxia, but high abundances of benthic species contradict this explanation. More recent work suggests that the pace of recovery was intrinsically driven by the intensity of competition among species, which drives rates of niche differentiation and speciation. Accordingly, low levels of interspecific competition in seafloor communities that are dominated by primary consumers correspond to slow rates of diversification, and high levels of interspecific competition among nektonic secondary and tertiary consumers correspond to high diversification rates. Whereas most marine communities were fully recovered by the Middle Triassic, global marine diversity reached pre-extinction values no earlier than the Middle Jurassic, approximately 75 million years after the extinction event. Prior to the extinction, about two thirds of marine animals were sessile and attached to the sea floor. During the Mesozoic, only about half of the marine animals were sessile while the rest were free-living. Analysis of marine fossils from the period indicated a decrease in the abundance of sessile epifaunal suspension feeders such as brachiopods and sea lilies and an increase in more complex mobile species such as snails, sea urchins and crabs. Before the Permian mass extinction event, both complex and simple marine ecosystems were equally common. After the recovery from the mass extinction, the complex communities outnumbered the simple communities by nearly three to one, and the increase in predation pressure led to the Mesozoic Marine Revolution. 
Bivalves were fairly rare before the P–Tr extinction but became numerous and diverse in the Triassic, and one group, the rudist clams, became the Mesozoic's main reef-builders. Some researchers think much of the change happened in the 5 million years between the two major extinction pulses. Crinoids ("sea lilies") suffered a selective extinction, resulting in a decrease in the variety of their forms. Their ensuing adaptive radiation was brisk, and resulted in forms possessing flexible arms becoming widespread; motility, predominantly a response to predation pressure, also became far more prevalent. "Lystrosaurus", a pig-sized herbivorous dicynodont therapsid, constituted as much as 90% of some of the earliest Triassic land vertebrate faunas. Smaller carnivorous cynodont therapsids also survived, including the ancestors of mammals. In the Karoo region of southern Africa, the therocephalians "Tetracynodon", "Moschorhinus" and "Ictidosuchoides" survived, but do not appear to have been abundant in the Triassic. Archosaurs (which included the ancestors of dinosaurs and crocodilians) were initially rarer than therapsids, but they began to displace therapsids in the mid-Triassic. In the mid to late Triassic, the dinosaurs evolved from one group of archosaurs, and went on to dominate terrestrial ecosystems during the Jurassic and Cretaceous. This "Triassic Takeover" may have contributed to the evolution of mammals by forcing the surviving therapsids and their mammaliform successors to live as small, mainly nocturnal insectivores; nocturnal life probably forced at least the mammaliforms to develop fur and higher metabolic rates, while losing part of the differential color-sensitive retinal receptors that reptiles and birds preserved. Some temnospondyl amphibians made a relatively quick recovery, in spite of nearly becoming extinct. "Mastodonsaurus" and trematosaurians were the main aquatic and semiaquatic predators during most of the Triassic, some preying on tetrapods and others on fish. Land vertebrates took an unusually long time to recover from the P–Tr extinction; palaeontologist Michael Benton estimated that the recovery was not complete until 30 million years after the extinction, i.e. not until the Late Triassic, in which dinosaurs, pterosaurs, crocodiles, archosaurs, amphibians, and mammaliforms were abundant and diverse. Pinpointing the exact causes of the Permian–Triassic extinction event is difficult, mostly because it occurred over 250 million years ago, and since then much of the evidence that would have pointed to the cause has been destroyed or is concealed deep within the Earth under many layers of rock. The sea floor is also completely recycled every 200 million years by the ongoing process of plate tectonics and seafloor spreading, leaving no useful indications beneath the ocean. Yet, scientists have gathered significant evidence for causes, and several mechanisms have been proposed. The proposals include both catastrophic and gradual processes (similar to those theorized for the Cretaceous–Paleogene extinction event). Any hypothesis about the cause must explain the selectivity of the event, which affected organisms with calcium carbonate skeletons most severely; the long period (4 to 6 million years) before recovery started; and the minimal extent of biological mineralization (despite inorganic carbonates being deposited) once the recovery began. 
Evidence that an impact event may have caused the Cretaceous–Paleogene extinction event (Cretaceous–Tertiary) has led to speculation that similar impacts may have been the cause of other extinction events, including the P–Tr extinction, and thus to a search for evidence of impacts at the times of other extinctions, such as large impact craters of the appropriate age. Reported evidence for an impact event from the P–Tr boundary level includes rare grains of shocked quartz in Australia and Antarctica; fullerenes trapping extraterrestrial noble gases; meteorite fragments in Antarctica; and grains rich in iron, nickel, and silicon, which may have been created by an impact. However, the accuracy of most of these claims has been challenged. For example, quartz from Graphite Peak in Antarctica, once considered "shocked", has been re-examined by optical and transmission electron microscopy. The observed features were concluded to be due not to shock, but rather to plastic deformation, consistent with formation in a tectonic environment such as volcanism. An impact crater on the sea floor would be evidence of a possible cause of the P–Tr extinction, but such a crater would by now have disappeared. As 70% of the Earth's surface is currently sea, an asteroid or comet fragment is now perhaps more than twice as likely to hit ocean as it is to hit land. However, Earth's oldest ocean-floor crust is only 200 million years old, as it is continually being destroyed and renewed by spreading and subduction. Furthermore, craters produced by very large impacts may be masked by extensive flood basalting from below after the crust is punctured or weakened. Yet, subduction should not be entirely accepted as an explanation for the lack of evidence: as with the K–T event, an ejecta blanket stratum rich in siderophilic elements (such as iridium) would be expected in formations from the time. A large impact might have triggered other mechanisms of extinction described below, such as the Siberian Traps eruptions at either an impact site or the antipode of an impact site. The abruptness of an impact also explains why more species did not rapidly evolve to survive, as would be expected if the Permian–Triassic event had been slower and less global than a meteorite impact. Several possible impact craters have been proposed as the site of an impact causing the P–Tr extinction, including the Bedout structure off the northwest coast of Australia and the hypothesized Wilkes Land crater of East Antarctica. An impact has not been proved in either case, and the idea has been widely criticized. The Wilkes Land sub-ice geophysical feature is of very uncertain age, possibly later than the Permian–Triassic extinction. The Araguainha crater in Brazil has been most recently dated to 254.7 ± 2.5 million years ago, overlapping with estimates for the Permo-Triassic boundary. Much of the local rock was oil shale. The estimated energy released by the Araguainha impact is insufficient to have directly caused the global mass extinction, but the colossal local earth tremors would have released huge amounts of oil and gas from the shattered rock. The resulting sudden global warming might have precipitated the Permian–Triassic extinction event. A 2017 paper by Rampino, Rocca and Presser (after a 1992 abstract by Rampino) noted the discovery of a circular gravity anomaly near the Falkland Islands which might correspond to a very large impact crater, a reading supported by seismic and magnetic evidence. 
Estimates for the age of the structure range up to 250 million years old. This would be substantially larger than the well-known Chicxulub impact crater associated with a later extinction. However, Dave McCarthy and colleagues from the British Geological Survey illustrated that the gravity anomaly is not circular and also that the seismic data presented by Rocca, Rampino and Baez Presser did not cross the proposed crater or provide any evidence for an impact crater. The final stages of the Permian had two flood basalt events. A smaller one, the Emeishan Traps in China, occurred at the same time as the end-Guadalupian extinction pulse, in an area close to the equator at the time. The flood basalt eruptions that produced the Siberian Traps constituted one of the largest known volcanic events on Earth and covered over 2,000,000 square kilometres with lava. The date of the Siberian Traps eruptions and the extinction event are in good agreement. The Emeishan and Siberian Traps eruptions may have caused dust clouds and acid aerosols, which would have blocked out sunlight and thus disrupted photosynthesis both on land and in the photic zone of the ocean, causing food chains to collapse. The eruptions may also have caused acid rain when the aerosols washed out of the atmosphere. That may have killed land plants and molluscs and planktonic organisms which had calcium carbonate shells. The eruptions would also have emitted carbon dioxide, causing global warming. When all of the dust clouds and aerosols washed out of the atmosphere, the excess carbon dioxide would have remained and the warming would have proceeded without any mitigating effects. The Siberian Traps had unusual features that made them even more dangerous. Pure flood basalts produce fluid, low-viscosity lava and do not hurl debris into the atmosphere. It appears, however, that 20% of the output of the Siberian Traps eruptions was pyroclastic (consisting of ash and other debris thrown high into the atmosphere), increasing the short-term cooling effect. The basalt lava erupted or intruded into carbonate rocks and into sediments that were in the process of forming large coal beds, both of which would have emitted large amounts of carbon dioxide, leading to stronger global warming after the dust and aerosols settled. In January 2011, a team led by Stephen Grasby of the Geological Survey of Canada, Calgary, reported evidence that volcanism caused massive coal beds to ignite, possibly releasing more than 3 trillion tons of carbon. The team found ash deposits in deep rock layers near what is now the Buchanan Lake Formation. According to their article, "coal ash dispersed by the explosive Siberian Trap eruption would be expected to have an associated release of toxic elements in impacted water bodies where fly ash slurries developed... Mafic megascale eruptions are long-lived events that would allow significant build-up of global ash clouds." In a statement, Grasby said, "In addition to these volcanoes causing fires through coal, the ash it spewed was highly toxic and was released in the land and water, potentially contributing to the worst extinction event in earth history." In 2013, a team led by Q.Y. Yang reported that the total amounts of important volatiles emitted from the Siberian Traps were 8.5 × 10⁷ Tg CO₂, 4.4 × 10⁶ Tg CO, 7.0 × 10⁶ Tg H₂S and 6.8 × 10⁷ Tg SO₂; these data support the popular notion that the end-Permian mass extinction on the Earth was caused by the emission of enormous amounts of volatiles from the Siberian Traps into the atmosphere. 
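For a sense of the scale of those figures, the reported CO₂ mass can be converted into gigatonnes of carbon using nothing beyond the definition of the teragram (1 Tg = 10¹² g = 10⁶ t) and the mass fraction of carbon in CO₂ (12/44); this is an illustrative back-of-the-envelope conversion, not a result from the cited study:

$$ 8.5 \times 10^{7}\ \mathrm{Tg\ CO_2} = 8.5 \times 10^{13}\ \mathrm{t\ CO_2} \approx 8.5 \times 10^{4}\ \mathrm{Gt\ CO_2}, \qquad m_{\mathrm{C}} \approx \frac{12}{44} \times 8.5 \times 10^{4}\ \mathrm{Gt} \approx 2.3 \times 10^{4}\ \mathrm{Gt\ C}. $$

On these numbers, the carbon carried by the Siberian Traps volatiles would amount to several times the roughly 3,000 Gt (3 trillion tons) attributed above to ignited coal beds.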
In 2015, evidence and a timeline indicated the extinction was caused by events in the large igneous province of the Siberian Traps. Scientists have found worldwide evidence of a swift decrease of about 1% in the ¹³C/¹²C isotope ratio in carbonate rocks from the end-Permian. This is the first, largest, and most rapid of a series of negative and positive excursions (decreases and increases in the ¹³C/¹²C ratio) that continues until the isotope ratio abruptly stabilised in the middle Triassic, followed soon afterwards by the recovery of calcifying life forms (organisms that use calcium carbonate to build hard parts such as shells). A variety of factors may have contributed to this drop in the ¹³C/¹²C ratio, but most turn out to be insufficient to account fully for the observed amount. Other hypotheses include mass oceanic poisoning releasing vast amounts of CO₂ and a long-term reorganisation of the global carbon cycle. Prior to consideration of the inclusion of roasting carbonate sediments by volcanism, the only proposed mechanism sufficient to cause a global 1% reduction in the ¹³C/¹²C ratio was the release of methane from methane clathrates. Carbon-cycle models confirm that it would have had enough effect to produce the observed reduction. Methane clathrates, also known as methane hydrates, consist of methane molecules trapped in cages of water molecules. The methane, produced by methanogens (microscopic single-celled organisms), has a ¹³C/¹²C ratio about 6.0% below normal (δ¹³C −6.0%). At the right combination of pressure and temperature, it gets trapped in clathrates fairly close to the surface of permafrost and in much larger quantities at continental margins (continental shelves and the deeper seabed close to them). Oceanic methane hydrates are usually found buried in sediments where the seawater is at least about 300 metres deep, and they occur within roughly the uppermost kilometre or so of the sediment column. The area covered by lava from the Siberian Traps eruptions is about twice as large as was originally thought, and most of the additional area was shallow sea at the time. The seabed probably contained methane hydrate deposits, and the lava caused the deposits to dissociate, releasing vast quantities of methane. A vast release of methane might cause significant global warming since methane is a very powerful greenhouse gas. Strong evidence suggests the global temperatures increased by about 6 °C (10.8 °F) near the equator and therefore by more at higher latitudes, as indicated by a sharp decrease in oxygen isotope ratios (¹⁸O/¹⁶O) and by the extinction of the "Glossopteris" flora ("Glossopteris" and plants that grew in the same areas), which needed a cold climate, and its replacement by floras typical of lower paleolatitudes. However, the pattern of isotope shifts expected to result from a massive release of methane does not match the patterns seen throughout the early Triassic. Not only would such a cause require the release of five times as much methane as postulated for the PETM, but it would also have to be reburied at an unrealistically high rate to account for the rapid increases in the ¹³C/¹²C ratio (episodes of high positive δ¹³C) throughout the early Triassic before it was released again several times. 
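The carbon-isotope excursions discussed above are quoted in percent; for readers unfamiliar with the convention, it may help to recall the standard delta notation of isotope geochemistry, given here as a textbook definition rather than anything specific to the studies cited:

$$ \delta^{13}\mathrm{C} = \left( \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{sample}}}{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{‰}. $$

In this notation, the global 1% (10‰) drop in the ¹³C/¹²C ratio is a negative δ¹³C excursion of about 10‰, while biogenic methane, about 6.0% below the standard ratio, carries δ¹³C ≈ −60‰; it is this strongly negative signature that allows a comparatively modest mass of methane to shift the whole-ocean value appreciably.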
Evidence for widespread ocean anoxia (severe deficiency of oxygen) and euxinia (presence of hydrogen sulfide) is found from the Late Permian to the Early Triassic. Throughout most of the Tethys and Panthalassic Oceans, evidence for anoxia, including fine laminations in sediments, small pyrite framboids, high uranium/thorium ratios, and biomarkers for green sulfur bacteria, appears at the extinction event. However, at some sites, including Meishan, China, and eastern Greenland, the evidence for anoxia precedes the extinction. Biomarkers for green sulfur bacteria, such as isorenieratane, the diagenetic product of isorenieratene, are widely used as indicators of photic zone euxinia because green sulfur bacteria require both sunlight and hydrogen sulfide to survive. Their abundance in sediments from the P–T boundary indicates that hydrogen sulfide was present even in shallow waters. This spread of toxic, oxygen-depleted water would have devastated marine life, causing widespread die-offs. Models of ocean chemistry suggest that anoxia and euxinia were closely associated with hypercapnia (high levels of carbon dioxide). This suggests that poisoning from hydrogen sulfide, anoxia, and hypercapnia acted together as a killing mechanism. Hypercapnia best explains the selectivity of the extinction, but anoxia and euxinia probably contributed to the high mortality of the event. The persistence of anoxia through the Early Triassic may explain the slow recovery of marine life after the extinction. Models also show that anoxic events can cause catastrophic hydrogen sulfide emissions into the atmosphere (see below). The sequence of events leading to anoxic oceans may have been triggered by carbon dioxide emissions from the eruption of the Siberian Traps. In that scenario, warming from the enhanced greenhouse effect would reduce the solubility of oxygen in seawater, causing the concentration of oxygen to decline. Increased weathering of the continents due to warming and the acceleration of the water cycle would increase the riverine flux of phosphate to the ocean. The phosphate would have supported greater primary productivity in the surface oceans. The increase in organic matter production would have caused more organic matter to sink into the deep ocean, where its respiration would further decrease oxygen concentrations. Once anoxia became established, it would have been sustained by a positive feedback loop, because deep-water anoxia tends to increase the recycling efficiency of phosphate, leading to even higher productivity. A severe anoxic event at the end of the Permian would have allowed sulfate-reducing bacteria to thrive, causing the production of large amounts of hydrogen sulfide in the anoxic ocean. Upwelling of this water may have released massive hydrogen sulfide emissions into the atmosphere, poisoning terrestrial plants and animals and severely weakening the ozone layer, exposing much of the life that remained to fatal levels of UV radiation. Indeed, biomarker evidence for anaerobic photosynthesis by Chlorobiaceae (green sulfur bacteria) from the Late Permian into the Early Triassic indicates that hydrogen sulfide did upwell into shallow waters, because these bacteria are restricted to the photic zone and use sulfide as an electron donor. The hypothesis has the advantage of explaining the mass extinction of plants, which should otherwise have thrived in an atmosphere with a high level of carbon dioxide, and whose death would have added to the methane levels. 
Fossil spores from the end-Permian further support the theory: many show deformities that could have been caused by ultraviolet radiation, which would have been more intense after hydrogen sulfide emissions weakened the ozone layer. In the mid-Permian (during the Kungurian age of the Permian's Cisuralian epoch), Earth's major continental plates joined, forming a supercontinent called Pangaea, which was surrounded by the superocean, Panthalassa. Oceanic circulation and atmospheric weather patterns during the mid-Permian produced seasonal monsoons near the coasts and an arid climate in the vast continental interior. As the supercontinent formed, the ecologically diverse and productive coastal areas shrank. The shallow aquatic environments were eliminated, exposing formerly protected organisms of the rich continental shelves to increased environmental volatility. Pangaea's formation depleted marine life at near catastrophic rates. However, Pangaea's effect on land extinctions is thought to have been smaller. In fact, the advance of the therapsids and the increase in their diversity are attributed to the late Permian, when Pangaea's global effect was thought to have peaked. While Pangaea's formation certainly initiated a long period of marine extinction, its impact on the "Great Dying" and the end of the Permian is uncertain. John Gribbin argues that the Solar System last passed through a spiral arm of the Milky Way around 250 million years ago and that the resultant dusty gas clouds may have caused a dimming of the Sun, which combined with the effect of Pangaea to produce an ice age. A hypothesis published in 2014 posits that a genus of anaerobic methanogenic archaea known as "Methanosarcina" was responsible for the event. Three lines of evidence suggest that these microbes acquired a new metabolic pathway via gene transfer at about that time, enabling them to efficiently metabolize acetate into methane. That would have led to their exponential reproduction, allowing them to rapidly consume vast deposits of organic carbon that had accumulated in the marine sediment. The result would have been a sharp buildup of methane and carbon dioxide in the Earth's oceans and atmosphere, in a manner that may be consistent with the ¹³C/¹²C isotopic record. Massive volcanism facilitated this process by releasing large amounts of nickel, a scarce metal which is a cofactor for enzymes involved in producing methane. On the other hand, in the canonical Meishan sections, the nickel concentration increases somewhat after the δ¹³C values have begun to fall. Possible causes supported by strong evidence appear to describe a sequence of catastrophes, each worse than the last: the Siberian Traps eruptions were bad enough alone, but because they occurred near coal beds and the continental shelf, they also triggered very large releases of carbon dioxide and methane. The resultant global warming may have caused perhaps the most severe anoxic event in the oceans' history: according to this theory, the oceans became so anoxic that anaerobic sulfur-reducing organisms dominated the chemistry of the oceans and caused massive emissions of toxic hydrogen sulfide. However, there may be some weak links in this chain of events: the changes in the ¹³C/¹²C ratio expected to result from a massive release of methane do not match the patterns seen throughout the early Triassic; and the types of oceanic thermohaline circulation that may have existed at the end of the Permian are not likely to have supported deep-sea anoxia.
Porter Blanchard Porter George Blanchard (1886–1973) was an American silversmith living and working in Pacoima, California. He is considered to be part of the Arts and Crafts Movement. Blanchard learned the trade of the silversmith from his father, George Porter Blanchard, in Gardner, Massachusetts. In 1923, Blanchard moved to Burbank, California, where he established a studio for silversmithing. Between the 1930s and 1950, he operated a shop in Hollywood. He then worked from his home in Pacoima from the 1940s until his death in 1973. His daughter Alice Blanchard married Lewis Wise, who conducted business as Porter Blanchard Silversmiths in Calabasas, California. After 1955, all Porter Blanchard flatware was made at the Calabasas shop, while the holloware was made at Blanchard's Pacoima home. His daughter Rebecca married Allan Adler, who continued designing as a silversmith in the Arts and Crafts tradition. Blanchard was a member of the Boston Society of Arts and Crafts and was awarded their title of medalist in 1944. Many of his papers, including photographs of his shop, are collected in the Archives of American Art at the Smithsonian Institution in Washington, D.C. They were donated to the Archives by his daughters, Rebecca Adler and Alice E. Wise. Blanchard's works are in the collections of various museums, including the Cooper-Hewitt National Design Museum, the Los Angeles County Museum of Art, and the Oakland Museum of California.
Punjab, Pakistan Punjab is Pakistan's most populous province, with an estimated population of 110,012,442 as of 2017. Forming the bulk of the transnational Punjab region, it is bordered by the Pakistani provinces of Sindh, Balochistan, and Khyber Pakhtunkhwa, the enclave of Islamabad, and Azad Kashmir. It also shares borders with the Indian states of Punjab, Rajasthan, and the Indian-administered territory of Jammu and Kashmir. The capital is Lahore, a cultural, historical, economic and cosmopolitan centre of Pakistan where the country's cinema industry, and much of its fashion industry, are based. Punjab is also the world's fifth-most populous subnational entity, and the most populous outside China or India. Punjab has been inhabited since ancient times. The Indus Valley Civilization, dating to 2600 BCE, was first discovered at Harappa. Punjab features heavily in the Hindu epic poem the Mahabharata, and is home to Taxila, site of what is considered by many to be the oldest university in the world. In 326 BCE, Alexander the Great defeated King Porus at the Battle of the Hydaspes near Mong, Punjab. The Umayyad empire conquered Punjab in the 8th century CE. In the subsequent centuries, Punjab was invaded and conquered by the Ghaznavids, Ghurids, Delhi Sultanate, Mughals, Durranis and the Sikhs. Punjab reached the height of its splendour during the reign of the Mughal Empire, which for a time ruled from Lahore. During the 18th century, Nader Shah's invasion of the Mughal Empire weakened Mughal authority in the Punjab, and the region fell into chaos. The Durrani Afghans under Ahmad Shah Durrani wrested control of Punjab but lost it to the Sikhs after a successful rebellion which allowed Sikh armies to claim Lahore in 1759. The Sikh Empire was established in 1799 under the rule of Ranjit Singh, with his capital based in Lahore, until its defeat by the British. Punjab was central to the independence movements of both India and Pakistan, with Lahore being the site of both the Declaration of Indian Independence and the resolution calling for the establishment of Pakistan. The province was formed when the Punjab province of British India was divided along religious boundaries in 1947 by the Radcliffe Line after Partition. Punjab is Pakistan's most industrialised province, with the industrial sector making up 24% of the province's gross domestic product. Punjab is known in Pakistan for its relative prosperity, and has the lowest rate of poverty amongst all Pakistani provinces. A clear divide is present between the northern and southern portions of the province: poverty rates in prosperous northern Punjab are amongst the lowest in Pakistan, while parts of southern Punjab are amongst the most impoverished. Punjab is also one of South Asia's most urbanised regions, with approximately 40% of people living in urban areas. Its human development index rankings are high relative to the rest of Pakistan. The province has been strongly influenced by Sufism, with numerous Sufi shrines spread across Punjab which attract millions of devotees annually. The founder of the Sikh faith, Guru Nanak, was born in the Punjab town of Nankana Sahib near Lahore. Punjab is also the site of the Katasraj Temple, which features prominently in Hindu mythology. Several UNESCO World Heritage Sites are located in Punjab, including the Shalimar Gardens, the Lahore Fort, the archaeological excavations at Taxila, and the Rohtas Fort. 
Punjab was referred to in the Rig Veda as the Sapta Sindhu, meaning the "land of the seven rivers". The region was known to the ancient Greeks as "Pentapotamia", meaning the "region of five rivers", while the Sanskrit name for the region, as mentioned in the Ramayana and Mahabharata, was "Panchanada", which means "land of the five rivers". The Persians later referred to the region as "Punjab" after the Muslim conquests, which also means "land of the five rivers". The word "Punjab" was formally introduced in the early 17th century CE as a compound of the Persian words "panj" ("five") and "āb" ("water"), thus meaning the (land of) "five rivers", similar in meaning to the Sanskrit and Greek names for the region. The five rivers, namely the Chenab, Jhelum, Ravi, Beas and Sutlej, flow via the Panjnad River into the Indus River and eventually into the Arabian Sea. Of the five great rivers of Punjab, four course through Pakistan's Punjab province. Due to its location, the Punjab region came under constant attack and witnessed centuries of foreign invasions by the Persians, Greeks, Scythians, Turks and Afghans. The northwestern part of South Asia, including Punjab, was repeatedly invaded or conquered by various mighty foreign armies throughout history, including those of Alexander the Great, Genghis Khan, Timur, Nader Shah and others. The oldest evidence of life in Pakistan has been found in the Soan River valley. It was here that some of the earliest signs of humans were discovered during the excavations of prehistoric mounds. Tools up to two million years old have been recovered in the Potohar Plateau. Punjab was part of the Indus Valley Civilization more than 4,000 years ago. The main site in Punjab was the city of Harappa. The Indus Valley Civilization spanned much of what is today Pakistan and eventually evolved into the Indo-Aryan civilisation. The Vedic civilisation flourished along the length of the Indus River. During the times of the Mahabharata, Punjab was known as Panchanada. Although the archaeological site at Harappa was partially damaged in 1857 when engineers constructing the Lahore–Multan railroad used brick from the Harappa ruins for track ballast, an abundance of artefacts has nevertheless been found. Punjab was part of both classical Western Eurasian empires, including the Achaemenid, Macedonian, Kushan, Greco-Bactrian, Indo-Greek and Hindu Shahi empires, and Indo-Gangetic (North Indian) empires such as those of the Maurya and Gupta. It was also for a period of time part of the Gurjara-Pratihara empire. Agriculture flourished and trading cities (such as Multan and Lahore) grew in wealth. The city of Taxila, founded around 1000 BCE, was reputed to house one of the earliest universities in the world. One of its instructors was the Mauryan statesman and philosopher Chanakya. Taxila was a major centre of political control, intellectual discourse and trade between the Hellenistic kingdoms and the Maurya Empire. Taxila is a UNESCO World Heritage Site, valued for its archaeological and religious history. Gandhāra was an ancient kingdom in the Peshawar valley and on the Potohar plateau of northwestern Pakistan, with its capital at Taxila. Gandhara existed from the time of the Rigveda (c. 1500–1200 BCE); it is also mentioned in the Zoroastrian Avesta as "Vaēkərəta", the sixth most beautiful place on earth, created by Ahura Mazda. 
Gandhara was conquered by the Achaemenid Empire in the 6th century BCE. It was later conquered by Alexander the Great in 327 BCE, and subsequently became part of the Maurya Empire and then the Indo-Greek Kingdom. The name of the Gandhāris is attested in the Rigveda (RV 1.126.7) and in ancient inscriptions dating back to Achaemenid Persia. The primary cities of Gandhara were Puruṣapura (Peshawar), Takṣaśilā (Taxila), and Pushkalavati (Charsadda). Gandhara's language was a Prakrit or "Middle Indo-Aryan" dialect, usually called Gāndhārī. The language used the Kharosthi script, which died out about the 4th century. However, Punjabi, Kohistani, and Hindko are derived from the Prakrit dialects that were spoken in Gandhara and its surrounding areas. The Achaemenid Persian empire included Gandhara and western Punjab, administered through the satrapies of Gandāra, Hindush and Sattagydia. Having conquered Drangiana, Arachosia, Gedrosia and Seistan in ten days, Alexander the Great (known in Urdu as 'Sikander-e-Azam') crossed the Hindu Kush and was thus fully informed of the magnificence of the country and its riches in gold, gems and pearls. However, Alexander had to encounter and reduce the tribes on the border of Punjab before entering the luxuriant plains. Having taken a northeasterly direction, he marched against the Aspii (mountaineers), who offered vigorous resistance but were subdued. Alexander then marched through Ghazni, blockaded Magassa, and then marched to Ora and Bazira. Turning to the northeast, Alexander marched to Pucela, the capital of the district now known as Pakhli. He entered Western Punjab, where the ancient city of Nysa (at the site of modern-day Mong) was situated. A coalition was formed against Alexander by the Cathians, the people of Multan, who were very skilful in war. Alexander invested many troops, eventually killing 17,000 Cathians in this battle, and the city of Sagala (present-day Sialkot) was razed to the ground. The Battle of the Hydaspes was fought astride the Jhelum River in western Punjab against the regional chieftain Porus, and the Siege of the Malli Tribe occurred at the confluence of the Indus and Hydaspes Rivers near modern Multan (during which Alexander suffered a near-fatal arrow wound). Alexander left Punjab in 326 BCE and continued to campaign down the course of the Indus River in modern-day Sindh and Baluchistan. The Indo-Greek Kingdom or Graeco-Indian Kingdom was a Hellenistic kingdom covering most of the Punjab. The kingdom was founded when the Graeco-Bactrian king Demetrius invaded the subcontinent early in the 2nd century BCE. The city of Sirkap, founded by Demetrius, combines Greek and Indian influences without signs of segregation between the two cultures. The most famous Indo-Greek ruler was Menander (Milinda). He had his capital at Sagala in the Punjab (present-day Sialkot). The Indo-Greeks were involved with local faiths, particularly with Buddhism, but also with Hinduism. Buddhism flourished under the Indo-Greek kings, and their rule, especially that of Menander, has been remembered as benevolent. The Indo-Scythian king Maues invaded Indo-Greek territories in Punjab and established an Indo-Scythian empire. Maues first conquered Gandhara and Taxila around 80 BCE, but his kingdom disintegrated after his death. The Indo-Scythians ultimately established a kingdom in northwestern South Asia, based near Taxila, with two great satraps, one in Mathura in the east, and one in Surastrene (Gujarat) in the southwest. 
The Indo-Scythians seem to have been followers of Buddhism, and many of their practices apparently continued those of the Indo-Greeks. The Indo-Parthian Kingdom was ruled by the Gondopharid dynasty with its capital at Taxila, Punjab. Gondophares, founder of the Indo-Parthian kingdom, was a ruler of Seistan in what is today eastern Iran, probably a vassal or relative of the Apracarajas. Around 20–10 BCE, he made conquests in the former Indo-Scythian kingdom, perhaps after the death of the important ruler Azes. Gondophares became the ruler of areas comprising Arachosia, Seistan, Sindh, Punjab, and the Kabul valley. The temple of Jandial at Taxila is usually interpreted as a Zoroastrian fire temple from the period of the Indo-Parthians. The Kushan Empire was a syncretic empire, formed by the Yuezhi, in the Bactrian territories in the early 1st century. Around 75 CE, under the leadership of Kujula Kadphises, they gained control of Gandhara and other parts of what is now Pakistan. The Kushan period is considered the Golden Period of Gandhara. Gandhara's culture peaked during the reign of the Kushan ruler Kanishka the Great (128–151). The cities of Taxila (Takṣaśilā) at Sirsukh and Peshawar were built. Kanishka was a great patron of the Buddhist faith; Buddhism spread to Central Asia and the Far East across Bactria and Sogdia, where his empire met the Han Empire of China. Buddhist art spread from Gandhara to other parts of Asia. Under Kanishka, Gandhara became a holy land of Buddhism and attracted Chinese pilgrims eager to view the monuments associated with many Jatakas. The Hephthalite Huns captured Gandhara around 451; they did not adopt Buddhism, but in fact "perpetrated frightful massacres", and Mihirakula became a "terrible persecutor" of the Buddhist religion. During their rule, Hinduism revived itself and the Buddhist civilisation in Gandhara declined. The Punjabis followed different faiths throughout history, mainly Buddhism and later a non-Brahmanical form of Hinduism; however, Islam had the biggest cultural impact on the region. Parts of Punjab first came into contact with Islam after the Umayyad Caliphate commander Muhammad bin Qasim defeated Raja Dahir in 712. Punjab later became a part of different Muslim empires consisting mostly of Afghans and Turkic peoples in co-operation with some local Punjabi tribes and others. In the 11th century, during the reign of the Ghaznavid ruler Mahmud of Ghazni, Punjab was for the first time annexed in its entirety by a Muslim ruler and became an important region of learning and culture in the eastern Islamic world. Middle Eastern and Central Asian soldiers, merchants, scholars, administrators, architects, theologians and Sufis migrated from the rest of the Islamic world to the expanding sultanates in the Indian subcontinent; migrations increased rapidly after the Mongol invasions, as the sultanates began to be seen as a safe haven and refuge for Muslims fleeing Mongol persecution. The Punjab region was therefore gradually Islamized due to Sufi saints whose dargahs dot the landscape of Punjab; one such saint was Data Ganj Baksh, an 11th-century Iranian Sayyid mystic from Ghazni now buried at Data Darbar in Lahore. Another equally important saint for Punjabis was the Punjabi Muslim Fariduddin Ganjshakar, known reverentially as Bābā Farīd or Shaikh Farīd. He is widely considered to be one of the earliest proponents of Punjabi as a language of poetry. Farīd laid the basis for a vernacular Punjabi literature and ultimately a Punjabi identity. 
The area subsequently came under various other Islamic rulers until finally becoming part of the Mughal Empire in 1526. The Punjab region rose to significance in the Mughal Empire when Lahore became the capital for the royal family in 1584, a legacy seen today in the rich display of Mughal architecture all over modern-day Punjab, Pakistan. The Mughals left an indelible mark on the landscape of Punjab from 1556 to 1739 by commissioning the construction of great gardens, forts, tombs, baths and mosques such as the Shalimar Gardens, Lahore Fort, Tomb of Jahangir, Tomb of Nur Jahan, Shahi Hammam, Akbari Sarai, Wazir Khan Mosque, and the Badshahi Mosque, all situated in Lahore, as well as architectural projects such as Hiran Minar and others elsewhere in Punjab. Akbar established two of his original twelve subahs (imperial top-level provinces) in Punjab. After Nader Shah's invasion of the Mughal empire in 1739, Mughal authority significantly weakened, allowing swaths of what is now Punjab to be invaded in 1747 by Nader Shah's commander Ahmad Shah Durrani, who was by then ruler of the Afghan Durrani Empire. In 1758 Raghunath Rao, the general of the Hindu Maratha Empire, conquered Lahore and Attock. Timur Shah Durrani, the son and viceroy of Ahmad Shah Durrani, was driven out of Punjab. Lahore, Multan, Dera Ghazi Khan, Kashmir and other subahs (ex-Mughal provinces) on the southern and eastern sides of Peshawar were under Maratha rule for the most part. In Punjab and Kashmir, the Marathas were now major players. The Third Battle of Panipat took place in 1761; Ahmad Shah Abdali defeated the Marathas and reversed their gains in the regions of Punjab and Kashmir by re-consolidating control over them. In the mid-fifteenth century, the religion of Sikhism was born. During the Mughal empire, many Hindus increasingly adopted Sikhism. The Sikhs became a formidable military force after the death of Aurangzeb in 1707 and challenged the Mughals and later the Durrani Afghans for power in Punjab. After fighting Ahmad Shah Durrani in the later eighteenth century, the Sikh Misls took control of Punjab, and its capital Lahore was captured by the Bhangi Misl. In 1799 Ranjit Singh, ruler of the Sukerchakia Misl, defeated the Bhangi Misl and captured Lahore, thereby proclaiming himself "Maharaja of Punjab" at the age of 21. Ranjit Singh made Lahore his capital and formed a sophisticated Sikh Empire which lasted from 1799 to 1849. Ranjit Singh modernized his Sikh Khalsa army by using Franco-British principles and by employing veterans of the Napoleonic Wars to train the infantry in European style. Ranjit Singh expanded his empire so that by his death in 1839 it included most of Punjab, Khyber Pakhtunkhwa and Kashmir. Ranjit Singh was not without opponents who challenged his authority in the regions he had conquered. He faced huge opposition from Nawab Muzaffar Khan, Azim Khan, Syed Ahmad Barelvi and Mir Painda Khan. In 1818 Nawab Muzaffar Khan was killed by the Sikhs at the Battle of Multan after putting up stout resistance for many years. Azim Khan was the governor of Kashmir from 1812 until 1819, when Ranjit Singh captured it for himself. In 1823 Azim Khan took control of Peshawar and, with support from Pashtun tribesmen, faced off against the encroaching Khalsa army in the Battle of Nowshera. He abandoned his troops while they regrouped to continue fighting, and they fought on until they were defeated. Azim Khan retreated to Kabul, where he died shortly thereafter of grief. 
Syed Ahmad Barelvi was an Indian Muslim who declared jihad against the Sikhs, garnering support from local Pashtun tribesmen, and attempted to create an Islamic state with strict enforcement of Sharia. Beginning in 1821, Syed Ahmad Barelvi spent two years organizing popular and material support for his Punjab campaign. In December 1826 Syed Ahmad Barelvi and his followers clashed with Sikh troops at Akora Khattak, with no decisive result. Barelvi's movement weakened after infighting with his Pashtun followers, and in a major battle near the town of Balakot in 1831, Syed Ahmad Barelvi and Shah Ismail Shaheed, with their volunteer Muslims, were defeated and killed by the Sikh army. Only Mir Painda Khan, at Tanawal in Hazara, was able to maintain his independence from the Sikh Empire. From about 1813 he waged a series of rebellions against the Sikhs which continued throughout his lifetime, inflicting defeats on the Sikhs while also losing territory to them, before being poisoned in 1844. James Abbott, British officer and deputy commissioner at Hazara in 1851, described Mir Painda Khan as "a chief renowned on the border, a wild and energetic man who was never subjugated by the Sikhs". Maharaja Ranjit Singh's death in the summer of 1839 brought political chaos, and the subsequent battles of succession and bloody infighting between the factions at court weakened the state. Relationships with neighbouring British territories then broke down, starting the First Anglo-Sikh War; this led to a British official being resident in Lahore and the annexation in 1846 of territory south of the Sutlej to British India. After the Second Anglo-Sikh War in 1849, the Sikh Empire became the last territory to be merged into British India. In Jhelum, 35 British soldiers of the HM XXIV regiment were killed by the local resistance during the Indian Rebellion of 1857. In 1947 the Punjab province of British India was divided along religious lines into West Punjab and East Punjab. Western Punjab was assimilated into the new country of Pakistan, while East Punjab became a part of modern-day India. This led to massive rioting as both sides committed atrocities against fleeing refugees. The part of the Punjab now in Pakistan once formed a major region of British Punjab and, in addition to its Muslim majority, was home to a large minority population of Punjabi Hindus and Sikhs up to 1947. Migration between Eastern and Western Punjab was continuous before independence. By the 1900s Western Punjab was predominantly Muslim and supported the Muslim League and the Pakistan Movement. After independence, the minority Hindus and Sikhs migrated to India, while Muslim refugees from India settled in Western Punjab and across Pakistan, having fled pogroms that almost entirely depopulated Eastern Punjab of its Muslim population. Since the 1950s, Punjab has industrialised rapidly. New factories were established in Lahore, Sargodha, Multan, Gujrat, Gujranwala, Sialkot, Wah and Rawalpindi. Agriculture continues to be the largest sector of Punjab's economy. The province is the breadbasket of the country as well as home to the largest ethnic group in Pakistan, the Punjabis. Unlike in neighbouring India, there was no large-scale redistribution of agricultural land, and as a result most rural areas are dominated by a small set of feudalistic land-owning families. In the 1950s there was tension between the eastern and western halves of Pakistan. To address the situation, a new formula resulted in the abolition of Punjab's province status in 1955. 
It was merged into the single province of West Pakistan. In 1972, after East Pakistan seceded and became Bangladesh, Punjab again became a province. Punjab witnessed major battles between the armies of India and Pakistan in the wars of 1965 and 1971. Since the 1990s Punjab has hosted several key sites of Pakistan's nuclear program, such as Kahuta. It also hosts major military bases, such as those at Sargodha and Rawalpindi. The peace process between India and Pakistan, which began in earnest in 2004, has helped pacify the situation. Trade and people-to-people contacts through the Wagah border are now starting to become common. Indian Sikh pilgrims visit holy sites such as Nankana Sahib. Starting in the 1980s, large numbers of Punjabis migrated to the Middle East, Britain, Spain, Canada and the United States for economic opportunities, forming the large Punjabi diaspora and creating growing economic ties between Punjab and these countries. Punjab is Pakistan's second largest province by area, after Balochistan, with an area of 205,344 square kilometres. It occupies 25.8% of the total landmass of Pakistan. Punjab province is bordered by Sindh to the south, the province of Balochistan to the southwest, the province of Khyber Pakhtunkhwa to the west, and the Islamabad Capital Territory and Azad Kashmir in the north. Punjab borders Jammu and Kashmir in the north, and the Indian states of Punjab and Rajasthan to the east. The capital and largest city is Lahore, which was the historical capital of the wider Punjab region. Other important cities include Faisalabad, Rawalpindi, Gujranwala, Sargodha, Multan, Sialkot, Bahawalpur, Gujrat, Sheikhupura, Jhelum and Sahiwal. The undivided Punjab region was home to six rivers, of which five flow through Pakistan's Punjab province. From west to east, these rivers are the Indus, Jhelum, Chenab, Ravi and Sutlej. It is the nation's only province that touches every other province; it also surrounds the federal enclave of the national capital city at Islamabad. In the acronym "P-A-K-I-S-T-A-N", the P is for "Punjab". Punjab's landscape consists mostly of the fertile alluvial plains of the Indus River and its four major tributaries in Pakistan, the Jhelum, Chenab, Ravi, and Sutlej rivers, which traverse Punjab north to south; the fifth of the "five waters" of Punjab, the Beas River, lies exclusively in the Indian state of Punjab. The landscape is amongst the most heavily irrigated on earth, and canals can be found throughout the province. Punjab also includes several mountainous regions, including the Sulaiman Mountains in the southwest part of the province, the Margalla Hills in the north near Islamabad, and the Salt Range, which divides the most northerly portion of Punjab, the Pothohar Plateau, from the rest of the province. Sparse deserts can be found in southern Punjab near the border with Rajasthan and near the Sulaiman Range. Punjab also contains part of the Thal and Cholistan deserts. In the south, Punjab's elevation reaches its highest point near the hill station of Fort Munro in Dera Ghazi Khan. Most areas in Punjab experience extreme weather, with foggy winters often accompanied by rain. By mid-February the temperature begins to rise; springtime weather continues until mid-April, when the summer heat sets in. The onset of the southwest monsoon is anticipated to reach Punjab by May, but since the early 1970s the weather pattern has been irregular. The spring monsoon has either skipped over the area or has caused it to rain so hard that floods have resulted. June and July are oppressively hot. 
Although official estimates rarely place the temperature above 46 °C, newspaper sources claim that it reaches 51 °C and regularly carry reports about people who have succumbed to the heat. Heat records were broken in Multan in June 1993, when the mercury was reported to have risen to 54 °C. In August the oppressive heat is punctuated by the rainy season, referred to as "barsat", which brings relief in its wake. The hardest part of the summer is then over, but cooler weather does not come until late October. The province recently experienced one of its coldest winters in the last 70 years. Temperatures in Punjab typically range from −2 °C to 45 °C, but can reach 50 °C (122 °F) in summer and can fall to −10 °C in winter. Climatically, Punjab has three major seasons: the hot summer, the rainy season ("barsat"), and the cool winter. Weather extremes are notable from the hot and barren south to the cool hills of the north. The foothills of the Himalayas are found in the extreme north as well, and feature a much cooler and wetter climate, with snowfall common at higher altitudes. The province is home to over half the population of Pakistan, and is the world's fifth-most populous subnational entity, and the most populous outside China or India. Punjabis are a heterogeneous group comprising different tribes, clans and communities. In Pakistani Punjab, non-tribal social distinctions are primarily based on traditional occupations such as blacksmiths or artisans, as opposed to rigid social stratifications. Punjab has the lowest poverty rates in Pakistan, although a divide is present between the northern and southern parts of the province. Sialkot District in the prosperous northern part of the province has a poverty rate of 5.63%, while Rajanpur District in the poorer south has a poverty rate of 60.05%. The major and native language spoken in the Punjab is Punjabi, which is written in the Shahmukhi script, one of the oldest scripts used for writing Punjabi; Punjabis comprise the largest ethnic group in the country. Punjabi is the provincial language of Punjab but is not given any official recognition in the Constitution of Pakistan at the national level. Saraiki is mostly spoken in southern Punjab, while Pashto is spoken in some parts of northwestern Punjab, especially in Attock District and Mianwali District near Khyber Pakhtunkhwa province. The use of Urdu and English as the near-exclusive languages of broadcasting, the public sector, and formal education has led some to fear that Punjabi in Pakistan is being relegated to a low-status language and that it is being denied an environment where it can flourish. Several prominent educational leaders, researchers, and social commentators have echoed the opinion that the intentional promotion of Urdu and the continued denial of any official sanction or recognition of the Punjabi language amounts to a process of "Urdu-isation" that is detrimental to the health of the Punjabi language. In August 2015, the Pakistan Academy of Letters, International Writer's Council (IWC) and World Punjabi Congress (WPC) organised the "Khawaja Farid Conference" and demanded that a Punjabi-language university be established in Lahore and that the Punjabi language be declared the medium of instruction at the primary level. In September 2015, a case was filed in the Supreme Court of Pakistan against the Government of Punjab, Pakistan, for failing to take any steps to implement the Punjabi language in the province. Additionally, several thousand Punjabis gather in Lahore every year on International Mother Language Day. 
Hafiz Saeed, chief of Jama'at-ud-Da'wah (JuD), has questioned Pakistan's decision to adopt Urdu as the national language of a country where the majority of people speak Punjabi, citing his interpretation of Islamic doctrine as encouraging education in the mother tongue. A number of think tanks, political organisations, cultural projects, and individuals also demand that authorities at the national and provincial levels promote the use of the language in the public and official spheres. The population of Punjab (Pakistan) is estimated to be 97.21% Muslim, with a Sunni Hanafi majority and a Shia Ithna 'ashariyah minority. The largest non-Muslim minority is Christians, who make up 2.3% of the population. The other minorities include Ahmadiyya, Hindus, Sikhs, Parsis and Bahá'í. The Government of Punjab is a provincial government in the federal structure of Pakistan and is based in Lahore, the capital of the Punjab province. The Chief Minister of Punjab (CM) is elected by the Provincial Assembly of the Punjab to serve as the head of the provincial government. The current Chief Minister is Sardar Usman Buzdar, who was elected following the elections of 25 July 2018. The Provincial Assembly of the Punjab is a unicameral legislature of elected representatives of the province of Punjab, located in Lahore in eastern Pakistan. The Assembly was established under Article 106 of the Constitution of Pakistan as having a total of 371 seats, with 66 seats reserved for women and eight reserved for non-Muslims. There are 48 departments in the Punjab government. Each department is headed by a Provincial Minister (a politician) and a Provincial Secretary (a civil servant, usually of grade BPS-20 or BPS-21). All ministers report to the Chief Minister, who is the chief executive. All secretaries report to the Chief Secretary of Punjab, who is usually a BPS-22 civil servant. The Chief Secretary, in turn, reports to the Chief Minister. In addition to these departments, there are several autonomous bodies and attached departments that report directly to either the secretaries or the Chief Secretary. When the divisions were restored as a tier of government in 2008, a tenth division, Sheikhupura Division, was created from part of Lahore Division. Punjab has the largest economy in Pakistan, contributing most to the national GDP. The province's economy has quadrupled since 1972. Its share of Pakistan's GDP was 54.7% in 2000 and 59% as of 2010. It is especially dominant in the service and agriculture sectors of Pakistan's economy, with its contribution ranging from 52.1% to 64.5% in the service sector and 56.1% to 61.5% in the agriculture sector. It is also a major manpower contributor, because it has the largest pool of professionals and highly skilled (technically trained) manpower in Pakistan. It is also dominant in the manufacturing sector, though its dominance there is less pronounced, with historical contributions ranging from a low of 44% to a high of 52.6%. In 2007, Punjab achieved a growth rate of 7.8%, and during the period 2002–03 to 2007–08 its economy grew at between 7% and 8% per year; during 2008–09 it grew at 6%, against total GDP growth for Pakistan of 4%. 
Despite the lack of a coastline, Punjab is the most industrialised province of Pakistan; its manufacturing industries produce textiles, sports goods, heavy machinery, electrical appliances, surgical instruments, vehicles, auto parts, metals, sugar-mill plants, aircraft, cement, agricultural machinery, bicycles and rickshaws, floor coverings, and processed foods. In 2003, the province manufactured 90% of the paper and paper boards, 71% of the fertilizers, 69% of the sugar and 40% of the cement produced in Pakistan.

Despite its tropical wet and dry climate, extensive irrigation makes Punjab a rich agricultural region. Its canal-irrigation system, established by the British, is the largest in the world. Wheat and cotton are the largest crops. Other crops include rice, sugarcane, millet, corn, oilseeds, pulses, vegetables, and fruits such as kinnow. Livestock and poultry production are also important. Despite past animosities, the rural masses in Punjab's farms continue to use the Hindu calendar for planting and harvesting. Punjab contributes about 76% of the country's annual food grain production. Cotton and rice are important cash crops that contribute substantially to the national exchequer. Having attained self-sufficiency in agriculture, the province has shifted the focus of its strategies towards small and medium farming, stress on barani (rain-fed) areas, farm-to-market roads, electrification for tube-wells, and the control of waterlogging and salinity.

Punjab also has more than 68,000 industrial units, of which 39,033 are small and cottage units. These include 14,820 textile units, 6,778 ginning units, and 7,355 units for the processing of agricultural raw materials, including food and feed industries. Lahore and Gujranwala Divisions have the largest concentration of small light-engineering units. The district of Sialkot excels in sports goods, surgical instruments and cutlery goods. Industrial estates are being developed by the Punjab government to boost industrialisation in the province; Quaid-e-Azam Business Park, one such estate, is being developed near Sheikhupura on the Lahore–Islamabad motorway. Punjab is also a mineral-rich province with extensive deposits of coal, iron, gas, petroleum, rock salt (with the second-largest salt mine in the world), dolomite, gypsum, and silica sand. The Punjab Mineral Development Corporation is running over a hundred economically viable projects. Manufacturing includes machine products, cement, plastics, and various other goods.

The incidence of poverty differs between the regions of Punjab, with northern and central Punjab facing much lower levels of poverty than western and southern Punjab. Those living in southern and western Punjab are also far more dependent on agriculture owing to lower levels of industrialisation in those regions. In 2012, Pakistan's electricity problems were so severe that violent riots took place across Punjab. According to protesters, load shedding was depriving the cities of electricity 20–22 hours a day, causing businesses to fail and making daily life extremely hard. Gujranwala, Toba Tek Singh, Faisalabad, Sialkot, Bahawalnagar and communities across Khanewal District saw widespread rioting and violence on Sunday 17 June 2012, with the houses of several members of parliament being attacked, as well as the offices of the regional energy suppliers Fesco, Gepco and Mepco being ransacked or attacked. The literacy rate has increased greatly over the last 40 years.
Punjab has the highest Human Development Index of all of Pakistan's provinces, at 0.567.

Punjab has been a cradle of civilisation since time immemorial. The ruins of Harappa show an advanced urban culture that flourished thousands of years ago. Ancient Taxila, another historic landmark, also stands out as proof of the area's achievements in learning, arts and crafts. The ancient Hindu Katasraj temple and the Salt Range temples are regaining attention and are in need of repair.

Mosques abound all over Punjab and vary in architectural style. Calligraphic inscriptions from the Quran decorate mosques and mausoleums in Punjab. The inscriptions on bricks and tiles of the mausoleum of Shah Rukn-e-Alam (1320 AD) at Multan are outstanding specimens of architectural calligraphy. The earliest existing building in South Asia with enamelled tile-work is the tomb of Shah Yusuf Gardezi (1150 AD) at Multan. A specimen of sixteenth-century tile-work at Lahore is the tomb of Sheikh Musa Ahangar, with its brilliant blue dome. The tile-work of the era of Emperor Shah Jahan is of a richer and more elaborate nature, and the picture wall of Lahore Fort represents the culmination of this tradition.

The culture of Punjab derives its basis from the institution of Sufi saints, who spread Islam and preached and lived the Muslim way of life. Festivities are held to commemorate these traditions. The fairs and festivals of Punjab reflect the entire gamut of its folk life and cultural traditions, and mainly fall into religious and seasonal categories. Religious fairs are held on special days of Islamic significance like Eid ul-Adha, Eid-ul-Fitr, Eid-e-Milad-un-Nabi, Shab-e-Barat, Ashura, Laylat al-Qadr and Jumu'ah-tul-Wida. The main activities on these special occasions are confined to congregational prayers and rituals, and melas are also held to mark these occasions. The fairs held at the shrines of Sufi saints are called urs. They generally mark the death anniversary of the saint. On these occasions devotees assemble in large numbers and pay homage to the memory of the saint. Soul-inspiring music is played and devotees dance in ecstasy. The music on these occasions is essentially folk and appealing, and through its mystic messages it forms a part of the region's folk music. The most important urs are: the urs of Data Ganj Buksh at Lahore, of Sultan Bahu at Jhang, of Shah Jewna at Jhang, of Mian Mir at Lahore, of Baba Farid Ganj Shakar at Pakpattan, of Bahaudin Zakria at Multan, of Sakhi Sarwar Sultan at Dera Ghazi Khan, of Shah Hussain at Lahore, of Bulleh Shah at Kasur, of Imam Bari (Bari Shah Latif) at Rawalpindi-Islamabad and of Shah Inayat Qadri (the murshid of Bulleh Shah) in Lahore. A big fair, or mela, is organised at Jandiala Sher Khan in district Sheikhupura at the mausoleum of Syed Waris Shah, the most loved Sufi poet of Punjab due to his classic work, Heer Ranjha. The shrine of Heer Ranjha in Jhang is one of the most visited shrines in Punjab.

Exhibitions and annual horse shows are held in all districts, and a national horse and cattle show is held at Lahore with official patronage. The national horse and cattle show at Lahore is the biggest festival, where sports, exhibitions, and livestock competitions are held.
It not only encourages and patronises agricultural products and livestock through exhibitions and competitions, but also serves as a colourful documentary on the rich cultural heritage of the province, with its strong rural roots. Vaisakhi, also called Besakhi, is a harvest festival celebrating the wheat crop. Colourful festivities are held at the time of Besakhi, when farmers are free to enjoy their leisure time. Various literary festivals and fairs are organised in many places. Basant is a seasonal festival, celebrated as a spring festival of kites. The day is marked by wearing yellow, eating food with yellow colouring such as potatoes with turmeric and saffron rice, and holding parties. The crafts of the Punjab are of two types: the crafts produced in the rural areas and the royal crafts.

The province is home to several historical sites, including the Shalimar Gardens, the Lahore Fort, the Badshahi Mosque, the Rohtas Fort and the ruins of the ancient city of Harappa. The Anarkali Market and Jahangir's Tomb are prominent in the city of Lahore, as is the Lahore Museum, while the ancient city of Taxila in the northwest was once a major centre of Buddhist and Hindu influence. Several important Sikh shrines are in the province, including the birthplace of the first Guru, Guru Nanak, at Nankana Sahib. There are a few famous hill stations, including Murree, Bhurban, Patriata and Fort Munro. Katasraj Mandir is a Hindu temple complex situated in Katas village near Choa Saidanshah in Chakwal District. Dedicated to Shiva, the temple has, according to Hindu legend, existed since the days of the Mahābhārata; the Pandava brothers spent a substantial part of their exile at the site, and Krishna himself is said to have later laid the foundation of the temple. The Khewra Salt Mine is a tourist attraction. Tours are accompanied by guides, as the mine itself is very large and its complex of interconnected passages is like a maze. There is a small but beautiful mosque inside the mine, made from salt stone. A clinical ward with 20 beds was established in 2007 for the treatment of asthma and other respiratory diseases using salt therapy.

Classical music forms such as Pakistani classical music are an important part of the cultural wealth of the Punjab. Muslim musicians have contributed a large number of ragas to the repository of classical music. The most common instruments used are the tabla and harmonium. Among the Punjabi poets, the names of Baba Farid, Shah Hussain, Sultan Bahu, Bulleh Shah, Waris Shah and Mian Muhammad Baksh are well known. Among folk singers, the likes of Inayat Hussain Bhatti, Tufail Niazi, Alam Lohar, Sain Marna, Mansoor Malangi, Allah Ditta Lonawala, Talib Hussain Dard, Attaullah Khan Essa Khailwi, Gamoo Tahliwala, Mamzoo Gha-lla, Akbar Jat, Arif Lohar, Ahmad Nawaz Cheena and Hamid Ali Bela are well known. In the composition of classical ragas, there are such masters as "Malika-i-Mauseequi" (Queen of Music) Roshan Ara Begum, Ustad Amanat Ali Khan, Salamat Ali Khan and Ustad Fateh Ali Khan. Alam Lohar made significant contributions to folklore and Punjabi literature as a highly influential Punjabi folk singer from 1930 until 1979. For popular taste, however, there is light music, particularly ghazals and folk songs, which have an appeal of their own; here the names of Mehdi Hassan, Ghulam Ali, Nur Jehan, Malika Pukhraj, Farida Khanum, Roshan Ara Begum, and Nusrat Fateh Ali Khan are well known.
Folk songs and dances of the Punjab reflect a wide range of moods: the rains, and the sowing and harvesting seasons. Luddi, Bhangra and Sammi depict the joy of living. Love legends of Heer Ranjha, Mirza Sahiban, Sohni Mahenwal and Saiful Mulk are sung in different styles. For the most popular music from the region, Bhangra, the names of Abrar-Ul-Haq, Arif Lohar, Attaullah Khan Essa Khailwi, Jawad Ahmed, Sajjad Ali, Legacy, and Malkoo are renowned. Folklore songs, ballads, epics and romances are generally written and sung in the various Punjabi dialects. A number of folk tales are popular in different parts of the Punjab, among them those of Mirza Sahiban, Sayful Muluk, Yusuf Zulekha, Heer Ranjha, Sohni Mahiwal, Dulla Bhatti, and Sassi Punnun. The mystic folk songs include the "Kafees" of Khwaja Farid in Saraiki and Punjabi, and the "Shalooks" of Baba Farid. They also include "Baits", "Dohas", "Lohris", "Sehra", and "Jugni". The most famous of the romantic love songs are "Mayhiah", "Dhola" and "Boliyan". Punjabi romantic dances include Dharees, Dhamaal, Bhangra, Giddha, Dhola, and Sammi.

One social and educational issue is the status of the Punjabi language. According to Manzur Ejaz, "In Central Punjab, Punjabi is neither an official language of the province nor it is used as medium of education at any level. There are only two daily newspapers published in Punjabi in the Central areas of Punjab. Only a few monthly literary magazines constitute Punjabi press in Pakistan".
https://en.wikipedia.org/wiki?curid=24751
Pope Julius II Pope Julius II (5 December 1443 – 21 February 1513), born Giuliano della Rovere, was Pope and ruler of the Papal States from 1503 to his death in 1513. Nicknamed the "Warrior Pope" or the "Fearsome Pope", he chose his papal name not in honour of Pope Julius I but in emulation of Julius Caesar. One of the most powerful and influential popes, Julius II was a central figure of the High Renaissance and left a significant cultural and political legacy.

Julius II became Pope in the context of the Italian Wars, a period in which the major powers of Europe fought for primacy in the Italian peninsula. Louis XII of France controlled the Duchy of Milan, previously held by the Sforza, and French influence had replaced that of the Medici in the Republic of Florence. The Kingdom of Naples was under Spanish rule, and the Borja family from Spain was a major political faction in the Papal States following the reign of Alexander VI. The Archduke of Austria, Maximilian I, was hostile to France and Venice and desired to descend into Italy in order to obtain papal coronation as Holy Roman Emperor. The conclave capitulation preceding his election included several terms, such as the opening of an ecumenical council and the organization of a crusade against the Ottoman Turks. Once crowned, Julius II proclaimed instead his goal to centralize the Papal States (in large part a patchwork of communes and "signorie") and "free Italy from the barbarians".

In his early years as Pope, Julius II removed the Borjas from power and exiled them to Spain. Cesare Borgia, Duke of Romagna, shared the same fate and lost his possessions. In 1506, Julius II established the Vatican Museums and initiated the rebuilding of St. Peter's Basilica. The same year he organized the renowned Swiss Guards for his personal protection and commanded a successful campaign in Romagna against local lords. The interests of Julius II also lay in the New World, as he ratified the Treaty of Tordesillas, established the first bishoprics in the Americas, and began the catholicization of Latin America. In 1508, he commissioned the Raphael Rooms and Michelangelo's paintings in the Sistine Chapel.

He also joined an anti-Venetian league formed in Cambrai between France, Spain, and Austria, with the goal of capturing the coast of Romagna from the Venetian Republic. Having achieved this goal, he formed an anti-French "Holy League" with Venice following the defeat of the latter at the Battle of Agnadello. His main goal was now again to "expel the barbarians" ("Fuori i Barbari!"). Julius II brought the Catholic Ferdinand of Spain into the alliance, declaring Naples a papal fief and promising a formal investiture. Having previously declared that the Imperial election was sufficient for Maximilian to style himself as Holy Roman Emperor, he later obtained Habsburg support against France as well. Julius II personally led the Papal armed forces at the victorious Siege of Mirandola and, despite subsequent defeats and great losses at the Battle of Ravenna, he ultimately forced the French troops of Louis XII to retreat behind the Alps after the arrival of Swiss mercenaries from the Holy Roman Empire. At the Congress of Mantua in 1512, Julius II ordered the restoration of Italian families to power in the vacuum left by French rule: the Imperial Swiss, led by Massimiliano Sforza, restored Sforza rule in Milan, and a Spanish army led by Giovanni de' Medici restored Medici rule in Florence. The Kingdom of Naples was recognized as a papal fief.
The Venetians regained their territories lost to France, and the Papal States annexed Parma and Modena. The conciliarist movement promoted by foreign monarchs was crushed, and Julius II affirmed ultramontanism at the Fifth Lateran Council. This is often presented in traditional historiography as the moment in which Renaissance Italy came closest to unification after the end of the Italic League of the 15th century. However, Julius II was far from being able to form a single Italian kingdom, if that was his goal at all, since foreign armies were heavily involved in his wars and the French were preparing new campaigns against the Swiss for Milan. Naples, even if recognized as a papal fief, was still under Spain, and in fact Julius II was planning to end the Spanish presence in the south. Nevertheless, by the end of his pontificate, the papal objective of making the Church the main force in the Italian Wars had been achieved. At the Roman Carnival of 1513, Julius II presented himself as the "liberator of Italy".

Julius planned to call for a crusade against the Ottoman Empire in order to retake Constantinople, but died before making any official announcement. His successor, Pope Leo X, along with Emperor Maximilian, would re-establish the status quo ante bellum by ratifying the treaties of Brussels and Noyon in 1516; France regained control of Milan after the victory of Francis I at the Battle of Marignano, and Spain was recognized as the direct ruler of Naples. However, the Papal States remained independent and centralized as a result of Julius' policies, and the office of the papacy would remain crucial, diplomatically and politically, during the entirety of the 16th century in Italy and Europe. Julius II was described by Machiavelli in his works as the ideal prince.

Indulgences (which remit the temporal effects of sins that have already been forgiven) involve the person receiving the indulgence doing some sort of good work, such as donating to a charitable cause. Pope Julius II allowed people seeking indulgences to donate money to the Church, which would be used for the construction of Saint Peter's Basilica. In his "Julius Excluded from Heaven", the scholar Erasmus of Rotterdam described a Pope Julius II in the afterlife planning to storm Heaven when he is denied entry.

Giuliano della Rovere was born in Albisola, near Savona, in the Republic of Genoa. He was of a noble but impoverished family, the son of Raffaelo della Rovere and Theodora Manerola, a lady of Greek ancestry. He had three brothers: Bartolomeo, a Franciscan friar who became Bishop of Ferrara (1474–1494); Leonardo; and Giovanni, Prefect of the City of Rome (1475–1501) and Prince of Sora and Senigallia. He also had a sister, Lucina (later the mother of Cardinal Sisto Gara della Rovere). Giuliano was educated among the Franciscans by his uncle, Fr. Francesco della Rovere, O.F.M., who took him under his special charge. He was later sent by the same uncle, who by then had become Minister General of the Franciscans (1464–1469), to the Franciscan friary in Perugia, where he could study the sciences at the university. Della Rovere, as a young man, showed traits of being rough, coarse and given to bad language. During the late 1490s, he became more closely acquainted with Cardinal Medici and his nephew (both relatives), and the two dynasties became uneasy allies in the context of papal politics. Both houses desired an end to the occupation of Italian lands by the armies of France.
He seemed less enthused by theology; rather, Paul Strathern argues, his imagined heroes were military leaders such as Frederic Colonna. After his uncle was elected Pope Sixtus IV on 10 August 1471, Giuliano was appointed Bishop of Carpentras in the Comtat Venaissin on 16 October 1471. In an act of literal nepotism he was immediately raised to the cardinalate on 16 December 1471 and assigned the same titular church as that formerly held by his uncle, San Pietro in Vincoli. Guilty of serial simony and pluralism, he held several powerful offices at once: in addition to the archbishopric of Avignon he held no fewer than eight bishoprics, including Lausanne from 1472 and Coutances (1476–1477).

In 1474, Giuliano led an army to Todi, Spoleto, and Città di Castello as papal legate. He returned to Rome in May, in the company of Duke Federigo of Urbino, who promised his daughter in marriage to Giuliano's brother Giovanni, who was subsequently named Lord of Senigallia and of Mondovì. On 22 December 1475, Pope Sixtus IV created the new Archdiocese of Avignon, assigning to it as suffragan dioceses the sees of Vaison, Cavaillon, and Carpentras. He appointed Giuliano as the first archbishop. Giuliano held the archdiocese until his later election to the papacy. In 1476 the office of Legate was added, and he left Rome for France in February. On 22 August 1476 he founded the "Collegium de Ruvere" in Avignon. He returned to Rome on 4 October 1476. In 1479, Cardinal Giuliano served his one-year term as Chamberlain of the College of Cardinals. In this office he was responsible for collecting all the revenues owed to the cardinals as a group (from "ad limina" visits, for example) and for the proper disbursement of appropriate shares to cardinals who were in service in the Roman Curia.

Giuliano was again named Papal Legate to France on 28 April 1480, and left Rome on 9 June. As Legate, his mission was threefold: to make peace between King Louis XI and the Emperor Maximilian of Austria; to raise funds for a war against the Ottoman Turks; and to negotiate the release of Cardinal Jean Balue and Bishop Guillaume d'Harancourt (who by then had been imprisoned by Louis for eleven years on charges of treason). He reached Paris in September, and finally, on 20 December 1480, Louis gave orders that Balue be handed over to the Archpriest of Loudun, who had been commissioned by the Legate to receive him in the name of the Pope. He returned to Rome on 3 February 1482. Shortly thereafter the sum of 300,000 écus of gold was received from the French as a subsidy for the war.

On 31 January 1483 Cardinal della Rovere was promoted to the suburbicarian see of Ostia, in succession to Cardinal Guillaume d'Estouteville, who had died on 22 January. It was the privilege of the Bishop of Ostia to consecrate an elected pope a bishop, if he were not already a bishop. This actually occurred in the case of Pius III (Francesco Todeschini-Piccolomini), who was ordained a priest on 30 September 1503 and consecrated a bishop on 1 October 1503 by Cardinal Giuliano della Rovere. Around this time, in 1483, an illegitimate daughter was born to him, Felice della Rovere. On 3 November 1483, Cardinal della Rovere was named Bishop of Bologna and Papal Legate, succeeding Cardinal Francesco Gonzaga, who had died on 21 October. He held the diocese until 1502. On 28 December 1484, Giuliano participated in the investiture of his brother Giovanni as Captain-General of the Papal Armies by Pope Innocent VIII.
By 1484 Giuliano was living in the new palazzo which he had constructed next to the Basilica of the Twelve Apostles, which he had also restored. Pope Sixtus IV paid a formal visit to the newly restored building on 1 May 1482, and it may be that Giuliano was already in residence then. Sixtus IV died on 12 August 1484 and was succeeded by Innocent VIII. After the ceremonies of the election of Pope Innocent were completed, the cardinals were dismissed to their own homes, but Cardinal della Rovere accompanied the new Pope to the Vatican Palace and was the only one to remain with him. Ludwig Pastor quotes the Florentine ambassador as remarking, "[Pope Innocent] gives the impression of a man who is guided rather by the advice of others than by his own lights." The ambassador of Ferrara stated, "While with his uncle [Della Rovere] had not the slightest influence, he now obtains whatever he likes from the new Pope." Della Rovere was one of the five cardinals named to the committee to make the arrangements for the coronation.

In 1485 Pope Innocent and Cardinal della Rovere (as the Pope's new principal advisor) decided to involve themselves in the political affairs of the Kingdom of Naples, in what was called the "Conspiracy of the Barons". On Palm Sunday, 20 March, Cardinal della Rovere, concealing his activities from his principal rival, Cardinal Rodrigo Borgia (later Pope Alexander VI), rode out of Rome and took ship at Ostia, intending to head for Genoa and Avignon to prepare to wage war between the Church and the King of Naples, Ferdinand I (Ferrante). On 28 June the Pope sent back to Naples the token gift of a palfrey which symbolized the King of Naples' submission, and demanded the full feudal submission of the Kingdom of Naples to the Roman Church according to long-standing tradition. In a second attempt to overthrow the Aragonese monarchy, the Prince of Salerno, Antonello II di Sanseverino, on the advice of Antonello Petrucci and Francesco Coppola, gathered together several feudal families belonging to the Guelph faction and supporting the Angevin claim to Naples. Antonello di Sanseverino was the brother-in-law of Cardinal della Rovere's brother Giovanni, who was a noble of Naples because of his fief of Sora. The principal complaints of the barons were the heavy taxation imposed by Ferdinand to finance his war against the Turks, who had occupied Otranto in 1480, and the vigorous efforts of Ferrante to centralize the administrative apparatus of the kingdom, moving it away from a feudal to a bureaucratic system. The barons seized L'Aquila and appealed to the Pope for assistance as their feudal overlord. Genoa and Venice supported the Papacy, while Florence and Milan opted for Naples. In Rome, the Orsini allied themselves with Ferrante's son Alfonso, and therefore the Colonna supported the Pope in the street fighting that ensued. Ferrante reacted by seizing the fiefs of the barons, and, when the two parties met to negotiate a settlement, Ferrante had them arrested and eventually executed. The prestige of the della Rovere family was seriously damaged, and in an attempt to exculpate himself Pope Innocent began to withdraw his support for them. Peace was restored in 1487, but Innocent VIII's papacy was discredited. On 23 March 1486, the Pope had sent Giuliano as Papal Legate to the court of King Charles VIII of France to ask for help. A French entourage arrived in Rome on 31 May, but relations immediately broke down with the pro-Spanish Cardinal Rodrigo.
But Ferrante's army decided the issue; humiliated, Innocent backed down and on 10 August signed a treaty. Innocent then looked for new allies and settled on the Republic of Florence. On 2 March 1487, Giuliano was appointed legate in the March of Ancona and to the Republic of Venice. He encouraged trade with the sizable Turkish community at these ports. But urgent reports arrived from the King of Hungary that the Ottoman Sultan was threatening Italy. He returned on 8 April 1488, and again took up his residence in the Palazzo Colonna next to the Basilica of the Twelve Apostles.

In the conclave of 1492, following the death of Innocent VIII, Cardinal della Rovere was supported for election both by King Charles VIII of France and by Charles' enemy, King Ferrante of Naples. It was reported that France had deposited 200,000 ducats into a bank account to promote della Rovere's candidature, while the Republic of Genoa had deposited 100,000 ducats to the same end. Della Rovere, however, had enemies, both because of the influence he had exercised over Pope Sixtus IV and because of his French sympathies. His rivals included Cardinal Ardicio della Porta and Cardinal Ascanio Sforza, both patronized by the Milanese. Kellogg, Baynes & Smith continue that a "rivalry had, however, gradually grown up between [della Rovere] and [then-Cardinal] Rodrigo Borgia, and on the death of Innocent VIII in 1492 Borgia by means of a secret agreement and simony with Ascanio Sforza succeeded in being elected by a large majority, under the name of Pope Alexander VI." Della Rovere, jealous and angry, hated Borgia for being elected over him.

On 31 August 1492 the new Pope, Alexander VI, held a consistory in which he named six cardinal legates, one of whom was Giuliano della Rovere, who was appointed Legate in Avignon. Cardinal Giuliano was increasingly alarmed by the powerful position assumed by Cardinal Ascanio Sforza and the Milanese faction at the court of Alexander VI, and after Christmas Day 1492 he chose to withdraw to his fortress in the town and diocese of Ostia, at the mouth of the Tiber. In that same month, Federico of Altamura, the second son of King Ferdinando (Ferrante) of Naples, was in Rome to pay homage to the new pope, and he reported back to his father that Alexander and Cardinal Sforza were working on establishing new alliances, which would upset Ferrante's security arrangements. Ferrante therefore decided to use della Rovere as the center of an anti-Sforza party at the papal court, a prospect made easier since Ferrante had prudently repaired his relations with Cardinal Giuliano after the War of the Barons. He also warned King Ferdinand and Queen Isabella of Spain that Alexander was intriguing with the French, which brought an immediate visit of a Spanish ambassador to the Pope. In June Federico of Altamura was back in Rome, and held conversations with della Rovere, assuring him of Neapolitan protection. On 24 July 1493, Cardinal della Rovere returned to Rome (despite the warnings of Virginius Orsini) and dined with the Pope. Della Rovere soon determined, however, to take refuge from Borgia's wrath at Ostia. On 23 April 1494, the Cardinal took ship, having placed his fortress at Ostia in the hands of his brother Giovanni della Rovere, and traveled to Genoa and then to Avignon. He was summoned by King Charles VIII to Lyons, where the two met on 1 June 1494. He joined Charles VIII of France, who undertook to take Italy back from the Borgias by military force.
The King entered Rome with his army on 31 December 1494, with Giuliano della Rovere riding on one side and Cardinal Ascanio Sforza riding on the other. The King made several demands of Pope Alexander, one of which was that the Castel S. Angelo be turned over to French forces. This Pope Alexander refused to do, claiming that Cardinal della Rovere would occupy it and become master of Rome. Charles soon conquered Naples, making his triumphal entry on 22 February 1495, but he was forced to withdraw most of his army. As he was returning to the north, his army was defeated at the Battle of Fornovo on 5 July 1495, and his Italian adventure came to an end. The last remnants of the French invasion were gone by November 1496. Ostia, however, remained in French hands until March 1497, creating difficulties in the provisioning of the city of Rome. Back in Lyon in 1496, Charles VIII and Giuliano della Rovere were planning another war, with Giuliano traveling back and forth between Lyon and Avignon, raising troops. It was being reported in France by June 1496, moreover, that King Charles intended to have a papal election held in France and to have Cardinal della Rovere elected pope. In March 1497 Pope Alexander deprived Cardinal della Rovere of his benefices as an enemy of the Apostolic See, and deprived Giovanni della Rovere of the Prefecture of Rome. His action against the Cardinal was taken not only without the consent of the cardinals in consistory, but in fact over their vigorous objections. By June, however, the Pope was in negotiations with the Cardinal for a reconciliation and his return to Rome. His benefices were restored to him after an apparent reconciliation with the Pope in August 1498.

King Charles VIII of France, the last of the senior branch of the House of Valois, died on 7 April 1498 after accidentally striking his head on the lintel of a door at the Château d'Amboise. When Cesare Borgia passed through southern France in October 1498 on his way to meet King Louis XII for his investiture as Duke of Valentinois, he stopped in Avignon and was magnificently entertained by Cardinal della Rovere. They then moved on to meet the King at Chinon, where Cesare Borgia fulfilled one of the terms of the treaty between Louis and Alexander by producing the red hat of a cardinal, which had been promised for the Archbishop of Rouen, Georges d'Amboise. It was Cardinal della Rovere, the Papal Legate, who placed the hat on Amboise's head. Louis wanted an annulment from Queen Joan so he could marry Anne of Brittany, in the hope of annexing the Duchy of Brittany; Alexander, in turn, wanted a French princess as a wife for Cesare. Della Rovere, who was trying to repair his relations with the House of Borgia, was also involved in another clause of the treaty, the marriage between Cesare Borgia and Carlotta, the daughter of the King of Naples, who had been brought up at the French court. Della Rovere was in favor of the marriage, but, according to Pope Alexander, King Louis XII was not, and, most especially, Carlotta was stubbornly refusing her consent. Alexander's plan of securing a royal throne for his son fell through, and he was very angry. Louis offered Cesare another of his relatives, the "beautiful and rich" Charlotte d'Albret, whom Cesare married at Blois on 13 May 1499. The marriage produced a complete volte-face in Pope Alexander: he became an open partisan of France and Venice, and accepted their goal, the destruction of the Sforza hold on Milan.
On 14 July, Cardinal Ascanio Sforza, della Rovere's sworn enemy, fled Rome with all his property and friends. Meanwhile, the French army crossed the Alps and captured Alessandria in Piedmont. On 1 September 1499 Lodovico "Il Moro" fled Milan, and on 6 September the city surrendered to the French. Cardinal Giuliano was in the King's entourage when he entered Milan on 6 October. Pope Alexander then turned his attention, prompted by the Venetians, to the threat of the Ottoman Turks. In the autumn of 1499, he called for a crusade and sought aid and money from all Christendom. The rulers of Europe paid little attention, but to show his sincerity Alexander imposed a tithe on all the residents of the Papal States and on the clergy of the entire world. A list of cardinals and their incomes, drawn up for the occasion, shows that Cardinal della Rovere was the second-richest cardinal, with an annual income of 20,000 ducats.

Another break in relations between Pope Alexander and Cardinal Giuliano came at the end of 1501 or the beginning of 1502, when Giuliano was transferred from the bishopric of Bologna to the diocese of Vercelli. On 21 June 1502, Pope Alexander sent his secretary, Francesco Troche (Trochia), and Cardinal Amanieu d'Albret (brother-in-law of Cesare Borgia) to Savona to seize Cardinal della Rovere by stealth, bring him back to Rome as quickly as possible, and turn him over to the Pope. The kidnapping party returned to Rome on 12 July without having accomplished its mission. On 20 July 1502, Cardinal Giovanni Battista Ferrari died in his rooms at the Vatican Palace; he had been poisoned, and his property was claimed by the Borgia. On 3 January 1503, Cardinal Orsini was arrested and sent to the Castel S. Angelo; on 22 February he died there, poisoned on the orders of Alexander VI.

A veteran of the Sacred College, della Rovere had won influence for the election of Pope Pius III with the help of the Florentine ambassador to Naples, Lorenzo de' Medici. In spite of a violent temper, della Rovere succeeded by dexterous diplomacy in winning the support of Cesare Borgia, whom he won over with promises of money and continued papal backing for Borgia policies in the Romagna. This election was, in Ludwig von Pastor's view, certainly achieved by means of bribery with money, but also with promises. "Giuliano, whom the popular voice seemed to indicate as the only possible pope, was as unscrupulous as any of his colleagues in the means which he employed. Where promises and persuasions were unavailing, he did not hesitate to have recourse to bribery." Indeed, his election on 1 November 1503 took only a few hours, and the only two votes he did not receive were his own and that of Georges d'Amboise, his most vigorous opponent and the favourite of the French monarchy. In the end, as in all papal elections, the vote was made unanimous after the leading candidate had achieved the required number of votes. Giuliano della Rovere took the name Julius II and was pope for nine years, from 1503 to 1513.

From the beginning, Julius II set out to defeat the various powers that challenged his temporal authority; in a series of complicated stratagems, he first succeeded in rendering it impossible for the Borgias to retain their power over the Papal States. He is said to have declared this intention on the day of his election, though others indicate that his decision was taken on 26 November 1507, not in 1503. The Borgia Apartments were turned to other uses.
The "Sala de Papi" was redecorated by two pupils of Raphael by order of Pope Leo X. The rooms were used to accommodate Emperor Charles V on his visit to the Vatican after the Sack of Rome (1527), and subsequently, they became the residence of the Cardinal-nephew and then the Secretary of State. Julius used his influence to reconcile two powerful Roman families, the Orsini and Colonna. Decrees were made in the interests of the Roman nobility, in whose shoes the new pope now stepped. Being thus secure in Rome and the surrounding country, he set himself the task to expel the Republic of Venice from Faenza, Rimini, and the other towns and fortresses of Italy which it occupied after the death of Pope Alexander. In 1504, finding it impossible to succeed with the Doge of Venice by remonstrance, he brought about a union of the conflicting interests of France and the Holy Roman Empire, and sacrificed temporarily to some extent the independence of Italy to conclude with them an offensive and defensive alliance against Venice. The combination was, however, at first little more than nominal, and was not immediately effective in compelling the Venetians to deliver up more than a few unimportant places in the Romagna. With a campaign in 1506, he personally led an army to Perugia and Bologna, freeing the two papal cities from their despots, Giampolo Baglioni and Giovanni II Bentivoglio. In December 1503, Julius issued a dispensation allowing the future Henry VIII of England to marry Catherine of Aragon; Catherine had previously been briefly married to Henry's older brother Prince Arthur, who had died, but Henry later argued that she had remained a virgin for the five months of the marriage. Some twenty years later, when Henry was attempting to wed Anne Boleyn (since his son by Catherine of Aragon survived only a few days, and two of her sons were stillborn, and therefore he had no male heir), he sought to have his marriage annulled, claiming that the dispensation of Pope Julius should never have been issued. The retractation of the dispensation was refused by Pope Clement VII. The Bull entitled "Ea quae pro bono pacis" issued on 24 January 1506, confirmed papal approval of the "mare clausum" policy being pursued by Spain and Portugal amid their explorations, and approved the changes of the 1494 Treaty of Tordesillas to previous papal bulls. In the same year, the Pope founded the Swiss Guard to provide a constant corps of soldiers to protect the Vatican City. As part of the Renaissance program of reestablishing the glory of antiquity for the Christian capital, Rome, Julius II took considerable effort to present himself as a sort of emperor-pope, capable of leading a Latin-Christian empire. On Palm Sunday, 1507, "Julius II entered Rome . . . both as a second Julius Caesar, heir to the majesty of Rome's imperial glory, and in the likeness of Christ, whose vicar the pope was, and who in that capacity governed the universal Roman Church." Julius, who modeled himself after his namesake Caesar, would personally lead his army across the Italian peninsula under the imperial war-cry, "Drive out the barbarians." Yet, despite the imperial rhetoric, the campaigns were highly localized. Perugia voluntarily surrendered in March 1507 to direct control, as it had always been within the Papal States; it was in these endeavors he had enlisted French mercenaries. 
Urbino's magnificent court palace was infiltrated by French soldiers in the pay of the Duke of Gonzaga; the Montefeltro conspiracy against his loyal cousins earned the occupying armies the Pope's undying hatred. Julius relied upon Guidobaldo's help to raise his nephew and heir, Francesco Maria della Rovere; the intricate web of nepotism helped secure the Italian Papacy. Moreover, the Pope's interest in Urbino was widely known at the French court. Julius left a spy at the Urbino palace, possibly Galeotto Franciotti della Rovere, Cardinal of San Pietro, to watch the Mantua stables in total secrecy; the secular reach of the Papal Curia was growing in authority and significance. In Rome, the Pope watched from his private chapel to see how his court behaved. This was an age of Renaissance conspiracy.

In addition to pursuing an active military policy, the new pope personally led troops into battle on at least two occasions, the first to expel Giovanni Bentivoglio from Bologna (17 August 1506 – 23 March 1507), which was achieved successfully with the assistance of the Duchy of Urbino. The second was an attempt to recover Ferrara for the Papal States (1 September 1510 – 29 June 1512). In 1508, Julius was fortuitously able to form the League of Cambrai with Louis XII, King of France; Maximilian I, Holy Roman Emperor (proclaimed Emperor without coronation by Pope Julius II at Trent in 1508); and Ferdinand II, King of Aragon. The League fought against the Republic of Venice. Among other things, Julius wanted possession of the Venetian Romagna; Emperor Maximilian I wanted Friuli and the Veneto; Louis XII wanted Cremona; and Ferdinand II desired the Apulian ports. This war was one conflict in what is collectively known as the "Italian Wars".

In the spring of 1509, the Republic of Venice was placed under an interdict by Julius. In May 1509 he sent troops to fight against the Venetians, who had occupied parts of the Romagna, winning back the Papal States' territory in a decisive battle near Cremona. During the War of the Holy League, alliances kept changing: in 1510 Venice and France switched places, and by 1513 Venice had joined France. The achievements of the League soon outstripped the primary intention of Julius. In one single battle, the Battle of Agnadello on 14 May 1509, the dominion of Venice in Italy was practically lost. Yet neither the King of France nor the Holy Roman Emperor was satisfied with merely effecting the purposes of the Pope, who found it necessary to enter into an arrangement with the Venetians to defend himself from those who immediately before had been his allies. The Venetians, on making humble submission, were absolved at the beginning of 1510, and shortly afterward France was placed under papal interdict. Attempts to cause a rupture between France and England proved unsuccessful; on the other hand, at a synod convened by Louis at Tours in September 1510, the French bishops withdrew from papal obedience and resolved, with the Emperor's co-operation, to seek the dethronement of the pope. With some courage Julius marched his army to Bologna and then against the French to Mirandola. In November 1511, a council called by rebel cardinals, with support from the French king and the Empire, met at Pisa and demanded the deposition of Julius II. Despite being seriously ill, he refused to shave, showing utter contempt for the hated French occupation: "per vendicarsi et diceva...anco fuora scazato el re Ludovico Franza d'Italia" (roughly, "to avenge himself, and he said... that King Louis of France would yet be driven out of Italy").
Julius thereupon entered into another Holy League in 1511: in alliance with Ferdinand II of Aragon and the Venetians, he conspired against the Gallican liberties. In a short time both Henry VIII, King of England (1509–47), and Maximilian I also joined the Holy League of 1511 against France. Ferdinand of Spain now recognized Naples as a papal fief and was invested with it in 1511, and Julius II therefore regarded France as the main foreign power in the Italian peninsula hostile to papal interests. Louis XII defeated the alliance at the Battle of Ravenna on 11 April 1512. When that desperate battle felled over 20,000 men in a bloodbath, the Pope commanded his protégé, the newly released young Cardinal Medici, to retake Florence with a Spanish army. The rescue of the city on 1 September 1512 saved Rome from another invasion, ousting Soderini and restoring the dynastic rule of the Medici. Julius had seemingly restored "fortuna", or control, by exercising his manly "vertu", just as Machiavelli wrote. This reasserted a strong relation between Florence and Rome, a lasting legacy of Julius II. Yet Machiavelli and his methods would not outlast Julius' papacy. Julius hired Swiss mercenaries to fight against the French in Milan in May 1512. When the Swiss mercenaries came to the Pope's aid, the French army withdrew across the Alps into Savoy in 1512. The papacy gained control of Parma and Piacenza in central Italy. With the French out of Italy and Spain recognizing Naples as a papal fief, a congress was held in Mantua by Julius II to declare the liberation of the peninsula. Nevertheless, although Julius had centralized and expanded the Papal States, he was far from realizing his dream of an independent Italian kingdom. Nor was Italy at peace. The French were preparing new campaigns to reconquer Milan, and Julius II confessed to a Venetian ambassador a plan to invest his counselor Luigi d'Aragona with the kingdom of Naples in order to end the Spanish presence in the south. In fact, after the death of Julius, war resumed, and the treaties of Noyon and Brussels in 1516 would again formalize the division of much of Italy between French and Spanish influence.

In May 1512 a general or ecumenical council, the Fifth Council of the Lateran, was held in Rome. According to an oath taken on his election to observe the electoral capitulations of the conclave of October 1503, Julius had sworn to summon a general council, but it had been delayed, he affirmed, because of the occupation of Italy by his enemies. The real stimulus came from the false council of 1511, the "Conciliabulum Pisanum", inspired by Louis XII and Maximilian I as a tactic to weaken Julius, which threatened him with deposition. Julius' reply was the bull "Non sine gravi" of 18 July 1511, which fixed 19 April 1512 as the date for the opening of his own council. The Council actually convened on 3 May, and Paris de Grassis reports that the crowd at the basilica was estimated at 50,000. It held its first working session on 10 May. Julius attended the third plenary session, on 3 December 1512, though he was ill; he wanted to witness and receive the formal adhesion of Emperor Maximilian to the Lateran Council and his repudiation of the "Conciliabulum Pisanum". This was one of Julius' great triumphs.
The Pope was again in attendance at the fourth session on 10 December, this time to hear the accrediting of the Venetian ambassador as the Serene Republic's representative at the Council; he then had the letter of King Louis XI (of 27 November 1461), in which that king had announced the revocation of the Pragmatic Sanction, read out to the assembly, and demanded that all persons who accepted the Pragmatic Sanction appear before the Council within sixty days to justify their conduct. This was directed against King Louis XII. The fifth session was held on 16 February 1513, but Pope Julius was too ill to attend. Cardinal Raffaele Riario, the Dean of the College of Cardinals and Bishop of Ostia, presided. The Bishop of Como, Scaramuccia Trivulzio, then read from the pulpit a bull of Pope Julius, "Si summus rerum", dated that very day and containing within its text the complete bull of 14 January 1505, "Cum tam divino". The bull was submitted to the Council fathers for their consideration and ratification. Julius wanted to remind everyone of his legislation on papal conclaves, in particular against simony, and to fix his regulations firmly in canon law so that they could not be dispensed with or ignored. Julius was fully aware that his death was imminent, and though he had been a witness to a good deal of simony at papal conclaves, and had been a practitioner himself, he was determined to stamp out the abuse. The reading of the bull "Cum tam divino" became a regular feature of the first day of every conclave.

On the Vigil of Pentecost in May 1512, Pope Julius, aware that he was seriously ill and that his health was failing, despite comments on the part of some cardinals about how well he looked, remarked to Paris de Grassis, "They are flattering me; I know better; my strength diminishes from day to day and I cannot live much longer. Therefore I beg you not to expect me at Vespers or at Mass from henceforth." Nonetheless, he continued his restless activities, including Masses, visits to churches, and audiences. On the morning of 24 June, Paris found the Pope "debilem et semifebricantem" ("weak and half-feverish"). On Christmas Eve, Julius ordered Paris to summon the College of Cardinals and the Sacristan of the Apostolic Palace, "quia erat sic infirmus, quod non speraret posse diu supravivere" ("because he was so ill that he did not expect to be able to survive much longer"). From then until 6 January he was confined to bed, most of the time with a fever; he had lost his appetite, and the doctors were unable to diagnose his languor. On 4 February he had an extensive conversation with Paris concerning the arrangements for his funeral. Pope Julius was reported to be seriously ill in a dispatch received in Venice on 10 February 1513. He received Holy Communion and was granted the plenary indulgence on the morning of 19 February, according to the Venetian ambassador. On the 20th, according to Paris de Grassis, he received Holy Communion from the hands of Cardinal Raffaele Riario, the Camerlengo. He died of a fever in the night of 20–21 February 1513.

On the evening of 21 February, Paris de Grassis conducted the funeral of Julius II, even though the Canons of the Vatican Basilica and the beneficiati refused to cooperate. The body was placed for a time at the Altar of Saint Andrew in the Basilica and was then carried by the Imperial ambassador, the papal Datary, and two of Paris' assistants to the altar of the Chapel of Pope Sixtus, where the Vicar of the Vatican Basilica performed the final absolution. At the third hour of the evening, the body was laid in a sepulcher between the altar and the wall of the tribune.
Despite the fact that the so-called "Tomb of Julius" by Michelangelo is in San Pietro in Vincoli in Rome, Julius is in fact buried in the Vatican. Michelangelo's tomb was not completed until 1545 and represents a much-abbreviated version of the planned original, which had initially been intended for the new St. Peter's Basilica. His remains lay alongside those of his uncle, Pope Sixtus IV, but were later desecrated during the Sack of Rome in 1527. Today both men lie in St. Peter's Basilica, in the floor in front of the monument to Pope Clement X, where a simple marble tombstone marks the site. Julius II was succeeded by Pope Leo X.

In 1484 Cardinal Giuliano della Rovere had begun negotiations to persuade Marquis Francesco Gonzaga of Mantua to allow Andrea Mantegna to come to Rome, which finally bore fruit in 1488; Mantegna was given the commission to decorate the chapel of the Belvedere for Pope Innocent VIII, on which he spent two years. Beyond Julius II's political and military achievements, he enjoys a title to honor in his patronage of art, architecture, and literature, and he did much to improve and beautify the city. Early in his papacy, Julius decided to revive the plan for replacing the dilapidated Constantinian basilica of St. Peter's. The idea was not his, but originally that of Nicholas V, who had commissioned designs from Bernardo Rossellino. Other more pressing problems had distracted the attention of Nicholas and subsequent popes, but Julius was not the sort of person to be distracted once he had settled on an idea, in this case for the greatest building on earth, for the glory of Saint Peter and of himself. In the competition for a building plan, the design of Rossellino was immediately rejected as being out of date. A second design was submitted by Giuliano da Sangallo, an old friend of Julius, who had worked on several projects for him before, including the palazzo at S. Pietro in Vincoli, and who had left Rome with Julius when he fled the wrath of Alexander VI in 1495. Through Cardinal della Rovere, Sangallo had presented Charles VIII with a plan for a palace, and in 1496 he had made a tour of the architectural monuments of Provence, returning to his native Florence in 1497. His proposals for St. Peter's, however, were not accepted, despite what he believed to be a promise, and he retired in anger to Florence. On 18 April 1506 Pope Julius II laid the foundation stone of the new St. Peter's Basilica for the successful architect, Donato Bramante. However, he also began the demolition of the old St. Peter's Basilica, which had stood for more than 1,100 years. He was a friend and patron of Bramante and Raphael, and a patron of Michelangelo. Several of Michelangelo's greatest works, including the painting of the ceiling of the Sistine Chapel, were commissioned by Julius.

Long before he became Pope, Julius had a violent temper. He often treated subordinates and people who worked for him very badly. His manner was gruff and coarse, as was his peasant-like sense of humour; others suggest that Julius had little sense of humour at all. Ludwig von Pastor wrote, "Paris de Grassis, his Master of Ceremonies, who has handed on to us so many characteristic features of his master's life, says that he hardly ever jested. He was generally absorbed in deep and silent thought..." To most historians Julius was manly and virile, an energetic man of action whose courage saved the Papacy. There was a sense that war caused him serious illness, exhaustion, and fatigue that most popes could not have withstood.
To many, Julius II has been described as the best pope in an era of exceptionally bad ones: Alexander VI was evil and despotic, exposing the future Julius II to a number of assassination attempts that required tremendous fortitude to survive. Julius II is usually depicted with a beard, after his appearance in the celebrated portrait by Raphael, the artist whom he first met in 1509. However, the pope wore his beard only from 27 June 1511 to March 1512, as a sign of mourning at the Papal States' loss of the city of Bologna. He was nevertheless the first pope since antiquity to grow facial hair, a practice otherwise forbidden by canon law since the 13th century. The pope's hirsute chin may have attracted severe, even vulgar criticism, as at one Bologna banquet held in 1510 at which the papal legate Marco Cornaro was present. In overturning the ban on beards, Pope Julius challenged Gregorian conventional wisdom in dangerous times. Julius shaved his beard again before his death, and his immediate successors were clean-shaven; nonetheless Pope Clement VII sported a beard while mourning the Sack of Rome. Thenceforward, all popes were bearded until the death of Pope Innocent XII in 1700. The frescoes on the ceiling of the Stanza d'Eliodoro in the stanze of Raphael depict the traumatic events of 1510–11, when the Papacy regained its freedom. Although Raphael's original was lost, it was thought to relate closely to the personal iconography of the Stanza della Segnatura, commissioned by Pope Julius himself. The Lateran Council that formed the Holy League marked a high point in his personal success. Commemorated in the allegory of the Expulsion of Heliodorus, and with the French gone, Julius collapsed once again in late 1512, very seriously ill once more.

Julius was not the first pope to have fathered children before being elevated to high office; he had a daughter, born to Lucrezia Normanni in 1483, after he had been made a cardinal. Felice della Rovere survived into adulthood. Shortly after Felice was born, Julius arranged for Lucrezia to marry Bernardino de Cupis, chamberlain to Julius's cousin, Cardinal Girolamo Basso della Rovere. Despite producing an illegitimate daughter (and having at least one mistress), it has been suggested that Julius may have had homosexual lovers, although it is not possible to establish this claim. His confrontational style inevitably created enemies, and sodomy was the "common currency of insult and innuendo". Such accusations were made to discredit him, but perhaps in so doing his accusers were exploiting a generally "perceived weakness". The Venetians, who were implacably opposed to the pope's new military policy, were among the most vociferous opponents; notable among them was the diarist Girolamo Priuli. Erasmus also imputed sexual misconduct in his 1514 dialogue "Julius Excluded from Heaven", a theme picked up in the denunciation made at the conciliabulum of Pisa. Criticism was furthermore made of the sinister influence exerted by his advisor, Francesco Alidosi, whom Julius had made a cardinal in 1505; it is likely, however, that the closeness was simply due to the fact that Julius knew how to handle him well. This sexual reputation survived Julius, and the accusation continued to be made without reservation by Protestant opponents in their polemics against "papism" and Catholic decadence. The French writer Philippe de Mornay (1549–1623) accused all Italians of being sodomites, but added specifically: "This horror is ascribed to good Julius."
https://en.wikipedia.org/wiki?curid=24755
Persian Gulf The Persian Gulf is a mediterranean sea in Western Asia. The body of water is an extension of the Indian Ocean (Gulf of Oman) through the Strait of Hormuz and lies between Iran to the northeast and the Arabian Peninsula to the southwest. The Shatt al-Arab river delta forms the northwest shoreline. The body of water is historically and internationally known as the "Persian Gulf". Some Arab governments refer to it as the "Arabian Gulf" or "The Gulf". The name "Gulf of Iran (Persian Gulf)" is used by the International Hydrographic Organization. The Persian Gulf was a battlefield of the 1980–1988 Iran–Iraq War, in which each side attacked the other's oil tankers. It is the namesake of the 1991 Gulf War, the largely air- and land-based conflict that followed Iraq's invasion of Kuwait. The Persian Gulf has many fishing grounds, extensive reefs (mostly rocky, but also coral), and abundant pearl oysters, but its ecology has been damaged by industrialization and oil spills. The Persian Gulf lies in the Persian Gulf Basin, which is of Cenozoic origin and related to the subduction of the Arabian Plate under the Zagros Mountains. The current flooding of the basin started 15,000 years ago, owing to the rising sea levels of the Holocene glacial retreat. This inland sea of some is connected to the Gulf of Oman in the east by the Strait of Hormuz; its western end is marked by the major river delta of the Shatt al-Arab, which carries the waters of the Euphrates and the Tigris. In Iran this river is called "Arvand Rood", where "Rood" means "river". Its length is , with Iran covering most of the northern coast and Saudi Arabia most of the southern coast. The Persian Gulf is about wide at its narrowest, in the Strait of Hormuz. The waters are overall very shallow, with a maximum depth of and an average depth of . Countries with a coastline on the Persian Gulf are (clockwise, from the north): Iran; Oman's Musandam exclave; the United Arab Emirates; Saudi Arabia; Qatar, on a peninsula off the Saudi coast; Bahrain, on an island; Kuwait; and Iraq in the northwest. Various small islands also lie within the Persian Gulf, some of which are the subject of territorial disputes between the states of the region. The International Hydrographic Organization defines the Persian Gulf's southern limit as "The Northwestern limit of Gulf of Oman". This limit is defined as "A line joining Ràs Limah (25°57'N) on the coast of Arabia and Ràs al Kuh (25°48'N) on the coast of Iran (Persia)". The Persian Gulf is connected to the Indian Ocean through the Strait of Hormuz. In the water balance budget for the Persian Gulf, the inputs are river discharges from Iran and Iraq (estimated to be per second), as well as precipitation over the sea, which is around /year on Qeshm Island. Evaporation from the sea is high, so that after the river discharge and rain contributions are taken into account, there is still a deficit of per year. This difference is supplied by currents at the Strait of Hormuz: the water from the Persian Gulf has a higher salinity and therefore exits from the bottom of the Strait, while ocean water with lower salinity flows in through the top. Another study revealed the following numbers for water exchanges for the Persian Gulf: evaporation = -/year, precipitation = /year, inflow from the Strait = /year, outflow from the Strait = -/year, and the balance is 0 m (0 ft)/year.
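The bookkeeping of the water balance just described can be made concrete with a short script. The figures below are purely hypothetical placeholders, since the text's actual quantities did not survive extraction; the point is the structure of the budget, in which the evaporation deficit must be closed by net inflow through the Strait of Hormuz.

# Minimal sketch of the Persian Gulf water-balance budget; all
# quantities are hypothetical placeholders, not the source's figures.
river_discharge = 0.05e12   # m^3/year, hypothetical
precipitation   = 0.02e12   # m^3/year, hypothetical
evaporation     = 0.45e12   # m^3/year, hypothetical

# Deficit left after river and rain inputs, to be supplied at the Strait.
deficit = evaporation - (river_discharge + precipitation)

# Net exchange at the Strait of Hormuz: saltier Gulf water exits along the
# bottom, fresher ocean water enters along the top; net inflow closes the budget.
strait_net_inflow = deficit
balance = (river_discharge + precipitation + strait_net_inflow) - evaporation

print(f"deficit: {deficit:.2e} m^3/yr, balance: {balance:.1f} m^3/yr")  # balance -> 0.0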
Three-dimensional computational fluid mechanics models, typically with fine spatial resolution and a fixed depth for each element, are predominantly used in computer modelling of the Gulf. The Persian Gulf and its coastal areas are the world's largest single source of petroleum, and related industries dominate the region. Safaniya Oil Field, the world's largest offshore oilfield, is located in the Persian Gulf. Large gas finds have also been made, with Qatar and Iran sharing a giant field across the territorial median line (North Field in the Qatari sector; South Pars Field in the Iranian sector). Using this gas, Qatar has built up a substantial liquefied natural gas (LNG) and petrochemical industry. In 2002, the Persian Gulf nations of Bahrain, Iran, Iraq, Kuwait, Qatar, Saudi Arabia, and the UAE produced about 25% of the world's oil, held nearly two-thirds of the world's crude oil reserves, and held about 35% of the world's natural gas reserves. The oil-rich countries (excluding Iraq) that have a coastline on the Persian Gulf are referred to as the "Persian Gulf States". Iraq's egress to the Persian Gulf is narrow and easily blockaded, consisting of the marshy river delta of the Shatt al-Arab, which carries the waters of the Euphrates and the Tigris rivers and whose east bank is held by Iran. In 550 BC, the Achaemenid Empire established the first ancient empire in Persis ("Pars", or modern "Fars"), in the southwestern region of the Iranian plateau. Consequently, in the Greek sources, the body of water that bordered this province came to be known as the "Persian Gulf". During the years 550 to 330 BC, coinciding with the sovereignty of the Achaemenid Persian Empire over the Middle East, especially over the whole of the Persian Gulf and parts of the Arabian Peninsula, the name "Pars Sea" is widely found in compiled written texts. In the travel account attributed to Pythagoras, several chapters describe his travels, in the company of the Achaemenid king Darius the Great, to Susa and Persepolis, and the area is described. From the writings of others in the same period, there is the inscription and engraving of Darius the Great, installed at the junction of the waters of the Red Sea, the Nile, and the "Rome river" (the present-day Mediterranean), dating from the 5th century BC, in which Darius the Great named the Persian Gulf water channel the "Pars Sea" ("Persian Sea"). Considering the historical background of the name Persian Gulf, Sir Arnold Wilson mentions in a book published in 1928 that "no water channel has been so significant as the Persian Gulf to the geologists, archaeologists, geographers, merchants, politicians, excursionists, and scholars whether in past or in present. This water channel which separates the Iran Plateau from the Arabia Plate, has enjoyed an Iranian Identity since at least 2200 years ago." Before being given its present name, the Persian Gulf was called many different names. The classical Greek writers, like Herodotus, called it "the Red Sea." In Babylonian texts, it was known as "the sea above Akkad." The name of this gulf, historically and internationally known as the Persian Gulf after the land of Persia (Iran), has been disputed by some Arab countries since the 1960s. Rivalry between Iran and some Arab states, along with the emergence of pan-Arabism and Arab nationalism, has seen the name "Arabian Gulf" become predominant in most Arab countries. Names beyond these two have also been applied to or proposed for this body of water.
The earliest evidence of human presence on the Persian Gulf islands dates back to the Middle Paleolithic and consists of stone tools discovered at Qeshm Island. The world's oldest known civilization (Sumer) developed along the Persian Gulf and in southern Mesopotamia. The shallow basin that now underlies the Persian Gulf was an extensive region of river valley and wetlands during the transition between the end of the Last Glacial Maximum and the start of the Holocene, which, according to University of Birmingham archaeologist Jeffrey Rose, served as an environmental refuge for early humans during periodic hyperarid climate oscillations, laying the foundations for the legend of Dilmun. For most of the early history of the settlements in the Persian Gulf, the southern shores were ruled by a series of nomadic tribes. During the end of the fourth millennium BC, the southern part of the Persian Gulf was dominated by the Dilmun civilization. For a long time the most important settlement on the southern coast of the Persian Gulf was Gerrha. In the 2nd century the Lakhum tribe, which lived in what is now Yemen, migrated north and founded the Lakhmid Kingdom along the southern coast. Occasional ancient battles took place along the Persian Gulf coastlines between the Sassanid Persian Empire and the Lakhmid Kingdom, the most prominent of which was the invasion led by Shapur II against the Lakhmids, leading to the Lakhmids' defeat and an advance into Arabia along the southern shorelines. During the 7th century the Sassanid Persian Empire conquered the whole of the Persian Gulf, including its southern and northern shores. Between 625 BC and 226 AD, the northern side was dominated by a succession of Persian empires including the Median, Achaemenid, Seleucid and Parthian empires. Under the leadership of the Achaemenid king Darius the Great (Darius I), Persian ships found their way to the Persian Gulf. Persian naval forces laid the foundation for a strong Persian maritime presence in the Persian Gulf, which began with Darius I and lasted until the arrival of the British East India Company and the Royal Navy by the mid-19th century AD. Persians were not only stationed on islands of the Persian Gulf, but also had ships, often of 100 to 200 capacity, patrolling the empire's various rivers, including the Shatt al-Arab, the Tigris, and the Nile in the west, as well as the Sind waterway in India. The Achaemenid high naval command had established major naval bases along the Shatt al-Arab river and in Bahrain, Oman, and Yemen. The Persian fleet would soon not only be used for peacekeeping purposes along the Shatt al-Arab but would also open the door to trade with India via the Persian Gulf. Following the fall of the Achaemenid Empire, and after the fall of the Parthian Empire, the Sassanid Empire ruled the northern half and at times the southern half of the Persian Gulf. The Persian Gulf, along with the Silk Road, was an important trade route of the Sassanid Empire. Many of the trading ports of the Persian empires were located in or around the Persian Gulf. Siraf, an ancient Sassanid port on the northern shore of the Gulf, in what is now the Iranian province of Bushehr, is an example of such a commercial port. Siraf was also significant in that it had a flourishing commercial trade with China by the 4th century, having first established connections with the Far East in 185 AD.
Portuguese influence in the Persian Gulf lasted for 250 years; from the beginning of the 16th century, however, Portuguese dominance of the Persian Gulf contended with the local powers and the Ottoman Empire. Following the arrival of the English and the Dutch, the Safavid Empire allied with the newcomers to contest Portuguese dominance of the seas in the 17th century. Portuguese expansion into the Indian Ocean in the early 16th century, following Vasco da Gama's voyages of exploration, saw them battle the Ottomans up the coast of the Persian Gulf. In 1521, a Portuguese force led by commander Antonio Correia invaded Bahrain to take control of the wealth created by its pearl industry. On April 29, 1602, Shāh Abbās, the Persian emperor of the Safavid Persian Empire, expelled the Portuguese from Bahrain, and that date is commemorated as National Persian Gulf Day in Iran. With the support of the British fleet, in 1622 'Abbās took the island of Hormuz from the Portuguese; much of the trade was diverted to the town of Bandar 'Abbās, which he had taken from the Portuguese in 1615 and had named after himself. The Persian Gulf was thereby opened to flourishing commerce with Portuguese, Dutch, French, Spanish and British merchants, who were granted particular privileges. The Ottoman Empire reasserted itself in Eastern Arabia in 1871. Under military and political pressure from the governor of the Ottoman Vilayet of Baghdad, Midhat Pasha, the ruling Al Thani tribe submitted peacefully to Ottoman rule. The Ottomans were forced to withdraw from the area with the start of World War I and the need for troops on various other frontiers. Piracy in the Persian Gulf was prevalent until the 19th century; many of the most notable historical instances of piracy were perpetrated by the Al Qasimi tribe. This led the British to mount the Persian Gulf campaign of 1819, which led in turn to the signing of the General Maritime Treaty of 1820 between the British and the Sheikhs of what was then known as the 'Pirate Coast'. From 1763 until 1971, the British Empire maintained varying degrees of political control over some of the Persian Gulf states, including the United Arab Emirates (originally called the Trucial States) and, at various times, Bahrain, Kuwait, Oman, and Qatar, through the British Residency of the Persian Gulf. In World War II, the Western Allies used Iran as a conduit to transport military and industrial supplies to the USSR, through a pathway known historically as the "Persian Corridor". Britain utilized the Persian Gulf as the entry point for the supply chain in order to make use of the Trans-Iranian Railway; the Persian Gulf therefore became a critical maritime path through which the Allies transported equipment to the Soviet Union against the Nazi invasion. The United States' role in the Persian Gulf grew in the second half of the twentieth century. On July 3, 1988, Iran Air Flight 655 was shot down by the U.S. military (which had mistaken the Airbus A300 operating the flight for an Iranian F-14 Tomcat) while it was flying over the Persian Gulf, killing all 290 people on board. The United Kingdom maintains a profile in the region; in 2006 alone, over 1 million British nationals visited Dubai. In 2018, the UK opened a permanent military base in the Persian Gulf, its first since it withdrew from East of Suez in 1971, and it is developing a support facility in Oman. The Persian Gulf is home to many islands, such as Bahrain, itself an Arab state.
Geographically the biggest island in the Persian Gulf is Qeshm Island, which lies in the Strait of Hormuz and belongs to Iran. Other significant islands in the Persian Gulf include Greater Tunb, Lesser Tunb and Kish, administered by Iran; Bubiyan, administered by Kuwait; Tarout, administered by Saudi Arabia; and Dalma, administered by the UAE. In recent years there has also been the addition of artificial islands as tourist attractions, such as The World Islands in Dubai and The Pearl-Qatar in Doha. Persian Gulf islands are often also historically significant, having been used in the past by colonial powers such as the Portuguese and the British in their trade or as acquisitions for their empires. Eight nations have coasts along the Persian Gulf: Bahrain, Iran, Iraq, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates. The Persian Gulf's strategic location has made it an ideal place for human development over time. Today, many major cities of the Middle East are located in this region. The wildlife of the Persian Gulf is diverse, and entirely unique due to the Gulf's geographic distribution and its isolation from international waters, breached only by the narrow Strait of Hormuz. The Persian Gulf has hosted some of the most magnificent marine fauna and flora, some of which are near extirpation or at serious environmental risk. From corals to dugongs, the Persian Gulf is a diverse cradle for many species that depend on each other for survival. However, the Persian Gulf is not as biologically diverse as the Red Sea. Overall, the wildlife of the Persian Gulf is endangered by both global factors and regional or local negligence. Most pollution is from ships; land-generated pollution is the second most common source of pollution. Along the mediterranean regions of the Arabian Sea, including the Persian Gulf, the Red Sea, the Gulf of Kutch, the Gulf of Suez, the Gulf of Aqaba, the Gulf of Aden, and the Gulf of Oman, dolphins and finless porpoises are the most common marine mammals in the waters, while larger whales and orcas are rarer today. Historically, whales were abundant in the Persian Gulf before commercial hunts greatly depleted them; whale numbers were reduced even further by illegal mass hunts by the Soviet Union and Japan in the 1960s and 1970s. Along with Bryde's whales,
https://en.wikipedia.org/wiki?curid=24761
P53 Tumor protein p53, also known as p53, cellular tumor antigen p53 (UniProt name), the Guardian of the Genome, phosphoprotein p53, tumor suppressor p53, antigen NY-CO-13, or transformation-related protein 53 (TRP53), is any isoform of a protein encoded by homologous genes in various organisms, such as "TP53" (humans) and "Trp53" (mice). This homolog (originally thought to be, and often spoken of as, a single protein) is crucial in multicellular vertebrates, where it prevents cancer formation, and thus functions as a tumor suppressor. As such, p53 has been described as "the guardian of the genome" because of its role in conserving stability by preventing genome mutation. Hence "TP53" is classified as a tumor suppressor gene. The name p53 was given in 1979, describing the apparent molecular mass; SDS-PAGE analysis indicates that it is a 53-kilodalton (kDa) protein. However, the actual mass of the full-length p53 protein (p53α), based on the sum of the masses of its amino acid residues, is only 43.7 kDa. This difference is due to the high number of proline residues in the protein, which slow its migration on SDS-PAGE, thus making it appear heavier than it actually is. In addition to the full-length protein, the human "TP53" gene encodes at least 15 protein isoforms, ranging in size from 3.5 to 43.7 kDa. All these p53 proteins are called the p53 isoforms. "TP53" is the most frequently mutated gene (>50%) in human cancer, indicating that it plays a crucial role in preventing cancer formation. The "TP53" gene encodes proteins that bind to DNA and regulate gene expression to prevent mutations of the genome. In humans, the "TP53" gene is located on the short arm of chromosome 17 (17p13.1). The gene spans 20 kb, with a non-coding exon 1 and a very long first intron of 10 kb. The coding sequence contains five regions showing a high degree of conservation in vertebrates, predominantly in exons 2, 5, 6, 7 and 8, but the sequences found in invertebrates show only distant resemblance to mammalian "TP53". "TP53" orthologs have been identified in most mammals for which complete genome data are available. In humans, a common polymorphism involves the substitution of an arginine for a proline at codon position 72. Many studies have investigated a genetic link between this variation and cancer susceptibility; however, the results have been controversial. For instance, a meta-analysis from 2009 failed to show a link for cervical cancer. A 2011 study found that the "TP53" proline mutation did have a profound effect on pancreatic cancer risk among males. A study of Arab women found that proline homozygosity at "TP53" codon 72 is associated with a decreased risk for breast cancer. One study suggested that "TP53" codon 72 polymorphisms, MDM2 SNP309, and A2164G may collectively be associated with non-oropharyngeal cancer susceptibility, and that MDM2 SNP309, in combination with "TP53" codon 72, may accelerate the development of non-oropharyngeal cancer in women. A 2011 study found that the "TP53" codon 72 polymorphism was associated with an increased risk of lung cancer. Meta-analyses from 2011 found no significant associations between "TP53" codon 72 polymorphisms and either colorectal cancer risk or endometrial cancer risk. A 2011 study of a Brazilian birth cohort found an association between the non-mutant arginine "TP53" variant and individuals without a family history of cancer.
Another 2011 study found that the p53 homozygous (Pro/Pro) genotype was associated with a significantly increased risk for renal cell carcinoma. Mutations that deactivate p53 in cancer usually occur in the DNA-binding domain (DBD). Most of these mutations destroy the ability of the protein to bind to its target DNA sequences, and thus prevent transcriptional activation of these genes. As such, mutations in the DBD are recessive loss-of-function mutations. Molecules of p53 with mutations in the oligomerisation domain (OD) dimerise with wild-type p53 and prevent it from activating transcription. Therefore, OD mutations have a dominant negative effect on the function of p53. Wild-type p53 is a labile protein, comprising folded and unstructured regions that function in a synergistic manner. p53 plays a role in the regulation of progression through the cell cycle, in apoptosis, and in genomic stability by means of several mechanisms, among them activation of "WAF1/CIP1", which encodes p21, and of hundreds of other downstream genes. p21 (WAF1) binds to the G1-S/CDK (CDK4/CDK6, CDK2, and CDK1) complexes (molecules important for the G1/S transition in the cell cycle), inhibiting their activity. When p21 (WAF1) is complexed with CDK2, the cell cannot continue to the next stage of cell division. A mutant p53 will no longer bind DNA in an effective way, and, as a consequence, the p21 protein will not be available to act as the "stop signal" for cell division. Studies of human embryonic stem cells (hESCs) commonly describe the nonfunctional p53-p21 axis of the G1/S checkpoint pathway, with subsequent relevance for cell cycle regulation and the DNA damage response (DDR). Importantly, p21 mRNA is clearly present and upregulated after the DDR in hESCs, but p21 protein is not detectable. In this cell type, p53 activates numerous microRNAs (like miR-302a, miR-302b, miR-302c, and miR-302d) that directly inhibit p21 expression in hESCs. The p21 protein binds directly to cyclin-CDK complexes that drive forward the cell cycle and inhibits their kinase activity, thereby causing cell cycle arrest to allow repair to take place. p21 can also mediate growth arrest associated with differentiation and a more permanent growth arrest associated with cellular senescence. The p21 gene contains several p53 response elements that mediate direct binding of the p53 protein, resulting in transcriptional activation of the gene encoding the p21 protein. The p53 and RB1 pathways are linked via p14ARF, raising the possibility that the pathways may regulate each other. p53 expression can be stimulated by UV light, which also causes DNA damage. In this case, p53 can initiate events leading to tanning. Levels of p53 play an important role in the maintenance of stem cells throughout development and the rest of human life. In human embryonic stem cells (hESCs), p53 is maintained at low inactive levels. This is because activation of p53 leads to rapid differentiation of hESCs. Studies have shown that knocking out p53 delays differentiation and that adding p53 causes spontaneous differentiation, showing how p53 promotes differentiation of hESCs and plays a key role in the cell cycle as a differentiation regulator. When p53 becomes stabilized and activated in hESCs, it increases p21 to establish a longer G1. This typically leads to abolition of S-phase entry, which stops the cell cycle in G1, leading to differentiation. Work in mouse embryonic stem cells has recently shown, however, that the expression of p53 does not necessarily lead to differentiation.
p53 also activates miR-34a and miR-145, which then repress the hESC pluripotency factors, further instigating differentiation. In adult stem cells, p53 regulation is important for the maintenance of stemness in adult stem cell niches. Signals such as hypoxia affect levels of p53 in these niche cells through the hypoxia-inducible factors HIF-1α and HIF-2α: while HIF-1α stabilizes p53, HIF-2α suppresses it. Suppression of p53 plays important roles in the cancer stem cell phenotype, in induced pluripotent stem cells, and in other stem cell roles and behaviors, such as blastema formation. Cells with decreased levels of p53 have been shown to reprogram into stem cells with a much greater efficiency than normal cells. Papers suggest that the lack of cell cycle arrest and apoptosis gives more cells the chance to be reprogrammed. Decreased levels of p53 were also shown to be a crucial aspect of blastema formation in the legs of salamanders. p53 regulation is very important in acting as a barrier between stem cells and a differentiated stem cell state, as well as a barrier between stem cells being functional and being cancerous. Apart from the cellular and molecular effects above, p53 has a tissue-level anticancer effect that works by inhibiting angiogenesis. As tumors grow they need to recruit new blood vessels to supply them, and p53 inhibits this by (i) interfering with regulators of tumor hypoxia that also affect angiogenesis, such as HIF1 and HIF2, (ii) inhibiting the production of angiogenesis-promoting factors, and (iii) directly increasing the production of angiogenesis inhibitors, such as arresten. By regulating leukemia inhibitory factor, p53 has also been shown to facilitate implantation in the mouse, and possibly in human reproduction. p53 becomes activated in response to myriad stressors, including but not limited to DNA damage (induced by UV, IR, or chemical agents such as hydrogen peroxide), oxidative stress, osmotic shock, ribonucleotide depletion, and deregulated oncogene expression. This activation is marked by two major events. First, the half-life of the p53 protein is increased drastically, leading to a quick accumulation of p53 in stressed cells. Second, a conformational change causes p53 to become activated as a transcription regulator in these cells. The critical event leading to the activation of p53 is the phosphorylation of its N-terminal domain. The N-terminal transcriptional activation domain contains a large number of phosphorylation sites and can be considered the primary target for protein kinases transducing stress signals. The protein kinases that are known to target this transcriptional activation domain of p53 can be roughly divided into two groups. A first group of protein kinases belongs to the MAPK family (JNK1-3, ERK1-2, p38 MAPK), which is known to respond to several types of stress, such as membrane damage, oxidative stress, osmotic shock, and heat shock. A second group of protein kinases (ATR, ATM, CHK1 and CHK2, DNA-PK, CAK, TP53RK) is implicated in the genome integrity checkpoint, a molecular cascade that detects and responds to several forms of DNA damage caused by genotoxic stress. Oncogenes also stimulate p53 activation, mediated by the protein p14ARF. In unstressed cells, p53 levels are kept low through continuous degradation of p53. A protein called Mdm2 (also called HDM2 in humans) binds to p53, preventing its action, and transports it from the nucleus to the cytosol.
Mdm2 also acts as a ubiquitin ligase, covalently attaching ubiquitin to p53 and thus marking p53 for degradation by the proteasome. However, ubiquitylation of p53 is reversible. On activation of p53, Mdm2 is also activated, setting up a feedback loop. p53 levels can show oscillations (or repeated pulses) in response to certain stresses, and these pulses can be important in determining whether the cells survive the stress or die. MI-63 binds to MDM2, reactivating p53 in situations where p53's function has become inhibited. A ubiquitin-specific protease, USP7 (or HAUSP), can cleave ubiquitin off p53, thereby protecting it from proteasome-dependent degradation via the ubiquitin ligase pathway. This is one means by which p53 is stabilized in response to oncogenic insults. USP42 has also been shown to deubiquitinate p53 and may be required for the ability of p53 to respond to stress. Recent research has shown that HAUSP is mainly localized in the nucleus, though a fraction of it can be found in the cytoplasm and mitochondria. Overexpression of HAUSP results in p53 stabilization. However, depletion of HAUSP does not result in a decrease in p53 levels but rather increases them, because HAUSP binds and deubiquitinates Mdm2. It has been shown that, in unstressed cells, HAUSP is a better binding partner for Mdm2 than for p53. USP10, by contrast, has been shown to be located in the cytoplasm in unstressed cells, where it deubiquitinates cytoplasmic p53, reversing Mdm2 ubiquitination. Following DNA damage, USP10 translocates to the nucleus and contributes to p53 stability; USP10 does not interact with Mdm2. Phosphorylation of the N-terminal end of p53 by the above-mentioned protein kinases disrupts Mdm2 binding. Other proteins, such as Pin1, are then recruited to p53 and induce a conformational change in it, which prevents Mdm2 binding even more effectively. Phosphorylation also allows for binding of transcriptional coactivators, like p300 and PCAF, which then acetylate the carboxy-terminal end of p53, exposing the DNA binding domain of p53 and allowing it to activate or repress specific genes. Deacetylase enzymes, such as Sirt1 and Sirt7, can deacetylate p53, leading to an inhibition of apoptosis. Some oncogenes can also stimulate the transcription of proteins that bind to MDM2 and inhibit its activity. If the "TP53" gene is damaged, tumor suppression is severely compromised. People who inherit only one functional copy of the "TP53" gene will most likely develop tumors in early adulthood, a disorder known as Li-Fraumeni syndrome. The "TP53" gene can also be modified by mutagens (chemicals, radiation, or viruses), increasing the likelihood of uncontrolled cell division. More than 50 percent of human tumors contain a mutation or deletion of the "TP53" gene. Loss of p53 creates genomic instability that most often results in an aneuploidy phenotype. Increasing the amount of p53 may seem a solution for the treatment of tumors or the prevention of their spreading. This, however, is not a usable method of treatment, since it can cause premature aging. Restoring endogenous normal p53 function holds some promise. Research has shown that this restoration can lead to regression of certain cancer cells without damaging other cells in the process. The way in which tumor regression occurs depends mainly on the tumor type. For example, restoration of endogenous p53 function in lymphomas may induce apoptosis, while cell growth may be reduced to normal levels.
Thus, pharmacological reactivation of p53 presents itself as a viable cancer treatment option. The first commercial gene therapy, Gendicine, was approved in China in 2003 for the treatment of head and neck squamous cell carcinoma. It delivers a functional copy of the p53 gene using an engineered adenovirus. Certain pathogens can also affect the p53 protein that the "TP53" gene expresses. One such example, human papillomavirus (HPV), encodes a protein, E6, which binds to the p53 protein and inactivates it. This mechanism, in synergy with the inactivation of the cell cycle regulator pRb by the HPV protein E7, allows for repeated cell division, manifested clinically as warts. Certain HPV types, in particular types 16 and 18, can also lead to progression from a benign wart to low- or high-grade cervical dysplasia, which are reversible forms of precancerous lesions. Persistent infection of the cervix over the years can cause irreversible changes leading to carcinoma in situ and eventually invasive cervical cancer. This results from the effects of HPV genes, particularly those encoding E6 and E7, which are the two viral oncoproteins that are preferentially retained and expressed in cervical cancers by integration of the viral DNA into the host genome. The p53 protein is continually produced and degraded in the cells of healthy people, resulting in a damped oscillation of its levels. The degradation of the p53 protein is associated with binding of MDM2. In a negative feedback loop, MDM2 itself is induced by the p53 protein. Mutant p53 proteins often fail to induce MDM2, causing p53 to accumulate at very high levels. Moreover, the mutant p53 protein itself can inhibit normal p53 protein levels. In some cases, single missense mutations in p53 have been shown to disrupt p53 stability and function. Suppression of p53 in human breast cancer cells has been shown to lead to increased CXCR5 chemokine receptor gene expression and activated cell migration in response to the chemokine CXCL13. One study found that the p53 and Myc proteins were key to the survival of chronic myeloid leukaemia (CML) cells; targeting the p53 and Myc proteins with drugs gave positive results in mice with CML. Most p53 mutations are detected by DNA sequencing. However, it is known that single missense mutations can have a large spectrum of functional effects, from rather mild to very severe. The large spectrum of cancer phenotypes due to mutations in the "TP53" gene is also supported by the fact that different isoforms of p53 proteins have different cellular mechanisms for prevention against cancer. Mutations in "TP53" can give rise to different isoforms, impairing their overall functionality in different cellular mechanisms and thereby extending the cancer phenotype from mild to severe. Recent studies show that p53 isoforms are differentially expressed in different human tissues, and that loss-of-function or gain-of-function mutations within the isoforms can cause tissue-specific cancer or provide cancer stem cell potential in different tissues. "TP53" mutation also affects energy metabolism, increasing glycolysis in breast cancer cells. The dynamics of p53 proteins, along with that of its antagonist Mdm2, indicate that the levels of p53, in units of concentration, oscillate as a function of time. This "damped" oscillation is both clinically documented and mathematically modelled. Mathematical models also indicate that the p53 concentration oscillates much faster once genotoxic stresses, such as double-strand breaks (DSBs) or UV radiation, are introduced to the system; a minimal simulation sketch of such a feedback model follows.
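The following is a hedged illustration of the kind of p53–Mdm2 negative-feedback model referred to above. The two-variable form and all rate constants are hypothetical choices made for demonstration; published models differ in detail (extra species, explicit delays, stochastic terms) but produce qualitatively similar damped pulses.

import numpy as np

def step(p53, mdm2, dt, stress=1.0):
    # p53 is produced at a constant (stress-scaled) rate and degraded by
    # Mdm2; Mdm2 is induced by p53 and decays on its own. All constants
    # are hypothetical, chosen so the equilibrium has complex eigenvalues.
    dp53 = 1.0 * stress - 1.5 * mdm2 * p53 / (0.5 + p53)
    dmdm2 = 1.0 * p53 - 1.0 * mdm2
    return p53 + dp53 * dt, mdm2 + dmdm2 * dt

p53, mdm2, dt = 0.1, 0.1, 0.01
trace = []
for _ in range(4000):
    p53, mdm2 = step(p53, mdm2, dt)
    trace.append(p53)
# 'trace' rises, overshoots, and settles toward equilibrium: a damped
# oscillation driven by the lag between p53 accumulation and Mdm2 induction.

Raising the stress parameter in the sketch shifts the equilibrium upward, loosely mirroring p53 accumulation after genotoxic stress.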
This supports and models the current understanding of p53 dynamics, where DNA damage induces p53 activation (see p53 regulation for more information). Current models can also be useful for modelling the mutations in p53 isoforms and their effects on p53 oscillation, thereby promoting "de novo" tissue-specific pharmacological drug discovery. p53 was identified in 1979 by Lionel Crawford, David P. Lane, Arnold Levine, and Lloyd Old, working at the Imperial Cancer Research Fund (UK), Princeton University/UMDNJ (Cancer Institute of New Jersey), and Memorial Sloan-Kettering Cancer Center, respectively. It had been hypothesized to exist earlier as the target of the SV40 virus, a strain that induces the development of tumors. The "TP53" gene from the mouse was first cloned by Peter Chumakov of the Russian Academy of Sciences in 1982, and independently in 1983 by Moshe Oren in collaboration with David Givol (Weizmann Institute of Science). The human "TP53" gene was cloned in 1984, and the full-length clone in 1985. It was initially presumed to be an oncogene due to the use of mutated cDNA following purification of tumor cell mRNA. Its role as a tumor suppressor gene was revealed in 1989 by Bert Vogelstein at the Johns Hopkins School of Medicine and Arnold Levine at Princeton University. Warren Maltzman, of the Waksman Institute of Rutgers University, first demonstrated that TP53 was responsive to DNA damage in the form of ultraviolet radiation. In a series of publications in 1991–92, Michael Kastan of Johns Hopkins University reported that TP53 was a critical part of a signal transduction pathway that helped cells respond to DNA damage. In 1993, p53 was voted "molecule of the year" by "Science" magazine. As with 95% of human genes, "TP53" encodes more than one protein. Several isoforms were discovered in 2005, and so far 12 human p53 isoforms have been identified (p53α, p53β, p53γ, ∆40p53α, ∆40p53β, ∆40p53γ, ∆133p53α, ∆133p53β, ∆133p53γ, ∆160p53α, ∆160p53β, ∆160p53γ). Furthermore, p53 isoforms are expressed in a tissue-dependent manner, and p53α is never expressed alone. The full-length p53 isoform proteins can be subdivided into different protein domains. Starting from the N-terminus, there are first the amino-terminal transactivation domains (TAD 1, TAD 2), which are needed to induce a subset of p53 target genes. This domain is followed by the proline-rich domain (PXXP), in which the motif PXXP is repeated (P is a proline and X can be any amino acid). It is required, among other functions, for p53-mediated apoptosis. Some isoforms lack the proline-rich domain, such as Δ133p53β,γ and Δ160p53α,β,γ; hence some isoforms of p53 do not mediate apoptosis, emphasizing the diversifying roles of the "TP53" gene. Afterwards there is the DNA-binding domain (DBD), which enables the protein's sequence-specific binding. The carboxyl-terminal domain completes the protein. It includes the nuclear localization signal (NLS), the nuclear export signal (NES) and the oligomerisation domain (OD). The NLS and NES are responsible for the subcellular regulation of p53. Through the OD, p53 can form a tetramer and then bind to DNA. Among the isoforms, some domains can be missing, but all of them share most of the highly conserved DNA-binding domain. The isoforms are formed by different mechanisms. The beta and gamma isoforms are generated by alternative splicing of intron 9, which leads to a different C-terminus.
Furthermore, the usage of an internal promoter in intron 4 yields the ∆133 and ∆160 isoforms, which lack the TAD domain and a part of the DBD. Moreover, alternative initiation of translation at codon 40 or codon 160 produces the ∆40p53 and ∆160p53 isoforms. Due to the isoformic nature of p53 proteins, there are several sources of evidence showing that mutations within the "TP53" gene that give rise to mutated isoforms are causative agents of various cancer phenotypes, from mild to severe (refer to the section on experimental analysis of p53 mutations for more details). p53 has also been shown to interact with a large number of other proteins.
https://en.wikipedia.org/wiki?curid=24762
Pointless topology In mathematics, pointless topology (also called point-free or pointfree topology, or locale theory) is an approach to topology that avoids mentioning points. Traditionally, a topological space consists of a set of points together with a "topology", a system of subsets called open sets that, with the operations of intersection and union, forms a lattice with certain properties. Point-free topology is instead based on the concept of a "realistic spot" rather than a point without extent. Spots can be joined (forming a complete lattice), and if a spot meets a join of others it has to meet some of the constituents, which, roughly speaking, leads to the distributive law $b \wedge \bigvee A = \bigvee \{\, b \wedge a : a \in A \,\}$. The basic concept is that of a frame, a complete lattice satisfying the distributive law above; frame homomorphisms respect all joins (in particular, the least element of the lattice) and finite meets (in particular, the greatest element of the lattice). Frames, together with frame homomorphisms, form a category. In classical topology, represented on a set $X$ by the system $\Omega(X)$ of open sets, $\Omega(X)$ (partially ordered by inclusion) is a frame, and if $f : X \to Y$ is a continuous map, then $\Omega(f) : \Omega(Y) \to \Omega(X)$ defined by $\Omega(f)(U) = f^{-1}[U]$ is a frame homomorphism. For sober spaces, such $\Omega(f)$ are precisely the frame homomorphisms $\Omega(Y) \to \Omega(X)$. Hence $\Omega$ is a full embedding of the category of sober spaces into the dual of the category of frames (usually called the category of locales). This justifies thinking of frames (locales) as generalized topological spaces. A frame is "spatial" if it is isomorphic to some $\Omega(X)$. There are plenty of non-spatial ones, and this fact has turned out to be helpful in several problems. The theory of frames and locales in the contemporary sense was initiated in the late 1950s (Charles Ehresmann, Jean Bénabou, Hugh Dowker, Dona Papert) and developed through the following decades (John Isbell, Peter Johnstone, Harold Simmons, Till Plewe, Japie Vermeulen, Steve Vickers) into a lively branch of topology, with applications in various fields, in particular also in theoretical computer science. It is possible to translate most concepts of point-set topology into the context of locales and prove analogous theorems. Regarding the advantages of the point-free approach, note, for example, that some important facts of classical topology that depend on choice principles become choice-free (that is, constructive), which is, in particular, appealing for computer science. Thus, for instance, products of compact locales are compact constructively, and completions of uniform locales are constructive. This can be useful if one works in a topos that does not have the axiom of choice. Other advantages include the much better behaviour of paracompactness, and the fact that subgroups of localic groups are always closed. Another point where locale theory and topology diverge strongly is the concept of subspaces versus sublocales: by Isbell's density theorem, every locale has a smallest dense sublocale, which has no equivalent at all in the realm of topological spaces. A general introduction to pointless topology is Johnstone's survey article, which is, in its own words, to be read as the trailer for his monograph "Stone Spaces" (which appeared in 1982 and can still be used for basic reference); more recent monographs, works on the relations with logic, and the respective chapters of general handbooks give more extensive treatments, histories, and bibliographies.
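For reference, the frame axioms sketched above can be written out compactly. The following LaTeX fragment is a minimal statement of the standard definition, not a quotation from any particular source.

% Definition of a frame and of the categories Frm and Loc.
A \emph{frame} is a complete lattice $L$ in which binary meets distribute
over arbitrary joins:
\[
  a \wedge \bigvee S \;=\; \bigvee \{\, a \wedge s : s \in S \,\}
  \qquad (a \in L,\ S \subseteq L).
\]
A \emph{frame homomorphism} $h \colon L \to M$ preserves arbitrary joins
(hence the least element $0$) and finite meets (hence the greatest element
$1$). Frames and frame homomorphisms form the category $\mathbf{Frm}$;
its opposite, $\mathbf{Loc} = \mathbf{Frm}^{\mathrm{op}}$, is the
category of locales.

For instance, the open-set lattice $\Omega(X)$ of any topological space satisfies these axioms, with $\bigvee$ given by union and $\wedge$ by intersection.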
https://en.wikipedia.org/wiki?curid=24764
Pizza Pizza is a savory dish of Italian origin consisting of a usually round, flattened base of leavened wheat-based dough topped with tomatoes, cheese, and often various other ingredients (such as anchovies, mushrooms, onions, olives, pineapple, meat, etc.), which is then baked at a high temperature, traditionally in a wood-fired oven. A small pizza is sometimes called a pizzetta. In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced and is eaten with the use of a knife and fork. In casual settings, however, it is cut into wedges to be eaten while held in the hand. The term "pizza" was first recorded in the 10th century in a Latin manuscript from the Southern Italian town of Gaeta in Lazio, on the border with Campania. Modern pizza was invented in Naples, and the dish and its variants have since become popular in many countries. It has become one of the most popular foods in the world and a common fast food item in Europe and North America, available at pizzerias (restaurants specializing in pizza), at restaurants offering Mediterranean cuisine, and via pizza delivery. Many companies sell ready-baked frozen pizzas to be reheated in an ordinary home oven. The "Associazione Verace Pizza Napoletana" (lit. True Neapolitan Pizza Association) is a non-profit organization founded in 1984 with headquarters in Naples that aims to promote traditional Neapolitan pizza. In 2009, upon Italy's request, Neapolitan pizza was registered with the European Union as a Traditional Speciality Guaranteed dish, and in 2017 the art of its making was included on UNESCO's list of intangible cultural heritage. The word "pizza" first appeared in a Latin text from the central Italian town of Gaeta, then still part of the Byzantine Empire, in 997 AD; the text states that a tenant of certain property is to give the bishop of Gaeta "duodecim pizze" ("twelve pizzas") every Christmas Day, and another twelve every Easter Sunday. Foods similar to pizza have been made since the Neolithic Age. Records of people adding other ingredients to bread to make it more flavorful can be found throughout ancient history. In the 6th century BC, the Persian soldiers of the Achaemenid Empire, during the rule of King Darius I, baked flatbreads with cheese and dates on top of their battle shields, and the ancient Greeks supplemented their bread with oils, herbs, and cheese. An early reference to a pizza-like food occurs in the Aeneid, when Celaeno, queen of the Harpies, foretells that the Trojans would not find peace until they are forced by hunger to eat their tables (Book III). In Book VII, Aeneas and his men are served a meal that includes round cakes (like pita bread) topped with cooked vegetables. When they eat the bread, they realize that these are the "tables" prophesied by Celaeno. In the 16th-century recipe collection of Italian chef Bartolomeo Scappi, there are recipes for pizzas made with puff pastry layers that include sugar and other sweet and savory ingredients. A 19th-century recipe for "pizza alla napoletana" is made with almonds, vanilla and short pastry, and is topped with icing sugar. Modern pizza evolved from similar flatbread dishes in Naples, Italy, in the 18th or early 19th century. Prior to that time, flatbread was often topped with ingredients such as garlic, salt, lard, and cheese. It is uncertain when tomatoes were first added, and there are many conflicting claims. Until about 1830, pizza was sold from open-air stands and out of pizza bakeries.
A popular contemporary legend holds that the archetypal pizza, "pizza Margherita", was invented in 1889, when the Royal Palace of Capodimonte commissioned the Neapolitan pizzaiolo (pizza maker) Raffaele Esposito to create a pizza in honor of the visiting Queen Margherita. Of the three different pizzas he created, the Queen strongly preferred a pizza swathed in the colors of the Italian flag: red (tomato), green (basil), and white (mozzarella). Supposedly, this kind of pizza was then named after the Queen, although later research cast doubt on this legend. An official letter of recognition from the Queen's "head of service" remains on display in Esposito's shop, now called the Pizzeria Brandi. Pizza was brought to the United States by Italian immigrants in the late nineteenth century and first appeared in areas where they concentrated. The country's first pizzeria, Lombardi's, opened in 1905. Following World War II, veterans returning from the Italian Campaign, who had been introduced to Italy's native cuisine, proved a ready market for pizza in particular. Pizza is sold fresh or frozen, and whole or as portion-size slices or pieces. Methods have been developed to overcome challenges such as preventing the sauce from combining with the dough and producing a crust that can be frozen and reheated without becoming rigid. There are frozen pizzas with raw ingredients and self-rising crusts. Another form of uncooked pizza is available from take and bake pizzerias. This pizza is assembled in the store, then sold to customers to bake in their own ovens. Some grocery stores sell fresh dough along with sauce and basic ingredients, to complete at home before baking in an oven. In restaurants, pizza can be baked in an oven with stone bricks above the heat source, an electric deck oven, a conveyor belt oven, or, in the case of more expensive restaurants, a wood- or coal-fired brick oven. In deck ovens, pizza can be slid into the oven on a long paddle, called a peel, and baked directly on the hot bricks, or baked on a screen (a round metal grate, typically aluminum). Prior to use, a peel may be sprinkled with cornmeal to allow pizza to easily slide onto and off of it. When made at home, pizza can be baked on a pizza stone in a regular oven to reproduce the effect of a brick oven; cooking directly in a metal oven results in too rapid heat transfer to the crust, burning it. Aficionado home chefs sometimes use a specialty wood-fired pizza oven, usually installed outdoors. Dome-shaped pizza ovens have been used for centuries, and are one way to achieve even heat distribution in a wood-fired pizza oven. Another option is grilled pizza, in which the crust is baked directly on a barbecue grill. Greek pizza, like Chicago-style pizza, is baked in a pan rather than directly on the bricks of the pizza oven. As for preparation, the dough and ingredients can be combined on any kind of table. With mass production of pizza, the process can be completely automated. Most restaurants still use standard and purpose-built pizza preparation tables; pizzerias can nowadays even opt for high-tech pizza preparation tables that combine mass-production elements with traditional techniques. The bottom of the pizza, called the "crust", may vary widely according to style: thin as in a typical hand-tossed Neapolitan pizza, or thick as in a deep-dish Chicago style. It is traditionally plain, but may also be seasoned with garlic or herbs, or stuffed with cheese.
The outer edge of the pizza is sometimes referred to as the "cornicione". Pizza dough often contains sugar, both to help its yeast rise and to enhance browning of the crust. Dipping sauce specifically for pizza was invented by the American pizza chain Papa John's Pizza in 1984 and has since become popular when eating pizza, especially the crust. Mozzarella is commonly used on pizza, with the highest-quality buffalo mozzarella produced in the surroundings of Naples. Eventually, other cheeses were used as pizza ingredients as well, particularly Italian cheeses including provolone, pecorino romano, ricotta, and scamorza. Less expensive processed cheeses or cheese analogues have been developed for mass-market pizzas to produce desirable qualities like browning, melting, stretchiness, consistent fat and moisture content, and stable shelf life. This quest to create the ideal and economical pizza cheese has involved many studies and experiments analyzing the impact of vegetable oil, manufacturing and culture processes, denatured whey proteins, and other changes in manufacture. In 1997, it was estimated that annual production of pizza cheese was in the U.S. and in Europe. Authentic Neapolitan pizza ("pizza napoletana") is made with San Marzano tomatoes, grown on the volcanic plains south of Mount Vesuvius, and mozzarella di bufala Campana, made with the milk of water buffalo raised in the marshlands of Campania and Lazio. This mozzarella is protected with its own European protected designation of origin. Other traditional pizzas include "pizza alla marinara", which is topped with marinara sauce and is supposedly the most ancient tomato-topped pizza; pizza capricciosa, which is prepared with mozzarella cheese, baked ham, mushroom, artichoke, and tomato; and pizza pugliese, prepared with tomato, mozzarella, and onions. A popular variant of pizza in Italy is Sicilian pizza (locally called "sfincione" or "sfinciuni"), a thick-crust or deep-dish pizza originating during the 17th century in Sicily: it is essentially a focaccia that is typically topped with tomato sauce and other ingredients. Until the 1860s, "sfincione" was the type of pizza usually consumed in Sicily, especially in the western portion of the island. Other variations of pizza are also found in other regions of Italy, for example "pizza al padellino" or "pizza al tegamino", a small-sized, thick-crust, deep-dish pizza typically served in Turin, Piedmont. 13% of the United States population consumes pizza on any given day. Pizza chains such as Domino's Pizza, Pizza Hut, and Papa John's, pizzas from take and bake pizzerias, and chilled or frozen pizzas from supermarkets make pizza readily available nationwide. The first pizzeria in the U.S. was opened in New York City's Little Italy in 1905. Common toppings for pizza in the United States include anchovies, ground beef, chicken, ham, mushrooms, olives, onions, peppers, pepperoni, pineapple, salami, sausage, spinach, steak, and tomatoes. Distinct regional types developed in the 20th century, including Buffalo, California, Chicago, Detroit, Greek, New Haven, New York, and St. Louis styles. These regional variations include deep-dish, stuffed, pockets, turnovers, rolled, and pizza-on-a-stick, each with seemingly limitless combinations of sauce and toppings.
Another variation is grilled pizza, created by taking a fairly thin, round (or, more typically, irregularly shaped) sheet of yeasted pizza dough, placing it directly over the fire of a grill, and then turning it over once the bottom has baked and placing a thin layer of toppings on the baked side. Toppings may be sliced thin to ensure that they heat through, and chunkier toppings such as sausage or peppers may be precooked before being placed on the pizza. Garlic, herbs, or other ingredients are sometimes added to the pizza or the crust to maximize the flavor of the dish. Grilled pizza was offered in the United States at the Al Forno restaurant in Providence, Rhode Island by owners Johanne Killeen and George Germon in 1980. Although it was inspired by a misunderstanding that confused a wood-fired brick oven with a grill, grilled pizza did exist prior to 1980, both in Italy and in Argentina, where it is known as "pizza a la parrilla". It has become a popular cookout dish, and there are even some pizza restaurants that specialize in the style. The traditional style of grilled pizza employed at the Al Forno restaurant uses a dough coated with olive oil, strained tomato sauce, thin slices of fresh mozzarella, and a garnish made from shaved scallions, and is served uncut. The final product can be likened to flatbread with pizza toppings. Another Providence establishment, Bob & Timmy's Grilled Pizza, was featured in a Providence-themed episode of the Travel Channel's "Man v. Food Nation" in 2011. Argentina, and more specifically Buenos Aires, received a massive Italian immigration at the turn of the 19th century. Immigrants from Naples and Genoa opened the first pizza bars, though over time Spanish residents came to own the majority of the pizza businesses. Standard Argentine pizza has a thicker crust, called "media masa" (half dough), than traditional Italian-style pizza, and includes more cheese. Argentine gastronomic tradition serves pizza with fainá, a Genovese chickpea-flour dough placed over the piece of pizza, and with moscato wine. The most popular variety of pizza is called "muzzarella" (mozzarella), similar to Neapolitan pizza (bread, tomato sauce and cheese) but made with a thicker "media masa" crust, triple cheese and tomato sauce, usually also with olives. It can be found in nearly every corner of the country, and Buenos Aires is considered the city with the most pizza bars per person in the world. Other popular varieties include ham, tomato slices, red pepper and longaniza. Two Argentine-born varieties of pizza with onion are also very popular: fugazza with cheese, and fugazzetta. The former consists of a regular pizza crust topped with cheese and onions; the latter has the cheese between two pizza crusts, with onions on top. The world's largest pizza was prepared in Rome in December 2012, and measured . The pizza was named "Ottavia" in homage to the first Roman emperor Octavian Augustus, and was made with a gluten-free base. The world's longest pizza was made in Fontana, California in 2017 and measured . The world's most expensive pizza listed by "Guinness World Records" is a commercially available thin-crust pizza at Maze restaurant in London, United Kingdom, which costs . The pizza is wood fire-baked, and is topped with onion puree, white truffle paste, fontina cheese, baby mozzarella, pancetta, cep mushrooms, freshly picked wild mizuna lettuce, and fresh shavings of a rare Italian white truffle.
There are several instances of more expensive pizzas, such as the "Pizza Royale 007" at Haggis restaurant in Glasgow, Scotland, which is topped with caviar, lobster, and 24-carat gold dust, and the caviar pizza made by Nino's Bellissima pizzeria in New York City, New York. However, these are not officially recognized by "Guinness World Records". Additionally, a pizza was made by the restaurateur Domenico Crolla that included toppings such as sunblush-tomato sauce, Scottish smoked salmon, medallions of venison, edible gold, lobster marinated in cognac, and champagne-soaked caviar. The pizza was auctioned for charity in 2007, raising . In 2017, the world pizza market was $128 billion, and in the US it was $44 billion, spread over 76,000 pizzerias. Overall, 13% of the U.S. population aged 2 years and over consumed pizza on any given day. A Technomic study concluded that 83% of consumers eat pizza at least once per month. According to PMQ, 60.47% of respondents in 2018 reported an increase in sales over the previous year. Some mass-produced pizzas by fast food chains have been criticized as having an unhealthy balance of ingredients. Pizza can be high in salt, fat, and calories (food energy). The USDA reports an average sodium content of 5,101 mg per pizza in fast food chains. There are concerns about negative health effects, and food chains have come under criticism at various times for the high salt content of some of their meals. Frequent pizza eaters in Italy have been found to have a relatively low incidence of cardiovascular disease and digestive tract cancers relative to infrequent pizza eaters, although the nature of the association between pizza and such perceived benefits is unclear. Pizza consumption in Italy might only indicate adherence to traditional Mediterranean dietary patterns, which have been shown to have various health benefits. Some attribute the apparent health benefits of pizza to the lycopene content in pizza sauce, which research indicates likely plays a role in protecting against cardiovascular disease and various cancers. National Pizza Month is an annual observance that occurs during the month of October in the United States and some areas of Canada. The observance began in October 1984 and was created by Gerry Durnell, the publisher of "Pizza Today" magazine. During this time, some people observe National Pizza Month by consuming various types of pizzas or pizza slices, or by going to various pizzerias.
https://en.wikipedia.org/wiki?curid=24768
Phase modulation Phase modulation (PM) is a modulation pattern for conditioning communication signals for transmission. It encodes a message signal as variations in the instantaneous phase of a carrier wave. Phase modulation is one of the two principal forms of angle modulation, together with frequency modulation. The phase of a carrier signal is modulated to follow the changing signal level (amplitude) of the message signal. The peak amplitude and the frequency of the carrier signal are maintained constant, but as the amplitude of the message signal changes, the phase of the carrier changes correspondingly. Phase modulation is widely used for transmitting radio waves and is an integral part of many digital transmission coding schemes that underlie a wide range of technologies such as Wi-Fi, GSM and satellite television. PM is used for signal and waveform generation in digital synthesizers, such as the Yamaha DX7, to implement FM synthesis. A related type of sound synthesis called phase distortion is used in the Casio CZ synthesizers.

PM changes the phase angle of the complex envelope in direct proportion to the message signal. If m(t) is the message signal to be transmitted and the carrier onto which the signal is modulated is

c(t) = A_c sin(ω_c t + φ_c),

then the modulated signal is

y(t) = A_c sin(ω_c t + m(t) + φ_c).

This shows how m(t) modulates the phase: the greater m(t) is at a point in time, the greater the phase shift of the modulated signal at that point. It can also be viewed as a change of the frequency of the carrier signal, and phase modulation can thus be considered a special case of FM in which the carrier frequency modulation is given by the time derivative of the phase modulation. The modulating signal here could be, for example, a simple sinusoid m(t) = cos(ω_m t). The mathematics of the spectral behavior reveals that there are two regions of particular interest: for small-amplitude message signals, PM behaves similarly to amplitude modulation (AM), exhibiting a doubling of baseband bandwidth and poor efficiency; for a single large sinusoidal message signal, PM behaves similarly to FM, and its bandwidth is approximately 2(h + 1)f_M, where f_M is the highest frequency in the modulating signal and h is the modulation index. As with other modulation indices, this quantity indicates by how much the modulated variable varies around its unmodulated level. It relates to the variations in the phase of the carrier signal:

h = Δθ,

where Δθ is the peak phase deviation. Compare this to the modulation index for frequency modulation.
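To make the relationship between PM and FM concrete, the short sketch below generates a phase-modulated carrier and recovers its instantaneous frequency as the time derivative of the phase term. It is a minimal illustration using NumPy only; the sample rate, carrier parameters and the phase-sensitivity constant k_p are assumed values chosen for the example, not taken from the article.

```python
# A minimal sketch of phase modulation: the carrier phase is shifted in
# direct proportion to the message m(t). All parameter values are assumed.
import numpy as np

fs = 48_000                       # sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of samples

f_c, A_c, phi_c = 5_000, 1.0, 0.0   # carrier frequency, amplitude, phase
f_m = 200                           # message frequency (Hz)
k_p = 1.5                           # phase sensitivity (rad per unit of m)

m = np.cos(2 * np.pi * f_m * t)                          # message signal
y = A_c * np.sin(2 * np.pi * f_c * t + k_p * m + phi_c)  # PM signal

# PM viewed as FM: the instantaneous frequency deviation is the time
# derivative of the phase term, k_p * dm/dt, divided by 2*pi.
inst_freq = f_c + k_p * np.gradient(m, t) / (2 * np.pi)
print(inst_freq.min(), inst_freq.max())  # swings around the 5 kHz carrier
```

Running this shows the instantaneous frequency swinging around the carrier by roughly k_p times the message frequency, which is exactly the "special case of FM" reading described above.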
https://en.wikipedia.org/wiki?curid=24772
Piston A piston is a component of reciprocating engines, reciprocating pumps, gas compressors and pneumatic cylinders, among other similar mechanisms. It is the moving component that is contained by a cylinder and is made gas-tight by piston rings. In an engine, its purpose is to transfer force from expanding gas in the cylinder to the crankshaft via a piston rod and/or connecting rod. In a pump, the function is reversed and force is transferred from the crankshaft to the piston for the purpose of compressing or ejecting the fluid in the cylinder. In some engines, the piston also acts as a valve by covering and uncovering ports in the cylinder.

The piston of an internal combustion engine is acted upon by the pressure of the expanding combustion gases in the combustion chamber space at the top of the cylinder. This force then acts downwards through the connecting rod and onto the crankshaft. The connecting rod is attached to the piston by a swivelling gudgeon pin (US: wrist pin). This pin is mounted within the piston: unlike the steam engine, there is no piston rod or crosshead (except in big two-stroke engines). This type of piston is widely used in automotive diesel engines; its shape and proportions can be varied according to the purpose, supercharging level and working conditions of the engine. High-power diesel engines work in difficult conditions: maximum pressure in the combustion chamber can reach 20 MPa, and the maximum temperature of some piston surfaces can exceed 450 °C. Piston cooling can be improved by creating a special cooling cavity inside the piston, which an injector supplies with oil through an oil supply channel. For effective temperature reduction, the design should be carefully calculated and analyzed; the oil flow through the cooling cavity should be no less than 80% of the oil flow through the injector.

The pin itself is of hardened steel and is fixed in the piston, but free to move in the connecting rod. A few designs use a 'fully floating' arrangement that is loose in both components. All pins must be prevented from moving sideways and from having their ends dig into the cylinder wall, usually by circlips.

Gas sealing is achieved by the use of piston rings. These are a number of narrow iron rings fitted loosely into grooves in the piston, just below the crown. The rings are split at a point in the rim, allowing them to press against the cylinder with a light spring pressure. Two types of ring are used: the upper rings have solid faces and provide gas sealing; the lower rings have narrow edges and a U-shaped profile, to act as oil scrapers. There are many proprietary and detailed design features associated with piston rings.

Pistons are cast from aluminium alloys. For better strength and fatigue life, some racing pistons may be forged instead. Billet pistons are also used in racing engines because they do not rely on the size and architecture of available forgings, allowing for last-minute design changes. Although not commonly visible to the naked eye, pistons are designed with a certain level of ovality and profile taper, meaning they are not perfectly round, and their diameter is larger near the bottom of the skirt than at the crown. Early pistons were of cast iron, but there were obvious benefits for engine balancing if a lighter alloy could be used. To produce pistons that could survive engine combustion temperatures, it was necessary to develop new alloys such as Y alloy and Hiduminium, specifically for use as pistons.
A few early gas engines had double-acting cylinders, but otherwise effectively all internal combustion engine pistons are single-acting. During World War II, the US submarine "Pompano" was fitted with a prototype of the infamously unreliable H.O.R. double-acting two-stroke diesel engine. Although compact, suiting use in a cramped submarine, this design of engine was not repeated.

Trunk pistons are long relative to their diameter. They act both as a piston and as a cylindrical crosshead. As the connecting rod is angled for much of its rotation, there is also a side force that reacts along the side of the piston against the cylinder wall. A longer piston helps to support this. Trunk pistons have been a common design of piston since the early days of the reciprocating internal combustion engine. They were used for both petrol and diesel engines, although high-speed engines have now adopted the lighter-weight slipper piston. A characteristic of most trunk pistons, particularly for diesel engines, is that they have a groove for an oil ring below the gudgeon pin, in addition to the rings between the gudgeon pin and the crown. The name 'trunk piston' derives from the 'trunk engine', an early design of marine steam engine. To make these engines more compact, they avoided the steam engine's usual piston rod with separate crosshead, and were instead the first engine design to place the gudgeon pin directly within the piston. Otherwise these trunk engine pistons bore little resemblance to the trunk piston; they were extremely large in diameter and double-acting. Their 'trunk' was a narrow cylinder mounted in the centre of the piston.

Large slow-speed diesel engines may require additional support for the side forces on the piston. These engines typically use crosshead pistons. The main piston has a large piston rod extending downwards from the piston to what is effectively a second smaller-diameter piston. The main piston is responsible for gas sealing and carries the piston rings. The smaller piston is purely a mechanical guide. It runs within a small cylinder as a trunk guide and also carries the gudgeon pin. Lubrication of the crosshead has advantages over the trunk piston, as its lubricating oil is not subject to the heat of combustion: the oil is not contaminated by combustion soot particles, it does not break down owing to the heat, and a thinner, less viscous oil may be used. The friction of both piston and crosshead may be only half of that for a trunk piston. Because of the additional weight of these pistons, they are not used for high-speed engines.

A slipper piston is a piston for a petrol engine that has been reduced in size and weight as much as possible. In the extreme case, it is reduced to the piston crown, support for the piston rings, and just enough of the piston skirt remaining to leave two lands so as to stop the piston rocking in the bore. The sides of the piston skirt around the gudgeon pin are reduced away from the cylinder wall. The purpose is mostly to reduce the reciprocating mass, thus making it easier to balance the engine and so permit high speeds. In racing applications, slipper piston skirts can be configured to yield extremely light weight while maintaining the rigidity and strength of a full skirt. Reduced inertia also improves the mechanical efficiency of the engine: the forces required to accelerate and decelerate the reciprocating parts cause more piston friction with the cylinder wall than the fluid pressure on the piston head.
A secondary benefit may be some reduction in friction with the cylinder wall, since the area of the skirt, which slides up and down in the cylinder, is reduced by half. However, most friction is due to the piston rings, which are the parts that actually fit most tightly in the bore, and to the bearing surfaces of the wrist pin, so the benefit is reduced.

Deflector pistons are used in two-stroke engines with crankcase compression, where the gas flow within the cylinder must be carefully directed in order to provide efficient scavenging. With cross scavenging, the transfer (inlet to the cylinder) and exhaust ports are on directly facing sides of the cylinder wall. To prevent the incoming mixture passing straight across from one port to the other, the piston has a raised rib on its crown. This is intended to deflect the incoming mixture upwards, around the combustion chamber. Much effort, and many different designs of piston crown, went into developing improved scavenging. The crowns developed from a simple rib to a large asymmetric bulge, usually with a steep face on the inlet side and a gentle curve on the exhaust. Despite this, cross scavenging was never as effective as hoped. Most engines today use Schnuerle porting instead. This places a pair of transfer ports in the sides of the cylinder and encourages gas flow to rotate around a vertical axis, rather than a horizontal axis.

In racing engines, piston strength and stiffness are typically much higher than those of a passenger car engine, while the weight is much less, to achieve the high engine RPM necessary in racing.

Steam engines are usually double-acting (i.e. steam pressure acts alternately on each side of the piston), and the admission and release of steam is controlled by slide valves, piston valves or poppet valves. Consequently, steam engine pistons are nearly always comparatively thin discs: their diameter is several times their thickness. (One exception is the trunk engine piston, shaped more like those in a modern internal-combustion engine.) Another factor is that since almost all steam engines use crossheads to translate the force to the drive rod, there are few lateral forces acting to "rock" the piston, so a cylinder-shaped piston skirt is not necessary.

Piston pumps can be used to move liquids or compress gases. There are two special types of pistons used in air cannons: close-tolerance pistons and double pistons. In close-tolerance pistons, O-rings serve as a valve, but O-rings are not used in double-piston types.
https://en.wikipedia.org/wiki?curid=24776
Five precepts The five precepts, or five rules of training, are the most important system of morality for Buddhist lay people. They constitute the basic code of ethics to be undertaken by lay followers of Buddhism. The precepts are commitments to abstain from killing living beings, stealing, sexual misconduct, lying and intoxication. Within the Buddhist doctrine, they are meant to develop mind and character to make progress on the path to enlightenment. They are sometimes referred to as the śrāvakayāna precepts in the Mahāyāna tradition, contrasting them with the "bodhisattva" precepts. The five precepts form the basis of several parts of Buddhist doctrine, both lay and monastic. With regard to their fundamental role in Buddhist ethics, they have been compared with the ten commandments in Abrahamic religions or the ethical codes of Confucianism. The precepts have been connected with utilitarianist, deontological and virtue approaches to ethics, though by 2017 such categorization by western terminology had mostly been abandoned by scholars. The precepts have been compared with human rights because of their universal nature, and some scholars argue they can complement the concept of human rights.

The five precepts were common to the religious milieu of 6th-century BCE India, but the Buddha's focus on awareness through the fifth precept was unique. As shown in Early Buddhist Texts, the precepts grew to be more important, and finally became a condition for membership of the Buddhist religion. When Buddhism spread to different places and people, the role of the precepts began to vary. In countries where Buddhism had to compete with other religions, such as China, the ritual of undertaking the five precepts developed into an initiation ceremony to become a Buddhist lay person. On the other hand, in countries with little competition from other religions, such as Thailand, the ceremony has had little relation to the rite of becoming Buddhist, as many people are presumed Buddhist from birth.

Undertaking and upholding the five precepts is based on the principle of non-harming. The Pali Canon recommends comparing oneself with others, and on the basis of that, not hurting others. Compassion and a belief in karmic retribution form the foundation of the precepts. Undertaking the five precepts is part of regular lay devotional practice, both at home and at the local temple. However, the extent to which people keep them differs per region and time. People keep them with an intention to develop themselves, but also out of fear of a bad rebirth.

The first precept consists of a prohibition of killing, both of humans and of all animals. Scholars have interpreted Buddhist texts about the precepts as an opposition to and prohibition of capital punishment, suicide, abortion and euthanasia. In practice, however, many Buddhist countries still use the death penalty. With regard to abortion, Buddhist countries take the middle ground, by condemning though not prohibiting it. The Buddhist attitude to violence is generally interpreted as opposing all warfare, but some scholars have raised exceptions. The second precept prohibits theft. The third precept refers to adultery in all its forms, and has been defined by modern teachers with terms such as "sexual responsibility" and "long-term commitment". The fourth precept involves falsehood spoken or committed to by action, as well as malicious speech, harsh speech and gossip.
The fifth precept prohibits intoxication through alcohol, drugs or other means. Early Buddhist Texts nearly always condemn alcohol, and so do Chinese Buddhist post-canonical texts. Buddhist attitudes toward smoking differ per time and region, but are generally permissive. In modern times, traditional Buddhist countries have seen revival movements to promote the five precepts. As for the West, the precepts play a major role in Buddhist organizations. They have also been integrated into mindfulness training programs, though many mindfulness specialists do not support this because of the precepts' religious import. Lastly, many conflict prevention programs make use of the precepts.

Buddhist scriptures explain the five precepts as the minimal standard of Buddhist morality. They are the most important system of morality in Buddhism, together with the monastic rules. "Śīla" (Sanskrit) is used to refer to Buddhist precepts, including the five. But the word also refers to the virtue and morality which lie at the foundation of the spiritual path to enlightenment, and which constitute the first of the three forms of training on the path. Thus, the precepts are rules or guidelines to develop mind and character to make progress on the path to enlightenment. The five precepts are part of the right speech, action and livelihood aspects of the Noble Eightfold Path, the core teaching of Buddhism. Moreover, the practice of the five precepts and other parts of "śīla" are described as forms of merit-making, means to create good karma. The five precepts have been described as social values that bring harmony to society, and breaches of the precepts as antithetical to a harmonious society. On a similar note, in Buddhist texts, the ideal, righteous society is one in which people keep the five precepts.

Comparing different parts of Buddhist doctrine, the five precepts form the basis of the eight precepts, which are lay precepts stricter than the five precepts, similar to monastic precepts. Secondly, the five precepts form the first half of the ten or eleven precepts for a person aiming to become a Buddha ("bodhisattva"), as mentioned in the "Brahmajala Sūtra" of the Mahāyāna tradition. In contrast with these precepts, the five precepts were commonly referred to by Mahāyānists as the "śrāvakayāna" precepts, or the precepts of those aiming to become enlightened disciples of a Buddha, but not Buddhas themselves. The ten or eleven "bodhisattva" precepts presuppose the five precepts, and are partly based on them. The five precepts are also partly found in the teaching called the ten good courses of action, referred to in Theravāda and Tibetan Buddhism. Finally, the first four of the five precepts are very similar to the most fundamental rules of monastic discipline, and may have influenced their development. In conclusion, the five precepts lie at the foundation of all Buddhist practice, and in that respect can be compared with the ten commandments in Christianity and Judaism or the ethical codes of Confucianism.

The five precepts were part of early Buddhism and are common to nearly all schools of Buddhism. In early Buddhism, the five precepts were regarded as an ethic of restraint, to restrain unwholesome tendencies and thereby purify one's being to attain enlightenment. The five precepts were based on the "pañcaśīla", prohibitions for pre-Buddhist Brahmanic priests, which were adopted in many Indic religions around the 6th century BCE.
The first four Buddhist precepts were nearly identical to these "pañcaśīla", but the fifth precept, the prohibition on intoxication, was new in Buddhism: the Buddha's emphasis on awareness was unique. In some schools of ancient Indic Buddhism, Buddhist devotees could choose to adhere to only some of the precepts, instead of the complete five. The schools that would survive in later periods, however, that is Theravāda and Mahāyāna Buddhism, were both ambiguous about this practice. Some early Mahāyāna texts allow it, but some do not; Theravāda texts do not discuss this practice at all.

The prohibition on killing had motivated early Buddhists to form a stance against animal sacrifice, a common ritual practice in ancient India. According to the Pāli Canon, however, early Buddhists did not adopt a vegetarian lifestyle.

In Early Buddhist Texts, the role of the five precepts gradually develops. First of all, the precepts are combined with a declaration of faith in the triple gem (the Buddha, his teaching and the monastic community). Next, the precepts develop to become the foundation of lay practice. The precepts are seen as a preliminary condition for the higher development of the mind. At a third stage in the texts, the precepts are actually mentioned together with the triple gem, as though they are part of it. Lastly, the precepts, together with the triple gem, become a required condition for the practice of Buddhism, as lay people have to undergo a formal initiation to become a member of the Buddhist religion. When Buddhism spread to different places and people, the role of the precepts began to vary. In countries in which Buddhism was adopted as the main religion without much competition from other religious disciplines, such as Thailand, the relation between the initiation of a lay person and the five precepts has been virtually non-existent. In such countries, the taking of the precepts has become a sort of ritual cleansing ceremony. People are presumed Buddhist from birth without much of an initiation. The precepts are often committed to by new followers as part of their installment, yet this is not very pronounced. However, in some countries like China, where Buddhism was not the only religion, the precepts became an ordination ceremony to initiate lay people into the Buddhist religion.

In China, the five precepts were introduced in the first centuries CE, both in their "śrāvakayāna" and "bodhisattva" formats. During this time, it was particularly Buddhist teachers who promoted abstinence from alcohol (the fifth precept), since Daoism and other thought systems emphasized moderation rather than full abstinence. Chinese Buddhists interpreted the fifth precept strictly, even more so than in Indic Buddhism. For example, the monk Daoshi (c. 600–683) dedicated large sections of his encyclopedic writings to abstinence from alcohol. However, in some parts of China, such as Dunhuang, considerable evidence has been found of alcohol consumption among both lay people and monastics. Later, from the 8th century onward, strict attitudes of abstinence led to the development of a distinct tea culture among Chinese monastics and lay intellectuals, in which tea gatherings replaced gatherings with alcoholic beverages, and were advocated as such. These strict attitudes were formed partly because of the religious writings, but may also have been affected by the bloody An Lushan Rebellion of 755, which had a sobering effect on 8th-century Chinese society.
When the five precepts were integrated into Chinese society, they were associated and connected with karma, Chinese cosmology and medicine, a Daoist worldview, and Confucian virtue ethics.

In the Theravāda tradition, the precepts are recited in a standardized fashion, using the Pāli language. In Thailand, a leading lay person will normally request the monk to administer the precepts by reciting a set formula three times. After this, the monk administering the precepts will recite a reverential line of text to introduce the ceremony, after which he guides the lay people in declaring that they take their refuge in the three refuges or triple gem. He then continues by reciting the five precepts, which the lay people repeat after him, and closes the ceremony with a final recitation. The format of the ceremony for taking the precepts occurs several times in the Chinese Buddhist Canon, in slightly different forms; one formula of the precepts can be found in the "Treatise on Taking Refuge and the Precepts". Similarly, in the Mūla-Sarvāstivāda texts used in Tibetan Buddhism, the precepts are formulated such that one takes the precepts upon oneself for one's entire lifespan, following the examples of the enlightened disciples of the Buddha ("arahant").

The five precepts can be found in many places in the Early Buddhist Texts. The precepts are regarded as means to building good character, or as an expression of such character. The Pāli Canon describes them as means to avoid harm to oneself and others. It further describes them as gifts toward oneself and others. Moreover, the texts say that people who uphold them will be confident in any gathering of people, will have wealth and a good reputation, and will die a peaceful death, reborn in heaven or as a human being. On the other hand, living a life in violation of the precepts is believed to lead to rebirth in an unhappy destination. The precepts are understood as principles that define a person as human in body and mind.

The precepts are normative rules, but are formulated and understood as "undertakings" rather than commandments enforced by a moral authority, in keeping with the voluntary and gradualist standards of Buddhist ethics. They are forms of restraint formulated in negative terms, but are also accompanied by virtues and positive behaviors, which are cultivated through the practice of the precepts. The most important of these virtues is non-harming, which underlies all of the five precepts. Precisely, the texts say that one should keep the precepts, adhering to the principle of comparing oneself with others: all living beings are alike in that they want to be happy and not to suffer. Comparing oneself with others, one should therefore not hurt others as one would not want to be hurt. Ethicist Pinit Ratanakul argues that the compassion which motivates upholding the precepts comes from an understanding that all living beings are equal and of a nature that they are 'not-self'. Another aspect that is fundamental to this is the belief in karmic retribution.

In the upholding or violation of the precepts, intention is crucial. In the Pāli scriptures, an example is mentioned of a person stealing an animal only to set it free, which was not seen as an offense of theft. In the Pāli commentaries, a precept is understood to be violated when the person violating it finds the object of the transgression (e.g.
things to be stolen), is aware of the violation, has the intention to violate it, actually acts on that intention, and does so successfully.

Upholding the precepts is sometimes distinguished into three levels: upholding them without having formally undertaken them; upholding them formally, being willing to sacrifice one's own life for them; and finally, upholding them spontaneously. The last refers to the "arahant", who is understood to be morally incapable of violating the first four precepts. A layperson who upholds the precepts is described in the texts as a "jewel among laymen". On the other hand, the most serious violations of the precepts are the five actions of immediate retribution, which are believed to lead the perpetrator to an unavoidable rebirth in hell. These consist of injuring a Buddha, killing an "arahant", killing one's father, killing one's mother, and causing the monastic community to have a schism.

Lay followers often undertake these training rules in the same ceremony as they take the refuges. Monks administer the precepts to the laypeople, which creates an additional psychological effect. Buddhist lay people may recite the precepts regularly at home, and before an important ceremony at the temple, to prepare the mind for the ceremony.

The five precepts are at the core of Buddhist morality. In field studies in some countries like Sri Lanka, villagers describe them as the core of the religion. Anthropologist Barend Terwiel found in his fieldwork that most Thai villagers knew the precepts by heart, and many, especially the elderly, could explain the implications of the precepts following traditional interpretations. Nevertheless, Buddhists do not all follow them with the same strictness. Devotees who have just started keeping the precepts will typically have to exercise considerable restraint. When they become used to the precepts, they start to embody them more naturally. Researchers doing field studies in traditional Buddhist societies have found that the five precepts are generally considered demanding and challenging. For example, anthropologist Stanley Tambiah found in his field studies that strict observance of the precepts had "little positive interest for the villager ... not because he devalues them but because they are not normally open to him". Observing the precepts was seen to be mostly the role of a monk or an elderly lay person. More recently, in a 1997 survey in Thailand, only 13.8% of the respondents indicated that they adhered to the five precepts in their daily lives, with the fourth and fifth precepts least likely to be adhered to. Yet people do consider the precepts worth striving for, and do uphold them out of fear of bad karma and of being reborn in hell, or because they believe that the Buddha issued these rules and that they should therefore be maintained. Anthropologist Melford Spiro found that Burmese Buddhists mostly upheld the precepts to avoid bad karma, as opposed to expecting to gain good karma. Scholar of religion Winston King observed from his field studies that the moral principles of Burmese Buddhists were based on personal self-developmental motives rather than other-regarding motives. Scholar of religion Richard Jones concludes that the moral motives of Buddhists in adhering to the precepts are based on the idea that renouncing self-service, ironically, serves oneself.

In East Asian Buddhism, the precepts are intrinsically connected with the initiation as a Buddhist lay person.
Early Chinese translations such as the "Upāsaka-śila Sūtra" hold that the precepts should only be ritually transmitted by a monastic. The texts describe that in the ritual the power of the Buddhas and "bodhisattvas" is transmitted, which helps the initiate to keep the precepts. This "lay ordination" ritual usually occurs after a stay in a temple, and often after a monastic ordination has taken place. The ordained lay person is then given a religious name. The restrictions that apply are similar to those of a monastic ordination, such as requiring permission from parents.

In the Theravāda tradition, the precepts are usually taken "each separately", to indicate that if one precept should be broken, the other precepts are still intact. On very solemn occasions, or for very pious devotees, the precepts may be taken as a group rather than each separately. This does not mean, however, that only some of the precepts can be undertaken; they are always committed to as a complete set. In East Asian Buddhism, however, the vow of taking the precepts is considered a solemn matter, and it is not uncommon for lay people to undertake only the precepts that they are confident they can keep. The act of taking a vow to keep the precepts is what makes it karmically effective: Spiro found that someone who did not violate the precepts, but did not have any intention to keep them either, was not believed to accrue any religious merit. On the other hand, when people took a vow to keep the precepts and then broke them afterwards, the negative karma was considered greater than if no vow had been taken.

Several modern teachers such as Thich Nhat Hanh and Sulak Sivaraksa have written about the five precepts in a wider scope, with regard to social and institutional relations. In these perspectives, mass production of weapons or spreading untruth through media and education also violates the precepts. On a similar note, human rights organizations in Southeast Asia have attempted to advocate respect for human rights by referring to the five precepts as guiding principles.

The first precept prohibits the taking of the life of a sentient being. It is violated when someone intentionally and successfully kills such a sentient being, having understood it to be sentient and using effort in the process. Causing injury goes against the spirit of the precept but does, technically speaking, not violate it. The first precept includes the taking of the lives of animals, even small insects. However, it has also been pointed out that the seriousness of taking a life depends on the size, intelligence, benefits done and spiritual attainments of the living being. Killing a large animal is worse than killing a small animal (also because it costs more effort); killing a spiritually accomplished master is regarded as more severe than the killing of another, "more average", human being; and killing a human being is more severe than the killing of an animal. But all killing is condemned. Virtues that accompany this precept are respect for the dignity of life, kindness and compassion, the latter expressed as "trembling for the welfare of others". A positive behavior that goes together with this precept is protecting living beings. Positive virtues like sympathy and respect for other living beings in this regard are based on a belief in the cycle of rebirth: that all living beings must be born and reborn. The concept of the fundamental Buddha nature of all human beings also underlies the first precept.
The description of the first precept can be interpreted as a prohibition of capital punishment. Suicide is also seen as part of the prohibition. Moreover, abortion (of a sentient being) goes against the precept, since in an act of abortion the criteria for violation are all met. In Buddhism, human life is understood to start at conception. A prohibition of abortion is mentioned explicitly in the monastic precepts, and several Buddhist tales warn of the harmful karmic consequences of abortion. Bioethicist Damien Keown argues that Early Buddhist Texts do not allow for exceptions with regard to abortion, as they take a "consistent (i.e. exceptionless) pro-life position". Keown further proposes that a middle-way approach to the five precepts is logically hard to defend. Asian studies scholar Giulio Agostini argues, however, that Buddhist commentators in India from the 4th century onward thought abortion did not break the precepts under certain circumstances.

Ordering another person to kill is also included in this precept, so requesting or administering euthanasia can be considered a violation of the precept, as can advising another person to commit abortion. With regard to euthanasia and assisted suicide, Keown quotes the Pāli "Dīgha Nikāya", which says that a person upholding the first precept "does not kill a living being, does not cause a living being to be killed, does not approve of the killing of a living being". Keown argues that in Buddhist ethics, regardless of motives, death can never be the aim of one's actions.

Interpretations of how Buddhist texts regard warfare are varied, but in general Buddhist doctrine is considered to oppose all warfare. In many "Jātaka" tales, such as that of Prince Temiya, as well as in some historical documents, the virtue of non-violence is taken as an opposition to all war, both offensive and defensive. At the same time, though, the Buddha is often shown not to explicitly oppose war in his conversations with political figures. Buddhologist André Bareau points out that the Buddha was reserved in his involvement in the details of administrative policy, and concentrated on the moral and spiritual development of his disciples instead. He may have believed such involvement to be futile, or detrimental to Buddhism. Nevertheless, at least one disciple of the Buddha is mentioned in the texts, King Pasenadi, who refrained from retaliating against his enemies because of the Buddha. The texts are ambiguous in explaining his motives, though. In some later Mahāyāna texts, such as in the writings of Asaṅga, examples are mentioned of people who killed those who persecuted Buddhists. In these examples, killing is justified by the authors because protecting Buddhism was seen as more important than keeping the precepts. Another example that is often cited is that of King Duṭṭhagāmaṇī, who is mentioned in the post-canonical Pāli Mahāvaṃsa chronicle. In the chronicle, the king is saddened by the loss of life after a war, but comforted by a Buddhist monk, who states that nearly everyone who was killed did not uphold the precepts anyway. Buddhist studies scholar Lambert Schmithausen argues that in many of these cases Buddhist teachings like that of emptiness were misused to further an agenda of war or other violence.

Field studies in Cambodia and Burma have shown that many Buddhists considered the first precept the most important, or the most blamable.
In some traditional communities, such as in Kandal Province in pre-war Cambodia, as well as in Burma in the 1980s, it was uncommon for Buddhists to slaughter animals, to the extent that meat had to be bought from non-Buddhists. In his field studies in Thailand in the 1960s, Terwiel found that villagers did tend to kill insects, but were reluctant and self-conflicted with regard to killing larger animals. In Spiro's field studies, however, Burmese villagers were highly reluctant even to kill insects.

Early Buddhists did not adopt a vegetarian lifestyle. Indeed, in several Pāli texts vegetarianism is described as irrelevant to the spiritual purification of the mind. There are prohibitions on certain types of meat, however, especially those which are condemned by society. The idea of abstaining from killing animal life has also led to a prohibition on professions that involve trade in flesh or living beings, but not to a full prohibition of all agriculture that involves cattle. In modern times, referring to the law of supply and demand or other principles, some Theravādin Buddhists have attempted to promote vegetarianism as part of the five precepts. For example, the Thai Santi Asoke movement practices vegetarianism.

Furthermore, among some schools of Buddhism there has been debate about a principle in the monastic discipline. This principle states that a Buddhist monk cannot accept meat if it comes from animals especially slaughtered for him. Some teachers have interpreted this to mean that when the recipient has no knowledge of whether the animal has been killed for him, he cannot accept the food either. Similarly, there has been debate as to whether laypeople should be vegetarian when adhering to the five precepts. Though vegetarianism among Theravādins is generally uncommon, it has been widely practiced in East Asian countries, as some Mahāyāna texts, such as the "Mahāparinirvāṇa Sūtra" and the "Laṅkāvatāra Sūtra", condemn the eating of meat. Nevertheless, even among Mahāyāna Buddhists, including East Asian Buddhists, there is disagreement on whether vegetarianism should be practiced. In the "Laṅkāvatāra Sūtra", biological, social and hygienic reasons are given for a vegetarian diet; however, historically, a major factor in the development of a vegetarian lifestyle among Mahāyāna communities may have been that Mahāyāna monastics cultivated their own crops for food, rather than living on alms. As early as the 4th century CE, the Chinese writer Xi Chao understood the five precepts to include vegetarianism.

Apart from trade in flesh or living beings, there are also other professions considered undesirable. Vietnamese teacher Thich Nhat Hanh gives a list of examples, such as working in the arms industry, the military or the police, and producing or selling poison or drugs such as alcohol and tobacco.

In general, the first precept has been interpreted by Buddhists as a call for non-violence and pacifism. But there have been some exceptions of people who did not interpret the first precept as an opposition to war. For example, in the twentieth century, some Japanese Zen teachers wrote in support of violence in war, and some of them argued this should be seen as a means to uphold the first precept. There is some debate and controversy surrounding the question of whether a person can commit suicide, such as by self-immolation, to reduce other people's suffering in the long run, for example in protest to improve a political situation in a country.
Teachers like the Dalai Lama and Shengyan have rejected forms of protest like self-immolation, as well as other acts of self-harm or fasting as forms of protest. Although capital punishment goes against the first precept, as of 2001 many countries in Asia still maintained the death penalty, including Sri Lanka, Thailand, China and Taiwan. In some Buddhist countries, such as Sri Lanka and Thailand, capital punishment was applied during some periods, while during other periods no capital punishment was used at all. In other countries with Buddhism, like China and Taiwan, Buddhism, or any religion for that matter, has had no influence on policy decisions of the government. Countries with Buddhism that have abolished capital punishment include Cambodia and Hong Kong.

In general, Buddhist traditions oppose abortion. In many countries with Buddhist traditions, such as Thailand, Taiwan, Korea and Japan, however, abortion is a widespread practice, whether legal or not. Many people in these countries consider abortion immoral, but also think it should be prohibited less strictly. Ethicist Roy W. Perrett, following Ratanakul, argues that this field research data does not so much indicate hypocrisy, but rather points at a "middle way" in applying Buddhist doctrine to solve a moral dilemma. Buddhists tend to take "both sides" in the pro-life versus pro-choice debate, being against the taking of the life of a fetus in principle, but also believing in compassion toward mothers. Similar attitudes may explain the Japanese "mizuko kuyō" ceremony, a Buddhist memorial service for aborted children, which has led to a debate in Japanese society concerning abortion, and finally brought the Japanese to a consensus that abortion should not be taken lightly, though it should be legalized. This position, held by Japanese Buddhists, takes the middle ground between the Japanese neo-Shinto "pro-life" position and the liberationist "pro-choice" arguments. Keown points out, however, that this compromise does not mean a Buddhist middle way between two extremes, but rather incorporates two opposite perspectives. In Thailand, women who wish to have an abortion usually do so in the early stages of pregnancy, because they believe the karmic consequences are smaller then. After having an abortion, Thai women usually make merit to compensate for the negative karma.

The second precept prohibits theft, and involves the intention to steal what one perceives as not belonging to oneself ("what is not given") and acting successfully upon that intention. The severity of the act of theft is judged by the worth of the owner and the worth of that which is stolen. Underhand dealings, fraud, cheating and forgery are also included in this precept. Accompanying virtues are generosity, renunciation and right livelihood, and a positive behavior is the protection of other people's property. The second precept includes different ways of stealing and fraud. Borrowing without permission is sometimes included, as is gambling. Psychologist Vanchai Ariyabuddhiphongs did studies in the 2000s and 2010s in Thailand and discovered that people who did not adhere to the five precepts more often tended to believe that money was the most important goal in life, and would more often pay bribes than people who did adhere to the precepts. On the other hand, people who observed the five precepts regarded themselves as wealthier and happier than people who did not observe the precepts.
Professions that are seen to violate the second precept include working in the gambling industry or marketing products that are not actually required by the customer.

The third precept condemns sexual misconduct. This has been interpreted in classical texts to include adultery with a married or engaged person, rape, incest, sex with a minor (or a person "protected by any relative"), and sex with a prostitute. In later texts, details such as intercourse at an inappropriate time or in an inappropriate place are also counted as breaches of the third precept. Masturbation goes against the spirit of the precept, though in the early texts it is not prohibited for laypeople. The third precept is explained as leading to greed in oneself and harm to others. The transgression is regarded as more severe if the other person is a good person. Virtues that go hand-in-hand with the third precept are contentment, especially with one's partner, and recognition of and respect for faithfulness in a marriage. The third precept is interpreted as avoiding harm to another by using sensuality in the wrong way. This means not engaging with inappropriate partners, but also respecting one's personal commitment to a relationship. In some traditions, the precept also condemns adultery with a person whose spouse agrees with the act, since the nature of the act itself is condemned. Furthermore, flirting with a married person may also be regarded as a violation. Though prostitution is discouraged in the third precept, it is usually not actively prohibited by Buddhist teachers. With regard to applications of the principles of the third precept, the precept, or any other Buddhist principle for that matter, is usually not connected with a stance against contraception. In traditional Buddhist societies such as Sri Lanka, pre-marital sex is considered to violate the precept, though this is not always adhered to by people who already intend to marry. In the interpretation of modern teachers, the precept includes any person in a sexual relationship with another person, as they define the precept by terms such as "sexual responsibility" and "long-term commitment". Some modern teachers include masturbation as a violation of the precept; others include certain professions, such as those that involve sexual exploitation, prostitution or pornography, and professions that promote unhealthy sexual behavior, such as in the entertainment industry.

The fourth precept involves falsehood spoken or committed to by action. Avoidance of other forms of wrong speech is also considered part of this precept, consisting of malicious speech, harsh speech and gossip. A breach of the precept is considered more serious if the falsehood is motivated by an ulterior motive (rather than, for example, "a small white lie"). The accompanying virtue is being honest and dependable, and involves honesty in work, truthfulness to others, loyalty to superiors and gratitude to benefactors. In Buddhist texts, this precept is considered second in importance only to the first precept, because a lying person is regarded as having no shame, and therefore capable of many wrongs. Untruthfulness is not only to be avoided because it harms others, but also because it goes against the Buddhist ideal of finding the truth. The fourth precept includes avoidance of lying and harmful speech. Some modern teachers such as Thich Nhat Hanh interpret this to include avoiding the spreading of false news and uncertain information.
Work that involves data manipulation, false advertising or online scams can also be regarded as violations. Terwiel reports that among Thai Buddhists, the fourth precept is also seen to be broken when people insinuate, exaggerate, or speak abusively or deceitfully.

The fifth precept prohibits intoxication through alcohol, drugs or other means, and its virtues are mindfulness and responsibility, applied to food, work, behavior, and the nature of life. Awareness, meditation and heedfulness can also be included here. Medieval Pāli commentator Buddhaghosa writes that whereas violating the first four precepts may be more or less blamable depending on the person or animal affected, the fifth precept is always "greatly blamable", as it hinders one from understanding the Buddha's teaching and may lead one to "madness". In ancient China, Daoshi described alcohol as the "doorway to laxity and idleness" and as a cause of suffering. Nevertheless, he did describe certain cases in which drinking was considered less of a problem, such as that of a queen distracting the king with alcohol to prevent him from committing murder. However, Daoshi was generally strict in his interpretations: for example, he allowed the medicinal use of alcohol only in extreme cases. Early Chinese translations of the Tripitaka describe negative consequences for people breaking the fifth precept, for themselves and their families. The Chinese translation of the "Upāsikaśila Sūtra", as well as the Pāli version of the Sigālovāda Sutta, speak of ill consequences such as loss of wealth, ill health, a bad reputation and "stupidity", culminating in a rebirth in hell. The "Dīrghāgama" adds that alcohol leads to quarreling, negative states of mind and damage to one's intelligence. The Mahāyāna "Brahmajāla Sūtra" describes the dangers of alcohol in very strong terms, including the selling of alcohol. Similar arguments against alcohol can be found in Nāgārjuna's writings. The strict interpretation of the prohibition of alcohol consumption can be supported by the "Upāli Sūtra"'s statement that a disciple of the Buddha should not drink any alcohol, "even a drop on the point of a blade of grass". However, in the writings of some Abhidharma commentators, consumption was condemned or condoned depending on the intention with which alcohol was consumed.

The fifth precept is regarded as important, because drinking alcohol is condemned for the sluggishness and lack of self-control it leads to, which might lead to breaking the other precepts. In Spiro's field studies, violating the fifth precept was seen as the worst of all the five precepts by half of the monks interviewed, citing its harmful consequences. Nevertheless, in practice it is often disregarded by lay people. In Thailand, drinking alcohol is fairly common, even drunkenness. Among Tibetans, drinking beer is common, though it is only slightly alcoholic. Medicinal use of alcohol is generally not frowned upon, and in some countries like Thailand and Laos, smoking is usually not regarded as a violation of the precept. Thai and Laotian monks have been known to smoke, though monks who have received more training are less likely to smoke. On a similar note, as of 2000, no Buddhist country prohibited the sale or consumption of alcohol, though in Sri Lanka Buddhist revivalists attempted unsuccessfully to get a full prohibition passed in 1956. Moreover, pre-Communist Tibet used to prohibit smoking in some areas of the capital.
Monks were prohibited from smoking, and the import of tobacco was banned. Thich Nhat Hanh also includes mindful consumption in this precept, which covers avoiding unhealthy food, unhealthy entertainment and unhealthy conversations, among other things.

In modern times, adherence to the precepts among Buddhists is less strict than it traditionally was. This is especially true for the third precept. For example, in Cambodia in the 1990s and 2000s, standards with regard to sexual restraint were greatly relaxed. Some Buddhist movements and communities have tried to go against the modern trend of less strict adherence to the precepts. In Cambodia, a millenarian movement led by Chan Yipon promoted the revival of the five precepts. And in the 2010s, the Supreme Sangha Council in Thailand ran a nationwide program called "The Villages Practicing the Five Precepts", aiming to encourage keeping the precepts, with an extensive classification and reward system.

In many Western Buddhist organizations, the five precepts play a major role in developing ethical guidelines. Furthermore, Buddhist teachers such as Philip Kapleau, Thich Nhat Hanh and Robert Aitken have promoted mindful consumption in the West, based on the five precepts. In another development in the West, some scholars working in the field of mindfulness training have proposed that the five precepts be introduced as a component in such trainings. Specifically, to prevent organizations from using mindfulness training to further an economic agenda with harmful results for their employees, the economy or the environment, the precepts could be used as a standardized ethical framework. As of 2015, several training programs made explicit use of the five precepts as secular, ethical guidelines. However, many mindfulness training specialists consider it problematic to teach the five precepts as part of training programs in secular contexts because of their religious origins and import.

Peace studies scholar Theresa Der-lan Yeh notes that the five precepts address the physical, economic, familial and verbal aspects of interaction, and remarks that many conflict prevention programs in schools and communities have integrated the five precepts into their curricula. On a similar note, peace studies founder Johan Galtung describes the five precepts as the "basic contribution of Buddhism in the creation of peace".

Studying lay and monastic ethical practice in traditional Buddhist societies, Spiro argued that ethical guidelines such as the five precepts are adhered to as a means to a higher end, that is, a better rebirth or enlightenment. He therefore concluded that Buddhist ethical principles like the five precepts are similar to Western utilitarianism. Keown, however, has argued that the five precepts are regarded as rules that cannot be violated, and may therefore indicate a deontological perspective in Buddhist ethics. On the other hand, Keown has also suggested that Aristotle's virtue ethics could apply to Buddhist ethics, since the precepts are considered good in themselves, and mutually dependent on other aspects of the Buddhist path of practice. Philosopher Christopher Gowans disagrees that Buddhist ethics are deontological, arguing that virtue and consequences are also important in Buddhist ethics. Gowans argues that there is no moral theory in Buddhist ethics that covers all conceivable situations, such as when two precepts may be in conflict; rather, Buddhist ethics is characterized by "a commitment to and nontheoretical grasp of the basic Buddhist moral values".
As of 2017, many scholars of Buddhism no longer think it is useful to try to fit Buddhist ethics into a Western philosophical category. Keown has argued that the five precepts are very similar to human rights, with regard both to subject matter and to their universal nature. Other scholars, as well as Buddhist writers and human rights advocates, have drawn similar comparisons between individual precepts and particular human rights. Keown describes the relationship between Buddhist precepts and human rights as "look[ing] both ways along the juridical relationship, both to what one is due to do, and to what is due to one". On a similar note, Cambodian human rights advocates have argued that for human rights to be fully implemented in society, the strengthening of individual morality must also be addressed. Buddhist monk and scholar Phra Payutto sees the Human Rights Declaration as an unfolding and detailing of the principles that are found in the five precepts, in which a sense of ownership is given to the individual, to make legitimate claims on one's rights. He believes that human rights should be seen as a part of human development, in which one develops from moral discipline, to concentration, and finally to wisdom. He does not believe, however, that human rights are natural rights, but rather human conventions. Buddhism scholar Somparn Promta disagrees with him. He argues that human beings do have natural rights from a Buddhist perspective, and refers to the "attūpanāyika-dhamma", a teaching in which the Buddha prescribes a kind of golden rule of comparing oneself with others. From this discourse, Promta concludes that the Buddha laid down the five precepts in order to protect individual rights such as the rights to life and property: human rights are implicit within the five precepts. Academic Buntham Phunsap argues, however, that though human rights are useful in culturally pluralistic societies, they are in fact not required when society is entirely based on the five precepts. Phunsap therefore does not see human rights as part of Buddhist doctrine.
https://en.wikipedia.org/wiki?curid=24780
Pente Pente is a strategy board game for two or more players, created in 1977 by Gary Gabrel, a dishwasher at Hideaway Pizza in Stillwater, Oklahoma. Customers played Pente at Hideaway Pizza on checkerboard tablecloths while waiting for their orders to arrive. Thirty years later, patrons are still playing Pente at Hideaway Pizza, although now on roll-up Pente boards. Pente is based on the Japanese game ninuki-renju, a variant of renju or gomoku that is played on a Go board of 19x19 intersections with white and black stones. Like "ninuki-renju", Pente allows captures, but Pente added a new opening rule. In the nineteenth century, "gomoku" was introduced to Britain, where it was known as "Go Bang" (a term borrowed from the Japanese "goban" 碁盤, meaning "go board"). Pente is a registered trademark of Hasbro for strategy game equipment. Pente (πέντε) is the number five in Greek. Hasbro ceased distribution of Pente in 1993. It later licensed the name to Winning Moves, a classic games publisher that resurrected the game in 2004. The 2004 version includes 4 extra stones, called power stones, that can be played in the Pente Plus version.

The players alternate in placing stones of their color on free intersections, with White always making the opening move. The players aim to align five stones of the same color in a vertical, horizontal or diagonal line. Captures are obtained by flanking a pair of an opponent's stones along any such line. Captures must consist of exactly two stones; flanking a single stone or three stones does not result in a capture (see the code sketch below). For example, if the stones are X O O _ and you place your stone so that the line becomes X O O X, then your opponent's stones are removed from the board, leaving X _ _ X. A stone may legally be placed on any empty intersection, even if it forms a pair between two enemy stones. For example, if the stones are X O _ X, you may place your stone so that the line becomes X O O X; your stones are NOT captured in this case. When playing with multiple players, the inside stones can be different colors, but the two stones on the outside must be the same color.

A player wins by aligning five stones in a row, horizontally, vertically or diagonally. A player can also win by capturing five pairs of opponent stones. Pente can also be played by four people, with pairs of two acting as partners. It can also be played by multiple independent players when each player has their own different-colored stones. A common variation is the tournament rule, under which the first player's second move is restricted: it must be at least three intersections away from the center of the board. The tournament rule was created by Tom Braunlich to reduce the advantage held by the first player.
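The capture rule described above is mechanical enough to express in a few lines of code. The sketch below is a minimal, hypothetical implementation, assuming a board stored as a dictionary mapping (row, col) coordinates to 'X' or 'O'; the names board and apply_captures are illustrative, not from any official Pente implementation.

```python
# Minimal sketch of Pente's capture rule. Assumed representation:
# a dict mapping (row, col) -> 'X' or 'O'; empty points are simply absent.

DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def apply_captures(board, row, col, player):
    """Remove enemy pairs captured by `player` placing a stone at (row, col).

    A pair is captured only when the newly placed stone and an existing
    friendly stone flank exactly two enemy stones (X O O X). Moving into
    a flanked position (X O _ X becoming X O O X) captures nothing, which
    matches the rule in the text. Returns the number of pairs taken.
    """
    enemy = 'O' if player == 'X' else 'X'
    pairs = 0
    for dr, dc in DIRECTIONS:
        first = (row + dr, col + dc)
        second = (row + 2 * dr, col + 2 * dc)
        flank = (row + 3 * dr, col + 3 * dc)
        # Exactly two enemy stones, bracketed by the player's own stone.
        if (board.get(first) == enemy and board.get(second) == enemy
                and board.get(flank) == player):
            del board[first]
            del board[second]
            pairs += 1
    return pairs

# Example from the text: X O O _ becomes X _ _ X after X plays at the end.
board = {(0, 0): 'X', (0, 1): 'O', (0, 2): 'O'}
board[(0, 3)] = 'X'
print(apply_captures(board, 0, 3, 'X'))  # -> 1
print(sorted(board))                     # -> [(0, 0), (0, 3)]
```

Checking all eight directions from the placed stone, rather than scanning the whole board, is what makes the "no capture when moving into a pair" rule fall out naturally: only the placed stone can act as the new flanking stone.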
https://en.wikipedia.org/wiki?curid=24782
Pompatus Pompatus (or pompitus) is a nonce word famously coined by Steve Miller in his 1973 hit single "The Joker". The word is probably a corruption or reimagining of "puppetutes", an original coinage of the 14-year-old Vernon Green, heard in "The Letter", a doo-wop song released in 1954 and performed by Green and The Medallions; the song also included another original coinage, "pizmotality". In two other songs, "Enter Maurice" and "The Conversation", Miller pairs the word "epismetology" with "pompatus", one of them delivered in spoken word in the style of "The Letter", a likely homage to The Medallions' song. The oddness of the word "pompatus" occasioned some attention and further use, including in the title of a movie. The lyrics of "The Joker" include a quatrain in which each line references a track on a previous Miller album: "Space Cowboy" on "Brave New World" (1969); "Gangster of Love" on "Sailor" (1968); and "Enter Maurice" on "Recall the Beginning...A Journey from Eden" (1972), the song in which both invented words appear. Although Miller claims he invented the words "epismetology" (a metathesis of the word epistemology) and "pompatus", both are variants of words which Miller most likely heard in Vernon Green's "The Letter", recorded by the Los Angeles doo-wop group The Medallions in 1954. Green described the lyrics as a description of his dream woman. ""Pizmotality" described words of such secrecy that they could only be spoken to the one you loved", Green explained. He coined the term "puppetutes" "to mean a secret paper-doll fantasy figure who would be my everything and bear my children". Because of its peculiarity, the word "pompatus" has secured a niche in 20th-century pop culture. Wolfman Jack frequently referenced the phrase, and there is a sound clip of him using the line within the song "Clap for the Wolfman" by The Guess Who. "The Pompatus of Love", a 1996 film starring Jon Cryer, featured four men discussing a number of assorted themes, including attempts to determine the meaning of the phrase. Cryer was also a writer of the film, and in chapter 22, page 217 of his autobiography "So That Happened" he describes learning the meaning of the phrase during a phone call with Vernon Green. The line has been mentioned in various television show gags, including on "The Simpsons" and "South Park". Humor columnist Dave Barry frequently refers to the song line as a source of comedic value, particularly in his 1997 book "Dave Barry's Book of Bad Songs". "Pompatus" is used by Michael Ondaatje in his 2001 book "Anil's Ghost". Stephen King uses the word in his 2006 novel "Lisey's Story". Tim Dorsey uses the word in his 2010 novel "Gator a-Go-Go". It was the subject of the October 9, 2011 "Over the Hedge" comic strip.
https://en.wikipedia.org/wiki?curid=24783
UGM-27 Polaris The UGM-27 Polaris missile was a two-stage solid-fueled nuclear-armed submarine-launched ballistic missile. As the United States Navy's first SLBM, it served from 1961 to 1996. In the mid-1950s the Navy was involved in the Jupiter missile project with the U.S. Army, and had influenced the design by making it squat so it would fit in submarines. However, the Navy had concerns about the use of liquid-fuel rockets onboard ships, and some consideration was given to a solid-fuel version, Jupiter S. In 1956, during an anti-submarine study known as Project Nobska, Edward Teller suggested that very small hydrogen bomb warheads were possible. A crash program to develop a missile suitable for carrying such warheads began as Polaris, launching its first shot less than four years later, in February 1960. As the Polaris missile was fired underwater from a moving platform, it was essentially invulnerable to counterattack. This led the Navy to suggest, starting around 1959, that it be given the entire nuclear deterrent role. This led to new infighting between the Navy and the U.S. Air Force, the latter responding by developing the counterforce concept that argued for the strategic bomber and ICBM as key elements in flexible response. Polaris formed the backbone of the U.S. Navy's nuclear force aboard a number of custom-designed submarines. In 1963, the Polaris Sales Agreement led to the Royal Navy taking over the United Kingdom's nuclear role, and while some tests were carried out by the Italian Navy, this did not lead to use. The Polaris missile was gradually replaced on 31 of the 41 original SSBNs in the U.S. Navy by the MIRV-capable Poseidon missile beginning in 1972. During the 1980s, these missiles were replaced on 12 of these submarines by the Trident I missile. The 10 "George Washington"- and "Ethan Allen"-class SSBNs retained Polaris A-3 until 1980 because their missile tubes were not large enough to accommodate Poseidon. With USS "Ohio" beginning sea trials in 1980, these submarines were disarmed and redesignated as attack submarines to avoid exceeding the SALT II strategic arms treaty limits. The Polaris missile program's complexity led to the development of new project management techniques, including the Program Evaluation and Review Technique (PERT), to replace the simpler Gantt chart methodology (a minimal illustration of PERT's central estimate appears at the end of this passage). The Polaris missile replaced an earlier plan to create a submarine-based missile force based on a derivative of the U.S. Army Jupiter intermediate-range ballistic missile. Chief of Naval Operations Admiral Arleigh Burke appointed Rear Admiral W. F. "Red" Raborn as head of a Special Project Office to develop Jupiter for the Navy in late 1955. The Jupiter missile's large diameter was a product of the need to keep the length short enough to fit in a reasonably-sized submarine. At the seminal Project Nobska conference in 1956, with Admiral Burke present, nuclear physicist Edward Teller stated that a physically small one-megaton warhead could be produced for Polaris within a few years, and this prompted Burke to leave the Jupiter program and concentrate on Polaris in December of that year. Polaris was spearheaded by the Special Project Office's Missile Branch under Rear Admiral Roderick Osgood Middleton, and remained under the Special Project Office. Admiral Burke was later instrumental in determining the size of the Polaris submarine force, suggesting that 40-45 submarines with 16 missiles each would be sufficient. Eventually, the number of Polaris submarines was fixed at 41. USS "George Washington" was the first submarine capable of deploying U.S.-developed submarine-launched ballistic missiles (SLBMs).
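As referenced above, PERT replaced single-point Gantt scheduling with a three-point estimate per task. The sketch below shows that central formula; it is illustrative only, and the task name and figures are invented, not drawn from Polaris program records.

```python
# Minimal sketch of PERT's three-point time estimate (illustrative only).

def pert_estimate(optimistic, most_likely, pessimistic):
    """Expected duration and variance under PERT's beta-distribution model."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    variance = ((pessimistic - optimistic) / 6) ** 2
    return expected, variance

# Hypothetical task: guidance-system integration estimated at 8/12/22 weeks.
e, v = pert_estimate(8, 12, 22)
print(f"expected {e:.1f} weeks, std dev {v ** 0.5:.1f} weeks")
```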
The responsibility for developing SLBMs was given to the Navy and the Army: the Air Force was charged with developing a land-based intermediate-range ballistic missile (IRBM), while an IRBM which could be launched by land or by sea was tasked to the Navy and Army. The Navy Special Projects (SP) office was at the head of the project. It was led by Rear Admiral William Raborn. On September 13, 1955, James R. Killian, head of a special committee organized by President Eisenhower, recommended that both the Army and Navy come together under a program aimed at developing an intermediate-range ballistic missile (IRBM). The missile, later known as Jupiter, would be developed under the Joint Army-Navy Ballistic Missile Committee, approved by Secretary of Defense Charles E. Wilson in early November of that year. The first IRBM was a liquid-fueled design. Liquid fuel is workable for land-based missiles; it is far less suitable for submarines. Solid fuels, on the other hand, make logistics and storage simpler and are safer. Not only was the Jupiter a liquid-fuel design, it was also very large; even redesigned for solid fuel, it would still have weighed some 160,000 pounds. A smaller, new design was estimated at 30,000 pounds, and the Navy preferred to develop such a smaller, more easily handled missile. Edward Teller was one of the scientists encouraging the development of smaller rockets. He argued that the necessary technology should be newly developed, rather than simply applying technology that already existed. Raborn was also convinced he could develop smaller rockets. He sent officers to make independent estimates of size to determine the plausibility of a small missile; while none of the officers could agree on a size, their findings were encouraging nonetheless. The U.S. Navy began work on nuclear-powered submarines in 1946, and the first, USS "Nautilus", put to sea in 1955. Nuclear-powered submarines were the least vulnerable to a first strike from the Soviet Union. The next question that led to further development was what kind of arms the nuclear-powered submarines should be equipped with. In the summer of 1956, the Navy sponsored a study by the National Academy of Sciences on anti-submarine warfare at Nobska Point in Woods Hole, Massachusetts, known as Project Nobska. The Navy's intention was to have a new missile developed that would be lighter than existing missiles and cover a range of up to fifteen hundred miles. A problem that needed to be solved was that this design would not be able to carry the desired one-megaton thermonuclear warhead. This study brought in Edward Teller from the recently formed nuclear weapons laboratory at Livermore and J. Carson Mark, representing the Los Alamos nuclear weapons laboratory. Teller was already known as a nuclear salesman, but this became the first instance of a bidding battle in which he outbid his Los Alamos counterpart. The two knew each other well: Mark was named head of the theoretical division of Los Alamos in 1947, a job originally offered to Teller. Mark was a cautious physicist and no match for Teller in a bidding war. At the Nobska summer study, Teller made his famous contribution to the FBM program: he offered to develop a lightweight warhead of one-megaton strength within five years. He also suggested that nuclear-armed torpedoes could be substituted for conventional ones to provide a new anti-submarine weapon. Livermore received the project.
When Teller returned to Livermore, people were astonished by the boldness of his promise. It seemed inconceivable given the then-current size of nuclear warheads, and Teller was challenged to support his assertion. He pointed to the trend in warhead technology, which showed reduced weight-to-yield ratios in each succeeding generation. When questioned about the application of this to the FBM program, he asked, "Why use a 1958 warhead in a 1965 weapon system?" Mark disagreed with Teller's prediction that the desired one-megaton warhead could be made to fit the missile envelope within the timescale envisioned. Instead, Mark suggested that half a megaton would be more realistic, and he quoted a higher price and a longer deadline. This simply confirmed the validity of Teller's prediction in the Navy's eyes: whether the warhead was half or one megaton mattered little so long as it fitted the missile and would be ready by the deadline. Almost four decades later, Teller said, referring to Mark's performance, that it was "an occasion when I was happy about the other person being bashful." When the Atomic Energy Commission backed up Teller's estimate in early September, Admiral Burke and the Navy Secretariat decided to support SPO in heavily pushing for the new missile, now named Polaris by Admiral Raborn. There is a contention that the Navy's "Jupiter" missile program was unrelated to the Army program; the Navy also expressed an interest in Jupiter as an SLBM, but left the collaboration to work on its Polaris. At first, the newly assembled SPO team had the problem of making the large, liquid-fueled Jupiter IRBM work properly. Jupiter retained the short, squat shape intended to fit in naval submarines. Its sheer size and the volatility of its fuel made it very unsuited to submarine launching and only slightly more attractive for deployment on surface ships. The missile continued to be developed by the Army's German team in collaboration with their main contractor, Chrysler Corporation. SPO's responsibility was to develop a sea-launching platform with the necessary fire control and stabilization systems. The original schedule was to have a ship-based IRBM system ready for operational evaluation by January 1, 1960, and a submarine-based one by January 1, 1965. However, the Navy was deeply dissatisfied with the liquid-fuel IRBM. The first concern was that the cryogenic liquid fuel was not only extremely dangerous to handle, but launch preparations were also very time-consuming. Second, an argument was made that liquid-fueled rockets gave relatively low initial acceleration, which is disadvantageous when launching a missile from a moving platform in certain sea states. By mid-July 1956, the Secretary of Defense's Scientific Advisory Committee had recommended that a solid-propellant missile program be fully initiated, but without the unsuitable Jupiter payload and guidance system. By October 1956, a study group comprising key figures from Navy, industry and academic organizations had considered various design parameters of the Polaris system and the trade-offs between different sub-sections, and endorsed the estimate that a 30,000-pound missile could deliver a suitable warhead over 1,500 nautical miles. With this optimistic assessment, the Navy decided to scrap the Jupiter program altogether and sought the Department of Defense's backing for a separate Navy missile. One early proposal had a huge surfaced submarine carrying four "Jupiter" missiles, stowed and launched horizontally.
This was probably the never-built SSM-N-2 Triton program. However, a history of the Army's Jupiter program states that the Navy was involved in the Army program but withdrew at an early stage. Originally, the Navy favored cruise missile systems in a strategic role, such as the Regulus missile deployed on a few earlier submarines, but a major drawback of these early cruise missile launch systems (and the Jupiter proposals) was the need to surface, and remain surfaced for some time, to launch. Submarines were very vulnerable to attack during launch, and a fully or partially fueled missile on deck was a serious hazard. The difficulty of preparing a launch in rough weather was another major drawback for these designs, but rough sea conditions did not unduly affect Polaris's submerged launches. It quickly became apparent that solid-fueled ballistic missiles had advantages over cruise missiles in range and accuracy, and could be launched from a submerged submarine, improving submarine survivability. The prime contractor for all three versions of Polaris was Lockheed Missiles and Space Company (now Lockheed Martin). The Polaris program started development in 1956. USS "George Washington", the first U.S. missile submarine, successfully launched the first Polaris missile from a submerged submarine on July 20, 1960. The A-2 version of the Polaris missile was essentially an upgraded A-1, and it entered service in late 1961. It was fitted on a total of 13 submarines and served until June 1974. Ongoing problems with the W-47 warhead, especially with its mechanical arming and safing equipment, led to large numbers of the missiles being recalled for modifications, and the U.S. Navy sought a replacement with either a larger yield or equivalent destructive power. The result was the W-58 warhead used in a "cluster" of three warheads for the Polaris A-3, the final model of the Polaris missile. One of the initial problems the Navy faced in creating an SLBM was that the sea moves, while a launch platform on land does not. Waves and swells rocking the boat or submarine, as well as possible flexing of the ship's hull, had to be taken into account to properly aim the missile. Polaris development was kept on a tight schedule, and the only event that changed this was the USSR's launch of Sputnik on October 4, 1957, which made many working on the project want to accelerate development. The launch of a second Russian satellite and pressing public and government opinion caused Secretary Wilson to move the project along more quickly. Although the project began with an above-water launch goal, the Navy came to favor an underwater launch and developed two ideas for it: wet and dry. Dry launch meant encasing the missile in a shell that would peel away when the missile reached the water's surface; wet launch meant shooting the missile through the water without a casing. While the Navy favored a wet launch, it developed both methods as a failsafe, and likewise developed both gas and compressed-air ejection of the missile from the submerged tube. The first Polaris test missiles were designated "AX-#" and later renamed "A1X-#". Testing proceeded as follows:
September 24, 1958: AX-1, at Cape Canaveral from a launch pad; the missile was destroyed after it failed to turn onto the correct trajectory following a programming error.
October 1958: AX-2, at Cape Canaveral from a launch pad; exploded on the launch pad.
December 30, 1958: AX-3, at Cape Canaveral from a launch pad; launched correctly, but was destroyed because of fuel overheating.
January 19, 1959: AX-4, at Cape Canaveral from a launch pad; launched correctly but began to behave erratically and was destroyed.
February 27, 1959: AX-5, at Cape Canaveral from a launch pad; launched correctly but began to behave erratically and was destroyed.
April 20, 1959: AX-6, at Cape Canaveral from a launch pad; this test was a success. The missile launched, separated, and splashed into the Atlantic 300 miles offshore. (It was between this test and the next that the inertial guidance system was developed and implemented for testing.)
July 1, 1959: AX-11, at Cape Canaveral from a launch pad; the launch was successful, but pieces of the missile detached, causing failure. It did show that the new guidance systems worked.
At the time the Polaris project went live, the accuracy of existing submarine navigation systems was sufficient to sustain effective military efforts, given the weapons systems then in use by the Army, Air Force and Navy. Initially, the developers of Polaris were set to utilize the existing "stable platform" configuration of the inertial guidance system. Created at the MIT Instrumentation Laboratory, this Ships Inertial Navigation System (SINS) was supplied to the Navy in 1954. The developers of Polaris encountered many issues from the birth of the project; perhaps the most unsettling was the outdated technology of the gyroscopes they would be using. The "stable platform" configuration did not account for the changes in the gravitational field that the submarine would experience while in motion, nor for the constantly changing position of the Earth. This raised many concerns, as it would make it nearly impossible for navigational readouts to remain accurate and reliable, and a submarine equipped with ballistic missiles was of little to no use if operators had no way to direct them. The Polaris team was thus forced to look elsewhere, and quickly found hope in a guidance system that had been abandoned by the U.S. Air Force. The Autonetics Division of North American Aviation had previously been tasked with developing a guidance system for the U.S. Air Force Navaho, known as the XN6 Autonavigator. The XN6 was a system designed for air-breathing cruise missiles, but by 1958 it had proved suitable for installation on submarines. A predecessor to the GPS satellite navigation system, the Transit system (later called NAVSAT), was developed because the submarines needed to know their position at launch in order for the missiles to hit their targets. Two American physicists, William Guier and George Weiffenbach, at Johns Hopkins's Applied Physics Laboratory (APL), began this work in 1958. A computer small enough to fit through a submarine hatch, the AN/UYK-1, was developed in 1958. It was used to interpret the Transit satellite data and send guidance information to the Polaris, which had its own guidance computer made with ultra-miniaturized electronics, very advanced for its time, because there was little room in a Polaris and there were 16 missiles on each submarine. The Ship's Inertial Navigation System (SINS) had been developed earlier to provide a continuous dead-reckoning update of the submarine's position between position fixes obtained by other methods, such as LORAN (the sketch below illustrates the dead-reckoning idea). This was especially important in the first few years of Polaris, because Transit was not operational until 1964.
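As referenced above, here is an illustrative dead-reckoning update, the basic bookkeeping a system like SINS automates between external position fixes. A real inertial navigator integrates accelerometer and gyro outputs and corrects for Earth rotation and gravity anomalies; this planar sketch (all names and figures are hypothetical) only shows the core idea: advance the last known position using measured heading and speed until the next fix resets accumulated drift.

```python
import math

def dead_reckon(position, legs):
    """position: (x_nm, y_nm) east/north of the last fix.
    legs: iterable of (heading_deg, speed_knots, hours) segments."""
    x, y = position
    for heading_deg, speed, hours in legs:
        theta = math.radians(heading_deg)      # 0 deg = north, 90 deg = east
        x += speed * hours * math.sin(theta)   # eastward displacement (nm)
        y += speed * hours * math.cos(theta)   # northward displacement (nm)
    return x, y

# Two hours at 20 knots due east, then one hour at 10 knots due north.
print(dead_reckon((0.0, 0.0), [(90, 20, 2), (0, 10, 1)]))  # ~(40.0, 10.0)
```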
By 1965, microchips similar to the Texas Instruments units made for the Minuteman II were being purchased by the Navy for the Polaris. The Minuteman guidance systems each required 2,000 of these, so the Polaris guidance system may have used a similar number. To keep the price under control, the design was standardized and shared with Westinghouse Electric Company and RCA. In 1962 the price for each Minuteman chip was $50; by 1968 it had dropped to $2. The Polaris A-3 replaced the earlier A-1 and A-2 models in the U.S. Navy, and also equipped the British Polaris force. The A-3 had an extended range and a new weapon bay housing three Mk 2 re-entry vehicles (ReB, or Re-Entry Body, in U.S. Navy and British usage) and the new W-58 warhead of 200 kt yield. This arrangement was originally described as a "cluster warhead" but was replaced with the term Multiple Re-Entry Vehicle (MRV). The three warheads, also known as "bomblets", were spread out in a "shotgun"-like pattern above a single target and were not independently targetable (as in a MIRV system). The three warheads were stated to be equivalent in destructive power to a single one-megaton warhead because of their spread-out pattern on the target. The first Polaris submarine outfitted with MRV A-3s was the USS "Daniel Webster" in 1964. Later the Polaris A-3 missiles (but not the ReBs) were also given limited hardening to protect the missile electronics against nuclear electromagnetic pulse effects during the boost phase. This was known as the A-3T ("Topsy") and was the final production model. The initial test model of the Polaris was referred to as the AX series and made its maiden flight from Cape Canaveral on September 24, 1958. The missile failed to perform its pitch and roll maneuver and instead just flew straight up; however, the flight was considered a partial success (at that time, "partial success" was used for any missile test that returned usable data). The next flight, on October 15, failed spectacularly when the second stage ignited on the pad and took off by itself; Range Safety blew up the errant rocket while the first stage sat on the pad and burned. The third and fourth tests (December 30 and January 9) had problems due to overheating in the boattail section, which necessitated adding extra shielding and insulation to wiring and other components. When the final AX flight was conducted, a year after the program began, 17 Polaris missiles had been flown, of which five met all of their test objectives. The first operational version, the Polaris A-1, carried a single Mk 1 re-entry vehicle with a single W-47-Y1 600 kt nuclear warhead, guided by an inertial system. The missile was a two-stage, solid-propellant design. Work on its W47 nuclear warhead began in 1957 at the facility that is now called the Lawrence Livermore National Laboratory, by a team headed by John Foster and Harold Brown. The Navy accepted delivery of the first 16 warheads in July 1960. On May 6, 1962, a Polaris A-2 missile with a live W47 warhead was tested in the "Frigate Bird" test of Operation Dominic by USS "Ethan Allen" in the central Pacific Ocean, the only American test of a live strategic nuclear missile. The two stages were both steered by thrust vectoring. Inertial navigation guided the missile to about a 900 m (3,000-foot) circular error probable (CEP), insufficient for use against hardened targets (a sketch of the CEP measure follows below).
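As referenced above, circular error probable (CEP) is the radius of the circle, centered on the aim point, expected to contain half of the impacts. The sketch below estimates CEP from a set of miss distances; the values are simulated, not Polaris test data.

```python
import math, random

def cep(impacts):
    """impacts: list of (x, y) miss distances in meters from the aim point."""
    radial = sorted(math.hypot(x, y) for x, y in impacts)
    return radial[len(radial) // 2]  # median radial miss distance

random.seed(1)
# Simulate impacts with ~540 m standard deviation per axis; for a circular
# normal distribution CEP ~= 1.1774 * sigma, i.e. roughly 640 m here.
shots = [(random.gauss(0, 540), random.gauss(0, 540)) for _ in range(1000)]
print(round(cep(shots)))
```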
They were mostly useful for attacking dispersed military surface targets (airfields or radar sites), clearing a pathway for heavy bombers, although in the general public perception Polaris was a strategic second-strike retaliatory weapon. The Polaris A-1 missile was developed to complement the limited number of medium-range systems deployed throughout Europe. As those systems lacked the range to attack major Soviet targets, Polaris was developed to increase the level of nuclear deterrence. At this time there was little threat of counterforce strikes, as few systems had the accuracy to destroy missile systems. The primary advantage of ballistic missile submarines was their ability to launch submerged, which offered improved survivability for the submarine while also (like their Regulus predecessors) keeping shorter-ranged systems within range. The USN had forward-basing arrangements for its Atlantic-based Polaris fleet with both the United Kingdom and Spain, permitting the use of bases at the Holy Loch in Scotland (established in 1961) and at Naval Station Rota (Polaris base established in 1964) in the Bay of Cadiz. The forward deployment bases were much closer to patrol areas than U.S. East Coast bases, avoiding the necessity for lengthy transit times. In the Pacific, a Polaris base was also established at Guam in 1964, and the Regulus missile program was deactivated with the advent of Polaris in the Pacific. The forward-basing arrangement was continued when Poseidon replaced Polaris, starting in 1972, in what by then were the 31 Atlantic Fleet SSBNs. The 10 older SSBNs that could not use Poseidon were assigned to the Pacific Fleet in the 1970s. Polaris was not accurate enough to destroy hardened targets, but would have been effective against dispersed surface targets, such as airfields, radar and SAM sites, as well as military and industrial centers of strategic importance. The military authorities, however, regarded Polaris as but one part of a nuclear triad including ICBMs and bombers, each with its own function. The task allotted to Polaris of "taking out" peripheral defenses was well suited to its characteristics and limitations. The forward deployment strategy required some infrastructure. To allow quick establishment of bases and to minimize the impact on the host country, each base was centered on a submarine tender and a floating drydock, with minimal facilities on shore, mostly family support for the tender's crew. The first Polaris submarine tender was USS "Proteus", a World War II tender that was refitted in 1959–60 with the insertion of a midships missile storage compartment and handling crane. "Proteus" established each of the three forward deployment bases, and four additional Polaris tenders were commissioned in 1962–65. A two-crew concept was established for SSBNs, combined with forward deployment, to maximize the time each submarine would spend on patrol. The crews were named Blue and Gold after the U.S. Naval Academy colors. The crews were deployed for 105 days and at their home bases for 95 days, with a 3-day turnover period at each end of the deployed period. Crews were flown from their home bases to and from the forward deployment bases. After taking over the boat, the crew would perform a 30-day refit assisted by the tender, followed by a 70-day deterrent patrol. Sometimes a port visit would be arranged in the middle of the patrol. The home bases for Atlantic Fleet crews were Groton, Connecticut and Charleston, South Carolina.
Pacific Fleet crews were based at Pearl Harbor, Hawaii. Two Polaris missile depots were established in the United States: Polaris Missile Facility Atlantic (POMFLANT) at Charleston, South Carolina in 1960, and later Strategic Weapons Facility Pacific (SWFPAC) at Bangor, Washington. To transport missiles and other supplies from the missile depots to the forward deployment bases, several cargo ships were converted to carry missiles and were designated as T-AKs, operated by the Military Sealift Command with mostly civilian crews. The advent of the Trident I missile, refitted to 12 Atlantic Fleet SSBNs starting in 1979 and with a much greater range than Polaris or Poseidon, meant that SSBNs could be based in the United States. The 18 "Ohio"-class SSBNs, slated to replace the 41 older SSBNs, also started commissioning in 1981, initially carrying 24 Trident I missiles but later refitted with the much larger and more capable Trident II missile. In the late 1970s it was decided that Pacific Fleet "Ohio"-class SSBNs would be based at Bangor, WA, collocated with SWFPAC, and that the refitted Trident I SSBNs and additional "Ohio"-class SSBNs would be based at a new facility at King's Bay, Georgia. A new missile depot, Strategic Weapons Facility Atlantic (SWFLANT), was also constructed at King's Bay to replace POMFLANT. The SSBN facility at Rota was closed in 1979 as King's Bay began refitting submarines. As USS "Ohio" commenced sea trials in 1980, the 10 remaining Polaris submarines in the Pacific Fleet were disarmed and reclassified as SSNs to avoid exceeding SALT II treaty limits, and the SSBN base at Guam was closed at this time. By 1992, the Soviet Union had collapsed, 12 "Ohio"-class SSBNs had been commissioned, and the START I treaty had gone into effect, so Holy Loch was closed and the remaining 31 original SSBNs were disarmed. Most of these were decommissioned and later scrapped in the Ship-Submarine Recycling Program, but a few were converted to other roles; two remain in use, disarmed, as nuclear power training vessels attached to the Naval Nuclear Power School at Charleston, SC. To meet the need for greater accuracy over longer ranges, the Lockheed designers adopted an improved reentry vehicle concept along with improved guidance, fire control, and navigation systems. The major performance gains of the Polaris A-3 over earlier models came from many improvements, including new propellants and new materials in the construction of the combustion chambers. The later versions (the A-2, A-3, and B-3) were larger, weighed more, and had longer ranges than the A-1; the range increase was the most important, with each successive model flying farther than its predecessor. The A-3 featured multiple re-entry vehicles (MRVs), which spread the warheads about a common target, and the B-3 was to have penetration aids to counter Soviet anti-ballistic missile defenses. The U.S. Navy began to replace Polaris with Poseidon in 1972. The B-3 missile evolved into the C-3 Poseidon missile, which abandoned the decoy concept in favor of using the C3's greater throw-weight for larger numbers (10–14) of new hardened high-re-entry-speed reentry vehicles that could overwhelm Soviet defenses by sheer weight of numbers and high speed after re-entry. Poseidon turned out to be a less than reliable system, and both it and Polaris were eventually replaced by the Trident.
A proposed Undersea Long-Range Missile System (ULMS) program outlined a long-term plan which proposed the development of a longer-range missile designated ULMS II, which was to achieve twice the range of the existing Poseidon (ULMS I) missile. In addition to a longer-range missile, a larger submarine (the Ohio class) was proposed to replace the submarines then being used with Poseidon. The ULMS II missile system was designed to be retrofitted to the existing SSBNs, while also being fitted to the proposed Ohio-class submarine. In May 1972, the term ULMS II was replaced with Trident. The Trident was to be a larger, higher-performance missile with a range greater than 6,000 miles. Under the agreement, the United Kingdom paid an additional 5% of its total procurement cost of 2.5 billion dollars to the U.S. government as a research and development contribution. In 2002, the United States Navy announced plans to extend the life of the submarines and the D5 missiles to the year 2040. This requires a D5 Life Extension Program (D5LEP), which is currently underway. The main aim is to replace obsolete components at minimal cost by using commercial off-the-shelf (COTS) hardware, all the while maintaining the demonstrated performance of the existing Trident II missiles. STARS, the Strategic Target System, is a BMDO program managed by the U.S. Army Space and Strategic Defense Command (SSDC). It began in 1985 in response to concerns that the supply of surplus Minuteman I boosters, used to launch targets and other experiments on intercontinental ballistic missile flight trajectories in support of the Strategic Defense Initiative, would be depleted by 1988. SSDC tasked Sandia National Laboratories, a Department of Energy laboratory, with developing an alternative launch vehicle using surplus Polaris boosters. Sandia developed two STARS booster configurations: STARS I and STARS II. STARS I consisted of refurbished Polaris first and second stages and a commercially procured Orbus I third stage. It can deploy single or multiple payloads, but the multiple payloads cannot be deployed in a manner that simulates the operation of a post-boost vehicle (PBV). To meet this specific need, Sandia developed an Operations and Deployment Experiments Simulator (ODES), which functions as a PBV. When ODES was added to STARS I, the configuration became known as STARS II. The development phase of the STARS program was completed in 1994, and BMDO provided about $192.1 million for this effort. The operational phase began in 1995. The first STARS I flight, a hardware check-out flight, was launched in February 1993, and the second flight, a STARS I reentry vehicle experiment, was launched in August 1993. The third flight, a STARS II development mission, was launched in July 1994, with all three flights considered successful by BMDO. The Secretary of Defense conducted a comprehensive review of the nation's defense strategy in 1993, which drastically reduced the number of STARS launches required to support National Missile Defense (NMD) and reduced BMDO funding. Due to the launch and budget reductions, the STARS office developed a draft long-range plan for the STARS program that examined three options for its future. When the STARS program was started in 1985 it was expected that there would be four launches per year. Because of the large number of anticipated launches and an unknown defect rate for surplus Polaris motors, the STARS office acquired 117 first-stage and 102 second-stage surplus motors.
As of December 1994, seven first-stage and five second-stage refurbished motors were available for future launches. BMDO is currently evaluating STARS as a potential long-range system for launching targets for development tests of future Theater Missile Defense systems. STARS I was first launched in 1993, and from 2004 onwards has served as the standard booster for trials of the Ground-Based Interceptor. From the early days of the Polaris program, American senators and naval officers suggested that the United Kingdom might use Polaris. In 1957, Chief of Naval Operations Arleigh Burke and First Sea Lord Louis Mountbatten began corresponding on the project. After the cancellations of the Blue Streak and Skybolt missiles in the 1960s, under the 1962 Nassau Agreement that emerged from meetings between Harold Macmillan and John F. Kennedy, the United States would supply Britain with Polaris missiles, launch tubes, ReBs, and the fire-control systems. Britain would make its own warheads and initially proposed to build five ballistic missile submarines, later reduced to four by the incoming Labour government of Harold Wilson, with 16 missiles to be carried on each boat. The Nassau Agreement was worded with deliberate opacity, leaving the sale of Polaris open to interpretation by each country: for the United States, the wording allowed the sale to fall within the scope of NATO's deterrence powers, while for the British, the sale could be viewed as a solely British deterrent. The Polaris Sales Agreement was signed on April 6, 1963. In return, the British agreed to assign control over their Polaris missile targeting to SACEUR (Supreme Allied Commander, Europe), with the provision that in a national emergency, when unsupported by the NATO allies, the targeting, permission to fire, and firing of those Polaris missiles would reside with the British national authorities. Nevertheless, the consent of the British Prime Minister has always been required for the use of British nuclear weapons, including SLBMs. Operational control of the Polaris submarines was assigned to another NATO Supreme Commander, SACLANT (Supreme Allied Commander, Atlantic), based near Norfolk, Virginia, although SACLANT routinely delegated control of the missiles to his deputy commander in the Eastern Atlantic area, COMEASTLANT, who was always a British admiral. Polaris was the largest project in the Royal Navy's peacetime history. Although in 1964 the new Labour government considered cancelling Polaris and turning the submarines into conventionally armed hunter-killers, it continued the program, as Polaris gave Britain a global nuclear capacity (perhaps east of Suez) at a cost £150 million less than that of the V bomber force. By adopting many established American methodologies and components, Polaris was finished on time and within budget. On 15 February 1968, HMS "Resolution", the lead ship of her class, became the first British vessel to fire a Polaris. All Royal Navy SSBNs have been based at Faslane, only a few miles from Holy Loch.
Although one submarine of the four was always in a shipyard undergoing a refit, recent declassifications of archived files disclose that the Royal Navy deployed four boatloads of reentry vehicles and warheads, plus spare warheads for the Polaris A3T, retaining a limited ability to re-arm and put to sea the submarine that was in refit. When Polaris was replaced by the Chevaline warhead, the sum total of deployed RVs and warheads was reduced to three boatloads. The original U.S. Navy Polaris had not been designed to penetrate anti-ballistic missile (ABM) defenses, but the Royal Navy had to ensure that its small Polaris force, operating alone and often with only one submarine on deterrent patrol, could penetrate the ABM screen around Moscow. Britain's submarines carried the Polaris A3T, a modification of the model the U.S. used from 1968 to 1972. Similar concerns about ABM defenses were present in the U.S. as well, resulting in a new American defense program known as Antelope, whose purpose was to modify the Polaris; the tests conducted under Antelope examined, among other things, ways to increase deployment efficiency and improve the missile's penetrative power. Britain's uncertainty about its own missiles led it to examine the Antelope program, with assessments carried out at Aldermaston, and the evidence from that evaluation led to the British decision to undertake a program of their own, following that of the United States. The result was a programme called "Chevaline" that added multiple decoys, chaff, and other defensive countermeasures. Its existence was only revealed in 1980, partly because of the cost overruns of the project, which had almost quadrupled the original estimate given when the project was finally approved in January 1975. The program also ran into trouble with the British Labour Party. The Labour government's Chief Scientific Adviser, Solly Zuckerman, believed that Britain no longer needed new designs for nuclear weapons and that no more nuclear warhead tests would be necessary. Though the Labour Party had a clear platform on nuclear weapons, the Chevaline program found supporters; one individual who supported modification of the Polaris was the Secretary of State for Defence, Denis Healey. Despite the approval of the program, its expense created hurdles that lengthened the time it took for the system to come to fruition, and the cost led Britain to revisit disbanding the program in 1977. The system became operational in mid-1982, and the last British SSBN was equipped with it in mid-1987. Chevaline was withdrawn from service in 1996. Though Britain adopted the Antelope program's methods, no input on the design came from the United States; Aldermaston was solely responsible for the Chevaline warheads. The British did not ask to extend the Polaris Sales Agreement to cover the Polaris successor Poseidon, due to its cost. The Ministry of Defence upgraded its nuclear missiles to the longer-ranged Trident after much political wrangling within the Callaghan Labour government over its cost and whether it was necessary. The outgoing Prime Minister, James Callaghan, made his government's papers on Trident available to Margaret Thatcher's incoming Conservative government, which took the decision to acquire the Trident C4 missile.
A subsequent decision to upgrade the purchase to the even larger, longer-ranged Trident D5 missile was possibly taken to ensure missile commonality between the U.S. Navy and the Royal Navy, which was considerably important when the Royal Navy's Trident submarines were also to use the Naval Submarine Base Kings Bay. Even though the U.S. Navy initially deployed the Trident C4 missile in the original set of its "Ohio"-class submarines, it was always planned to upgrade all of these submarines to the larger and longer-ranged Trident D5 missile, and eventually to eliminate all of the C4 missiles from the U.S. Navy. This change-over has been completely carried out, and no Trident C4 missiles remain in service. The Polaris missile remained in Royal Navy service long after it had been completely retired and scrapped by the U.S. Navy in 1980–1981. Consequently, many spare parts and repair facilities for Polaris located in the U.S. ceased to be available (such as at Lockheed, which had moved on first to the Poseidon and then to the Trident missile). During its reconstruction program in 1957–1961, the Italian cruiser "Giuseppe Garibaldi" was fitted with four Polaris missile launchers located in the aft part of the ship. The Italian usage of Polaris missiles was partially the result of the Kennedy administration. Prior to 1961, the Italian and Turkish fleets were outfitted with Jupiter missiles. Three factors were instrumental in the movement away from the Jupiter project in Italy and Turkey: the president's view of the project, new understanding about weapons systems, and the diminished necessity of the Jupiter missile. The Joint Congressional Committee report on Atomic Energy accentuated these three factors in Italy's decision to switch to the Polaris missile, advocating a change from the outdated Jupiter missiles already housed by the Italians to the newer Polaris. The report resulted in Secretary of State Dean Rusk and Assistant Secretary of Defense Paul Nitze discussing the possibility of changing the warheads in the Mediterranean. Successful tests held in 1961–1962 induced the United States to study a NATO Multilateral Nuclear Force (MLF), consisting of 25 international surface vessels from the US, United Kingdom, France, Italy, and West Germany, equipped with 200 Polaris nuclear missiles, enabling the European allies to participate in the management of the NATO nuclear deterrent. The Italians were not swayed by the Americans' interest in modernizing their warheads. However, after the Cuban Missile Crisis, Kennedy met the Italian leader Amintore Fanfani in Washington. Fanfani conceded and went along with Kennedy's Polaris plan, despite the Italians' hope of sticking with the Jupiter missile. The MLF plan, as well as the Italian Polaris program, was abandoned, both for political reasons (in consequence of the Cuban Missile Crisis) and because of the initial operational availability of the first SSBNs, which were capable of launching SLBMs while submerged, a solution preferable to surface-launched missiles. Italy developed a new domestic version of the missile, an SLBM designated Alfa. That program was cancelled in 1975 after Italy ratified the Nuclear Non-Proliferation Treaty, with the final launch of the third prototype in 1976. Two Italian Navy cruisers, commissioned in 1963–1964, were "fitted for but not with" two Polaris missile launchers per ship. All four launchers were built but never installed, and were stored at the La Spezia naval facility.
The Italian cruiser "Vittorio Veneto", launched in 1969, was also "fitted for but not with" four Polaris missile launchers. During refit periods in 1980–1983, these facilities were removed and used for other weapons and systems.
https://en.wikipedia.org/wiki?curid=24787
UGM-73 Poseidon The UGM-73 Poseidon missile was the second US Navy nuclear-armed submarine-launched ballistic missile (SLBM) system, powered by a two-stage solid-fuel rocket. It succeeded the UGM-27 Polaris beginning in 1972, bringing major advances in warheads and accuracy. It was followed by Trident I in 1979, and Trident II in 1990. A development study for a longer-range version of the Polaris missile, achieved by enlarging it to the maximum possible size allowed by the existing launch tubes, started in 1963. Tests had already shown that Polaris missiles could be operated without problems in launch tubes that had their fiberglass liners and locating rings removed. The project was given the title Polaris B3 in November, but the missile was eventually named Poseidon C3 to emphasize the technical advances over its predecessor. The C3 was the only version of the missile produced, and it was also given the designation UGM-73A. Slightly longer and considerably wider and heavier than Polaris A3, Poseidon had the same range, greater payload capacity, improved accuracy, and multiple independently targetable reentry vehicle (MIRV) capability. Its MIRV capacity has been given as up to either ten or fourteen W68 thermonuclear warheads, contained in Mark 3 reentry vehicles and deliverable to multiple targets. As with Polaris, starting a rocket motor while the missile was still in the submarine was considered very dangerous, so the missile was ejected from its launch tube using high-pressure steam produced by a solid-fueled boiler, and the main rocket motor ignited automatically once the missile had risen clear of the submarine. The first test launch took place on 16 August 1968. The first successful at-sea launch was from a surface ship, USS "Observation Island" (between July 1 and December 16, 1969), earning the ship the Meritorious Unit Commendation, and the first test launch from a submarine took place aboard USS "James Madison" on 3 August 1970. The weapon officially entered service on 31 March 1971. It eventually equipped 31 "Lafayette"-, "James Madison"-, and "Benjamin Franklin"-class submarines. The Royal Navy also considered adopting Poseidon in the 1970s as an upgrade to its Polaris A3T boats, and as in the US this would have kept the existing hulls. Although it was the Royal Navy's favoured option, the British government instead adopted Chevaline, a two-warhead MRV system with decoys, on the existing Polaris airframes, and later moved to the Trident D5 in new boats. Beginning in 1979, 12 Poseidon-equipped SSBNs were refitted with Trident I. By 1992, the Soviet Union had collapsed, 12 Ohio-class submarines had been commissioned, and the START I treaty had gone into effect, so the 31 older Poseidon- and Trident I-armed SSBNs were disarmed, withdrawing Poseidon from service.
https://en.wikipedia.org/wiki?curid=24788
Private (rank) A private is a soldier of the lowest military rank (equivalent to NATO rank grades OR-1 to OR-3, depending on the force served in). In modern military writing, "private" is abridged to "Pte" in the United Kingdom and other Commonwealth of Nations countries and to "Pvt" in the United States. The term derives from the medieval term "private soldiers" (a term still used in the British Army), denoting individuals who were either hired, conscripted, or mustered into service by a feudal nobleman commanding a battle group of an army. The usage of "private" dates from the 18th century. In Indonesia, this rank is referred to as "Prajurit" (meaning soldier), which is the lowest rank in the Indonesian National Armed Forces and special police force. In the Indonesian Army, Indonesian Marine Corps, and Indonesian Air Force, "private" has three levels: Private ("Prajurit Dua"), Private First Class ("Prajurit Satu"), and Master Private ("Prajurit Kepala"). The next rank above these is corporal. In the Israel Defense Forces, "private" refers to the lowest enlisted rank. After 7–10 months of service (7 for combatants, 8 for combat support and 10 for non-combatants), soldiers are promoted from private to corporal if they performed their duties appropriately during this time. Soldiers who take a commander's course, or who are prisoner instructors or practical engineers, become corporals earlier. An IDF private wears no uniform insignia and is sometimes described as having a "slick sleeve" for this reason. The equivalent ranks to private within the North and South Korean armies are "ilbyeong" (private first class) and "ibyeong" (private second class). The insignia for these ranks is one line ( | ) or two lines ( || ): private second class is denoted by one line, private first class by two. In Singapore, once recruits complete their Basic Military Training (BMT) or Basic Rescue Training (BRT), they attain the rank of private (PTE). Privates do not wear ranks on their rank holder. PTEs who perform well are promoted to the rank of lance corporal (LCP). The private first class (PFC) rank is rarely awarded nowadays by the SAF. All private enlistees can be promoted directly to lance corporal should they meet the minimum qualifying requirements for conduct appraisal and work performance. Recruits who did not complete BMT but completed 2 years of National Service will be promoted to private. In the Australian Army, a soldier of private rank wears no insignia. Like its British Army counterpart, the Australian Army rank of private (PTE) has other titles, depending on the corps and specification of that service member. In the Bangladesh Army the lowest enlisted rank is "sainik" (সৈনিক), literally meaning "soldier". In the Canadian Armed Forces, "private" is the lowest rank for members who wear army uniform. There are three levels of private: private (recruit), private (basic), and private (trained). All persons holding the rank of private are referred to as such, and the qualifier shown in brackets is used on employment records only. The air force rank of aviator was formerly called "private", but this changed when traditional air force rank insignia were restored in 2014. The French-language equivalent of private is "soldat".
Canadian Army privates may be known by other titles, depending on their military trade and their unit's traditions. In the Indian Army and Pakistan Army the lowest enlisted rank is sepoy (/ˈsiːpɔɪ/), literally meaning "soldier", a term derived from Persian. A sepoy does not wear any rank insignia on his uniform. They are commonly referred to as "jawans". In the South African Army the lowest enlisted rank is private. Privates wear no insignia on their uniforms, and the rank is known by different titles in the different corps. In the British Army, a private (Pte) equates to both OR-1 and OR-2 on the NATO scale, although there is no difference in rank. Privates wear no insignia. Many regiments and corps use other distinctive and descriptive names instead of private; some of these ranks have been used for centuries, while others are less than 100 years old. In the contemporary British Armed Forces, the army rank of private is broadly equivalent to able seaman in the Royal Navy; aircraftman, leading aircraftman and senior aircraftman in the Royal Air Force; and marine (Mne) or bandsman, as the appropriate equivalent rank, in the Royal Marines. In the Boys' Brigade, the rank of private is used when a boy moves from the junior section to the company section. In the Corps of Royal Marines the rank structure follows that of British infantry regiments, with the exception that the Royal Marines equivalent of private is marine (Mne). During the course of the First World War, some Royal Marines also took the rank of sapper, usually as part of the Royal Marine Divisional Engineers of the Royal Naval Division. The lowest rank in the Austrian Armed Forces is the "Rekrut" (literally "recruit"). For recruits in training to become non-commissioned or commissioned officers, the rank bears an additional silver crossbar. Up until 1998 the rank was called "Wehrmann". In 2017 the silver crossbar was removed, as the officer-career system changed. The equivalent rank to private in the Spanish, Mexican, Colombian, Dominican and Argentinian armies is the "soldado raso", meaning "rankless soldier", or simply "soldado". On enlistment in the Belgian army one is given the rank of "soldaat" (Dutch) or "soldat" (French), whether one wishes to be a volunteer, non-commissioned officer or officer. Subsequent rank depends on the branch of the service: for example, at the Royal Military Academy (for officer training) one is soon promoted to the rank of corporal ("korporaal" in Dutch, "caporal" in French). The insignia is a simple black mark, or the simplified version of the Royal Military Academy's coat of arms for candidate officers. "Soldado" is the rank equivalent to private in the Brazilian and Portuguese armed forces; "soldado" means "soldier" in Portuguese. The Finnish equivalent rank is "sotamies" (literally "war man"), although since 1973 this has been purely a paper term, as all infantry troopers were renamed "jääkäri" (jaeger) troops, a title previously reserved for mobile light infantry. As in the British Army, the various branches use different names: in the Finnish Air Force, the basic rank is "lentosotamies" ("flight war man"), and in the Finnish Navy it is "matruusi" ("seaman"), or "tykkimies" ("cannon man") in the marine infantry. Special corps troopers may be referred to by their function or unit, such as "kaartinjääkäri" (Guards jaeger), "panssarijääkäri" (panzer jaeger), "laskuvarjojääkäri" (paratroop jaeger), "rajajääkäri" (border jaeger) or "rannikkojääkäri" (coastal jaeger).
In the French army, "soldat de seconde classe" is the lowest military rank. This rank is also referred to as "recrue" ("recruit"). The modern German "Bundeswehr" equivalent of the private rank (NATO-standard code OR-2) is Gefreiter. The equivalent of the lowest rank (NATO-standard code OR-1) is either "Schütze" (rifleman), "Kanonier" (gunner) or "Jäger" (light infantryman or ranger), and sometimes simply "Soldat" (soldier), as well as other unit-specific distinctions. Up until 1918 it was "Gemeine" (ordinary [soldier]), along with unit-specific distinctions such as "Musketier" (musketeer), "Infanterist" (infantryman), "Kürassier" (cuirassier), "Jäger" (light infantryman or ranger), "Füsilier" (fusilier) etc.; until 1945 it was "Soldat" (soldier) and unit-specific distinctions such as "Schütze" (rifleman), "Grenadier" (grenadier) etc. The navy equivalent of the OR-1 rank is known as "Matrose" (sailor or seaman), and the German Air Force equivalent is "Flieger" (aviator or airman), which is also used by army aviators. The name of the lowest rank in the Hungarian army ("Magyar Honvédség") is "honvéd", which means "homeland defender". The word is also used informally for a soldier in general of any rank (i.e. "our honvéds", or an officer referred to as a "honvédtiszt", a "honvéd" officer). This is because Hungarian military traditions are strictly defensive, despite the Hungarian army participating in offensives on foreign soil in both world wars. The word "honvéd" has been in use since the Hungarian Revolution of 1848. The term is not used for soldiers of foreign armies: a foreign soldier with no rank is called "közlegény", literally "common lad" or "common man". Private (Pte) ("saighdiúr singil" in Irish) is the lowest enlisted rank in the Irish Army. Soldiers enlist as recruits and then undergo a basic course of instruction. There are three grades of private in the army. After basic training the soldier is upgraded (rather than promoted) from recruit to private 2 star (Pte 2*) ("saighdiúr singil, 2 réalta"). After more corps-specific training (usually lasting eight weeks) the soldier is upgraded to private 3 star (Pte 3*) ("saighdiúr singil, 3 réalta"). All are usually just addressed as "private", although before being upgraded, recruits may be addressed as "recruit". In corps units, the rank designation changes: in the artillery, the rank is known as gunner (Gnr), but usually only after the completion of a gunners' course, and in the cavalry it is known as trooper (Tpr). Communications and Information Services privates are known as signalman or signalwoman. Medical orderlies are sometimes referred to as medic, although this can apply to privates and corporals. In the Italian Army, "soldato" is the lowest military rank; a new entrant is also referred to as a "recluta" ("recruit"). In the Royal Netherlands Army, the "Landmacht", the equivalent rank is "soldaat" (soldier), similar to the original French, with several classes. Depending on where the "soldaat" serves, he may be deemed a "kanonnier" (gunner in the artillery), "huzaar" (hussar in the cavalry) or "fuselier" (rifleman in the rifles), as well as "commando", "jager" or "rijder". There is less differentiation than in other countries between the different armed forces. A "soldaat" can be promoted to "korporaal" (corporal). In the Swedish Armed Forces a recruit holds the lowest rank, with separate titles in the army and the navy. After basic training, which lasts roughly three months, other terms can be used, such as "soldat" (soldier), "jägare" (jaeger), etc.
In the Swiss Armed Forces a recruit is given the lowest rank (equivalent to NATO OR-2), usually after completion of the first 12 weeks of basic training, also referred to as recruit school. In the Turkish Land Forces, Turkish Air Force and Turkish Naval Forces, "Er" (Private) is the lowest possible rank; it has no insignia. The rank is also used by the National Bolivarian Armed Forces of Venezuela, where it likewise has no insignia. In the United States Army, private is used for the two lowest enlisted ranks, just below private first class (E-3, or PFC). The lowest rank is "Private (E-1)" or PV1, sometimes referred to as "recruit", but this rank can also be held by soldiers demoted as punishment under the Uniform Code of Military Justice, or by prisoners after conviction and demotion until they are discharged. A PV1 wears no uniform rank insignia; since the advent of the Army Combat Uniform (ACU), the slang term "fuzzy" has come into vogue, referring to the blank velcro patch on the ACU where the rank insignia would normally be placed. The second rank, "Private (E-2)" or PV2, wears a single chevron, known colloquially as "mosquito wings". Advancement to PV2 is automatic after six months' time in service, but may be shortened to four months by a waiver. A person who has earned the Eagle Scout award or the Gold Award, or has completed at least two years of JROTC, may enlist at any time at the rank of PV2. The term of address "Private" may be properly applied to any Army soldier from E-1 (PV1) to E-3 (PFC). The abbreviation "PVT" may be used whenever the specific grade of private is immaterial (such as in Tables of Organization and Equipment). In the United States Marine Corps, "private" (Pvt) refers only to the lowest enlisted rank, just below private first class. A Marine Corps private wears no uniform insignia and is sometimes described as having a "slick sleeve" for this reason. Most new, non-officer Marines begin their military career as a private. In the Marine Corps, privates first class are not referred to as "private"; it is more appropriate to use either "private first class" or "PFC".
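To keep the cross-national comparisons straight, the NATO-code and pay-grade equivalences mentioned in this article can be collected into a small lookup table. The following is a minimal illustrative sketch in Python; the table structure and function name are invented for illustration, and the entries record only the correspondences stated above.

# Illustrative lookup of private-equivalent ranks, using only the
# NATO codes and US pay grades stated in this article.
PRIVATE_EQUIVALENTS = {
    ("British Army", "Private (Pte)"): ("OR-1", "OR-2"),
    ("German Bundeswehr", "Gefreiter"): ("OR-2",),
    ("German Bundeswehr", "Schütze / Kanonier / Jäger"): ("OR-1",),
    ("Swiss Armed Forces", "recruit"): ("OR-2",),
    ("US Army", "Private (PV1)"): ("E-1",),  # US pay grade, not a NATO code
    ("US Army", "Private (PV2)"): ("E-2",),  # US pay grade, not a NATO code
}

def codes_for(service: str, rank: str) -> tuple:
    """Return the code(s) recorded above for a given service and rank."""
    return PRIVATE_EQUIVALENTS.get((service, rank), ())

print(codes_for("British Army", "Private (Pte)"))  # -> ('OR-1', 'OR-2')

A table like this also makes the British peculiarity noted above visible at a glance: a single rank title spanning two NATO codes.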
https://en.wikipedia.org/wiki?curid=24795
Proclus Proclus Lycaeus (8 February 412 – 17 April 485 AD), called the Successor (Greek: "Próklos ho Diádokhos"), was a Greek Neoplatonist philosopher, one of the last major classical philosophers (see Damascius). He set forth one of the most elaborate and fully developed systems of Neoplatonism. He stands near the end of the classical development of philosophy and influenced Western medieval philosophy (Greek and Latin). Proclus was born on February 8, 412 AD (his birth date is deduced from a horoscope cast by a disciple, Marinus) in Constantinople to a family of high social status in Lycia (his father Patricius was a high legal official, very important in the Eastern Roman Empire's court system) and was raised in Xanthus. He studied rhetoric, philosophy and mathematics in Alexandria, with the intent of pursuing a judicial position like his father. Before completing his studies, he returned to Constantinople when his rector, his principal instructor (one Leonas), had business there. Proclus became a successful practicing lawyer. However, the experience of the practice of law made Proclus realize that he truly preferred philosophy. He returned to Alexandria and began determinedly studying the works of Aristotle under Olympiodorus the Elder. He also began studying mathematics during this period, with a teacher named Heron (no relation to Hero of Alexandria, who was also known as Heron). A gifted student, he eventually became dissatisfied with the level of philosophical instruction available in Alexandria and went to Athens, the pre-eminent philosophical center of the day, in 431 to study at the Neoplatonic successor of the famous Academy founded 800 years earlier (in 387 BC) by Plato. There he was taught by Plutarch of Athens (not to be confused with Plutarch of Chaeronea), Syrianus, and Asclepigenia; he succeeded Syrianus as head of the Academy and would in turn be succeeded on his death by Marinus of Neapolis. He lived in Athens as a vegetarian bachelor, prosperous and generous to his friends, until the end of his life, except for a voluntary one-year exile designed to lessen the pressure put on him by his political-philosophical activity, which was little appreciated by the Christian rulers; he spent the exile traveling and being initiated into various mystery cults. He was also instructed in "theurgic" Neoplatonism, as derived from the Orphic and Chaldean Oracles. His house was recently discovered in Athens, under the pavement of Dionysiou Areopagitou Street, south of the Acropolis, opposite the theater of Dionysus. He had a great devotion to the goddess Athena, who he believed guided him at key moments in his life. Marinus reports that when Christians removed the statue of the goddess from the Parthenon, a beautiful woman appeared to Proclus in a dream and announced that the "Athenian Lady" wished to stay at his home. Proclus died aged 73 and was buried in a tomb near Mount Lycabettus. It is reported that he wrote 700 lines each day. The majority of Proclus's works are commentaries on dialogues of Plato ("Alcibiades", "Cratylus", "Parmenides", "Republic", "Timaeus").
In these commentaries he presents his own philosophical system as a faithful interpretation of Plato, and in this he did not differ from other Neoplatonists: he considered that "nothing in Plato's corpus is unintended or there by chance", that "Plato's writings were divinely inspired" (ὁ θεῖος Πλάτων "ho theios Platon", the divine Plato, inspired by the gods), and that "the formal structure and the content of Platonic texts imitated those of the universe", and therefore that they spoke often of things under a veil, hiding the truth from the philosophically uninitiated. Proclus was nevertheless a close reader of Plato, and quite often makes very astute points about his Platonic sources. A number of his Platonic commentaries are lost. Proclus, the scholiast to Euclid, knew Eudemus of Rhodes' "History of Geometry" well, and gave a short sketch of the early history of geometry, which appears to have been founded on the older, lost book of Eudemus. The passage has been referred to as "the Eudemian summary", and determines some approximate dates which otherwise might have remained unknown. The influential commentary on the first book of Euclid's "Elements of Geometry" is one of the most valuable sources we have for the history of ancient mathematics, and its Platonic account of the status of mathematical objects was influential. In this work, Proclus also listed the first mathematicians associated with Plato: a mature set of mathematicians (Leodamas of Thasos, Archytas of Taras, and Theaetetus), a second set of younger mathematicians (Neoclides, Eudoxus of Cnidus), and a third yet younger set (Amyntas, Menaechmus and his brother Dinostratus, Theudius of Magnesia, Hermotimus of Colophon and Philip of Opus). Some of these mathematicians were influential in arranging the "Elements" that Euclid later published. In addition to his commentaries, Proclus wrote two major systematic works. The "Elements of Theology" (Στοιχείωσις θεολογική) consists of 211 propositions, each followed by a proof, beginning from the existence of the One (divine Unity) and ending with the descent of individual souls into the material world. The "Platonic Theology" (Περὶ τῆς κατὰ Πλάτωνα θεολογίας) is a systematisation of material from the Platonic dialogues, showing from them the characteristics of the divine orders, the part of the universe which is closest to the One. We also have three essays, extant only in Latin translation: "Ten doubts concerning providence" ("De decem dubitationibus circa providentiam"); "On providence and fate" ("De providentia et fato"); and "On the existence of evils" ("De malorum subsistentia"). He also wrote a number of minor works. Proclus's system, like that of the other Neoplatonists, is a combination of Platonic, Aristotelian, and Stoic elements. In its broad outlines, Proclus's system agrees with that of Plotinus. However, following Iamblichus, Plutarch of Athens, and his master Syrianus, Proclus presents a much more elaborate universe than Plotinus, subdividing the elements of Plotinus's system into their logically distinct parts and positing these parts as individual things. This multiplication of entities is balanced by the monism which is common to all Neoplatonists. What this means is that, on the one hand, the universe is composed of hierarchically distinct things, but on the other, all things are part of a single continuous emanation of power from the One.
From this latter perspective, the many distinctions to be found in the universe are a result of the divided perspective of the human soul, which needs to make distinctions in its own thought in order to understand unified realities. This idealist tendency was taken further by John Scotus Eriugena. There is a double motivation found in Neoplatonic systems. The first is a need to account for the origin and character of all things in the universe. The second is a need to account for how we can know this origin and character of things. These two aims are related: they begin from the assumption that we can know reality, and then ask what reality must be like, in its origin and unfolding, so that we can know it. An important element in the Neoplatonic answer to these questions is its reaction to Scepticism. In response to the sceptical position that we only know the appearances presented by our senses, and not the world as it is, Plotinus placed the object of knowledge inside the soul itself, and accounted for this interior truth through the soul's kinship with its own productive principles. The first principle in Neoplatonism is the One (Greek: "to Hen"). Being proceeds from the One. The One cannot itself be a being. If it were a being, it would have a particular nature, and so could not be universally productive. Because it is "beyond being" ("epekeina tes ousias", a phrase from Plato's "Republic" 509b), it is also beyond thought, because thinking requires the determinations which belong to being: the division between subject and object, and the distinction of one thing from another. For this reason, even the name "the One" is not a positive name, but rather the most non-multiple name possible, a name derived from our own inadequate conception of the simplicity of the first principle. The One causes all things by conferring unity, in the form of individuality, on them, and in Neoplatonism existence, unity, and form tend to become equivalent. The One causes things to exist by donating unity, and the particular manner in which a thing is one is its form (a dog and a house are individual in different manners, for example). Because the One makes things exist by giving them the individuality which makes them what they are as distinct and separate beings, the Neoplatonists thought of it also as the source of the good of everything. So the other name for the One is the Good. Despite appearances, the first principle is not double; all things have a double relation to it, as coming from it (the One) and then being oriented back towards it to receive their perfection or completion (the Good). The particular characteristic of Proclus's system is his elaboration of a level of individual ones, called "henads", between the One which is before being and the intelligible divinity. The henads exist "superabundantly", also beyond being, but they stand at the head of chains of causation ("seirai") and in some manner give to these chains their particular character. He identifies them with the Greek gods, so one henad might be Apollo and be the cause of all things Apollonian, while another might be Helios and be the cause of all "sunny" things. Each henad participates in every other henad, according to its character. What appears to be multiplicity is not multiplicity at all, because any henad may rightly be considered the center of the polycentric system. The principle which is produced below the level of the One and the henads is the divine Intellect ("Nous").
The One cannot have a determinate nature if it is to be the source of all determinate natures, so what it produces is the totality of all determinate natures, or Being. By determination is meant existence within boundaries, a being "this" and not "that". The most important determinate natures are the "Greatest Kinds" from Plato's "Sophist" (Being, Same, Other, Rest, Motion) and Aristotle's ten categories (Quantity, Quality, etc.). In other words, the One produces what Plato called the Forms, and the Forms are understood to be the first determinations into which all things fall. The One produces the Forms through the activity of thinking. The One itself does not think, but instead produces a divine mind, Intellect, whose thoughts are themselves the Forms. Intellect is both Thinking and Being. It is a mind which has its own contents as its object. All things relate to the first principle as both One and Good. As Being, Intellect is the product of the One. But it also seeks to return to its cause, and so in Thinking it attempts to grasp the One as its Good. But because the simplicity of the One/Good does not allow Intellect to grasp it, what Intellect does is generate a succession of perspectives around its simple source. Each of these perspectives is itself a Form, and is how Intellect generates for itself its own content. Plotinus speaks about the generation of Intellect from the One, and Intellect's attempt to return to the One in a thinking which is also a desiring. Proclus systematises this production through a threefold movement of remaining, procession, and return ("mone, proodos, epistrophe"). Intellect remains in the One, which means that it has the One as its origin. It proceeds from the One, which means that it comes to be as a separate entity. But it returns to the One, which means that it does not cut itself off from its source, but receives the good which is its identity from the One. This threefold motion is used by Proclus to structure all levels of his system below the One and above material reality, so that all things except those mentioned remain, proceed, and return. Proclus also gives a much more elaborate account of Intellect than does Plotinus. In Plotinus we find the distinction between Being and Thinking in Intellect. Proclus, in keeping with his triadic structure of remaining, procession, and return, distinguishes three moments in Intellect: Intelligible, Intelligible-Intellectual, and Intellectual. They correspond to the object of thought, the power of the object to be grasped by the subject, and the thinking subject. These three divisions are elaborated further, so that the intelligible moment consists of three triads (Being, Eternity, and the Living Being or Paradigm from Plato's "Timaeus"). The intelligible-intellectual moment also consists of three triads, and the intellectual moment is a hebdomad (seven elements), among which is numbered the Demiurge from Plato's "Timaeus" and also the monad of Time (which is before temporal things). In this elaboration of Intellect as a whole, Proclus is attempting to give a hierarchical ordering to the various metaphysical elements and principles that other philosophers have discussed, by containing them within a single triadic logic of unfolding. Proclus's universe unfolds according to the smallest steps possible, from unity to multiplicity. With Intellect emerges the multiplicity which allows one being to be different from another being. 
But as a divine mind, Intellect has a complete grasp of all its moments in one act of thought. For this reason, Intellect is outside of Time. Intellect as the second principle also gives rise to individual intellects, which hold various places within Proclus's cosmos. In terms of his sources, Intellect amounts to taking the Platonic Forms and placing them in the self-thinking thought which is Aristotle's Unmoved Mover. Soul ("Psyche") is produced by Intellect, and so is the third principle in the Neoplatonic system. It is a mind, like Intellect, but it does not grasp all of its own content as one. Therefore, with Soul, Time comes to be, as a measure of Soul's movement from one object of thought to another. Intellect tries to grasp the One, and ends up producing its own ideas as its content. Soul attempts to grasp Intellect in its return, and ends up producing its own secondary unfoldings of the Forms in Intellect. Soul, in turn, produces Body, the material world. In his commentary on Plato's "Timaeus", Proclus explains the role which Soul, as a principle, plays in mediating the Forms in Intellect to the body of the material world as a whole. The Soul is constructed through certain proportions, described mathematically in the "Timaeus", which allow it to make Body as a divided image of its own arithmetical and geometrical ideas. Individual souls have the same overall structure as the principle of Soul, but they are weaker. They have a tendency to be fascinated with the material world and to be overpowered by it. It is at this point that individual souls are united with a material body (i.e. when they are born). Once in the body, our passions have a tendency to overwhelm our reason. According to Proclus, philosophy is the activity which can liberate the soul from subjection to bodily passions, remind it of its origin in Soul, Intellect, and the One, and prepare it not only to ascend to the higher levels while still in this life, but also to avoid falling immediately back into a new body after death. Because the soul's attention, while inhabiting a body, is turned so far away from its origin in the intelligible world, Proclus thinks that we need to make use of bodily reminders of our spiritual origin. In this he agrees with the doctrines of theurgy put forward by Iamblichus. Theurgy is possible because the powers of the gods (the "henads") extend through their series of causation even down to the material world. And by certain power-laden words, acts, and objects, the soul can be drawn back up the series, so to speak. Proclus himself was a devotee of many of the religions in Athens, considering that the power of the gods could be present in these various approaches. For Proclus, philosophy is important because it is one of the primary ways to rescue the soul from a fascination with the body and restore it to its station. However, beyond its own station, the soul has Intellect as its goal, and ultimately unification with the One. So higher than philosophy is the non-discursive reason of Intellect, and the pre-intellectual unity of the One. Philosophy is therefore a means of its own overcoming, in that it points the soul beyond itself. Proclus can be considered the spokesman of mature Neoplatonism. His works had a great influence on the history of western philosophy. The extent of this influence, however, is obscured by the channels through which it was exercised. An important source of Procline ideas was the Pseudo-Dionysius.
This late-5th- or early-6th-century Christian Greek author wrote under the pseudonym Dionysius the Areopagite, the figure converted by St. Paul in Athens. Because of this fiction, his writings were taken to have almost apostolic authority. He is an original Christian writer, and in his works can be found a great number of Proclus's metaphysical principles. Another important source for the influence of Proclus on the Middle Ages is Boethius's "Consolation of Philosophy", which contains a number of Procline principles and motifs. The central poem of Book III is a summary of Proclus's "Commentary on the Timaeus", and Book V contains the important Procline principle that things are known not according to their own nature, but according to the character of the knowing subject. A summary of Proclus's "Elements of Theology" circulated under the name "Liber de Causis" (the "Book of Causes"). This book is of uncertain origin, but circulated in the Arabic world as a work of Aristotle and was translated into Latin as such. It had great authority because of its supposed Aristotelian origin, and it was only when Proclus's "Elements" were translated into Latin that Thomas Aquinas realised its true origin. Proclus's works also exercised an influence during the Renaissance through figures such as Georgius Gemistus Pletho and Marsilio Ficino. Before the contemporary period, the most significant scholar of Proclus in the English-speaking world was Thomas Taylor, who produced English translations of most of his works, with commentaries. His work inspired the New England Transcendentalists, including Ralph Waldo Emerson, who declared in 1843 that, in reading Proclus, "I am filled with hilarity & spring, my heart dances, my sight is quickened, I behold shining relations between all beings, and am impelled to write and almost to sing." Modern scholarship on Proclus essentially begins with E. R. Dodds's edition of the "Elements of Theology" in 1933. Since then he has attracted considerable attention, especially in the French-speaking world, though as of 2006 Procline scholarship still fell far short of the attention paid to Plotinus. An epigram is engraved on the tomb that houses Proclus and his master Syrianus. The crater Proclus on the Moon is named after him. A number of other minor works or fragments of works survive, and a number of major commentaries have been lost.
https://en.wikipedia.org/wiki?curid=24797
Prophet In religion, a prophet is an individual who is regarded as being in contact with a divine being and is said to speak on that entity's behalf, serving as an intermediary with humanity by delivering messages or teachings from the supernatural source to other people. The message that the prophet conveys is called a prophecy. Claims of prophethood have existed in many cultures throughout history, including Judaism, Christianity, Islam, ancient Greek religion, Zoroastrianism, Manichaeism, and many others. The English word "prophet" comes from a Greek compound, from "pro" (in advance) and the verb "phesein" (to tell); thus, a προφήτης ("prophétés") is someone who foretells future events and also conveys messages from the divine to humans; in a different interpretation, it means advocate or speaker. In Hebrew, the word נָבִיא ("nāvî"), "spokesperson", is traditionally translated as "prophet". The second subdivision of the Tanakh, the Nevi'im, is devoted to the Hebrew prophets. The meaning of "navi" is perhaps described in Deuteronomy 18:18, where God said, "...and I will put My words in his mouth, and he shall speak unto them all that I shall command him." Thus, the "navi" was thought to be the "mouth" of God. The root nun-bet-alef ("navi") is based on the two-letter root nun-bet, which denotes hollowness or openness; to receive transcendental wisdom, one must make oneself "open". In addition to writing and speaking messages from God, Israelite or Judean nevi'im ("spokespersons", "prophets") often acted out prophetic parables in their lives. For example, in order to contrast the people's disobedience with the obedience of the Rechabites, God has Jeremiah invite the Rechabites to drink wine, in disobedience to their ancestor's command. The Rechabites refuse, for which God commends them. Other prophetic parables acted out by Jeremiah include burying a linen belt so that it gets ruined, to illustrate how God intends to ruin Judah's pride. Likewise, Jeremiah buys a clay jar and smashes it in the Valley of Ben Hinnom in front of elders and priests to illustrate that God will smash the nation of Judah and the city of Judah beyond repair. God instructs Jeremiah to make a yoke from wood and leather straps and to put it on his own neck to demonstrate how God will put the nation under the yoke of Nebuchadnezzar, king of Babylon. In a similar way, the prophet Isaiah had to walk stripped and barefoot for three years to illustrate the coming captivity, and the prophet Ezekiel had to lie on his side for 390 days and eat measured food to illustrate the coming siege. The prophetic assignment is not always portrayed as positive in the Hebrew Bible, and prophets were often the target of persecution and opposition. God's personal prediction for Jeremiah, "Attack you they will, overcome you they can't," was borne out many times in the biblical narrative as Jeremiah warned of the destruction of those who continued to refuse repentance and to accept more moderate consequences. In return for his adherence to God's discipline and speaking God's words, Jeremiah was attacked by his own brothers, beaten and put into the stocks by a priest and false prophet, imprisoned by the king, threatened with death, thrown into a cistern by Judah's officials, and opposed by a false prophet. Likewise, Isaiah was told by his hearers who rejected his message, "Leave the way! Get off the path! Let us hear no more about the Holy One of Israel!" The life of Moses being threatened by Pharaoh is another example.
According to I Samuel 9:9, the old name for navi is "ro'eh", רֹאֶה, which literally means "seer". That could document an ancient shift from viewing prophets as seers for hire to viewing them as moral teachers. L.C. Allen (1971) comments that in the First Temple era there were essentially seer-priests, who formed a guild, divined, performed rituals and sacrifices, and were scribes, and then there were canonical prophets, who did none of these (and were against divination) and had instead a message to deliver. The seer-priests were usually attached to a local shrine or temple, such as Shiloh, and initiated others as priests in that priesthood: it was a mystical craft-guild with apprentices and recruitment. Canonical prophets were not organised this way. Some examples of prophets in the Tanakh include Abraham, Moses, Miriam, Isaiah, Samuel, Ezekiel, Malachi, and Job. In Jewish tradition Daniel is not counted in the list of prophets. A Jewish tradition suggests that there were twice as many prophets as the number which left Egypt (traditionally 600,000), which would make 1,200,000 prophets. The Talmud recognizes the existence of 48 male prophets who bequeathed permanent messages to mankind. According to the Talmud there were also seven women who are counted as prophetesses and whose message bears relevance for all generations: Sarah, Miriam, Devorah, Hannah (mother of the prophet Samuel), Abigail (a wife of King David), Huldah (from the time of Jeremiah), and Esther. The Talmudic and Biblical commentator Rashi points out that Rebecca, Rachel, and Leah were also prophets. Isaiah 8:3-4 relates that he went to "the prophetess", who conceived and bore him a son, whom God named Mahèr-salàl-cash-baz; her name is not specified elsewhere. Prophets in the Tanakh are not always Jews. The story of Balaam in Numbers 22 describes a non-Jewish prophet. According to the Talmud, Obadiah is said to have been a convert to Judaism. The last nevi'im ("spokespersons", "prophets") mentioned in the Jewish Bible are Haggai, Zechariah, and Malachi, all of whom lived at the end of the 70-year Babylonian exile. The Talmud (Sanhedrin 11a) states that Haggai, Zechariah, and Malachi were the last prophets, and that nowadays only the "Bath Kol" (בת קול, lit. "daughter of a voice", "voice of God") exists. In Christianity, a prophet (or seer) is one inspired by God through the Holy Spirit to deliver a message. Some Christian denominations limit a prophet's message to words intended only for the entire church congregation, excluding personal messages not intended for the body of believers; but in the Bible, on a number of occasions, prophets were called to deliver personal messages. The reception of a message is termed revelation and the delivery of the message is termed prophecy. The term "prophet" applies to those who receive public or private revelation. Public revelation, in Catholicism, is part of the Deposit of Faith, the revelation of which was completed by Jesus, whereas private revelation does not add to the Deposit. The term "deposit of faith" refers to the entirety of Jesus Christ's revelation, and is passed to successive generations in two different forms: sacred scripture (the Bible) and sacred tradition. The Bible terms anyone who claims to speak God's words or to teach in his name without being a prophet a false prophet. One Old Testament text in Deuteronomy contains a warning against those who prophesy events which do not come to pass, and says they should be put to death.
Elsewhere a false prophet may be someone who is purposely trying to deceive, is delusional, is under the influence of Satan, or is speaking from his own spirit. Some Christians believe that the Holy Spirit gives spiritual gifts to Christians. These may include prophecy, tongues, miraculous healing ability, and discernment (Matthew 12:32 KJV: "Whosoever speaketh a word against the Son of Man, it shall be forgiven him: but whosoever speaketh against the Holy Ghost, it shall not be forgiven him, neither in this world, neither in the world to come."). Cessationists believe that these gifts were given only in New Testament times and that they ceased after the last apostle died. New Testament passages that explicitly discuss prophets existing after the death and resurrection of Christ include Revelation 11:10, Matthew 10:40-41 and 23:34, John 13:20 and 15:20, and Acts 11:25-30, 13:1 and 15:32. The "Didache" gives extensive instruction in how to distinguish between true and false prophets, as well as commands regarding tithes to prophets in the church. Irenaeus wrote of 2nd-century believers with the gift of prophecy, while Justin Martyr argued in his "Dialogue with Trypho" that prophets were not found among the Jews in his time, but that the church had prophets. "The Shepherd of Hermas" describes revelation in a vision regarding the proper operation of prophecy in the church. Eusebius mentions that Quadratus and Ammia of Philadelphia were both prominent prophets following the age of the Twelve Apostles. Tertullian, writing of the church meetings of the Montanists (to whom he belonged), described in detail the practice of prophecy in the 2nd-century church. A number of later Christian saints were claimed to have powers of prophecy, such as Columba of Iona (521-597), Saint Malachy (1094-1148) and Padre Pio (1887-1968). Marian apparitions, like those at Fatima in 1917 or at Kibeho in Rwanda in the 1980s, often included prophetic predictions regarding the future of the world as well as of the local areas where they occurred. Prophetic movements in particular can be traced throughout the Christian Church's history, expressing themselves in (for example) Montanism, Novatianism, Donatism, Franciscanism, Anabaptism, Camisard enthusiasm, Puritanism, Quakerism, Quietism, Lutheranism and Pietism. Modern Pentecostals and Charismatics, members of movements which together comprised approximately 584 million people, believe in the contemporary function of the gift of prophecy, and some in these movements allow for the idea that God may continue to gift the church with some individuals who are prophets. Some Christian sects recognize the existence of modern-day prophets. One such denomination is The Church of Jesus Christ of Latter-day Saints, which teaches that God still communicates with mankind through prophecy. The Quran identifies a number of men as "Prophets of Islam" ("nabī"; pl. "anbiyāʾ"). Muslims believe such individuals were assigned a special mission by God to guide humanity. Besides Muhammad, this includes prophets such as Abraham ("Ibrāhīm"), Moses ("Mūsā") and Jesus ("ʿĪsā"). Although only twenty-five prophets are mentioned by name in the Quran, a hadith (no. 21257 in "Musnad Ahmad ibn Hanbal") mentions that there were (more or less) 124,000 prophets in total throughout history. Other traditions place the number of prophets at 224,000. Some scholars hold that there are an even greater number in the history of mankind, and only God knows.
The Quran says that God has sent a prophet to every group of people throughout time, and that Muhammad is the last of the prophets, sent for the whole of humankind. The message of all the prophets is believed to be the same. In Islam, all prophetic messengers are prophets (such as Adam, Noah, Abraham, Moses, Jesus, and Muhammad), though not all prophets are prophetic messengers. The primary distinction is that a prophet is required to demonstrate God's law through his actions, character, and behavior without necessarily calling people to follow him, while a prophetic messenger is required to pronounce God's law (i.e. revelation) and call his people to submit and follow him. Muhammad is distinguished from the rest of the prophetic messengers and prophets in that he was commissioned by God to be the prophetic messenger to all of mankind. Many of these prophets are also found in the texts of Judaism (the Torah, the Prophets, and the Writings) and Christianity. Muslims often refer to Muhammad simply as "the Prophet". Jesus is the result of a virgin birth in Islam as in Christianity, and is regarded as a prophet. Traditionally, four prophets are believed to have been sent holy books: the Torah ("Tawrat") to Moses, the Psalms ("Zābūr") to David, the Gospel to Jesus, and the Quran to Muhammad; those prophets are considered "messengers" or "rasūl". Other main prophets are considered "nabī" (prophets), even if they did not receive a Book from God. Examples include the messenger-prophet Aaron ("Hārūn"), the messenger-prophet Ishmael ("Ismāʿīl") and the messenger-prophet Joseph ("Yūsuf"). Although it offers many incidents from the lives of many prophets, the Quran focuses with special narrative and rhetorical emphasis on the careers of the first four of these five major prophets. Of all the figures before Muhammad, the significance of Jesus in Islam is reflected in his being mentioned in the Quran in 93 verses, with various titles attached such as "Son of Mary" and other relational terms, mentioned directly and indirectly over 187 times. He is thus the most mentioned person in the Quran by reference: 25 times by the name Isa, 48 times in the third person, 35 times in the first person, and the rest as titles and attributes. Moses ("Mūsā") and Abraham ("Ibrāhīm") are also referred to frequently in the Quran. As for the fifth, the Quran is frequently addressed directly to Muhammad, and it often discusses situations encountered by him. Direct use of his name in the text, however, is rare. Rarer still is the mention of Muhammad's contemporaries. In modern times the term "prophet" can be somewhat controversial. Many Christians with Pentecostal or charismatic beliefs believe in the continuation of the gift of prophecy and the continuation of the role of prophet as taught in Ephesians 4. The content of prophecies can vary widely. Prophecies are often spoken as quotes from God. They may contain quotes from scripture, statements about the past or current situation, or predictions of the future. Prophecies can also 'make manifest the secrets' of the hearts of other people, telling about the details of their lives. Sometimes, more than one person in a congregation will receive the same message in prophecy, with one giving it before another. Other movements also claim to have prophets. In France, Michel Potay says he received a revelation, called "The Revelation of Arès", dictated by Jesus in 1974 and then by God in 1977. He is considered a prophet by his followers, the Pilgrims of Arès.
The Bahá'í Faith refers to what are commonly called prophets as "Manifestations of God", who are directly linked with the concept of progressive revelation. Bahá'ís believe that God expresses his will at all times and in many ways, including through a series of divine messengers referred to as "Manifestations of God" or "divine educators". In expressing God's intent, these Manifestations are seen to establish religion in the world. Thus they are seen as intermediaries between God and humanity. The Manifestations of God are not seen as incarnations of God, but neither are they seen as ordinary mortals. Instead, the Bahá'í concept of the Manifestation of God emphasizes simultaneously the humanity of that intermediary and the divinity in the way they show forth the will, knowledge and attributes of God; thus they have both human and divine stations. In addition to the Manifestations of God, there are also minor prophets. While the Manifestations of God, or major prophets, are compared to the Sun (which produces its own heat and light), minor prophets are compared to the Moon (which receives its light from the Sun). Moses, for example, is taught as having been a Manifestation of God and his brother Aaron a minor prophet. Moses spoke on behalf of God, and Aaron spoke on behalf of Moses (Exodus 4:14-17). Other Jewish prophets are considered minor prophets, as they are considered to have come in the shadow of the dispensation of Moses to develop and consolidate the process he set in motion. A number of modern Catholic saints have been claimed to have powers of prophecy, such as Padre Pio and Alexandrina Maria da Costa. In addition, many modern Marian apparitions have included prophecies about the world and about local areas. The Fátima apparition in 1917 included a prophecy given by Mary to three children that on October 13, 1917 there would be a great miracle for all to see at Fátima, Portugal, and on that day tens of thousands of people, including newspaper journalists, headed to Fátima to see what would happen. Many witnesses, including journalists, claimed to see the sun "dance" in the sky in the afternoon of that day, exactly as the visionaries had predicted several months before. The Kibeho apparition in Rwanda in the 1980s included many prophecies about great violence and destruction to come, and the Rwandan genocide only ten years later was interpreted by the visionaries as the fulfilment of these prophecies. Several miracles and a vision of the identity of the last 112 popes were attributed to Saint Malachy, the Archbishop of Armagh (1095-1148). Jehovah's Witnesses do not consider any single person in their modern-day organization to be a prophet. Their literature has referred to their organization collectively as God's "prophet" on earth; this is understood, however, in the sense of declaring their interpretation of God's judgments from the Bible along with God's guidance of his Holy Spirit. Their publishing company, Watch Tower, and official position magazine, "The Watchtower", have asserted: "Ever since "The Watchtower" began to be published in July 1879 it has looked ahead into the future... No, "The Watchtower" is no inspired prophet, but it follows and explains a Book of prophecy the predictions in which have proved to be unerring and unfailing till now. "The Watchtower" is therefore under safe guidance. It may be read with confidence, for its statements may be checked against that prophetic Book."
They also claim that they are God's one and only true channel to mankind on earth, and are used by God for this purpose. They have made many eschatological forecasts, some of which have led people (including followers) to incorrect assumptions. One example is "The Watchtower's" assertion that the end of the "Gentile times" or "times of the nations" would occur in 1914; even prominent Watch Tower representatives such as A. H. Macmillan drew incorrect conclusions and overstated their expectations. As a result, "The Watchtower" has acknowledged that Jehovah's Witnesses "have made mistakes in their understanding of what would occur at the end of certain time periods." Alongside these admissions, Jehovah's Witnesses in their literature and assemblies have taught that their leadership was personally chosen by Jesus Christ in 1919 (a prophetic year in Jehovah's Witnesses eschatology) and that they are "God's sole channel on earth" and "Jehovah's spirit directed organization". Joseph Smith, who established the Church of Christ in 1830, is considered a prophet by members of the Latter Day Saint movement, of which The Church of Jesus Christ of Latter-day Saints (LDS Church) is the largest denomination. Additionally, many churches within the movement believe in a succession of modern prophets (accepted by Latter Day Saints as "prophets, seers, and revelators") since the time of Joseph Smith. Russell M. Nelson is the current Prophet and President of The Church of Jesus Christ of Latter-day Saints. Baptist preacher William Miller is credited with beginning the mid-19th-century North American religious movement now known as Adventism. He announced a Second Coming, resulting in the Great Disappointment. The Seventh-day Adventist Church, which was established in 1863, believes that Ellen G. White, one of the church's founders, was given the spiritual gift of prophecy. The Branch Davidians are a religious cult founded in 1959 by Benjamin Roden as an offshoot of the Seventh-day Adventist Church. David Koresh, who died in the well-known Waco siege in 1993, claimed in 1983 to be their final prophet and "the Son of God, the Lamb". The Ahmadiyya movement in Islam believes that Mirza Ghulam Ahmad was a non-law-bearing prophet, who claimed to be a fulfillment of the various Islamic prophecies regarding the spiritual second advent of Jesus of Nazareth near the end times. Nathan of Gaza was a theologian and author who became famous as a prophet for the alleged messiah, Sabbatai Zevi. Divination remains an important aspect of the lives of the people of contemporary Africa, especially amongst the usually rural, socially traditionalistic segments of its population. In arguably its most influential manifestation, the system of prophecy practiced by the Babalawos and Iyanifas of the historically Yoruba regions of West Africa has bequeathed to the world a corpus of fortune-telling poetic methodologies so intricate that it has been added by UNESCO to its official list of the intangible cultural heritage of the world. Tenrikyo's prophet, Nakayama Miki, is believed by Tenrikyo adherents to have been a messenger of God. The Great Peacemaker (sometimes referred to as "Deganawida" or "Dekanawida") co-founded the Haudenosaunee league in pre-Columbian times. In retrospect, his prophecy of the boy seer could appear to refer to the conflict between natives and Europeans (the white serpent).
From 1805 until the Battle of Tippecanoe discredited his predictions in 1811, the "Shawnee Prophet" Tenskwatawa led an Indian alliance that aimed to stop Europeans from taking more and more land as they moved west. He reported visions he had, and is said to have accurately predicted a solar eclipse. His brother Tecumseh re-established the alliance for Tecumseh's War, which ended with the latter's death in 1813. Tecumseh fought together with British forces which, in the area of the Great Lakes, occupied essentially today's territory of Canada. Francis the Prophet, influenced by Tecumseh and Tenskwatawa, was a leader of the Red Stick faction of the Creek Indians. He traveled to England in 1815 as a representative of the "four Indian nations" in an unsuccessful attempt to get Great Britain to help them resist the expansionism of the white settlers. Twenty years later, in 1832, Wabokieshiek ("White Cloud"), the "Winnebago Prophet", after whom Prophetstown was named, claimed, based on "visions", that British forces would support the Indians in the Black Hawk War against the United States as they had 20 years earlier. They did not, and he was no longer considered a "prophet". In 1869, the Paiute Wodziwob founded the Ghost Dance movement. The dance rituals were an occasion to announce his visions of an earthquake that would swallow the whites. He seems to have died in 1872. The Northern Paiute Wovoka claimed he had a vision during the solar eclipse of January 1, 1889, that the Paiute dead would come back and the whites would vanish from America, provided the natives performed Ghost Dances. This idea spread among other Native American peoples. The government was worried about a rebellion and sent troops, which led to the death of Sitting Bull and to the Wounded Knee Massacre in 1890. Clifford Trafzer compiled an anthology of essays on the topic, "American Indian Prophets". In the late 20th century the appellation of "prophet" has been used to refer to individuals particularly successful at analysis in the field of economics, as in the derogatory "prophet of greed". Alternatively, social commentators who suggest escalating crisis are often called "prophets of doom."
https://en.wikipedia.org/wiki?curid=24805
Pleading In law as practiced in countries that follow the English models, a pleading is a formal written statement of a party's claims or defenses to another party's claims in a civil action. The parties' pleadings in a case define the issues to be adjudicated in the action. The Civil Procedure Rules (CPR) govern pleading in England and Wales. The Federal Rules of Civil Procedure govern pleading in United States federal courts. Each state in the United States has its own statutes and rules that govern pleading in the courts of that state. In the United States, a "complaint" is the first pleading filed by a plaintiff, which initiates a lawsuit. A complaint sets forth the relevant allegations of fact that give rise to one or more legal causes of action, along with a prayer for relief and sometimes a statement of damages claimed (an ad quod damnum clause). In some situations, a complaint is called a "petition", in which case the party filing it is called the petitioner and the other party is the respondent. In equity, sometimes called chancery, the initial pleading may be called either a "petition" or a "bill of complaint in chancery". In England and Wales, the first pleading is a Claim Form, issued under either Part 7 or Part 8 of the Civil Procedure Rules, which sets out the nature of the action and the relief sought, and may give brief particulars of the claim. The Claimant also has the option, under Practice Direction 7A.61, to serve Particulars of Claim (a document setting out the allegations which found the cause of action) within 14 days of issue of the Claim Form. When used in civil proceedings in England and Wales, the term "complaint" refers to the mechanism by which civil proceedings are instituted in the magistrates' court, and may be either written or oral. A "demurrer" is a pleading (usually filed by a defendant) which objects to the legal sufficiency of the opponent's pleading (usually a complaint) and demands that the court rule immediately on whether the pleading is legally adequate before the party must plead on the merits in response. Since demurrer procedure required an immediate ruling, like a motion, many common law jurisdictions moved to a narrower understanding of pleadings as framing the issues in a case but not being motions in and of themselves, and replaced the demurrer with the motion to dismiss for failure to state a cause of action or the application to strike out particulars of claim. An "answer" is a pleading filed by a defendant which admits or denies the specific allegations set forth in a complaint, and constitutes a general appearance by a defendant. In England and Wales, the equivalent pleading is called a Defence. A defendant may also file a cross-complaint against another defendant named by the plaintiff, and may file a "third-party complaint" to bring other parties into a case by the process of impleader. A defendant may file a "counter-claim" to raise a cause of action to defend against, reduce or set off the claim of the plaintiff. Common law pleading was the system of civil procedure used in England, which early on developed a strong emphasis on the form of action rather than the cause of action (as a result of the Provisions of Oxford, which severely limited the evolution of the common law writ system). The emphasis was on procedure over substance. Law and equity evolved as separate judicial systems, each with its own procedures and remedies.
Because the types of claims eligible for consideration were capped early in the development of the English legal system, claims that might have been acceptable to the courts' evolving sense of justice often did not match up perfectly with any of the established forms of action. Lawyers had to engage in great ingenuity to shoehorn their clients' claims into existing forms of action. The result was that at common law, pleadings were stuffed full of awkward legal fictions that had little to do with the actual "real-world" facts of the case. The placeholder name John Doe (still commonly used in American pleading to name unknown parties) is a remnant of this period. In its final form in the 19th century, common law pleading was terribly complex and slow by modern standards. The parties would normally go through several rounds of pleadings before they were deemed to have clearly stated their controversy, so that the case was "at issue" and could proceed to trial. A case would begin with a complaint, in which the plaintiff alleged the facts entitling him to relief; then the defendant would file any one of a variety of pleas as an answer, followed by a replication from the plaintiff, a rejoinder from the defendant, a surrejoinder from the plaintiff, a rebutter from the defendant, and a surrebutter from the plaintiff. At each stage, a party could file a demurrer to the other's pleading (essentially a request that the court immediately rule on whether the pleading was legally adequate before they had to file a pleading in response) or simply file another pleading in response. Generally, a plea could be dilatory or peremptory. There were three kinds of dilatory plea: to the jurisdiction, in suspension, or in abatement. The first challenged the court's jurisdiction, the second asked the court to stay the action, and the third asked the court to dismiss the action without prejudice to the other side's right to bring the claims in another action or another court. There was only one kind of peremptory plea: the plea in bar. A party making a plea in bar could either traverse the other side's pleading (i.e., deny all or some of the facts pleaded) or confess and avoid it (i.e., admit the facts pleaded but plead new ones that would dispel their effect). A traverse could be general (deny everything) or specific. Either side could plead imparlance in order to get more time to plead on the merits. Once the case was at issue, the defendant could reopen the pleadings in order to plead a newly discovered defense (and start the whole sequence again) by filing a plea puis darrein continuance. The result of all this complexity was that to ascertain what was "at issue" in a case, a stranger to the case (such as a newly appointed judge) would have to sift through a huge pile of pleadings to figure out what had happened to the original averments of the complaint and whether there was anything left to be actually adjudicated by the court. Code pleading was first introduced in 1850 in New York and in 1872 in California, and eventually spread to 22 other states. Code pleading sought to abolish the distinction between law and equity. It unified civil procedure for all types of actions as much as possible. The focus shifted from pleading the right form of action (that is, the right procedure) to pleading the right cause of action (that is, a substantive right to be enforced by the law). Under code pleading, the required elements of each cause of action are supposed to be set out in carefully codified statutes.
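The element-plus-facts requirement just described lends itself to a mechanical check: a complaint survives a demurrer only if every element of the pleaded cause of action is backed by at least one specific fact. Below is a toy Python sketch of that idea, not a statement of any actual code of procedure; the negligence elements (duty, breach, causation, damages) are a textbook example, and the names and data are invented.

# Toy illustration of fact pleading: every element of the cause of action
# must be supported by at least one pleaded "ultimate fact"; a bare legal
# conclusion ("the defendant was negligent") does not count.
NEGLIGENCE_ELEMENTS = ["duty", "breach", "causation", "damages"]  # textbook example

def unsupported_elements(complaint: dict) -> list:
    """Return the elements with no pleaded fact behind them."""
    return [e for e in NEGLIGENCE_ELEMENTS if not complaint.get(e)]

complaint = {
    "duty": ["Defendant operated a vehicle on a public road."],
    "breach": ["Defendant drove through a red light."],
    "causation": ["Defendant's vehicle struck Plaintiff in the intersection."],
    "damages": [],  # no facts pleaded here, so the complaint is vulnerable
}
print("Demurrer likely sustained as to:", unsupported_elements(complaint))

In this sketch the "damages" element is flagged, which is roughly what a sustained demurrer on a code-pleading complaint would accomplish.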
Code pleading stripped out most of the legal fictions that had encrusted common law pleading by requiring parties to plead "ultimate facts". This means that to plead a cause of action, the pleader must plead each element and also allege specific facts which, if proven with evidence at trial, would constitute proof of that element. Failure to provide such detail could lead to dismissal of the case if the defendant successfully demurred to the complaint on the basis that it merely stated "legal conclusions" or "evidentiary facts". Code pleading also drastically shortened the pleading process. Most of the old common law pleadings were abolished. Thereafter, a case required only a complaint and an answer, with an optional cross-complaint and cross-answer, and with the demurrer kept as the standard attack on improper pleadings. Instead of piling layers and layers of pleadings and averments on top of each other, a pleading that was attacked by demurrer would either be completely superseded by an amended pleading or would proceed immediately "at issue" as to the validly pleaded parts. This meant that to determine what the parties were currently fighting about, a stranger to a case would no longer have to read the entire case file from scratch, but could (in theory) look only at the most recent version of the complaint filed by the plaintiff, the defendant's most recent answer to that complaint, and any court orders on demurrers to either pleading. Code pleading was criticized because many lawyers felt that it was too difficult to fully research all the facts needed to bring a complaint before one had even initiated the action, and thus meritorious plaintiffs could not bring their complaints in time, before the statute of limitations expired. Code pleading has also been criticized as promoting "hypertechnical reading of legal papers". Notice pleading is the dominant form of pleading used in the United States today. In 1938, the Federal Rules of Civil Procedure were adopted to govern civil procedure in United States federal courts. One goal of the Federal Rules of Civil Procedure was to relax the strict rules of code pleading. However, each state also has its own rules of civil procedure, which may require different, looser, or stricter rules in state court. Louisiana, a state that derives its legal tradition from the Spanish and French (as opposed to English common law), employs a system of fact pleading wherein it is only necessary to plead the facts that give rise to a cause of action; it is not necessary even for the petitioner to identify the cause of action being pleaded. Mere conclusory allegations such as "the defendant was negligent" are not, by themselves, sufficient to sustain a cause of action. Other states, including Connecticut and New Jersey, are also fact-pleading jurisdictions. Illinois, for example, requires that a complaint "must assert a legally recognized cause of action and it must plead facts which bring the particular case within that cause of action." In alternative pleading, legal fiction is employed to permit a party to argue two mutually exclusive possibilities, for example, submitting an injury complaint alleging that the harm to the plaintiff caused by the defendant was so outrageous that it must either have been intended as a malicious attack or, if not, must have been due to gross negligence. The use of "pleaded" versus "pled" as the past tense of "to plead" has been a subject of controversy among many of those who practice law.
"Pled" is almost never used in Australian publications, while being somewhat common in American, British, and Canadian publications. In a 2010 search of the Westlaw legal database, "pled" is used in a narrow majority of cases over "pleaded". The AP stylebook and "The Chicago Manual of Style" call for "pleaded", and a Westlaw search shows the US Supreme Court has used pleaded in over 3,000 opinions and pled in only 26.
https://en.wikipedia.org/wiki?curid=24807
Personal Communications Service At the most basic level, Personal Communications Service (PCS) describes a set of communications capabilities which allows some combination of terminal mobility, personal mobility, and service profile management. More specifically, PCS refers to any of several types of wireless voice or wireless data communications systems, typically incorporating digital technology, that provide services similar to advanced cellular mobile or paging services. In addition, PCS can be used to provide other wireless communications services, including services which allow people to place and receive communications while away from their home or office, as well as wireless communications to homes, office buildings and other fixed locations. Described in more commercial terms, PCS is a generation of wireless-phone technology that combines a range of features and services surpassing those available in analog and digital cellular phone systems, providing a user with an all-in-one wireless phone, paging, messaging, and data service. The International Telecommunication Union describes Personal Communications Services as a component of the IMT-2000 (3G) standard. Neither PCS nor the IMT-2000 standard of which it is a part specifies a particular air interface or channel access method. Wireless service providers may deploy equipment using any of several air interfaces and channel access methods, as long as the network meets the service description characteristics described in the standard. In Canada, Mexico and the United States, PCS are provided in the "1900 MHz band" (specifically 1850-1990 MHz). This frequency band was designated by the United States FCC and Industry Canada for new wireless services to alleviate the capacity caps inherent in the original AMPS and D-AMPS cellular networks in the "850 MHz band" (specifically 800-894 MHz). These frequency bands are particular to North America; other frequency bands may be designated in other regions. In the United States, Sprint PCS was the first company to build and operate a PCS network, launching service in November 1995 under the "Sprint Spectrum" brand in the Baltimore-Washington metropolitan area. Sprint originally built out the network using GSM radio interface equipment. Sprint PCS later selected CDMA as the radio interface for its nationwide network and built out a parallel CDMA network in the Baltimore-Washington area, launching service in 1997. Sprint operated the two networks in parallel until it finished migrating its customers in the area to the CDMA network. After completing the customer migration, Sprint PCS sold the GSM radio interface network equipment to Omnipoint Communications in January 2000. Omnipoint was later purchased by VoiceStream Wireless, which subsequently became T-Mobile USA.
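Using only the band edges quoted above (800-894 MHz for the original North American cellular band, 1850-1990 MHz for PCS), a carrier frequency can be sorted between the two bands with a simple range check. The following is a minimal Python sketch; the function name and example frequency are invented for illustration, and the edges are taken from this article rather than from an official FCC band plan.

# Classify a North American carrier frequency using the band edges quoted above.
CELLULAR_850 = (800.0, 894.0)   # MHz, original AMPS/D-AMPS band (as given here)
PCS_1900 = (1850.0, 1990.0)     # MHz, PCS band (as given here)

def classify_band(freq_mhz: float) -> str:
    """Return which of the two bands discussed above contains freq_mhz."""
    lo, hi = CELLULAR_850
    if lo <= freq_mhz <= hi:
        return "850 MHz cellular band"
    lo, hi = PCS_1900
    if lo <= freq_mhz <= hi:
        return "1900 MHz PCS band"
    return "outside the bands discussed here"

print(classify_band(1875.0))  # -> "1900 MHz PCS band"

The wide gap between the two ranges is why PCS handsets needed new radio hardware rather than a retuned 850 MHz design, which in turn is why carriers such as Sprint PCS built entirely new networks in the 1990s.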
https://en.wikipedia.org/wiki?curid=24808