text
stringlengths
10
951k
source
stringlengths
39
44
Power law In statistics, a power law is a functional relationship between two quantities, where a relative change in one quantity results in a proportional relative change in the other quantity, independent of the initial size of those quantities: one quantity varies as a power of another. For instance, considering the area of a square in terms of the length of its side, if the length is doubled, the area is multiplied by a factor of four. The distributions of a wide variety of physical, biological, and man-made phenomena approximately follow a power law over a wide range of magnitudes: these include the sizes of craters on the moon and of solar flares, the foraging pattern of various species, the sizes of activity patterns of neuronal populations, the frequencies of words in most languages, frequencies of family names, the species richness in clades of organisms, the sizes of power outages, criminal charges per convict, volcanic eruptions, human judgements of stimulus intensity and many other quantities. Few empirical distributions fit a power law for all their values, but rather follow a power law in the tail. Acoustic attenuation follows frequency power-laws within wide frequency bands for many complex media. Allometric scaling laws for relationships between biological variables are among the best known power-law functions in nature. One attribute of power laws is their scale invariance. Given a relation formula_1, scaling the argument formula_2 by a constant factor formula_3 causes only a proportionate scaling of the function itself. That is, where formula_5 denotes direct proportionality. That is, scaling by a constant formula_3 simply multiplies the original power-law relation by the constant formula_7. Thus, it follows that all power laws with a particular scaling exponent are equivalent up to constant factors, since each is simply a scaled version of the others. This behavior is what produces the linear relationship when logarithms are taken of both formula_8 and formula_2, and the straight-line on the log–log plot is often called the "signature" of a power law. With real data, such straightness is a necessary, but not sufficient, condition for the data following a power-law relation. In fact, there are many ways to generate finite amounts of data that mimic this signature behavior, but, in their asymptotic limit, are not true power laws (e.g., if the generating process of some data follows a Log-normal distribution). Thus, accurately fitting and validating power-law models is an active area of research in statistics; see below. A power-law formula_10 has a well-defined mean over formula_11 only if formula_12, and it has a finite variance only if formula_13; most identified power laws in nature have exponents such that the mean is well-defined but the variance is not, implying they are capable of black swan behavior. This can be seen in the following thought experiment: imagine a room with your friends and estimate the average monthly income in the room. Now imagine the world's richest person entering the room, with a monthly income of about 1 billion US$. What happens to the average income in the room? Income is distributed according to a power-law known as the Pareto distribution (for example, the net worth of Americans is distributed according to a power law with an exponent of 2). On the one hand, this makes it incorrect to apply traditional statistics that are based on variance and standard deviation (such as regression analysis). On the other hand, this also allows for cost-efficient interventions. For example, given that car exhaust is distributed according to a power-law among cars (very few cars contribute to most contamination) it would be sufficient to eliminate those very few cars from the road to reduce total exhaust substantially. The median does exist, however: for a power law "x" –"k", with exponent , it takes the value 21/("k" – 1)"x"min, where "x"min is the minimum value for which the power law holds. The equivalence of power laws with a particular scaling exponent can have a deeper origin in the dynamical processes that generate the power-law relation. In physics, for example, phase transitions in thermodynamic systems are associated with the emergence of power-law distributions of certain quantities, whose exponents are referred to as the critical exponents of the system. Diverse systems with the same critical exponents—that is, which display identical scaling behaviour as they approach criticality—can be shown, via renormalization group theory, to share the same fundamental dynamics. For instance, the behavior of water and CO2 at their boiling points fall in the same universality class because they have identical critical exponents. In fact, almost all material phase transitions are described by a small set of universality classes. Similar observations have been made, though not as comprehensively, for various self-organized critical systems, where the critical point of the system is an attractor. Formally, this sharing of dynamics is referred to as universality, and systems with precisely the same critical exponents are said to belong to the same universality class. Scientific interest in power-law relations stems partly from the ease with which certain general classes of mechanisms generate them. The demonstration of a power-law relation in some data can point to specific kinds of mechanisms that might underlie the natural phenomenon in question, and can indicate a deep connection with other, seemingly unrelated systems; see also universality above. The ubiquity of power-law relations in physics is partly due to dimensional constraints, while in complex systems, power laws are often thought to be signatures of hierarchy or of specific stochastic processes. A few notable examples of power laws are Pareto's law of income distribution, structural self-similarity of fractals, and scaling laws in biological systems. Research on the origins of power-law relations, and efforts to observe and validate them in the real world, is an active topic of research in many fields of science, including physics, computer science, linguistics, geophysics, neuroscience, sociology, economics and more. However, much of the recent interest in power laws comes from the study of probability distributions: The distributions of a wide variety of quantities seem to follow the power-law form, at least in their upper tail (large events). The behavior of these large events connects these quantities to the study of theory of large deviations (also called extreme value theory), which considers the frequency of extremely rare events like stock market crashes and large natural disasters. It is primarily in the study of statistical distributions that the name "power law" is used. In empirical contexts, an approximation to a power-law formula_14 often includes a deviation term formula_15, which can represent uncertainty in the observed values (perhaps measurement or sampling errors) or provide a simple way for observations to deviate from the power-law function (perhaps for stochastic reasons): Mathematically, a strict power law cannot be a probability distribution, but a distribution that is a truncated power function is possible: formula_17 for formula_18 where the exponent formula_19 (Greek letter alpha, not to be confused with scaling factor formula_20 used above) is greater than 1 (otherwise the tail has infinite area), the minimum value formula_21 is needed otherwise the distribution has infinite area as "x" approaches 0, and the constant "C" is a scaling factor to ensure that the total area is 1, as required by a probability distribution. More often one uses an asymptotic power law – one that is only true in the limit; see power-law probability distributions below for details. Typically the exponent falls in the range formula_22, though not always. More than a hundred power-law distributions have been identified in physics (e.g. sandpile avalanches), biology (e.g. species extinction and body mass), and the social sciences (e.g. city sizes and income). Among them are: A broken power law is a piecewise function, consisting of two or more power laws, combined with a threshold. For example, with two power laws: A power law with an exponential cutoff is simply a power law multiplied by an exponential function: In a looser sense, a power-law probability distribution is a distribution whose density function (or mass function in the discrete case) has the form, for large values of formula_2, where formula_30, and formula_31 is a slowly varying function, which is any function that satisfies formula_32 for any positive factor formula_33. This property of formula_31 follows directly from the requirement that formula_35 be asymptotically scale invariant; thus, the form of formula_31 only controls the shape and finite extent of the lower tail. For instance, if formula_31 is the constant function, then we have a power law that holds for all values of formula_2. In many cases, it is convenient to assume a lower bound formula_39 from which the law holds. Combining these two cases, and where formula_2 is a continuous variable, the power law has the form where the pre-factor to formula_42 is the normalizing constant. We can now consider several properties of this distribution. For instance, its moments are given by which is only well defined for formula_44. That is, all moments formula_45 diverge: when formula_46, the average and all higher-order moments are infinite; when formula_47, the mean exists, but the variance and higher-order moments are infinite, etc. For finite-size samples drawn from such distribution, this behavior implies that the central moment estimators (like the mean and the variance) for diverging moments will never converge – as more data is accumulated, they continue to grow. These power-law probability distributions are also called Pareto-type distributions, distributions with Pareto tails, or distributions with regularly varying tails. A modification, which does not satisfy the general form above, with an exponential cutoff, is In this distribution, the exponential decay term formula_49 eventually overwhelms the power-law behavior at very large values of formula_2. This distribution does not scale and is thus not asymptotically as a power law; however, it does approximately scale over a finite region before the cutoff. (Note that the pure form above is a subset of this family, with formula_51.) This distribution is a common alternative to the asymptotic power-law distribution because it naturally captures finite-size effects. The Tweedie distributions are a family of statistical models characterized by closure under additive and reproductive convolution as well as under scale transformation. Consequently, these models all express a power-law relationship between the variance and the mean. These models have a fundamental role as foci of mathematical convergence similar to the role that the normal distribution has as a focus in the central limit theorem. This convergence effect explains why the variance-to-mean power law manifests so widely in natural processes, as with Taylor's law in ecology and with fluctuation scaling in physics. It can also be shown that this variance-to-mean power law, when demonstrated by the method of expanding bins, implies the presence of 1/"f" noise and that 1/"f" noise can arise as a consequence of this Tweedie convergence effect. Although more sophisticated and robust methods have been proposed, the most frequently used graphical methods of identifying power-law probability distributions using random samples are Pareto quantile-quantile plots (or Pareto Q–Q plots), mean residual life plots and log–log plots. Another, more robust graphical method uses bundles of residual quantile functions. (Please keep in mind that power-law distributions are also called Pareto-type distributions.) It is assumed here that a random sample is obtained from a probability distribution, and that we want to know if the tail of the distribution follows a power law (in other words, we want to know if the distribution has a "Pareto tail"). Here, the random sample is called "the data". Pareto Q–Q plots compare the quantiles of the log-transformed data to the corresponding quantiles of an exponential distribution with mean 1 (or to the quantiles of a standard Pareto distribution) by plotting the former versus the latter. If the resultant scatterplot suggests that the plotted points " asymptotically converge" to a straight line, then a power-law distribution should be suspected. A limitation of Pareto Q–Q plots is that they behave poorly when the tail index formula_19 (also called Pareto index) is close to 0, because Pareto Q–Q plots are not designed to identify distributions with slowly varying tails. On the other hand, in its version for identifying power-law probability distributions, the mean residual life plot consists of first log-transforming the data, and then plotting the average of those log-transformed data that are higher than the "i"-th order statistic versus the "i"-th order statistic, for "i" = 1, ..., "n", where n is the size of the random sample. If the resultant scatterplot suggests that the plotted points tend to "stabilize" about a horizontal straight line, then a power-law distribution should be suspected. Since the mean residual life plot is very sensitive to outliers (it is not robust), it usually produces plots that are difficult to interpret; for this reason, such plots are usually called Hill horror plots Log–log plots are an alternative way of graphically examining the tail of a distribution using a random sample. Caution has to be exercised however as a log–log plot is necessary but insufficient evidence for a power law relationship, as many non power-law distributions will appear as straight lines on a log–log plot. This method consists of plotting the logarithm of an estimator of the probability that a particular number of the distribution occurs versus the logarithm of that particular number. Usually, this estimator is the proportion of times that the number occurs in the data set. If the points in the plot tend to "converge" to a straight line for large numbers in the x axis, then the researcher concludes that the distribution has a power-law tail. Examples of the application of these types of plot have been published. A disadvantage of these plots is that, in order for them to provide reliable results, they require huge amounts of data. In addition, they are appropriate only for discrete (or grouped) data. Another graphical method for the identification of power-law probability distributions using random samples has been proposed. This methodology consists of plotting a "bundle for the log-transformed sample". Originally proposed as a tool to explore the existence of moments and the moment generation function using random samples, the bundle methodology is based on residual quantile functions (RQFs), also called residual percentile functions, which provide a full characterization of the tail behavior of many well-known probability distributions, including power-law distributions, distributions with other types of heavy tails, and even non-heavy-tailed distributions. Bundle plots do not have the disadvantages of Pareto Q–Q plots, mean residual life plots and log–log plots mentioned above (they are robust to outliers, allow visually identifying power laws with small values of formula_19, and do not demand the collection of much data). In addition, other types of tail behavior can be identified using bundle plots. In general, power-law distributions are plotted on doubly logarithmic axes, which emphasizes the upper tail region. The most convenient way to do this is via the (complementary) cumulative distribution (cdf), formula_54, Note that the cdf is also a power-law function, but with a smaller scaling exponent. For data, an equivalent form of the cdf is the rank-frequency approach, in which we first sort the formula_56 observed values in ascending order, and plot them against the vector formula_57. Although it can be convenient to log-bin the data, or otherwise smooth the probability density (mass) function directly, these methods introduce an implicit bias in the representation of the data, and thus should be avoided. The cdf, on the other hand, is more robust to (but not without) such biases in the data and preserves the linear signature on doubly logarithmic axes. Though a cdf representation is favored over that of the pdf while fitting a power law to the data with the linear least square method, it is not devoid of mathematical inaccuracy. Thus, while estimating exponents of a power law distribution, maximum likelihood estimator is recommended. There are many ways of estimating the value of the scaling exponent for a power-law tail, however not all of them yield unbiased and consistent answers. Some of the most reliable techniques are often based on the method of maximum likelihood. Alternative methods are often based on making a linear regression on either the log–log probability, the log–log cumulative distribution function, or on log-binned data, but these approaches should be avoided as they can all lead to highly biased estimates of the scaling exponent. For real-valued, independent and identically distributed data, we fit a power-law distribution of the form to the data formula_59, where the coefficient formula_42 is included to ensure that the distribution is normalized. Given a choice for formula_61, the log likelihood function becomes: The maximum of this likelihood is found by differentiating with respect to parameter formula_19, setting the result equal to zero. Upon rearrangement, this yields the estimator equation: where formula_65 are the formula_56 data points formula_67. This estimator exhibits a small finite sample-size bias of order formula_68, which is small when "n" > 100. Further, the standard error of the estimate is formula_69. This estimator is equivalent to the popular Hill estimator from quantitative finance and extreme value theory. For a set of "n" integer-valued data points formula_65, again where each formula_71, the maximum likelihood exponent is the solution to the transcendental equation where formula_73 is the incomplete zeta function. The uncertainty in this estimate follows the same formula as for the continuous equation. However, the two equations for formula_74 are not equivalent, and the continuous version should not be applied to discrete data, nor vice versa. Further, both of these estimators require the choice of formula_61. For functions with a non-trivial formula_31 function, choosing formula_61 too small produces a significant bias in formula_78, while choosing it too large increases the uncertainty in formula_74, and reduces the statistical power of our model. In general, the best choice of formula_61 depends strongly on the particular form of the lower tail, represented by formula_31 above. More about these methods, and the conditions under which they can be used, can be found in . Further, this comprehensive review article provides usable code (Matlab, Python, R and C++) for estimation and testing routines for power-law distributions. Another method for the estimation of the power-law exponent, which does not assume independent and identically distributed (iid) data, uses the minimization of the Kolmogorov–Smirnov statistic, formula_82, between the cumulative distribution functions of the data and the power law: with where formula_85 and formula_86 denote the cdfs of the data and the power law with exponent formula_19, respectively. As this method does not assume iid data, it provides an alternative way to determine the power-law exponent for data sets in which the temporal correlation can not be ignored. This criterion can be applied for the estimation of power-law exponent in the case of scale free distributions and provides a more convergent estimate than the maximum likelihood method. It has been applied to study probability distributions of fracture apertures. In some contexts the probability distribution is described, not by the cumulative distribution function, by the cumulative frequency of a property "X", defined as the number of elements per meter (or area unit, second etc.) for which "X" > "x" applies, where "x" is a variable real number. As an example, the cumulative distribution of the fracture aperture, "X", for a sample of "N" elements is defined as 'the number of fractures per meter having aperture greater than "x" . Use of cumulative frequency has some advantages, e.g. it allows one to put on the same diagram data gathered from sample lines of different lengths at different scales (e.g. from outcrop and from microscope). Although power-law relations are attractive for many theoretical reasons, demonstrating that data does indeed follow a power-law relation requires more than simply fitting a particular model to the data. This is important for understanding the mechanism that gives rise to the distribution: superficially similar distributions may arise for significantly different reasons, and different models yield different predictions, such as extrapolation. For example, log-normal distributions are often mistaken for power-law distributions: a data set drawn from a lognormal distribution will be approximately linear for large values (corresponding to the upper tail of the lognormal being close to a power law), but for small values the lognormal will drop off significantly (bowing down), corresponding to the lower tail of the lognormal being small (there are very few small values, rather than many small values in a power law). For example, Gibrat's law about proportional growth processes produce distributions that are lognormal, although their log–log plots look linear over a limited range. An explanation of this is that although the logarithm of the lognormal density function is quadratic in , yielding a "bowed" shape in a log–log plot, if the quadratic term is small relative to the linear term then the result can appear almost linear, and the lognormal behavior is only visible when the quadratic term dominates, which may require significantly more data. Therefore, a log–log plot that is slightly "bowed" downwards can reflect a log-normal distribution – not a power law. In general, many alternative functional forms can appear to follow a power-law form for some extent. Stumpf proposed plotting the empirical cumulative distribution function in the log-log domain and claimed that a candidate power-law should cover at least two orders of magnitude. Also, researchers usually have to face the problem of deciding whether or not a real-world probability distribution follows a power law. As a solution to this problem, Diaz proposed a graphical methodology based on random samples that allow visually discerning between different types of tail behavior. This methodology uses bundles of residual quantile functions, also called percentile residual life functions, which characterize many different types of distribution tails, including both heavy and non-heavy tails. However, Stumpf claimed the need for both a statistical and a theoretical background in order to support a power-law in the underlying mechanism driving the data generating process. One method to validate a power-law relation tests many orthogonal predictions of a particular generative mechanism against data. Simply fitting a power-law relation to a particular kind of data is not considered a rational approach. As such, the validation of power-law claims remains a very active field of research in many areas of modern science. Notes Bibliography
https://en.wikipedia.org/wiki?curid=24522
PH In chemistry, pH () (abbr. power of hydrogen or potential for hydrogen) is a scale used to specify how acidic or basic a water-based solution is. Acidic solutions have a lower pH, while basic solutions have a higher pH. At room temperature (25°C or 77°F), pure water is neither acidic nor basic and has a pH of 7. The pH scale is logarithmic and inversely indicates the concentration of hydrogen ions in the solution (a lower pH indicates a higher concentration of hydrogen ions). This is because the formula used to calculate pH approximates the negative of the base 10 logarithm of the molar concentration of hydrogen ions in the solution. More precisely, pH is the negative of the base 10 logarithm of the activity of the hydrogen ion. At 25 °C, solutions with a pH less than 7 are acidic, and solutions with a pH greater than 7 are basic. The neutral value of the pH depends on the temperature, being lower than 7 if the temperature increases. The pH value can be less than 0 for very strong acids, or greater than 14 for very strong bases. The pH scale is traceable to a set of standard solutions whose pH is established by international agreement. Primary pH standard values are determined using a concentration cell with transference, by measuring the potential difference between a hydrogen electrode and a standard electrode such as the silver chloride electrode. The pH of aqueous solutions can be measured with a glass electrode and a pH meter, or a color-changing indicator. Measurements of pH are important in chemistry, agronomy, medicine, water treatment, and many other applications. The concept of pH was first introduced by the Danish chemist Søren Peder Lauritz Sørensen at the Carlsberg Laboratory in 1909 and revised to the modern pH in 1924 to accommodate definitions and measurements in terms of electrochemical cells. In the first papers, the notation had the "H" as a subscript to the lowercase "p", as so: pH. The exact meaning of the "p" in "pH" is disputed, as Sørensen did not explain why he used it. He describes a way of measuring it using "potential" differences, and it represents the negative "power" of 10 in the concentration of hydrogen ions. All the words for these start with "p" in French, German and Danish, all languages Sørensen published in: Carlsberg Laboratory was French-speaking, German was the dominant language of scientific publishing, and Sørensen was Danish. He also used "q" in much the same way elsewhere in the paper. So the "p" could stand for the French "puissance," German "Potenz," or Danish "potens", meaning "power", or it could mean "potential". He might also just have labelled the test solution "p" and the reference solution "q" arbitrarily; these letters are often paired. There is little to support the suggestion that "pH" stands for the Latin terms "pondus hydrogenii" (quantity of hydrogen) or "potentia hydrogenii" (power of hydrogen). Currently in chemistry, the p stands for "decimal cologarithm of", and is also used in the term p"K"a, used for acid dissociation constants and pOH, the equivalent for hydroxide ions. Bacteriologist Alice C. Evans, famed for her work's influence on dairying and food safety, credited William Mansfield Clark and colleagues (of whom she was one) with developing pH measuring methods in the 1910s, which had a wide influence on laboratory and industrial use thereafter. In her memoir, she does not mention how much, or how little, Clark and colleagues knew about Sørensen's work a few years prior. She said: In these studies [of bacterial metabolism] Dr. Clark's attention was directed to the effect of acid on the growth of bacteria. He found that it is the intensity of the acid in terms of hydrogen-ion concentration that affects their growth. But existing methods of measuring acidity determined the quantity, not the intensity, of the acid. Next, with his collaborators, Dr. Clark developed accurate methods for measuring hydrogen-ion concentration. These methods replaced the inaccurate titration method of determining the acid content in use in biologic laboratories throughout the world. Also they were found to be applicable in many industrial and other processes in which they came into wide usage. The first electronic method for measuring pH was invented by Arnold Orville Beckman, a professor at California Institute of Technology in 1934. It was in response to local citrus grower Sunkist that wanted a better method for quickly testing the pH of lemons they were picking from their nearby orchards. pH is defined as the decimal logarithm of the reciprocal of the hydrogen ion activity, "a"H+, in a solution. For example, for a solution with a hydrogen ion activity of (at that level, this is essentially the number of moles of hydrogen ions per litre of solution) there is , thus such a solution has a pH of . For a commonplace example based on the facts that the masses of a mole of water, a mole of hydrogen ions, and a mole of hydroxide ions are respectively 18 g, 1 g, and 17 g, a quantity of 107 moles of pure (pH 7) water, or 180 tonnes (18×107 g), contains close to 1 g of dissociated hydrogen ions (or rather 19 g of H3O+ hydronium ions) and 17 g of hydroxide ions. Note that pH depends on temperature. For instance at 0 °C the pH of pure water is about 7.47. At 25 °C it is 7.00, and at 100 °C it is 6.14. This definition was adopted because ion-selective electrodes, which are used to measure pH, respond to activity. Ideally, electrode potential, "E", follows the Nernst equation, which, for the hydrogen ion can be written as where "E" is a measured potential, "E"0 is the standard electrode potential, "R" is the gas constant, "T" is the temperature in kelvins, "F" is the Faraday constant. For H+ number of electrons transferred is one. It follows that electrode potential is proportional to pH when pH is defined in terms of activity. Precise measurement of pH is presented in International Standard ISO 31-8 as follows: A galvanic cell is set up to measure the electromotive force (e.m.f.) between a reference electrode and an electrode sensitive to the hydrogen ion activity when they are both immersed in the same aqueous solution. The reference electrode may be a silver chloride electrode or a calomel electrode. The hydrogen-ion selective electrode is a standard hydrogen electrode. Firstly, the cell is filled with a solution of known hydrogen ion activity and the emf, "E"S, is measured. Then the emf, "E"X, of the same cell containing the solution of unknown pH is measured. The difference between the two measured emf values is proportional to pH. This method of calibration avoids the need to know the standard electrode potential. The proportionality constant, 1/"z" is ideally equal to formula_4 the "Nernstian slope". To apply this process in practice, a glass electrode is used rather than the cumbersome hydrogen electrode. A combined glass electrode has an in-built reference electrode. It is calibrated against buffer solutions of known hydrogen ion activity. IUPAC has proposed the use of a set of buffer solutions of known H+ activity. Two or more buffer solutions are used in order to accommodate the fact that the "slope" may differ slightly from ideal. To implement this approach to calibration, the electrode is first immersed in a standard solution and the reading on a pH meter is adjusted to be equal to the standard buffer's value. The reading from a second standard buffer solution is then adjusted, using the "slope" control, to be equal to the pH for that solution. Further details, are given in the IUPAC recommendations. When more than two buffer solutions are used the electrode is calibrated by fitting observed pH values to a straight line with respect to standard buffer values. Commercial standard buffer solutions usually come with information on the value at 25 °C and a correction factor to be applied for other temperatures. The pH scale is logarithmic and therefore pH is a dimensionless quantity. This was the original definition of Sørensen in 1909, which was superseded in favor of pH in 1924. [H] is the concentration of hydrogen ions, denoted [H+] in modern chemistry, which appears to have units of concentration. More correctly, the thermodynamic activity of H+ in dilute solution should be replaced by [H+]/c0, where the standard state concentration c0 = 1 mol/L. This ratio is a pure number whose logarithm can be defined. However, it is possible to measure the concentration of hydrogen ions directly, if the electrode is calibrated in terms of hydrogen ion concentrations. One way to do this, which has been used extensively, is to titrate a solution of known concentration of a strong acid with a solution of known concentration of strong alkaline in the presence of a relatively high concentration of background electrolyte. Since the concentrations of acid and alkaline are known, it is easy to calculate the concentration of hydrogen ions so that the measured potential can be correlated with concentrations. The calibration is usually carried out using a Gran plot.
https://en.wikipedia.org/wiki?curid=24530
Pastel A pastel (, ) is an art medium in the form of a stick, consisting of powdered pigment and a binder. The pigments used in pastels are similar to those used to produce some other colored visual arts media, such as oil paints; the binder is of a neutral hue and low saturation. The color effect of pastels is closer to the natural dry pigments than that of any other process. Pastels have been used by artists since the Renaissance, and gained considerable popularity in the 18th century, when a number of notable artists made pastel their primary medium. An artwork made using pastels is called a pastel (or a pastel drawing or pastel painting). "Pastel" used as a verb means to produce an artwork with pastels; as an adjective it means pale in color. Pastel sticks or crayons consist of powdered pigment combined with a binder. The exact composition and characteristics of an individual pastel stick depend on the type of pastel and the type and amount of binder used. It also varies by individual manufacturer. Dry pastels have historically used binders such as gum arabic and gum tragacanth. Methyl cellulose was introduced as a binder in the twentieth century. Often a chalk or gypsum component is present. They are available in varying degrees of hardness, the softer varieties being wrapped in paper. Some pastel brands use pumice in the binder to abrade the paper and create more tooth. Dry pastel media can be subdivided as follows: In addition, pastels using a different approach to manufacture have been developed: There has been some debate within art societies as to what exactly counts as a pastel. The Pastel Society within the UK (the oldest pastel society) states the following are acceptable media for its exhibitions: "Pastels, including Oil pastel, Charcoal, Pencil, Conté, Sanguine, or any dry media". The emphasis appears to be on "dry media" but the debate continues. In order to create hard and soft pastels, pigments are ground into a paste with water and a gum binder and then rolled, pressed or extruded into sticks. The name "pastel" comes from Medieval Latin "pastellum", woad paste, from Late Latin "pastellus", paste. The French word "pastel" first appeared in 1662. Most brands produce gradations of a color, the original pigment of which tends to be dark, from pure pigment to near-white by mixing in differing quantities of chalk. This mixing of pigments with chalks is the origin of the word "pastel" in reference to "pale color" as it is commonly used in cosmetic and fashion venues. A pastel is made by letting the sticks move over an abrasive ground, leaving color on the grain of the paper, sandboard, canvas etc. When fully covered with pastel, the work is called a pastel "painting"; when not, a pastel "sketch" or "drawing". Pastel paintings, being made with a medium that has the highest pigment concentration of all, reflect light without darkening refraction, allowing for very saturated colors. Pastel supports need to provide a "tooth" for the pastel to adhere and hold the pigment in place. Supports include: Pastels can be used to produce a permanent work of art if the artist meets appropriate archival considerations. This means: For these reasons, some pastelists avoid the use of a fixative except in cases where the pastel has been overworked so much that the surface will no longer hold any more pastel. The fixative will restore the "tooth" and more pastel can be applied on top. It is the tooth of the painting surface that holds the pastels, not a fixative. Abrasive supports avoid or minimize the need to apply further fixative in this way. SpectraFix, a modern casein fixative available premixed in a pump misting bottle or as concentrate to be mixed with alcohol, is not toxic and does not darken or dull pastel colors. However, SpectraFix takes some practice to use because it's applied with a pump misting bottle instead of an aerosol spray can. It is easy to use too much SpectraFix and leave puddles of liquid that may dissolve passages of color; also it takes a little longer to dry than conventional spray fixatives between light layers. Glassine (paper) is used by artists to protect artwork which is being stored or transported. Some good quality books of pastel papers also include glassine to separate pages. Pastel techniques can be challenging since the medium is mixed and blended directly on the working surface, and unlike paint, colors cannot be tested on a palette before applying to the surface. Pastel errors cannot be covered the way a paint error can be painted out. Experimentation with the pastel medium on a small scale in order to learn various techniques gives the user a better command over a larger composition. Pastels have some techniques in common with painting, such as blending, masking, building up layers of color, adding accents and highlighting, and shading. Some techniques are characteristic of both pastels and sketching mediums such as charcoal and lead, for example, hatching and crosshatching, and gradation. Other techniques are particular to the pastel medium. Pastels are a dry medium and produce a great deal of dust, which can cause respiratory irritation. More seriously, pastels use the same pigments as artists' paints, many of which are toxic. For example, exposure to cadmium pigments, which are common and popular bright yellows, oranges, and reds, can lead to cadmium poisoning. Pastel artists, who use the pigments without a strong painting binder, are especially susceptible to such poisoning. For this reason, many modern pastels are made using substitutions for cadmium, chromium, and other toxic pigments, while retaining the traditional pigment names. The manufacture of pastels originated in the 15th century. The pastel medium was mentioned by Leonardo da Vinci, who learned of it from the French artist Jean Perréal after that artist's arrival in Milan in 1499. Pastel was sometimes used as a medium for preparatory studies by 16th-century artists, notably Federico Barocci. The first French artist to specialize in pastel portraits was Joseph Vivien. During the 18th century the medium became fashionable for portrait painting, sometimes in a mixed technique with gouache. Pastel was an important medium for artists such as Jean-Baptiste Perronneau, Maurice Quentin de La Tour (who never painted in oils), and Rosalba Carriera. The pastel still life paintings and portraits of Jean-Baptiste-Siméon Chardin are much admired, as are the works of the Swiss-French artist Jean-Étienne Liotard. In 18th-century England the outstanding practitioner was John Russell. In Colonial America, John Singleton Copley used pastel occasionally for portraits. In France, pastel briefly became unpopular during and after the Revolution, as the medium was identified with the frivolity of the Ancien Régime. By the mid-19th century, French artists such as Eugène Delacroix and especially Jean-François Millet were again making significant use of pastel. Their countryman Édouard Manet painted a number of portraits in pastel on canvas, an unconventional ground for the medium. Edgar Degas was an innovator in pastel technique, and used it with an almost expressionist vigor after about 1885, when it became his primary medium. Odilon Redon produced a large body of works in pastel. James Abbott McNeill Whistler produced a quantity of pastels around 1880, including a body of work relating to Venice, and this probably contributed to a growing enthusiasm for the medium in the United States. In particular, he demonstrated how few strokes were required to evoke a place or an atmosphere. Mary Cassatt, an American artist active in France, introduced the Impressionists and pastel to her friends in Philadelphia and Washington. According to the Metropolitan Museum of Art's "Time Line of Art History: Nineteenth Century American Drawings": On the East Coast of the United States, the Society of Painters in Pastel was founded in 1883 by William Merritt Chase, Robert Blum, and others. The Pastellists, led by Leon Dabo, was organized in New York in late 1910 and included among its ranks Everett Shinn and Arthur Bowen Davies. On the American West Coast the influential artist and teacher Pedro Joseph de Lemos, who served as Chief Administrator of the San Francisco Art Institute and Director of the Stanford University Museum and Art Gallery, popularized pastels in regional exhibitions. Beginning in 1919 de Lemos published a series of articles on “painting” with pastels, which included such notable innovations as allowing the intensity of light on the subject to determine the distinct color of laid paper and the use of special optics for making “night sketches” in both urban and rural settings. His night scenes, which were often called “dreamscapes” in the press, were influenced by French Symbolism, and especially Odilon Redon. Pastels have been favored by many modern artists because of the medium's broad range of bright colors. Modern notable artists who have worked extensively in pastels include Fernando Botero, Francesco Clemente, Daniel Greene, Wolf Kahn, and R. B. Kitaj.
https://en.wikipedia.org/wiki?curid=24533
Pen pal Pen pals (or penpals, pen-pals, penfriends or pen friends) are people who regularly write to each other, particularly via postal mail. Pen pals are usually strangers whose relationship is based primarily, or even solely, on their exchange of letters. Occasionally pen pals may already have a relationship that is not regularly conducted in person. A pen pal relationship is often used to practice reading and writing in a foreign language, to improve literacy, to learn more about other countries and life-styles, and to make friendships. As with any friendships in life, some people remain pen pals for only a short time, while others continue to exchange letters and presents for life. Some pen pals eventually arrange to meet face to face; sometimes leading to serious relationships, or even marriage. Pen pals come in all ages, nationalities, cultures, languages and interests. Pals may seek new penfriends based on their own age group, a specific occupation, hobby, or select someone totally different from them to gain knowledge about the world around them. A modern variation on the traditional pen pal arrangement is to have a keypal - also called e-pal - with whom one exchanges email addresses as well as - or instead of - paper letters. This has the advantage of saving money and being more immediate, allowing many messages to be exchanged in a short period of time. The disadvantage is that the communication can be very ephemeral if the email messages are not routinely saved. Many people prefer to receive paper letters, gaining the satisfaction of seeing their name carefully printed on a thick envelope in the letterbox. Using postal mail, it is possible to trade coupons, swap slips, postcards, stamps and anything else light and flat enough to fit inside an envelope, often called "tuck-ins". Many pen pals like to trade sheets of stickers, notecards and stationery sets. While the expansion of the Internet has reduced the number of traditional pen pals, pen pal clubs can nowadays be found on the Internet, in magazine columns, newspapers, and sometimes through clubs or special interest groups. Some people are looking for romantic interests, while others just want to find friends. It seems, on the internet, that the term "pen pals" defines those looking to correspond with others that live in a different place, where pen pals originated via postal mail correspondences and has evolved to mean something more. Pen pals also make and pass around friendship books, slams and crams. Many pen pals meet each other through organizations that bring people together for this purpose. Organizations can be split into three main categories: free, partial subscription, and subscription-based clubs. Free clubs are usually funded by advertising and profiles are not reviewed, whereas subscription-based clubs will usually not contain any advertising and will have an administrator approving profiles to the database. While the traditional snail mail pen pal relationship has fallen into a decline due to modern technology closing the world's communication gap, prison pen pal services have combined technology with traditional letter writing. These sites allow prisoners to place pen pal ads online; however, inmates in the United States and most of the world are not permitted to access the Internet. Therefore, the pen pal relationships with inmates are still conducted via postal mail. Other pen pal organizations have survived by embracing the technology of the Internet. The Australian author Geraldine Brooks wrote a memoir entitled "Foreign Correspondence" (1997), about her childhood which was enriched by her exchanges of letters with other children in Australia and overseas, and her travels as an adult in search of the people they had become. In the 1970s, the syndicated children's television program "Big Blue Marble" often invited viewers to write to them for their own pen pal. On another children's TV show, "Pee-wee's Playhouse", Pee-wee Herman would often receive pen pal letters. At the 1964/1965 World's Fair in New York, the Parker Pen pavilion provided a computer pen pal matching service; Parker Pen officially terminated the service in 1967. This service did not work in conjunction with any other pen friend clubs. The computer system and database used for this service were not sold, taken over, or continued in any way. In the "Peanuts" comic strip from the 1960s and 1970s, Charlie Brown tries to write to a pen pal using a fountain pen, but after several literally "botched" attempts, Charlie switches to using a pencil and referring to his penpal as his "pencil-pal"; his first letter to his "pencil-pal" explains the reason for the name change. The Bollywood film "Romance" (1983) is about two people, Amar (from India) and Sonia (from the UK) who fall in love after becoming pen pals. The Bollywood film "Sirf Tum" (1999) has a similar storyline. The film "You've Got Mail" (1998) is a romantic comedy about two people in a pen pal email courtship who are unaware that they are also business rivals. The action-drama film "Out of Reach" (2004) is about a pen pal relationship between a Vietnam War veteran and a 13-year-old orphaned girl from Poland. When the letters suddenly stop coming, the veteran heads to Poland to find out the reason. The film "A Cinderella Story" (2004) is a teen romantic comedy about two people in a pen pal email courtship who plan to meet in person at their high school's Halloween dance. The claymation film "Mary and Max" (2009) is about the pen pal relationship between an American man and an Australian girl. The plot of the novel "Penpal" (2012) takes a dark twist to this innocent idea; the protagonist is stalked ever since sending his letter. Musicians Jetty Rae and Heath McNease have collaborated under the moniker "Pen Pals".
https://en.wikipedia.org/wiki?curid=24537
Philip Glass Philip Glass (born January 31, 1937) is an American composer and pianist. He is widely regarded as one of the most influential composers of the late 20th century. Glass's work has been described as "minimal music", having similar qualities to other "minimalist" composers such as La Monte Young, Steve Reich, and Terry Riley. Glass describes himself as a composer of "music with repetitive structures", which he has helped evolve stylistically. Glass founded the Philip Glass Ensemble, with which he still performs on keyboards. He has written numerous operas and musical theatre works, twelve symphonies, eleven concertos, eight string quartets and various other chamber music, and film scores. Three of his film scores have been nominated for Academy Awards. Glass was born in Baltimore, Maryland, the son of Ida (née Gouline) and Benjamin Charles Glass. His family were Lithuanian Jewish emigrants. His father owned a record store and his mother was a librarian. In his memoir, Glass recalls that at the end of World War II his mother aided Jewish Holocaust survivors, inviting recent arrivals to America to stay at their home until they could find a job and a place to live. She developed a plan to help them learn English and develop skills so they could find work. His sister, Sheppie, would later do similar work as an active member of the International Rescue Committee. Glass developed his appreciation of music from his father, discovering later his father's side of the family had many musicians. His cousin Cevia was a classical pianist, while others had been in vaudeville. He learned his family was also related to Al Jolson. Glass's father often received promotional copies of new recordings at his music store. He spent many hours listening to them, developing his knowledge and taste in music. This openness to modern sounds affected Glass at an early age: The elder Glass promoted both new recordings and a wide selection of composers to his customers, sometimes convincing them to try something new by allowing them to return records they didn't like. His store soon developed a reputation as Baltimore's leading source of modern music. Glass built a sizable record collection from the unsold records in his father's store, including modern classical music such as Hindemith, Bartók, Schoenberg, Shostakovich and Western classical music including Beethoven's string quartets and Schubert's B Piano Trio. Glass cites Schubert's work as a "big influence" growing up. He studied the flute as a child at the university-preparatory school of the Peabody Institute. At the age of 15, he entered an accelerated college program at the University of Chicago where he studied mathematics and philosophy. In Chicago he discovered the serialism of Anton Webern and composed a twelve-tone string trio. In 1954 Glass traveled to Paris, where he encountered the films of Jean Cocteau, which made a lasting impression on him. He visited artists' studios and saw their work; "the bohemian life you see in [Cocteau's] "Orphée" was the life I ... was attracted to, and those were the people I hung out with." Glass studied at the Juilliard School of Music where the keyboard was his main instrument. His composition teachers included Vincent Persichetti and William Bergsma. Fellow students included Steve Reich and Peter Schickele. In 1959, he was a winner in the BMI Foundation's BMI Student Composer Awards, an international prize for young composers. In the summer of 1960, he studied with Darius Milhaud at the summer school of the Aspen Music Festival and composed a violin concerto for a fellow student, Dorothy Pixley-Rothschild. After leaving Juilliard in 1962, Glass moved to Pittsburgh and worked as a school-based composer-in-residence in the public school system, composing various choral, chamber and orchestral music. In 1964, Glass received a Fulbright Scholarship; his studies in Paris with the eminent composition teacher Nadia Boulanger, from autumn of 1964 to summer of 1966, influenced his work throughout his life, as the composer admitted in 1979: "The composers I studied with Boulanger are the people I still think about most—Bach and Mozart." Glass later stated in his autobiography "Music by Philip Glass" (1987) that the new music performed at Pierre Boulez's "Domaine Musical" concerts in Paris lacked any excitement for him (with the notable exceptions of music by John Cage and Morton Feldman), but he was deeply impressed by new films and theatre performances. His move away from modernist composers such as Boulez and Stockhausen was nuanced, rather than outright rejection: "That generation wanted disciples and as we didn't join up it was taken to mean that we hated the music, which wasn't true. We'd studied them at Juilliard and knew their music. How on earth can you reject Berio? Those early works of Stockhausen are still beautiful. But there was just no point in attempting to do their music better than they did and so we started somewhere else." He encountered revolutionary films of the French New Wave, such as those of Jean-Luc Godard and François Truffaut, which upended the rules set by an older generation of artists, and Glass made friends with American visual artists (the sculptor Richard Serra and his wife Nancy Graves), actors and directors (JoAnne Akalaitis, Ruth Maleczech, David Warrilow, and Lee Breuer, with whom Glass later founded the experimental theatre group Mabou Mines). Together with Akalaitis (they married in 1965), Glass in turn attended performances by theatre groups including Jean-Louis Barrault's Odéon theatre, The Living Theatre and the Berliner Ensemble in 1964 to 1965. These significant encounters resulted in a collaboration with Breuer for which Glass contributed music for a 1965 staging of Samuel Beckett's "Comédie" ("Play", 1963). The resulting piece (written for two soprano saxophones) was directly influenced by the play's open-ended, repetitive and almost musical structure and was the first one of a series of four early pieces in a minimalist, yet still dissonant, idiom. After "Play", Glass also acted in 1966 as music director of a Breuer production of Brecht's "Mother Courage and Her Children", featuring the theatre score by Paul Dessau. In parallel with his early excursions in experimental theatre, Glass worked in winter 1965 and spring 1966 as a music director and composer on a film score ("Chappaqua", Conrad Rooks, 1966) with Ravi Shankar and Alla Rakha, which added another important influence on Glass's musical thinking. His distinctive style arose from his work with Shankar and Rakha and their perception of rhythm in Indian music as being entirely additive. He renounced all his compositions in a moderately modern style resembling Milhaud's, Aaron Copland's, and Samuel Barber's, and began writing pieces based on repetitive structures of Indian music and a sense of time influenced by Samuel Beckett: a piece for two actresses and chamber ensemble, a work for chamber ensemble and his first numbered string quartet (No. 1, 1966). Glass then left Paris for northern India in 1966, where he came in contact with Tibetan refugees and began to gravitate towards Buddhism. He met Tenzin Gyatso, the 14th Dalai Lama, in 1972, and has been a strong supporter of the Tibetan independence ever since. Shortly after arriving in New York City in March 1967, Glass attended a performance of works by Steve Reich (including the ground-breaking minimalist piece "Piano Phase"), which left a deep impression on him; he simplified his style and turned to a radical "consonant vocabulary". Finding little sympathy from traditional performers and performance spaces, Glass eventually formed an ensemble with fellow ex-student Jon Gibson, and others, and began performing mainly in art galleries and studio lofts of SoHo. The visual artist Richard Serra provided Glass with Gallery contacts, while both collaborated on various sculptures, films and installations; from 1971 to 1974 he became Serra's regular studio assistant. Between summer of 1967 and the end of 1968, Glass composed nine works, including "Strung Out" (for amplified solo violin, composed in summer of 1967), Gradus (for solo saxophone, 1968), "Music in the Shape of a Square" (for two flutes, composed in May 1968, an homage to Erik Satie), "How Now" (for solo piano, 1968) and "1+1" (for amplified tabletop, November 1968) which were "clearly designed to experiment more fully with his new-found minimalist approach". The first concert of Glass's new music was at Jonas Mekas's Film-Makers Cinemathèque (Anthology Film Archives) in September 1968. This concert included the first work of this series with "Strung Out" (performed by the violinist Pixley-Rothschild) and "Music in the Shape of a Square" (performed by Glass and Gibson). The musical scores were tacked on the wall, and the performers had to move while playing. Glass's new works met with a very enthusiastic response by the audience which consisted mainly of visual and performance artists who were highly sympathetic to Glass's reductive approach. Apart from his music career, Glass had a moving company with his cousin, the sculptor Jene Highstein, and also worked as a plumber and cab driver (during 1973 to 1978). He recounts installing a dishwasher and looking up from his work to see an astonished Robert Hughes, "Time" magazine's art critic, staring at him. During this time, he made friends with other New York-based artists such as Sol LeWitt, Nancy Graves, Michael Snow, Bruce Nauman, Laurie Anderson, and Chuck Close (who created a now-famous portrait of Glass). (Glass returned the compliment in 2005 with "A Musical Portrait of Chuck Close" for piano.) With "1+1" and "Two Pages" (composed in February 1969) Glass turned to a more "rigorous approach" to his "most basic minimalist technique, additive process", pieces which were followed in the same year by "Music in Contrary Motion" and "Music in Fifths" (a kind of homage to his composition teacher Nadia Boulanger, who pointed out "hidden fifths" in his works but regarded them as cardinal sins). Eventually Glass's music grew less austere, becoming more complex and dramatic, with pieces such as "Music in Similar Motion" (1969), and "Music with Changing Parts" (1970). These pieces were performed by The Philip Glass Ensemble in the Whitney Museum of American Art in 1969 and in the Solomon R. Guggenheim Museum in 1970, often encountering hostile reaction from critics, but Glass's music was also met with enthusiasm from younger artists such as Brian Eno and David Bowie (at the Royal College of Art ca. 1970). Eno described this encounter with Glass's music as one of the "most extraordinary musical experiences of [his] life", as a "viscous bath of pure, thick energy", concluding "this was actually the most detailed music I'd ever heard. It was all intricacy, exotic harmonics". In 1970 Glass returned to the theatre, composing music for the theatre group Mabou Mines, resulting in his first minimalist pieces employing voices: "Red Horse Animation" and "Music for Voices" (both 1970, and premiered at the Paula Cooper Gallery). After differences of opinion with Steve Reich in 1971, Glass formed the Philip Glass Ensemble (while Reich formed Steve Reich and Musicians), an amplified ensemble including keyboards, wind instruments (saxophones, flutes), and soprano voices. Glass's music for his ensemble culminated in the four-hour-long "Music in Twelve Parts" (1971–1974), which began as a single piece with twelve instrumental parts but developed into a cycle that summed up Glass's musical achievement since 1967, and even transcended it—the last part features a twelve-tone theme, sung by the soprano voice of the ensemble. "I had broken the rules of modernism and so I thought it was time to break some of my own rules", according to Glass. Though he finds the term minimalist inaccurate to describe his later work, Glass does accept this term for pieces up to and including "Music in 12 Parts", excepting this last part which "was the end of minimalism" for Glass. As he pointed out: "I had worked for eight or nine years inventing a system, and now I'd written through it and come out the other end." He now prefers to describe himself as a composer of "music with repetitive structures". Glass continued his work with a series of instrumental works, called "Another Look at Harmony" (1975–1977). For Glass this series demonstrated a new start, hence the title: "What I was looking for was a way of combining harmonic progression with the rhythmic structure I had been developing, to produce a new overall structure. ... I'd taken everything out with my early works and it was now time to decide just what I wanted to put in—a process that would occupy me for several years to come." Parts 1 and 2 of "Another Look at Harmony" were included in a collaboration with Robert Wilson, a piece of musical theater later designated by Glass as the first opera of his portrait opera trilogy: "Einstein on the Beach". Composed in spring to fall of 1975 in close collaboration with Wilson, Glass's first opera was first premiered in summer 1976 at the Festival d'Avignon, and in November of the same year to a mixed and partly enthusiastic reaction from the audience at the Metropolitan Opera in New York City. Scored for the Philip Glass Ensemble, solo violin, chorus, and featuring actors (reciting texts by Christopher Knowles, Lucinda Childs and Samuel M. Johnson), Glass's and Wilson's essentially plotless opera was conceived as a "metaphorical look at Albert Einstein: scientist, humanist, amateur musician—and the man whose theories ... led to the splitting of the atom", evoking nuclear holocaust in the climactic scene, as critic Tim Page pointed out. As with "Another Look at Harmony", ""Einstein" added a new functional harmony that set it apart from the early conceptual works". Composer Tom Johnson came to the same conclusion, comparing the solo violin music to Johann Sebastian Bach, and the "organ figures ... to those Alberti basses Mozart loved so much". The piece was praised by "The Washington Post" as "one of the seminal artworks of the century". "Einstein on the Beach" was followed by further music for projects by the theatre group Mabou Mines such as "Dressed like an Egg" (1975), and again music for plays and adaptations from prose by Samuel Beckett, such as "The Lost Ones" (1975), "Cascando" (1975), "Mercier and Camier" (1979). Glass also turned to other media; two multi-movement instrumental works for the Philip Glass Ensemble originated as music for film and TV: "North Star" (1977 score for the documentary "" by François de Menil and Barbara Rose) and four short cues for the children's TV series "Sesame Street" named "Geometry of Circles" (1979). Another series, "Fourth Series" (1977–79), included music for chorus and organ ("Part One", 1977), organ and piano ("Part Two" and "Part Four", 1979), and music for a radio adaption of Constance DeJong's novel "Modern Love" ("Part Three", 1978). "Part Two" and "Part Four" were used (and hence renamed) in two dance productions by choreographer Lucinda Childs (who had already contributed to and performed in "Einstein on the Beach"). "Part Two" was included in "Dance" (a collaboration with visual artist Sol LeWitt, 1979), and "Part Four" was renamed as "Mad Rush", and performed by Glass on several occasions such as the first public appearance of the 14th Dalai Lama in New York City in Fall 1981. The piece demonstrates Glass's turn to more traditional models: the composer added a conclusion to an open-structured piece which "can be interpreted as a sign that he [had] abandoned the radical non-narrative, undramatic approaches of his early period", as the pianist Steffen Schleiermacher points out. In Spring 1978, Glass received a commission from the Netherlands Opera (as well as a Rockefeller Foundation grant) which "marked the end of his need to earn money from non-musical employment". With the commission Glass continued his work in music theater, composing his opera "Satyagraha" (composed in 1978–1979, premiered in 1980 at Rotterdam), based on the early life of Mahatma Gandhi in South Africa, Leo Tolstoy, Rabindranath Tagore, and Martin Luther King Jr.. For "Satyagraha ", Glass worked in close collaboration with two "SoHo friends": the writer Constance deJong, who provided the libretto, and the set designer Robert Israel. This piece was in other ways a turning point for Glass, as it was his first work since 1963 scored for symphony orchestra, even if the most prominent parts were still reserved for solo voices and chorus. Shortly after completing the score in August 1979, Glass met the conductor Dennis Russell Davies, whom he helped prepare for performances in Germany (using a piano-four-hands version of the score); together they started to plan another opera, to be premiered at the Stuttgart State Opera. While planning a third part of his "Portrait Trilogy", Glass turned to smaller music theatre projects such as the non-narrative "Madrigal Opera" (for six voices and violin and viola, 1980), and "The Photographer", a biographic study on the photographer Eadweard Muybridge (1982). Glass also continued to write for the orchestra with the score of "Koyaanisqatsi" (Godfrey Reggio, 1981–1982). Some pieces which were not used in the film (such as "Façades") eventually appeared on the album "Glassworks" (1982, CBS Records), which brought Glass's music to a wider public. The "Portrait Trilogy" was completed with "Akhnaten" (1982–1983, premiered in 1984), a vocal and orchestral composition sung in Akkadian, Biblical Hebrew, and Ancient Egyptian. In addition, this opera featured an actor reciting ancient Egyptian texts in the language of the audience. "Akhnaten" was commissioned by the Stuttgart Opera in a production designed by Achim Freyer. It premiered simultaneously at the Houston Opera in a production directed by David Freeman and designed by Peter Sellars. At the time of the commission, the Stuttgart Opera House was undergoing renovation, necessitating the use of a nearby playhouse with a smaller orchestra pit. Upon learning this, Glass and conductor Dennis Russell Davies visited the playhouse, placing music stands around the pit to determine how many players the pit could accommodate. The two found they could not fit a full orchestra in the pit. Glass decided to eliminate the violins, which had the effect of "giving the orchestra a low, dark sound that came to characterize the piece and suited the subject very well". As Glass remarked in 1992, "Akhnaten" is significant in his work since it represents a "first extension out of a triadic harmonic language", an experiment with the polytonality of his teachers Persichetti and Milhaud, a musical technique which Glass compares to "an optical illusion, such as in the paintings of Josef Albers". Glass again collaborated with Robert Wilson on another opera, "" (1983, premiered in 1984), which also functioned as the final part ("the Rome section) of Wilson's epic work by the same name, originally planned for an "international arts festival that would accompany the Olympic Games in Los Angeles". (Glass also composed a prestigious work for chorus and orchestra for the opening of the Games, "The Olympian: Lighting of the Torch and Closing "). The premiere of "The CIVIL warS" in Los Angeles never materialized and the opera was in the end premiered at the Opera of Rome. Glass's and Wilson's opera includes musical settings of Latin texts by the 1st-century-Roman playwright Seneca and allusions to the music of Giuseppe Verdi and from the American Civil War, featuring the 19th century figures Giuseppe Garibaldi and Robert E. Lee as characters. In the mid-1980s, Glass produced "works in different media at an extraordinarily rapid pace". Projects from that period include music for dance (Glass Pieces choreographed for New York City Ballet by Jerome Robbins in 1983 to a score drawn from existing Glass compositions created for other media including an excerpt from "Akhnaten"; and "In the Upper Room", Twyla Tharp, 1986), music for theatre productions "Endgame" (1984) and "Company" (1983). Beckett vehemently disapproved of the production of "Endgame" at the American Repertory Theater (Cambridge, Massachusetts), which featured JoAnne Akalaitis's direction and Glass's "Prelude" for timpani and double bass, but in the end, he authorized the music for "Company", four short, intimate pieces for string quartet that were played in the intervals of the dramatization. This composition was initially regarded by the composer as a piece of Gebrauchsmusik ('music for use')—"like salt and pepper ... just something for the table", as he noted. Eventually "Company" was published as Glass's String Quartet No. 2 and in a version for string orchestra, being performed by ensembles ranging from student orchestras to renowned formations such as the Kronos Quartet and the Kremerata Baltica. This interest in writing for the string quartet and the string orchestra led to a chamber and orchestral film score for "" (Paul Schrader, 1984–85), which Glass recently described as his "musical turning point" that developed his "technique of film scoring in a very special way". Glass also dedicated himself to vocal works with two sets of songs, "Three Songs for chorus" (1984, settings of poems by Leonard Cohen, Octavio Paz and Raymond Levesque), and a song cycle initiated by CBS Masterworks Records: "Songs from Liquid Days" (1985), with texts by songwriters such as David Byrne, Paul Simon, in which the Kronos Quartet is featured (as it is in "Mishima") in a prominent role. Glass also continued his series of operas with adaptations from literary texts such as "The Juniper Tree" (an opera collaboration with composer Robert Moran, 1984), Edgar Allan Poe's "The Fall of the House of Usher" (1987), and also worked with novelist Doris Lessing on the opera "The Making of the Representative for Planet 8" (1985–86, and performed by the Houston Grand Opera and English National Opera in 1988). Compositions such as "Company", "Facades" and String Quartet No. 3 (the last two extracted from the scores to "Koyaanisqatsi" and "Mishima") gave way to a series of works more accessible to ensembles such as the string quartet and symphony orchestra, in this returning to the structural roots of his student days. In taking this direction his chamber and orchestral works were also written in a more and more traditional and lyrical style. In these works, Glass often employs old musical forms such as the chaconne and the passacaglia—for instance in "Satyagraha", the Violin Concerto No. 1 (1987), Symphony No. 3 (1995), "Echorus" (1995) and also recent works such as Symphony No. 8 (2005), and "Songs and Poems for Solo Cello" (2006). A series of orchestral works originally composed for the concert hall commenced with the 3-movement Violin Concerto No. 1 (1987). This work was commissioned by the American Composers Orchestra and written for and in close collaboration with the violinist Paul Zukofsky and the conductor Dennis Russel Davies, who since then has encouraged the composer to write numerous orchestral pieces. The Concerto is dedicated to the memory of Glass's father: "His favorite form was the violin concerto, and so I grew up listening to the Mendelssohn, the Paganini, the Brahms concertos. ... So when I decided to write a violin concerto, I wanted to write one that my father would have liked." Among its multiple recordings, in 1992, the Concerto was performed and recorded by Gidon Kremer and the Vienna Philharmonic. This turn to orchestral music was continued with a symphonic trilogy of "portraits of nature", commissioned by the Cleveland Orchestra, the Rotterdam Philharmonic Orchestra, and the Atlanta Symphony Orchestra: "The Light" (1987), "The Canyon" (1988), and "Itaipu" (1989). While composing for symphonic ensembles, Glass also composed music for piano, with the cycle of five movements titled "Metamorphosis" (adapted from music for a theatrical adaptation of Franz Kafka's "The Metamorphosis"), and for the Errol Morris film "The Thin Blue Line", 1988. In the same year Glass met the poet Allen Ginsberg by chance in a book store in the East Village of New York City, and they immediately "decided on the spot to do something together, reached for one of Allen's books and chose "Wichita Vortex Sutra"", a piece for reciter and piano which in turn developed into a music theatre piece for singers and ensemble, "Hydrogen Jukebox" (1990). Glass also returned to chamber music; he composed two String Quartets (No. 4 "Buczak" in 1989 and No. 5 in 1991), and chamber works which originated as incidental music for plays, such as "Music from "The Screens"" (1989/1990). This work originated in one of many theater music collaborations with the director JoAnne Akalaitis, who originally asked the Gambian musician Foday Musa Suso "to do the score [for Jean Genet's "The Screens"] in collaboration with a western composer". Glass had already collaborated with Suso in the film score to "Powaqqatsi" (Godfrey Reggio, 1988). "Music from "The Screens"" is on occasion a touring piece for Glass and Suso (one set of tours also included percussionist Yousif Sheronick ), and individual pieces found its way to the repertoire of Glass and the cellist Wendy Sutter. Another collaboration was a collaborative recording project with Ravi Shankar, initiated by Peter Baumann (a member of the band Tangerine Dream), which resulted in the album "Passages" (1990). In the late 1980s and early 1990s, Glass's projects also included two highly prestigious opera commissions based on the life of explorers: "The Voyage" (1992), with a libretto by David Henry Hwang, was commissioned by the Metropolitan Opera for the 500th anniversary of the discovery of America by Christopher Columbus; and "White Raven" (1991), about Vasco da Gama, a collaboration with Robert Wilson and composed for the closure of the 1998 World Fair in Lisbon. Especially in "The Voyage", the composer "explore[d] new territory", with its "newly arching lyricism", "Sibelian starkness and sweep", and "dark, brooding tone ... a reflection of its increasingly chromatic (and dissonant) palette", as one commentator put it. Glass remixed the S'Express song "Hey Music Lover", for the b-side of its 1989 release as a single. After these operas, Glass began working on a symphonic cycle, commissioned by the conductor Dennis Russell Davies, who told Glass at the time: "I'm not going to let you ... be one of those opera composers who never write a symphony". Glass responded with two 3-movement symphonies (""Low"" [1992], and Symphony No. 2 [1994]); his first in an ongoing series of symphonies is a combination of the composer's own musical material with themes featured in prominent tracks of the David Bowie/Brian Eno album "Low" (1977), whereas Symphony No. 2 is described by Glass as a study in polytonality. He referred to the music of Honegger, Milhaud, and Villa-Lobos as possible models for his symphony. With the Concerto Grosso (1992), Symphony No. 3 (1995), a Concerto for Saxophone Quartet and Orchestra (1995), written for the Rascher Quartet (all commissioned by conductor Dennis Russel Davies), and "Echorus" (1994/95), a more transparent, refined, and intimate chamber-orchestral style paralleled the excursions of his large-scale symphonic pieces. In the four movements of his Third Symphony, Glass treats a 19-piece string orchestra as an extended chamber ensemble. In the third movement, Glass re-uses the chaconne as a formal device; one commentator characterized Glass's symphony as one of the composer's "most tautly unified works" The third Symphony was closely followed by a fourth, subtitled "Heroes" (1996), commissioned the American Composers Orchestra. Its six movements are symphonic reworkings of themes by Glass, David Bowie, and Brian Eno (from their album ""Heroes"", 1977); as in other works by the composer, it is also a hybrid work and exists in two versions: one for the concert hall, and another, shorter one for dance, choreographed by Twyla Tharp. Another commission by Dennis Russell Davies was a second series for piano, the "Etudes" for Piano (dedicated to Davies as well as the production designer Achim Freyer); the complete first set of ten Etudes has been recorded and performed by Glass himself. Bruce Brubaker and Dennis Russell Davies have each recorded the original set of six. Most of the Etudes are composed in the post-minimalist and increasingly lyrical style of the times: "Within the framework of a concise form, Glass explores possible sonorities ranging from typically Baroque passagework to Romantically tinged moods". Some of the pieces also appeared in different versions such as in the theatre music to Robert Wilson's "Persephone" (1994, commissioned by the Relache Ensemble) or "Echorus" (a version of Etude No. 2 for two violins and string orchestra, written for Edna Mitchell and Yehudi Menuhin 1995). Glass's prolific output in the 1990s continued to include operas with an opera triptych (1991–1996), which the composer described as an "homage" to writer and film director Jean Cocteau, based on his prose and cinematic work: "Orphée" (1949), "La Belle et la Bête" (1946), and the novel "Les Enfants Terribles" (1929, later made into a film by Cocteau and Jean-Pierre Melville, 1950). In the same way the triptych is also a musical homage to the work of the group of French composers associated with Cocteau, Les Six (and especially to Glass's teacher Darius Milhaud), as well as to various 18th-century composers such as Gluck and Bach whose music featured as an essential part of the films by Cocteau. The inspiration of the first part of the trilogy, "Orphée" (composed in 1991, and premiered in 1993 at the American Repertory Theatre) can be conceptually and musically traced to Gluck's opera "Orfeo ed Euridice" ("Orphée et Euridyce", 1762/1774), which had a prominent part in Cocteau's 1949 film "Orphee". One theme of the opera, the death of Eurydice, has some similarity to the composer's personal life: the opera was composed after the unexpected death in 1991 of Glass's wife, artist Candy Jernigan: "... One can only suspect that Orpheus' grief must have resembled the composer's own", K. Robert Schwartz suggests. The opera's "transparency of texture, a subtlety of instrumental color, ... a newly expressive and unfettered vocal writing" was praised, and "The Guardian's" critic remarked "Glass has a real affinity for the French text and sets the words eloquently, underpinning them with delicately patterned instrumental textures". For the second opera, "La Belle et la Bête" (1994, scored for either the Philip Glass Ensemble or a more conventional chamber orchestra), Glass replaced the soundtrack (including Georges Auric's film music) of Cocteau's film, wrote "a new fully operatic score and synchronize[d] it with the film". The final part of the triptych returned again to a more traditional setting with the "Dance Opera" "Les Enfants Terribles" (1996), scored for voices, three pianos and dancers, with choreography by Susan Marshall. The characters are depicted by both singers and dancers. The scoring of the opera evokes Bach's Concerto for Four Harpsichords, but in another way also "the snow, which falls relentlessly throughout the opera ... bearing witness to the unfolding events. Here time stands still. There is only music, and the movement of children through space" (Glass). In the late 1990s and early 2000s, Glass's lyrical and romantic styles peaked with a variety of projects: operas, theatre and film scores (Martin Scorsese's "Kundun", 1997, Godfrey Reggio's "Naqoyqatsi", 2002, and Stephen Daldry's "The Hours", 2002), a series of five concerts, and three symphonies centered on orchestra-singer and orchestra-chorus interplay. Two symphonies, Symphony No. 5 "Choral" (1999) and Symphony No. 7 "Toltec" (2004), and the song cycle "Songs of Milarepa" (1997) have a meditative theme. The operatic Symphony No. 6 "Plutonian Ode" (2002) for soprano and orchestra was commissioned by the Brucknerhaus, Linz, and Carnegie Hall in celebration of Glass's sixty-fifth birthday, and developed from Glass's collaboration with Allen Ginsberg (poet, piano—Ginsberg, Glass), based on his poem of the same name. Besides writing for the concert hall, Glass continued his ongoing operatic series with adaptions from literary texts: "The Marriages of Zones 3, 4 and 5" ([1997] story-libretto by Doris Lessing), "In the Penal Colony" (2000, after the story by Franz Kafka), and the chamber opera "The Sound of a Voice" (2003, with David Henry Hwang), which features the Pipa, performed by Wu Man at its premiere. Glass also collaborated again with the co-author of "Einstein on the Beach", Robert Wilson, on "Monsters of Grace" (1998), and created a biographic opera on the life of astronomer Galileo Galilei (2001). In the early 2000s, Glass started a series of five concerti with the "Tirol Concerto for Piano and Orchestra" (2000, premiered by Dennis Russell Davies as conductor and soloist), and the "Concerto Fantasy for Two Timpanists and Orchestra" (2000, for the timpanist Jonathan Haas). The "Concerto for Cello and Orchestra" (2001) had its premiere performance in Beijing, featuring cellist Julian Lloyd Webber; it was composed in celebration of his fiftieth birthday. These concertos were followed by the concise and rigorously neo-baroque "Concerto for Harpsichord and Orchestra" (2002), demonstrating in its transparent, chamber orchestral textures Glass's classical technique, evocative in the "improvisatory chords" of its beginning a toccata of Froberger or Frescobaldi, and 18th century music. Two years later, the concerti series continued with "Piano Concerto No. 2: After Lewis and Clark" (2004), composed for the pianist Paul Barnes. The concerto celebrates the pioneers' trek across North America, and the second movement features a duet for piano and Native American flute. With the chamber opera "The Sound of a Voice", Glass's Piano Concerto No. 2 might be regarded as bridging his traditional compositions and his more popular excursions to World Music, also found in "Orion" (also composed in 2004). "Waiting for the Barbarians", an opera from J. M. Coetzee's novel (with the libretto by Christopher Hampton), had its premiere performance in September 2005. Glass defined the work as a "social/political opera", as a critique on the Bush administration's war in Iraq, a "dialogue about political crisis", and an illustration of the "power of art to turn our attention toward the human dimension of history". While the opera's themes are Imperialism, apartheid, and torture, the composer chose an understated approach by using "very simple means, and the orchestration is very clear and very traditional; it's almost classical in sound", as the conductor D. Russell Davies notes. Two months after the premiere of this opera, in November 2005, Glass's Symphony No. 8, commissioned by the Bruckner Orchestra Linz, was premiered at the Brooklyn Academy of Music in New York City. After three symphonies for voices and orchestra, this piece was a return to purely orchestral and abstract composition; like previous works written for the conductor Dennis Russell Davies (the 1992 Concerto Grosso and the 1995 Symphony No. 3), it features extended solo writing. Critic Allan Kozinn described the symphony's chromaticism as more extreme, more fluid, and its themes and textures as continually changing, morphing without repetition, and praised the symphony's "unpredictable orchestration", pointing out the "beautiful flute and harp variation in the melancholy second movement". Alex Ross, remarked that "against all odds, this work succeeds in adding something certifiably new to the overstuffed annals of the classical symphony. ... The musical material is cut from familiar fabric, but it's striking that the composer forgoes the expected bustling conclusion and instead delves into a mood of deepening twilight and unending night." "The Passion of Ramakrishna" (2006), was composed for the Pacific Symphony Orchestra, the Pacific Chorale and the conductor Carl St. Clair. The 45 minutes choral work is based on the writings of Indian Spiritual leader Ramakrishna, which seem "to have genuinely inspired and revived the composer out of his old formulas to write something fresh", as one critic remarked, whereas another noted "The musical style breaks little new ground for Glass, except for the glorious Handelian ending ... the "composer's style ideally fits the devotional text". A cello suite, composed for the cellist Wendy Sutter, "Songs and Poems for Solo Cello" (2005–2007), was equally lauded by critics. It was described by Lisa Hirsch as "a major work, ... a major addition to the cello repertory" and "deeply Romantic in spirit, and at the same time deeply Baroque". Another critic, Anne Midgette of "The Washington Post", noted the suite "maintains an unusual degree of directness and warmth"; she also noted a kinship to a major work by Johann Sebastian Bach: "Digging into the lower registers of the instrument, it takes flight in handfuls of notes, now gentle, now impassioned, variously evoking the minor-mode keening of klezmer music and the interior meditations of Bach's cello suites". Glass himself pointed out "in many ways it owes more to Schubert than to Bach". In 2007, Glass also worked alongside Leonard Cohen on an adaptation of Cohen's poetry collection "Book of Longing". The work, which premiered in June 2007 in Toronto, is a piece for seven instruments and a vocal quartet, and contains recorded spoken word performances by Cohen and imagery from his collection. "Appomattox", an opera surrounding the events at the end of the American Civil War, was commissioned by the San Francisco Opera and premiered on October 5, 2007. As in "Waiting for the Barbarians", Glass collaborated with the writer Christopher Hampton, and as with the preceding opera and Symphony No. 8, the piece was conducted by Glass's long-time collaborator Dennis Russell Davies, who noted "in his recent operas the bass line has taken on an increasing prominence... (an) increasing use of melodic elements in the deep register, in the contrabass, the contrabassoon—he's increasingly using these sounds and these textures can be derived from using these instruments in different combinations. ... He's definitely developed more skill as an orchestrator, in his ability to conceive melodies and harmonic structures for specific instrumental groups. ... what he gives them to play is very organic and idiomatic." Apart from this large-scale opera, Glass added a work to his catalogue of theater music in 2007, and continuing—after a gap of twenty years—to write music for the dramatic work of Samuel Beckett. He provided a "hypnotic" original score for a compilation of Beckett's short plays "Act Without Words I", "Act Without Words II", "Rough for Theatre I" and "Eh Joe", directed by JoAnne Akalaitis and premiered in December 2007. Glass's work for this production was described by "The New York Times" as "icy, repetitive music that comes closest to piercing the heart". 2008 to 2010 Glass continued to work on a series of chamber music pieces which started with "Songs and Poems": the "Four Movements for Two Pianos" (2008, premiered by Dennis Davies and Maki Namekawa in July 2008), a "Sonata for Violin and Piano" composed in "the Brahms tradition" (completed in 2008, premiered by violinist Maria Bachman and pianist Jon Klibonoff in February 2009); a "String sextet" (an adaption of the Symphony No. 3 of 1995 made by Glass's musical director Michael Riesman) followed in 2009. "Pendulum" (2010, a one-movement piece for violin and piano), a second Suite of cello pieces for Wendy Sutter (2011), and "Partita for solo violin" for violinist Tim Fain (2010, first performance of the complete work 2011), are recent entries in the series. Other works for the theater were a score for Euripides' "The Bacchae" (2009, directed by JoAnne Akalaitis), and "Kepler" (2009), yet another operatic biography of a scientist or explorer. The opera is based on the life of 17th century astronomer Johannes Kepler, against the background of the Thirty Years' War, with a libretto compiled from Kepler's texts and poems by his contemporary Andreas Gryphius. It is Glass's first opera in German, and was premiered by the Bruckner Orchestra Linz and Dennis Russell Davies in September 2009. LA Times critic Mark Swed and others described the work as "oratorio-like"; Swed pointed out the work is Glass's "most chromatic, complex, psychological score" and "the orchestra dominates ... I was struck by the muted, glowing colors, the character of many orchestral solos and the poignant emphasis on bass instruments". In 2009 and 2010, Glass returned to the concerto genre. Violin Concerto No. 2 in four movements was commissioned by violinist Robert McDuffie, and subtitled "The American Four Seasons" (2009), as an homage to Vivaldi's set of concertos "Le quattro stagioni". It premiered in December 2009 by the Toronto Symphony Orchestra, and was subsequently performed by the London Philharmonic Orchestra in April 2010. The Double Concerto for Violin and Cello and Orchestra (2010) was composed for soloists Maria Bachmann and Wendy Sutter and also as a ballet score for the Nederlands Dans Theater. Other orchestral projects of 2010 are short orchestral scores for films; to a multimedia presentation based on the novel "Icarus at the Edge of Time" by theoretical physicist Brian Greene, which premiered on June 6, 2010, and the score for the Brazilian film "Nosso Lar" (released in Brazil on September 3, 2010). Glass also donated a short work, "Brazil", to the video game "Chime", which was released on February 3, 2010. In January 2011, Glass performed at the MONA FOMA festival in Hobart, Tasmania. The festival promotes a broad range of art forms, including experimental sound, noise, dance, theatre, visual art, performance and new media. In August 2011, Glass presented a series of music, dance, and theater performances as part of the Days and Nights Festival. Along with the Philip Glass Ensemble, scheduled performers include Molissa Fenley and Dancers, John Moran with Saori Tsukada, as well as a screening of "Dracula" with Glass's score. Glass hopes to present this festival annually, with a focus on art, science, and conservation. Other works completed since 2010 include Symphony No. 9 (2010–2011), Symphony No. 10 (2012), Cello Concerto No. 2 (2012, based on the film score to Naqoyqatsi) as well as String Quartet No. 6 and No. 7. Glass's Ninth Symphony was co-commissioned by the Bruckner Orchestra Linz, the American Composers Orchestra and the Los Angeles Philharmonic Orchestra. The symphony's first performance took place on January 1, 2012, at the Brucknerhaus in Linz, Austria (Dennis Russell Davies conducting the Bruckner Orchestra Linz); the American premiere was on January 31, 2012, (Glass's 75th birthday), at Carnegie Hall (Dennis Russell Davies conducting the American Composers Orchestra), and the West Coast premiere with the Los Angeles Philharmonic under the baton of John Adams on April 5. Glass's Tenth Symphony, written in five movements, was commissioned by the for its 30th anniversary. The symphony's first performance took place on August 9, 2012 at the in Aix-en-Provence under Dennis Russell Davies. The opera "The Perfect American" was composed in 2011 to a commission from Teatro Real Madrid. The libretto is based on a book of the same name by Peter Stephan Jungk and covers the final months of the life of Walt Disney. The world premiere was at the Teatro Real, Madrid, on January 22, 2013 with British baritone Christopher Purves taking the role of Disney. The UK premiere took place on June 1, 2013 in a production by the English National Opera at the London Coliseum. The US premiere took place on March 12, 2017 in a production by Long Beach Opera. His opera "", based on a play by Austrian playwright and novelist Peter Handke, "Die Spuren der Verirrten" (2007), premiered at the in April 2013, conducted by Dennis Russell Davies and directed by David Pountney. On June 28, 2013, Glass's piano piece "Two Movements for Four Pianos" was premiered at the Museum Kunstpalast, performed by Katia and Marielle Labèque, Maki Namekawa and Dennis Russell Davies. In May 2015, Glass's Double Concerto for Two Pianos was premiered by Katia and Marielle Labèque, Gustavo Dudamel and the Los Angeles Philharmonic. Glass published his memoir, "Words Without Music", in 2015. His 11th symphony, commissioned by the Bruckner Orchestra Linz, the Istanbul International Music Festival, and the Queensland Symphony Orchestra, premiered on January 31, 2017, Glass's 80th birthday, at Carnegie Hall, Dennis Russell Davies conducting the Bruckner Orchestra. On September 22, 2017 his Piano Concerto No. 3 was premiered by pianist Simone Dinnerstein with the strings of the chamber orchestra A Far Cry at Jordan Hall at the New England Conservatory of Music, Boston, Massachusetts. Glass's 12th symphony was premiered by the Los Angeles Philharmonic under John Adams at the Walt Disney Concert Hall in Los Angeles on January 10, 2019. In collaboration with stage auteur, performer and co-director (with Kirsty Housley) Phelim McDermott, he composed the score for the new work "Tao of Glass", which premiered at the 2019 Manchester International Festival before touring to the 2020 Perth Festival. Glass describes himself as a "classicist", pointing out he is trained in harmony and counterpoint and studied such composers as Franz Schubert, Johann Sebastian Bach, and Wolfgang Amadeus Mozart with Nadia Boulanger. Aside from composing in the Western classical tradition, his music has ties to rock, ambient music, electronic music, and world music. Early admirers of his minimalism include musicians Brian Eno and David Bowie. In the 1990s, Glass composed the aforementioned symphonies "Low" (1992) and "Heroes" (1996), thematically derived from the Bowie-Eno collaboration albums "Low" and ""Heroes"" composed in late 1970s Berlin. Glass has collaborated with recording artists such as Paul Simon, Suzanne Vega, Mick Jagger, Leonard Cohen, David Byrne, Uakti, Natalie Merchant, S'Express (Glass remixed their track "Hey Music Lover" in 1989) and Aphex Twin (yielding an orchestration of "Icct Hedral" in 1995 on the "Donkey Rhubarb" EP). Glass's compositional influence extends to musicians such as Mike Oldfield (who included parts from Glass's "North Star" in "Platinum"), and bands such as Tangerine Dream and Talking Heads. Glass and his sound designer Kurt Munkacsi produced the American post-punk/new wave band Polyrock (1978 to the mid-1980s), as well as the recording of John Moran's "The Manson Family (An Opera)" in 1991, which featured punk legend Iggy Pop, and a second (unreleased) recording of Moran's work featuring poet Allen Ginsberg. Glass had begun using the Farfisa portable organ out of convenience, and he has used it in concert. It is featured on several recordings including "North Star" and on "Dance No. 1" and "Dance No. 3". In 1970, Glass and Klaus Kertess (owner of the Bykert Gallery) formed a record label named "Chatham Square Productions" (named after the location of the studio of a Philip Glass Ensemble member Dick Landry). In 1993 Glass formed another record label, Point Music; in 1997, Point Music released "Music for Airports", a live, instrumental version of Eno's composition of the same name, by Bang on a Can All-Stars. In 2002, Glass and his producer Kurt Munkacsi and artist Don Christensen founded the Orange Mountain Music company, dedicated to "establishing the recording legacy of Philip Glass" and, to date, have released sixty albums of Glass's music. Glass has composed many film scores, starting with the orchestral score for "Koyaanisqatsi" (1982), and continuing with two biopics, "" (1985, resulting in the String Quartet No. 3) and "Kundun" (1997) about the Dalai Lama, for which he received his first Academy Award nomination. In 1968 he composed and conducted the score for director Harrison Engle's minimalist comedy short, "Railroaded," played by the Philip Glass Ensemble. This was one of his earliest film efforts. The year after scoring "Hamburger Hill" (1987), Glass began a long collaboration with the filmmaker Errol Morris with his music for Morris's celebrated documentaries, including "The Thin Blue Line" (1988) and "A Brief History of Time" (1991). He continued composing for the Qatsi trilogy with the scores for "Powaqqatsi" (1988) and "Naqoyqatsi" (2002). In 1995 he composed the theme for Reggio's short independent film "Evidence". He made a cameo appearance—briefly visible performing at the piano—in Peter Weir's "The Truman Show" (1998), which uses music from "Powaqqatsi", "Anima Mundi" and "Mishima", as well as three original tracks by Glass. In the 1990s, he also composed scores for "Bent" (1997) and the thriller "Candyman" (1992) and its sequel, "" (1995), plus a film adaptation of Joseph Conrad's "The Secret Agent" (1996). In 1999, he finished a new soundtrack for the 1931 film "Dracula". "The Hours" (2002) earned him a second Academy Award nomination, and was followed by another Morris documentary, "The Fog of War" (2003). In the mid-2000s Glass provided the scores to films such as "Secret Window" (2004), "Neverwas" (2005), "The Illusionist" and "Notes on a Scandal", garnering his third Academy Award nomination for the latter. Glass's most recent film scores include "No Reservations" (Glass makes a brief cameo in the film sitting at an outdoor café), "Cassandra's Dream" (2007), "Les Regrets" (2009), "Mr Nice" (2010), the Brazilian film "Nosso Lar" (2010) and "Fantastic Four" (2015, in collaboration with Marco Beltrami). In 2009, Glass composed original theme music for "Transcendent Man", about the life and ideas of Ray Kurzweil by filmmaker Barry Ptolemy. In the 2000s Glass's work from the 1980s again became known to wider public through various media. In 2005 his Concerto for Violin and Orchestra (1987) was featured in the surreal French thriller, "La Moustache", providing a tone intentionally incongruous to the banality of the movie's plot. "Metamorphosis: Metamorphosis One" from "Solo Piano" (1989) was featured in the reimagined "Battlestar Galactica" in the episode "Valley of Darkness" and also in the final episode ("return 0") of "Person of Interest". In 2008, Rockstar Games released "Grand Theft Auto IV" featuring Glass's "Pruit Igoe" (from "Koyaanisqatsi"). "Pruit Igoe" and "Prophecies" (also from "Koyaanisqatsi") were used both in a trailer for "Watchmen" and in the film itself. "Watchmen" also included two other Glass pieces in the score: "Something She Has To Do" from "The Hours" and "Protest" from "Satyagraha", act 2, scene 3. In 2013 Glass contributed a piano piece "Duet" to the Park Chan-wook film "Stoker" which is performed diegetically in the film. In 2017 Glass scored the National Geographic Films documentary "Jane" (a documentary on the life of renowned British primatologist Jane Goodall). Glass's music was featured in two award-winning films by Russian director Andrey Zvyagintsev, "Elena" (2011) and "Leviathan" (2014). For television, Glass composed the theme for "Night Stalker" (2005) and the soundtrack for "Tales from the Loop" (2020). Glass has described himself as "a Jewish-Taoist-Hindu-Toltec-Buddhist", and he is a supporter of the Tibetan independence movement. In 1987, he co-founded the Tibet House US with Columbia University professor Robert Thurman and the actor Richard Gere at the request of the 14th Dalai Lama. Glass is a vegetarian. Glass has four children and one granddaughter. Juliet (b. 1968) and Zachary (b. 1971) are his children from his first marriage, to theater director JoAnne Akalaitis (married 1965, divorced 1980). His second marriage was to Luba Burtyk; the two were later divorced. His third wife, the artist Candy Jernigan, died of liver cancer in 1991, aged 39. He had two sons, Cameron (b. 2002) and Marlowe (b. 2003) with his fourth wife, restaurant manager Holly Critchlow (married in 2001), whom Glass later divorced. Glass lives in New York and in Cape Breton, Nova Scotia. He was romantically involved with cellist Wendy Sutter for approximately five years. his partner was Japanese-born dancer Saori Tsukada. Glass is the first cousin once removed of Ira Glass, host of the radio show "This American Life". Ira interviewed Glass onstage at Chicago's Field Museum; this interview was broadcast on NPR's "Fresh Air". Ira interviewed Glass a second time at a fundraiser for St. Ann's Warehouse; this interview was given away to public radio listeners as a pledge drive thank you gift in 2010. Ira and Glass recorded a version of the composition Glass wrote to accompany his friend Allen Ginsberg's poem "Wichita Vortex Sutra". In an interview, Glass said Franz Schubert—with whom he shares a birthday—is his favorite composer. In June 2012, Glass was featured on the cover of issue No. 79 of "The Fader". In 1978 Sylvère Lotringer conducted a 14-page interview with Glass in Columbia University's philosophy department publication of Semiotext(e) called "Schizo-Culture: The Event, The Book". Glass counts many artists among his friends and collaborators, including visual artists (Richard Serra, Chuck Close, Fredericka Foster), writers (Doris Lessing, David Henry Hwang, Allen Ginsberg), film and theatre directors (including Errol Morris, Robert Wilson, JoAnne Akalaitis, Godfrey Reggio, Paul Schrader, Martin Scorsese, Christopher Hampton, Bernard Rose, and many others), choreographers (Lucinda Childs, Jerome Robbins, Twyla Tharp), and musicians and composers (Ravi Shankar, David Byrne, the conductor Dennis Russell Davies, Foday Musa Suso, Laurie Anderson, Linda Ronstadt, Paul Simon, Pierce Turner, Joan La Barbara, Arthur Russell, David Bowie, Brian Eno, Roberto Carnevale, Patti Smith, Aphex Twin, Lisa Bielawa, Andrew Shapiro, John Moran, Bryce Dessner and Nico Muhly). Among recent collaborators are Glass's fellow New Yorker Woody Allen, Stephen Colbert, and poet and songwriter Leonard Cohen. Justin Davidson of "New York" magazine has criticized Glass, saying, "Glass never had a good idea he didn't flog to death: He repeats the haunting scale 30 mind-numbing times, until it's long past time to go home." Richard Schickel of "Time" criticized Glass's score for "The Hours", saying, "This ultimately proves insufficient to lend meaning to their lives or profundity to a grim and uninvolving film, for which Philip Glass unwittingly provides the perfect score—tuneless, oppressive, droning, painfully self-important." Michael White of "The Daily Telegraph" described Glass' "Violin Concerto No. 2" as being Best Original Score Anthony Asquith Award for Film Music Best Original Score
https://en.wikipedia.org/wiki?curid=24540
Phenotype Phenotype () is the term used in genetics for the composite observable characteristics or traits of an organism. The term covers the organism's morphology or physical form and structure, its developmental processes, its biochemical and physiological properties, its behavior, and the products of behavior. An organism's phenotype results from two basic factors: the expression of an organism's genetic code, or its genotype, and the influence of environmental factors. Both factors may interact, further affecting phenotype. When two or more clearly different phenotypes exist in the same population of a species, the species is called polymorphic. A well-documented example of polymorphism is Labrador Retriever coloring; while the coat color depends on many genes, it is clearly seen in the environment as yellow, black, and brown. Richard Dawkins in 1978 and then again in his 1982 book "The Extended Phenotype" suggested that one can regard bird nests and other built structures such as caddis-fly larvae cases and beaver dams as "extended phenotypes". Wilhelm Johannsen proposed the genotype-phenotype distinction in 1911 to make clear the difference between an organism's heredity and what that heredity produces. The distinction resembles that proposed by August Weismann (1834-1914), who distinguished between germ plasm (heredity) and somatic cells (the body). The genotype-phenotype distinction should not be confused with Francis Crick's central dogma of molecular biology, a statement about the directionality of molecular sequential information flowing from DNA to protein, and not the reverse. Despite its seemingly straightforward definition, the concept of the phenotype has hidden subtleties. It may seem that anything dependent on the genotype is a phenotype, including molecules such as RNA and proteins. Most molecules and structures coded by the genetic material are not visible in the appearance of an organism, yet they are observable (for example by Western blotting) and are thus part of the phenotype; human blood groups are an example. It may seem that this goes beyond the original intentions of the concept with its focus on the (living) organism in itself. Either way, the term phenotype includes inherent traits or characteristics that are observable or traits that can be made visible by some technical procedure. A notable extension to this idea is the presence of "organic molecules" or metabolites that are generated by organisms from chemical reactions of enzymes. The term "phenotype" has sometimes been incorrectly used as a shorthand for phenotypic difference from wild type, yielding the statement that a "mutation has no phenotype". Another extension adds behavior to the phenotype, since behaviors are observable characteristics. Behavioral phenotypes include cognitive, personality, and behavioral patterns. Some behavioral phenotypes may characterize psychiatric disorders or syndromes. Phenotypic variation (due to underlying heritable genetic variation) is a fundamental prerequisite for evolution by natural selection. It is the living organism as a whole that contributes (or not) to the next generation, so natural selection affects the genetic structure of a population indirectly via the contribution of phenotypes. Without phenotypic variation, there would be no evolution by natural selection. The interaction between genotype and phenotype has often been conceptualized by the following relationship: A more nuanced version of the relationship is: Genotypes often have much flexibility in the modification and expression of phenotypes; in many organisms these phenotypes are very different under varying environmental conditions (see ecophenotypic variation). The plant "Hieracium umbellatum" is found growing in two different habitats in Sweden. One habitat is rocky, sea-side cliffs, where the plants are bushy with broad leaves and expanded inflorescences; the other is among sand dunes where the plants grow prostrate with narrow leaves and compact inflorescences. These habitats alternate along the coast of Sweden and the habitat that the seeds of "Hieracium umbellatum" land in, determine the phenotype that grows. An example of random variation in "Drosophila" flies is the number of ommatidia, which may vary (randomly) between left and right eyes in a single individual as much as they do between different genotypes overall, or between clones raised in different environments. The concept of phenotype can be extended to variations below the level of the gene that affect an organism's fitness. For example, silent mutations that do not change the corresponding amino acid sequence of a gene may change the frequency of guanine-cytosine base pairs (GC content). These base pairs have a higher thermal stability ("melting point") than adenine-thymine, a property that might convey, among organisms living in high-temperature environments, a selective advantage on variants enriched in GC content. Richard Dawkins described a phenotype that included all effects that a gene has on its surroundings, including other organisms, as an extended phenotype, arguing that "An animal's behavior tends to maximize the survival of the genes 'for' that behavior, whether or not those genes happen to be in the body of the particular animal performing it." For instance, an organism such as a beaver modifies its environment by building a beaver dam; this can be considered an expression of its genes, just as its incisor teeth are—which it uses to modify its environment. Similarly, when a bird feeds a brood parasite such as a cuckoo, it is unwittingly extending its phenotype; and when genes in an orchid affect orchid bee behavior to increase pollination, or when genes in a peacock affect the copulatory decisions of peahens, again, the phenotype is being extended. Genes are, in Dawkins's view, selected by their phenotypic effects. Other biologists broadly agree that the extended phenotype concept is relevant, but consider that its role is largely explanatory, rather than assisting in the design of experimental tests. Although a phenotype is the ensemble of observable characteristics displayed by an organism, the word "phenome" is sometimes used to refer to a collection of traits, while the simultaneous study of such a collection is referred to as "phenomics". Phenomics is an important field of study because it can be used to figure out which genomic variants affect phenotypes which then can be used to explain things like health, disease, and evolutionary fitness. Phenomics forms a large part of the Human Genome Project Phenomics has widespread applications in the agricultural industry. With an exponentially growing population and inconsistent weather patterns due to global warming, it has become increasingly difficult to cultivate enough crops to support the world's population. Advantageous genomic variations, like drought and heat resistance, can be identified through the use of phenomics to create more durable GMOs. Phenomics is also a crucial stepping stone towards personalized medicine, particularly drug therapy. This application of phenomics has the greatest potential to avoid testing drug therapies that will prove to be ineffective or unsafe. Once the phenomic database has acquired more data, patient phenomic information can be used to select specific drugs tailored to the patient. As the regulation of phenomics develops there is a potential that new knowledge bases will help achieve the promise of personalized medicine and treatment of neuropsychiatric syndromes.
https://en.wikipedia.org/wiki?curid=24543
Photosynthesis Photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organisms' activities. This chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water – hence the name "photosynthesis", from the Greek "phōs" (), "light", and "sunthesis" (), "putting together". In most cases, oxygen is also released as a waste product. Most plants, most algae, and cyanobacteria perform photosynthesis; such organisms are called photoautotrophs. Photosynthesis is largely responsible for producing and maintaining the oxygen content of the Earth's atmosphere, and supplies most of the energy necessary for life on Earth. Although photosynthesis is performed differently by different species, the process always begins when energy from light is absorbed by proteins called reaction centres that contain green chlorophyll pigments. In plants, these proteins are held inside organelles called chloroplasts, which are most abundant in leaf cells, while in bacteria they are embedded in the plasma membrane. In these light-dependent reactions, some energy is used to strip electrons from suitable substances, such as water, producing oxygen gas. The hydrogen freed by the splitting of water is used in the creation of two further compounds that serve as short-term stores of energy, enabling its transfer to drive other reactions: these compounds are reduced nicotinamide adenine dinucleotide phosphate (NADPH) and adenosine triphosphate (ATP), the "energy currency" of cells. In plants, algae and cyanobacteria, long-term energy storage in the form of sugars is produced by a subsequent sequence of reactions called the Calvin cycle; some bacteria use different mechanisms, such as the reverse Krebs cycle, to achieve the same end. In the Calvin cycle, atmospheric carbon dioxide is incorporated into already existing organic carbon compounds, such as ribulose bisphosphate (RuBP). Using the ATP and NADPH produced by the light-dependent reactions, the resulting compounds are then reduced and removed to form further carbohydrates, such as glucose. The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen or hydrogen sulfide, rather than water, as sources of electrons. Cyanobacteria appeared later; the excess oxygen they produced contributed directly to the oxygenation of the Earth, which rendered the evolution of complex life possible. Today, the average rate of energy capture by photosynthesis globally is approximately 130 terawatts, which is about eight times the current power consumption of human civilization. Photosynthetic organisms also convert around 100–115 billion tons (91-104 petagrams) of carbon into biomass per year. Photosynthetic organisms are photoautotrophs, which means that they are able to synthesize food directly from carbon dioxide and water using energy from light. However, not all organisms use carbon dioxide as a source of carbon atoms to carry out photosynthesis; photoheterotrophs use organic compounds, rather than carbon dioxide, as a source of carbon. In plants, algae, and cyanobacteria, photosynthesis releases oxygen. This is called oxygenic photosynthesis and is by far the most common type of photosynthesis used by living organisms. Although there are some differences between oxygenic photosynthesis in plants, algae, and cyanobacteria, the overall process is quite similar in these organisms. There are also many varieties of anoxygenic photosynthesis, used mostly by certain types of bacteria, which consume carbon dioxide but do not release oxygen. Carbon dioxide is converted into sugars in a process called carbon fixation; photosynthesis captures energy from sunlight to convert carbon dioxide into carbohydrate. Carbon fixation is an endothermic redox reaction. In general outline, photosynthesis is the opposite of cellular respiration: while photosynthesis is a process of reduction of carbon dioxide to carbohydrate, cellular respiration is the oxidation of carbohydrate or other nutrients to carbon dioxide. Nutrients used in cellular respiration include carbohydrates, amino acids and fatty acids. These nutrients are oxidized to produce carbon dioxide and water, and to release chemical energy to drive the organism's metabolism. Photosynthesis and cellular respiration are distinct processes, as they take place through different sequences of chemical reactions and in different cellular compartments. The general equation for photosynthesis as first proposed by Cornelis van Niel is therefore: Since water is used as the electron donor in oxygenic photosynthesis, the equation for this process is: This equation emphasizes that water is both a reactant in the light-dependent reaction and a product of the light-independent reaction, but canceling "n" water molecules from each side gives the net equation: Other processes substitute other compounds (such as arsenite) for water in the electron-supply role; for example some microbes use sunlight to oxidize arsenite to arsenate: The equation for this reaction is: Photosynthesis occurs in two stages. In the first stage, "light-dependent reactions" or "light reactions" capture the energy of light and use it to make the energy-storage molecules ATP and NADPH. During the second stage, the "light-independent reactions" use these products to capture and reduce carbon dioxide. Most organisms that utilize oxygenic photosynthesis use visible light for the light-dependent reactions, although at least three use shortwave infrared or, more specifically, far-red radiation. Some organisms employ even more radical variants of photosynthesis. Some archaea use a simpler method that employs a pigment similar to those used for vision in animals. The bacteriorhodopsin changes its configuration in response to sunlight, acting as a proton pump. This produces a proton gradient more directly, which is then converted to chemical energy. The process does not involve carbon dioxide fixation and does not release oxygen, and seems to have evolved separately from the more common types of photosynthesis. In photosynthetic bacteria, the proteins that gather light for photosynthesis are embedded in cell membranes. In its simplest form, this involves the membrane surrounding the cell itself. However, the membrane may be tightly folded into cylindrical sheets called thylakoids, or bunched up into round vesicles called "intracytoplasmic membranes". These structures can fill most of the interior of a cell, giving the membrane a very large surface area and therefore increasing the amount of light that the bacteria can absorb. In plants and algae, photosynthesis takes place in organelles called chloroplasts. A typical plant cell contains about 10 to 100 chloroplasts. The chloroplast is enclosed by a membrane. This membrane is composed of a phospholipid inner membrane, a phospholipid outer membrane, and an intermembrane space. Enclosed by the membrane is an aqueous fluid called the stroma. Embedded within the stroma are stacks of thylakoids (grana), which are the site of photosynthesis. The thylakoids appear as flattened disks. The thylakoid itself is enclosed by the thylakoid membrane, and within the enclosed volume is a lumen or thylakoid space. Embedded in the thylakoid membrane are integral and peripheral membrane protein complexes of the photosynthetic system. Plants absorb light primarily using the pigment chlorophyll. The green part of the light spectrum is not absorbed but is reflected which is the reason that most plants have a green color. Besides chlorophyll, plants also use pigments such as carotenes and xanthophylls. Algae also use chlorophyll, but various other pigments are present, such as phycocyanin, carotenes, and xanthophylls in green algae, phycoerythrin in red algae (rhodophytes) and fucoxanthin in brown algae and diatoms resulting in a wide variety of colors. These pigments are embedded in plants and algae in complexes called antenna proteins. In such proteins, the pigments are arranged to work together. Such a combination of proteins is also called a light-harvesting complex. Although all cells in the green parts of a plant have chloroplasts, the majority of those are found in specially adapted structures called leaves. Certain species adapted to conditions of strong sunlight and aridity, such as many "Euphorbia" and cactus species, have their main photosynthetic organs in their stems. The cells in the interior tissues of a leaf, called the mesophyll, can contain between 450,000 and 800,000 chloroplasts for every square millimeter of leaf. The surface of the leaf is coated with a water-resistant waxy cuticle that protects the leaf from excessive evaporation of water and decreases the absorption of ultraviolet or blue light to reduce heating. The transparent epidermis layer allows light to pass through to the palisade mesophyll cells where most of the photosynthesis takes place. In the light-dependent reactions, one molecule of the pigment chlorophyll absorbs one photon and loses one electron. This electron is passed to a modified form of chlorophyll called pheophytin, which passes the electron to a quinone molecule, starting the flow of electrons down an electron transport chain that leads to the ultimate reduction of NADP to NADPH. In addition, this creates a proton gradient (energy gradient) across the chloroplast membrane, which is used by ATP synthase in the synthesis of ATP. The chlorophyll molecule ultimately regains the electron it lost when a water molecule is split in a process called photolysis, which releases a dioxygen (O2) molecule as a waste product. The overall equation for the light-dependent reactions under the conditions of non-cyclic electron flow in green plants is: Not all wavelengths of light can support photosynthesis. The photosynthetic action spectrum depends on the type of accessory pigments present. For example, in green plants, the action spectrum resembles the absorption spectrum for chlorophylls and carotenoids with absorption peaks in violet-blue and red light. In red algae, the action spectrum is blue-green light, which allows these algae to use the blue end of the spectrum to grow in the deeper waters that filter out the longer wavelengths (red light) used by above ground green plants. The non-absorbed part of the light spectrum is what gives photosynthetic organisms their color (e.g., green plants, red algae, purple bacteria) and is the least effective for photosynthesis in the respective organisms. In plants, light-dependent reactions occur in the thylakoid membranes of the chloroplasts where they drive the synthesis of ATP and NADPH. The light-dependent reactions are of two forms: cyclic and non-cyclic. In the non-cyclic reaction, the photons are captured in the light-harvesting antenna complexes of photosystem II by chlorophyll and other accessory pigments (see diagram at right). The absorption of a photon by the antenna complex frees an electron by a process called photoinduced charge separation. The antenna system is at the core of the chlorophyll molecule of the photosystem II reaction center. That freed electron is transferred to the primary electron-acceptor molecule, pheophytin. As the electrons are shuttled through an electron transport chain (the so-called "Z-scheme" shown in the diagram), it initially functions to generate a chemiosmotic potential by pumping proton cations (H+) across the membrane and into the thylakoid space. An ATP synthase enzyme uses that chemiosmotic potential to make ATP during photophosphorylation, whereas NADPH is a product of the terminal redox reaction in the "Z-scheme". The electron enters a chlorophyll molecule in Photosystem I. There it is further excited by the light absorbed by that photosystem. The electron is then passed along a chain of electron acceptors to which it transfers some of its energy. The energy delivered to the electron acceptors is used to move hydrogen ions across the thylakoid membrane into the lumen. The electron is eventually used to reduce the co-enzyme NADP with a H+ to NADPH (which has functions in the light-independent reaction); at that point, the path of that electron ends. The cyclic reaction is similar to that of the non-cyclic, but differs in that it generates only ATP, and no reduced NADP (NADPH) is created. The cyclic reaction takes place only at photosystem I. Once the electron is displaced from the photosystem, the electron is passed down the electron acceptor molecules and returns to photosystem I, from where it was emitted, hence the name "cyclic reaction". Linear electron transport through a photosystem will leave the reaction center of that photosystem oxidized. Elevating another electron will first require re-reduction of the reaction center. The excited electrons lost from the reaction center (P700) of photosystem I are replaced by transfer from plastocyanin, whose electrons come from electron transport through photosystem II. Photosystem II, as the first step of the "Z-scheme", requires an external source of electrons to reduce its oxidized chlorophyll "a" reaction center, called P680. The source of electrons for photosynthesis in green plants and cyanobacteria is water. Two water molecules are oxidized by four successive charge-separation reactions by photosystem II to yield a molecule of diatomic oxygen and four hydrogen ions. The electrons yielded are transferred to a redox-active tyrosine residue that then reduces the oxidized P680. This resets the ability of P680 to absorb another photon and release another photo-dissociated electron. The oxidation of water is catalyzed in photosystem II by a redox-active structure that contains four manganese ions and a calcium ion; this oxygen-evolving complex binds two water molecules and contains the four oxidizing equivalents that are used to drive the water-oxidizing reaction (Dolai's S-state diagrams). Photosystem II is the only known biological enzyme that carries out this oxidation of water. The hydrogen ions are released in the thylakoid lumen and therefore contribute to the transmembrane chemiosmotic potential that leads to ATP synthesis. Oxygen is a waste product of light-dependent reactions, but the majority of organisms on Earth use oxygen for cellular respiration, including photosynthetic organisms. In the light-independent (or "dark") reactions, the enzyme RuBisCO captures CO2 from the atmosphere and, in a process called the Calvin cycle, it uses the newly formed NADPH and releases three-carbon sugars, which are later combined to form sucrose and starch. The overall equation for the light-independent reactions in green plants is Carbon fixation produces the intermediate three-carbon sugar product, which is then converted into the final carbohydrate products. The simple carbon sugars produced by photosynthesis are then used in the forming of other organic compounds, such as the building material cellulose, the precursors for lipid and amino acid biosynthesis, or as a fuel in cellular respiration. The latter occurs not only in plants but also in animals when the energy from plants is passed through a food chain. The fixation or reduction of carbon dioxide is a process in which carbon dioxide combines with a five-carbon sugar, ribulose 1,5-bisphosphate, to yield two molecules of a three-carbon compound, glycerate 3-phosphate, also known as 3-phosphoglycerate. Glycerate 3-phosphate, in the presence of ATP and NADPH produced during the light-dependent stages, is reduced to glyceraldehyde 3-phosphate. This product is also referred to as 3-phosphoglyceraldehyde (PGAL) or, more generically, as triose phosphate. Most (5 out of 6 molecules) of the glyceraldehyde 3-phosphate produced is used to regenerate ribulose 1,5-bisphosphate so the process can continue. The triose phosphates not thus "recycled" often condense to form hexose phosphates, which ultimately yield sucrose, starch and cellulose. The sugars produced during carbon metabolism yield carbon skeletons that can be used for other metabolic reactions like the production of amino acids and lipids. In hot and dry conditions, plants close their stomata to prevent water loss. Under these conditions, will decrease and oxygen gas, produced by the light reactions of photosynthesis, will increase, causing an increase of photorespiration by the oxygenase activity of ribulose-1,5-bisphosphate carboxylase/oxygenase and decrease in carbon fixation. Some plants have evolved mechanisms to increase the concentration in the leaves under these conditions. Plants that use the C4 carbon fixation process chemically fix carbon dioxide in the cells of the mesophyll by adding it to the three-carbon molecule phosphoenolpyruvate (PEP), a reaction catalyzed by an enzyme called PEP carboxylase, creating the four-carbon organic acid oxaloacetic acid. Oxaloacetic acid or malate synthesized by this process is then translocated to specialized bundle sheath cells where the enzyme RuBisCO and other Calvin cycle enzymes are located, and where released by decarboxylation of the four-carbon acids is then fixed by RuBisCO activity to the three-carbon 3-phosphoglyceric acids. The physical separation of RuBisCO from the oxygen-generating light reactions reduces photorespiration and increases fixation and, thus, the photosynthetic capacity of the leaf. plants can produce more sugar than plants in conditions of high light and temperature. Many important crop plants are plants, including maize, sorghum, sugarcane, and millet. Plants that do not use PEP-carboxylase in carbon fixation are called C3 plants because the primary carboxylation reaction, catalyzed by RuBisCO, produces the three-carbon 3-phosphoglyceric acids directly in the Calvin-Benson cycle. Over 90% of plants use carbon fixation, compared to 3% that use carbon fixation; however, the evolution of in over 60 plant lineages makes it a striking example of convergent evolution. Xerophytes, such as cacti and most succulents, also use PEP carboxylase to capture carbon dioxide in a process called Crassulacean acid metabolism (CAM). In contrast to metabolism, which "spatially" separates the fixation to PEP from the Calvin cycle, CAM "temporally" separates these two processes. CAM plants have a different leaf anatomy from plants, and fix the at night, when their stomata are open. CAM plants store the mostly in the form of malic acid via carboxylation of phosphoenolpyruvate to oxaloacetate, which is then reduced to malate. Decarboxylation of malate during the day releases inside the leaves, thus allowing carbon fixation to 3-phosphoglycerate by RuBisCO. Sixteen thousand species of plants use CAM. Cyanobacteria possess carboxysomes, which increase the concentration of around RuBisCO to increase the rate of photosynthesis. An enzyme, carbonic anhydrase, located within the carboxysome releases CO2 from the dissolved hydrocarbonate ions (HCO). Before the CO2 diffuses out it is quickly sponged up by RuBisCO, which is concentrated within the carboxysomes. HCO ions are made from CO2 outside the cell by another carbonic anhydrase and are actively pumped into the cell by a membrane protein. They cannot cross the membrane as they are charged, and within the cytosol they turn back into CO2 very slowly without the help of carbonic anhydrase. This causes the HCO ions to accumulate within the cell from where they diffuse into the carboxysomes. Pyrenoids in algae and hornworts also act to concentrate around RuBisCO. The overall process of photosynthesis takes place in four stages: Plants usually convert light into chemical energy with a photosynthetic efficiency of 3–6%. Absorbed light that is unconverted is dissipated primarily as heat, with a small fraction (1–2%) re-emitted as chlorophyll fluorescence at longer (redder) wavelengths. This fact allows measurement of the light reaction of photosynthesis by using chlorophyll fluorometers. Actual plants' photosynthetic efficiency varies with the frequency of the light being converted, light intensity, temperature and proportion of carbon dioxide in the atmosphere, and can vary from 0.1% to 8%. By comparison, solar panels convert light into electric energy at an efficiency of approximately 6–20% for mass-produced panels, and above 40% in laboratory devices. The efficiency of both light and dark reactions can be measured but the relationship between the two can be complex. For example, the ATP and NADPH energy molecules, created by the light reaction, can be used for carbon fixation or for photorespiration in C3 plants. Electrons may also flow to other electron sinks. For this reason, it is not uncommon for authors to differentiate between work done under non-photorespiratory conditions and under photorespiratory conditions. Chlorophyll fluorescence of photosystem II can measure the light reaction, and Infrared gas analyzers can measure the dark reaction. It is also possible to investigate both at the same time using an integrated chlorophyll fluorometer and gas exchange system, or by using two separate systems together. Infrared gas analyzers and some moisture sensors are sensitive enough to measure the photosynthetic assimilation of CO2, and of ΔH2O using reliable methods CO2 is commonly measured in μmols/(m2/s), parts per million or volume per million and H2O is commonly measured in mmol/(m2/s) or in mbars. By measuring CO2 assimilation, ΔH2O, leaf temperature, barometric pressure, leaf area, and photosynthetically active radiation or PAR, it becomes possible to estimate, "A" or carbon assimilation, "E" or transpiration, "gs" or stomatal conductance, and Ci or intracellular CO2. However, it is more common to used chlorophyll fluorescence for plant stress measurement, where appropriate, because the most commonly used measuring parameters FV/FM and Y(II) or F/FM' can be made in a few seconds, allowing the measurement of larger plant populations. Gas exchange systems that offer control of CO2 levels, above and below ambient, allow the common practice of measurement of A/Ci curves, at different CO2 levels, to characterize a plant's photosynthetic response. Integrated chlorophyll fluorometer – gas exchange systems allow a more precise measure of photosynthetic response and mechanisms. While standard gas exchange photosynthesis systems can measure Ci, or substomatal CO2 levels, the addition of integrated chlorophyll fluorescence measurements allows a more precise measurement of CC to replace Ci. The estimation of CO2 at the site of carboxylation in the chloroplast, or CC, becomes possible with the measurement of mesophyll conductance or gm using an integrated system. Photosynthesis measurement systems are not designed to directly measure the amount of light absorbed by the leaf. But analysis of chlorophyll-fluorescence, P700- and P515-absorbance and gas exchange measurements reveal detailed information about e.g. the photosystems, quantum efficiency and the CO2 assimilation rates. With some instruments even wavelength-dependency of the photosynthetic efficiency can be analyzed. A phenomenon known as quantum walk increases the efficiency of the energy transport of light significantly. In the photosynthetic cell of an algae, bacterium, or plant, there are light-sensitive molecules called chromophores arranged in an antenna-shaped structure named a photocomplex. When a photon is absorbed by a chromophore, it is converted into a quasiparticle referred to as an exciton, which jumps from chromophore to chromophore towards the reaction center of the photocomplex, a collection of molecules that traps its energy in a chemical form that makes it accessible for the cell's metabolism. The exciton's wave properties enable it to cover a wider area and try out several possible paths simultaneously, allowing it to instantaneously "choose" the most efficient route, where it will have the highest probability of arriving at its destination in the minimum possible time. Because that quantum walking takes place at temperatures far higher than quantum phenomena usually occur, it is only possible over very short distances, due to obstacles in the form of destructive interference that come into play. These obstacles cause the particle to lose its wave properties for an instant before it regains them once again after it is freed from its locked position through a classic "hop". The movement of the electron towards the photo center is therefore covered in a series of conventional hops and quantum walks. Early photosynthetic systems, such as those in green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, and used various other molecules than water as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and sulfur as electron donors. Green nonsulfur bacteria used various amino and other organic acids as an electron donor. Purple nonsulfur bacteria used a variety of nonspecific organic molecules. The use of these molecules is consistent with the geological evidence that Earth's early atmosphere was highly reducing at that time. Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old. More recent studies, reported in March 2018, also suggest that photosynthesis may have begun about 3.4 billion years ago. The main source of oxygen in the Earth's atmosphere derives from oxygenic photosynthesis, and its first appearance is sometimes referred to as the oxygen catastrophe. Geological evidence suggests that oxygenic photosynthesis, such as that in cyanobacteria, became important during the Paleoproterozoic era around 2 billion years ago. Modern photosynthesis in plants and most photosynthetic prokaryotes is oxygenic. Oxygenic photosynthesis uses water as an electron donor, which is oxidized to molecular oxygen () in the photosynthetic reaction center. Several groups of animals have formed symbiotic relationships with photosynthetic algae. These are most common in corals, sponges and sea anemones. It is presumed that this is due to the particularly simple body plans and large surface areas of these animals compared to their volumes. In addition, a few marine mollusks "Elysia viridis" and "Elysia chlorotica" also maintain a symbiotic relationship with chloroplasts they capture from the algae in their diet and then store in their bodies (see Kleptoplasty). This allows the mollusks to survive solely by photosynthesis for several months at a time. Some of the genes from the plant cell nucleus have even been transferred to the slugs, so that the chloroplasts can be supplied with proteins that they need to survive. An even closer form of symbiosis may explain the origin of chloroplasts. Chloroplasts have many similarities with photosynthetic bacteria, including a circular chromosome, prokaryotic-type ribosome, and similar proteins in the photosynthetic reaction center. The endosymbiotic theory suggests that photosynthetic bacteria were acquired (by endocytosis) by early eukaryotic cells to form the first plant cells. Therefore, chloroplasts may be photosynthetic bacteria that adapted to life inside plant cells. Like mitochondria, chloroplasts possess their own DNA, separate from the nuclear DNA of their plant host cells and the genes in this chloroplast DNA resemble those found in cyanobacteria. DNA in chloroplasts codes for redox proteins such as those found in the photosynthetic reaction centers. The CoRR Hypothesis proposes that this co-location of genes with their gene products is required for redox regulation of gene expression, and accounts for the persistence of DNA in bioenergetic organelles. Symbiotic and kleptoplastic organisms excluded: Except for the euglenids, all of them belongs to the Diaphoretickes. Archaeplastida and the photosynthetic Paulinella got their plastids through primary endosymbiosis in two separate events by engulfing a cyanobacterium. The plastids in all the other groups have either a red or green algal origin, and are referred to as the "red lineages" and the "green lineages". While able to perform photosynthesis, many of them are mixotrophs and practice heterotrophy to various degrees. The biochemical capacity to use water as the source for electrons in photosynthesis evolved once, in a common ancestor of extant cyanobacteria (formerly called blue-green algae), which are the only prokaryotes performing oxygenic photosynthesis. The geological record indicates that this transforming event took place early in Earth's history, at least 2450–2320 million years ago (Ma), and, it is speculated, much earlier. Because the Earth's atmosphere contained almost no oxygen during the estimated development of photosynthesis, it is believed that the first photosynthetic cyanobacteria did not generate oxygen. Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma, but the question of when oxygenic photosynthesis evolved is still unanswered. A clear paleontological window on cyanobacterial evolution opened about 2000 Ma, revealing an already-diverse biota of Cyanobacteria. Cyanobacteria remained the principal primary producers of oxygen throughout the Proterozoic Eon (2500–543 Ma), in part because the redox structure of the oceans favored photoautotrophs capable of nitrogen fixation. Green algae joined cyanobacteria as the major primary producers of oxygen on continental shelves near the end of the Proterozoic, but it was only with the Mesozoic (251–66 Ma) radiations of dinoflagellates, coccolithophorids, and diatoms did the primary production of oxygen in marine shelf waters take modern form. Cyanobacteria remain critical to marine ecosystems as primary producers of oxygen in oceanic gyres, as agents of biological nitrogen fixation, and, in modified form, as the plastids of marine algae. Although some of the steps in photosynthesis are still not completely understood, the overall photosynthetic equation has been known since the 19th century. Jan van Helmont began the research of the process in the mid-17th century when he carefully measured the mass of the soil used by a plant and the mass of the plant as it grew. After noticing that the soil mass changed very little, he hypothesized that the mass of the growing plant must come from the water, the only substance he added to the potted plant. His hypothesis was partially accurate – much of the gained mass also comes from carbon dioxide as well as water. However, this was a signaling point to the idea that the bulk of a plant's biomass comes from the inputs of photosynthesis, not the soil itself. Joseph Priestley, a chemist and minister, discovered that, when he isolated a volume of air under an inverted jar, and burned a candle in it (which gave off CO2), the candle would burn out very quickly, much before it ran out of wax. He further discovered that a mouse could similarly "injure" air. He then showed that the air that had been "injured" by the candle and the mouse could be restored by a plant. In 1778, Jan Ingenhousz, repeated Priestley's experiments. He discovered that it was the influence of sunlight on the plant that could cause it to revive a mouse in a matter of hours. In 1796, Jean Senebier, a Swiss pastor, botanist, and naturalist, demonstrated that green plants consume carbon dioxide and release oxygen under the influence of light. Soon afterward, Nicolas-Théodore de Saussure showed that the increase in mass of the plant as it grows could not be due only to uptake of CO2 but also to the incorporation of water. Thus, the basic reaction by which photosynthesis is used to produce food (such as glucose) was outlined. Cornelis Van Niel made key discoveries explaining the chemistry of photosynthesis. By studying purple sulfur bacteria and green bacteria he was the first to demonstrate that photosynthesis is a light-dependent redox reaction, in which hydrogen reduces (donates its – electron to) carbon dioxide. Robert Emerson discovered two light reactions by testing plant productivity using different wavelengths of light. With the red alone, the light reactions were suppressed. When blue and red were combined, the output was much more substantial. Thus, there were two photosystems, one absorbing up to 600 nm wavelengths, the other up to 700 nm. The former is known as PSII, the latter is PSI. PSI contains only chlorophyll "a", PSII contains primarily chlorophyll "a" with most of the available chlorophyll "b", among other pigment. These include phycobilins, which are the red and blue pigments of red and blue algae respectively, and fucoxanthol for brown algae and diatoms. The process is most productive when the absorption of quanta are equal in both the PSII and PSI, assuring that input energy from the antenna complex is divided between the PSI and PSII system, which in turn powers the photochemistry. Robert Hill thought that a complex of reactions consisting of an intermediate to cytochrome b6 (now a plastoquinone), another is from cytochrome f to a step in the carbohydrate-generating mechanisms. These are linked by plastoquinone, which does require energy to reduce cytochrome f for it is a sufficient reductant. Further experiments to prove that the oxygen developed during the photosynthesis of green plants came from water, were performed by Hill in 1937 and 1939. He showed that isolated chloroplasts give off oxygen in the presence of unnatural reducing agents like iron oxalate, ferricyanide or benzoquinone after exposure to light. The Hill reaction is as follows: where A is the electron acceptor. Therefore, in light, the electron acceptor is reduced and oxygen is evolved. Samuel Ruben and Martin Kamen used radioactive isotopes to determine that the oxygen liberated in photosynthesis came from the water. Melvin Calvin and Andrew Benson, along with James Bassham, elucidated the path of carbon assimilation (the photosynthetic carbon reduction cycle) in plants. The carbon reduction cycle is known as the Calvin cycle, which ignores the contribution of Bassham and Benson. Many scientists refer to the cycle as the Calvin-Benson Cycle, Benson-Calvin, and some even call it the Calvin-Benson-Bassham (or CBB) Cycle. Nobel Prize-winning scientist Rudolph A. Marcus was able to discover the function and significance of the electron transport chain. Otto Heinrich Warburg and Dean Burk discovered the I-quantum photosynthesis reaction that splits the CO2, activated by the respiration. In 1950, first experimental evidence for the existence of photophosphorylation "in vivo" was presented by Otto Kandler using intact "Chlorella" cells and interpreting his findings as light-dependent ATP formation. In 1954, Daniel I. Arnon et al. discovered photophosphorylation "in vitro" in isolated chloroplasts with the help of P32. Louis N.M. Duysens and Jan Amesz discovered that chlorophyll a will absorb one light, oxidize cytochrome f, chlorophyll a (and other pigments) will absorb another light, but will reduce this same oxidized cytochrome, stating the two light reactions are in series. In 1893, Charles Reid Barnes proposed two terms, "photosyntax" and "photosynthesis", for the biological process of "synthesis of complex carbon compounds out of carbonic acid, in the presence of chlorophyll, under the influence of light". Over time, the term "photosynthesis" came into common usage as the term of choice. Later discovery of anoxygenic photosynthetic bacteria and photophosphorylation necessitated redefinition of the term. After WWII at late 1940 at the University of California, Berkeley, the details of photosynthetic carbon metabolism were sorted out by the chemists Melvin Calvin, Andrew Benson, James Bassham and a score of students and researchers utilizing the carbon-14 isotope and paper chromatography techniques. The pathway of CO2 fixation by the algae "Chlorella" in a fraction of a second in light resulted in a 3 carbon molecule called phosphoglyceric acid (PGA). For that original and ground-breaking work, a Nobel Prize in Chemistry was awarded to Melvin Calvin in 1961. In parallel, plant physiologists studied leaf gas exchanges using the new method of infrared gas analysis and a leaf chamber where the net photosynthetic rates ranged from 10 to 13 μmol CO2·m−2·s−1, with the conclusion that all terrestrial plants having the same photosynthetic capacities that were light saturated at less than 50% of sunlight. Later in 1958–1963 at Cornell University, field grown maize was reported to have much greater leaf photosynthetic rates of 40 μmol CO2·m−2·s−1 and was not saturated at near full sunlight. This higher rate in maize was almost double those observed in other species such as wheat and soybean, indicating that large differences in photosynthesis exist among higher plants. At the University of Arizona, detailed gas exchange research on more than 15 species of monocot and dicot uncovered for the first time that differences in leaf anatomy are crucial factors in differentiating photosynthetic capacities among species. In tropical grasses, including maize, sorghum, sugarcane, Bermuda grass and in the dicot amaranthus, leaf photosynthetic rates were around 38−40 μmol CO2·m−2·s−1, and the leaves have two types of green cells, i. e. outer layer of mesophyll cells surrounding a tightly packed cholorophyllous vascular bundle sheath cells. This type of anatomy was termed Kranz anatomy in the 19th century by the botanist Gottlieb Haberlandt while studying leaf anatomy of sugarcane. Plant species with the greatest photosynthetic rates and Kranz anatomy showed no apparent photorespiration, very low CO2 compensation point, high optimum temperature, high stomatal resistances and lower mesophyll resistances for gas diffusion and rates never saturated at full sun light. The research at Arizona was designated Citation Classic by the ISI 1986. These species was later termed C4 plants as the first stable compound of CO2 fixation in light has 4 carbon as malate and aspartate. Other species that lack Kranz anatomy were termed C3 type such as cotton and sunflower, as the first stable carbon compound is the 3-carbon PGA. At 1000 ppm CO2 in measuring air, both the C3 and C4 plants had similar leaf photosynthetic rates around 60 μmol CO2·m−2·s−1 indicating the suppression of photorespiration in C3 plants. There are three main factors affecting photosynthesis and several corollary factors. The three main are: Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis. The process of photosynthesis provides the main input of free energy into the biosphere, and is one of four main ways in which radiation is important for plant life. The radiation climate within plant communities is extremely variable, with both time and space. In the early 20th century, Frederick Blackman and Gabrielle Matthaei investigated the effects of light intensity (irradiance) and temperature on the rate of carbon assimilation. These two experiments illustrate several important points: First, it is known that, in general, photochemical reactions are not affected by temperature. However, these experiments clearly show that temperature affects the rate of carbon assimilation, so there must be two sets of reactions in the full process of carbon assimilation. These are the light-dependent 'photochemical' temperature-independent stage, and the light-independent, temperature-dependent stage. Second, Blackman's experiments illustrate the concept of limiting factors. Another limiting factor is the wavelength of light. Cyanobacteria, which reside several meters underwater, cannot receive the correct wavelengths required to cause photoinduced charge separation in conventional photosynthetic pigments. To combat this problem, a series of proteins with different pigments surround the reaction center. This unit is called a phycobilisome. As carbon dioxide concentrations rise, the rate at which sugars are made by the light-independent reactions increases until limited by other factors. RuBisCO, the enzyme that captures carbon dioxide in the light-independent reactions, has a binding affinity for both carbon dioxide and oxygen. When the concentration of carbon dioxide is high, RuBisCO will fix carbon dioxide. However, if the carbon dioxide concentration is low, RuBisCO will bind oxygen instead of carbon dioxide. This process, called photorespiration, uses energy, but does not produce sugars. RuBisCO oxygenase activity is disadvantageous to plants for several reasons: The salvaging pathway for the products of RuBisCO oxygenase activity is more commonly known as photorespiration, since it is characterized by light-dependent oxygen consumption and the release of carbon dioxide.
https://en.wikipedia.org/wiki?curid=24544
Pierre-Auguste Renoir Pierre-Auguste Renoir, commonly known as Auguste Renoir ( ; ; 25 February 1841 – 3 December 1919), was a French artist who was a leading painter in the development of the Impressionist style. As a celebrator of beauty and especially feminine sensuality, it has been said that "Renoir is the final representative of a tradition which runs directly from Rubens to Watteau." He was the father of actor Pierre Renoir (1885–1952), filmmaker Jean Renoir (1894–1979) and ceramic artist Claude Renoir (1901–1969). He was the grandfather of the filmmaker Claude Renoir (1913–1993), son of Pierre. Pierre-Auguste Renoir was born in Limoges, Haute-Vienne, France, in 1841. His father, Léonard Renoir, was a tailor of modest means, so in 1844, Renoir's family moved to Paris in search of more favorable prospects. The location of their home, in rue d’Argenteuil in central Paris, placed Renoir in proximity to the Louvre. Although the young Renoir had a natural proclivity for drawing, he exhibited a greater talent for singing. His talent was encouraged by his teacher, Charles Gounod, who was the choir-master at the Church of St Roch at the time. However, due to the family's financial circumstances, Renoir had to discontinue his music lessons and leave school at the age of thirteen to pursue an apprenticeship at a porcelain factory. Although Renoir displayed a talent for his work, he frequently tired of the subject matter and sought refuge in the galleries of the Louvre. The owner of the factory recognized his apprentice's talent and communicated this to Renoir's family. Following this, Renoir started taking lessons to prepare for entry into Ecole des Beaux Arts. When the porcelain factory adopted mechanical reproduction processes in 1858, Renoir was forced to find other means to support his learning. Before he enrolled in art school, he also painted hangings for overseas missionaries and decorations on fans. In 1862, he began studying art under Charles Gleyre in Paris. There he met Alfred Sisley, Frédéric Bazille, and Claude Monet. At times, during the 1860s, he did not have enough money to buy paint. Renoir had his first success at the Salon of 1868 with his painting "Lise with a Parasol" (1867), which depicted Lise Tréhot, his lover at the time. Although Renoir first started exhibiting paintings at the Paris Salon in 1864, recognition was slow in coming, partly as a result of the turmoil of the Franco-Prussian War. During the Paris Commune in 1871, while Renoir painted on the banks of the Seine River, some Communards thought he was a spy and were about to throw him into the river, when a leader of the "Commune", Raoul Rigault, recognized Renoir as the man who had protected him on an earlier occasion. In 1874, a ten-year friendship with Jules Le Cœur and his family ended, and Renoir lost not only the valuable support gained by the association but also a generous welcome to stay on their property near Fontainebleau and its scenic forest. This loss of a favorite painting location resulted in a distinct change of subjects. Renoir was inspired by the style and subject matter of previous modern painters Camille Pissarro and Edouard Manet. After a series of rejections by the Salon juries, he joined forces with Monet, Sisley, Pissarro, and several other artists to mount the first Impressionist exhibition in April 1874, in which Renoir displayed six paintings. Although the critical response to the exhibition was largely unfavorable, Renoir's work was comparatively well received. That same year, two of his works were shown with Durand-Ruel in London. Hoping to secure a livelihood by attracting portrait commissions, Renoir displayed mostly portraits at the second Impressionist exhibition in 1876. He contributed a more diverse range of paintings the next year when the group presented its third exhibition; they included "Dance at Le Moulin de la Galette" and "The Swing". Renoir did not exhibit in the fourth or fifth Impressionist exhibitions, and instead resumed submitting his works to the Salon. By the end of the 1870s, particularly after the success of his painting "Mme Charpentier and her Children" (1878) at the Salon of 1879, Renoir was a successful and fashionable painter. In 1881, he traveled to Algeria, a country he associated with Eugène Delacroix, then to Madrid, to see the work of Diego Velázquez. Following that, he traveled to Italy to see Titian's masterpieces in Florence and the paintings of Raphael in Rome. On 15 January 1882, Renoir met the composer Richard Wagner at his home in Palermo, Sicily. Renoir painted Wagner's portrait in just thirty-five minutes. In the same year, after contracting pneumonia which permanently damaged his respiratory system, Renoir convalesced for six weeks in Algeria. In 1883, Renoir spent the summer in Guernsey, one of the islands in the English Channel with a varied landscape of beaches, cliffs, and bays, where he created fifteen paintings in little over a month. Most of these feature "Moulin Huet", a bay in Saint Martin's, Guernsey. These paintings were the subject of a set of commemorative postage stamps issued by the Bailiwick of Guernsey in 1983. While living and working in Montmartre, Renoir employed Suzanne Valadon as a model, who posed for him ("The Large Bathers", 1884–87; "Dance at Bougival", 1883) and many of his fellow painters; during that time she studied their techniques and eventually became one of the leading painters of the day. In 1887, the year when Queen Victoria celebrated her Golden Jubilee, and upon the request of the queen's associate, Phillip Richbourg, Renoir donated several paintings to the "French Impressionist Paintings" catalog as a token of his loyalty. In 1890, he married Aline Victorine Charigot, a dressmaker twenty years his junior, who, along with a number of the artist's friends, had already served as a model for "Le Déjeuner des canotiers" ("Luncheon of the Boating Party" – she is the woman on the left playing with the dog) in 1881, and with whom he had already had a child, Pierre, in 1885. After his marriage, Renoir painted many scenes of his wife and daily family life including their children and their nurse, Aline's cousin Gabrielle Renard. The Renoirs had three sons: Pierre Renoir (1885-1952), who became a stage and film actor; Jean Renoir (1894-1979), who became a filmmaker of note; and Claude Renoir (1901-1969), who became a ceramic artist. Around 1892, Renoir developed rheumatoid arthritis. In 1907, he moved to the warmer climate of "Les Collettes," a farm at Cagnes-sur-Mer, close to the Mediterranean coast. Renoir painted during the last twenty years of his life even after his arthritis severely limited his mobility. He developed progressive deformities in his hands and ankylosis of his right shoulder, requiring him to change his painting technique. It has often been reported that in the advanced stages of his arthritis, he painted by having a brush strapped to his paralyzed fingers, but this is erroneous; Renoir remained able to grasp a brush, although he required an assistant to place it in his hand. The wrapping of his hands with bandages, apparent in late photographs of the artist, served to prevent skin irritation. In 1919, Renoir visited the Louvre to see his paintings hanging with those of the old masters. During this period, he created sculptures by cooperating with a young artist, Richard Guino, who worked the clay. Due to his limited joint mobility, Renoir also used a moving canvas, or picture roll, to facilitate painting large works. Renoir's portrait of Austrian actress Tilla Durieux (1914) contains playful flecks of vibrant color on her shawl that offset the classical pose of the actress and highlight Renoir's skill just five years before his death. Renoir died in the village of Cagnes-sur-Mer, Provence-Alpes-Côte d'Azur, on 3 December 1919. Pierre-Auguste Renoir's great-grandson, Alexandre Renoir, has also become a professional artist. In 2018, the Monthaven Arts and Cultural Center in Hendersonville, Tennessee hosted an exhibition of Alexandre's works titled "Beauty Remains." The exhibition title comes from a famous quote by Pierre-Auguste who, when asked why he continued to paint with his painful arthritis in his advanced years, once said "The pain passes, but the beauty remains." Renoir's paintings are notable for their vibrant light and saturated color, most often focusing on people in intimate and candid compositions. The female nude was one of his primary subjects. However, in 1876, a reviewer in Le Figaro wrote "Try to explain to Monsieur Renoir that a woman's torso is not a mass of decomposing flesh with those purplish green stains that denote a state of complete putrefaction in a corpse." Yet in characteristic Impressionist style, Renoir suggested the details of a scene through freely brushed touches of colour, so that his figures softly fuse with one another and their surroundings. His initial paintings show the influence of the colorism of Eugène Delacroix and the luminosity of Camille Corot. He also admired the realism of Gustave Courbet and Édouard Manet, and his early work resembles theirs in his use of black as a color. Renoir admired Edgar Degas' sense of movement. Other painters Renoir greatly admired were the 18th-century masters François Boucher and Jean-Honoré Fragonard. A fine example of Renoir's early work and evidence of the influence of Courbet's realism, is "Diana", 1867. Ostensibly a mythological subject, the painting is a naturalistic studio work; the figure carefully observed, solidly modeled and superimposed upon a contrived landscape. If the work is a "student" piece, Renoir's heightened personal response to female sensuality is present. The model was Lise Tréhot, the artist's mistress at that time, and inspiration for a number of paintings. In the late 1860s, through the practice of painting light and water "en plein air" (outdoors), he and his friend Claude Monet discovered that the color of shadows is not brown or black, but the reflected color of the objects surrounding them, an effect known today as diffuse reflection. Several pairs of paintings exist in which Renoir and Monet worked side-by-side, depicting the same scenes ("La Grenouillère", 1869). One of the best known Impressionist works is Renoir's 1876 "Dance at Le Moulin de la Galette (Bal du moulin de la Galette)". The painting depicts an open-air scene, crowded with people at a popular dance garden on the "Butte Montmartre" close to where he lived. The works of his early maturity were typically Impressionist snapshots of real life, full of sparkling color and light. By the mid-1880s, however, he had broken with the movement to apply a more disciplined formal technique to portraits and figure paintings, particularly of women. It was a trip to Italy in 1881 when he saw works by Raphael and other Renaissance masters, that convinced him that he was on the wrong path, and for the next several years he painted in a more severe style in an attempt to return to classicism. Concentrating on his drawing and emphasizing the outlines of figures, he painted works such as "Blonde Bather" (1881 and 1882) and "The Large Bathers" (1884–87; Philadelphia Museum of Art) during what is sometimes called his "Ingres period". After 1890 he changed direction again. To dissolve outlines, as in his earlier work, he returned to thinly brushed color. From this period onward he concentrated on monumental nudes and domestic scenes, fine examples of which are "Girls at the Piano", 1892, and "Grandes Baigneuses", 1887. The latter painting is the most typical and successful of Renoir's late, abundantly fleshed nudes. A prolific artist, he created several thousand paintings. The warm sensuality of Renoir's style made his paintings some of the most well-known and frequently reproduced works in the history of art. The single largest collection of his works—181 paintings in all—is at the Barnes Foundation, in Philadelphia. A five-volume "catalogue raisonné" of Renoir's works (with one supplement) was published by Bernheim-Jeune between 1983 and 2014. Bernheim-Jeune is the only surviving major art dealer that was used by Renoir. The Wildenstein Institute is preparing, but has not yet published, a critical catalogue of Renoir's work. A disagreement between these two organizations concerning an unsigned work in Picton Castle was at the centre of the second episode of the fourth season of the television series "Fake or Fortune". In 1919, Ambroise Vollard, a renowned art dealer, published a book on the life and work of Renoir, "La Vie et l'Œuvre de Pierre-Auguste Renoir", in an edition of 1000 copies. In 1986, Vollard's heirs started reprinting the copper plates, generally, etchings with hand applied watercolor. These prints are signed by Renoir in the plate and are embossed "Vollard" in the lower margin. They are not numbered, dated or signed in pencil. A small version of "Bal du moulin de la Galette" sold for $78.1 million May 17, 1990 at Sotheby's New York. In 2012, Renoir's "Paysage Bords de Seine" was offered for sale at auction but the painting was discovered to have been stolen from the Baltimore Museum of Art in 1951. The sale was cancelled. On December 7, 2019 the Alberta Symphony Orchestra presented a Tribute to Renoir at Triffo Theater in Edmonton, Alberta, Canada, under the direction of pianist and conductor Emilio De Mercato, for the 100th anniversary of the death of Renoir.
https://en.wikipedia.org/wiki?curid=24546
Poll tax A poll tax, also known as head tax or capitation, is a tax levied as a fixed sum on every liable individual. Head taxes were important sources of revenue for many governments from ancient times until the 19th century. In the United Kingdom, poll taxes were levied by the governments of John of Gaunt in the 14th century, Charles II in the 17th and Margaret Thatcher in the 20th century. In the United States, voting poll taxes (whose payment was a precondition to voting in an election) have been used to disenfranchise impoverished and minority voters (especially under Reconstruction). Poll taxes are considered very regressive taxes, are usually very unpopular and have been implicated in many uprisings. The word "poll" is an archaic term for "head" or "top of the head". The sense of "counting heads" is found in phrases like polling place and opinion poll. As prescribed in Exodus (30: 11-16) Jewish law imposed a poll tax of half-shekel, payable by every man above the age of twenty ("the rich shall not pay more and the poor shall not pay less"). The money was designated for the Tabernacle in the Exodus narrative and later for the upkeep of the Temple of Jerusalem. Priests, women, slaves and minors were exempted, although they could offer it voluntarily. Payment by Samaritans or Gentiles was rejected. It was collected yearly during the month of Adar, both at the Temple and at special collection bureaux in the provinces. Zakat al-Fitr is an obligatory charity that must be given by every Muslim (or their guardian) near the end of every Ramadan. Muslims in dire poverty are exempt from it. The amount is 2 kg of wheat or barley, or its cash equivalent. Zakat al-Fitr is to be given to the poor. Jizya was a poll tax imposed under Islamic law on non-Muslims permanently residing in a Muslim state as part of their "dhimmi" status. The tax is levied on free-born abled-bodied men of military age. The indigent were exempt, as well as slaves, women, children, the old, the sick, monks and hermits. Several rationales for the jizya have been advanced. They include the argument that jizya was a fee in exchange for the dhimma (permission to practice one's faith, enjoy communal autonomy, and to be entitled to Muslim protection from outside aggression), and the argument that imposition of jizya on non-Muslims is similar to the imposition of zakat (one of the Five Pillars of Islam, an obligatory wealth tax paid on certain assets which are not used productively for a period of a year) on Muslims. Although jizya is often called a poll tax, its assessment and collection was commonly qualified by income. For instance, Amr ibn al-As, after conquering Egypt, set up a census to measure the population for the jizya, and thus the total expected jizya revenue for the whole province, but organized the actual collection by partitioning the population into wealth classes, so that the rich paid more and the poor less jizya of that total sum. Elsewhere, it is reported customary to partition into three classes, e.g. 48 dirhams for the rich, 24 for middle class and 12 for the poor. In 1855, the Ottoman Empire abolished the jizya tax, as part of reforms to equalize the status of Muslims and non-Muslims. It was replaced by a military-exemption tax on non-Muslims, the Bedel-i Askeri. The Chinese head tax was a fixed fee charged to each Chinese person entering Canada. The head tax was first levied after the Canadian parliament passed the Chinese Immigration Act of 1885 and was meant to discourage Chinese people from entering Canada after the completion of the Canadian Pacific Railway. The tax was abolished by the Chinese Immigration Act of 1923, which stopped all Chinese immigration except for business people, clergy, educators, students, and other categories. The poll tax was essentially a lay subsidy, a tax on the movable property of most of the population, to help fund war. It had first been levied in 1275 and continued under different names until the 17th century. People were taxed a percentage of the assessed value of their movable goods. That percentage varied from year to year and place to place, and which goods could be taxed differed between urban and rural locations. Churchmen were exempt, as were the poor, workers in the Royal Mint, inhabitants of the Cinque Ports, tin workers in Cornwall and Devon, and those who lived in the Palatinate counties of Cheshire and Durham. The Hilary Parliament, held between January and March 1377, levied a poll tax in 1377 to finance the war against France at the request of John of Gaunt who, since King Edward III was mortally sick, was the de facto head of government at the time. This tax covered almost 60% of the population, far more than lay subsidies had earlier. It was levied two more times, in 1379 and 1381. Each time the taxation basis was slightly different. In 1377, every lay person over the age of 14 years who was not a beggar had to pay a groat (4d) to the Crown. By 1379 that had been graded by social class, with the lower age limit changed to 16, and to 15 two years later. The levy of 1381 operated under a combination of both flat rate and graduated assessments. The minimum amount payable was set at 4d, however tax collectors had to account for a 12d a head mean assessment. Payments were therefore variable; the poorest would theoretically pay the lowest rate, with the deficit being met by a higher payment from those able to afford it. The 1381 tax has been credited as one of the main reasons behind the Peasants' Revolt in that year, due in part to attempts to restore feudal conditions in rural areas. The poll tax was resurrected during the 17th century, usually related to a military emergency. It was imposed by Charles I in 1641 to finance the raising of the army against the Scottish and Irish uprisings. With the Restoration of Charles II in 1660, the Convention Parliament of 1660 instituted a poll tax to finance the disbanding of the New Model Army (pay arrears, etc.) (12 Charles II c.9). The poll tax was assessed according to "rank", e.g. dukes paid £100, earls £60, knights £20, esquires £10. Eldest sons paid 2/3rds of their father's rank, widows paid a third of their late husband's rank. The members of the livery companies paid according to company's rank (e.g. masters of first-tier guilds like the Mercers paid £10, whereas masters of fifth-tier guilds, like the Clerks, paid 5 shillings). Professionals also paid differing rates, e.g. physicians (£10), judges (£20), advocates (£5), attorneys (£3), and so on. Anyone with property (land, etc.) paid 40 shillings per £100 earned, anyone over the age of 16 and unmarried paid twelvepence and everyone else over 16 paid sixpence. The poll tax was imposed again by William III and Mary II in 1689 (1 Will. & Mar. c.13), reassessed in 1690 adjusting rank for fortune, and then again in 1691 back to rank irrespective of fortune. The poll tax was imposed again in 1692, and one final time in 1698 (the last poll tax in England until the 20th century). A poll tax was imposed on Scotland between 1694 and 1699. As the greater weight of the 17th century poll taxes fell primarily upon the wealthy and powerful, it was not too unpopular. There were grumblings within the taxed ranks about lack of differentiation by income within ranks. Ultimately, it was the inefficiency of their collection (what they brought in routinely fell far short of expected revenues) that prompted the government to abandon the poll tax after 1698. Far more controversial was the hearth tax introduced in 1662 (13 & 14 Charles II c.10), which imposed a hefty two shillings on every hearth in a family dwelling, which was easier to count than persons. Heavier, more permanent and more regressive than the poll tax proper, the intrusive entry of tax inspectors into private homes to count hearths was a very sore point, and it was promptly repealed with the Glorious Revolution in 1689. It was replaced with a "window tax" in 1695 since inspectors could count windows from outside homes. The Community Charge, popularly dubbed the "poll tax", was a tax to fund local government, instituted in 1989 by the government of Margaret Thatcher. It replaced the rates that were based on the notional rental value of a house. The abolition of rates was in the Conservative Party manifesto for the 1979 general election; the replacement was proposed in the Green Paper of 1986, "Paying for Local Government" based on ideas developed by Dr. Madsen Pirie and Douglas Mason of the Adam Smith Institute. It was a fixed tax per adult resident, but there was a reduction for those with lower household income. Each person was to pay for the services provided in their community. This proposal was contained in the Conservative Party manifesto for the 1987 general election. The new tax replaced the rates in Scotland from the start of the 1989/90 financial year and in England and Wales from the start of the 1990/91 financial year. The system was very unpopular since many thought it shifted the tax burden from the rich to the poor, as it was based on the number of occupants living in a house, rather than on the estimated market value of the house. Many tax rates set by local councils proved to be much higher than earlier predictions since the councils realised that not they but the central government would be blamed for the tax, which led to resentment, even among some who had supported the introduction of it. The tax in different boroughs differed because local taxes paid by businesses varied and grants by central government to local authorities sometimes varied capriciously. Mass protests were called by the All Britain Anti-Poll Tax Federation with which the vast majority of local Anti-Poll Tax Unions (APTUs) were affiliated. In Scotland, the APTUs called for mass nonpayment, which rapidly gathered widespread support and spread as far as England and Wales even though no-payment meant that people could be prosecuted. In some areas, 30% of former ratepayers defaulted. While owner-occupiers were easy to tax, nonpayers who regularly changed accommodation were almost impossible to trace. The cost of collecting the tax rose steeply, and its returns fell. Unrest grew and resulted in a number of poll tax riots. The most serious was in a protest at Trafalgar Square, London, on 31 March 1990, of more than 200,000 protesters. Terry Fields, Labour MP for Liverpool Broadgreen, was jailed for 60 days for his refusal to pay the poll tax. This unrest was a factor in the fall of Thatcher. Her successor, John Major, replaced the Community Charge with the Council Tax, similar to the rating system that preceded the Community Charge. The main differences were that it was levied on capital value rather than notional rental value of a property, and that it had a 25% discount for single-occupancy dwellings. In 2015, Lord Waldegrave reflected in his memoirs that the Community Charge was all his own work and that it was a serious mistake. Although he felt the policy looked like it would work, it was implemented differently from his predictions "They went gung-ho and introduced it overnight in one go, which was never my plan and I thought they must know what they were doing - but they didn't." In France, a poll tax, the capitation, was first imposed by King Louis XIV in 1695 as a temporary measure to finance the War of the League of Augsburg, and thus repealed in 1699. It was resumed during the War of Spanish Succession and in 1704 set on a permanent basis, remaining until the end of the "Ancien regime". Like the English poll tax, the French capitation tax was assessed on rank – for taxation persons, French society was divided in twenty-two "classes", with the Dauphin (a class by himself) paying 2,000 livres, princes of the blood paying 1500 livres, and so on down to the lowest class, composed of day laborers and servants, who paid 1 livre each. The bulk of the common population was covered by four classes, paying 40, 30, 10 and 3 livres respectively. Unlike most other direct French taxes, nobles and clergy were not exempted from capitation taxes. It did, however, exempt the mendicant orders and the poor who contributed less than 40 sous. The French clergy managed to temporarily escape capitation assessment by promising to pay a total sum of 4 million livres per annum in 1695, and then obtained permanent exemption in 1709 with a lump sum payment of 24 million livres. The "Pays d'états" (Brittany, Burgundy, etc.) and many towns also escaped assessment by promising annual fixed payments. The nobles did not escape assessment, but they obtained the right to appoint their own capitation tax assessors, which allowed them to escape most of the burden (in one calculation, they escaped of it). Compounding the burden, the assessment on the capitation did not remain stable. The "pays de taille personelle" (basically, Pays d'élection, the bulk of France and Aquitaine) secured the ability to assess the capitation tax proportionally to the taille – which effectively meant adjusting the burden heavily against the lower classes. According to the estimates of Jacques Necker in 1788, the capitation tax was so riddled in practice, that the privileged classes (nobles and clergy and towns) were largely exempt, while the lower classes were heavily crushed: the lowest peasant class, originally assessed to pay 3 livres, were now paying 24, the second lowest, assessed at 10 livres, were now paying 60 and the third-lowest assessed at 30 were paying 180. The total collection from the capitation, according to Necker in 1788, was 41 million livres, well short of the 54 million estimate, and it was projected that the revenues could have doubled if the exemptions were revoked and the original 1695 assessment properly restored. The old capitation tax was repealed with the French Revolution and replaced, on 13 January 1791, with a new poll tax as part of the "contribution personnelle mobilière", which lasted well into the late 19th century. It was fixed for every individual at "three days's labor" (assessed locally, but by statute, no less than 1 franc 50 centimes and no more than 4 francs 50 centimes, depending on the area). A dwelling tax ("impôt sur les portes et fenêtres", similar to the English window-tax) was imposed in 1798. New Zealand imposed a poll tax on Chinese immigrants during the 19th and early 20th centuries as part of their broader efforts to reduce the number of Chinese immigrants . The poll tax was effectively lifted in the 1930s following the invasion of China by Japan, and was finally repealed in 1944. Prime Minister Helen Clark offered New Zealand's Chinese community an official apology for the poll tax on 12 February 2002. The Jewish poll tax was a poll tax imposed on the Jews in Polish–Lithuanian Commonwealth. It was later absorbed into the "hiberna" tax. The ancient Romans imposed a "tributum capitis" (poll tax) as one of the principal direct taxes on the peoples of the Roman provinces ("Digest" 50, tit.15). In the Republican period, poll taxes were principally collected by private tax farmers ("publicani"), but from the time of Emperor Augustus, the collections were gradually transferred to magistrates and the senates of provincial cities. The Roman census was conducted periodically in the provinces to draw up and update the poll tax register. The Roman poll tax fell principally on Roman subjects in the provinces, but not on Roman citizens. Towns in the provinces who possessed the "Jus Italicum" (enjoying the "privileges of Italy") were exempted from the poll tax. The 212 edict of Emperor Caracalla (which formally conferred Roman citizenship on all residents of Roman provinces) did not, however, exempt them from the poll tax. The Roman poll tax was deeply resented—Tertullian bewailed the poll tax as a "badge of slavery"—and it provoked numerous revolts in the provinces. Perhaps most famous is the Zealot revolt in Judaea of 66 AD. After the destruction of the temple in 70 AD, the Emperor imposed an extra poll tax on Jews throughout the empire, the "fiscus judaicus", of two denarii each. The Italian revolt of the 720s, organized and led by Pope Gregory II, was originally provoked by the attempt of the Constantinople Emperor Leo III the Isaurian to introduce a poll tax in the Italian provinces of the Byzantine Empire in 722, and set in motion the permanent separation of Italy from the Byzantine empire. When King Aistulf of the Lombards availed himself of the Italian dissent and invaded the Exarchate of Ravenna in 751, one of his first acts was to institute a crushing poll tax of one gold solidus per head on every Roman citizen. Seeking relief from this burden, Pope Stephen II appealed to Pepin the Short of the Franks for assistance, that led to the establishment of the Papal States in 756. The Russian Empire imposed a poll tax in 1718. Nikolay Bunge, Finance Minister from 1881 to 1886 under Emperor Alexander III, abolished it in 1886. Prior to the mid 20th century, a poll tax was implemented in some U.S. state and local jurisdictions and paying it was a requirement before one could exercise one's right to vote. After this right was extended to all races by the Fifteenth Amendment to the Constitution, many Southern states enacted poll taxes as a means of excluding African-American voters, most of whom were poor and unable to pay a tax. So as not to disenfranchise the many poor whites, such laws typically included a grandfather clause, exempting from the tax any adult male whose father or grandfather had voted. The effect was to exempt whites from the tax blacks had to pay, because no black fathers or grandfathers had been able to vote. The poll tax, along with literacy tests and extra-legal intimidation, achieved the desired effect of disenfranchising African Americans. Often in US discussions, the term "poll tax" is used to mean a tax that must be paid in order to vote, rather than a capitation tax simply. (For example, a bill that passed the Florida House of Representatives in April 2019 has been compared to a poll tax because it requires former felons to pay all "financial obligations" related to their sentence, including court fines, fees, and judgments, before their voting rights will be restored as required by a referendum that passed with 64% of the vote in 2018.) The Twenty-fourth Amendment, ratified in 1964, prohibits both Congress and the states from conditioning the right to vote on payment of a poll tax or any other type of tax. The ninth section of Article One of the Constitution places several limits on Congress's powers. Among them: "No capitation, or other direct, tax shall be laid, unless in proportion to the census or enumeration herein before directed to be taken". Capitation here means a tax of a uniform, fixed amount per taxpayer. Direct tax means a tax levied directly by the United States federal government on taxpayers, as opposed to a tax on events or transactions. The United States government levied direct taxes from time to time during the 18th and early 19th centuries. It levied direct taxes on the owners of houses, land, slaves and estates in the late 1790s but cancelled the taxes in 1802. An income tax is neither a poll tax nor a capitation, as the amount of tax will vary from person to person, depending on each person's income. Until a United States Supreme Court decision in 1895, all income taxes were deemed to be excises (i.e., indirect taxes). The Revenue Act of 1861 established the first income tax in the United States, to pay for the cost of the American Civil War. This income tax was abolished after the war, in 1872. Another income tax statute in 1894 was overturned in "Pollock v. Farmers' Loan & Trust Co." in 1895, where the Supreme Court held that income taxes on income from property, such as rent income, interest income, and dividend income (however excepting income taxes on income from "occupations and labor" if only for the reason of not having been challenged in the case, "We have considered the act only in respect of the tax on income derived from real estate, and from invested personal property") were to be treated as direct taxes. Because the statute in question had not apportioned income taxes on income from property by population, the statute was ruled unconstitutional. Finally, ratification of the Sixteenth Amendment to the United States Constitution in 1913 made possible modern income taxes, by limiting the Sixteenth Amendment income tax to the class of indirect excises (i.e. excises, duties, and imposts) – thus requiring no apportionment, a practice that would remain unchanged into the 21st century. Various cities, including Chicago and Denver, have levied head taxes with a set rate per employee targeted at large employers. After Cupertino postponed head tax proposals to 2020, Mountain View became the only city in Silicon Valley, California, to continue to pursue such type of taxes. In 2018, the Seattle city council proposed a "head tax" of $500 per year per employee. The proposed tax was lowered to $275 per year per employee, was passed, and became "the biggest head tax in U.S. history", though it was repealed less than a month later.
https://en.wikipedia.org/wiki?curid=24547
Patriotism Patriotism or national pride is the feeling of love, devotion and sense of attachment to a homeland and alliance with other citizens who share the same sentiment. This attachment can be a combination of many different feelings relating to one's own homeland, including ethnic, cultural, political or historical aspects. It encompasses a set of concepts closely related to nationalism. Some manifestations of patriotism emphasise the "land" element in love for one's native land and use the symbolism of agriculture and the soil – compare "Blut und Boden". An excess of patriotism in the defense of a nation is called chauvinism; another related term is "jingoism". The English term "patriot" is first attested in the Elizabethan era; it came via Middle French from Late Latin (6th century) "patriota", meaning "countryman", ultimately , . The abstract noun "patriotism" appears in the early 18th century. The general notion of civic virtue and group dedication has been attested in culture globally throughout the historical period. For the Enlightenment thinkers of 18th-century Europe, loyalty to the state was chiefly considered in contrast to loyalty to the Church. It was argued that clerics should not be allowed to teach in public schools since their "patrie" was heaven, so that they could not inspire love of the homeland in their students. One of the most influential proponents of this classical notion of patriotism was Jean-Jacques Rousseau. Enlightenment thinkers also criticized what they saw as the excess of patriotism. In 1774, Samuel Johnson published "The Patriot", a critique of what he viewed as false patriotism. On the evening of 7 April 1775, he made the famous statement, "Patriotism is the last refuge of the scoundrel." James Boswell, who reported this comment in his "Life of Johnson", does not provide context for the quote, and it has therefore been argued that Johnson was in fact attacking the false use of the term "patriotism" by contemporaries such as John Stuart, 3rd Earl of Bute (the patriot-minister) and his supporters; Johnson spoke elsewhere in favor of what he considered "true" patriotism. However, there is no direct evidence to contradict the widely held belief that Johnson's famous remark was a criticism of patriotism itself. Patriotism may be strengthened by adherence to a national religion (a civil religion or even a theocracy). This is the opposite of the separation of church and state demanded by the Enlightenment thinkers who saw patriotism and faith as similar and opposed forces. Michael Billig and Jean Bethke Elshtain have both argued that the difference between patriotism and faith is difficult to discern and relies largely on the attitude of the one doing the labelling. Christopher Heath Wellman, professor of philosophy at Washington University in St. Louis, describes that a popular view of the "patriotist" position is robust obligations to compatriots and only minimal samaritan responsibilities to foreigners. Wellman calls this position "patriotist" rather than "nationalist" to single out the members of territorial, political units rather than cultural groups. George Orwell, in his influential essay "Notes on Nationalism" distinguished patriotism from the related concept of nationalism: By 'patriotism' I mean devotion to a particular place and a particular way of life, which one believes to be the best in the world but has no wish to force upon other people. Patriotism is of its nature defensive, both militarily and culturally. Nationalism, on the other hand, is inseparable from the desire for power. The abiding purpose of every nationalist is to secure more power and more prestige, "not" for himself but for the nation or other unit in which he has chosen to sink his own individuality. Voltaire stated that "It is lamentable, that to be a good patriot one must become the enemy of the rest of mankind." Arthur Schopenhauer wrote in his "The World as Will and Representation" that “The cheapest sort of pride is national pride; for if a man is proud of his own nation, it argues that he has no qualities of his own of which a person can be proud” Marxists have taken various stances regarding patriotism. On one hand, Karl Marx famously stated that "The working men have no country" and that "the supremacy of the proletariat will cause them [national differences] to vanish still faster." The same view is promoted by present-day Trotskyists such as Alan Woods, who is "in favour of tearing down all frontiers and creating a socialist world commonwealth." On the other hand, Stalinists and Maoists are usually in favour of socialist patriotism based on the theory of socialism in one country. In the European Union, thinkers such as Jürgen Habermas have advocated a "Euro-patriotism", but patriotism in Europe is usually directed at the nation-state and more often than not coincides with "Euroscepticism". Several surveys have tried to measure patriotism for various reasons, such as the Correlates of War project which found some correlation between war propensity and patriotism. The results from different studies are time dependent. For example, patriotism in Germany before World War I ranked at or near the top, whereas today it ranks at or near the bottom of patriotism surveys. Since 1981, the World Values Survey explores people's national values and beliefs and refer to the average answer "for high income residents" of a country to the question "Are you proud to be [insert nationality]?". It ranges from 1 (not proud) to 4 (very proud).
https://en.wikipedia.org/wiki?curid=24552
Protein biosynthesis Protein biosynthesis (or protein synthesis) is a core biological process, occurring inside cells, balancing the loss of cellular proteins (via degradation or export) through the production of new proteins. Proteins perform a variety of critical functions as enzymes, structural proteins or hormones and therefore, are crucial biological components. Protein synthesis is a very similar process for both prokaryotes and eukaryotes but there are some distinct differences. Protein synthesis can be divided broadly into two phases - transcription and translation. During transcription, a section of DNA encoding a protein, known as a gene, is converted into a template molecule called messenger RNA. This conversion is carried out by enzymes, known as RNA polymerases, in the nucleus of the cell. In eukaryotes, this messenger RNA (mRNA) is initially produced in a premature form (pre-mRNA) which undergoes post-transcriptional modifications to produce mature mRNA. The mature mRNA is exported from the nucleus via nuclear pores to the cytoplasm of the cell for translation to occur. During translation, the mRNA is read by ribosomes which use the nucleotide sequence of the mRNA to determine the sequence of amino acids. The ribosomes catalyse the formation of covalent peptide bonds between the encoded amino acids to form a polypeptide chain. Following translation the polypeptide chain must fold to form a functional protein, for example, to function as an enzyme the polypeptide chain must fold correctly to produce a functional active site. In order to adopt a functional three-dimensional (3D) shape, the polypeptide chain must first form a series of smaller underlying structures called secondary structures. The polypeptide chain in these secondary structures then folds to produce the overall 3D tertiary structure. Once correctly folded, the protein can undergo further maturation through different post-translational modifications. Post-translational modifications can alter the protein's ability to function, where it is located within the cell (e.g. cytoplasm or nucleus) and the protein's ability to interact with other proteins. Protein biosynthesis has a key role in disease as changes and errors in this process, through underlying DNA mutations or protein misfolding, are often the underlying causes of a disease. DNA mutations change the subsequent mRNA sequence, which then alters the mRNA encoded amino acid sequence. Mutations can cause the polypeptide chain to be shorter by generating a stop sequence which causes early termination of translation. Alternatively, a mutation in the mRNA sequence changes the specific amino acid encoded at that position in the polypeptide chain. This amino acid change can impact the proteins ability to function or to fold correctly. Misfolded proteins are often implicated in disease as improperly folded proteins have a tendency to stick together to form dense protein clumps. These clumps are linked to a range of diseases, often neurological, including Alzheimer's disease and Parkinson's disease. Transcription occurs in the nucleus using DNA as a template to produce mRNA. In eukaryotes, this mRNA molecule is known as pre-mRNA as it undergoes post-transcriptional modifications in the nucleus to produce a mature mRNA molecule. However, in prokaryotes post-transcriptional modifications are not required so the mature mRNA molecule is immediately produced by transcription. Initially, an enzyme known as a helicase acts on the molecule of DNA. DNA has an antiparallel, double helix structure composed of two, complementary polynucleotide strands, held together by hydrogen bonds between the base pairs. The helicase disrupts the hydrogen bonds causing a region of DNA - corresponding to a gene - to unwind, separating the two DNA strands and exposing a series of bases. Despite DNA being a double stranded molecule, only one of the strands acts as a template for pre-mRNA synthesis - this strand is known as the template strand. The other DNA strand (which is complementary to the template strand) is known as the coding strand. Both DNA and RNA have intrinsic directionality, meaning there are two distinct ends of the molecule. This property of directionality is due to the asymmetrical underlying nucleotide subunits, with a phosphate group on one side of the pentose sugar and a base on the other. The five carbons in the pentose sugar are numbered from 1' (where ' means prime) to 5'. Therefore, the phosphodiester bonds connecting the nucleotides are formed by joining the hydroxyl group of on the 3' carbon of one nucleotide to the phosphate group on the 5' carbon of another nucleotide. Hence, the coding strand of DNA runs in a 5' to 3' direction and the complementary, template DNA strand runs in the opposite direction from 3' to 5'. The enzyme RNA polymerase binds to the exposed template strand and reads from the gene in the 3' to 5' direction. Simultaneously, the RNA polymerase synthesises a single strand of pre-mRNA in the 5'-to-3' direction by catalysing the formation of phosphodiester bonds between activated nucleotides (free in the nucleus) that are capable of complementary base pairing with the template strand. Behind the moving RNA polymerase the two strands of DNA rejoin, so only 12 base pairs of DNA are exposed at one time. RNA polymerase builds the pre-mRNA molecule at a rate of 20 nucleotides per second enabling the production of thousands of pre-mRNA molecules from the same gene in an hour. Despite the fast rate of synthesis, the RNA polymerase enzyme contains its own proofreading mechanism. The proofreading mechanisms allows the RNA polymerase to remove incorrect nucleotides (which are not complementary to the template strand of DNA) from the growing pre-mRNA molecule through an excision reaction. When RNA polymerases reaches a specific DNA sequence which terminates transcription, RNA polymerase detaches and pre-mRNA synthesis is complete. The pre-mRNA molecule synthesised is complementary to the template DNA strand and shares the same nucleotide sequence as the coding DNA strand. However, there is one crucial difference in the nucleotide composition of DNA and mRNA molecules. DNA is composed of the bases - guanine, cytosine, adenine and thymine (G, C, A and T) - RNA is also composed of four bases - guanine, cytosine, adenine and uracil. In RNA molecules, the DNA base thymine is replaced by uracil which is able to base pair with adenine. Therefore, in the pre-mRNA molecule, all complementary bases which would be thymine in the coding DNA strand are replaced by uracil. Once transcription is complete, the pre-mRNA molecule undergoes post-transcriptional modifications to produce a mature mRNA molecule. There are 3 key steps within post-transcriptional modifications: The 5' cap is added to the 5' end of the pre-mRNA molecule and is composed of a guanine nucleotide modified through methylation. The purpose of the 5' cap is to prevent break down of mature mRNA molecules before translation, the cap also aids binding of the ribosome to the mRNA to start translation and enables mRNA to be differentiated from other RNAs in the cell. In contrast, the 3' Poly(A) tail is added to the 3' end of the mRNA molecule and is composed of 100-200 adenine bases. These distinct mRNA modifications enable the cell to detect that the full mRNA message is intact if both the 5' cap and 3' tail are present. This modified pre-mRNA molecule then undergoes the process of RNA splicing. Genes are composed of a series of introns and exons, introns are nucleotide sequences which do not encode a protein while, exons are nucleotide sequences that directly encode a protein. Introns and exons are present in both the underlying DNA sequence and the pre-mRNA molecule, therefore, in order to produce a mature mRNA molecule encoding a protein, splicing must occur. During splicing, the intervening introns are removed from the pre-mRNA molecule by a multi-protein complex known as a spliceosome (composed of over 150 proteins and RNA). This mature mRNA molecule is then exported into the cytoplasm through nuclear pores in the envelope of the nucleus. During translation, ribosomes synthesise polypeptide chains from mRNA template molecules. In eukaryotes, translation occurs in the cytoplasm of the cell, where the ribosomes are located either free floating or attached to the endoplasmic reticulum. In prokaryotes, which lack a nucleus, the processes of both transcription and translation occur in the cytoplasm. Ribosomes are complex molecular machines, made of a mixture of protein and ribosomal RNA, arranged into two subunits (a large and a small subunit), which surround the mRNA molecule. The ribosome reads the mRNA molecule in a 5'-3' direction and uses it as a template to determine the order of amino acids in the polypeptide chain. In order to translate the mRNA molecule, the ribosome uses small molecules, known as transfer RNAs (tRNA), to deliver the correct amino acids to the ribosome. Each tRNA is composed of 70-80 nucleotides and adopts a characteristic cloverleaf structure due to the formation of hydrogen bonds between the nucleotides within the molecule. There are around 60 different types of tRNAs, each tRNA binds to a specific sequence of three nucleotides (known as a codon) within the mRNA molecule and delivers a specific amino acid. The ribosome initially attaches to the mRNA at the start codon (AUG) and begins to translate the molecule. The mRNA nucleotide sequence is read in triplets - three adjacent nucleotides in the mRNA molecule correspond to a single codon. Each tRNA has an exposed sequence of three nucleotides, known as the anticodon, which are complementary in sequence to a specific codon that may be present in mRNA. For example, the first codon encountered is the start codon composed of the nucleotides AUG. The correct tRNA with the anticodon (complementary 3 nucleotide sequence UAC) binds to the mRNA using the ribosome. This tRNA delivers the correct amino acid corresponding to the mRNA codon, in the case of the start codon, this is the amino acid methionine. The next codon (adjacent to the start codon) is then bound by the correct tRNA with complementary anticodon, delivering the next amino acid to ribosome. The ribosome then uses its peptidyl transferase enzymatic activity to catalyse the formation of the covalent peptide bond between the two adjacent amino acids. The ribosome then moves along the mRNA molecule to the third codon. The ribosome then releases the first tRNA molecule, as only two tRNA molecules can be brought together by a single ribosome at one time. The next complementary tRNA with the correct anticodon complementary to the third codon is selected, delivering the next amino acid to the ribosome which is covalently joined to the growing polypeptide chain. This process continues with the ribosome moving along the mRNA molecule adding up to 15 amino acids per second to the polypeptide chain. Behind the first ribosome, up to 50 additional ribosomes can bind to the mRNA molecule forming a polysome, this enables simultaneous synthesis of multiple identical polypeptide chains. Termination of the growing polypeptide chain occurs when the ribosome encounters a stop codon (UAA, UAG, or UGA) in the mRNA molecule. When this occurs, no tRNA can recognise it and a release factor induces the release of the complete polypeptide chain from the ribosome. Once synthesis of the polypeptide chain is complete, the polypeptide chain folds to adopt a specific structure which enables the protein to carry out its functions. The basic form of protein structure is known as the primary structure, which is simply the polypeptide chain i.e. a sequence of covalently bonded amino acids. The primary structure of a protein is encoded by a gene. Therefore, any changes to the sequence of the gene can alter the primary structure of the protein and all subsequent levels of protein structure, ultimately changing the overall structure and function. The primary structure of a protein (the polypeptide chain) can then fold or coil to form the secondary structure of the protein. The most common types of secondary structure are known as an alpha helix or beta sheet, these are small structures produced by hydrogen bonds forming within the polypeptide chain. This secondary structure then folds to produce the tertiary structure of the protein. The tertiary structure is the proteins overall 3D structure which is made of different secondary structures folding together. In the tertiary structure, key protein features e.g. the active site, are folded and formed enabling the protein to function. Finally, some proteins may adopt a complex quaternary structure. Most proteins are made of a single polypeptide chain, however, some proteins are composed of multiple polypeptide chains (known as subunits) which fold and interact to form the quaternary structure. Hence, the overall protein is a multi-subunit complex composed of multiple folded, polypeptide chain subunits e.g. haemoglobin. When protein folding into the mature, functional 3D state is complete, it is not necessarily the end of the protein maturation pathway. A folded protein can still undergo further processing through post-translational modifications. There are over 200 known types of post-translational modification, these modifications can alter protein activity, the ability of the protein to interact with other proteins and where the protein is found within the cell e.g. in the cell nucleus or cytoplasm. Through post-translational modifications, the diversity of proteins encoded by the genome is expanded by 2 to 3 orders of magnitude. There are four key classes of post-translational modification: Cleavage of proteins is an irreversible post-translational modification carried out by enzymes known as proteases. These proteases are often highly specific and cause hydrolysis of a limited number of peptide bonds within the target protein. The resulting shortened protein has an altered polypeptide chain with different amino acids at the start and end of the chain. This post-translational modification often alters the proteins function, the protein can be inactivated or activated by the cleavage and can display new biological activities. Following translation, small chemical groups can be added onto amino acids within the mature protein structure. Examples of processes which add chemical groups to the target protein include methylation, acetylation and phosphorylation. Methylation is the reversible addition of a methyl group onto an amino acid catalysed by methyltransferase enzymes. Methylation occurs on at least 9 of the 20 common amino acids, however, it mainly occurs on the amino acids lysine and arginine. One example of a protein which is commonly methylated is a histone. Histones are proteins found in the nucleus of the cell. DNA is tightly wrapped round histones and held in place by other proteins and interactions between negative charges in the DNA and positive charges on the histone. A highly specific pattern of amino acid methylation on the histone proteins is used to determine which regions of DNA are tightly wound and unable to be transcribed and which regions are loosely wound and able to be transcribed. Histone-based regulation of DNA transcription is also modified by acetylation. Acetylation is the reversible covalent addition of an acetyl group onto a lysine amino acid by the enzyme acetyltransferase. The acetyl group is removed from a donor molecule known as acetyl coenzyme A and transferred onto the target protein. Histones undergo acetylation on their lysine residues by enzymes known as histone acetyltransferase. The effect of acetylation is to weaken the charge interactions between the histone and DNA, thereby making more genes in the DNA accessible for transcription. The final, prevalent post-translational chemical group modification is phosphorylation. Phosphorylation is the reversible, covalent addition of a phosphate group to specific amino acids (serine, threonine and tyrosine) within the protein. The phosphate group is removed from the donor molecule ATP by a protein kinase and transferred onto the hydroxyl group of the target amino acid, this produces adenosine diphosphate as a biproduct. This process can be reversed and the phosphate group removed by the enzyme protein phosphatase. Phosphorylation can create a binding site on the phosphorylated protein which enables it to interact with other proteins and generate large, multi-protein complexes. Alternatively, phosphorylation can change the level of protein activity by altering the ability of the protein to bind its substrate. Post-translational modifications can incorporate more complex, large molecules into the folded protein structure. One common example of this is glycosylation, the addition of a polysaccharide molecule, which is widely considered to be most common post-translational modification. In glycosylation, a polysaccharide molecule (known as a glycan) is covalently added to the target protein by glycosyltransferases enzymes and modified by glycosidases in the endoplasmic reticulum and Golgi apparatus. Glycosylation can have a critical role in determining the final, folded 3D structure of the target protein. In some cases glycosylation is necessary for correct folding. N-linked glycosylation promotes protein folding by increasing solubility and mediates the protein binding to protein chaperones. Chaperones are proteins responsible for folding and maintaining the structure of other proteins. There are broadly two types of glycosylation, N-linked glycosylation and O-linked glycosylation. N-linked glycosylation starts in the endoplasmic reticulum with the addition of a precursor glycan. The precursor glycan is modified in the Golgi apparatus to produce complex glycan bound covalently to the nitrogen in an asparagine amino acid. In contrast, O-linked glycosylation is the sequential covalent addition of individual sugars onto the oxygen in the amino acids serine and threonine within the mature protein structure. Many proteins produced within the cell are secreted outside the cell, therefore, these proteins function as extracellular proteins. Extracellular proteins are exposed to a wide variety of conditions. In order to stabilise the 3D protein structure, covalent bonds are formed either within the protein or between the different polypeptide chains in the quaternary structure. The most prevalent type is a disulfide bond (also known as a disulfide bridge). A disulfide bond is formed between two cysteine amino acids using their side chain chemical groups containing a sulphur atom, these chemical groups are known as thiol functional groups. Disulfide bonds act to stabilise the pre-existing structure of the protein. Disulfide bonds are formed in an oxidation reaction between two thiol groups and therefore, need an oxidising environment to react. As a result, disulfide bonds are typically formed in the oxidising environment of the endoplasmic reticulum catalysed by enzymes called protein disulfide isomerases. Disulfide bonds are rarely formed in the cytoplasm as it is a reducing environment. Many diseases are caused by mutations in genes, due to the direct connection between the DNA nucleotide sequence and the amino acid sequence of the encoded protein. Changes to the primary structure of the protein can result in the protein mis-folding or malfunctioning. Mutations within a single gene have been identified as a cause of multiple diseases, including sickle cell disease, known as single gene disorders. Sickle cell disease is a group of diseases caused by a mutation in a subunit of haemoglobin, a protein found in red blood cells responsible for transporting oxygen. The most dangerous of the sickle cell diseases is known as sickle cell anaemia. Sickle cell anaemia is the most common homozygous recessive single gene disorder, meaning the sufferer must carry a mutation in both copies of the affected gene (one inherited from each parent) to suffer from the disease. Haemoglobin has a complex quaternary structure and is composed of four polypeptide subunits - two A subunits and two B subunits. Patients suffering from sickle cell anaemia have a missense or substitution mutation in the gene encoding the haemoglobin B subunit polypeptide chain. A missense mutation means the nucleotide mutation alters the overall codon triplet such that a different amino acid is paired with the new codon. In the case of sickle cell anaemia, the most common missense mutation is a single nucleotide mutation from thymine to adenine in the haemoglobin B subunit gene. This changes codon 6 from encoding the amino acid glutamic acid to encoding valine. This change in the primary structure of the haemoglobin B subunit polypeptide chain alters the functionality of the haemoglobin multi-subunit complex in low oxygen conditions. When red blood cells unload oxygen into the tissues of the body, the mutated haemoglobin protein starts to stick together to form a semi-solid structure within the red blood cell. This distorts the shape of the red blood cell, resulting in the characteristic "sickle" shape, and reduces cell flexibility. This rigid, distorted red blood cell can accumulate in blood vessels creating a blockage. The blockage prevents blood flow to tissues and can lead to tissue death which causes great pain to the individual.
https://en.wikipedia.org/wiki?curid=24553
Photosynthetic pigment A photosynthetic pigment (accessory pigment; chloroplast pigment; antenna pigment) is a pigment that is present in chloroplasts or photosynthetic bacteria and captures the light energy necessary for photosynthesis. Plants pigments (in order of increasing polarity): Chlorophyll "a" is the most common of the six, present in every plant that performs photosynthesis. The reason that there are so many pigments is that each absorbs light more efficiently in a different part of the electromagnetic spectrum. well at a wavelength of about 400–450 nm and at 650–700 nm; chlorophyll "b" at 450–500 nm and at 600–650 nm. Xanthophyll absorbs well at 400–530 nm. However, none of the pigments absorbs well in the green-yellow region, which is responsible for the abundant green we see in nature. Like plants, the cyanobacteria use water as an electron donor for photosynthesis and therefore liberate oxygen; they also use chlorophyll as a pigment. In addition, most cyanobacteria use phycobiliproteins, water-soluble pigments which occur in the cytoplasm of the chloroplast, to capture light energy and pass it on to the chlorophylls. (Some cyanobacteria, the prochlorophytes, use chlorophyll b instead of phycobilin.) It is thought that the chloroplasts in plants and algae all evolved from cyanobacteria. Several other groups of bacteria use the bacteriochlorophyll pigments (similar to the chlorophylls) for photosynthesis. Unlike the cyanobacteria, these bacteria do not produce oxygen; they typically use hydrogen sulfide rather than water as the electron donor. Recently, a very different pigment has been found in some marine γ-proteobacteria: proteorhodopsin. It is similar to and probably originated from bacteriorhodopsin (see below: under #Archaea). Bacterial chlorophyll b has been isolated from Rhodopseudomonas spp. but its structure is not yet known to us. Halobacteria use the pigment bacteriorhodopsin which acts directly as a proton pump when exposed to light.
https://en.wikipedia.org/wiki?curid=24555
Peter J. Carroll Peter James Carroll (born 8 January 1953, in Patching, England) is a modern occultist, author, cofounder of the Illuminates of Thanateros, and practitioner of chaos magic theory. Carroll studied science at the University of London and graduated with a "precisely calculated minimum pass". After university, Carroll was employed as a school teacher, and spent four years in India and the Himalayas. Carroll's 1987 book "Liber Null & Psychonaut" is considered one of the defining works of the chaos magic movement. Carroll was a co-founder of the loosely organized group called the Illuminates of Thanateros. In 1995, Carroll announced his desire to step down from the "roles of magus and pontiff of chaos". This statement was originally delivered at the same IOT international meeting which Carroll discussed in an article titled "The Ice War" in "Chaos International". Carroll has written columns for the "Chaos International" magazine currently edited by Ian Read, under the names Peter Carroll and "Stokastikos", his magical name within the IOT. In 2005, he appeared as a chaos magic instructor at Maybe Logic Academy at the request of Robert Anton Wilson, and later founded Arcanorium Occult College with other known chaos magicians on staff including Lionel Snell, Ian Read and Jaq D. Hawkins. This experience re-awoke his interest in the subject of magic and he has since continued writing.
https://en.wikipedia.org/wiki?curid=24557
Intact dilation and extraction Intact dilation and extraction (D&X, IDX, intact D&E) is a surgical procedure that removes an intact fetus from the uterus. The procedure is used both after miscarriages and for abortions in the second and third trimesters of pregnancy. It is also known as intact dilation and evacuation (D&E) and, in United States federal law, as partial-birth abortion. Partial-birth abortion is not an accepted medical term and is not used by abortion practitioners or the medical community at large. In 2000, although only 0.17% (2,232 of 1,313,000) of all abortions in the United States were performed using this procedure, it developed into a focal point of the abortion debate. Intact D&E of a fetus with a heartbeat was outlawed in most cases by the 2003 federal Partial-Birth Abortion Ban Act, which was upheld by the United States Supreme Court in the case of "Gonzales v. Carhart". As with non-intact D&E or labor induction in the second trimester, the purpose of D&E is to end a pregnancy by removing the fetus and placenta.  Patients who are experiencing a miscarriage or who have a fetus diagnosed with severe congenital anomalies may prefer an intact procedure to allow for viewing of the remains, grieving, and achieving closure. In cases where an autopsy is requested, an intact procedure allows for a more complete examination of the remains. An intact D&E is also used in abortions to minimize the passage of instruments into the uterus, reducing the risk of trauma. It also reduces the risk of cervical lacerations that may be caused by the removal of bony parts from the uterus and the risk of retention of fetal parts in the uterus. As with non-intact D&E, intact D&E may be safely performed in freestanding clinics, ambulatory surgical centers, and in hospitals. Intra-operative pain control is usually dependent on the setting and patient characteristics but commonly involves local analgesia with either IV sedation or general anesthesia. Preoperative antibiotics are administered to reduce the risk of infection. In cases where the woman is Rh-negative, Rho(D) immunoglobulin (RhoGam) is administered to prevent the risk of developing erythroblastosis fetalis (hemolytic disease of the newborn) in subsequent pregnancies. Intact D&E is more feasible among women with higher parity, at higher gestational ages, and when cervical dilation is greater. There are no absolute contraindications. The surgery is preceded by cervical preparation which may take several days. Osmotic dilators, natural or synthetic rods that absorb moisture from the cervix, are placed in the cervix and mechanically dilate the cervix over the course of hours to days. Misoprostol can be used to soften the cervix further. Intact D&E can only be performed with 2-5 centimeters of cervical dilation. Feticidal injection of digoxin or potassium chloride may be administered at the beginning of the procedure to allow for softening of the fetal bones or to comply with relevant laws in the physician's jurisdiction and the U.S. federal Partial-Birth Abortion Ban Act. Umbilical cord transection can also be used to induce fetal demise during the procedure. During the surgery, the fetus is removed from the uterus in the breech position. If the fetal presentation is not breech, forceps or manual manipulation can be used to turn it to a breech presentation while in the uterus (internal version). The fetal skull is usually the largest part of the fetal body and its removal may require mechanical collapse if it is too large to fit through the cervical canal. Decompression of the skull can be accomplished by incision and suction of the contents or by using forceps to collapse the skull. Recovery from an intact D&E is similar to recovery from a non-intact D&E. Postoperative pain is usually minimal and managed with NSAIDs. In cases of uterine atony and corresponding blood loss, methergine or misoprostol can be given to encourage uterine contraction and achieve hemostasis. Patients who have recently undergone an intact D&E are monitored for signs of coagulopathy, uterine perforation, uterine atony, retained tissue, or hemorrhage. The risks of intact D&E are similar to the risks of non-intact D&E and include postoperative infection, hemorrhage, or uterine injury. Overall, the complication rate is low, with rates of serious complications (those requiring blood transfusion, surgery, or hospital treatment) ranging from 0 per 1,000 cases to 2.94 per 1,000 cases. The rate of minor complications is approximately 50 in 1,000 (5%), the same as the minor complication rate for non-intact D&E; the rate of serious complications is higher in non-intact D&E. Data directly comparing the safety of non-intact to intact D&E are limited. There is no difference in postoperative blood loss or major complications when compared to non-intact D&E. There is no difference in risk of subsequent preterm delivery. The risk of retained tissue is lower since the fetus is removed intact. In some cases, the physician may not be able to remove the fetus intact due to anatomical limitations. This may present a psychological problem for the patient who wishes to view the remains, or make a comprehensive autopsy impossible, precluding an accurate postmortem diagnosis of fetal anomalies. The term "partial-birth abortion" is primarily used in political discourse—chiefly regarding the legality of abortion in the United States. The term is not recognized as a medical term by the American Medical Association nor the American College of Obstetricians and Gynecologists. This term was first suggested in 1995 by Congressman Charles T. Canady, while developing the original proposed Partial-Birth Abortion Ban. According to Keri Folmar, the lawyer responsible for the bill's language, the term was developed in early 1995 in a meeting among herself, Charles T. Canady, and National Right to Life Committee lobbyist Douglas Johnson. Canady could not find this particular abortion practice named in any medical textbook, and therefore he and his aides named it. "Partial-birth abortion" was first used in the media on 4 June 1995 in a "Washington Times" article covering the bill. In the U.S., a federal statute defines "partial-birth abortion" as any abortion in which the life of the fetus is terminated after having been extracted from the mother's body to a point "past the navel [of the fetus]" or "in the case of head-first presentation, the entire fetal head is outside the body of the mother" at the time the life is terminated. The U.S. Supreme Court has held that the terms "partial-birth abortion" and "intact dilation and extraction" are basically synonymous. However, there are cases where these overlapping terms do not coincide. For example, the intact D&E procedure may be used to remove a deceased fetus (e.g., due to a miscarriage or feticide) that is developed enough to require dilation of the cervix for its extraction. Removing a dead fetus does not meet the federal legal definition of "partial-birth abortion," which specifies that partial live delivery must precede "the overt act, other than completion of delivery, that kills the partially delivered, living fetus." In addition to the federal ban, there have also been a number of state partial-birth abortion bans. There, courts have found that state legislation (rather than federal legislation) intended to ban "partial-birth abortions" could be interpreted to apply to some non-intact dilation and evacuation (D&E) procedures. Non-intact D&E, though performed at similar gestational ages, is a fundamentally different procedure. Intact D&E is a target of pro-life advocates who believe the procedure illustrates their contention that abortion, and especially late-term abortion, is the taking of a human life, and therefore both immoral and illegal. Critics consider the procedure to be infanticide, a position that many in the pro-life movement extend to cover all abortions. Some advocates, both for and against abortion rights, see the Intact D&E issue as a central battleground in the wider abortion debate, attempting to set a legal precedent so as to either gradually reduce or gradually increase access to all abortion methods. Dr. Martin Haskell has called the Intact D&E procedure "a quick, surgical outpatient method" for late second-trimester and early third-trimester abortions. The Partial-Birth Abortion Ban Act of 2003 describes it as "a gruesome and inhumane procedure that is never medically necessary." According to a BBC report about the U.S. Supreme Court's decision in "Gonzales v. Carhart", "government lawyers and others who favour the ban, have said there are alternative and more widely used procedures that are still legal - which involves dismembering the fetus in the uterus." An article in "Harper's" magazine stated that, "Defending the Partial-Birth Abortion Ban... requires arguing to judges that pulling a fetus from a woman's body in dismembered pieces is legal, medically acceptable, and safe; but that pulling a fetus out intact, so that if the woman wishes the fetus can be wrapped in a blanket and handed to her, is appropriately punishable by a fine, or up to two years' imprisonment, or both." Alternately, pro-life advocates frame the issue as one in which a partially-born infant's life is disposable, whereas pulling the infant only a few more inches down the birth canal automatically transforms it into "a living person, possessing rights and deserving of protection." The U.S. Supreme Court has stated that intact D&E remains legal as long as there is first a feticidal injection while the fetus is still completely inside of the mother's body. There is also controversy about why this procedure is used. Although prominent defenders of the method asserted during 1995 and 1996 that it was used only or mostly in acute medical circumstances, lobbyist Ron Fitzsimmons, executive director of the National Coalition of Abortion Providers (a trade association of abortion providers), told "The New York Times" (February 26, 1997): "In the vast majority of cases, the procedure is performed on a healthy mother with a healthy fetus that is 20 weeks or more along." Some prominent pro-life advocates quickly defended the accuracy of Fitzsimmons's statements, whilst others condemned Fitzsimmons as self-serving. In support of the Partial-Birth Abortion Ban Act, a nurse who witnessed three intact D&E procedures found them deeply disturbing, and described one performed on a 26½-week fetus with Down Syndrome in testimony before a Judiciary subcommittee of the US House of Representatives. A journalist observed three intact and two non-intact D&E procedures involving fetuses ranging from 19 to 23 weeks. She "watched for any signs of fetal distress, but ... [she] could see no response, no reflexive spasm, nothing. Whether this was a result of the anesthesia or an undeveloped fetal system for pain sensitivity, one thing was clear: There was no discernible response by the fetus." Abortion provider Warren Hern asserted in 2003 that "No peer-reviewed articles or case reports have ever been published describing anything such as 'partial-birth' abortion, 'Intact D&E' (for 'dilation and extraction'), or any of its synonyms." Therefore, Hern expressed uncertainty about what all of these terms mean. The U.S. Supreme Court held in "Gonzales v. Carhart" that these terms of the federal statute are not vague because the statute specifically detailed the procedure being banned: it specified anatomical landmarks past which the fetus must not be delivered, and criminalized such a procedure only if an "overt" fatal act is performed on the fetus after "partial delivery." Since 1995, led by Republicans in Congress, the U.S. House of Representatives and U.S. Senate have moved several times to pass measures banning the procedure. Congress passed two such measures by wide margins during Bill Clinton's presidency, but Clinton vetoed those bills in April 1996 and October 1997 on the grounds that they did not include health exceptions. Subsequent congressional attempts at overriding the veto were unsuccessful. A major part of the legal battle over banning the procedure relates to health exceptions, which would permit the procedure in special circumstances. The 1973 Supreme Court decision "Roe v. Wade", which declared many state-level abortion restrictions unconstitutional, allowed states to ban abortions of post-viable fetuses unless an abortion was "necessary to preserve the life or health of the mother." The companion ruling, "Doe v. Bolton", upheld against a vagueness challenge a state law that defined health to include mental as well as physical health. The Court has never explicitly held, as a matter of constitutional law, that states have to allow abortions of post-viable fetuses if doing so is necessary for the woman's mental health, but many read "Doe" as implying as much. The concern that the health exception can be read so liberally partly explains why supporters of the Partial-Birth Abortion Ban Act did not want to include one. In 2003, the Partial-Birth Abortion Ban Act (H.R. 760, S. 3) was signed into law; the House passed it on October 2 with a vote of 281-142, the Senate passed it on October 21 with a vote of 64-34, and President George W. Bush signed it into law on November 5. Beginning in early 2004, the Planned Parenthood Federation of America, the National Abortion Federation, and abortion doctors in Nebraska challenged the ban in federal district courts in the Northern District of California, Southern District of New York, and District of Nebraska. All three district courts ruled the ban unconstitutional that same year. Their respective federal courts of appeals—the Ninth Circuit, Second Circuit, and Eighth Circuit, respectively—affirmed these rulings on appeal. The three cases were all appealed to the U.S. Supreme Court, and were consolidated into the case "Gonzales v. Carhart". On April 18, 2007, the Supreme Court voted to uphold the Partial-Birth Abortion Ban Act by a decision of 5-4. Justice Kennedy wrote for the majority and was joined by Justices Thomas, Scalia, Alito, and Chief Justice Roberts. A dissenting opinion was written by Justice Ginsburg and joined by Justices Stevens, Souter and Breyer. Many states have bans on late-term abortions which apply to intact D&E if it is performed after viability. Many states have also passed bans specifically on intact D&E. The first was Ohio, which in 1995 enacted a law that referred to the procedure as "dilation and extraction". In 1997, the United States Court of Appeals for the Sixth Circuit found the law unconstitutional on the grounds that it placed a substantial and unconstitutional obstacle in the path of women seeking pre-viability abortions in the second trimester. Between 1995 and 2000, 28 more states passed Partial-Birth Abortion bans, all similar to the proposed federal bans and all lacking an exemption for the health of the woman. Many of these state laws faced legal challenges, with Nebraska's the first to reach decision in "Stenberg v. Carhart". The Federal District Court held Nebraska's statute unconstitutional on two counts. One being the bill's language was too broad, potentially rendering a range of abortion procedures illegal, and thus, creating an undue burden on a woman's ability to choose. The other count was the bill failed to provide a necessary exception for the health of the woman. The decision was appealed to and affirmed by both the Eighth Circuit and the Supreme Court in June 2000, thus resolving the legal challenges to similar state bans nationwide. Since the "Stenberg v. Carhart" decision, Virginia, Michigan, and Utah have adopted legislation very similar to the Nebraska law overturned as unconstitutional. The Michigan law was similarly struck down for broadness and failure to provide a health exemption. Utah's law remains on the books, pending trial, but is unenforceable under a court-ordered preliminary injunction. Virginia's Law was initially ruled invalid, but was reversed and remanded to the District Court in the wake of the Gonzales v. Carhart decision, where it was upheld as constitutional. This is despite the fact the Virginia law criminalizes abortions for accidental or intentional intact D&E. In 2000 Ohio introduced another "partial-birth abortion" ban. The law differed from previous attempts at the ban in that it specifically excluded D&E procedures, while also providing a narrow health exception. This law was upheld on appeal to the Sixth Circuit in 2003 on the grounds that "it permitted the partial birth procedure when necessary to prevent significant health risks." In 2003 the Michigan Senate introduced Senate Bill No. 395, which would have changed the definition of birth and therefore effectively ban intact D&E. would in effect ban partial birth abortions. The definition of birth as defined in the bill was that once any part of the body had passed beyond the introitus, it is considered a birth. The bill included an exemption for the mother's health. The bill was passed by both the Senate and House of Representatives but was vetoed by governor Jennifer Granholm. Since the passage of the Partial-Birth Abortion Ban Act in the United States and similar state laws, providers of later abortions typically induce and document fetal death before beginning any later abortion procedure. Since the bans only apply to abortions of living fetuses, this protects the abortion providers from prosecution. The most common method of inducing fetal demise is to inject digoxin intrafetally or potassium chloride intrathoracically. Questioned about the policy of the UK government on the issue in Parliament, Baroness Andrews stated that We are not aware of the procedure referred to as "partial-birth abortion" being used in Great Britain. It is the Royal College of Obstetricians and Gynaecologists' (RCOG) belief that this method of abortion is never used as a primary or pro-active technique and is only ever likely to be performed in unforeseen circumstances in order to reduce maternal mortality or severe morbidity.
https://en.wikipedia.org/wiki?curid=24560
Plutocracy A plutocracy (, ', 'wealth' + , ', 'power') or plutarchy is a society that is ruled or controlled by people of great wealth or income. The first known use of the term in English dates from 1631. Unlike systems such as democracy, capitalism, liberalism, socialism or anarchism, plutocracy is not rooted in an established political philosophy. The term "plutocracy" is generally used as a pejorative to describe or warn against an undesirable condition. Throughout history, political thinkers such as Winston Churchill, 19th-century French sociologist and historian Alexis de Tocqueville, 19th-century Spanish monarchist Juan Donoso Cortés and today Noam Chomsky have condemned plutocrats for ignoring their social responsibilities, using their power to serve their own purposes and thereby increasing poverty and nurturing class conflict, corrupting societies with greed and hedonism. Historic examples of plutocracies include the Roman Empire, some city-states in Ancient Greece, the civilization of Carthage, the Italian city-states/merchant republics of Venice, Florence, pre-French Revolution Kingdom of France, Genoa, and the pre-World War II Empire of Japan (the "zaibatsu"). According to Noam Chomsky and Jimmy Carter, the modern United States resembles a plutocracy though with democratic forms. A former chairman of the Federal Reserve, Paul Volcker, also believed the US to be developing into a plutocracy. One modern, formal example of a plutocracy, according to some critics, is the City of London. The City (also called the Square Mile of ancient London, corresponding to the modern financial district, an area of about 2.5 km2) has a unique electoral system for its local administration, separate from London proper. More than two-thirds of voters are not residents, but rather representatives of businesses and other bodies that occupy premises in the City, with votes distributed according to their numbers of employees. The principal justification for this arrangement is that most of the services provided by the City of London Corporation are used by the businesses in the City. In fact about 450,000 non-residents constitute the city's day-time population, far outnumbering the City's 7,000 residents. Some modern historians, politicians, and economists argue that the United States was effectively plutocratic for at least part of the "Gilded Age" and "Progressive Era" periods between the end of the Civil War until the beginning of the Great Depression. President Theodore Roosevelt became known as the "trust-buster" for his aggressive use of United States antitrust law, through which he managed to break up such major combinations as the largest railroad and Standard Oil, the largest oil company. According to historian David Burton, "When it came to domestic political concerns TR's bête noire was the plutocracy." In his autobiographical account of taking on monopolistic corporations as president, TR recounted …we had come to the stage where for our people what was needed was a real democracy; and of all forms of tyranny the least attractive and the most vulgar is the tyranny of mere wealth, the tyranny of a plutocracy. The Sherman Antitrust Act had been enacted in 1890, with large industries reaching monopolistic or near-monopolistic levels of market concentration and financial capital increasingly integrating corporations, a handful of very wealthy heads of large corporations began to exert increasing influence over industry, public opinion and politics after the Civil War. Money, according to contemporary progressive and journalist Walter Weyl, was "the mortar of this edifice", with ideological differences among politicians fading and the political realm becoming ""a mere branch" in a still larger, integrated business. The state, which through the party formally sold favors to the large corporations, became one of their departments." In his book "The Conscience of a Liberal", in a section entitled The Politics of Plutocracy, economist Paul Krugman says plutocracy took hold because of three factors: at that time, the poorest quarter of American residents (African-Americans and non-naturalized immigrants) were ineligible to vote, the wealthy funded the campaigns of politicians they preferred, and vote buying was "feasible, easy and widespread", as were other forms of electoral fraud such as ballot-box stuffing and intimidation of the other party's voters. The U.S. instituted progressive taxation in 1913, but according to Shamus Khan, in the 1970s, elites used their increasing political power to lower their taxes, and today successfully employ what political scientist Jeffrey Winters calls "the income defense industry" to greatly reduce their taxes. In 1998, Bob Herbert of "The New York Times" referred to modern American plutocrats as "The Donor Class" (list of top donors) and defined the class, for the first time, as "a tiny group – just one-quarter of 1 percent of the population – and it is not representative of the rest of the nation. But its money buys plenty of access." In modern times, the term is sometimes used pejoratively to refer to societies rooted in state-corporate capitalism or which prioritize the accumulation of wealth over other interests. According to Kevin Phillips, author and political strategist to Richard Nixon, the United States is a plutocracy in which there is a "fusion of money and government." Chrystia Freeland, author of "Plutocrats: The Rise of the New Global Super-Rich and the Fall of Everyone Else", says that the present trend towards plutocracy occurs because the rich feel that their interests are shared by society. When the Nobel-Prize winning economist Joseph Stiglitz wrote the 2011 "Vanity Fair" magazine article entitled "Of the 1%, by the 1%, for the 1%", the title and content supported Stiglitz's claim that the United States is increasingly ruled by the wealthiest 1%. Some researchers have said the US may be drifting towards a form of oligarchy, as individual citizens have less impact than economic elites and organized interest groups upon public policy. A study conducted by political scientists Martin Gilens (Princeton University) and Benjamin Page (Northwestern University), which was released in April 2014, stated that their "analyses suggest that majorities of the American public actually have little influence over the policies our government adopts". Gilens and Page do not characterize the U. S. as an "oligarchy" or "plutocracy" per se; however, they do apply the concept of "civil oligarchy" as used by Jeffrey A. Winters with respect to the US. In the political jargon and propaganda of Fascist Italy, Nazi Germany and the Communist International, Western democratic states were referred to as plutocracies, with the implication being that a small number of extremely wealthy individuals were controlling the countries and holding them to ransom. Plutocracy replaced democracy and capitalism as the principal fascist term for the United States and Great Britain during the Second World War. For the Nazis, the term was often a code word for "the Jews". Reasons why a plutocracy develops are complex. In a nation that is experiencing rapid economic growth, income inequality will tend to increase as the rate of return on innovation increases. In other scenarios, plutocracy may develop when a country is collapsing due to resource depletion as the elites attempt to hoard the diminishing wealth or expand debts to maintain stability, which will tend to enrich creditors and financiers. Economists have also suggested that free market economies tend to drift into monopolies and oligopolies because of the greater efficiency of larger businesses (see economies of scale). Other nations may become plutocratic through kleptocracy or rent-seeking.
https://en.wikipedia.org/wiki?curid=24561
Pareto principle The Pareto principle (also known as the 80/20 rule, the law of the vital few, or the principle of factor sparsity) states that, for many events, roughly 80% of the effects come from 20% of the causes. Management consultant Joseph M. Juran suggested the principle and named it after Italian economist Vilfredo Pareto, who noted the 80/20 connection while at the University of Lausanne in 1896. In his first work, "Cours d'économie politique", Pareto showed that approximately 80% of the land in Italy was owned by 20% of the population. The Pareto principle is only tangentially related to Pareto efficiency. Pareto developed both concepts in the context of the distribution of income and wealth among the population. Mathematically, the 80/20 rule is roughly followed by a power law distribution (also known as a Pareto distribution) for a particular set of parameters, and many natural phenomena have been shown empirically to exhibit such a distribution. It is an axiom of business management that "80% of sales come from 20% of clients". The original observation was in connection with population and wealth. Pareto noticed that approximately 80% of Italy's land was owned by 20% of the population. He then carried out surveys on a variety of other countries and found to his surprise that a similar distribution applied. A chart that gave the inequality a very visible and comprehensible form, the so-called "champagne glass" effect, was contained in the 1992 United Nations Development Program Report, which showed that distribution of global income is very uneven, with the richest 20% of the world's population controlling 82.7% of the world's income. Still, the Gini index of the world shows that nations have wealth distributions that vary greatly. The Pareto principle also could be seen as applying to taxation. In the US, the top 20% of earners have paid roughly 80-90% of Federal income taxes in 2000 and 2006, and again in 2018. However, it is important to note that while there have been associations of such with meritocracy, the principle should not be confused with further reaching implications. As Alessandro Pluchino at the University of Catania in Italy points out, other attributes do not necessarily correlate. Using talent as an example, he and other researchers state, “The maximum success never coincides with the maximum talent, and vice-versa,” and that such factors are the result of chance. The physicist Victor Yakovenko of the University of Maryland, College Park and AC Silva analyzed income data from the US Internal Revenue Service from 1983 to 2001, and found that the income distribution among the upper class (1–3% of the population) follows Pareto's principle. In computer science the Pareto principle can be applied to optimization efforts. For example, Microsoft noted that by fixing the top 20% of the most-reported bugs, 80% of the related errors and crashes in a given system would be eliminated. Lowell Arthur expressed that "20 percent of the code has 80 percent of the errors. Find them, fix them!" It was also discovered that in general the 80% of a certain piece of software can be written in 20% of the total allocated time. Conversely, the hardest 20% of the code takes 80% of the time. This factor is usually a part of COCOMO estimating for software coding. It has been inferred that the Pareto principle applies to athletic training, where roughly 20% of the exercises and habits have 80% of the impact, and the trainee should not focus so much on a varied training. This does not necessarily mean that having a healthy diet or going to the gym are not important, but they are not as significant as the key activities. It is also important to note this 80/20 rule has yet to be scientifically tested in controlled studies of athletic training. In baseball, the Pareto principle has been perceived in Wins Above Replacement (an attempt to combine multiple statistics to determine a player's overall importance to a team). "15% of all the players last year produced 85% of the total wins with the other 85% of the players creating 15% of the wins. The Pareto principle holds up pretty soundly when it is applied to baseball." Occupational health and safety professionals use the Pareto principle to underline the importance of hazard prioritization. Assuming 20% of the hazards account for 80% of the injuries, and by categorizing hazards, safety professionals can target those 20% of the hazards that cause 80% of the injuries or accidents. Alternatively, if hazards are addressed in random order, a safety professional is more likely to fix one of the 80% of hazards that account only for some fraction of the remaining 20% of injuries. Aside from ensuring efficient accident prevention practices, the Pareto principle also ensures hazards are addressed in an economical order, because the technique ensures the utilized resources are best used to prevent the most accidents. In engineering control theory, such as for electromechanical energy converters, the 80/20 principle applies to optimization efforts. The law of the few can be also seen in betting, where it is said that with 20% effort you can match the accuracy of 80% of the bettors. In the systems science discipline, Joshua M. Epstein and Robert Axtell created an agent-based simulation model called Sugarscape, from a decentralized modeling approach, based on individual behavior rules defined for each agent in the economy. Wealth distribution and Pareto's 80/20 principle became emergent in their results, which suggests the principle is a collective consequence of these individual rules. The Pareto principle has many applications in quality control. It is the basis for the Pareto chart, one of the key tools used in total quality control and Six Sigma techniques. The Pareto principle serves as a baseline for ABC-analysis and XYZ-analysis, widely used in logistics and procurement for the purpose of optimizing stock of goods, as well as costs of keeping and replenishing that stock. In health care in the United States, in one instance 20% of patients have been found to use 80% of health care resources. Some cases of super-spreading conform to the 20/80 rule, where approximately 20% of infected individuals are responsible for 80% of transmissions, although super-spreading can still be said to occur when super-spreaders account for a higher or lower percentage of transmissions. In epidemics with super-spreading, the majority of individuals infect relatively few secondary contacts. The Dunedin Study has found 80% of crimes are committed by 20% of criminals. This statistic has been used to support both stop-and-frisk policies and broken windows policing, as catching those criminals committing minor crimes will supposedly net many criminals wanted for (or who would normally commit) larger ones. Many video rental shops reported in 1988 that 80% of revenue came from 20% of videotapes. A video-chain executive discussed the ""Gone with the Wind" syndrome", however, in which every store had to offer classics like "Gone with the Wind", "Casablanca", or "The African Queen" to appear to have a large inventory, even if customers very rarely rented them. The idea has a rule of thumb application in many places, but it is commonly misused. For example, it is a misuse to state a solution to a problem "fits the 80/20 rule" just because it fits 80% of the cases; it must also be that the solution requires only 20% of the resources that would be needed to solve all cases. Additionally, it is a misuse of the 80/20 rule to interpret a small number of categories or observations. This is a special case of the wider phenomenon of Pareto distributions. If the Pareto index α, which is one of the parameters characterizing a Pareto distribution, is chosen as α = log45 ≈ 1.16, then one has 80% of effects coming from 20% of causes. It follows that one also has 80% of that top 80% of effects coming from 20% of that top 20% of causes, and so on. Eighty percent of 80% is 64%; 20% of 20% is 4%, so this implies a "64/4" law; and similarly implies a "51.2/0.8" law. Similarly for the bottom 80% of causes and bottom 20% of effects, the bottom 80% of the bottom 80% only cause 20% of the remaining 20%. This is broadly in line with the world population/wealth table above, where the bottom 60% of the people own 5.5% of the wealth, approximating to a 64/4 connection. The 64/4 correlation also implies a 32% 'fair' area between the 4% and 64%, where the lower 80% of the top 20% (16%) and upper 20% of the bottom 80% (also 16%) relates to the corresponding lower top and upper bottom of effects (32%). This is also broadly in line with the world population table above, where the second 20% control 12% of the wealth, and the bottom of the top 20% (presumably) control 16% of the wealth. The term 80/20 is only a shorthand for the general principle at work. In individual cases, the distribution could just as well be, say, nearer to 90/10 or 70/30. There is no need for the two numbers to add up to the number 100, as they are measures of different things, (e.g., 'number of customers' vs 'amount spent'). However, each case in which they do not add up to 100%, is equivalent to one in which they do. For example, as noted above, the "64/4 law" (in which the two numbers do not add up to 100%) is equivalent to the "80/20 law" (in which they do add up to 100%). Thus, specifying two percentages independently does not lead to a broader class of distributions than what one gets by specifying the larger one and letting the smaller one be its complement relative to 100%. Thus, there is only one degree of freedom in the choice of that parameter. Adding up to 100 leads to a nice symmetry. For example, if 80% of effects come from the top 20% of sources, then the remaining 20% of effects come from the lower 80% of sources. This is called the "joint ratio", and can be used to measure the degree of imbalance: a joint ratio of 96:4 is very imbalanced, 80:20 is significantly imbalanced (Gini index: 76%), 70:30 is moderately imbalanced (Gini index: 28%), and 55:45 is just slightly imbalanced (Gini index 14%). The Pareto principle is an illustration of a "power law" relationship, which also occurs in phenomena such as brush fires and earthquakes. Because it is self-similar over a wide range of magnitudes, it produces outcomes completely different from Normal or Gaussian distribution phenomena. This fact explains the frequent breakdowns of sophisticated financial instruments, which are modeled on the assumption that a Gaussian relationship is appropriate to something like stock price movements. Using the ""A" : "B"" notation (for example, 0.8:0.2) and with "A" + "B" = 1, inequality measures like the Gini index (G) "and" the Hoover index (H) can be computed. In this case both are the same.
https://en.wikipedia.org/wiki?curid=24562
Prefix A prefix is an affix which is placed before the stem of a word. Adding it to the beginning of one word changes it into another word. For example, when the prefix "un-" is added to the word "happy", it creates the word "unhappy". Particularly in the study of languages, a prefix is also called a preformative, because it alters the form of the words to which it is affixed. Prefixes, like other affixes, can be either inflectional, creating a new form of the word with the same basic meaning and same lexical category (but playing a different role in the sentence), or derivational, creating a new word with a new semantic meaning and sometimes also a different lexical category. Prefixes, like all other affixes, are usually bound morphemes. In English, there are no inflectional prefixes; English uses suffixes instead for that purpose. The word "prefix" is itself made up of the stem "fix" (meaning "attach", in this case), and the prefix "pre-" (meaning "before"), both of which are derived from Latin roots. This is a fairly comprehensive, although not exhaustive, list of derivational prefixes in English. Depending on precisely how one defines a derivational prefix, some of the neoclassical combining forms may or may not qualify for inclusion in such a list. This list takes the broad view that "acro-" and "auto-" count as English derivational prefixes because they function the same way that prefixes such as "over-" and "self-" do. As for numeral prefixes, only the most common members of that class are included here. There is a large separate table covering them all at Numeral prefix > Table of number prefixes in English. The choice between hyphenation or solid styling for prefixes in English is covered at Hyphen > Prefixes and suffixes. The most commonly used prefix in Japanese, お "o-", is used as part of the honorific system of speech. It is a marker for politeness, showing respect for the person or thing it is affixed to. In the Bantu languages of Africa, which are agglutinating, the noun class is conveyed through prefixes, which is declined and agrees with all of its arguments accordingly. Verbs in the Navajo language are formed from a word stem and multiple affixes. For example, each verb requires one of four non-syllabic prefixes (∅, ł, d, l) to create a verb theme. In the Sunwar language of Eastern Nepal, the prefix ma- म is used to create negative verbs. It is the only verbal prefix in the language. As a part of the formation of nouns, prefixes are less common in Russian than suffixes, but alter the meaning of a word. In German, derivatives formed with prefixes may be classified in two categories: those used with substantives and adjectives, and those used with verbs. For derivative substantives and adjectives, only two productive prefixes are generally addable to any substantive or adjective as of 1970: "un-", which expresses negation (as in "ungesund", from "gesund"), and "ur-", which means "original, primitive" in substantives, and has an emphatic function in adjectives. "ge-", on the other hand, expresses union or togetherness, and cannot simply be added to any noun or adjective. Verbal prefixes commonly in use are "be-", "er-", "ent-", "ge-", "ver-", "zer-", and "miss-" (see also Separable verb). "be-" expresses strengthening or generalization. "ent-" expresses negation. "ge-" indicates the completion of an action, and that's why its most common use has become the forming of the past participle of verbs; "ver-" has an emphatic function, or it is used to turn a substantive or an adjective into a verb. In some cases, the prefix particle "ent-" (negation) can be considered the opposite of particle "be-", while "er-" can be considered the opposite of "ver-". The prefix "er-" usually indicates the successful completion of an action, and sometimes the conclusion means death. With fewer verbs, it indicates the beginning of and action. The prefix "er-" is also used to form verbs from adjectives (e.g. "erkalten" is equivalent to "kalt werden" which means "to get cold").
https://en.wikipedia.org/wiki?curid=24564
Palaeography Palaeography (UK) or paleography (US; ultimately from , "palaiós", "old", and , "gráphein", "to write") is the study of ancient and historical handwriting (that is to say, of the forms and processes of writing; not the textual content of documents). Included in the discipline is the practice of deciphering, reading, and dating historical manuscripts, and the cultural context of writing, including the methods with which writing and books were produced, and the history of scriptoria. The discipline is one of the auxiliary sciences of history. It is important for understanding, authenticating, and dating ancient texts. However, it generally cannot be used to pinpoint dates with high precision. Palaeography can be an essential skill for historians and philologists, as it tackles two main difficulties. First, since the style of a single alphabet in each given language has evolved constantly, it is necessary to know how to decipher its individual characters as they existed in various eras. Second, scribes often used many abbreviations, usually so as to write more quickly and sometimes to save space, so the specialist-palaeographer must know how to interpret them. Knowledge of individual letter-forms, ligatures, punctuation, and abbreviations enables the palaeographer to read and understand the text. The palaeographer must know, first, the language of the text (that is, one must become expert in the relevant earlier forms of these languages); and second, the historical usages of various styles of handwriting, common writing customs, and scribal or notarial abbreviations. Philological knowledge of the language, vocabulary, and grammar generally used at a given time or place can help palaeographers identify ancient or more recent forgeries versus authentic documents. Knowledge of writing materials is also essential to the study of handwriting and to the identification of the periods in which a document or manuscript may have been produced. An important goal may be to assign the text a date and a place of origin: this is why the palaeographer must take into account the style and formation of the manuscript and the handwriting used in it. Palaeography can be used to provide information about the date at which a document was written. However, "paleography is a last resort for dating" and, "for book hands, a period of 50 years is the least acceptable spread of time" with it being suggested that "the 'rule of thumb' should probably be to avoid dating a hand more precisely than a range of at least seventy or eighty years". In a 2005 e-mail addendum to his 1996 "The Paleographical Dating of P-46" paper Bruce W. Griffin stated "Until more rigorous methodologies are developed, it is difficult to construct a 95% confidence interval for NT manuscripts without allowing a century for an assigned date." William M Schniedewind went even further in the abstract to his 2005 paper "Problems of Paleographic Dating of Inscriptions" and stated that "The so-called science of paleography often relies on circular reasoning because there is insufficient data to draw precise conclusion about dating. Scholars also tend to oversimplify diachronic development, assuming models of simplicity rather than complexity". The Aramaic language was the international trade language of the Ancient Middle East, originating in what is modern-day Syria, between 1000 and 600 BC. It spread from the Mediterranean coast to the borders of India, becoming extremely popular and being adopted by many people, both with or without any previous writing system. The Aramaic script was written in a consonantal form with a direction from right to left. The Aramaic alphabet, a modified form of Phoenician, was the ancestor of the modern Arabic and Hebrew scripts, as well as the Brāhmī script, the parent writing system of most modern abugidas in India, Southeast Asia, Tibet, and Mongolia. Initially, the Aramaic script did not differ from the Phoenician, but then the Aramaeans simplified some of the letters, thickened and rounded their lines: a specific feature of its letters is the distinction between d and r. One innovation in Aramaic is the "matres lectionis" system to indicate certain vowels. Early Phoenician-derived scripts did not have letters for vowels, and so most texts recorded just consonants. Most likely as a consequence of phonetic changes in North Semitic languages, the Aramaeans reused certain letters in the alphabet to represent long vowels. The letter "aleph" was employed to write /ā/, "he" for /ō/, "yod" for /ī/, and "vav" for /ū/. Aramaic writing and language supplanted Babylonian cuneiform and Akkadian language, even in their homeland in Mesopotamia. The wide diffusion of Aramaic letters led to its writing being used not only in monumental inscriptions, but also on papyrus and potsherds. Aramaic papyri have been found in large numbers in Egypt, especially at Elephantine—among them are official and private documents of the Jewish military settlement in 5 BC. In the Aramaic papyri and potsherds, words are separated usually by a small gap, as in modern writing. At the turn of the 3rd to 2nd centuries BC, the heretofore uniform Aramaic letters developed new forms, as a result of dialectal and political fragmentation in several subgroups. The most important of these is the so-called square Hebrew block script, followed by Palmyrene, Nabataean, and the much later Syriac script. Aramaic is usually divided into three main parts: The term Middle Aramaic refers to the form of Aramaic which appears in pointed texts and is reached in the 3rd century AD with the loss of short unstressed vowels in open syllables, and continues until the triumph of Arabic. Old Aramaic appeared in the 11th century BC as the official language of the first Aramaean states. The oldest witnesses to it are inscriptions from northern Syria of the 10th to 8th centuries BC, especially extensive state treaties (c. 750 BC) and royal inscriptions. The early Old Ancient should be classified as "Ancient Aramaic" and consists of two clearly distinguished and standardised written languages, the Early Ancient Aramaic and the Late Ancient Aramaic. Aramaic was influenced at first principally by Akkadian, then from the 5th century BC by Persian and from the 3rd century BC onwards by Greek, as well as by Hebrew, especially in Palestine. As Aramaic evolved into the imperial language of the Neo-Assyrian Empire, the script used to write it underwent a change into something more cursive. The best examples of this script come from documents written on papyrus from Egypt. About 500 BC, Darius I (522–486) made the Aramaic used by the Achaemenid imperial administration into the official language of the western half of the Persian Empire. This so-called "Imperial Aramaic" (the oldest dated example, from Egypt, belonging to 495 BC) is based on an otherwise unknown written form of Ancient Aramaic from Babylonia. In orthography, Imperial Aramaic preserves historical forms—alphabet, orthography, morphology, pronunciation, vocabulary, syntax and style are highly standardised. Only the formularies of the private documents and the Proverbs of Ahiqar have maintained an older tradition of sentence structure and style. Imperial Aramaic immediately replaced Ancient Aramaic as a written language and, with slight modifications, it remained the official, commercial and literary language of the Near East until gradually, beginning with the fall of the Persian Empire (331 BC) and ending in the 4th century AD, it was replaced by Greek, Persian, the eastern and western dialects of Aramaic and Arabic, though not without leaving its traces in the written form of most of these. In its original Achaemenid form, Imperial Aramaic is found in texts of the 5th to 3rd centuries BC. These come mostly from Egypt and especially from the Jewish military colony of Elephantine, which existed at least from 530 to 399 BC. A history of Greek handwriting must be incomplete owing to the fragmentary nature of evidence. If one rules out the inscriptions on stone or metal, which belong to the science of , we are practically dependent for the period preceding the 4th or 5th century AD on the papyri from Egypt (cf. ), the earliest of which take back our knowledge only to the end of the 4th century BC. This limitation is less serious than might appear, since the few manuscripts not of Egyptian origin which have survived from this period, like the parchments from Avroman or Dura, the Herculaneum papyri, and a few documents found in Egypt but written elsewhere, reveal a uniformity of style in the various portions of the Greek world; but some differences can be discerned, and it is probable that, were there more material, distinct local styles could be traced. Further, during any given period several types of hand may exist together. There was a marked difference between the hand used for literary works (generally called "uncials" but, in the papyrus period, better styled "book-hand") and that of documents ("cursive") and within each of these classes several distinct styles were employed side by side; and the various types are not equally well represented in the surviving papyri. The development of any hand is largely influenced by the materials used. To this general rule the Greek script is no exception. Whatever may have been the period at which the use of papyrus or leather as a writing material began in Greece (and papyrus was employed in the 5th century BC), it is highly probable that for some time after the introduction of the alphabet the characters were incised with a sharp tool on stones or metal far oftener than they were written with a pen. In cutting a hard surface, it is easier to form angles than curves; in writing the reverse is the case; hence the development of writing was from angular letters ("capitals") inherited from epigraphic style to rounded ones ("uncials"). But only certain letters were affected by this development, in particular E (uncial ε), Σ (c), Ω (ω), and to a lesser extent A (α). The earliest Greek papyrus yet discovered is probably that containing the "Persae" of Timotheus, which dates from the second half of the 4th century BC and its script has a curiously archaic appearance. E, Σ, and Ω have the capital form, and apart from these test letters the general effect is one of stiffness and angularity. More striking is the hand of the earliest dated papyrus, a contract of 311 BC. Written with more ease and elegance, it shows little trace of any development towards a truly cursive style; the letters are not linked, and though the uncial c is used throughout, E and Ω have the capital forms. A similar impression is made by the few other papyri, chiefly literary, dating from about 300 BC; E may be slightly rounded, Ω approach the uncial form, and the angular Σ occurs as a letter only in the Timotheus papyrus, though it survived longer as a numeral (= 200), but the hands hardly suggest that for at least a century and a half the art of writing on papyrus had been well established. Yet before the middle of the 3rd century BC, one finds both a practised book-hand and a developed and often remarkably handsome cursive. These facts may be due to accident, the few early papyri happening to represent an archaic style which had survived along with a more advanced one; but it is likely that there was a rapid development at this period, due partly to the opening of Egypt, with its supplies of papyri, and still more to the establishment of the great Alexandrian Library, which systematically copied literary and scientific works, and to the multifarious activities of Hellenistic bureaucracy. From here onward, the two types of script were sufficiently distinct (though each influenced the other) to require separate treatment. Some literary papyri, like the roll containing Aristotle's "Constitution of Athens", were written in cursive hands, and, conversely, the book-hand was occasionally used for documents. Since the scribe did not date literary rolls, such papyri are useful in tracing the development of the book-hand. The documents of the mid-3rd century BC show a great variety of cursive hands. There are none from chancelleries of the Hellenistic monarchs, but some letters, notably those of Apollonius, the finance minister of Ptolemy II, to this agent, Zeno, and those of the Palestianian sheikh, Toubias, are in a type of script which cannot be very unlike the Chancery hand of the time, and show the Ptolemaic cursive at its best. These hands have a noble spaciousness and strength, and though the individual letters are by no means uniform in size there is a real unity of style, the general impression being one of breadth and uprightness. H, with the cross-stroke high, Π, Μ, with the middle stroke reduced to a very shallow curve, sometimes approaching a horizontal line, Υ, and Τ, with its cross-bar extending much further to the left than to the right of the up-stroke, Γ and Ν, whose last stroke is prolonged upwards above the line, often curving backwards, are all broad; ε, c, θ and β, which sometimes takes the form of two almost perpendicular strokes joined only at the top, are usually small; ω is rather flat, its second loop reduced to a practically straight line. Partly by the broad flat tops of the larger letters, partly by the insertion of a stroke connecting those (like H, Υ) which are not naturally adapted to linking, the scribes produced the effect of a horizontal line along the top of the writing, from which the letters seem to hang. This feature is indeed a general characteristic of the more formal Ptolemaic script, but it is specially marked in the 3rd century BC. Besides these hand of Chancery type, there are numerous less elaborate examples of cursive, varying according to the writer's skill and degree of education, and many of them strikingly easy and handsome. In some cursiveness is carried very far, the linking of letters reaching the point of illegibility, and the characters sloping to the right. A is reduced to a mere acute angle (∠), T has the cross-stroke only on the left, ω becomes an almost straight line, H acquires a shape somewhat like h, and the last stroke of N is extended far upwards and at times flattened out until it is little more than a diagonal stroke to the right. The attempt to secure a horizontal line along the top is here abandoned. This style was not due to inexpertness, but to the desire for speed, being used especially in accounts and drafts, and was generally the work of practised writers. How well established the cursive hand had now become is shown in some wax tablets of this period, the writing on which, despite the difference of material, closely resemble the hands of papyri. Documents of the late 3rd and early 2nd centuries BC show, perhaps partly by the accident of survival (there is nothing analogous to the Apollonius letters, a loss of breadth and spaciousness. In the more formal types the letters stand rather stiffly upright, often without the linking strokes, and are more uniform in size; in the more cursive they are apt to be packed closely together. These features are more marked in the hands of the 2nd century. The less cursive often show am approximation to the book-hand, the letters growing rounder and less angular than in the 3rd century; in the more cursive linking was carried further, both by the insertion of coupling strokes and by the writing of several letters continuously without raising the pen, so that before the end of the century an almost current hand was evolved. A characteristic letter, which survived into the early Roman period, is T, with its cross-stroke made in two portions (variants:). In the 1st century, the hand tended, so far as can be inferred from surviving examples, to disintegrate; one can recognise the signs which portend a change of style, irregularity, want of direction, and the loss of the feeling for style. A fortunate accident has preserved two Greek parchments written in Parthia, one dated 88 BC, in a practically unligatured hand, the other, 22/21 BC, in a very cursive script of Ptolemaic type; and though each has non-Egyptian features the general character indicates a uniformity of style in the Hellenistic world. The development of the Ptolemaic book-hand is difficult to trace, as there are few examples, mostly not datable on external grounds. Only for the 3rd century BC have we a secure basis. The hands of that period have an angular appearance; there is little uniformity in the size of individual letters, and though sometimes, notably in the Petrie papyrus containing the "Phaedo" of Plato, a style of considerable delicacy is attained, the book-hand in general shows less mastery than the contemporary cursive. In the 2nd century the letters grew rounder and more uniform in size, but in the 1st century there is perceptible, here as in the cursive hand, a certain disintegration. Probably at no time did the Ptolemaic book-hand acquire such unity of stylistic effect as the cursive. Papyri of the Roman period are far more numerous and show greater variety. The cursive of the 1st century has a rather broken appearance, part of one character being often made separately from the rest and linked to the next letter. A form characteristic of the 1st and 2nd century and surviving after that only as a fraction sign (=⅛) is η in the shape . By the end of the 1st century, there had been developed several excellent types of cursive, which, though differing considerably both in the forms of individual letters and in general appearance, bear a family likeness to one another. Qualities which are specially noticeable are roundness in the shape of letters, continuity of formation, the pen being carried on from character to character, and regularity, the letters not differing strikingly in size and projecting strokes above or below the line being avoided. Sometimes, especially in tax-receipts and in stereotyped formulae, cursiveness is carried to an extreme. In a letter of the prefect, dated in 209, we have a fine example of the Chancery hand, with tall and laterally compressed letters, ο very narrow and α and ω often written high in the line. This style, from at least the latter part of the 2nd century, exercised considerable influence on the local hands, many of which show the same characteristics less pronounced; and its effects may be traced into the early part of the 4th century. Hands of the 3rd century uninfluenced by it show a falling off from the perfection of the 2nd century; stylistic uncertainty and a growing coarseness of execution mark a period of decline and transition. Several different types of book-hand were used in the Roman period. Particularly handsome is a round, upright hand seen, for example, in a British Museum papyrus containing "Odyssey" III. The cross-stroke of ε is high, Μ deeply curved and Α has the form α. Uniformity of size is well attained, and a few strokes project, and these but slightly, above or below the line. Another type, well called by palaeographer Schubart the "severe" style, has a more angular appearance and not infrequently slopes to the right; though handsome, it has not the sumptuous appearance of the former. There are various classes of a less pretentious style, in which convenience rather than beauty was the first consideration and no pains were taken to avoid irregularities in the shape and alignment of the letters. Lastly may be mentioned a hand which is of great interest as being the ancestor of the type called (from its later occurrence in vellum codices of the Bible) the biblical hand. This, which can be traced back at least the late 2nd century, has a square, rather heavy appearance; the letters, of uniform size, stand upright, and thick and thin strokes are well distinguished. In the 3rd century the book-hand, like the cursive, appears to have deteriorated in regularity and stylistic accomplishment. In the charred rolls found at Herculaneum and dating from about the beginning of our era, are specimens of Greek literary hands from outside Egypt; and a comparison with the Egyptian papyri reveals great similarity in style and shows that conclusions drawn from the henads of Egypt may, with caution, be applied to the development of writing in the Greek world generally. The cursive hand of the 4th century shows some uncertainty of character. Side by side with the style founded on the Chancery hand, regular in formation and with tall and narrow letters, which characterised the period of Diocletian, and lasted well into the century, we find many other types mostly marked by a certain looseness and irregularity. A general progress towards a florid and sprawling hand is easily recognisable, but a consistent and deliberate style was hardly evolved before the 5th century, from which unfortunately few dated documents have survived. Byzantine cursive tends to an exuberant hand, in which the long strokes are excessively extended and individual letters often much enlarged. But not a few hands of the 5th and 6th centuries are truly handsome and show considerable technical accomplishment. Both an upright and a sloping type occur and there are many less ornamental hands, but there gradually emerged towards the 7th century two general types, one (especially used in letters and contracts) a current hand, sloping to the right, with long strokes in such characters at τ, ρ, ξ, η (which has the h shape), ι, and κ, and with much linking of letters, and another (frequent in accounts), which shows, at least in essence, most of the forms of the later minuscule. (cf. below.) This is often upright, though a slope to the right is quite common, and sometimes, especially in one or two documents of the early Arab period, it has an almost calligraphic effect. In the Byzantine period, the book-hand, which in earlier times had more than once approximated to the contemporary cursive, diverged widely from it. The change from papyrus to vellum involved no such modification in the forms of letters as followed that from metal to papyrus. The justification for considering the two materials separately is that after the general adoption of vellum, the Egyptian evidence is first supplemented and later superseded by that of manuscripts from elsewhere, and that during this period the hand most used was one not previously employed for literary purposes. The prevailing type of book-hand during what in papyrology is called the Byzantine period, that is, roughly from AD 300 to 650, is known as the biblical hand. It went back to at least the end of the 2nd century and had had originally no special connection with Christian literature. In manuscripts, whether vellum or paper, of the 4th century found in Egypt are met other forms of script, particularly a sloping, rather inelegant hand derived from the literary hand of the 3rd century, which persisted to at least the 5th century; but the three great early codices of the Bible are all written in uncials of the biblical type. In the Vaticanus, placed in the 4th century, the characteristics of the hand are least strongly marked; the letters have the forms characteristic of the type but without the heavy appearance of later manuscripts, and the general impression is one of greater roundness. In the Sinaiticus, which is not much later, the letters are larger and more heavily made; and in the Alexandrinus (5th century) a later development is seen, with emphatic distinction of thick and thin strokes. By the 6th century, alike in vellum and in papyrus manuscripts, the heaviness had become very marked, though the hand still retained, in its best examples, a handsome appearance; but after this it steadily deteriorated, becoming ever more mechanical and artificial. The thick strokes grew heavier; the cross strokes of T and Θ and the base of Δ were furnished with drooping spurs. The hand, which is often singularly ugly, passed through various modifications, now sloping, now upright, though it is not certain that these variations were really successive rather than concurrent. A different type of uncials, derived from the Chancery hand and seen in two papyrus examples of the Festal letters despatched annually by the Patriarch of Alexandria, was occasionally used, the best known example being the Codex Marchalianus (6th or 7th century). A combination of this hand with the other type is also known. The uncial hand lingered on, mainly for liturgical manuscripts, where a large and easily legible script was serviceable, as late as the 12th century, but in ordinary use it had long been superseded by a new type of hand, the minuscule, which originated in the 8th century, as an adaptation to literary purposes of the second of the types of Byzantine cursive mentioned above. A first attempt at a calligraphic use of this hand, seen in one or two manuscripts of the 8th or early 9th century, in which it slopes to the right and has a narrow, angular appearance, did not find favour, but by the end of the 9th century a more ornamental type, from which modern Greek script descended, was already established. It has been suggested that it was evolved in the Monastery of Stoudios at Constantinople. In its earliest examples it is upright and exact but lacks flexibility; accents are small, breathings square in formation, and in general only such ligatures are used as involve no change in the shape of letters. The single forms have a general resemblance (with considerable differences in detail) both to the minuscule cursive of late papyri, and to those used in modern Greek type; uncial forms were avoided. In the course of the 10th century the hand, without losing its beauty and exactness, gained in freedom. Its finest period was from the 9th to the 12th century, after which it rapidly declined. The development was marked by a tendency But from the first there were several styles, varying from the formal, regular hands characteristic of service books to the informal style, marked by numerous abbreviations, used in manuscripts intended only for a scholar's private use. The more formal hands were exceedingly conservative, and there are few classes of script more difficult to date than the Greek minuscule of this class. In the 10th, 11th and 12th centuries a sloping hand, less dignified than the upright, formal type, but often very handsome, was especially used for manuscripts of the classics. Hands of the 11th century are marked in general (though there are exceptions) by a certain grace and delicacy, exact but easy; those of the 12th by a broad, bold sweep and an increasing freedom, which readily admits uncial forms, ligatures and enlarged letters but has not lost the sense of style and decorative effect. In the 13th and still more in the 14th centuries there was a steady decline; the less formal hands lost their beauty and exactness, becoming ever more disorderly and chaotic in their effect, while formal style imitated the precision of an earlier period without attaining its freedom and naturalness, and often appears singularly lifeless. In the 15th century, especially in the West, where Greek scribes were in request to produce manuscripts of the classical authors, there was a revival, and several manuscripts of this period, though markedly inferior to those of the 11th and 12th centuries, are by no means without beauty. In the book-hand of early papyri, neither accents nor breathings were employed. Their use was established by the beginning of the Roman period, but was sporadic in papyri, where they were used as an aid to understanding, and therefore more frequently in poetry than prose, and in lyrical oftener than in other verse. In the cursive of papyri they are practically unknown, as are marks of punctuation. Punctuation was effected in early papyri, literary and documentary, by spaces, reinforced in the book-hand by the paragraphos, a horizontal stroke under the beginning of the line. The coronis, a more elaborate form of this, marked the beginning of lyrics or the principal sections of a longer work. Punctuation marks, the comma, the high, low and middle points, were established in the book-hand by the Roman period; in early Ptolemaic papyri, a double point (:) is found. In vellum and paper manuscripts, punctuation marks and accents were regularly used from at least the 8th century, though with some differences from modern practice. At no period down to the invention of printing did Greek scribes consistently separate words. The book-hand of papyri aimed at an unbroken succession of letters, except for distinction of sections; in cursive hands, especially where abbreviations were numerous, some tendency to separate words may be recognised, but in reality it was phrases or groups of letters rather than words which were divided. In the later minuscule word-division is much commoner but never became systematic, accents and breathings serving of themselves to indicate the proper division. The view that the art of writing in India developed gradually, as in other areas of the world, by going through the stages of pictographic, ideographic and transitional phases of the phonetic script, which in turn developed into syllabic and alphabetic scripts was challenged by Falk and others in the early 1990s. In the new paradigm, Indian alphabetic writing, called Brāhmī, was discontinuous with earlier, undeciphered, glyphs, and was invented specifically by King Ashoka for application in his royal edicts. In the subcontinent, three scripts like Indus, Kharoṣṭhī and Brāhmī became prevalent. In addition, Greek and Arabic scripts were also added to the Indian context after their penetration in the early centuries of the common era (CE). The decipherment and subsequent development of Indus glyphs is also a matter for continuing research and discussion. After a lapse of a few centuries the Kharoṣṭhī script became obsolete; the Greek script in India went through a similar fate and disappeared. But the Brāhmī and Arabic scripts endured for a much longer period. Moreover, there was a change and development in the Brāhmī script which may be traced in time and space through the Maurya, Kuṣāṇa, Gupta and early medieval periods. The present day Nāgarī script is derived from Brāhmī. The Brāhmī is also the ancestral script of many other Indian scripts, in northern and southern South Asia. Legends and inscriptions in Brāhmī are engraved upon leather, wood, terracotta, ivory, stone, copper, bronze, silver and gold. Arabic got an important place, particularly in the royalty, during the medieval period and it provides rich material for history writing. Most of the available inscriptions and manuscripts written in the above scripts—in languages like Prākrita, Pāḷi, Saṃskṛta, Apabhraṃśa, Tamil and Persian—have been read and exploited for history writing, but numerous inscriptions preserved in different museums still remain undeciphered for lack of competent palaeographic Indologists, as there is a gradual decline in the subcontinent of such disciplines as palaeography, epigraphy and numismatics. The discipline of ancient Indian scripts and the languages they are written needs new scholars who, by adopting traditional palaeographic methods and modern technology, may decipher, study and transcribe the various types of epigraphs and legends still extant today. The language of the earliest written records, that is, the Edicts of Ashoka, is Prakrit. Besides Prakrit, the Ashokan edicts are also written in Greek and Aramaic. Moreover, all the edicts of Ashoka engraved in the Kharoshthi and Brahmi scripts are in the Prakrit language: thus, originally the language employed in the inscriptions was Prakrit, with Sanskrit adopted at a later stage. Past the period of the Maurya Empire, the use of Prakrit continued in inscriptions for a few more centuries. In north India, Prakrit was replaced by Sanskrit by the end of the 3rd century, while this change took place about a century later in south India. Some of the inscriptions though written in Prakrit, were influenced by Sanskrit and vice versa. The epigraphs of the Kushana kings are found in a mixture of Prakrit and Sanskrit, while the Mathura inscriptions of the time of Sodasa, belonging to the first quarter of the 1st century, contain verses in classical Sanskrit. From the 4th century onwards, the Guptas came to power and made Sanskrit flourish by supporting it in language and literature. In western India and also in some regions of Andhra Pradesh and Karnataka, Prakrit was used till the 4th century, mostly in the Buddhist writings though in a few contemporary records of the Ikshvakus of Nagarjunakonda, Sanskrit was applied. The inscription of Yajna Sri Satakarni (2nd century) from Amaravati is considered to be the earliest so far. The earlier writings (4th century) of Salankayanas of the Telugu region are in Prakrit, while their later records (belonging to the 5th century) are written in Sanskrit. In the Kannada speaking area, inscriptions belonging to later Satavahanas and Chutus were written in Prakrit. From the 4th century onwards, with the rise of the Guptas, Sanskrit became the predominant language of India and continued to be employed in texts and inscriptions of all parts of India along with the regional languages in the subsequent centuries. The copper-plate charters of the Pallavas, the Cholas and the Pandyas documents are written in both Sanskrit and Tamil. Kannada is used in texts dating from about the 5th century and the Halmidi inscription is considered to be the earliest epigraph written in the Kannada language. Inscriptions in Telugu began to appear from the 6th or 7th century. Malayalam made its beginning in writings from the 15th century onwards. In north India, the Brahmi script was used over a vast area; however, Ashokan inscriptions are also found using Kharoshthi, Aramaic and Greek scripts. With the advent of the Saka-Kshatrapas and the Kushanas as political powers in north India, the writing system underwent a definite change due to the use of new writing tools and techniques. Further development of the Brahmi script and perceivable changes in its evolutionary trend can be discerned during the Gupta period: in fact, the Gupta script is considered to be the successor of the Kushana script in north India. From the 6th to about the 10th century of the common era, the inscriptions in north India were written in a script variously named, e.g., Siddhamatrika and Kutila ("Rañjanā script"). From the 8th century, Siddhamatrika developed into the Śāradā script in Kashmir and Punjab, into Proto-Bengali or Gaudi in Bengal and Orissa, and into Nagari in other parts of north India. Nāgarī script was used widely in northern India from the 10th century onwards. The use of Nandinagari, a variant of Nagari script, is mostly confined to the Karnataka region. In central India, mostly in Madhya Pradesh, the inscriptions of the Vakatakas, and the kings of Sarabhapura and Kosala were written in what are known as "box-headed" and "nail-headed" characters. It may be noted that the early Kadambas of Karnataka also employed "nail-headed" characters in some of their inscriptions. During the 3rd–4th century, the script used in the inscriptions of Ikshvakus of Nagarjunakonda developed a unique style of letter-forms with elongated verticals and artistic flourishes, which did not continue after their rule. The earliest attested form of writing in South India is represented by inscriptions found in caves, associated with the Chalukya and Chera dynasties. These are written in variants of what is known as the Cave character, and their script differs from the Northern version in being more angular. Most of the modern scripts of South India have evolved from this script, with the exception of Vatteluttu, the exact origins of which are unknown, and Nandinagari, which is a variant of Devanagari that developed due to later Northern influence. In south India from the 7th century of the common era onwards, a number of inscriptions belonging to the dynasties of Pallava, Chola and Pandya are found. These records are written in three different scripts known as Tamil, Vattezhuttu and Grantha scripts, the last variety being used to write Sanskrit inscriptions. In the Kerala region, the Vattezhuttu script developed into a still more cursive script called Kolezhuthu during the 14th and 15th centuries. At the same time, the modern Malayalam script developed out of the Grantha script. The early form of the Telugu-Kannada script is found in the inscriptions of the early Kadambas of Banavasi and the early Chalukyas of Badami in the west, and Salankayana and the early Eastern Chalukyas in the east who ruled the Kannada and Telugu speaking areas respectively, during the 4th to 7th centuries. Attention should be drawn at the outset to certain fundamental definitions and principles of the science. The original characters of an alphabet are modified by the material and the implements used. When stone and chisel are discarded for papyrus and reed-pen, the hand encounters less resistance and moves more rapidly. This leads to changes in the size and position of the letters, and then to the joining of letters, and, consequently, to altered shapes. We are thus confronted at an early date with quite distinct types. The majuscule style of writing, based on two parallel lines, ADPL, is opposed to the minuscule, based on a system of four lines, with letters of unequal height, adpl. Another classification, according to the care taken in forming the letters, distinguishes between the set book-hand and the cursive script. The difference in this case is determined by the subject matter of the text; the writing used for books ("scriptura libraria") is in all periods quite distinct from that used for letters and documents ("epistolaris, diplomatica"). While the set book-hand, in majuscule or minuscule, shows a tendency to stabilise the forms of the letters, the cursive, often carelessly written, is continually changing in the course of years and according to the preferences of the writers. This being granted, a summary survey of the morphological history of the Latin alphabet shows the zenith of its modifications at once, for its history is divided into two very unequal periods, the first dominated by majuscule and the second by minuscule writing. Jean Mabillon, a French Benedictine monk, scholar and antiquary, whose work "De re diplomatica" was published in 1681, is widely regarded as the founder of the twin disciplines of palaeography and diplomatics. However, the actual term "palaeography" was coined (in Latin) by Bernard de Montfaucon, a Benedictine monk, in the title of his "Palaeographia Graeca" (1708), which remained a standard work in the specific field of Greek palaeography for more than a century. With their establishment of palaeography, Mabillon and his fellow Benedictines were responding to the Jesuit Daniel Papebroch, who doubted the authenticity of some of the documents which the Benedictines offered as credentials for the authorisation of their monasteries. In the 19th century such scholars as Wilhelm Wattenbach, Leopold Delisle and Ludwig Traube contributed greatly to making palaeography independent from diplomatic. In the 20th century, the 'New French School' of palaeographers, especially Jean Mallon, gave a new direction to the study of scripts by stressing the importance of ductus (the shape and order of the strokes used to compose letters) in studying the historical development of scripts. The Latin alphabet first appears in the epigraphic type of majuscule writing, known as capitals. These characters form the main stem from which developed all the branches of Latin writing. On the oldest monuments (the "inscriptiones bello Hannibalico antiquiores" of the "Corpus Inscriptionum Latinarum = CIL"), it is far from showing the orderly regularity of the later period. Side by side with upright and square characters are angular and sloping forms, sometimes very distorted, which seem to indicate the existence of an early cursive writing from which they would have been borrowed. Certain literary texts clearly allude to such a hand. Later, the characters of the cursive type were progressively eliminated from formal inscriptions, and capital writing reached its perfection in the Augustan Age. Epigraphists divide the numerous inscriptions of this period into two quite distinct classes: "tituli", or formal inscriptions engraved on stone in elegant and regular capitals, and "acta", or legal texts, documents, etc., generally engraved on bronze in cramped and careless capitals. Palaeography inherits both these types. Reproduced by scribes on papyrus or parchment, the elegant characters of the inscriptions become the square capitals of the manuscripts, and the "actuaria", as the writing of the "acta" is called, becomes the rustic capital. Of the many books written in square capitals, the "éditions de luxe" of ancient times, only a few fragments have survived, the most famous being pages from manuscripts of Virgil. The finest examples of rustic capitals, the use of which is attested by papyri of the 1st century, are to be found in manuscripts of Virgil and Terence. Neither of these forms of capital writing offers any difficulty in reading, except that no space is left between the words. Their dates are still uncertain, in spite of attempts to determine them by minute observation. The rustic capitals, more practical than the square forms, soon came into general use. This was the standard form of writing, so far as books are concerned, until the 5th century, when it was replaced by a new type, the uncial, which is discussed below. While the set book-hand, in square or rustic capitals, was used for the copying of books, the writing of everyday life, letters and documents of all kinds, was in a cursive form, the oldest examples of which are provided by the graffiti on walls at Pompeii ("CIL", iv), a series of waxen tablets, also discovered at Pompeii ("CIL", iv, supplement), a similar series found at Verespatak in Transylvania ("CIL", iii) and a number of papyri. From a study of a number of documents which exhibit transitional forms, it appears that this cursive was originally simplified capital writing. The evolution was so rapid, however, that at quite an early date the "scriptura epistolaris" of the Roman world can no longer be described as capitals. By the 1st century, this kind of writing began to develop the principal characteristics of two new types: the uncial and the minuscule cursive. With the coming into use of writing surfaces which were smooth, or offered little resistance, the unhampered haste of the writer altered the shape, size and position of the letters. In the earliest specimens of writing on wax, plaster or papyrus, there appears a tendency to represent several straight strokes by a single curve. The cursive writing thus foreshadows the specifically uncial forms. The same specimens show great inequality in the height of the letters; the main strokes are prolonged upwards (= b; = d) or downwards (= q; = s). In this direction, the cursive tends to become a minuscule hand. Although the characteristic forms of the uncial type appear to have their origin in the early cursive, the two hands are nevertheless quite distinct. The uncial is a "libraria", closely related to the capital writing, from which it differs only in the rounding off of the angles of certain letters, principally . It represents a compromise between the beauty and legibility of the capitals and the rapidity of the cursive, and is clearly an artificial product. It was certainly in existence by the latter part of the 4th century, for a number of manuscripts of that date are written in perfect uncial hands ("Exempla", pl. XX). It presently supplanted the capitals and appears in numerous manuscripts which have survived from the 5th, 6th and 7th centuries, when it was at its height. By this time it had become an imitative hand, in which there was generally no room for spontaneous development. It remained noticeably uniform over a long period. It is difficult therefore to date the manuscripts by palaeographical criteria alone. The most that can be done is to classify them by centuries, on the strength of tenuous data. The earliest uncial writing is easily distinguished by its simple and monumental character from the later hands, which become progressively stiff and affected. In the ancient cursive writing, from the 1st century onward, there are symptoms of transformation in the form of certain letters, the shape and proportions of which correspond more closely to the definition of minuscule writing than to that of majuscule. Rare and irregular at first, they gradually become more numerous and more constant and by degrees supplant the majuscule forms, so that in the history of the Roman cursive there is no precise boundary between the majuscule and minuscule periods. The oldest example of minuscule cursive writing that has been discovered is a letter on papyrus, found in Egypt, dating from the 4th century. This marks a highly important date in the history of Latin writing, for with only one known exception, not yet adequately explained—two fragments of imperial rescripts of the 5th century—the minuscule cursive was consequently the only "scriptura epistolaris" of the Roman world. The ensuing succession of documents show a continuous improvement in this form of writing, characterised by the boldness of the strokes and by the elimination of the last lingering majuscule forms. The Ravenna deeds of the 5th and 6th centuries exhibit this hand at its perfection. At this period, the minuscule cursive made its appearance as a "book hand", first as marginal notes, and later for the complete books themselves. The only difference between the book-hand and that used for documents is that the principal strokes are shorter and the characters thicker. This form of the hand is usually called "semi-cursive". The fall of the Empire and the establishment of the barbarians within its former boundaries did not interrupt the use of the Roman minuscule cursive hand, which was adopted by the newcomers. But for gaps of over a century in the chronological series of documents which have been preserved, it would be possible to follow the evolution of the Roman cursive into the so-called "national hands", forms of minuscule writing which flourished after the barbarian invasions in Italy, France, Spain, England and Ireland, and which are still known as Lombardic, Merovingian, Visigothic, Anglo-Saxon and Irish. These names came into use at a time when the various national hands were believed to have been invented by the peoples who used them, but their connotation is merely geographical. Nevertheless, in spite of a close resemblance which betrays their common origin, these hands are specifically different, perhaps because the Roman cursive was developed by each nation in accordance with its artistic tradition. In Italy, after the close of the Roman and Byzantine periods, the writing is known as Lombardic, a generic term which comprises several local varieties. These may be classified under four principal types: two for the "scriptura epistolaris", the old Italian cursive and the papal chancery hand, or "littera romana", and two for the "libraria", the old Italian book-hand and Lombardic in the narrow sense, sometimes known as "Beneventana" on account of the fact that it flourished in the principality of Benevento. The oldest preserved documents written in the old Italian cursive show all the essential characteristics of the Roman cursive of the 6th century. In northern Italy, this hand began in the 9th century to be influenced by a minuscule book-hand which developed, as will be seen later, in the time of Charlemagne; under this influence it gradually disappeared, and ceased to exist in the course of the 12th century. In southern Italy, it persisted far on into the later Middle Ages. The papal chancery hand, a variety of Lombardic peculiar to the vicinity of Rome and principally used in papal documents, is distinguished by the formation of the letters "a, e, q, t". It is formal in appearance at first, but is gradually simplified, under the influence of the Carolingian minuscule, which finally prevailed in the bulls of Honorius II (1124–1130). The notaries public in Rome continued to use the papal chancery hand until the beginning of the 13th century. The old Italian book-hand is simply a semi-cursive of the type already described as in use in the 6th century. The principal examples are derived from "scriptoria" in northern Italy, where it was displaced by the Carolingian minuscule during the 9th century. In southern Italy, this hand persisted, developing into a calligraphic form of writing, and in the 10th century took on a very artistic angular appearance. The "Exultet" rolls provide the finest examples. In the 9th century, it was introduced in Dalmatia by the Benedictine monks and developed there, as in Apulia, on the basis of the archetype, culminating in a rounded "Beneventana" known as the "Bari type". The offshoot of the Roman cursive which developed in Gaul under the first dynasty of kings is called Merovingian writing. It is represented by thirty-eight royal diplomas, a number of private charters and the authenticating documents of relics. Though less than a century intervenes between the Ravenna cursive and the oldest extant Merovingian document (AD 625), there is a great difference in appearance between the two writings. The facile flow of the former is replaced by a cramped style, in which the natural slope to the right gives way to an upright hand, and the letters, instead of being fully outlined, are compressed to such an extent that they modify the shape of other letters. Copyists of books used a cursive similar to that found in documents, except that the strokes are thicker, the forms more regular, and the heads and tails shorter. The Merovingian cursive as used in books underwent simplification in some localities, undoubtedly through the influence of the minuscule book-hand of the period. The two principal centres of this reform were Luxeuil and Corbie. In Spain, after the Visigothic conquest, the Roman cursive gradually developed special characteristics. Some documents attributed to the 7th century display a transitional hand with straggling and rather uncouth forms. The distinctive features of Visigothic writing, the most noticeable of which is certainly the q-shaped g, did not appear until later, in the book-hand. The book-hand became set at an early date. In the 8th century it appears as a sort of semi-cursive; the earliest example of certain date is ms lxxxix in the Capitular Library in Verona. From the 9th century the calligraphic forms become broader and more rounded until the 11th century, when they become slender and angular. The Visigothic minuscule appears in a cursive form in documents about the middle of the 9th century, and in the course of time grows more intricate and consequently less legible. It soon came into competition with the Carolingian minuscule, which supplanted it as a result of the presence in Spain of French elements such as Cluniac monks and warriors engaged in the campaign against the Moors. The Irish and Anglo-Saxon hands, which were not directly derived from the Roman minuscule cursive, will be discussed in a separate sub-section below. One by one, the national minuscule cursive hands were replaced by a set minuscule hand which has already been mentioned and its origins may now be traced from the beginning. The early cursive was the medium in which the minuscule forms were gradually evolved from the corresponding majuscule forms. Minuscule writing was therefore cursive in its inception. As the minuscule letters made their appearance in the cursive writing of documents, they were adopted and given calligraphic form by the copyists of literary texts, so that the set minuscule alphabet was constituted gradually, letter by letter, following the development of the minuscule cursive. Just as some documents written in the early cursive show a mixture of majuscule and minuscule forms, so certain literary papyri of the 3rd century, and inscriptions on stone of the 4th century yield examples of a mixed set hand, with minuscule forms side by side with capital and uncial letters. The number of minuscule forms increases steadily in texts written in the mixed hand, and especially in marginal notes, until by the end of the 5th century the majuscule forms have almost entirely disappeared in some manuscripts. This quasi-minuscule writing, known as the "half-uncial" thus derives from a long line of mixed hands which, in a synoptic chart of Latin scripts, would appear close to the oldest "librariae", and between them and the "epistolaris" (cursive), from which its characteristic forms were successively derived. It had a considerable influence on the continental "scriptura libraria" of the 7th and 8th centuries. The half-uncial hand was introduced in Ireland along with Latin culture in the 5th century by priests and laymen from Gaul, fleeing before the barbarian invasions. It was adopted there to the exclusion of the cursive, and soon took on a distinct character. There are two well established classes of Irish writing as early as the 7th century: a large round half-uncial hand, in which certain majuscule forms frequently appear, and a pointed hand, which becomes more cursive and more genuinely minuscule. The latter developed out of the former. One of the distinguishing marks of manuscripts of Irish origin is to be found in the initial letters, which are ornamented by interlacing, animal forms, or a frame of red dots. The most certain evidence, however, is provided by the system of abbreviations and by the combined square and cuneiform appearance of the minuscule at the height of its development. The two types of Irish writing were introduced in the north of Great Britain by the monks, and were soon adopted by the Anglo-Saxons, being so exactly copied that it is sometimes difficult to determine the origin of an example. Gradually, however, the Anglo-Saxon writing developed a distinct style, and even local types, which were superseded after the Norman conquest by the Carolingian minuscule. Through St Columba and his followers, Irish writing spread to the continent, and manuscripts were written in the Irish hand in the monasteries of Bobbio Abbey and St Gall during the 7th and 8th centuries. James J. John points out that the disappearance of imperial authority around the end of the 5th century in most of the Latin-speaking half of the Roman Empire does not entail the disappearance of the Latin scripts, but rather introduced conditions that would allow the various provinces of the West gradually to drift apart in their writing habits, a process that began around the 7th century. Pope Gregory I (Gregory the Great, d. 604) was influential in the spread of Christianity to Britain and also sent Queens Theodelinde and Brunhilda, as well as Spanish bishops, copies of manuscripts. Furthermore, he sent the Roman monk Augustine of Canterbury to Britain on a missionary journey, on which Augustine may have brought manuscripts. Although Italy's dominance as a centre of manuscript production began to decline, especially after the Gothic War (535–554) and the invasions by the Lombards, its manuscripts—and more important, the scripts in which they were written—were distributed across Europe. From the 6th through the 8th centuries, a number of so-called 'national hands' were developed throughout the Latin-speaking areas of the former Roman Empire. By the late 6th century Irish scribes had begun transforming Roman scripts into Insular minuscule and majuscule scripts. A series of transformations, for book purposes, of the cursive documentary script that had grown out of the later Roman cursive would get under way in France by the mid-7th century. In Spain half-uncial and cursive would both be transformed into a new script, the Visigothic minuscule, no later than the early 8th century. Beginning in the 8th century, as Charlemagne began to consolidate power over a large area of western Europe, scribes developed a minuscule script (Caroline minuscule) that effectively became the standard script for manuscripts from the 9th to the 11th centuries. The origin of this hand is much disputed. This is due to the confusion which prevailed before the Carolingian period in the "libraria" in France, Italy and Germany as a result of the competition between the cursive and the set hands. In addition to the calligraphic uncial and half-uncial writings, which were imitative forms, little used and consequently without much vitality, and the minuscule cursive, which was the most natural hand, there were innumerable varieties of mixed writing derived from the influence of these hands on each other. In some, the uncial or half-uncial forms were preserved with little or no modification, but the influence of the cursive is shown by the freedom of the strokes; these are known as rustic, semi-cursive or cursive uncial or half-uncial hands. Conversely, the cursive was sometimes affected, in varying degrees, by the set "librariae"; the cursive of the "epistolaris" became a semi-cursive when adopted as a "libraria". Nor is this all. Apart from these reciprocal influences affecting the movement of the hand across the page, there were morphological influences at work, letters being borrowed from one alphabet for another. This led to compromises of all softs and of infinite variety between the uncial and half-uncial and the cursive. It will readily be understood that the origin of the Carolingian minuscule, which must be sought in this tangle of pre-Carolingian hands, involves disagreement. The new writing is admittedly much more closely related to the "epistolaris" than the primitive minuscule; this is shown by certain forms, such as the open a (), which recall the cursive, by the joining of certain letters, and by the clubbing of the tall letters b d h l, which resulted from a cursive "ductus". Most palaeographers agree in assigning the new hand the place shown in the following table: Controversy turns on the question whether the Carolingian minuscule is the primitive minuscule as modified by the influence of the cursive or a cursive based on the primitive minuscule. Its place of origin is also uncertain: Rome, the Palatine school, Tours, Reims, Metz, Saint-Denis and Corbie have been suggested, but no agreement has been reached. In any case, the appearance of the new hand is a turning point in the history of culture. So far as Latin writing is concerned, it marks the dawn of modern times. In the 12th century, Carolingian minuscule underwent a change in its appearance and adopted bold and broken Gothic letter-forms. This style remained predominant, with some regional variants, until the 15th century, when the Renaissance humanistic scripts revived a version of Carolingian minuscule. It then spread from the Italian Renaissance all over Europe. These humanistic scripts are the base for the antiqua and the handwriting forms in western and southern Europe. In Germany and Austria, the "Kurrentschrift" was rooted in the cursive handwriting of the later Middle Ages. With the name of the calligrapher Ludwig Sütterlin, this handwriting counterpart to the blackletter typefaces was abolished by Hitler in 1941. After World War II, it was taught as an alternative script in some areas until the 1970s; it is no longer taught. Secretary hand is an informal business hand of the Renaissance. There are undeniable points of contact between architecture and palaeography, and in both it is possible to distinguish a Romanesque and a Gothic period . The creative effort which began in the post-Carolingian period culminated at the beginning of the 12th century in a calligraphy and an architecture which, though still somewhat awkward, showed unmistakable signs of power and experience, and at the end of that century and in the first half of the 13th both arts reached their climax and made their boldest flights. The topography of later medieval writing is still being studied; national varieties can, of course, be identified but the problem of distinguishing features becomes complicated as a result of the development of international relations, and the migration of clerks from one end of Europe to the other. During the later centuries of the Middle Ages the Gothic minuscule continued to improve within the restricted circle of "de luxe" editions and ceremonial documents. In common use, it degenerated into a cursive which became more and more intricate, full of superfluous strokes and complicated by abbreviations. In the first quarter of the 15th century an innovation took place which exercised a decisive influence on the evolution of writing in Europe. The Italian humanists were struck by the eminent legibility of the manuscripts, written in the improved Carolingian minuscule of the 10th and 11th centuries, in which they discovered the works of ancient authors, and carefully imitated the old writing. In Petrarch's compact book hand, the wider leading and reduced compression and round curves are early manifestations of the reaction against the crabbed Gothic secretarial minuscule we know today as "blackletter". Petrarch was one of the few medieval authors to have written at any length on the handwriting of his time; in his essay on the subject, "La scrittura" he criticized the current scholastic hand, with its laboured strokes ("artificiosis litterarum tractibus") and exuberant ("luxurians") letter-forms amusing the eye from a distance, but fatiguing on closer exposure, as if written for other purpose than to be read. For Petrarch the gothic hand violated three principles: writing, he said, should be simple ("castigata"), clear ("clara") and orthographically correct. Boccaccio was a great admirer of Petrarch; from Boccaccio's immediate circle this post-Petrarchan "semi-gothic" revised hand spread to "literati" in Florence, Lombardy and the Veneto. A more thorough reform of handwriting than the Petrarchan compromise was in the offing. The generator of the new style ("illustration") was Poggio Bracciolini, a tireless pursuer of ancient manuscripts, who developed the new humanist script in the first decade of the 15th century. The Florentine bookseller Vespasiano da Bisticci recalled later in the century that Poggio had been a very fine calligrapher of "lettera antica" and had transcribed texts to support himself—presumably, as Martin Davies points out— before he went to Rome in 1403 to begin his career in the papal curia. Berthold Ullman identifies the watershed moment in the development of the new humanistic hand as the youthful Poggio's transcription of Cicero's "Epistles to Atticus". By the time the Medici library was catalogued in 1418, almost half the manuscripts were noted as in the "lettera antica". The new script was embraced and developed by the Florentine humanists and educators Niccolò de' Niccoli and Coluccio Salutati. The papal chancery adopted the new fashion for some purposes, and thus contributed to its diffusion throughout Christendom. The printers played a still more significant part in establishing this form of writing by using it, from the year 1465, as the basis for their types. The humanistic minuscule soon gave rise to a sloping cursive hand, known as the Italian, which was also taken up by printers in search of novelty and thus became the italic type. In consequence, the Italian hand became widely used, and in the 16th century began to compete with the Gothic cursive. In the 17th century, writing masters were divided between the two schools, and there was in addition a whole series of compromises. The Gothic characters gradually disappeared, except a few that survived in Germany. The Italian became universally used, brought to perfection in more recent times by English calligraphers.
https://en.wikipedia.org/wiki?curid=24566
Pollutant A pollutant is a substance or energy introduced into the environment that has undesired effects, or adversely affects the usefulness of a resource. A pollutant may cause long- or short-term damage by changing the growth rate of plant or animal species, or by interfering with human amenities, comfort, health, or property values. Some pollutants are biodegradable and therefore will not persist in the environment in the long term. However, the degradation products of some pollutants are themselves polluting such as the products DDE and DDD produced from the degradation of DDT. Pollutants, towards which the environment has low absorptive capacity are called "stock pollutants". (e.g. persistent organic pollutants such as PCBs, non-biodegradable plastics and heavy metals). Stock pollutants accumulate in the environment over time. The damage they cause increases as more pollutant is emitted, and persists as the pollutant accumulates. Stock pollutants can create a burden for the future generations, bypassing on the damage that persists well after the benefits received from incurring that damage, have been forgotten. Notable pollutants include the following groups Fund pollutants are those for which the environment has the moderate absorptive capacity. Fund pollutants do not cause damage to the environment unless the emission rate exceeds the receiving environment's absorptive capacity (e.g. carbon dioxide, which is absorbed by plants and oceans). Fund pollutants are not destroyed, but rather converted into less harmful substances, or diluted/dispersed to non-harmful concentrations. Light pollution is the impact that anthropogenic light has on the visibility of the night sky. It also encompasses ecological light pollution which describes the effect of artificial light on individual organisms and on the structure of ecosystems as a whole. Primary Pollutants: Pollutants which are emitted directly to the environment are called primary pollutants Pollutants can also be defined by their zones of influence, both horizontally and vertically. The horizontal zone refers to the area that is damaged by a pollutant. Local pollutants cause damage near the emission source. Regional pollutants cause damage further from the emission source. The vertical zone refers to whether the damage is ground-level or atmospheric. Surface pollutants cause damage by accumulating near the Earth's surface. Global pollutants cause damage by concentrating on the atmosphere. Pollutants can cross international borders and therefore international regulations are needed for their control. The Stockholm Convention on Persistent Organic Pollutants, which entered into force in 2004, is an international legally binding agreement for the control of persistent organic pollutants. Pollutant Release and Transfer Registers (PRTR) are systems to collect and disseminate information on environmental releases and transfers of toxic chemicals from industrial and other facilities. The European Pollutant Emission Register is a type of PRTR providing access to information on the annual emissions of industrial facilities in the Member States of the European Union, as well as Norway. Clean Air Act standards. Under the Clean Air Act, the National Ambient Air Quality Standards (NAAQS) are developed by the Environmental Protection Agency (EPA) for six common air pollutants, also called "criteria pollutants": particulates; smog and ground-level ozone; carbon monoxide; sulfur oxides; nitrogen oxides; and lead. The National Emissions Standards for Hazardous Air Pollutants are additional emission standards that are set by EPA for toxic air pollutants. Clean Water Act standards. Under the Clean Water Act, EPA promulgated national standards for municipal sewage treatment plants, also called "publicly owned treatment works," in the "Secondary Treatment Regulation." National standards for industrial dischargers are called "Effluent guidelines" (for existing sources) and New Source Performance Standards, and currently cover over 50 industrial categories. In addition, the Act requires states to publish water quality standards for individual water bodies to provide additional protection where the national standards are insufficient. RCRA standards. The Resource Conservation and Recovery Act (RCRA) regulates the management, transport and disposal of municipal solid waste, hazardous waste and underground storage tanks.
https://en.wikipedia.org/wiki?curid=24567
Pepsi Pepsi is a carbonated soft drink manufactured by PepsiCo. Originally created and developed in 1893 by Caleb Bradham and introduced as Brad's Drink, it was renamed as Pepsi-Cola in 1898, and then shortened to Pepsi in 1961. Pepsi was first introduced as "Brad's Drink" in New Bern, North Carolina, United States, in 1893 by Caleb Bradham, who made it at his drugstore where the drink was sold. It was renamed Pepsi-Cola in 1898 after the Greek word for "digestion" ("πέψις", pronounced Pepsis), which the drink was purported to aid, and "cola" after the kola nut. The original recipe also included sugar and vanilla. Bradham sought to create a fountain drink that was appealing and would aid in digestion and boost energy. In 1903, Bradham moved the bottling of Pepsi-Cola from his drugstore to a rented warehouse. That year, Bradham sold 7,968 gallons of syrup. The next year, Pepsi was sold in six-ounce bottles, and sales increased to 19,848 gallons. In 1909, automobile race pioneer Barney Oldfield was the first celebrity to endorse Pepsi-Cola, describing it as "A bully drink...refreshing, invigorating, a fine bracer before a race." The advertising theme "Delicious and Healthful" was then used over the next two decades. In 1923, the Pepsi-Cola Company entered bankruptcy—in large part due to financial losses incurred by speculating on the wildly fluctuating sugar prices as a result of World War I. Assets were sold and Roy C. Megargel bought the Pepsi trademark. Megargel was unsuccessful in efforts to find funding to revive the brand and soon Pepsi's assets were purchased by Charles Guth, the president of Loft, Inc. Loft was a candy manufacturer with retail stores that contained soda fountains. He sought to replace Coca-Cola at his stores' fountains after the Coca-Cola Company refused to give him additional discounts on syrup. Guth then had Loft's chemists reformulate the Pepsi-Cola syrup formula. On three separate occasions between 1922 and 1933, the Coca-Cola Company was offered the opportunity to purchase the Pepsi-Cola company, and it declined on each occasion. During the Great Depression, Pepsi-Cola gained popularity following the introduction in 1934 of a 12-ounce bottle. Prior to that, Pepsi and Coca-Cola sold their drinks in 6.5-ounce servings for about $0.05 a bottle. With a radio advertising campaign featuring the popular jingle "Nickel, Nickel" – first recorded by the Tune Twisters in 1940 – Pepsi encouraged price-conscious consumers to double the volume their nickels could purchase. The jingle is arranged in a way that loops, creating a never-ending tune:"Pepsi-Cola hits the spot / Twelve full ounces, that's a lot / Twice as much for a nickel, too / Pepsi-Cola is the drink for you."Coming at a time of economic crisis, the campaign succeeded in boosting Pepsi's status. From 1936 to 1938, Pepsi-Cola's profits doubled. Pepsi's success under Guth came while the Loft Candy business was faltering. Since he had initially used Loft's finances and facilities to establish the new Pepsi success, the near-bankrupt Loft Company sued Guth for possession of the Pepsi-Cola company. A long legal battle, "Guth v. Loft", then ensued, with the case reaching the Delaware Supreme Court and ultimately ending in a loss for Guth. From the 1930s through the late 1950s, "Pepsi-Cola Hits The Spot" was the most commonly used slogan in the days of old radio, classic motion pictures, and later television. Its jingle (conceived in the days when Pepsi cost only five cents) was used in many different forms with different lyrics. With the rise of radio, Pepsi utilized the services of a young, up-and-coming actress named Polly Bergen to promote products, oftentimes lending her singing talents to the classic "...Hits The Spot" jingle. Film actress Joan Crawford, after marrying Pepsi-Cola president Alfred N. Steele became a spokesperson for Pepsi, appearing in commercials, television specials, and televised beauty pageants on behalf of the company. Crawford also had images of the soft drink placed prominently in several of her later films. When Steele died in 1959, Crawford was appointed to the Board of Directors of Pepsi-Cola, a position she held until 1973, although she was not a board member of the larger PepsiCo, created in 1965. Pepsi has been featured in several films, including "Back to the Future" (1985), "Home Alone" (1990), "Wayne's World" (1992), "Fight Club" (1999), and "World War Z" (2013). In 1992, the Pepsi Number Fever marketing campaign in the Philippines accidentally distributed 800,000 winning bottle caps for a 1 million peso grand prize, leading to riots and the deaths of five people. In 1996, PepsiCo launched the highly successful Pepsi Stuff marketing strategy. "Project Blue" was launched in several international markets outside the United States in April. The launch included extravagant publicity stunts, such as a Concorde aeroplane painted in blue colors (which was owned by Air France) and a banner on the Mir space station. The Project Blue design arrived in the United States test marketed in June 1997, and finally released in 1998 worldwide to celebrate Pepsi's 100th anniversary. It was at this point the logo began to be referred to as the Pepsi Globe. In October 2008, Pepsi announced that it would be redesigning its logo and re-branding many of its products by early 2009. In 2009, Pepsi, Diet Pepsi, and Pepsi Max began using all lower-case fonts for name brands. The brand's blue and red globe trademark became a series of "smiles", with the central white band arcing at different angles depending on the product until 2010. Pepsi released this logo in U.S. in late 2008, and later it was released in 2009 in Canada (the first country outside of the United States for Pepsi's new logo), Brazil, Bolivia, Guatemala, Nicaragua, Honduras, El Salvador, Colombia, Argentina, Puerto Rico, Costa Rica, Panama, Chile, Dominican Republic, the Philippines, and Australia. In the rest of the world, the new logo was released in 2010. The old logo is still used in several international markets, and has been phased out most recently in France and Mexico. Walter Mack was named the new president of Pepsi-Cola and guided the company through the 1940s. Mack, who supported progressive causes, noticed that the company's strategy of using advertising for a general audience either ignored African Americans or used ethnic stereotypes in portraying blacks. Up until the 1940s, the full revenue potential of what was called "the Negro market" was largely ignored by white-owned manufacturers in the U.S. Mack realized that blacks were an untapped niche market and that Pepsi stood to gain market share by targeting its advertising directly towards them. To this end, he hired Hennan Smith, an advertising executive "from the Negro newspaper field" to lead an all-black sales team, which had to be cut due to the onset of World War II. In 1947, Walter Mack resumed his efforts, hiring Edward F. Boyd to lead a twelve-man team. They came up with advertising portraying black Americans in a positive light, such as one with a smiling mother holding a six pack of Pepsi while her son (a young Ron Brown, who grew up to be Secretary of Commerce) reaches up for one. Another ad campaign, titled "Leaders in Their Fields", profiled twenty prominent African Americans such as Nobel Peace Prize winner Ralph Bunche and photographer Gordon Parks. Boyd also led a sales team composed entirely of blacks around the country to promote Pepsi. Racial segregation and Jim Crow laws were still in place throughout much of the U.S.; Boyd's team faced a great deal of discrimination as a result, from insults by Pepsi co-workers to threats by the Ku Klux Klan. On the other hand, it was able to use its anti-racism stance as a selling point, attacking Coke's reluctance to hire blacks and support by the chairman of the Coca-Cola Company for segregationist governor of Georgia Herman Talmadge. As a result, Pepsi's market share as compared to Coca-Cola's shot up dramatically in the 1950s with African American soft-drink consumers three times more likely to purchase Pepsi over Coke. After the sales team visited Chicago, Pepsi's share in the city overtook that of Coke for the first time. Journalist Stephanie Capparell interviewed six men who were on the team in the late 1940s. The team members had a grueling schedule, working seven days a week, morning and night, for weeks on end. They visited bottlers, churches, ladies groups, schools, college campuses, YMCAs, community centers, insurance conventions, teacher and doctor conferences, and various civic organizations. They got famous jazzmen such as Duke Ellington and Lionel Hampton to promote Pepsi from the stage. No group was too small or too large to target for a promotion. Pepsi advertisements avoided the stereotypical images common in the major media that depicted Aunt Jemimas and Uncle Bens, whose role was to draw a smile from white customers. Instead, it portrayed black customers as self-confident middle-class citizens who showed very good taste in their soft drinks. They were economical too, as Pepsi bottles were twice the size. This focus on the market for black people caused some consternation within the company and among its affiliates. It did not want to seem focused on black customers for fear white customers would be pushed away. In a national meeting, Mack tried to assuage the 500 bottlers in attendance by pandering to them, saying "We don't want it to become known as a nigger drink." After Mack left the company in 1950, support for the black sales team faded and it was cut. Boyd was replaced in 1952 by Harvey C. Russell, who was notable for his marketing campaigns towards black youth in New Orleans. These campaigns, held at locales attended largely by black children, would encourage children to collect Pepsi bottle caps, which they could then exchange for rewards. One example is Pepsi's 1954 "Pepsi Day at the Beach" event, where New Orleans children could ride rides at an amusement park in exchange for Pepsi bottle caps. By the end of the event, 125,000 bottle caps been collected. According to "The Pepsi Cola World", the New Orleans campaign was a success; once people's supply of bottle caps ran out, the only way they could get more was to buy more Pepsi. According to Consumer Reports, in the 1970s, the rivalry continued to heat up the market. Pepsi conducted blind taste tests in stores, in what was called the "Pepsi Challenge". These tests suggested that more consumers preferred the taste of Pepsi to Coca-Cola. The sales of Pepsi started to climb, and Pepsi kicked off the "Challenge" across the nation. This became known as the "Cola Wars". In 1985, the Coca-Cola Company, amid much publicity, changed its formula. The theory has been advanced that New Coke, as the reformulated drink came to be known, was invented specifically in response to the Pepsi Challenge. However, a consumer backlash led to Coca-Cola quickly reintroducing the original formula as "Coca-Cola Classic". In 1989, Billy Joel mentioned the rivalry between the two companies in the song "We Didn't Start the Fire". The line "Rock & Roller Cola Wars" refers to Pepsi and Coke's usage of various musicians in advertising campaigns. Coke used Paula Abdul, while Pepsi used Michael Jackson. Both companies then competed to get other musicians to advertise its beverages. According to "Beverage Digest"s 2008 report on carbonated soft drinks, PepsiCo's U.S. market share is 30.8 percent, while the Coca-Cola Company's is 42.7 percent. Coca-Cola outsells Pepsi in most parts of the U.S., notable exceptions being central Appalachia, North Dakota, and Utah. In the city of Buffalo, New York, Pepsi outsells Coca-Cola by a two-to-one margin. Overall, Coca-Cola continues to outsell Pepsi in almost all areas of the world. However, exceptions include: Oman, India, Saudi Arabia, Pakistan, the Dominican Republic, Guatemala, the Canadian provinces of Quebec, Newfoundland and Labrador, Nova Scotia, and Prince Edward Island, and Northern Ontario. Pepsi had long been the drink of French-Canadians, and it continues to hold its dominance by relying on local Québécois celebrities (especially Claude Meunier, of "La Petite Vie" fame) to sell its product. PepsiCo introduced the Quebec slogan "here, it's Pepsi" ("Ici, c'est Pepsi") in response to Coca-Cola ads proclaiming "Around the world, it's Coke" ("Partout dans le monde, c'est Coke"). As of 2012, Pepsi is the third most popular carbonated drink in India, with a 15% market share, behind Sprite and Thums Up. In comparison, Coca-Cola is the fourth most popular carbonated drink, occupying a mere 8.8% of the Indian market share. By most accounts, Coca-Cola was India's leading soft drink until 1977, when it left India because of the new foreign exchange laws which mandated majority shareholding in companies to be held by Indian shareholders. The Coca-Cola Company was unwilling to dilute its stake in its Indian unit as required by the Foreign Exchange Regulation Act (FERA), thus sharing its formula with an entity in which it did not have majority shareholding. In 1988, PepsiCo gained entry to India by creating a joint venture with the Punjab government-owned Punjab Agro Industrial Corporation (PAIC) and Voltas India Limited. This joint venture marketed and sold Lehar Pepsi until 1991, when the use of foreign brands was allowed; PepsiCo bought out its partners and ended the joint venture in 1994. In 1993, the Coca-Cola Company returned in pursuance of India's Liberalization policy. In Russia, Pepsi initially had a larger market share than Coke, but it was undercut once the Cold War ended. In 1972, PepsiCo struck a barter agreement with the then government of the Soviet Union, in which PepsiCo was granted exportation and Western marketing rights to Stolichnaya vodka in exchange for importation and Soviet marketing of Pepsi. This exchange led to Pepsi being the first foreign product sanctioned for sale in the Soviet Union. Reminiscent of the way that Coca-Cola became a cultural icon and its global spread spawned words like "cocacolonization", Pepsi-Cola and its relation to the Soviet system turned it into an icon. In the early 1990s, the term "Pepsi-stroika" began appearing as a pun on "perestroika", the reform policy of the Soviet Union under Mikhail Gorbachev. Critics viewed the policy as an attempt to usher in Western products in deals there with the old elites. Pepsi, as one of the first American products in the Soviet Union, became a symbol of that relationship and the Soviet policy. This was reflected in Russian author Victor Pelevin's book "Generation P". In 1992, following the dissolution of the Soviet Union, Coca-Cola was introduced to the Russian market. As it came to be associated with the new system and Pepsi to the old, Coca-Cola rapidly captured a significant market share that might otherwise have required years to achieve. By July 2005, Coca-Cola enjoyed a market share of 19.4 percent, followed by Pepsi with 13 percent. Pepsi was introduced in Romania in 1966, during the early liberalization policies of Nicolae Ceaușescu, opening up a factory at Constanța in 1967. This was done as a barter agreement similar to the one in the USSR, however, Romanian wine would be sold in the United States instead. The product quickly became popular, especially among young people, but due to the austerity measures imposed in the 1980s, the product became scarce and rare to find. Starting from 1991, PepsiCo entered the new Romanian market economy, and still maintains a bigger popularity than its competitor, Coca-Cola, introduced in Romania in 1992, despite heavy competition during the 1990s (sometime between 2000 and 2005, Pepsi overtook Coca-Cola in sales in Romania). Pepsi did not sell soft drinks in Israel until 1991. Many Israelis and some American Jewish organizations attributed Pepsi's previous reluctance to expand operations in Israel to fears of an Arab boycott. Pepsi, which has a large and lucrative business in the Arab world, denied that, saying that economic, rather than political, reasons kept it out of Israel. Pepsiman is an official Pepsi mascot from Pepsi's Japanese corporate branch, created sometime around the mid-1990s. Pepsiman took on three different outfits, each one representing the current style of the Pepsi can in distribution. Twelve commercials were created featuring the character. His role in the advertisements is to appear with Pepsi to thirsty people or people craving soda. Pepsiman happens to appear at just the right time with the product. After delivering the beverage, sometimes Pepsiman would encounter a difficult and action-oriented situation which would result in injury. Another more minor mascot, Pepsiwoman, also featured in a few of her own commercials for Pepsi Twist; her appearance is basically a female Pepsiman wearing a lemon-shaped balaclava. In 1996, Sega-AM2 released the Sega Saturn version of its arcade fighting game "Fighting Vipers". In this game Pepsiman was included as a special character, with his specialty listed as being the ability to "quench one's thirst". He does not appear in any other version or sequel. In 1999, KID developed a video game for the PlayStation entitled "Pepsiman". As the titular character, the player runs "on rails" (forced motion on a scrolling linear path), skateboards, rolls, and stumbles through various areas, avoiding dangers and collecting cans of Pepsi, all while trying to reach a thirsty person as in the commercials. Pepsi has official sponsorship deals with the National Football League, National Hockey League, and National Basketball Association. It was the sponsor of Major League Soccer until December 2015 and Major League Baseball until April 2017, both leagues signing deals with Coca-Cola. Pepsi also has the naming rights to the Pepsi Center, an indoor sports facility in Denver, Colorado. In 1997, after his sponsorship with Coca-Cola ended, retired NASCAR Sprint Cup Series driver turned Fox NASCAR announcer Jeff Gordon signed a long-term contract with Pepsi, and he drives with the Pepsi logos on his car with various paint schemes for about 2 races each year, usually a darker paint scheme during nighttime races. Pepsi has remained as one of his sponsors ever since. Pepsi has also sponsored the NFL Rookie of the Year award since 2002. Pepsi also has sponsorship deals in international cricket teams. The Pakistani national cricket team is one of the teams that the brand sponsors. The team wears the Pepsi logo on the front of their test and ODI test match clothing. The Buffalo Bisons, an American Hockey League team, were sponsored by Pepsi-Cola in its later years; the team adopted the beverage's red, white, and blue color scheme along with a modification of the Pepsi logo (with the word "Buffalo" in place of the Pepsi-Cola wordmark). The Bisons ceased operations in 1970, making way for the Buffalo Sabres of the NHL. In 2017, Pepsi was the jersey sponsor of the Papua New Guinea national basketball team. In the United States, Pepsi is made with carbonated water, high fructose corn syrup, caramel color, sugar, phosphoric acid, caffeine, citric acid, and natural flavors. A can of Pepsi (12 fl ounces) has 41 grams of carbohydrates (all from sugars), 30 mg of sodium, 0 grams of fat, 0 grams of protein, 38 mg of caffeine, and 150 calories. Pepsi has 10 more calories and 2 more grams of sugar and carbohydrates than Coca-Cola. Caffeine-Free Pepsi contains the same ingredients but without the caffeine.
https://en.wikipedia.org/wiki?curid=24573
Paul Johann Anselm Ritter von Feuerbach Paul Johann Anselm Ritter von Feuerbach (14 November 177529 May 1833) was a German legal scholar. His major achievement was a reform of the Bavarian penal code which led to the abolition of torture and became a model for several other countries. He is also well-known for his work on Kaspar Hauser. He was born in Hainichen, near Jena. He received his early education at Frankfurt on Main, where his family had moved soon after his birth. At the age of sixteen, however, he ran away from home, and, going to Jena, was helped by relations there to study at the university. In spite of poor health and the most desperate poverty, he made rapid progress. He attended the lectures of Karl Leonhard Reinhold and Gottlieb Hufeland, and soon published some literary essays of more than ordinary merit. In 1795 he took the degree of doctor of philosophy, and in the same year, though possessing little money, he married. It was this step which led him to success and fame, by forcing him to turn from his favourite studies of philosophy and history to that of law, which was repugnant to him, but which offered a prospect of more rapid advancement. At 23 he came into prominence by a vigorous criticism of Thomas Hobbes' theory on civil power. Soon afterwards, in lectures on criminal jurisprudence he set forth his famous theory, that in administering justice judges should be strictly limited in their decisions by the penal code. This new doctrine gave rise to a party called Rigorists, who supported his theory. Von Feuerbach was the originator of the famous maxim "nullum crimen, nulla poena sine praevia lege poenali": "There is no crime and hence there shall not be punishment if at the time no penal law existed". In 1801 Feuerbach was appointed extraordinary professor of law without salary, at the University of Jena, and in the following year accepted a chair at Kiel, where he remained two years. His chief work was the framing of a penal code for Bavaria. In 1804, he had moved from the University of Kiel to the University of Landshut, but, on being commanded by King Maximilian Joseph to draft a penal code for Bavaria ("Strafgesetzbuch für das Königreich Bayern"), in 1805 he moved to Munich where he was given a high appointment in the Ministry of Justice and was ennobled in 1808. The practical reform of penal legislation in Bavaria was begun under his influence in 1806 by the abolition of torture. Out of his practical experience in the Ministry of Justice with evaluating death penalties by Bavarian courts for royal pardon he published the most notable cases 1808/11 in "Merkwürdige Criminalfälle" and 1828/29 a much enlarged collection "Aktenmäßige Darstellung merkwürdiger Verbrechen" (Notable crimes presented according to the court records). With this legal handbook of criminal cases in the tradition of the famous "Causes Célèbres" of the French lawyer Gayot de Pitaval (1673–1743) Feuerbach intend to establish a modern criminal psychology ("Seelenkunde") for crime investigation, criminal judges, etc. In the course of time, his work has been misinterpreted merely as a literary collection of fictional sensational crimes and many of his court cases have been edited and published as trivial popular crime stories. However, Gerold Schmidt has newly researched that Feuerbach has recorded true historical events at true localities and persons with their real names etc. and that, therefore, his work is a rich historical source for Bavarian local and social history, mentality, biography etc. In his "Betrachtungen über das Geschworenengericht" of 1811, Feuerbach declared against trial by jury, maintaining that the verdict of a jury was not adequate legal proof of a crime. Much controversy was aroused on the subject, and the author's view was subsequently to some extent modified. The result of his labours on the Bavarian penal code was promulgated in 1813. The influence of this code, the embodiment of Feuerbach's enlightened views, was immense. It was at once made the basis for new codes in Württemberg and Saxe-Weimar; it was adopted in its entirety in the grand-duchy of Oldenburg; and it was translated into Swedish by order of the king. Several of the Swiss cantons reformed their codes in conformity with it. Feuerbach had also undertaken to prepare a civil code for Bavaria, to be founded on the Code Napoléon. This was afterwards set aside, and the Codex Maximilianus adopted as a basis. But the project did not become law. During the war of liberation (1813–1814), Feuerbach showed himself an ardent patriot, and published several political brochures. In 1814 Feuerbach was appointed second president of the court of appeal at Bamberg, and three years later he became first president of the court of appeal at Ansbach. In 1821 he was deputed by the government to visit France, Belgium, and the Rhine provinces for the purpose of investigating their juridical institutions. As the fruit of this visit, he published his treatises "Betrachtungen über Öffentlichkeit und Mündigkeit der Gerechtigkeitspflege" (1821) and "Über die Gerichtsverfassung und das gerichtliche Verfahren Frankreichs" (1825). In these he pleaded unconditionally for publicity in all legal proceedings. In his later years, he took a deep interest in the fate of the strange foundling Kaspar Hauser who had excited much attention in Europe. He was the first to publish a critical summary of the ascertained facts, under the title "Kaspar Hauser, ein Beispiel eines Verbrechens am Seelenleben" (1832). Feuerbach died on 29 May 1833 in Frankfurt. There is some controversy over the cause and circumstances of his death which remain largely unclear - his family as well as he himself shortly before his death believed that he had been poisoned due to his protection of and research work on Kaspar Hauser, who himself died later the same year under suspicious circumstances. Feuerbach had five sons, and three daughters: Joseph Anselm Feuerbach (1798–1851), Karl Wilhelm Feuerbach (1800–1834), Eduard August Feuerbach (1803-1843), Ludwig Andreas Feuerbach (1804–1872), Heinrich Friedrich Feuerbach (1806–1880), Rebecca Magdalena (1808–1891), Leonore Feuerbach (1809-1885), and Elise Feuerbach (1813–1883).
https://en.wikipedia.org/wiki?curid=24576
Pub A pub, or public house, is an establishment licensed to serve alcoholic drinks for consumption on the premises. The term "public house" first appeared in the late 17th century, and was used to differentiate private houses from those which were, quite literally, open to the public as 'alehouses', 'taverns' and 'inns'. By Georgian times it had become common parlance, although taverns, as a distinct establishment, had largely ceased to exist by the beginning of the 19th century. Today, pubs have no strict definition, but CAMRA states a pub has four characteristics: The history of pubs can be traced to Roman taverns in Britain, and through Anglo-Saxon alehouses, but it was not until the early 19th century that pubs, as we know them today, first began to appear. The model also became popular in countries and regions of British influence, where pubs are often still considered to be an important aspect of their culture. In many places, especially in villages, pubs are the focal point of local communities. In his 17th-century diary, Samuel Pepys described the pub as "the heart of England". Although pubs generally focus on offering beer and cider, most also sell wine, spirits, coffee, and soft drinks. Many also offer meals and snacks. The owner, tenant or manager (licensee) is known as the landlord or landlady, or publican.The drinks served traditionally include draught beer and cider. Often colloquially referred to as their "local" by regulars, pubs are typically chosen for their proximity to home or work, good food, a social atmosphere, the presence of friends and acquaintances, and the availability of pub games such as darts or snooker. Pubs will often screen sports events, such as English Premier League and Scottish Premier League games (or for international tournaments, the FIFA World Cup). The pub quiz was established in the UK in the 1970s. Ale was a native British drink before the arrival of the Roman Empire in 1st century, but it was with the construction of the Roman road network that the first pubs, called tabernae, began to appear. The word eventually became corrupted into tavern. After the departure of Roman authority in the 5th century and the fall of the Romano-British kingdoms, the Anglo-Saxons established alehouses that may have grown out of domestic dwellings, first attested in the 10th century. These alehouses quickly evolved into meeting houses for folk to socially congregate, gossip and arrange mutual help within their communities. The Wantage law code of Æthelred the Unready proscribes fines for breaching the peace at meetings held in alehouses. A traveller in the early Middle Ages could obtain overnight accommodation in monasteries, but later a demand for hostelries grew with the popularity of pilgrimages and travel. The Hostellers of London were granted guild status in 1446 and in 1514 the guild became the Worshipful Company of Innholders. A survey in 1577 of drinking establishment in England and Wales for taxation purposes recorded 14,202 alehouses, 1,631 inns, and 329 taverns, representing one pub for every 187 people. Inns are buildings where travellers can seek lodging and, usually, food and drink. They are typically located in the country or along a highway. In Europe, they possibly first sprang up when the Romans built a system of roads two millennia ago. Some inns in Europe are several centuries old. In addition to providing for the needs of travellers, inns traditionally acted as community gathering places. In Europe, it is the provision of accommodation, if anything, that now distinguishes inns from taverns, alehouses and pubs. The latter tend to provide alcohol (and, in the UK, soft drinks and often food), but less commonly accommodation. Inns tend to be older and grander establishments: historically they provided not only food and lodging, but also stabling and fodder for the traveller's horse(s) and on some roads fresh horses for the mail coach. Famous London inns include The George, Southwark and The Tabard. There is, however, no longer a formal distinction between an inn and other kinds of establishment. Many pubs use "Inn" in their name, either because they are long established former coaching inns, or to summon up a particular kind of image, or in many cases simply as a pun on the word "in", as in "The Welcome Inn", the name of many pubs in Scotland. The original services of an inn are now also available at other establishments, such as hotels, lodges, and motels, which focus more on lodging customers than on other services, although they usually provide meals; pubs, which are primarily alcohol-serving establishments; and restaurants and taverns, which serve food and drink. In North America, the lodging aspect of the word "inn" lives on in hotel brand names like Holiday Inn, and in some state laws that refer to lodging operators as innkeepers. The Inns of Court and Inns of Chancery in London started as ordinary inns where barristers met to do business, but became institutions of the legal profession in England and Wales. Gin was popularised in England following the accession of William of Orange in 1688, largely because it provided an alternative to French brandy at a time of both political and religious conflict between Britain and France. Between 1689 and 1697 the British Government passed a range of legislation aimed at restricting brandy imports and encouraging domestic gin production. No licenses were required to make spirits and thousands of gin-shops sprang up all over England. Because of its cheapness, gin became popular with the poor, eventually leading to the Gin Craze and by 1727 over half of London's 15,000 drinking establishments were dedicated to gin. The drunkenness and lawlessness created by gin was seen to lead to the ruination and degradation of the working classes, as illustrated by William Hogarth in his engravings "Beer Street" and "Gin Lane". The Gin Act of 1736 and the Gin Act of 1743 were ineffective attempts to control the situation, but the Gin Act of 1751 proved more successful and succeeded in reducing consumption. By the early 19th century, encouraged by a reduction of duties, gin houses and gin palaces (an evolution of gin shops) began to spread from London to most towns and cities in Britain and gin consumption again began to rise. Alarmed at the prospect of a return to the Gin Craze, and under a banner of "reducing public drunkenness" the Government attempted to counter the threat by introducing the Beerhouse Act of 1830. The Act introduced a new lower tier of premises, "the beerhouse". At the time, beer was viewed as harmless and nutritious, even healthy. Young children were often given what was described as small beer, brewed to have a low alcohol content, as the local water was frequently unsafe. Even the evangelical church and temperance movements of the day viewed the drinking of beer very much as a secondary evil and a normal accompaniment to a meal. The beerhouse Act was an attempt to wean drinkers away from the evils of gin and encourage the consumption of a more wholesome beverage. Under the 1830 Act any householder, on payment of two guineas (roughly equal in value to £ today), was permitted to brew and sell beer or cider in his home. The permission did not extend to the sale of spirits and fortified wines, and any beerhouse discovered selling those items was closed down and the owner heavily fined. Beerhouses were not permitted to open on Sundays. The beer was usually served in jugs or dispensed directly from tapped wooden barrels on a table in the corner of the room. Often profits were so high the owners were able to buy the house next door to live in, turning every room in their former home into bars and lounges for customers. In the first year, 400 beer houses opened and within eight years there were 46,000 across the country, far outnumbering the combined total of long-established taverns, pubs, inns and hotels. Because it was so easy to obtain permission and the profits could be huge compared to the low cost of gaining permission, the number of beer houses continued to rise; in some towns nearly every other house in a street might be a beerhouse. Finally in 1869 the growth had to be checked by magisterial control. New licensing laws were introduced making it harder to get a licence, and the regime which operates today was established. It was not until the 19th century that pubs as we know them today first began to appear. Before this time alehouses were largely indistinguishable from private houses and the poor standard of rural roads meant that, away from the larger towns, the only beer available was often that which had been brewed by the publican himself. With the arrival of the Industrial Revolution, many areas of the United Kingdom were transformed by a surge in industrial activity and rapid population growth. There was huge demand for beer and for venues where the public could engage in social interaction, but there was also intense competition for customers. Gin houses and palaces were becoming increasingly popular, while the Beerhouse Act of 1830 resulted in a proliferation of beerhouses. By the mid-19th century pubs were being widely purpose-built, allowing their owners to incorporate architectural features which distinguished them from private houses and made them stand out from the competition. Many existing public houses were also redeveloped at this time, borrowing features from other building types and gradually developing the characteristics which go to make pubs the instantly recognisable institutions that exist today. In particular, and contrary to the intentions of the Beerhouse Act, many drew inspiration from the gin houses and palaces. Bar counters had been an early adoption, but ornate mirrors, etched glass, polished brass fittings and lavishly tiled surfaces were all features that had first made their appearance in gin houses. Innovations such as the introduction of hand pumps (or beer engines) allowed a greater number of people in less time, while technological advances in the brewing industry and improved transportation links made it possible for breweries to deliver their products far away from where they were produced. The latter half of the 19th century saw increased competition within the brewing industry and, in an attempt to secure markets for their own products, breweries began rapidly buying local pubs and directly employing publicans to run them. Although some tied houses had existed in larger British towns since the 17th century, this represented a fundamental shift in the way that many pubs were operated and the period is now widely regarded as the birth of the tied house system. Decreasing numbers of free houses and difficulties in obtaining new licences meant a continual expansion of their tied estates was the only feasible way for breweries to generate new trade. By the end of the century more than 90 percent of public houses in England were owned by breweries and the only practical way brewers could now grow their tied estates was to turn on each other. Buy-outs and amalgamations became commonplace and by the end of the 1980s there were only six large brewers left in the UK, collectively known as the Big Six; Allied, Bass, Courage, Grand Metropolitan, Scottish & Newcastle and Whitbread. In an attempt to increase the number of free houses, by forcing the big breweries to sell their tied houses, the Government introduced The Beer Orders in 1989. The result, however, was that the Big Six melted away into other sectors; selling their brewing assets and spinning off their tied houses, largely into the hands of branded pub chains, called pubcos. As these were not brewers, they were not governed by the Beer Orders and tens of thousands of pubs remain tied, much in the same way that they had been previously. In reality, government interference did very little to improve Britain's tied house system and all its large breweries are now in the hands of foreign or multi-national companies. There was regulation of public drinking spaces in England from at least the 15th century. In 1496, under, Henry VII, an act was passed, "against vagabonds and beggers" (11 Hen. VII c2), that included a clause empowering two justices of the peace, "to rejecte and put awey comen ale-selling in tounes and places where they shall think convenyent, and to take suertie of the keepers of ale-houses in their gode behavyng by the discrecion of the seid justices, and in the same to be avysed and aggreed at the tyme of their sessions." More legislation was introduced in the following centuries, together with licence fees, which became an income stream for the crown. Tavern owners were required to possess a licence to sell ale, and a separate licence for distilled spirits. From the mid-19th century on the opening hours of licensed premises in the UK were restricted. However licensing was gradually liberalised after the 1960s, until contested licensing applications became very rare, and the remaining administrative function was transferred to Local Authorities in 2005. The Wine and Beerhouse Act 1869 reintroduced the stricter controls of the previous century. The sale of beers, wines or spirits required a licence for the premises from the local magistrates. Further provisions regulated gaming, drunkenness, prostitution and undesirable conduct on licensed premises, enforceable by prosecution or more effectively by the landlord under threat of forfeiting his licence. Licences were only granted, transferred or renewed at special Licensing Sessions courts, and were limited to respectable individuals. Often these were ex-servicemen or ex-policemen; retiring to run a pub was popular amongst military officers at the end of their service. Licence conditions varied widely, according to local practice. They would specify permitted hours, which might require Sunday closing, or conversely permit all-night opening near a market. Typically they might require opening throughout the permitted hours, and the provision of food or lavatories. Once obtained, licences were jealously protected by the licensees (who were expected to be generally present, not an absentee owner or company), and even "Occasional Licences" to serve drinks at temporary premises such as fêtes would usually be granted only to existing licensees. Objections might be made by the police, rival landlords or anyone else on the grounds of infractions such as serving drunks, disorderly or dirty premises, or ignoring permitted hours. The Sunday Closing (Wales) Act 1881 required the closure of all public houses in Wales on Sundays, and was not repealed until 1961. Detailed licensing records were kept, giving the Public House, its address, owner, licensee and misdemeanours of the licensees, often going back for hundreds of years. Many of these records survive and can be viewed, for example, at the London Metropolitan Archives centre. The archives centre is responsible for records being publicized as well as permanently preserving the records. The records remaining permanent for 15–25 years until they are reviewed for the second time. A favourite goal of the Temperance movement led by Protestant nonconformists was to sharply reduce the heavy drinking by closing as many pubs as possible. In 1908 Prime Minister H.H. Asquith—although a heavy drinker—took the lead by proposing to close about a third of the 100,000 pubs in England and Wales, with the owners compensated through a new tax on surviving pubs. The brewers controlled the pubs and organized a stiff resistance, supported by the Conservatives, who repeatedly defeated the proposal in the House of Lords. However, the "People's Tax" of 1910 included a stiff tax on pubs. Beer and liquor consumption fell in half from 1900 to 1920, in part because there were many new leisure opportunities. The restrictions were tightened by the Defence of the Realm Act of August 1914, which, along with the introduction of rationing and the censorship of the press for wartime purposes, restricted pubs' opening hours to 12 noon–2:30 pm and 6:30 pm–9:30 pm. Opening for the full licensed hours was compulsory, and closing time was equally firmly enforced by the police; a landlord might lose his licence for infractions. Pubs were closed under the Act and compensation paid, for example in Pembrokeshire. There was a special case established under the State Management Scheme where the brewery and licensed premises were bought and run by the state until 1973, most notably in Carlisle. During the 20th century elsewhere, both the licensing laws and enforcement were progressively relaxed, and there were differences between parishes; in the 1960s, at closing time in Kensington at 10:30 pm, drinkers would rush over the parish boundary to be in good time for "Last Orders" in Knightsbridge before 11 pm, a practice observed in many pubs adjoining licensing area boundaries. Some Scottish and Welsh parishes remained officially "dry" on Sundays (although often this merely required knocking at the back door of the pub). These restricted opening hours led to the tradition of lock-ins. However, closing times were increasingly disregarded in the country pubs. In England and Wales by 2000 pubs could legally open from 11 am (12 noon on Sundays) through to 11 pm (10:30 pm on Sundays). That year was also the first to allow continuous opening for 36 hours from 11 am on New Year's Eve to 11 pm on New Year's Day. In addition, many cities had by-laws to allow some pubs to extend opening hours to midnight or 1 am, whilst nightclubs had long been granted late licences to serve alcohol into the morning. Pubs near London's Smithfield market, Billingsgate fish market and Covent Garden fruit and flower market could stay open 24 hours a day since Victorian times to provide a service to the shift working employees of the markets. Scotland's and Northern Ireland's licensing laws have long been more flexible, allowing local authorities to set pub opening and closing times. In Scotland, this stemmed from a late repeal of the wartime licensing laws, which stayed in force until 1976. The Licensing Act 2003, which came into force on 24 November 2005, consolidated the many laws into a single Act. This allowed pubs in England and Wales to apply to the local council for the opening hours of their choice. It was argued that this would end the concentration of violence around 11.30 pm, when people had to leave the pub, making policing easier. In practice, alcohol-related hospital admissions rose following the change in the law, with alcohol involved in 207,800 admissions in 2006/7. Critics claimed that these laws would lead to "24-hour drinking". By the time the law came into effect, 60,326 establishments had applied for longer hours and 1,121 had applied for a licence to sell alcohol 24 hours a day. However nine months later many pubs had not changed their hours, although some stayed open longer at the weekend, but rarely beyond 1:00 am. A "lock-in" is when a pub owner allows patrons to continue drinking in the pub after the legal closing time, on the theory that once the doors are locked, it becomes a private party rather than a pub. Patrons may put money behind the bar before official closing time, and redeem their drinks during the lock-in so no drinks are technically sold after closing time. The origin of the British lock-in was a reaction to 1915 changes in the licensing laws in England and Wales, which curtailed opening hours to stop factory workers from turning up drunk and harming the war effort. From then until the start of the 21st century, UK licensing laws changed very little, retaining these comparatively early closing times. The tradition of the lock-in therefore remained. Since the implementation of the Licensing Act 2003, premises in England and Wales may apply to extend their opening hours beyond 11 pm, allowing round-the-clock drinking and removing much of the need for lock-ins. Since the smoking ban, some establishments operated a lock-in during which the remaining patrons could smoke without repercussions but, unlike drinking lock-ins, allowing smoking in a pub was still a prosecutable offence. Ireland banned smoking in early 2004 in pubs and clubs. In March 2006, a law was introduced to forbid smoking in all enclosed public places in Scotland. Wales followed suit in April 2007, with England introducing the ban in July 2007. Pub landlords had raised concerns prior to the implementation of the law that a smoking ban would have a negative impact on sales. After two years, the impact of the ban was mixed; some pubs suffered declining sales, while others developed their food sales. The Wetherspoon pub chain reported in June 2009 that profits were at the top end of expectations; however, Scottish & Newcastle's takeover by Carlsberg and Heineken was reported in January 2008 as partly the result of its weakness following falling sales due to the ban. Similar bans are applied in Australian pubs with smoking only allowed in designated areas. By the end of the 18th century a new room in the pub was established: the saloon. Beer establishments had always provided entertainment of some sort—singing, gaming or sport. Balls Pond Road in Islington was named after an establishment run by a Mr. Ball that had a duck pond at the rear, where drinkers could, for a fee, go out and take a potshot at the ducks. More common, however, was a card room or a billiard room. The saloon was a room where, for an admission fee or a higher price of drinks, singing, dancing, drama, or comedy was performed and drinks would be served at the table. From this came the popular music hall form of entertainment—a show consisting of a variety of acts. A most famous London saloon was the Grecian Saloon in The Eagle, City Road, a pub which was referenced by name in the 18th century nursery rhyme: "Up and down the City Road / In and out The Eagle / That's the way the money goes / Pop goes the weasel." This meant that the customer had spent all his money at The Eagle, and needed to pawn his "weasel" to get some more. The meaning of the "weasel" is unclear but the two most likely definitions are: a flat iron used for finishing clothing; or rhyming slang for a coat ("weasel and stoat"). A few pubs have stage performances such as serious drama, stand-up comedy, musical bands, cabaret or striptease; however, juke boxes, karaoke and other forms of pre-recorded music have otherwise replaced the musical tradition of a piano or guitar and singing. The public bar, or tap room, was where the working class were expected to congregate and drink. It had unfurnished floorboards, sometimes covered with sawdust to absorb the spitting and spillages (known as "spit and sawdust"), bare bench seats and stools. Drinks were generally lower quality beers and liquors. Public bars were seen as exclusive areas for only men; strictly enforced social etiquettes barred women from entering public bars (some pubs did not lift this rule until the 1980s). This style was in marked contrast to the adjacent saloon or lounge bar which, by the early 20th century, was where male or accompanied female middle-class drinkers would drink. It had carpeted floors, upholstered seats, and a wider selection of better quality drinks that cost a penny or two more than those served in the public bar. By the mid 20th century, the standard of the public bar had generally improved. Pub patrons only had to choose between economy and exclusivity (or youth and age: a jukebox or dartboard). By the 1970s, divisions between saloons and public bars were being phased out, usually by the removal of the dividing wall or partition. While the names of saloon and public bar may still be seen on the doors of pubs, the prices (and often the standard of furnishings and decoration) are the same throughout the premises. Most present day pubs now comprise one large room, although with the advent of gastropubs, some establishments have returned to maintaining distinct rooms or areas. The "snug" was a small private room or area which typically had access to the bar and a frosted glass window, set above head height. A higher price was paid for beer in the snug and nobody could look in and see the drinkers. It was not only the wealthy visitors who would use these rooms. The snug was for patrons who preferred not to be seen in the public bar. Ladies would often enjoy a private drink in the snug in a time when it was frowned upon for women to be in a pub. The local police officer might nip in for a quiet pint, the parish priest for his evening whisky, or lovers for a rendezvous. Campaign for Real Ale (CAMRA) have surveyed the 50,000 pubs in Britain and they believe that there are very few pubs that still have classic snugs. These are on a historic interiors list in order that they can be preserved. The pub took the concept of the bar counter to serve the beer from gin palaces in the 18th century. Until that time beer establishments used to bring the beer out to the table or benches, as remains the practice in (for example) beer gardens and some other drinking establishments in Germany. A bar might be provided for the manager or publican to do paperwork while keeping an eye on his or her customers, and the term "bar" applied to the publican's office where one was built, but beer would be tapped directly from a cask or barrel sat on a table, or kept in a separate taproom and brought out in jugs. When purpose built Victorian pubs were built after the Beerhouse Act 1830, the main room was the public room with a large serving bar copied from the gin houses, the idea being to serve the maximum number of people in the shortest possible time. The other, more private, rooms had no serving bar—they had the beer brought to them from the public bar. There are a number of pubs in the Midlands or the North which still retain this set up, though these days the beer is fetched by the customer themself from the taproom or public bar. One of these is The Vine, known locally as The Bull and Bladder, in Brierley Hill near Birmingham, another the Cock at Broom, Bedfordshire a series of small rooms served drinks and food by waiting staff. In the Manchester district the public bar was known as the "vault", other rooms being the lounge and snug as usual elsewhere. By the early 1970s there was a tendency to change to one large drinking room as breweries were eager to invest in interior design and theming. Isambard Kingdom Brunel, the British engineer and railway builder, introduced the idea of a circular bar into the Swindon station pub in order that customers were served quickly and did not delay his trains. These island bars became popular as they also allowed staff to serve customers in several different rooms surrounding the bar. A "beer engine" is a device for pumping beer, originally manually operated and typically used to dispense beer from a cask or container in a pub's basement or cellar. The first beer pump known in England is believed to have been invented by John Lofting (b. Netherlands 1659-d. Great Marlow Buckinghamshire 1742) an inventor, manufacturer and merchant of London. The London Gazette of 17 March 1691 published a patent in favour of John Lofting for a fire engine, but remarked upon and recommended another invention of his, for a beer pump: "Whereas their Majesties have been Graciously Pleased to grant Letters patent to John Lofting of London Merchant for a New Invented Engine for Extinguishing Fires which said Engine have found every great encouragement. The said Patentee hath also projected a Very Useful Engine for starting of beer and other liquors which will deliver from 20 to 30 barrels an hour which are completely fixed with Brass Joints and Screws at Reasonable Rates. Any Person that hath occasion for the said Engines may apply themselves to the Patentee at his house near St Thomas Apostle London or to Mr. Nicholas Wall at the Workshoppe near Saddlers Wells at Islington or to Mr. William Tillcar, Turner, his agent at his house in Woodtree next door to the Sun Tavern London." "Their Majesties" referred to were William and Mary, who had recently arrived from the Netherlands and had been appointed joint monarchs. A further engine was invented in the late eighteenth century by the locksmith and hydraulic engineer Joseph Bramah (1748–1814). Strictly the term refers to the pump itself, which is normally manually operated, though electrically powered and gas powered pumps are occasionally used. When manually powered, the term "handpump" is often used to refer to both the pump and the associated handle. After the development of the large London Porter breweries in the 18th century, the trend grew for pubs to become tied houses which could only sell beer from one brewery (a pub not tied in this way was called a Free house). The usual arrangement for a tied house was that the pub was owned by the brewery but rented out to a private individual (landlord) who ran it as a separate business (even though contracted to buy the beer from the brewery). Another very common arrangement was (and is) for the landlord to own the premises (whether freehold or leasehold) independently of the brewer, but then to take a mortgage loan from a brewery, either to finance the purchase of the pub initially, or to refurbish it, and be required as a term of the loan to observe the solus tie. A trend in the late 20th century was for breweries to run their pubs directly, using managers rather than tenants. Most such breweries, such as the regional brewery Shepherd Neame in Kent and Young's and Fuller's in London, control hundreds of pubs in a particular region of the UK, while a few, such as Greene King, are spread nationally. The landlord of a tied pub may be an employee of the brewery—in which case he/she would be a manager of a managed house—or a self-employed tenant who has entered into a lease agreement with a brewery, a condition of which is the legal obligation (trade tie) only to purchase that brewery's beer. The beer selection is mainly limited to beers brewed by that particular company. The Beer Orders, passed in 1989, were aimed at getting tied houses to offer at least one alternative beer, known as a guest beer, from another brewery. This law has now been repealed but while in force it dramatically altered the industry. Some pubs still offer a regularly changing selection of guest beers. Organisations such as Wetherspoons, Punch Taverns and O'Neill's were formed in the UK in the wake of the Beer Orders. A PubCo is a company involved in the retailing but not the manufacture of beverages, while a Pub chain may be run either by a PubCo or by a brewery. Pubs within a chain will usually have items in common, such as fittings, promotions, ambience and range of food and drink on offer. A pub chain will position itself in the marketplace for a target audience. One company may run several pub chains aimed at different segments of the market. Pubs for use in a chain are bought and sold in large units, often from regional breweries which are then closed down. Newly acquired pubs are often renamed by the new owners, and many people resent the loss of traditional names, especially if their favourite regional beer disappears at the same time. In 2009 about half of Britain's pubs were owned by large pub companies. A brewery tap is the nearest outlet for a brewery's beers. It is usually a room or bar in the brewery itself, although the name may be applied to the nearest pub. The term is not applied to a brewpub which brews and sells its beer on the same premises. A pub has no strict definition, but CAMRA states that a pub has four characteristics: Together these characteristics differentiate pubs from restaurants and hotel bars, although some pubs also serve as restaurants or hotels. A gastropub is a hybrid pub and restaurant, notable for serving good quality beer, wine and food. The name is a portmanteau of gastronomy and public house, and was coined in 1991 when David Eyre and Mike Belben took over The Eagle pub in Clerkenwell, London. The concept of a restaurant in a pub reinvigorated both pub culture and British dining, In 2011, "The Good Food Guide" suggested that the term has become irrelevant such is its commonality these days. A "country pub" is simply a rural drinking establishment, though the term has acquired a romantic image typically of thatched roofs and whitewashed stone walls. As with urban pubs, the country pub can function as a social and recreational centre, providing opportunities for folk to meet, exchange news, and cooperate on local charitable events. However, that culture of functioning as a social centre for a village and rural community started to diminish in the later part of the 20th century as many country pubs either closed down, or were converted to restaurants or gastropubs. Those country pubs located on main routes may once have been coaching inns, providing accommodation or refreshment for travellers before the advent of motorised transport. The term roadhouse was originally applied to a coaching inn, but with the advent of popular travel by motor car in the 1920s and 1930s in the United Kingdom, a new type of roadhouse emerged, often located on the newly constructed arterial roads and bypasses. They were large establishments offering meals and refreshment and accommodation to motorists and parties travelling by charabanc. The largest roadhouses boasted facilities such as tennis courts and swimming pools. Their popularity ended with the outbreak of the Second World War when recreational road travel became impossible, and the advent of post-war drink driving legislation prevented their full recovery. Many of these establishments are now operated as pub restaurants or fast food outlets. A theme pub is a pub which aligns itself to a specific culture, style or activity; often with the intention of attracting a niche clientele. Many are decorated and furnished accordingly, with the theme sometimes dictating the style of food or drink on offer too. Examples of theme pubs include sports bars, rock pubs, biker pubs, Goth pubs, strip pubs, karaoke bars and Irish pubs. A micropub is a very small, modern, one room pub founded on principles set up by Martyn Hillier, the creator of the first micropub, The Butchers Arms in Herne, Kent in 2005. Micropubs are "based upon good ale and lively banter", with, commonly, a strong focus on local cask ale. It became easier to start a small pub after the passing of the 2003 Licensing Act, which became effective in 2005. In 1393, King Richard II of England compelled landlords to erect signs outside their premises. The legislation stated "Whosoever shall brew ale in the town with intention of selling it must hang out a sign, otherwise he shall forfeit his ale." This law was to make alehouses easily visible to passing inspectors, borough ale tasters, who would decide the quality of the ale they provided. William Shakespeare's father, John Shakespeare, was one such inspector. Another important factor was that during the Middle Ages a large proportion of the population would have been illiterate and so pictures on a sign were more useful than words as a means of identifying a public house. For this reason there was often no reason to write the establishment's name on the sign and inns opened without a formal written name, the name being derived later from the illustration on the pub's sign. The earliest signs were often not painted but consisted, for example, of paraphernalia connected with the brewing process such as bunches of hops or brewing implements, which were suspended above the door of the pub. In some cases local nicknames, farming terms and puns were used. Local events were often commemorated in pub signs. Simple natural or religious symbols such as 'The Sun', 'The Star' and 'The Cross' were incorporated into pub signs, sometimes being adapted to incorporate elements of the heraldry (e.g. the coat of arms) of the local lords who owned the lands upon which the pub stood. Some pubs have Latin inscriptions. Other subjects that lent themselves to visual depiction included the name of battles (e.g. Trafalgar), explorers, local notables, discoveries, sporting heroes and members of the royal family. Some pub signs are in the form of a pictorial pun or rebus. For example, a pub in Crowborough, East Sussex called "The Crow and Gate" had for some years an image of a crow with gates as wings. A "British Pathe News" film of 1956 shows artist Michael Farrar-Bell at work producing inn signs. Most British pubs still have decorated signs hanging over their doors, and these retain their original function of enabling the identification of the pub. Today's pub signs almost always bear the name of the pub, both in words and in pictorial representation. The more remote country pubs often have stand-alone signs directing potential customers to their door. Pub names are used to identify and differentiate each pub. Modern names are sometimes a marketing ploy or attempt to create "brand awareness", frequently using a comic theme thought to be memorable, "Slug and Lettuce" for a pub chain being an example. Interesting origins are not confined to old or traditional names, however. Names and their origins can be broken up into a relatively small number of categories. As many pubs are centuries old, many of their early customers were unable to read, and pictorial signs could be readily recognised when lettering and words could not be read. Pubs often have traditional names. A common name is the "Marquis of Granby". These pubs were named after John Manners, Marquess of Granby, who was the son of John Manners, 3rd Duke of Rutland and a general in the 18th-century British Army. He showed a great concern for the welfare of his men, and on their retirement, provided funds for many of them to establish taverns, which were subsequently named after him. All pubs granted their license in 1780 were called the Royal George, after King George III, and the twentieth anniversary of his coronation. Some names for pubs that seem absurd or whimsical have come from corruptions of old slogans or phrases, such as "The Bag o'Nails" (Bacchanals), "The Goat and Compasses" (God Encompasseth Us), "The Cat and the Fiddle" ("Chaton Fidèle": Faithful Kitten) and "The Bull and Bush", which purportedly celebrates the victory of Henry VIII at "Boulogne Bouche" or Boulogne-sur-Mer Harbour. Traditional games are played in pubs, ranging from the well-known darts, skittles, dominoes, cards and bar billiards, to the more obscure Aunt Sally, nine men's morris and ringing the bull. In the UK betting is legally limited to certain games such as cribbage or dominoes, played for small stakes. In recent decades the game of pool (both the British and American versions) has increased in popularity as well as other table based games such as snooker or table football becoming common. Increasingly, more modern games such as video games and slot machines are provided. Pubs hold special events, from tournaments of the aforementioned games to karaoke nights to pub quizzes. Some play pop music and hip-hop (dance bar), or show football and rugby union on big screen televisions (sports bar). Shove ha'penny and Bat and trap were also popular in pubs south of London. Some pubs in the UK also have football teams composed of regular customers. Many of these teams are in leagues that play matches on Sundays, hence the term "Sunday League Football". Bowling is found in association with pubs in some parts of the country and the local team will play matches against teams invited from elsewhere on the pub's bowling green. Pubs may be venues for pub songs and live music. During the 1970s pubs provided an outlet for a number of bands, such as Kilburn and the High Roads, Dr. Feelgood and The Kursaal Flyers, who formed a musical genre called Pub rock that was a precursor to Punk music. Some pubs have a long tradition of serving food, dating back to their historic usage as inns and hotels where travellers would stay. Many pubs were drinking establishments, and little emphasis was placed on the serving of food, other than sandwiches and "bar snacks", such as pork scratchings, pickled eggs, salted crisps and peanuts which helped to increase beer sales. In South East England (especially London) it was common until recent times for vendors selling cockles, whelks, mussels, and other shellfish to sell to customers during the evening and at closing time. Many mobile shellfish stalls would set up near pubs, a practice that continues in London's East End. Otherwise, pickled cockles and mussels may be offered by the pub in jars or packets. In the 1950s some British pubs would offer "a pie and a pint", with hot individual steak and ale pies made easily on the premises by the proprietor's wife during the lunchtime opening hours. The ploughman's lunch became popular in the late 1960s, as did the convenient "chicken in a basket", a portion of roast chicken with chips, served on a napkin in a wicker basket. Family chain pubs which served food in the evenings gained popularity in the 1970s, and included Berni Inn and Beefeater. Quality dropped but variety increased with the introduction of microwave ovens and freezer food. "Pub grub" expanded to include British food items such as steak and ale pie, shepherd's pie, fish and chips, bangers and mash, Sunday roast, ploughman's lunch, and pasties. In addition, dishes such as burgers, chicken wings, lasagne and chilli con carne are often served. Some pubs offer elaborate hot and cold snacks free to customers at Sunday lunchtimes, to prevent them getting hungry and leaving for their lunch at home. Since the 1990s food has become a more important part of a pub's trade, and today most pubs serve lunches and dinners at the table in addition to (or instead of) snacks consumed at the bar. They may have a separate dining room. Some pubs serve meals to a higher standard, to match good restaurant standards; these are sometimes termed gastropubs. CAMRA maintains a "National Inventory" of historical notability and of architecturally and decoratively notable pubs. The National Trust owns thirty-six public houses of historic interest including the George Inn, Southwark, London and The Crown Liquor Saloon, Belfast, Northern Ireland. The highest pub in the United Kingdom is the Tan Hill Inn, Yorkshire, at above sea level. The remotest pub on the British mainland is The Old Forge in the village of Inverie, Lochaber, Scotland. There is no road access and it may only be reached by an walk over mountains, or a sea crossing. Contenders for the smallest public house in the UK include: The list includes a small number of parlour pubs, one of which is the Sun Inn in Leintwardine, Herefordshire. The smallest public house in Wales is claimed by Y Goron Fach (The Little Crown) in Denbigh, with a single bar of . The largest pub in the UK is the Royal Victoria Pavilion, in Ramsgate, Kent. The venue was previously a casino and before that a theatre. A number of pubs claim to be the oldest surviving establishment in the United Kingdom, although in several cases original buildings have been demolished and replaced on the same site. Others are ancient buildings that were used for purposes other than as a pub previously in their history. Ye Olde Fighting Cocks in St Albans, Hertfordshire, holds the Guinness World Record for the oldest pub in England, as it is an 11th-century structure on an 8th-century site. Ye Olde Trip to Jerusalem in Nottingham is claimed to be the "oldest inn in England". It has a claimed date of 1189, based on the fact it is constructed on the site of the Nottingham Castle brewhouse; the present building dates from around 1650. Likewise, The Nags Head in Burntwood, Staffordshire only dates back to the 16th century, while it has been claimed that a pub on the site is mentioned in the Domesday book, Burntwood is not in fact listed. There is archaeological evidence that parts of the foundations of The Old Ferryboat Inn in Holywell may date to AD 460, and there is evidence of ale being served as early as AD 560. The Bingley Arms, Bardsey, Yorkshire, is claimed to date to 905 AD. Ye Olde Salutation Inn in Nottingham dates from 1240, although the building served as a tannery and a private residence before becoming an inn sometime before the English Civil War. The Adam and Eve in Norwich was first recorded in 1249, when it was an alehouse for the workers constructing nearby Norwich Cathedral. Ye Olde Man & Scythe in Bolton, Greater Manchester, is mentioned by name in a charter of 1251, but the current building is dated 1631. Its cellars are the only surviving part of the older structure. The town of Stalybridge in Greater Manchester is thought to have the pubs with both the longest and shortest names in the United Kingdom – The Old Thirteenth Cheshire Astley Volunteer Rifleman Corps Inn and the Q Inn, both operating (the Rifleman reopening in new premises, moving from Astley Street to premises two doors away from the Q Inn in Market Street in 2019, after being closed for three years). The original Rifleman building retains a pub sign, and a blue plaque from 1995 recording the recognition of the name in the Guinness Book of Records. The number of pubs in the UK has declined year on year, at least since 1982. Various reasons are put forward for this, such as the failure of some establishments to keep up with customer requirements. Others claim the smoking ban of 2007, intense competition from gastro-pubs, the availability of cheap alcohol in supermarkets or the general economic climate are either to blame, or are factors in the decline. Changes in demographics may be an additional factor. In 2015 the rate of pub closures came under the scrutiny of Parliament in the UK, with a promise of legislation to improve relations between owners and tenants. The Lost Pubs Project listed 31,301 closed English pubs on 19 July 2016, with photographs of over 16,000. In the fifteen years to 2017 a quarter of London's pubs had closed. The closures have been ascribed to factors such as changing tastes, rise in the cost of beer due to applied taxes and the increase in the Muslim population. Inns and taverns feature throughout English literature and poetry, from The Tabard Inn in Chaucer's "Canterbury Tales" onwards. The highwayman Dick Turpin used the Swan Inn at Woughton-on-the-Green in Buckinghamshire as his base. Jamaica Inn near Bolventor in Cornwall gave its name to a 1936 novel by Daphne du Maurier and a 1939 film directed by Alfred Hitchcock. In the 1920s John Fothergill (1876–1957) was the innkeeper of the Spread Eagle in Thame, Berkshire, and published his autobiography: "An Innkeeper's Diary" (London: Chatto & Windus, 1931). During his idiosyncratic occupancy many famous people came to stay, such as H. G. Wells. United States president George W. Bush fulfilled his lifetime ambition of visiting a 'genuine British pub' during his November 2003 state visit to the UK when he had lunch and a pint of non-alcoholic lager (Bush being a teetotaler) with British Prime Minister Tony Blair at the Dun Cow pub in Sedgefield, County Durham in Blair's home constituency. There were approximately 53,500 public houses in 2009 in the United Kingdom. This number has been declining every year, so that nearly half of the smaller villages no longer have a local pub. Many of London's pubs are known to have been used by famous people, but in some cases, such as the association between Samuel Johnson and Ye Olde Cheshire Cheese, this is speculative, based on little more than the fact that the person is known to have lived nearby. However, Charles Dickens is known to have visited the Cheshire Cheese, the Prospect of Whitby, Ye Olde Cock Tavern and many others. Samuel Pepys is also associated with the Prospect of Whitby and the Cock Tavern. The Fitzroy Tavern is a pub situated at 16 Charlotte Street in the Fitzrovia district, to which it gives its name. It became famous (or according to others, infamous) during a period spanning the 1920s to the mid-1950s as a meeting place for many of London's artists, intellectuals and bohemians such as Dylan Thomas, Augustus John, and George Orwell. Several establishments in Soho, London, have associations with well-known, post-war literary and artistic figures, including the Pillars of Hercules, The Colony Room and the Coach and Horses. The Canonbury Tavern, Canonbury, was the prototype for Orwell's ideal English pub, "The Moon Under Water". The Red Lion in Whitehall is close to the Palace of Westminster and is consequently used by political journalists and Members of Parliament. The pub is equipped with a Division bell that summons MPs back to the chamber when they are required to take part in a vote. The Punch Bowl, Mayfair was at one time jointly owned by Madonna and Guy Ritchie. The Coleherne public house in Earls Court was a well-known gay pub from the 1950s. It attracted many well-known patrons, such as Freddie Mercury, Kenny Everett and Rudolph Nureyev. It was used by the serial-killer Colin Ireland to pick up victims. Jack Straw's Castle was a pub named after Jack Straw, one of the three leaders of Peasants' Revolt, the pub was active since 14th century until its destruction by The Blitz during second world war. In 1966 The Blind Beggar in Whitechapel became infamous as the scene of a murder committed by gangster Ronnie Kray. The Ten Bells is associated with several of the victims of Jack the Ripper. In 1955, Ruth Ellis, the last woman executed in the United Kingdom, shot David Blakely as he emerged from "The Magdala" in South Hill Park, Hampstead, the bullet holes can still be seen in the walls outside. It is said that Vladimir Lenin and a young Joseph Stalin met in the "Crown and Anchor" pub (now known as "The Crown Tavern") on Clerkenwell Green when the latter was visiting London in 1903. The Angel, Islington was formerly a coaching inn, the first on the Great North Road, the main route northwards out of London, where Thomas Paine is believed to have written much of "The Rights of Man". It was mentioned by Charles Dickens, became a Lyons Corner House, and is now a Co-operative Bank. The Eagle and Child and the Lamb and Flag, Oxford, were regular meeting places of the Inklings, a writers' group which included J. R. R. Tolkien and C. S. Lewis. The Eagle in Cambridge is where Francis Crick interrupted patrons' lunchtime on 28 February 1953 to announce that he and James Watson had "discovered the secret of life" after they had come up with their proposal for the structure of DNA. The anecdote is related in Watson's book "The Double Helix". and commemorated with a blue plaque on the outside wall. Although "British" pubs found outside of Britain and its former colonies are often themed bars owing little to the original British pub, a number of "true" pubs may be found around the world. In Scandinavia, especially Denmark, a number of pubs have opened which eschew "theming", and which instead focus on the business of providing carefully conditioned beer, often independent of any particular brewery or chain, in an environment which would not be unfamiliar to a British pub-goer. Some import British cask ale, rather than beer in kegs, to provide the full British real ale experience to their customers. This newly established Danish interest in British cask beer and the British pub tradition is reflected by the fact that some 56 British cask beers were available at the 2008 European Beer Festival in Copenhagen, which was attended by more than 20,000 people. In Ireland, pubs are known for their atmosphere or "craic". In Irish, a pub is referred to as "teach tábhairne" ("tavernhouse") or "teach óil" ("drinkinghouse"). Live music, either sessions of traditional Irish music or varieties of modern popular music, is frequently featured in the pubs of Ireland. Pubs in Northern Ireland are largely identical to their counterparts in the Republic of Ireland except for the lack of spirit grocers. A side effect of "The Troubles" was that the lack of a tourist industry meant that a higher proportion of traditional bars have survived the wholesale refitting of Irish pub interiors in the 'English style' in the 1950s and 1960s. New Zealand sports a number of Irish pubs. The most popular term in English-speaking Canada used for a drinking establishment was "tavern", until the 1970s when the term "bar" became widespread as in the United States. In the 1800s the term used was "public house" as in England. A fake "English looking" pub trend started in the 1990s, built into existing storefronts, like regular bars. Most universities in Canada have campus pubs which are central to student life, as it would be bad form just to serve alcohol to students without providing some type of basic food. Often these pubs are run by the student's union. The gastropub concept has caught on, as traditional British influences are to be found in many Canadian dishes. Pubs are a common setting for fictional works, including novels, stories, films, video games, and other works. In many cases, authors and other creators develop imaginary pubs for their works, some of which have become notable fictional places. Notable fictional pubs include The Admiral Benbow Inn in the "Treasure Island" pirate story, The Garrison in the 1920s crime TV drama "Peaky Blinders", The Golden Perch in the high fantasy novel "Lord of the Rings", The Hog's Head pub in the "Harry Potter" fantasy series, Moe's Tavern, a working-class venue in "The Simpsons", and The Oak and Crosier in "" video game. The major soap operas on British television each feature a fictional pub, and these pubs have become household names in Britain. The Rovers Return is the pub in "Coronation Street", the British soap broadcast on ITV. The Queen Vic (short for the Queen Victoria) is the pub in "EastEnders", the major soap on BBC One and the Woolpack in ITV's "Emmerdale". The sets of each of the three major television soap operas have been visited by some of the members of the royal family, including Queen Elizabeth II. The centrepiece of each visit was a trip into the Rovers, the Queen Vic, or the Woolpack to be offered a drink. The Bull in the BBC Radio 4 soap opera "The Archers" is an important meeting point.
https://en.wikipedia.org/wiki?curid=24578
Pelvic inflammatory disease Pelvic inflammatory disease, also known as pelvic inflammatory disorder (PID), is an infection of the upper part of the female reproductive system, namely the uterus, fallopian tubes, and ovaries, and inside of the pelvis. Often, there may be no symptoms. Signs and symptoms, when present, may include lower abdominal pain, vaginal discharge, fever, burning with urination, pain with sex, bleeding after sex, or irregular menstruation. Untreated PID can result in long-term complications including infertility, ectopic pregnancy, chronic pelvic pain, and cancer. The disease is caused by bacteria that spread from the vagina and cervix. Infections by "Neisseria gonorrhoeae" or "Chlamydia trachomatis" are present in 75 to 90 percent of cases. Often, multiple different bacteria are involved. Without treatment, about 10 percent of those with a chlamydial infection and 40 percent of those with a gonorrhea infection will develop PID. Risk factors are generally similar to those of sexually transmitted infections and include a high number of sexual partners and drug use. Vaginal douching may also increase the risk. The diagnosis is typically based on the presenting signs and symptoms. It is recommended that the disease be considered in all women of childbearing age who have lower abdominal pain. A definitive diagnosis of PID is made by finding pus involving the fallopian tubes during surgery. Ultrasound may also be useful in diagnosis. Efforts to prevent the disease include not having sex or having few sexual partners and using condoms. Screening women at risk for chlamydial infection followed by treatment decreases the risk of PID. If the diagnosis is suspected, treatment is typically advised. Treating a woman's sexual partners should also occur. In those with mild or moderate symptoms, a single injection of the antibiotic ceftriaxone along with two weeks of doxycycline and possibly metronidazole by mouth is recommended. For those who do not improve after three days or who have severe disease, intravenous antibiotics should be used. Globally, about 106 million cases of chlamydia and 106 million cases of gonorrhea occurred in 2008. The number of cases of PID, however, is not clear. It is estimated to affect about 1.5 percent of young women yearly. In the United States, PID is estimated to affect about one million people each year. A type of intrauterine device (IUD) known as the Dalkon shield led to increased rates of PID in the 1970s. Current IUDs are not associated with this problem after the first month. Symptoms in PID range from none to severe. If there are symptoms, then fever, cervical motion tenderness, lower abdominal pain, new or different discharge, painful intercourse, uterine tenderness, adnexal tenderness, or irregular menstruation may be noted. Other complications include endometritis, salpingitis, tubo-ovarian abscess, pelvic peritonitis, periappendicitis, and perihepatitis. PID can cause scarring inside the reproductive system, which can later cause serious complications, including chronic pelvic pain, infertility, ectopic pregnancy (the leading cause of pregnancy-related deaths in adult females), and other complications of pregnancy. Occasionally, the infection can spread to the peritoneum causing inflammation and the formation of scar tissue on the external surface of the liver (Fitz-Hugh–Curtis syndrome). "Chlamydia trachomatis" and "Neisseria gonorrhoeae" are usually the main cause of PID. Data suggest that PID is often polymicrobial. Isolated anaerobes and facultative microorganisms have been obtained from the upper genital tract. "N. gonorrhoeae" has been isolated from fallopian tubes, facultative and anaerobic organisms were recovered from endometrial tissues. The anatomical structure of the internal organs and tissues of the female reproductive tract provides a pathway for pathogens to ascend from the vagina to the pelvic cavity thorough the infundibulum. The disturbance of the naturally occurring vaginal microbiota associated with bacterial vaginosis increases the risk of PID. "N. gonorrhoea" and "C. trachomati"s are the most common organisms. The least common were infections caused exclusively by anaerobes and facultative organisms. Anaerobes and facultative bacteria were also isolated from 50 percent of the patients from whom "Chlamydia" and "Neisseria" were recovered; thus, anaerobes and facultative bacteria were present in the upper genital tract of nearly two-thirds of the PID patients. PCR and serological tests have associated extremely fastidious organism with endometritis, PID, and tubal factor infertility. Microorganisms associated with PID are listed below. Rarely cases of PID have developed in people who have stated they have never had sex. Upon a pelvic examination, cervical motion, uterine, or adnexal tenderness will be experienced. Mucopurulent cervicitis and or urethritis may be observed. In severe cases more testing may be required such as laparoscopy, intra-abdominal bacteria sampling and culturing, or tissue biopsy. Laparoscopy can visualize "violin-string" adhesions, characteristic of Fitz-Hugh–Curtis perihepatitis and other abscesses that may be present. Other imaging methods, such as ultrasonography, computed tomography (CT), and magnetic imaging (MRI), can aid in diagnosis. Blood tests can also help identify the presence of infection: the erythrocyte sedimentation rate (ESR), the C-reactive protein (CRP) level, and chlamydial and gonococcal DNA probes. Nucleic acid amplification tests (NAATs), direct fluorescein tests (DFA), and enzyme-linked immunosorbent assays (ELISA) are highly sensitive tests that can identify specific pathogens present. Serology testing for antibodies is not as useful since the presence of the microorganisms in healthy people can confound interpreting the antibody titer levels, although antibody levels can indicate whether an infection is recent or long-term. Definitive criteria include histopathologic evidence of endometritis, thickened filled Fallopian tubes, or laparoscopic findings. Gram stain/smear becomes definitive in the identification of rare, atypical and possibly more serious organisms. Two thirds of patients with laparoscopic evidence of previous PID were not aware they had PID, but even asymptomatic PID can cause serious harm. Laparoscopic identification is helpful in diagnosing tubal disease; a 65 percent to 90 percent positive predictive value exists in patients with presumed PID. Upon gynecologic ultrasound, a potential finding is "tubo-ovarian complex", which is edematous and dilated pelvic structures as evidenced by vague margins, but without abscess formation. A number of other causes may produce similar symptoms including appendicitis, ectopic pregnancy, hemorrhagic or ruptured ovarian cysts, ovarian torsion, and endometriosis and gastroenteritis, peritonitis, and bacterial vaginosis among others. Pelvic inflammatory disease is more likely to reoccur when there is a prior history of the infection, recent sexual contact, recent onset of menses, or an IUD (intrauterine device) in place or if the partner has a sexually transmitted infection. Acute pelvic inflammatory disease is highly unlikely when recent intercourse has not taken place or an IUD is not being used. A sensitive serum pregnancy test is typically obtained to rule out ectopic pregnancy. Culdocentesis will differentiate hemoperitoneum (ruptured ectopic pregnancy or hemorrhagic cyst) from pelvic sepsis (salpingitis, ruptured pelvic abscess, or ruptured appendix). Pelvic and vaginal ultrasounds are helpful in the diagnosis of PID. In the early stages of infection, the ultrasound may appear normal. As the disease progresses, nonspecific findings can include free pelvic fluid, endometrial thickening, uterine cavity distension by fluid or gas. In some instances the borders of the uterus and ovaries appear indistinct. Enlarged ovaries accompanied by increased numbers of small cysts correlates with PID. Laparoscopy is infrequently used to diagnose pelvic inflammatory disease since it is not readily available. Moreover, it might not detect subtle inflammation of the fallopian tubes, and it fails to detect endometritis. Nevertheless, laparoscopy is conducted if the diagnosis is not certain or if the person has not responded to antibiotic therapy after 48 hours. No single test has adequate sensitivity and specificity to diagnose pelvic inflammatory disease. A large multisite U.S. study found that cervical motion tenderness as a minimum clinical criterion increases the sensitivity of the CDC diagnostic criteria from 83 percent to 95 percent. However, even the modified 2002 CDC criteria do not identify women with subclinical disease. Regular testing for sexually transmitted infections is encouraged for prevention. The risk of contracting pelvic inflammatory disease can be reduced by the following: Treatment is often started without confirmation of infection because of the serious complications that may result from delayed treatment. Treatment depends on the infectious agent and generally involves the use of antibiotic therapy although there is no clear evidence of which antibiotic regimen is more effective and safe in the management of PID. If there is no improvement within two to three days, the patient is typically advised to seek further medical attention. Hospitalization sometimes becomes necessary if there are other complications. Treating sexual partners for possible STIs can help in treatment and prevention. For women with PID of mild to moderate severity, parenteral and oral therapies appear to be effective. It does not matter to their short- or long-term outcome whether antibiotics are administered to them as inpatients or outpatients. Typical regimens include cefoxitin or cefotetan plus doxycycline, and clindamycin plus gentamicin. An alternative parenteral regimen is ampicillin/sulbactam plus doxycycline. Erythromycin-based medications can also be used. A single study suggests superiority of azithromycin over doxycycline. Another alternative is to use a parenteral regimen with ceftriaxone or cefoxitin plus doxycycline. Clinical experience guides decisions regarding transition from parenteral to oral therapy, which usually can be initiated within 24–48 hours of clinical improvement. Even when the PID infection is cured, effects of the infection may be permanent. This makes early identification essential. Treatment resulting in cure is very important in the prevention of damage to the reproductive system. Formation of scar tissue due to one or more episodes of PID can lead to tubal blockage, increasing the risk of the inability to get pregnant and long-term pelvic/abdominal pain. Certain occurrences such as a post pelvic operation, the period of time immediately after childbirth (postpartum), miscarriage or abortion increase the risk of acquiring another infection leading to PID. Globally about 106 million cases of chlamydia and 106 million cases of gonorrhea occurred in 2008. The number of cases of PID; however, is not clear. It is estimated to affect about 1.5 percent of young women yearly. In the United States PID is estimated to affect about one million people yearly. Rates are highest with teenagers and first time mothers. PID causes over 100,000 women to become infertile in the US each year.
https://en.wikipedia.org/wiki?curid=24579
Technology in Star Trek The technology in "Star Trek" has borrowed freely from the scientific world to provide storylines. Episodes are replete with references to tachyon beams, baryon sweeps, quantum fluctuations and event horizons. Many of the technologies created for the "Star Trek" universe were done so out of simple financial necessity—the transporter was created because the limited budget of the in the 1960s did not allow expensive shots of spaceships landing on planets. "Discovery Channel Magazine" stated that cloaking devices, faster-than-light travel and dematerialized transport were only dreams at the time the original series was made, but physicist Michio Kaku believes all these things are possible. William Shatner, who portrayed James T. Kirk in the original "Star Trek" series, believed this as well, and went on to cowrite the book "I'm Working on That", in which he investigated how "Star Trek" technology was becoming feasible. In the "Star Trek" fictional universe, subspace is a feature of space-time that facilitates faster-than-light transit, in the form of interstellar travel or the transmission of information. Subspace works similarly to the Alcubierre Drive, but obeys different laws of physics. Subspace has also been adopted and used in other fictional settings, such as the "Stargate" franchise, "The Hitchhiker's Guide to the Galaxy" series, and "". In most "Star Trek" series, subspace communications are a means to establish nearly instantaneous contact with people and places that are light years away. The physics of "Star Trek" describe infinite speed (expressed as Warp 10) as an impossibility; as such, even subspace communications which putatively travel at speeds over Warp 9.9 may take hours or weeks to reach certain destinations. Since subspace signals do not degrade with the square of the distance as do other methods of communication utilizing conventional bands of the electromagnetic spectrum (i.e. radio waves), signals sent from a great distance can be expected to reach their destination at a predictable time and with little relative degradation (barring any random subspace interference or spatial anomalies). Subspace communications have a limit of just over 20 light years before they must be boosted, although this limitation has been "written around" in several storylines.
https://en.wikipedia.org/wiki?curid=24583
Impulse drive In the fictional "Star Trek" universe, the impulse drive is the method of propulsion that starships and other spacecraft use when they are travelling below the speed of light. Typically powered by deuterium fusion reactors, impulse engines let ships travel interplanetary distances readily. Unlike the warp engines, impulse engines work on principles used in today's rocketry, throwing mass out the back as fast as possible to drive the ship forward. There are three practical challenges surrounding impulse drive design: acceleration, time dilation and conservation of energy. In the show, inertial dampers compensate for acceleration. These hypothetical devices would have to be set so that the propellant retained its inertia after leaving the craft otherwise the drive would be ineffective. Time dilation would become noticeable at appreciable fractions of the speed of light. Regarding energy conservation, the television series and books offer two explanations:
https://en.wikipedia.org/wiki?curid=24586
Punk subculture The punk subculture includes a diverse array of ideologies, fashion, and other forms of expression, visual art, dance, literature and film. It is largely characterised by anti-establishment views, the promotion of individual freedom, DIY ethics, and is centred on a loud, aggressive genre of rock music called punk rock. Punk politics cover the entire political spectrum. The punk ethos is primarily made up of beliefs such as non-conformity, anti-authoritarianism, anti-corporatism, a do-it-yourself ethic, anti-consumerist, anti-corporate greed, direct action and not "selling out". There is a wide range of punk fashion, including deliberately offensive T-shirts, leather jackets, Dr. Martens boots, hairstyles such as brightly coloured hair and spiked mohawks, cosmetics, tattoos, jewellery and body modification. Women in the hardcore scene typically wore masculine clothing. Punk aesthetics determine the type of art punks enjoy, which typically has underground, minimalist, iconoclastic and satirical sensibilities. Punk has generated a considerable amount of poetry and prose, and has its own underground press in the form of zines. Many punk-themed films and videos have been made. The punk subculture emerged in the United Kingdom and the United States in the mid-1970s. Exactly which region originated punk has long been a matter of controversy within the movement. Early punk had an abundance of antecedents and influences, and Jon Savage describes the subculture as a "bricolage" of almost every previous youth culture in the Western world since World War II, "stuck together with safety pins". Various musical, philosophical, political, literary and artistic movements influenced the subculture. In the late 1970s, the subculture began to diversify, which led to the proliferation of factions such as new wave, post-punk, 2 Tone, pop punk, hardcore punk, no wave, street punk and Oi!. Hardcore punk, street punk and Oi! sought to do away with the frivolities introduced in the later years of the original punk movement. The punk subculture influenced other underground music scenes such as alternative rock, indie music, crossover thrash and the extreme subgenres of heavy metal (mainly thrash metal, death metal, speed metal, and the NWOBHM). A new movement in the United States became visible in the early and mid-1990s that sought to revive the punk movement, doing away with some of the trappings of hardcore. The punk subculture is centered on a loud, aggressive genre of rock music called punk rock, usually played by bands consisting of a vocalist, one or two electric guitarists, an electric bassist and a drummer. In some bands, the musicians contribute backup vocals, which typically consist of shouted slogans, choruses or football-style chants. While most punk rock uses the distorted guitars and noisy drumming sounds derived from 1960s garage rock and 1970s pub rock, some punk bands incorporate elements from other subgenres, such as surf rock, rockabilly or reggae. Most punk rock songs are short, have simple and somewhat basic arrangements using relatively few chords, and they typically have lyrics that express punk ideologies and values, although some punk lyrics are about lighter topics such as partying or romantic relationships. Different punk subcultures often distinguish themselves by having a unique style of punk rock, although not every style of punk rock has its own associated subculture. The earliest form of music to be called "punk rock" was 1960s garage rock, and the term was applied to the genre retroactively by influential rock critics in the early 1970s. In the late 1960s, music now referred to as protopunk originated as a garage rock revival in the northeastern United States. The first distinct music scene to claim the "punk" label appeared in New York City between 1974 and 1976. Around the same time or soon afterward, a punk scene developed in London. Los Angeles subsequently became home to the third major punk scene. These three cities formed the backbone of the burgeoning movement, but there were also other punk scenes in cities such as Brisbane, Melbourne and Sydney in Australia, Vancouver and Montreal in Canada, and Boston and San Francisco in the United States. The punk subculture advocates a do-it-yourself (DIY) ethic. During the subculture's infancy members were almost all from a lower economic class, and had become tired of the affluence that was associated with popular rock music at the time. Punks would publish their own music or sign with small independent labels, in hopes to combat what they saw as a money hungry music industry. The DIY ethic is still popular with punks. The New York City punk rock scene arose from a subcultural underground promoted by artists, reporters, musicians and a wide variety of non-mainstream enthusiasts. The Velvet Underground's harsh and experimental yet often melodic sound in the mid to late-1960s, much of it relating to transgressive media work by visual artist Andy Warhol, is credited for influencing 1970s bands such as the New York Dolls, The Stooges and the Ramones. Early New York City punk bands were often short-lived, in part due to widespread use of recreational drugs, promiscuous sex, and sometimes violent power struggles, but the relative popularity of the music led to the evolution of punk into a movement and lifestyle. Although punks are frequently categorised as having left-wing, revolutionary, anarchist or progressive views, punk political ideology covers the entire political spectrum. Punk-related ideologies are mostly concerned with individual freedom and anti-establishment views. Common punk viewpoints include individual liberty, anti-authoritarianism, a DIY ethic, non-conformity, anti-collectivism, anti-corporatism, anti-government, direct action and not "selling out". Some individuals within the punk subculture hold right-wing views (such as those associated with the "Conservative Punk" website), libertarian views, neo-Nazi views (Nazi punk), or are apolitical (e.g., horror punk). Early British punks expressed nihilistic and anarchist views with the slogan "No Future", which came from the Sex Pistols song "God Save the Queen". In the United States, punks had a different approach to nihilism which was less anarchistic than the British punks. Punk nihilism was expressed in the use of "harder, more self-destructive, consciousness-obliterating substances like heroin, or methamphetamine" The issue of authenticity is important in the punk subculture—the pejorative term "poseur" is applied to those who associate with punk and adopt its stylistic attributes but are deemed not to share or understand the underlying values or philosophy. Early punk fashion adapted everyday objects for aesthetic effect: ripped clothing was held together by safety pins or wrapped with tape; ordinary clothing was customised by embellishing it with marker or adorning it with paint; a black bin liner became a dress, shirt or skirt; safety pins and razor blades were used as jewellery. Also popular have been leather, rubber, and PVC clothing that is often associated with transgressive sexuality, like BDSM and S&M. A designer associated with early UK punk fashion was Vivienne Westwood, who made clothes for Malcolm McLaren's boutique in the King's Road, which became famous as "SEX". Many punks wear tight "drainpipe" jeans, plaid/tartan trousers, kilts or skirts, T-shirts, leather jackets (often decorated with painted band logos, pins and buttons, and metal studs or spikes), and footwear such as high-cut Chuck Taylors, trainers, skate shoes, brothel creepers, Dr. Martens boots, and army boots. Early punks occasionally wore clothes displaying a swastika for shock value, but most contemporary punks are staunchly anti-racist and are more likely to wear a crossed-out swastika symbol than a pro-Nazi symbol. Some punks cut their hair into Mohawks or other dramatic shapes, style it to stand in spikes, and colour it with vibrant, unnatural hues. Some punks are "anti-fashion", arguing that punk should be defined by music or ideology. This is most common in the post-1980s US hardcore punk scene, where members of the subculture often dressed in plain T-shirts and jeans, rather than the more elaborate outfits and spiked, dyed hair of their British counterparts. Many groups adopt a look based on street clothes and working class outfits. Hardcore punk fans adopted a "dressed-down" style of T-shirts, jeans, combat boots or trainers and crewcuts. Women in the hardcore scene typically wore army trousers, band T-shirts, and hooded jumpers. The style of the 1980s hardcore scene contrasted with the more provocative fashion styles of late 1970s punk rockers (elaborate hairdos, torn clothes, patches, safety pins, studs, spikes, etc.). Circle Jerks frontman Keith Morris described early hardcore fashion as "the...punk scene was basically based on English fashion. But we had nothing to do with that. Black Flag and the Circle Jerks were so far from that. We looked like the kid who worked at the gas station or submarine shop." Henry Rollins echoes Morris' point, stating that for him getting dressed up meant putting on a black shirt and some dark pants; Rollins viewed an interest in fashion as being a distraction. Jimmy Gestapo from Murphy's Law describes his own transition from dressing in a punk style (spiked hair and a bondage belt) to adopting a hardcore style (i.e. boots and a shaved head) as being based on a need for more functional clothing. A punk scholar states that "hardcore kids do not look like punks", since hardcore scene members wore basic clothing and short haircuts, in contrast to the "embellished leather jackets and pants" worn in the punk scene. In contrast to Morris' and Rollins' views, another punk scholar claims that the standard hardcore punk clothing and styles included torn jeans, leather jackets, spiked armbands and dog collars, mohawk hairstyles, and DIY ornamentation of clothes with studs, painted band names, political statements, and patches. Yet another punk scholar describes the look that was common in the San Francisco hardcore scene as consisting of biker-style leather jackets, chains, studded wristbands, pierced noses and multiple piercings, painted or tattooed statements (e.g. an anarchy symbol) and hairstyles ranging from military-style haircuts dyed black or blonde, to mohawks and shaved heads. The Metropolitan Museum of Art in 2013 hosted a comprehensive exhibit, "PUNK: Chaos to Couture", that examined the techniques of hardware, distress, and re-purposing in punk fashion. In the United Kingdom, the advent of punk in the late 1970s with its "anyone can do it" ethos led to women making significant contributions. In contrast to the rock music and heavy metal scenes of the 1970s, which were dominated by men, the anarchic, counter-cultural mindset of the punk scene in mid- and late 1970s encouraged women to participate. "That was the beauty of the punk thing," Chrissie Hynde later said." [Sexual] discrimination didn't exist in that scene." This participation played a role in the historical development of punk music, especially in the U.S. and U.K. at that time, and continues to influence and enable future generations. Rock historian Helen Reddington states that the popular image of young punk women musicians as focused on the fashion aspects of the scene (fishnet stockings, spiky blond hair, etc.) was stereotypical. She states that many, if not most women punks were more interested in the ideology and socio-political implications, rather than the fashion. Music historian Caroline Coon contends that before punk, women in rock music were virtually invisible; in contrast, in punk, she argues "[i]t would be possible to write the whole history of punk music without mentioning any male bands at all – and I think a lot of [people] would find that very surprising." Johnny Rotten wrote that ‘During the Pistols era, women were out there playing with the men, taking us on in equal terms ... It wasn’t combative, but compatible.’ Women were involved in bands such as The Runaways, The Slits, The Raincoats, Mo-dettes, and Dolly Mixture, The Innocents. Others take issue with the notion of equal recognition, such as guitarist Viv Albertine, who stated that "the A&R men, the bouncers, the sound mixers, no one took us seriously.. So, no, we got no respect anywhere we went. People just didn't want us around." The anti-establishment stance of punk opened the space for women who were treated like outsiders in a male-dominated industry. Sonic Youth's Kim Gordon states, "I think women are natural anarchists, because you're always operating in a male framework." For some punks, the body was a symbol of opposition, a political statement expressing disgust of all that was "normal" and socially accepted. The idea was to make others outside of the subculture question their own views, which made gender, gender presentation and gender identity a popular factor to be played with. Men could look like women, women could look like men, or one could look like both or neither. In some ways, punk helped to tear apart the normalised view of gender as a dichotomy. There was a notable amount of cross-dressing in the punk scene; it was not unusual to see men wearing ripped-up skirts, fishnet tights and excessive makeup, or to see women with shaved heads wearing oversized plaid shirts and jean jackets and heavy combat boots. Punk created a new cultural space for androgyny and all kinds of gender expression. Some scholars have claimed that punk has been problematic towards gender by stating its overall resistance to expressing any kind of popular conceptions of femininity. In trying to reject societal norms, punk embraced one societal norm by deciding that strength and anger was best expressed through masculinity, defining masculine as the "default" in the world they were trying to create, where gender did not exist or had no meaning. However, the main reasoning behind this argument equates femininity with popular conceptions of beauty, which punk rejected. One part of punk was a creating explicitly outward identities of sexuality. Everything that was normally supposed to be hidden was brought to the front, both literally and figuratively. This could mean anything from wearing bras and underwear on top of clothing to wearing nothing but a bra and underwear. Although that act would seem sexualised in a normal context, to punks it was just another way to be obscene in the eyes of "others". Punk seemed to allow women to sexualize themselves and still be taken seriously; however, many argue that this was always in terms of what the male punks wanted. Conversely, the masculine nature of punk allowed many women to recreate an almost farcical masculinity by using their female bodies in the same way men tended to use theirs. Punk women could be filthy and horrible and use their femininity to make what they were doing even more shocking to their audience. It became popular for some punk women to accentuate their bodies in ridiculous ways, such as stuffing their pants to make exaggerated labia outlines, as if parodying male crotch stuffing. At one concert, Donita Sparks, lead singer of the band L7, pulled out her tampon and threw it into the audience. In many ways, female punks were showing unapologetically (and exaggeratedly) what it truly meant to be a woman, with nothing soft or "classically feminine" to hide behind. Riot grrrl is an underground feminist hardcore punk movement that originated in Washington, D.C. in the early 1990s, and the Pacific Northwest, especially Olympia, Washington. It is often associated with third-wave feminism, which is sometimes seen as its starting point. It has also been described as a musical genre that came out of indie rock, with the punk scene serving as an inspiration for a musical movement in which women could express themselves in the same way men had been doing for the past several years. Punk aesthetics determine the type of art punks enjoy, usually with underground, minimalistic, iconoclastic and satirical sensibilities. Punk artwork graces album covers, flyers for concerts, and punk zines. Usually straightforward with clear messages, punk art is often concerned with political issues such as social injustice and economic disparity. The use of images of suffering to shock and create feelings of empathy in the viewer is common. Alternatively, punk artwork may contain images of selfishness, stupidity, or apathy to provoke contempt in the viewer. Much of the earlier artwork was black and white, because it was distributed in zines reproduced by photocopying at work, school or at copy shops. Punk art also uses the mass production aesthetic of Andy Warhol's Factory studio. Punk played a hand in the revival of stencil art, spearheaded by Crass. The Situationists also influenced the look of punk art, particularity that of the Sex Pistols created by Jamie Reid. Punk art often utilises collage, exemplified by the art of Jamie Reid, Crass, The Clash, Dead Kennedys,and Winston Smith. John Holmstrom was a punk cartoonist who created work for the Ramones and "Punk". The Stuckism art movement had its origin in punk, and titled its first major show "The Stuckists Punk Victorian" at the Walker Art Gallery during the 2004 Liverpool Biennial. Charles Thomson, co-founder of the group, described punk as "a major breakthrough" in his art. Two dance styles associated with punk are pogo dancing and moshing. The pogo is a dance in which the dancers jump up and down, while either remaining on the spot or moving around; the dance takes its name from its resemblance to the use of a pogo stick, especially in a common version of the dance, where an individual keeps their torso stiff, their arms rigid, and their legs close together. Pogo dancing is closely associated with punk rock and is a precursor to moshing. Moshing or slamdancing is a style of dance where participants push or slam into each other, typically during a live music show. It is usually associated with "aggressive" music genres, such as hardcore punk and thrash metal. Stage diving and crowd surfing were originally associated with protopunk bands such as The Stooges, and have appeared at punk, metal and rock concerts. Ska punk promoted an updated version of skanking. Hardcore dancing is a later development influenced by all of the above-mentioned styles. Psychobillies prefer to "wreck", a form of slam dancing that involves people punching each other in the chest and arms as they move around the circle pit. Punk has generated a considerable amount of poetry and prose. Punk has its own underground press in the form of punk zines, which feature news, gossip, cultural criticism, and interviews. Some zines take the form of perzines. Important punk zines include "Maximum RocknRoll", "Punk Planet", "No Cure", "Cometbus", "Flipside", and "Search & Destroy". Several novels, biographies, autobiographies, and comic books have been written about punk. "Love and Rockets" is a comic with a plot involving the Los Angeles punk scene. Just as zines played an important role in spreading information in the punk era (e.g. British fanzines like Mark Perry's "Sniffin Glue" and Shane MacGowan's "Bondage"), zines also played an important role in the hardcore scene. In the pre-Internet era, zines enabled readers to learn about bands, shows, clubs, and record labels. Zines typically included reviews of shows and records, interviews with bands, letters to the editor, and advertisements for records and labels. Zines were DIY products, "proudly amateur, usually handmade, and always independent", and during the "’90s, zines were the primary way to stay up on punk and hardcore." They were the "blogs, comment sections, and social networks of their day." In the American Midwest, the zine "Touch and Go" described the regional hardcore scene from 1979 to 1983. "We Got Power" described the LA scene from 1981 to 1984, and included show reviews of and interviews with such bands as Vancouver's D.O.A., the Misfits, Black Flag, Suicidal Tendencies and the Circle Jerks. "My Rules" was a photo zine that included photos of hardcore shows from across the US. "In Effect", which began in 1988, described the New York City scene. Punk poets include: Richard Hell, Jim Carroll, Patti Smith, John Cooper Clarke, Seething Wells, Raegan Butcher, and Attila the Stockbroker. The Medway Poets performance group included punk musician Billy Childish and had an influence on Tracey Emin. Jim Carroll's autobiographical works are among the first known examples of punk literature. The punk subculture has inspired the cyberpunk and steampunk literature genres, and has even contributed (through Iggy Pop) to classical scholarship. Many punk-themed films have been made. The No Wave Cinema and Remodernist film movements owe much to punk aesthetics. Several famous punk bands have participated in movies, such as the Ramones in "Rock 'n' Roll High School", the Sex Pistols in "The Great Rock 'n' Roll Swindle" and Social Distortion in "Another State of Mind". Derek Jarman and Don Letts are notable punk filmmakers. Penelope Spheeris' first instalment of the documentary trilogy "The Decline of Western Civilization" (1981) focuses on the early Los Angeles punk scene through interviews and early concert footage from bands including Black Flag, Circle Jerks, Germs and Fear. The Decline of Western Civilization III" explores the gutter punk lifestyle in the 1990s. "Loren Cass" is another example of the punk subculture represented in film. Also, the documentary film AfroPunk covers the black experience in the punk DIY scene. More Films/Documentaries: Suburbia Bomb City Punks In Prague The Green Room Summer Of Sam Sid And Nancy CBGB SLC Punks "[Glue] sniffing was adopted by punks because public perceptions of sniffing fitted in with their self-image. Originally used experimentally and as a cheap high, adult disgust and hostility encouraged punks to use glue sniffing as a way of shocking society." Model airplane glue and contact cement were among the numerous solvents and inhalants used by punks to achieve euphoria and intoxication. Glue was typically inhaled by placing a quantity in a plastic bag and "huffing" (inhaling) the vapour. Liquid solvents were typically inhaled by soaking a rag with the solvent and inhaling the vapour. While users inhale solvents for the intoxicating effects, the practice can be harmful or fatal. Straight edge is a philosophy of hardcore punk culture, adherents of which refrain from using alcohol, tobacco, and other recreational drugs, in reaction to the excesses of punk subculture. For some, this extends to refraining from engaging in promiscuous sex, following a vegetarian or vegan diet, and not drinking coffee or taking prescribed medicine. The term "straight edge" was adopted from the 1981 song "Straight Edge" by the hardcore punk band Minor Threat. Straight edge emerged amid the early-1980s hardcore punk scene. Since then, a wide variety of beliefs and ideas have been associated with the movement, including vegetarianism and animal rights. Ross Haenfler writes that as of the late 1990s, approximately three out of four straight edge participants were vegetarian or vegan. While the commonly expressed aspects of the straight edge subculture have been abstinence from alcohol, nicotine, and illegal drugs, there have been considerable variations on how far to take the interpretations of "abstaining from intoxicants" or "living drug-free". Disagreements often arise as to the primary reasons for living straight edge. Straight edge politics are primarily left-wing and revolutionary but there have been conservative offshoots. In 1999, William Tsitsos wrote that straight edge had gone through three eras since its founding in the early 1980s. Bent edge began as a counter-movement to straight edge by members of the Washington, D.C. hardcore scene who were frustrated by the rigidity and intolerance in the scene. During the youth crew era, which started in the mid-1980s, the influence of music on the straight edge scene was at an all-time high. By the early 1990s, militant straight edge was a well-known part of the wider punk scene. In the early to mid-1990s, straight edge spread from the United States to Northern Europe, Eastern Europe, the Middle East, and South America. By the beginning of the 2000s, militant straight edge punks had largely left the broader straight edge culture and movement. Punks come from all culture and economic classes. Compared to some subcultures, punk ideology is much closer to gender equality. Although the punk subculture is mostly anti-racist, it is overwhelmingly white. However, members of other groups (such as African Americans, other black people, Latinos, and Asians) have contributed to the development of the subculture. Substance abuse has sometimes been a part of the punk scene, with the notable exception of the straight edge movement. Violence has also sometimes appeared in the punk subculture, but has been opposed by some subsets of the subculture, such as the pacifist strain anarcho-punk. Punks often form a local scene, which can have as few as half a dozen members in a small town, or as many as thousands of in a major city. A local scene usually has a small group of dedicated punks surrounded by a more casual periphery. A typical punk scene is made up of punk and hardcore bands, fans who attend concerts, protests, and other events, zine publishers, reviewers, and other writers, visual artists illustrating zines, and creating posters and album covers, show promoters, and people who work at music venues or independent record labels. Squatting plays a role in many punk communities, providing shelter and other forms of support. Squats in abandoned or condemned housing, and communal "punk houses" often provide bands a place to stay while they are touring. There are some punk communes, such as Essex's Dial House. The Internet has been playing an increasingly large role in punk, specifically in the form of virtual communities and file sharing programs for trading music files. In the punk and hardcore subcultures, members of the scene are often evaluated in terms of the authenticity of their commitment to the values or philosophies of the scene, which may range from political beliefs to lifestyle practices. In the punk subculture, the epithet "poseur" (or "poser") is used to describe "a person who habitually pretends to be something [they are] not." The term is used to refer to a person who adopts the dress, speech, and/or mannerisms of a particular subculture, yet who is deemed to not share or understand the values or philosophy of the subculture. While this perceived inauthenticity is viewed with scorn and contempt by members of the subculture, the definition of the term and to whom it should be applied is subjective. An article in "Drowned in Sound" argues that 1980s-era "hardcore is the true spirit of punk", because "after all the poseurs and fashionistas fucked off to the next trend of skinny pink ties with New Romantic haircuts, singing wimpy lyrics", the punk scene consisted only of people "completely dedicated to the DIY ethics". In the discussion of authenticity it is necessary to recognize the origins of punk music. Proto-punk bands came out of garage-rock during the late 1960s. Usually white working-class boys are credited for pioneering the genre, however there were many women and people of color who contributed to the original punk sound and aesthetic. Because the original subculture meant to challenge everything about the mainstream, usually in shocking ways, the "punk" that people usually picture became inauthentic once it was brought to the mainstream; "‘Inauthentic’ punk is a commercialized and debased form of an original ‘street’ form of punk"(Sabin, 1999). This is the paradox of punk; as a subculture it must always be evolving in order to stay out of the mainstream. "Punk Girls" written by Liz Ham is a photo-book featuring 100 portraits of Australian women in the punk subculture, and it was published in 2017 by Manuscript Daily. Discrimination against punk subculture is explored with her photographs in the book; these "girls" who are not mainstream, but "beautiful and talented". Glam rockers such as T.Rex, the New York Dolls and David Bowie had big influences on protopunk, early punk rock, and the crossover subgenre later called glam punk. Particularly, David Bowie himself supported the neophyte punk bands of this time, and he later said after punk somewhat fell out of fashion, "I think it's a crying shame that the category has dissipated its importance." Punk and hip hop emerged around the same time in the late 1970s New York City, and there has been some interaction between the two subcultures. Some of the first hip hop MCs called themselves punk rockers, and some punk fashions have found their way into hip hop dress and vice versa. Malcolm McLaren played roles in introducing both punk and hip hop to the United Kingdom. Hip hop later influenced some punk and hardcore bands, such as Hed PE, Blaggers I.T.A., Biohazard, E.Town Concrete, The Transplants and Refused. The skinhead subculture of the United Kingdom in the late 1960s – which had almost disappeared in the early 1970s — was revived in the late 1970s, partly because of the influence of punk rock, especially the Oi! punk subgenre. Conversely, ska and reggae, popular among traditionalist skinheads, has influenced several punk musicians. Punks and skinheads have had both antagonistic and friendly relationships, depending on the social circumstances, time period and geographic location. The punk and heavy metal subcultures have shared some similarities since punk's inception. The early 1970s protopunk scene had an influence on the development of heavy metal. Alice Cooper was a forerunner of the fashion and music of both the punk and metal subcultures. Motörhead, since their first album release in 1977, have enjoyed continued popularity in the punk scene, and their now-deceased frontman Lemmy was a fan of punk rock. Genres such as metalcore, grindcore and crossover thrash were greatly influenced by punk rock and heavy metal. The new wave of British heavy metal influenced the UK 82-style of bands like Discharge, and hardcore was a primary influence on thrash metal bands such as Metallica and Slayer. The early 1990s grunge subculture was a fusion of punk anti-fashion ideals and metal-influenced guitar sounds. However, hardcore punk and grunge developed in part as reactions against the heavy metal music that was popular during the 1980s. In punk's heyday, punks faced harassment and attacks from the general public and from members of other subcultures. In the 1980s in the UK, punks were sometimes involved in brawls with Teddy Boys, greasers, bikers, mods and members of other subcultures. There was also considerable enmity between Positive punks (known today as goths) and the glamorously dressed New Romantics. In the late 1970s, punks were known to have had confrontations with hippies due to the contrasting ideologies and backlash of the hippie culture. Nevertheless, Penny Rimbaud of the English anarcho-punk band Crass said that Crass was formed in memory of his friend, the hippie Wally Hope. Rimbaud also said that Crass were heavily involved with the hippie movement throughout the 1960s and 1970s, with Dial House being established in 1967. Many punks were often critical of Crass for their involvement in the hippie movement. Like Crass, Jello Biafra was influenced by the hippie movement and cited the yippies as a key influence on his political activism and thinking, though he did write songs critical of hippies. The industrial and rivethead subcultures have had several ties to punk, in terms of music, fashion and attitude. Power pop music (as defined by groups such as Badfinger, Cheap Trick, The Knack, and The Romantics) emerged in mostly the same time-frame and geographical area as punk rock, and they shared a great deal musically in terms of playing short songs loud and fast while trying to emphasize catchy feelings. More melodic and pop-influenced punk music have also often been wrapped alongside power pop bands under the general "new wave music" label. A good example of a genre-straddling 'power pop punk' band is the popular Northern Ireland group Protex. However, stylistically and lyrically, power pop bands have tended to have a very "not-punk" top 40 commercial pop music influence and a flashier, heavily teen-pop sense of fashion, especially modern power pop groups such as Stereo Skyline and All Time Low. The punk subculture has spread to many countries around the world. The fluidity of musical expression in particular makes it an ideal medium for this cross-cultural interpretation. In Mexico, punk culture is primarily a phenomenon among middle and lower class youth, many of whom were first exposed to punk music through travel to England. Because of low fees at public universities in Mexico, a significant minority of Mexican punks are university students. It is estimated approximately 5,000 young people are active punks in Mexico City, hosting two or three underground shows a week. These young people often form chavos banda—youth gangs—that organise subculture activity by creating formal meeting spaces and rituals and practices. Oral nicknames are a distinguishing feature of Mexican punk, where the tradition of oral culture has influenced the development of nicknames for almost all Mexican punks. Patches are widely used as an inexpensive way to alter clothing and express identity. Though English language bands like the Dead Kennedys are well known in Mexico, punks there prefer Spanish-language music or covers translated into Spanish. The slam dance style common in the California punk scene of the early 1980s is in the 2010s very popular. Performance practices reflect socio-economic circumstances of Mexican punks. Called "tocadas", shows are generally held in public spaces like basketball courts or community centers instead of places of business like bars and restaurants, as is more common in the United States and Europe. They usually take place in the afternoon and end early to accommodate the three or four hours it takes many punks to return home by public transit. Mexican punk groups rarely release vinyl or CD recordings, preferring cassettes. Though Mexican punk itself does not have an explicit political agenda, Mexican punks have been active in the Zapatista, Anarcho-punk, and Anti-globalisation movements. The anti-establishment punk sub-culture has appealed to Russians for decades, with punk media, fashion, and albums becoming enormously popular underground items in the late 1970s onwards. Musically, the sound of punk rock became a clear protest against the disco influenced, heavily electronic official Soviet regime songs. The government suppressed punks and ruthlessly censored their music. The founder of Russian punk is considered to be Yegor Letov with his band Grazhdanskaya Oborona, which started performing in the early 1980s. Letov also invented a word chanted by punk fans during concerts, Hoi (a mixture of the Oi! movement and the Russian profanity word Hui (literally "penis")). In the late 1980s Sektor Gaza formed, reaching cult status. They created a genre called "Kolkhoz punk", which mixed elements from village life into punk music. Another cult band which started a few years later was Korol i Shut, introducing horror punk, using costumes and lyrics in the form of tales and fables. Korol i Shut became one of the best selling and most highly regarded bands in the history of russian Rock. More recent expressions of punk sub-culture in Russia have included the formation of the feminist protest punk rock group Pussy Riot, which formed in 2011. With lyrical themes including feminism, LGBT rights and opposition to Russian President Vladimir Putin, along with the playing of unauthorised guerilla performances, Pussy Riot has gained notoriety, which has led to the incarceration of some of the group's members. The trial and sentencing of Pussy Riot's members has led to significant criticism, particularly in the West, including by Amnesty International. Punk arrived slowly in South Africa during the 1970s when waves of British tradesman welcomed by the then-apartheid government brought cultural influences like the popular British music magazine "NME", sold in South Africa six weeks after publication. South African punk developed separately in Johannesburg, Durban, and Cape Town and relied on live performances in townships and streets as the multi-racial composition of bands and fan bases challenged the legal and social conventions of the apartheid regime. Political participation is foundational to punk subculture in South Africa. During the apartheid regime, punk was second only to Rock music in its importance to multi-racial interactions in South Africa. Because of this, any involvement in the punk scene was in itself a political statement. Police harassment was common and the government often censored explicitly political lyrics. Johannesburg based band National Wake was routinely censored and even banned for songs like "International News," which challenged the South African government's refusal to acknowledge the racial and political conflict in the country. National Wake guitarist Ivan Kadey attributes the punk scene's ability to persevere despite the legal challenges of multi-racial mixing to punk subculture's DIY ethic and anti-establishment attitude. In post-apartheid South Africa, punk has attracted a greater number of white middle-class males than the more diverse makeup of the subculture during the apartheid era. Thabo Mbeki's African Renaissance movement has complicated the position of white South Africans in contemporary society. Punk provides young white men the opportunity to explore and express their minority identity. Cape Town band Hog Hoggidy Hog sings of the strange status of white Africans: Post-apartheid punk subculture continues to be active in South African politics, organising a 2000 festival called Punks Against Racism at Thrashers Statepark in Pretoria. Rather than the sense of despondency and fatalism that characterised 1970s British punk subculture, the politically engaged South African scene is more positive about the future of South Africa. In Peru punk traces its roots to the band Los Saicos, a Lima group that played the unique blend of garage and break dance music that would later be labeled punk as early as the 1960s. The early activity of Los Saicos has led many to claim that punk originated in Lima instead of the UK, as it typically assumed. Though their claim to be the first punk band in the world can be disputed, Los Saicos were undoubtedly the first in Latin America and released their first single in 1965. The group played to full houses and made frequent television appearances throughout the 1960s. Throughout the 1970s, the band was completely forgotten. Years later, a plaque that declares "here the global punk-rock movement was born" was placed at the corner of Miguel Iglesias and Julio C. Tello Streets in Lima. By the 1980s the punk scene in Peru was highly active. Peruvian punks call themselves "subtes" and appropriate the subversive implications of the English term "underground" through the Spanish term "subterraneo" (literally, subterranean). In the 1980s and 1990s subtes made almost exclusive use of cassette recording as a means of circulating music without participating in formal intellectual property and musical production industries. The current scene relies on digital distribution and assumes similar anti-establishment practices. Like many punk subcultures, subtes explicitly oppose the Peruvian state and advocate instead an anarchic resistance that challenges the political and mainstream cultural establishment. The origins of punk rock in Brazil go back to the late 1970s, as in most other countries mainly under the influence of the Sex Pistols, The Clash and The Ramones. However, particularly in São Paulo, more obscure names like Dutch band Speed Twins, as well as earlier protopunk artists such as MC5, The Stooges and The New York Dolls also had a big initial impact. Brazilian punk emerged in part from the ideals of the musician Douglas Viscaino, who, imbued with the pioneering ideas and unity of young people that fought against the Brazilian military regime, formed a band of protest called: Restos de Nada (Remnants of Nothing). Their musicians already had their punk ideals before 1978. Then came AI-5 and N.A.I. (later known as Condutores de Cadáver, "corpse riders") in São Paulo, as well as Carne Podre ("rotten flesh") in Curitiba (the capital of Paraná State), and Aborto Elétrico ("electric miscarriage") in Brasília (the national capital). Before punk proper bands emerged, two relatively famous glam and hard rock bands, Joelho de Porco (literally "pig knee") and Made in Brazil, used elements of the punk aesthetic around 1977 or 1978, and were called punk bands by the media without really playing punk rock music or defining themselves as such. Both bands, however, were important to the pre-punk context of the 1970s that offered few alternatives to the Música popular brasileira (MPB) and progressive rock artists that dominated the Brazilian music scene at the time. Joelho de Porco's lyrics dealing with São Paulo's urban reality was also influential. In the late 2000s, punk rock faced harsh criticism in Indonesia's province of Aceh. Punk rock is seen as a threat to Islamic values and, according to authorities, conflicts with Shariah law. The emergence of punk rock in Canada followed roughly the timelines of both the UK and the US, and was at first confined to Canada's largest cities. Since the mid-1980s, Canada's punk scene has spread over the entire country, including small rural towns and villages. In 1978, Vancouver had a fledgling punk scene, with such bands as D.O.A., Pointed Sticks, and The Subhumans. Edmonton's SNFU formed in 1981. They relocated to Vancouver in 1991 where, as of 2017, they were still active. Gerry "Useless" Hannah of The Subhumans received a ten-year prison sentence (of which he served five years) for his involvement in the "Direct Action" urban guerrilla cell, also known as the "Vancouver five" and the "Squamish five", which executed a series of attacks on civil infrastructure in BC and Ontario. A punk subculture originated in Cuba in the 1980s, referred to as Los Frikis. As Cuban radio stations rarely played rock music, Frikis often listened to music by picking up radio frequencies from stations in nearby Florida. While many Frikas in the early-1990s entered AIDS clinics by knowingly injecting HIV-positive blood into them, others began congregating at "El patio de María", a community centre in Havana that was one of the few venues in the city that allowed rock bands to play. Some Frikis also participate in squatting as an act of political defiance. In its beginning, the subculture was seen as a threat to the collectivism of Cuban society, leading to make Frikis becoming victims of discrimination and police brutality. According to the New Times Broward-Palm Beach some Frikis were "rejected by family and often jailed or fined by the government", the 1980s Friki woman Yoandra Cardoso however has that argued that much of the response was verbal harassment from law enforcement. Dionisio Arce, lead vocalist of Cuban heavy metal band Zeus spent six years in prison due to his part in the Frikis. Some schools would forcible shave the heads of young Frikis as a form of punishment.
https://en.wikipedia.org/wiki?curid=24589
Polyamory Polyamory (from Greek ', "many, several", and Latin ', "love") is the practice of, or desire for, intimate relationships with more than one partner, with the informed consent of all partners involved. It has been described as "consensual, ethical, and responsible non-monogamy". People who identify as polyamorous believe in an open relationship with a conscious management of jealousy; they reject the view that sexual and relational exclusivity are necessary for deep, committed, long-term loving relationships. "Polyamory" has come to be an umbrella term for various forms of non-monogamous, multi-partner relationships, or non-exclusive sexual or romantic relationships. Its usage reflects the choices and philosophies of the individuals involved, but with recurring themes or values, such as love, intimacy, honesty, integrity, equality, communication, and commitment. The word "polyamorous" first appeared in an article by Morning Glory Zell-Ravenheart, "A Bouquet of Lovers", published in May 1990 in "Green Egg Magazine", as "poly-amorous". In May 1992, Jennifer L. Wesp created the Usenet newsgroup "alt.polyamory", and the Oxford English Dictionary cites the proposal to create that group as the first verified appearance of the word. In 1999 Zell-Ravenheart was asked by the editor of the OED to provide a definition of the term, and had provided it for the UK version as "the practice, state or ability of having more than one sexual loving relationship at the same time, with the full knowledge and consent of all partners involved." The words "polyamory", "polyamorous", and "polyamorist" were added to the OED in 2006. Although some reference works define "polyamory" as a relational form (whether interpersonal or romantic or sexual) that involves multiple people with the consent of all the people involved, the North American version of the OED declares it a philosophy of life. Consensual non-monogamy, which polyamory falls under, can take many different forms, depending on the needs and preferences of the individual(s) involved in any specific relationship or set of relationships. As of 2019 fully one fifth of the United States population has, at some point in their lives, engaged in some sort of consensual non-monogamy. Separate from polyamory as a philosophical basis for relationships are the practical ways in which people who live polyamorously arrange their lives and handle certain issues, as compared to those of a more conventional monogamous arrangement. Polyamorous communities have been booming in countries within Europe, North America, and Oceania. In other parts of the world, such as, South America, Asia, and Africa there is a small growth in polyamory practices. There is not any particular gendered partner choice to polyamorous relationships. People of different sexual preferences are a part of the community. A large percentage of polyamorists define "fidelity" not as sexual exclusivity, but as faithfulness to the promises and agreements made about a relationship. As a relational practice, polyamory sustains a vast variety of open relationship or multi-partner constellations, which can differ in definition and grades of intensity, closeness and commitment. For some, polyamory functions as an umbrella term for the multiple approaches of 'responsible non-monogamy'. A secret sexual relationship that violates those accords would be seen as a breach of fidelity. Polyamorists generally base definitions of "commitment" on considerations other than sexual exclusivity, e.g. "trust and honesty" or "growing old together". Because there is no "standard model" for polyamorous relationships, and reliance upon common expectations may not be realistic, polyamorists often advocate explicitly negotiating with all involved to establish the terms of their relationships, and often emphasize that this should be an ongoing process of honest communication and respect. Polyamorists will usually take a pragmatic approach to their relationships; many accept that sometimes they and their partners will make mistakes and fail to live up to these ideals, and that communication is important for repairing any breaches. Most polyamorists emphasize respect, trust, and honesty for all partners. Ideally, a partner's partners are accepted as part of that person's life rather than merely tolerated, and usually a relationship that requires deception or a "don't ask don't tell" policy is seen as a less than ideal model. Many polyamorists view excessive restrictions on other deep relationships as less than desirable, as such restrictions can be used to replace trust with a framework of ownership and control. It is usually preferred or encouraged that a polyamorist strive to view their partners' other significant others, often referred to as metamours or OSOs, in terms of the gain to their partners' lives rather than a threat to their own (see compersion). Therefore, jealousy and possessiveness are generally viewed not so much as something to avoid or structure the relationships around, but as responses that should be explored, understood, and resolved within each individual, with compersion as a goal. Many things differentiate polyamory from other types of non-monogamous relationships. It is common for swinging and open couples to maintain emotional monogamy while engaging in extra-dyadic sexual relations. Similarly, the friend/partner boundary in monogamous relationships and other forms of non-monogamy is typically fairly clear. Unlike other forms of non-monogamy, though, "polyamory is notable for privileging emotional intimacy with others." Michael Shernoff cites two studies in his report on same-sex couples considering non-monogamy. Morin (1999) stated that a couple has a very good chance of adjusting to non-exclusivity if at least some of the following conditions exist: Green and Mitchell (2002) stated that direct discussion of the following issues can provide the basis for honest and important conversations: According to Shernoff, if the matter is discussed with a third party, such as a therapist, the task of the therapist is to "engage couples in conversations that let them decide for themselves whether sexual exclusivity or non-exclusivity is functional or dysfunctional for the relationship." Benefits of a polyamorous relationship might include: In 1998, a Tennessee court granted guardianship of a child to her grandmother and step-grandfather after the child's mother April Divilbiss and partners outed themselves as polyamorous on MTV. After contesting the decision for two years, Divilbiss eventually agreed to relinquish her daughter, acknowledging that she was unable to adequately care for her child and that this, rather than her polyamory, had been the grandparents' real motivation in seeking custody. Compersion is an empathetic state of happiness and joy experienced when another individual experiences happiness and joy. In the context of polyamorous relationships, it describes positive feelings experienced by an individual when their intimate partner is enjoying another relationship. The concept of compersion was originally coined by the Kerista Commune in San Francisco. Bertrand Russell published "Marriage and Morals" in 1929, questioning contemporary notions of morality regarding monogamy in sex and marriage. This viewpoint was criticized by John Dewey. A 2003 article in "The Guardian" proposed six primary reasons for choosing polyamory: Research into the prevalence of polyamory has been limited. A comprehensive government study of sexual attitudes, behaviors and relationships in Finland in 1992 (age 18–75, around 50% female and male) found that around 200 out of 2250 (8.9%) respondents "agreed or strongly agreed" with the statement ""I could maintain several sexual relationships at the same time"" and 8.2% indicated a relationship type "that best suits" at the present stage of life would involve multiple partners. By contrast, when asked about other relationships at the same time as a steady relationship, around 17% stated they had had other partners while in a steady relationship (50% no, 17% yes, 33% refused to answer). The article "What Psychology Professionals Should Know About Polyamory", based on a paper presented at the 8th Annual Diversity Conference in March 1999 in Albany, New York, states the following: The Oneida Community in the 1800s in New York (a Christian religious commune) believed strongly in a system of free love known as complex marriage, where any member was free to have sex with any other who consented. Possessiveness and exclusive relationships were frowned upon. Some people consider themselves Christian and polyamorous, but mainstream Christianity does not accept polyamory. In 2017, the Council on Biblical Manhood and Womanhood, an evangelical Christian organization, released a manifesto on human sexuality known as the "Nashville Statement". The statement was signed by 150 evangelical leaders, and includes 14 points of belief. Among other things, it states, "We deny that God has designed marriage to be a homosexual, polygamous, or polyamorous relationship." Some Jews are polyamorous, but mainstream Judaism does not accept polyamory; however, Sharon Kleinbaum, the senior rabbi at Congregation Beit Simchat Torah in New York, said in 2013 that polyamory is a choice that does not preclude a Jewishly observant and socially conscious life. In his book "A Guide to Jewish Practice: Volume 1 – Everyday Living," Rabbi David Teutsch wrote, “It is not obvious that monogamy is automatically a morally higher form of relationship than polygamy,” and that if practiced with honesty, flexibility, egalitarian rules, and trust, practitioners may "live enriched lives as a result". Some polyamorous Jews point to biblical patriarchs having multiple wives and concubines as evidence that polyamorous relationships can be sacred in Judaism. An email list was founded dedicated to polyamorous Jews, called "AhavaRaba", which roughly translates to "big love" in Hebrew, and whose name echoes God's "great" or "abounding" love mentioned in the Ahava rabbah prayer. LaVeyan Satanism is critical of Abrahamic sexual mores, considering them narrow, restrictive and hypocritical. Satanists are pluralists, accepting polyamorists, bisexuals, lesbians, gays, BDSM, transgender people, and asexuals. Sex is viewed as an indulgence, but one that should only be freely entered into with consent. The Eleven Satanic Rules of the Earth only give two instructions regarding sex: "Do not make sexual advances unless you are given the mating signal" and "Do not harm little children," though the latter is much broader and encompasses physical and other abuse. This has always been consistent part of CoS policy since its inception in 1966, as Peter H. Gillmore wrote in an essay supporting same-sex marriage: Unitarian Universalists for Polyamory Awareness, founded in 2001, has engaged in ongoing education and advocacy for greater understanding and acceptance of polyamory within the Unitarian Universalist Association. At the 2014 General Assembly, two UUPA members moved to include the category of "family and relationship structures" in the UUA's nondiscrimination rule, along with other amendments; the package of proposed amendments was ratified by the GA delegates. Bigamy is the act of marrying one person while already being married to another, and is legally prohibited in most countries in which monogamy is the cultural norm. Some bigamy statutes are broad enough to potentially encompass polyamorous relationships involving cohabitation, even if none of the participants claim marriage to more than one partner. In most countries, it is legal for three or more people to form and share a sexual relationship (subject sometimes to laws against homosexuality or adultery if two of the three are married). With only minor exceptions no developed countries permit "marriage" among more than two people, nor do the majority of countries give legal protection (e.g., of rights relating to children) to non-married partners. Individuals involved in polyamorous relationships are generally considered by the law to be no different from people who live together, or "date", under other circumstances. In 2017 John Alejandro Rodriguez, Victor Hugo Prada, and Manuel Jose Bermudez became Colombia's first polyamorous family to have a legally recognized relationship, though not a marriage: "By Colombian law a marriage is between two people, so we had to come up with a new word: a special patrimonial union." In many jurisdictions where same-sex couples can access civil unions or registered partnerships, these are often intended as parallel institutions to that of heterosexual monogamous marriage. Accordingly, they include parallel entitlements, obligations, and limitations. Among the latter, as in the case of the New Zealand Civil Union Act 2005, there are parallel prohibitions on civil unions with more than one partner, which is considered bigamy, or dual marriage/civil union hybrids with more than one person. Both are banned under Sections 205–206 of the Crimes Act 1961. In jurisdictions where same-sex marriage proper exists, bigamous same-sex marriages fall under the same set of legal prohibitions as bigamous heterosexual marriages. As yet, there is no case law applicable to these issues. Having multiple non-marital partners, even if married to one, is legal in most U.S. jurisdictions; at most it constitutes grounds for divorce if the spouse is non-consenting, or feels that the interest in a further partner has destabilized the marriage. In jurisdictions where civil unions or registered partnerships are recognized, the same principle applies to divorce in those contexts. There are exceptions to this: in North Carolina, a spouse can sue a third party for causing "loss of affection" in or "criminal conversation" (adultery) with their spouse, and more than twenty states in the US have laws against adultery although they are infrequently enforced. Some states were prompted to review their laws criminalizing consensual sexual activity in the wake of the Supreme Court's ruling in "Lawrence v. Texas". If marriage is intended, some countries provide for both a religious marriage and a civil ceremony (sometimes combined). These recognize and formalize the relationship. Few countries outside of Africa or Asia give legal recognition to marriages with three or more partners. While a recent case in the Netherlands was commonly read as demonstrating that Dutch law permitted multiple-partner civil unions, the relationship in question was a "", or "cohabitation contract", and not a registered partnership or marriage. The Netherlands' law concerning registered partnerships provides that Authors have explored legalistic ramifications of polyamorous marriage. In 2002, a paper titled "Working with polyamorous clients in the clinical setting" (Davidson) addressed the following areas of inquiry: Its conclusions were that "Sweeping changes are occurring in the sexual and relational landscape" (including "dissatisfaction with limitations of serial monogamy, i.e. exchanging one partner for another in the hope of a better outcome"); that clinicians need to start by "recognizing the array of possibilities that 'polyamory' encompasses" and "examine our culturally-based assumption that 'only monogamy is acceptable'" and how this bias impacts on the practice of therapy; the need for self-education about polyamory, basic understandings about the "rewards of the poly lifestyle" and the common social and relationship challenges faced by those involved, and the "shadow side" of polyamory, the potential existing for coercion, strong emotions in opposition, and jealousy. The paper also states that the configurations a therapist would be "most likely to see in practice" are individuals involved in primary-plus arrangements, monogamous couples wishing to explore non-monogamy for the first time, and "poly singles". A manual for psychotherapists who deal with polyamorous clients was published in September 2009 by the National Coalition for Sexual Freedom, called "What Psychotherapists Should Know About Polyamory". "" was an American reality television series on the American pay television network Showtime. The series followed polyamorous families as they navigated the challenges presented by polyamory. The series ran in 2012 and 2013. During a PinkNews question-and-answer session in May 2015, Redfern Jon Barrett questioned Natalie Bennett, leader of the Green Party of England and Wales, about her party's stance towards polyamorous marriage rights. Bennett responded by saying that her party is "open" to discussion on the idea of civil partnership or marriages between three people. Bennett's announcement aroused media controversy on the topic and led to major international news outlets covering her answer. A follow-up article written by Barrett was published by PinkNews on May 4, 2015, further exploring the topic. "You Me Her" is an American-Canadian comedy-drama television series that revolves around a suburban married couple who is entering a three-way romantic relationship. On May 29, 2017, in the last season of Steven Universe, Fluorite, a member of the Off Colors, a fusion of six different gems into one being, with fusion as the physical manifestation of a relationship, was introduced. This character reappeared in various episodes in the show's fifth season, along with one in Steven Universe Future ("Little Graduation") and in , with the latter two as non-speaking appearances. The series creator, Rebecca Sugar, confirmed that Flourite is a representation of a polyamorous relationship at the show's Comic Con panel in San Diego. Sugar said at the panel, and at another conference, that she was inspired after talking with children at an LBGTQ+ center in Long Beach, California, who wanted a polyamorous character in the show. Polyamory was the subject of the 2018 Louis Theroux documentary "Love Without Limits", where Theroux travels to Portland, Oregon to meet a number of people engaged in polyamorous relationships. Also in 2018, "195 Lewis," a web series about a black lesbian couple dealing with their relationship being newly polyamorous, received the Breakthrough Series – Short Form award from the Gotham Awards. The series premiered in 2017 and ran for five episodes. In 2019, "Simpsons" showrunner Al Jean said he saw Lisa Simpson as being "possibly polyamorous" in the future. Polyamory, along with other forms of consensual non-monogamy, is not without drawbacks. Morin (1999) and Fleckenstein (2014) noted that certain conditions are favorable to good experiences with polyamory, but that these differ from the general population. Heavy public promotion of polyamory can have the unintended effect of attracting people to it for whom it is not well-suited. Unequal power dynamics, such as financial dependence, can also inappropriately influence a person to agree to a polyamorous relationship against their true desires. Even in more equal power dynamic relationships, the reluctant partner may feel coerced into a proposed non-monogamous arrangement due to the implication that if they refuse, the proposer will pursue other partners anyway, will break off the relationship, or that the one refusing will be accused of intolerance. To date, scientific study of polyamory has run into bias and methodological issues. Polyamorous relationships present practical pitfalls. In 2001 Unitarian Universalists for Polyamory Awareness‘s first official membership meeting was held. In 2002 the rights of polyamorous people were added to the mission of the [American] National Coalition for Sexual Freedom. In 2010 the Canadian Polyamory Advocacy Association was founded. Polyamory-related media Polyamory-related media coverage Research and articles
https://en.wikipedia.org/wiki?curid=24591
Pope Theodore II Pope Theodore II (; 840 – December 897) was the bishop of Rome and ruler of the Papal States for twenty days in December 897. His short reign occurred during a period of partisan strife in the Catholic Church, which was entangled with a period of violence and disorder in central Italy. His main act as pope was to annul the recent Cadaver Synod, therefore reinstating the acts and ordinations of Pope Formosus, which had themselves been annulled by Pope Stephen VI. He also had the body of Formosus recovered from the river Tiber and reburied with honour. He died in office in late December 897. Little is known of Theodore's background; he is recorded as being born a Roman, and the son of Photios. His brother Theodosius (or Theosius) was also a bishop. Theodore was ordained as a priest by Pope Stephen V. In January 897, Pope Stephen VI held what is known as the Cadaver Synod. Because his predecessor, Formosus, sided with Arnulf of Carinthia rather than Stephen's ally, Lambert of Spoleto, in their struggle for the imperial dignity, Stephen had the corpse of Formosus exhumed and tried for "perjury, violating the canons prohibiting the translation of bishops, and coveting the papacy". The dead pope was found guilty, his body thrown in the Tiber, and all his acts and ordinations were annulled. Supporters of Formosus rebelled and deposed Stephen VI. His successor, Romanus, is generally assumed to have been pro-Formosus, but he too was soon deposed. Theodore II was elected to succeed the deposed Romanus as pope. The exact dates of Theodore II's reign are unknown, but modern sources generally agree that he was pope for twenty days during December 897. Flodoard, a tenth-century French chronicler, only credited Theodore with a twelve-day reign, while in his history of the popes, Alexis-François Artaud de Montor listed Theodore's reign as being twenty days, from 12 February to 3 March 898. Like Romanus, Theodore was a supporter of Formosus. Some historians believe that Romanus had been deposed because he had not acted to restore Formosus' honour quickly enough, though others suggest that he was removed by supporters of Stephen VI. In either case, Theodore immediately threw himself into the task of undoing the Cadaver Synod. He called his own synod, which annulled the rulings set out by Stephen VI. In so doing, he restored the acts and ordinations of Formosus, including the restoration of a large number of clergy and bishops to their offices. Theodore also ordered Formosus' body to be recovered from the harbour of Portus, where it had been secretly buried, and restored to the original grave at Old St. Peter's Basilica. Like Romanus before him, Theodore bestowed a privilege upon the See of Grado, and had a coin minted, bearing the name of Lambert on the obverse, and "Scs. Petrus" and "Thedr." on the reverse. Flodoard cast Theodore in a positive light, describing him as "beloved of the clergy, a friend of peace, temperate, chaste, affable and a great lover of the poor." He died in office, though the cause of his death is unknown. Because of this, some writers, such as, Wendy Reardon) suggest the possibility of foul play. Horace Kinder Mann offers a different suggestion in his papal history, noting that it is possible that popes who were "infirm or even older than [...] their predecessors" might have been elected intentionally. Theodore was buried at St. Peter's Basilica, but his tomb was destroyed during the demolition of the old basilica in the seventeenth century. After Theodore's death, both John IX and Sergius III claimed to have been elected pope; the latter was excommunicated and driven from the city, though he did later become pope in 904. John IX held synods reaffirming that of Theodore II, and he further banned the trial of people after their death. In turn, Sergius III later annulled the synods of Theodore II and John IX, and reinstated the validity of the Cadaver Synod.
https://en.wikipedia.org/wiki?curid=24592
Proteolysis Proteolysis is the breakdown of proteins into smaller polypeptides or amino acids. Uncatalysed, the hydrolysis of peptide bonds is extremely slow, taking hundreds of years. Proteolysis is typically catalysed by cellular enzymes called proteases, but may also occur by intra-molecular digestion. Low pH or high temperatures can also cause proteolysis non-enzymatically. Proteolysis in organisms serves many purposes; for example, digestive enzymes break down proteins in food to provide amino acids for the organism, while proteolytic processing of a polypeptide chain after its synthesis may be necessary for the production of an active protein. It is also important in the regulation of some physiological and cellular processes, as well as preventing the accumulation of unwanted or abnormal proteins in cells. Consequently, dis-regulation of proteolysis can cause disease. Proteolysis is used by some venoms. Proteolysis is important as an analytical tool for studying proteins in the laboratory, as well as industrially, for example in food processing and stain removal. Limited proteolysis of a polypeptide during or after translation in protein synthesis often occurs for many proteins. This may involve removal of the N-terminal methionine, signal peptide, and/or the conversion of an inactive or non-functional protein to an active one. The precursor to the final functional form of protein is termed proprotein, and these proproteins may be first synthesized as preproprotein. For example, albumin is first synthesized as preproalbumin and contains an uncleaved signal peptide. This forms the proalbumin after the signal peptide is cleaved, and a further processing to remove the N-terminal 6-residue propeptide yields the mature form of the protein. The initiating methionine (and, in prokaryotes, fMet) may be removed during translation of the nascent protein. For "E. coli", fMet is efficiently removed if the second residue is small and uncharged, but not if the second residue is bulky and charged. In both prokaryotes and eukaryotes, the exposed N-terminal residue may determine the half-life of the protein according to the N-end rule. Proteins that are to be targeted to a particular organelle or for secretion have an N-terminal signal peptide that directs the protein to its final destination. This signal peptide is removed by proteolysis after their transport through a membrane. Some proteins and most eukaryotic polypeptide hormones are synthesized as a large precursor polypeptide known as a polyprotein that requires proteolytic cleavage into individual smaller polypeptide chains. The polyprotein pro-opiomelanocortin (POMC) contains many polypeptide hormones. The cleavage pattern of POMC, however, may vary between different tissues, yielding different sets of polypeptide hormones from the same polyprotein. Many viruses also produce their proteins initially as a single polypeptide chain that were translated from a polycistronic mRNA. This polypeptide is subsequently cleaved into individual polypeptide chains. Common names for the polyprotein include "gag" (group-specific antigen) in retroviruses and "ORF1ab" in Nidovirales. The latter name refers to the fact that a slippery sequence in the mRNA that codes for the polypeptide causes ribosomal frameshifting, leading to two different lengths of peptidic chains ("a" and "ab") at an approximately fixed ratio. Many proteins and hormones are synthesized in the form of their precursors - zymogens, proenzymes, and prehormones. These proteins are cleaved to form their final active structures. Insulin, for example, is synthesized as preproinsulin, which yields proinsulin after the signal peptide has been cleaved. The proinsulin is then cleaved at two positions to yield two polypeptide chains linked by two disulfide bonds. Removal of two C-terminal residues from the B-chain then yields the mature insulin. Protein folding occurs in the single-chain Proinsulin form which facilitates formation of the ultimately inter-peptide disulfide bonds, and the ultimately intra-peptide disulfide bond, found in the native structure of insulin. Proteases in particular are synthesized in the inactive form so that they may be safely stored in cells, and ready for release in sufficient quantity when required. This is to ensure that the protease is activated only in the correct location or context, as inappropriate activation of these proteases can be very destructive for an organism. Proteolysis of the zymogen yields an active protein; for example, when trypsinogen is cleaved to form trypsin, a slight rearrangement of the protein structure that completes the active site of the protease occurs, thereby activating the protein. Proteolysis can, therefore, be a method of regulating biological processes by turning inactive proteins into active ones. A good example is the blood clotting cascade whereby an initial event triggers a cascade of sequential proteolytic activation of many specific proteases, resulting in blood coagulation. The complement system of the immune response also involves a complex sequential proteolytic activation and interaction that result in an attack on invading pathogens. Protein degradation may take place intracellularly or extracellularly. In digestion of food, digestive enzymes may be released into the environment for extracellular digestion whereby proteolytic cleavage breaks proteins into smaller peptides and amino acids so that they may be absorbed and used. In animals the food may be processed extracellularly in specialized organs or guts, but in many bacteria the food may be internalized via phagocytosis. Microbial degradation of protein in the environment can be regulated by nutrient availability. For example, limitation for major elements in proteins (carbon, nitrogen, and sulfur) induces proteolytic activity in the fungus "Neurospora crassa" as well as in of soil organism communities. Proteins in cells are broken into amino acids. This intracellular degradation of protein serves multiple functions: It removes damaged and abnormal protein and prevents their accumulation. It also serves to regulate cellular processes by removing enzymes and regulatory proteins that are no longer needed. The amino acids may then be reused for protein synthesis. The intracellular degradation of protein may be achieved in two ways - proteolysis in lysosome, or a ubiquitin-dependent process that targets unwanted proteins to proteasome. The autophagy-lysosomal pathway is normally a non-selective process, but it may become selective upon starvation whereby proteins with peptide sequence KFERQ or similar are selectively broken down. The lysosome contains a large number of proteases such as cathepsins. The ubiquitin-mediated process is selective. Proteins marked for degradation are covalently linked to ubiquitin. Many molecules of ubiquitin may be linked in tandem to a protein destined for degradation. The polyubiquinated protein is targeted to an ATP-dependent protease complex, the proteasome. The ubiquitin is released and reused, while the targeted protein is degraded. Different proteins are degraded at different rates. Abnormal proteins are quickly degraded, whereas the rate of degradation of normal proteins may vary widely depending on their functions. Enzymes at important metabolic control points may be degraded much faster than those enzymes whose activity is largely constant under all physiological conditions. One of the most rapidly degraded proteins is ornithine decarboxylase, which has a half-life of 11 minutes. In contrast, other proteins like actin and myosin have a half-life of a month or more, while, in essence, haemoglobin lasts for the entire life-time of an erythrocyte. The N-end rule may partially determine the half-life of a protein, and proteins with segments rich in proline, glutamic acid, serine, and threonine (the so-called PEST proteins) have short half-life. Other factors suspected to affect degradation rate include the rate deamination of glutamine and asparagine and oxidation of cystein, histidine, and methionine, the absence of stabilizing ligands, the presence of attached carbohydrate or phosphate groups, the presence of free α-amino group, the negative charge of protein, and the flexibility and stability of the protein. Proteins with larger degrees of intrinsic disorder also tend to have short cellular half-life, with disordered segments having been proposed to facilitate efficient initiation of degradation by the proteasome. The rate of proteolysis may also depend on the physiological state of the organism, such as its hormonal state as well as nutritional status. In time of starvation, the rate of protein degradation increases. In human digestion, proteins in food are broken down into smaller peptide chains by digestive enzymes such as pepsin, trypsin, chymotrypsin, and elastase, and into amino acids by various enzymes such as carboxypeptidase, aminopeptidase, and dipeptidase. It is necessary to break down proteins into small peptides (tripeptides and dipeptides) and amino acids so they can be absorbed by the intestines, and the absorbed tripeptides and dipeptides are also further broken into amino acids intracellularly before they enter the bloodstream. Different enzymes have different specificity for their substrate; trypsin, for example, cleaves the peptide bond after a positively charged residue (arginine and lysine); chymotrypsin cleaves the bond after an aromatic residue (phenylalanine, tyrosine, and tryptophan); elastase cleaves the bond after a small non-polar residue such as alanine or glycine. In order to prevent inappropriate or premature activation of the digestive enzymes (they may, for example, trigger pancreatic self-digestion causing pancreatitis), these enzymes are secreted as inactive zymogen. The precursor of pepsin, pepsinogen, is secreted by the stomach, and is activated only in the acidic environment found in stomach. The pancreas secretes the precursors of a number of proteases such as trypsin and chymotrypsin. The zymogen of trypsin is trypsinogen, which is activated by a very specific protease, enterokinase, secreted by the mucosa of the duodenum. The trypsin, once activated, can also cleave other trypsinogens as well as the precursors of other proteases such as chymotrypsin and carboxypeptidase to activate them. In bacteria, a similar strategy of employing an inactive zymogen or prezymogen is used. Subtilisin, which is produced by "Bacillus subtilis", is produced as preprosubtilisin, and is released only if the signal peptide is cleaved and autocatalytic proteolytic activation has occurred. Proteolysis is also involved in the regulation of many cellular processes by activating or deactivating enzymes, transcription factors, and receptors, for example in the biosynthesis of cholesterol, or the mediation of thrombin signalling through protease-activated receptors. Some enzymes at important metabolic control points such as ornithine decarboxylase is regulated entirely by its rate of synthesis and its rate of degradation. Other rapidly degraded proteins include the protein products of proto-oncogenes, which play central roles in the regulation of cell growth. Cyclins are a group of proteins that activate kinases involved in cell division. The degradation of cyclins is the key step that governs the exit from mitosis and progress into the next cell cycle. Cyclins accumulate in the course the cell cycle, then abruptly disappear just before the anaphase of mitosis. The cyclins are removed via a ubiquitin-mediated proteolytic pathway. Caspases are an important group of proteases involved in apoptosis or programmed cell death. The precursors of caspase, procaspase, may be activated by proteolysis through its association with a protein complex that forms apoptosome, or by granzyme B, or via the death receptor pathways. Autoproteolysis takes place in some proteins, whereby the peptide bond is cleaved in a self-catalyzed intramolecular reaction. Unlike zymogens, these autoproteolytic proteins participate in a "single turnover" reaction and do not catalyze further reactions post-cleavage. Examples include cleavage of the Asp-Pro bond in a subset of von Willebrand factor type D (VWD) domains and "Neisseria meningitidis" FrpC self-processing domain, cleavage of the Asn-Pro bond in "Salmonella" FlhB protein, "Yersinia" YscU protein, as well as cleavage of the Gly-Ser bond in a subset of sea urchin sperm protein, enterokinase, and agrin (SEA) domains. In some cases, the autoproteolytic cleavage is promoted by conformational strain of the peptide bond. Abnormal proteolytic activity is associated with many diseases. In pancreatitis, leakage of proteases and their premature activation in the pancreas results in the self-digestion of the pancreas. People with diabetes mellitus may have increased lysosomal activity and the degradation of some proteins can increase significantly. Chronic inflammatory diseases such as rheumatoid arthritis may involve the release of lysosomal enzymes into extracellular space that break down surrounding tissues. Abnormal proteolysis and generation of peptides that aggregate in cells and their ineffective removal may result in many age-related neurological diseases such as Alzheimer's. Proteases may be regulated by antiproteases or protease inhibitors, and imbalance between proteases and antiproteases can result in diseases, for example, in the destruction of lung tissues in emphysema brought on by smoking tobacco. Smoking is thought to increase the neutrophils and macrophages in the lung which release excessive amount of proteolytic enzymes such as elastase, such that they can no longer be inhibited by serpins such as α1-antitrypsin, thereby resulting in the breaking down of connective tissues in the lung. Other proteases and their inhibitors may also be involved in this disease, for example matrix metalloproteinases (MMPs) and tissue inhibitors of metalloproteinases (TIMPs). Other diseases linked to aberrant proteolysis include muscular dystrophy, degenerative skin disorders, respiratory and gastrointestinal diseases, and malignancy. Protein backbones are very stable in water at neutral pH and room temperature, although the rate of hydrolysis of different peptide bonds can vary. The half life of a peptide bond under normal conditions can range from 7 years to 350 years, even higher for peptides protected by modified terminus or within the protein interior. The rate of proteolysis however can be significantly increased by extremes of pH and heat. Strong mineral acids can readily hydrolyse the peptide bonds in a protein (acid hydrolysis). The standard way to hydrolyze a protein or peptide into its constituent amino acids for analysis is to heat it to 105 °C for around 24 hours in 6M hydrochloric acid. However, some proteins are resistant to acid hydrolysis. One well-known example is ribonuclease A, which can be purified by treating crude extracts with hot sulfuric acid so that other proteins become degraded while ribonuclease A is left intact. Certain chemicals cause proteolysis only after specific residues, and these can be used to selectively break down a protein into smaller polypeptides for laboratory analysis. For example, cyanogen bromide cleaves the peptide bond after a methionine. Similar methods may be used to specifically cleave tryptophanyl, aspartyl, cysteinyl, and asparaginyl peptide bonds. Acids such as trifluoroacetic acid and formic acid may be used for cleavage. Like other biomolecules, proteins can also be broken down by high heat alone. At 250 °C, the peptide bond may be easily hydrolyzed, with its half-life dropping to about a minute. Protein may also be broken down without hydrolysis through pyrolysis; small heterocyclic compounds may start to form upon degradation. Above 500 °C, polycyclic aromatic hydrocarbons may also form, which is of interest in the study of generation of carcinogens in tobacco smoke and cooking at high heat. Proteolysis is also used in research and diagnostic applications: Proteases may be classified according to the catalytic group involved in its active site. Certain types of venom, such as those produced by venomous snakes, can also cause proteolysis. These venoms are, in fact, complex digestive fluids that begin their work outside of the body. Proteolytic venoms cause a wide range of toxic effects, including effects that are:
https://en.wikipedia.org/wiki?curid=24594
Autosomal dominant polycystic kidney disease Autosomal dominant polycystic kidney disease (ADPKD) is the most prevalent, potentially lethal, monogenic human disorder. It is associated with large interfamilial and intrafamilial variability, which can be explained to a large extent by its genetic heterogeneity and modifier genes. It is also the most common of the inherited cystic kidney diseases — a group of disorders with related but distinct pathogenesis, characterized by the development of renal cysts and various extrarenal manifestations, which in case of ADPKD include cysts in other organs, such as the liver, seminal vesicles, pancreas, and arachnoid membrane, as well as other abnormalities, such as intracranial aneurysms and dolichoectasias, aortic root dilatation and aneurysms, mitral valve prolapse, and abdominal wall hernias. Over 50% of patients with ADPKD eventually develop end stage kidney disease and require dialysis or kidney transplantation. ADPKD is estimated to affect at least one in every 1000 individuals worldwide, making this disease the most common inherited kidney disorder with a diagnosed prevalence of 1:2000 and incidence of 1:3000-1:8000 in a global scale. ADPKD is genetically heterogeneous with two genes identified: "PKD1" (chromosome region 16p13.3; around 85% cases) and" PKD2" (4q21; around 15% cases). Several genetic mechanisms probably contribute to the phenotypic expression of the disease. Although evidence exists for a two-hit mechanism (germline and somatic inactivation of two PKD alleles) explaining the focal development of renal and hepatic cysts, haploinsufficiency is more likely to account for the vascular manifestations of the disease. Additionally, new mouse models homozygous for "PKD1" hypomorphic alleles 22 and 23 and the demonstration of increased renal epithelial cell proliferation in PKD2 +/− mice suggest that mechanisms other than the two-hit hypothesis also contribute to the cystic phenotype. Large interfamilial and intrafamilial variability occurs in ADPKD. Most individuals with "PKD1" mutations have kidney failure by age 70 years, whereas more than 50% of individuals with" PKD2" mutations have adequate renal function at that age (mean age of onset of end-stage renal disease: 54·3 years with "PKD1"; 74·0 years with "PKD2"). The significant intrafamilial variability observed in the severity of renal and extrarenal manifestations points to genetic and environmental modifying factors that may influence the outcome of ADPKD, and results of an analysis of the variability in renal function between monozygotic twins and siblings support the role of genetic modifiers in this disease. It is estimated that 43–78% of the variance in age to ESRD could be due to heritable modifying factors, with parents as likely as children to show more severe disease in studies of parent-child pairs. In many patients with ADPKD, kidney dysfunction is not clinically apparent until 40 or 50 years of life. However, an increasing body of evidence suggests the formation of renal cysts starts "in utero". Cysts initially form as small dilations in renal tubules, which then expand to form fluid-filled cavities of different sizes. Factors suggested to lead to cystogenesis include a germline mutation in one of the polycystin gene alleles, a somatic second hit that leads to the loss of the normal allele, and a third hit, which can be anything that triggers cell proliferation, leading to the dilation of the tubules. In the progression of the disease, continued dilation of the tubules through increased cell proliferation, fluid secretion, and separation from the parental tubule lead to the formation of cysts. ADPKD, together with many other diseases that present with renal cysts, can be classified into a family of diseases known as ciliopathies. Epithelial cells of the renal tubules, including all the segments of the nephron and the collecting ducts (with the exception of intercalated cells) show the presence of a single primary apical cilium. Polycystin-1, the protein encoded by the PKD1 gene, is present on these cilia and is thought to sense the flow with its large extracellular domains, activating the calcium channels associated with polycystin-2, the product of gene PKD2, as a result of the genetic setting of ADPKD as explained in the genetics sub-section above. Epithelial cell proliferation and fluid secretion that lead to cystogenesis are two hallmark features in ADPKD. During the early stages of cystogenesis, cysts are attached to their parental renal tubules and a derivative of the glomerular filtrate enters the cysts. Once these cysts expand to approximately 2 mm in diameter, the cyst closes off from its parental tubule and after that fluid can only enter the cysts through transepithelial secretion, which in turn is suggested to increase due to secondary effects from an increased intracellular concentrations of cyclic AMP (cAMP). Clinically, the insidious increase in the number and size of renal cysts translates as a progressive increment in kidney volume. Studies led by Mayo Clinic professionals established that the total kidney volume (TKV) in a large cohort of ADPKD patients was 1060 ± 642ml with a mean increase of 204ml over three years, or 5.27% per year in the natural course of the disease, among other important, novel findings that were extensively studied for the first time. Usually, the diagnosis of ADPKD is initially performed by renal imaging using ultrasound, CT scan, or MRI. However, molecular diagnostics can be necessary in the following situations: 1- when a definite diagnosis is required in young individuals, such as a potential living related donor in an affected family with equivocal imaging data; 2- in patients with a negative family history of ADPKD, because of potential phenotypic overlap with several other kidney cystic diseases; 3- in families affected by early-onset polycystic kidney disease, since in this cases hypomorphic alleles and/or oligogenic inheritance can be involved; and 4- in patients requesting genetic counseling, especially in couples wishing a pre-implantation genetic diagnosis. The findings of large echogenic kidneys without distinct macroscopic cysts in an infant/child at 50% risk for ADPKD are diagnostic. In the absence of a family history of ADPKD, the presence of bilateral renal enlargement and cysts, with or without the presence of hepatic cysts, and the absence of other manifestations suggestive of a different renal cystic disease provide presumptive, but not definite, evidence for the diagnosis. In some cases, intracranial aneurysms can be an associated sign of ADPKD, and screening can be recommended for patients with a family history of intracranial aneurysm. Molecular genetic testing by linkage analysis or direct mutation screening is clinically available; however, genetic heterogeneity is a significant complication to molecular genetic testing. Sometimes, a relatively large number of affected family members need to be tested in order to establish which one of the two possible genes is responsible within each family. The large size and complexity of "PKD1 "and "PKD2" genes, as well as marked allelic heterogeneity, present obstacles to molecular testing by direct DNA analysis. The sensitivity of testing is nearly 100% for all patients with ADPKD who are age 30 years or older and for younger patients with "PKD1" mutations; these criteria are only 67% sensitive for patients with "PKD2" mutations]] who are younger than age 30 years. Currently, the only clinical/pharmacological treatment available for ADPKD consists in reducing the speed in gain of total kidney volume (TKV) with aquaretics (i.e. tolvaptan), which can alleviate pain while giving the patients a better quality of life for over a mean of 3 years. After this period, patients can restart gaining TKV at pretreatment rates and may eventually have to go through dialysis and kidney transplant. Palliative treatment modalities involve symptomatic medications (nonopioid and opioid analgesics) for abdominal/retroperitoneal pain. Before the advent of aquaretic medication, the only option for analgesic-resistant pain was simple or complex surgical procedures (i.e. renal cyst aspiration, cyst decortication, renal denervation and nephrectomy), which can result in complications inherent to surgery. In 2014, Japan was the first country in the world to approve a pharmacological treatment for ADPKD followed by Canada and Europe, which approved the drug tolvaptan for ADPKD patients in the beginning of 2015. The USA FDA approved the use of tolvaptan in the treatment of ADPKD in 2018. Tolvaptan, an aquaretic drug, is a vasopressin receptor 2 (V2) antagonist. Pre-clinical studies had suggested that the molecule cAMP could be involved in the enlargement of ADPKD cysts, and studies on rodents confirmed the role of vasopressin in increasing the levels of cAMP in the kidney, which laid the basis for the conduction of clinical studies. Because data from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease (CRISP) led by Mayo Clinic showed that total kidney volume (TKV) predicted the risk of developing chronic kidney disease in patients with ADPKD, the TEMPO 3:4 trial, which enrolled patients from 129 sites worldwide from 2007 to 2009, evaluated TKV as a primary end-point to test the efficacy of tolvaptan in ADPKD patients. That study showed a significant decrease in the ratio of TKV increase and deterring of renal function decline in ADPKD patients after treatment with tolvaptan; however, because laboratory test results regarding liver function appeared elevated in a percentage of patients enrolled in that study, the approval of the drug was either delayed by regulatory agencies or, as in case of the US, altogether denied. Chronic pain in patients with ADPKD is often refractory to conservative, noninvasive treatments, but nonopioid analgesics and conservative interventions can be first used before opioid analgesics are considered; if pain continues, then surgical interventions can target renal or hepatic cysts to directly address the cause of pain, with surgical options including renal cyst decortication, renal denervation, and nephrectomy. Aspiration with ethanol sclerotherapy can be performed for the treatment of symptomatic simple renal cysts, but can be impractical in advanced patients with multiple cysts. The procedure itself consists in the percutaneous insertion of a needle into the identified cyst, under ultrasound guidance, with subsequent draining the contained liquid; the sclerotherapy is used to avoid liquid reaccumulation that can occur in the cyst, which can result in symptom recurrence. Laparoscopic cyst decortication (also referred to as marsupialization) consists in the removal of one or more kidney cysts through laparoscopic surgery, during which cysts are punctured, and the outer wall of the larger cysts is excised with care not to incise the renal parenchyma. This procedure can be useful for pain relief in patients with ADPKD, and is usually indicated after earlier cyst aspiration has confirmed that the cyst to be decorticated is responsible for pain. Nonrandomised controlled trials conducted in the '90s showed that patients with symptomatic simple renal cysts who had recurrence of symptoms after initial response to simple aspiration could be safely submitted to cyst decortication, with a mean pain-free life between 17 and 24 months after surgery. Laparoscopic decortication presents a 5% recurrence rate of renal cysts compared to an 82% recurrence rate obtained with sclerotherapy. A novel treatment of specifically the chronic pain suffered by many sufferers of ADPKD is Celiac plexus neurolysis. This involves the chemical ablation of the celiac plexus, to cause a temporary degeneration of targeted nerve fibers. When the nerve fibers degenerate, it causes an interruption in the transmission of nerve signals. This treatment, when successful, provides significant pain relief for a period ranging from a few days to over a year. The procedure may be repeated when the affected nerves have healed and the pain returns. Many ADPKD patients suffer symptomatic sequelae in consequence of the disease, such as cyst hemorrhage, flank pain, recurrent infections, nephrolithiasis, and symptoms of mass effect (i.e., early satiety, nausea and vomiting, and abdominal discomfort), from their enlarged kidneys. In such cases, nephrectomy can be required due to intractable symptoms or when in the course of preparing for kidney transplantation, the native kidneys are found to impinge upon the true pelvis and preclude the placement of a donor allograft. Additionally, native nephrectomy may be undertaken in the presence of suspected malignancy, as renal cell carcinoma (RCC) is two to three times more likely in the ADPKD population in end-stage kidney disease (ESKD) than in the ESKD patients without ADPKD. Although the indications for nephrectomy in ADPKD may be related to kidney size, the decision to proceed with native nephrectomy is often undertaken on an individual basis, without specific reference to kidney size measurements. Two modalities of dialysis can be used in the treatment of ADPKD patients: peritoneal dialysis and hemodialysis. Epidemiological data shows that ADPKD affects 5-13.4% of patients undergoing hemodialysis in Europe and in the United States, and about 3% in Japan. Peritoneal dialysis has usually been contra-indicated in ADPKD patients with large kidney and liver volumes, due to expected physical difficulties in the procedure and possible complications; however, no difference is seen in long-term morbidity between hemodialysis and peritoneal dialysis in ADPKD. Kidney transplantation is accepted as the preferred treatment for ADPKD patients with ESRD. Among American patients on the kidney-transplant waiting list (as of December 2011), 7256 (8.4%) were listed due to cystic kidney disease and of the 16,055 renal transplants performed in 2011, 2057 (12.8%) were done for patients with cystic kidney disease, with 1,189 from deceased donors and 868 from living donors. In ADPKD patients, gradual cyst development and expansion result in kidney enlargement, and during the course of the disease, glomerular filtration rate remains normal for decades before kidney function starts to progressively deteriorate, making early prediction of renal outcome difficult. The CRISP study, mentioned in the treatment section above, contributed to build a strong rationale supporting the prognostic value of total kidney volume (TKV) in ADPKD; TKV (evaluated by MRI) increases steadily and a higher rate of kidney enlargement correlated with accelerated decline of GFR, while patient height-adjusted TKV (HtTKV) ≥600 ml/m predicts the development of stage 3 chronic kidney disease within 8 years. Besides TKV and HtTKV, the estimated glomerular filtration rate (eGFR) has also been tentatively used to predict the progression of ADPKD. After the analysis of CT or MRI scans of 590 patients with ADPKD treated at the Mayo Translational Polycystic Kidney Disease Center, Irazabal and colleagues developed an imaging-based classification system to predict the rate of eGFR decline in patients with ADPKD. In this prognostic method, patients are divided into five subclasses of estimated kidney growth rates according to age-specific HtTKV ranges (1A, <1.5%; 1B, 1.5–3.0%; 1C, 3.0–4.5%; 1D, 4.5–6.0%; and 1E, >6.0%) as delineated in the CRISP study. The decline in eGFR over the years following initial TKV measurement is significantly different between all five patient subclasses, with those in subclass 1E having the most rapid decline. Some of the most common causes of death in patients with ADPKD are various infections (25%), a ruptured berry aneurysm (15%), or coronary/hypertensive heart disease (40%).
https://en.wikipedia.org/wiki?curid=24595
Potsdamer Platz Potsdamer Platz (, literally "Potsdam Square") is an important public square and traffic intersection in the center of Berlin, Germany, lying about south of the Brandenburg Gate and the Reichstag (German Parliament Building), and close to the southeast corner of the Tiergarten park. It is named after the city of Potsdam, some to the south west, and marks the point where the old road from Potsdam passed through the city wall of Berlin at the Potsdam Gate. After developing within the space of little over a century from an intersection of rural thoroughfares into the most bustling traffic intersection in Europe, it was totally destroyed during World War II and then left desolate during the Cold War era when the Berlin Wall bisected its former location. Since German reunification, Potsdamer Platz has been the site of major redevelopment projects. Potsdamer Platz began as a trading post where several country roads converged just outside Berlin's old customs wall. The history of Potsdamer Platz can probably be traced back to 29 October 1685, when the Tolerance Edict of Potsdam was signed, whereby Frederick William, Elector of Brandenburg-Prussia from 1640 to 1688, allowed large numbers of religious refugees, including Jews from Austria and Huguenots expelled from France, to settle on his territory. A key motivation behind the Edict was so the Elector could encourage the rapid repopulation, restabilising and economic recovery of his kingdom, following the ravages of the Thirty Years' War (1618–48). Altogether up to 15,000 Huguenots made new homes in the Brandenburg region, some 6,000 of these in its capital, Berlin (indeed, by 1700 and for a while afterwards as much as 20% of Berlin's population was French-speaking). Two other things resulted from this huge influx. Firstly, Berlin's medieval fortifications, recently rebuilt from 1658 to 1674 in the form of a Dutch-style star fort, on an enormous scale and at great expense (and similar to examples still in extant today in the Netherlands like Naarden and Bourtange), became virtually redundant overnight; and secondly, the already crowded city became even more congested. Several new districts were founded around the city's perimeter, just outside the old fortifications. The largest of these was Friedrichstadt, just south west of the historic core of Berlin, begun in 1688 and named after new Elector Frederick William III, who later became King Frederick I of Prussia. Its street layout followed the Baroque-style grid pattern much favoured at the time, and was based on two main axes: Friedrichstraße running north-south, and Leipziger Straße running east-west. All the new suburbs were absorbed into Berlin around 1709–10. In 1721-3 a south-westwards expansion of Friedrichstadt was planned under the orders of King Frederick William I, and this was completed in 1732-4 by architect Philipp Gerlach (1679–1748). In this expansion, a new north-south axis emerged: Wilhelmstrasse. In 1735-7, after Friedrichstadt's expansion was complete, a customs or excise wall, 17 km long and 4.2 m high, was erected around Berlin's new perimeter. Consisting of a wooden palisade at first, it was later replaced with a brick and stone wall, pierced by 14 gates (later increased to 18), where roads entered the city. Here taxes were levied on goods passing through, chiefly meat and flour. The most prestigious gate was the Brandenburg Gate, for the important road from Brandenburg, but 1 km to the south was the entry point of another road that gained even greater significance. This road had started out in the Middle Ages as a lane running out from Berlin to the hamlet of Schöneberg, but it had developed into part of a trading route running right across Europe from Paris to St. Petersburg via Aachen, Berlin and Königsberg. In 1660 the Elector Frederick William made it his route of choice to Potsdam, the location of his palace, which had recently been renovated. Starting in 1754 a daily stagecoach ran between Berlin and Potsdam, although the road was in poor shape. But in 1740 Frederick II had become King. Not a great lover of Berlin, he later built a new palace, the Sanssouci, at Potsdam in 1744-7, followed by the New Palace in 1763-9, so the road now had to be made fit for a King, plus all his courtiers and staff. After numerous other improvements, in 1791-3 this section was made into Prussia's first all-weather road. It later became Potsdamer Straße; its point of entry into Berlin, where it passed through the customs wall, became the "Potsdamer Tor" (Potsdam Gate); once inside the gate Leipziger Straße was its eastwards continuation, and Wilhelmstraße was the first north-south thoroughfare that intersected with it. It was around this gate that Potsdamer Platz was to develop. As a physical entity, Potsdamer Platz began as a few country roads and rough tracks fanning out from the Potsdam Gate. According to one old guide book, it was never a proper platz, but a five-cornered traffic knot on that old trading route across Europe. Just inside the gate was a large octagonal area, created at the time of Friedrichstadt's expansion in 1732-4 and bisected by Leipziger Straß; this was one of several parade grounds for the thousands of soldiers garrisoned in Berlin at the height of the Kingdom of Prussia. Initially known appropriately as the "Achteck" (Octagon), on 15 September 1814 it was renamed Leipziger Platz after the site of Prussia's final decisive defeat of Napoleon Bonaparte at the Battle of Leipzig, 16–19 October 1813, which brought to an end the Wars of Liberation that had been going on since 1806. The Potsdam Gate itself was redesignated the "Leipziger Tor" (Leipzig Gate) around the same time, but reverted to its old name a few years later. The history of Leipziger Platz has been inextricably linked with that of its neighbor almost since its creation. Yet their respective stories have in many ways been very different. The future Potsdamer Platz was most definitely "outside" Berlin, and therefore not subject to the planning guidelines and constraints that would normally be expected in a city keen to show itself off as the capital of an empire. It grew very rapidly in a piecemeal and haphazard way, and came to epitomise wildness and excess in a manner that contributed much to its legendary status. Leipziger Platz however, was "inside" the city (and had a name almost a century before its neighbor did), and always had an orderly, disciplined look about it. After all, it had been planned and built all in one go by Johann Philipp Gerlach. One late 18th-century artistic depiction shows a range of buildings relentless in their uniformity. Indeed, this, together with the grid pattern of the streets, is what one would expect in Prussia's chief garrison city. One writer of the time said that a stroll round Friedrichstadt was like walking round military barracks. In this respect the Potsdam Gate was a dividing line between two different worlds. It was not until later on that many of these buildings began to be replaced by important historical palaces and aristocratic mansions. By this time however, Leipziger Platz was no longer a parade ground, and there had been much speculation about a possible complete redesign for the whole area. Back in 1797 had come the first of two proposed schemes that would have afforded the future Potsdamer Platz the appearance of a proper square. Under both schemes the old rural intersection just outside the Potsdam Gate, and the Octagon (Leipziger Platz) just inside, were to be joined together to create a long rectangular space, with a gargantuan edifice standing in the middle of it. The 1797 scheme came from the renowned Prussian architect Friedrich David Gilly (1772–1800), who proposed a monument to the former Prussian King, Friedrich II. Though containing some Egyptian and French neo-Classicist features, the design was basically a huge Greek temple in the Doric style, loosely modelled on the Parthenon in Athens, though raised up on an enormous geometric plinth and flanked by numerous obelisks (the Egyptian element). A grand new Potsdam Gate formed part of the design. It was never built, but eighteen years later in 1815 Gilly's pupil, Karl Friedrich Schinkel (1781–1841), put forward plans for a National Memorial Cathedral to commemorate the recent victories in the Wars of Liberation. To be known as the "Residenzkirche", it was again, never built due to lack of funds, and in any case the national fervor of the period favored the long-awaited completion of Cologne Cathedral over a new building, but Schinkel went on to become one of the most prolific and celebrated architects of his time. So the layout stayed put, although in 1823-4 Schinkel did get to rebuild the Potsdam Gate. Formerly little more than a gap in the customs wall, it was replaced by a much grander affair consisting of two matching Doric-style stone gate-houses, like little temples (a nod to Friedrich Gilly perhaps), facing each other across Leipziger Straße. The one on the north side served as the customs house and excise collection point, while its southern counterpart was a military guardhouse, set up to prevent desertions of Prussian soldiers, which had become a major problem. The new gate was officially dedicated on 23 August 1824. The design also included a new look for Leipziger Platz. Attempts to create a market there to draw off some of the frenetic commercial activity in the centre of the city had not been successful. And so Schinkel proposed to turn it into a fine garden, although this part of the design was not implemented. It was a rival plan by gardener and landscape architect Peter Joseph Lenné (1789–1866), drawn up in 1826, that went ahead in 1828 but with modifications. In later years Lenné would completely redesign the Tiergarten, a large wooded park formerly the Royal Hunting Grounds, also give his name to Lennéstraße, a thoroughfare forming part of the southern boundary of the park, very close to Potsdamer Platz, and transform a muddy ditch to the south into one of Berlin's busiest waterways, the Landwehrkanal. Meanwhile, country peasantry were generally not welcome in the city, and so the gates also served to restrict access. However, the country folk were permitted to set up trading posts of their own just outside the gates, and the Potsdam Gate especially. It was hoped that this would encourage development of all the country lanes into proper roads; in turn it was hoped that these would emulate Parisian boulevards—broad, straight and magnificent, but the main intention was to enable troops to be moved quickly. Thus Potsdamer Platz was off and running. It was not called that until 8 July 1831, but the area outside the Potsdam Gate began to develop in the early 19th century as a district of quiet villas, for as Berlin became even more congested, many of its richer citizens moved outside the customs wall and built spacious new homes around the trading post, along the newly developing boulevards, and around the southern edge of the Tiergarten. Initially the development was fairly piecemeal, but in 1828 this area just to the west of Potsdamer Platz, sandwiched between the Tiergarten and the north bank of the future Landwehrkanal, received Royal approval for a more orderly and purposeful metamorphosis into a residential colony of the affluent, and gradually filled with houses and villas of a particularly palatial nature. These became the homes of civil servants, officers, bankers, artists and politicians among others, and earned the area the nickname "Millionaires' Quarter" although its official designation was "Friedrichvorstadt" (Friedrich's Suburb), or alternatively the "Tiergartenviertel" (Tiergarten Quarter). Many of the properties in the neighborhood were the work of architect Georg Friedrich Heinrich Hitzig (1811–81), a pupil of Schinkel who also built the original "English Embassy" in Leipziger Platz, where the vast Wertheim department store would later stand, although Friedrichvorstadt's focal point and most notable building was the work of another architect—and another pupil of Schinkel. The "Matthiaskirche" (St. Matthew's Church), built in 1844-6, was an Italian Romanesque-style building in alternating bands of red and yellow brick, and designed by Friedrich August Stüler (1800–65). This church, one of fewer than half a dozen surviving pre-World War II buildings in the entire area, forms the centrepiece of today's "Kulturforum" (Cultural Forum). Meanwhile, many of the Huguenots fleeing religious persecution in France, and their descendants, had also been living around the trading post and cultivating local fields. Noticing that traffic queues often built up at the Potsdam Gate due to delays in making the customs checks, these people had begun to offer coffee, bread, cakes and confectionery from their homes or from roadside stalls to travelers passing through, thus beginning the tradition of providing food and drink around the future Potsdamer Platz. In later years larger and more purpose-built establishments had begun to take their place, which in turn were superseded by even bigger and grander ones. The former district of quiet villas was by now anything but quiet: Potsdamer Platz had taken on an existence all its own whose sheer pace of life rivalled anything within the city. By the mid-1860s direct taxation had made the customs wall redundant, and so in 1866–7 most of it was demolished along with all the city gates except two – the Brandenburg Gate and the Potsdam Gate. The removal of the customs wall allowed its former route to be turned into yet another road running through Potsdamer Platz, thus increasing still further the amount of traffic passing through. This road, both north and south of the platz, was named Königgrätzer Straße after the Prussian victory over Austria at the Battle of Königgrätz on 3 July 1866, in the Austro-Prussian War. The railway first came to Berlin in 1838, with the opening of the Potsdamer Bahnhof, terminus of a 26 km line linking the city with Potsdam, opened throughout by 29 October (in 1848 the line would be extended to Magdeburg and beyond). Since the city authorities would not allow the new line to breach the customs wall, still standing at the time, it had to stop just short, at Potsdamer Platz, but it was this that kick-started the real transformation of the area, into the bustling focal point that Potsdamer Platz would eventually become. Just three years later a second railway terminus opened in the vicinity. Located 600 meters to the southeast, with a front facade facing Askanischer Platz, the Anhalter Bahnhof was the Berlin terminus of a line opened on 1 July 1841, as far as Jüterbog and later extended to Dessau, Kothen and beyond. Both termini began life as fairly modest affairs, but in order to cope with increasing demands both went on to much bigger and better things in later years, a new Potsdamer Bahnhof, destined to be Berlin's busiest station, opening on 30 August 1872 and a new Anhalter Bahnhof, destined to be the city's biggest and finest, following on 15 June 1880. This latter station benefitted greatly from the closure of a short-lived third terminus in the area – the Dresdner Bahnhof, located south of the Landwehrkanal, which lasted from 17 June 1875 until 15 October 1882. In addition, a railway line once ran through Potsdamer Platz itself. This was a connecting line opened in October 1851 and running around the city just inside the customs wall, crossing numerous streets and squares at street level, and whose purpose was to allow goods to be transported between the various Berlin stations, thus creating a hated traffic obstruction that lasted for twenty years. Half a dozen or more times a day, Potsdamer Platz ground to a halt while a train of 60 to 100 wagons trundled through at walking pace preceded by a railway official ringing a bell. The construction of the Ringbahn around the city's perimeter, linked to all the major stations, allowed the connecting line to be scrapped in 1871, although the Ringbahn itself was not complete and open for all traffic until 15 November 1877. In later years Potsdamer Platz was served by both of Berlin's two local rail systems. The U-Bahn arrived first, from the south; begun on 10 September 1896, it opened on 18 February 1902, with a new and better sited station being provided on 29 September 1907, and the line itself being extended north and east on 1 October 1908. In 1939 the S-Bahn followed, its North-South Link between Unter den Linden and Yorckstraße opening in stages during the year, the Potsdamer Platz S-Bahn station itself opening on 15 April. By the second half of the 19th century, Berlin had been growing at a tremendous rate for some time, but its growth accelerated even faster after the city became the capital of the new German Empire on 18 January 1871. Potsdamer Platz and neighbouring Leipziger Platz really started coming into their own from this time on. Now firmly in the centre of a metropolis whose population eventually reached 4.4 million, making it the third largest city in the world after London and New York, the area was ready to take on its most celebrated role. Vast hotels and department stores, hundreds of smaller shops, theatres, dance-halls, cafés, restaurants, bars, beer palaces, wine-houses and clubs, all started to appear. Some of these places became internationally known. Also, a very large government presence, with many German imperial departments, Prussian state authorities and their various sub-departments, came into the area, taking over 26 former palaces and aristocratic mansions in Leipziger Platz, Leipziger Straße and Wilhelmstraße. Even the Reichstag itself, the German Parliament, occupied the former home of the family of composer Felix Mendelssohn (1809–47) in Leipziger Straße before moving in 1894 to the vast new edifice near the Brandenburg Gate, erected by Paul Wallot (1841–1912). Next door, the Herrenhaus, or Prussian House of Lords (the Upper House of the Prussian State Parliament), occupied a former porcelain factory for a while, before moving to an impressive new building erected on the site of the former Mendelssohn family home in 1899–1904 by Friedrich Schulze Colditz (1843–1912). This building backed on to an equally grand edifice in the next street (Prinz-Albrecht-Straße), also by Colditz, that had been built for the Preußischer Landtag (the Prussian Lower House), in 1892-9. Potsdamer Platz was also the location of Germany's first electric street lights, installed in 1882 by the electrical giant Siemens, founded and based in the city. The heyday of Potsdamer Platz was in the 1920s and 1930s. By this time it had developed into the busiest traffic center in all of Europe, and the heart of Berlin's nightlife. It had acquired an iconic status, on a par with Piccadilly Circus in London or Times Square in New York. It was a key location that helped to symbolize Berlin; it was known worldwide, and a legend grew up around it. It represented the geographical center of the city, the meeting place of five of its busiest streets in a star-shaped intersection deemed the transport hub of the entire continent. These were: As well as the stations and other facilities and attractions already mentioned, in the immediate area was one of the world's biggest and most luxurious department stores: Wertheim. Founded by German merchant Georg Wertheim (1857–1939), designed by architect Alfred Messel (1853–1909), opened in 1897 and extended several times over the following 40 years, it ultimately possessed a floor area double that of the Reichstag, a 330-metre-long granite and plate glass facade along Leipziger Straße, 83 elevators, three escalators, 1,000 telephones, 10,000 lamps, five kilometers of pneumatic tubing for moving items from the various departments to the packing area, and a separate entrance directly from the nearby U-Bahn station. It also contained a summer garden, winter garden and roof garden, an enormous restaurant and several smaller eating areas, its own laundry, a theater and concert booking office, its own bank, whose strongrooms were underground at the eastern end of the building (and generated their own history decades later), and a large fleet of private delivery vehicles. In the run-up to Christmas Wertheim was transformed into a fairytale kingdom, and was well known to children from all over Germany and far beyond. In Stresemannstraße, and paralleling the Potsdamer Bahnhof on its eastern side, was another great magnet for shoppers and tourists alike – a huge multi-national-themed eating establishment: the Haus Vaterland. Designed by architect Franz Heinrich Schwechten (1841–1924), who was also responsible for the Anhalter Bahnhof and the Kaiser Wilhelm Memorial Church, it was erected in 1911–12 as the Haus Potsdam. 93 m in length and with a dome rising 35 m above the pavement at the north (Stresemannstraße) end, it contained the world's largest restaurant – the 2,500-seat Café Piccadilly, plus a 1,200-seat theatre and numerous offices. These included (from 1917 to 1927), the headquarters of Universum Film AG (aka UFA or Ufa), Germany's biggest film company. On 16 August 1914, less than three weeks after the start of World War I, the Café Piccadilly was given a new name – the more patriotic-sounding Café Vaterland. However, in 1927–8 the architect and entrepreneur Carl Stahl-Urach (1879–1933) transformed the whole building into a gastronomic fantasy land, financed and further elaborated upon by new owners the Kempinski organisation. It reopened on 31 August 1928 as the Haus Vaterland, offering "The World in One House," and could now hold up to 8,000 guests at a time. The Café Vaterland had remained largely untouched, but the 1,200-seat theatre was now a 1,400-seat cinema. The rest of the building had been turned into a large number of theme restaurants, all served from a central kitchen containing the largest gas-fueled cooking plant in Europe. These included: Rheinterrasse, Löwenbräu (Bavarian beer restaurant), Grinzing (Viennese café and wine bar), Bodega (Spanish winery), Csarda (Hungarian), Wild West Bar (aka the Arizona Bar) (American), Osteria (Italian), Kombüse (Bremen drinking den – literally "galley"), Rübchen (Teltow, named after the well-known turnip dish "Teltower Rübchen", made with turnips grown locally in the small town of Teltow just outside Berlin), plus a Turkish café and Japanese tearoom; additionally there was a large ballroom. Up to eight orchestras and dance bands regularly performed in different parts of the building, plus a host of singers, dancers and other entertainers. It should be pointed out here though that not all of these attractions existed simultaneously, owing to changes in those countries that Germany was or was not allied to, in the volatile years leading up to and during World War II, a good example being the closure of the Wild West Bar following America's entry into the war as an enemy of Germany. Among the major hotels at or near Potsdamer Platz were two designed by the same architect, Otto Rehnig (1864–1925), and opened in the same year, 1908. One was the 600-room Hotel Esplanade (sometimes known as the "Grand Hotel Esplanade"), in Bellevuestraße. Charlie Chaplin and Greta Garbo were guests there, and Kaiser Wilhelm II himself held regular "gentlemen's evenings" and other functions there in a room that came to be named after him – the Kaisersaal. The other was the Hotel Excelsior, also 600 rooms but superior provision of other facilities made it the largest hotel in Continental Europe, located in Stresemannstraße opposite the Anhalter Bahnhof and connected to it by a 100-metre-long subterranean passageway complete with a parade of underground shops. Two other hotels which shared the same architect, in this case Ludwig Heim (1844–1917), were the 68-room Hotel Bellevue (sometimes known as the "Grand Hotel Bellevue"), built 1887-8, and the 110-room Palast Hotel, built 1892-3 on the site of an earlier hotel. These stood on either side of the northern exit from Potsdamer Platz along Ebertstraße. The Bellevue was well known for its Winter Garden. Meanwhile, facing the Palast Hotel across the entrance to Leipziger Platz (the Potsdam Gate), was the 400-room Hotel Fürstenhof, by Richard Bielenberg (1871–1929) and Josef Moser (1872–1963), erected in 1906/07, also on the site of an earlier building. With its 200-metre-long main facade along Stresemannstraße, the Fürstenhof was less opulent than some of the other hotels mentioned, despite its size, but was still popular with business people. The new U-Bahn station was being built at the same time as the hotel and actually ran through the hotel's basement, cutting it in half, thus making the construction of both into something of a technical challenge, but unlike the Wertheim department store (and contrary to several sources), the hotel did not enjoy a separate entrance directly from the station. The Weinhaus Huth, with its distinctive corner cupola, was a wedge-shaped structure located in the angle between Potsdamer Straße and Linkstraße (literally "Left Street"), and with entrances in both streets. Wine merchant Friedrich Karl Christian Huth, whose great-grandfather had been "kellermeister" (cellar-master) to King Friedrich II back in 1769, had founded the firm in 1871 and taken over the former building in Potsdamer Straße on 23 March 1877. His son, the wine wholesale dealer William ("Willy") Huth (1877–1967), took over the business in 1904 and, a few years later, commissioned the replacement of the building by a new one on the same site. Running right through the block into Linkstraße, this new Weinhaus Huth was designed by the architects Conrad Heidenreich (1873–1937) and Paul Michel (1877–1938), and opened on 2 October 1912, and contained a wine restaurant on the ground floor, and wine storage space above, so it had to take a lot of weight. It was thus given a strong steel skeleton, which would stand the building in very good stead some three decades after its completion. Famous for its fine claret, numerous members of European society were made welcome there as guests. A total of 15 chefs were employed there, and Alois Hitler Jr., the stepbrother of the future Nazi dictator Adolf Hitler, was a waiter there in the 1920s, before he opened his own restaurant and hotel at Wittenbergplatz, in the western part of the city. Café Josty was one of two rival cafés (the other being the "Astoria", later "Café Eins A"), occupying the broad corner between Potsdamer Straße and Bellevuestraße. The Josty company had been founded in 1793 by two Swiss brothers, Johann and Daniel Josty, who had emigrated to Berlin from Sils in Switzerland and set up a bakery from which the café was a 1796 offshoot. It had occupied various locations including (from 1812 till 1880), a site in front of the Berlin City Palace, before moving to Potsdamer Platz in the latter year. A major player on the Berlin café scene, Josty attracted writers, artists, politicians and international society: it was one of "the" places to be seen. The writer Theodor Fontane, painter Adolf von Menzel, and Dadaist Kurt Schwitters were all guests; Karl Liebknecht, the Spartacus Communist movement leader read a lot here and even made some key political speeches from the pavement terrace, while author Erich Kästner wrote part of his 1929 bestseller for children, "Emil und die Detektive" (Emil and the Detectives), on the same terrace and made the café the setting for an important scene in the book. Despite the prestige associated with its name, Café Josty closed in 1930. It then went through an odyssey of re-openings, closures and relaunches under a number of different names including "Conditorei Friediger", "Café Wiener", "Engelhardt Brau" and "Kaffee Potsdamer Platz" (sometimes appearing to have two or more names simultaneously), before its eventual destruction in World War II. Among the many beer palaces around Potsdamer Platz were two in particular which contained an extensive range of rooms and halls covering a large area. The Alt-Bayern in Potsdamer Straße was erected by architect Wilhelm Walther (1857–1917) and opened in 1904. After closing in 1914, it underwent a revamp before reopening in 1926 under the new name Bayernhof. Meanwhile, in Bellevuestraße, sandwiched between Café Josty and the Hotel Esplanade but extending right through the block with a separate entrance in Potsdamer Straße, was the Weinhaus Rheingold, built by Bruno Schmitz (1858–1916) and opened on 6 February 1907. Originally intended to be a concert venue until concerns were raised about increased traffic problems in the already congested streets, it was ruled that it should serve a gastronomic purpose only. Altogether it could accommodate 4,000 guests at a time, 1,100 of these in its main hall alone. Many of the total of 14 banquet and beer halls had a Wagnerian theme – indeed, the very name of the complex was taken from the Wagner opera "Das Rheingold", the first of the four parts of the cycle Der Ring des Nibelungen, although this name did hark back to the building's planned former role as a concert venue. Another building by the same architect but which still stands – the "Rosengarten" in Mannheim, has a remarkably similar main facade. Finally, on the corner between Potsdamer Straße and the Potsdamer Bahnhof, stood Bierhaus Siechen, built by Johann Emil Schaudt (1874–1957), opened in 1910 and later relaunched under the new name Pschorr-Haus. At 8.00 p.m. on 8 October 1923, Germany's first radio broadcast was made, using the world's first medium-wave transmitter, from a building (Vox-Haus) close by in Potsdamer Straße. Standing alongside the Weinhaus Rheingold's Potsdamer Straße entrance, this five-storey steel-framed edifice had been erected as an office building in 1907-8 by architect and one-time Berlin inspector of buildings Otto Stahn (1859–1930), who was also responsible for the city's Oberbaumbrücke over the River Spree. In 1920 the Vox-group had taken over the building and the following year commissioned its remodelling by Swiss architect Rudolf Otto Salvisberg (1882–1940), and then erected two transmitting antennae. Despite several upgrades between December 1923 and July 1924, the nearby Hotel Esplanade's formidable bulk prevented the transmitter from functioning effectively and so in December 1924 it was superseded by a better sited new one, but Vox-Haus lived on as the home of Germany's first radio station, Radiostunde Berlin, founded in 1923, renamed Funkstunde in March 1924, but it moved to a new home in 1931 and closed in 1934. In addition, the former Millionaires' Quarter just to the west of Potsdamer Platz had become a much favoured location for other countries to site their embassies. By the early 1930s there were so many diplomats living and working in the area that it came to be redesignated the "Diplomatic Quarter". By 1938, 37 out of 52 embassies and legations in Berlin, and 28 out of 29 consulates, were situated here. The first traffic light tower in Germany was erected at Potsdamer Platz on 20 October 1924 and went into service on 15. December 1924 in an attempt to control the sheer volume of traffic passing through. This traffic had grown to extraordinary levels. Even in 1900, more than 100,000 people, 20,000 cars, horse-drawn vehicles and handcarts, plus many thousands of bicycles, passed through the platz daily. By the 1920s the number of cars had soared to 60,000. The trams added greatly to this. The first four lines had appeared in 1880, rising to 13 by 1897, all horse-drawn, but after electrification between 1898 and 1902 the number of lines had soared to 35 by 1908 and ultimately reached 40, carrying between them 600 trams every hour, day and night. Services were run by a large number of companies. After 1918 most of the tram companies joined. In 1923, at the peak of the Hyperinflation the tram traffic was stopped for two days and a new communal company called "Berliner Straßenbahn-Betriebs-GmbH" was founded. Finally in 1929 all communal traffic companies (Underground, Tram and Buses) were unified into the "Berliner Verkehrsbetriebe" (Berlin Transport Services) company. At the Potsdamer Platz up to 11 policemen at a time had tried to control all this traffic but with varying success. The delays in tram traffic increased and the job was very dangerous for the policemen. The "Berliner Straßenbahn-Betriebs-GmbH" started researches to control the traffic on the main streets and places in 1924. Berlin traffic experts visits colleagues Paris, London and New York. They had to organize the traffic, define traffic rules and select a solution to control the traffic. In New York, Fifth Avenue they found traffic light towers designed by Joseph H. Freedlander in 1922 which can be regarded as a model for the Berlin tower. The Potsdamer Platz five-sided 8.5 m high traffic tower was designed by Jean Kramer, a German architect. The traffic lights were delivered by Siemens & Halske and mounted on top of the tower cabin. A solitary policeman sat in a small cabin at the top of the tower and switched the lights around manually, until they were automated in 1926. Yet some officers still remained on the ground in case people did not pay any attention to the lights. The tower remained until October 1937, when it was removed to allow for excavations for the new S-Bahn underground line. On 26 September 1997, a replica of the tower was erected, just for show, close to its original location by Siemens, to celebrate the company's 150th anniversary. The replica was moved again on 29 September 2000, to the place where it stands today. The traffic problems that had blighted Potsdamer Platz for decades continued to be a big headache, despite the new lights, and these led to a strong desire to solve them once and for all. By now Berlin was a major centre of innovation in many different fields including architecture. In addition, the city's colossal pace of change (compared by some to that of Chicago), had caused its chief planner, Martin Wagner (1885–1957), to foresee the entire centre being made over totally as often as every 25 years. These factors combined to produce some far more radical and futuristic plans for Potsdamer Platz in the late 1920s and early 1930s, especially around 1928-9, when the creative fervour was at its peak. On the cards was an almost total redevelopment of the area. One design submitted by Wagner himself comprised an array of gleaming new buildings arranged around a vast multi-level system of fly-overs and underpasses, with a huge glass-roofed circular car-park in the middle. Unfortunately the worldwide Great Depression of the time, triggered by the Wall Street Crash of 1929, meant that most of the plans remained on the drawing board. However, in Germany this depression was virtually a continuation of an economic morass that had blighted the country since the end of World War I, partly the result of the war reparations the country had been made to pay, and this morass had brought about the closure and demolition of the Grand Hotel Belle Vue, on the corner of Bellevuestraße and Königgrätzer Straße, thus enabling one revolutionary new building to struggle through to reality despite considerable financial odds. Columbushaus was the result of a plan by the French retail company Les Galeries Lafayette, whose flagship store was the legendary Galeries Lafayette in Paris, to open a counterpart in Berlin, on the Grand Hotel Belle Vue's former site, but financial worries made them pull out. Undaunted, the architect, Erich Mendelsohn (1887–1953), erected vast advertising boards around the perimeter of the site, and the revenue generated by these enabled him to proceed with the development anyway. Columbushaus was a ten-storey ultra-modern office building, years ahead of its time, containing Germany's first artificial ventilation system, and whose elegance and clean lines won it much praise. However, despite a Woolworths store on its ground floor, a major travel company housed on the floor above, and a restaurant offering fine views over the city from the top floor, the economic situation of the time meant that it would not be followed by more buildings in that vein: no further redevelopment in the immediate vicinity of Potsdamer Platz occurred prior to World War II, and so Columbushaus would always seem out of place in that location. Nevertheless, its exact position showed that the platz was starting to be opened out: the former hotel had mostly stood on a large flagged area laid out in front of it, indicating that the new building curved away from the existing street line; this would have enabled future street widening to take place. Columbushaus was completed and opened in January 1933, the same month that the Nazi dictator Adolf Hitler (1889–1945) came to power. Hitler had big plans for Berlin, to transform it into the "Welthauptstadt" (World Capital) Germania, to be realised by his architect friend Albert Speer (1905–81). Under these plans the immediate vicinity of Potsdamer Platz would have got off fairly lightly, although the Potsdamer Bahnhof (and the Anhalter Bahnhof a short distance away) would have lost their function. The new North-South Axis, the linchpin of the scheme, would have severed their approach tracks, leaving both termini stranded on the wrong side of it. All trains arriving in Berlin would have run into either of two vast new stations located on the Ringbahn to the north and south of the centre respectively, to be known as "Nordbahnhof" (North Station) and "Südbahnhof" (South Station), located at Wedding and Südkreuz. In Speer's plan the former Anhalter Bahnhof was earmarked to become a public swimming pool; the intended fate of the Potsdamer Bahnhof has not been documented. Meanwhile, the North-South Axis would have cut a giant swathe passing just to the west of Potsdamer Platz, some 5 km long and up to 100 m wide, and lined with Nazi government edifices on a gargantuan scale. The eastern half of the former Millionaires' Quarter, including Stüler's Matthiaskirche, would have been totally eradicated. New U-Bahn and S-Bahn lines were planned to run directly beneath almost the whole length of the axis, and the city's entire underground network reoriented to gravitate towards this new hub (at least one tunnel section, around 220 metres in length, was actually constructed and still exists today, buried some 20 metres beneath the Tiergarten, despite having never seen a train). This was in addition to the S-Bahn North-South Link beneath Potsdamer Platz itself, which went forward to completion, opening in stages in 1939. In the event, a substantial amount of demolition did take place in Potsdamer Straße, between the platz itself and the Landwehrkanal, and this became the location of the one Germania building that actually went forward to a state of virtual completion: architect Theodor Dierksmeier's "Haus des Fremdenverkehrs" (House of Tourism), basically a giant state-run travel agency. More significantly, its curving eastern facade marked the beginnings of the "Runden Platz" (Round Platz), a huge circular public space at the point where the North-South Axis and Potsdamer Straße intersected. Additionally, the southern edge of the Tiergarten was to be redefined, with a new road planned to slice through the built-up area immediately to the north of Columbushaus (although Columbushaus itself would remain unscathed); this road would line up with Voßstraße, one block to the north of Leipziger Platz. Here Albert Speer erected Hitler's enormous new Reichskanzlei building, and yet even this was little more than a dry run for an even larger structure some distance further away. Meanwhile, the Nazi influence was no less evident at Potsdamer Platz than anywhere else in Berlin. As well as swastika flags and propaganda everywhere, Nazi-affiliated concerns occupied a great many buildings in the area, especially Columbushaus, where they took over most of the upper floors. As if to emphasise their presence, they used the building to advertise their own weekly publication: a huge neon sign on its roof proclaimed "DIE BRAUNE POST – N.S. SONNTAGSZEITUNG" (The Brown Post – N.S. Sunday Newspaper), the "N.S." standing for "Nationalsozialist" (National Socialist), i.e. Nazi. Probably Potsdamer Platz's most prominent landmark in the mid-1930s, the sign first appears in photographs dated 1935 but was gone again by 1938. On an even darker note, those Nazi concerns included the Gestapo, who set up a secret prison in an upper part of the building, complete with interrogation and torture rooms. Meanwhile, in another part of the building, the Information Office of the Olympic Games Organising Committee was housed. Here much of the planning of the 1936 Berlin Summer Olympic Games took place. As was the case in most of central Berlin, almost all of the buildings around Potsdamer Platz were turned to rubble by air raids and heavy artillery bombardment during the last years of World War II. The three most destructive raids (out of 363 that the city suffered), occurred on 23 November 1943, and 3 and 26 February 1945. Things were not helped by the very close proximity of Hitler's Reich Chancellery, just one block away in Voßstraße, and many other Nazi government edifices nearby as well, and so Potsdamer Platz was right in a major target area. Once the bombing and shelling had largely ceased, the ground invasion began as Soviet forces stormed the centre of Berlin street by street, building by building, aiming to capture the Reich Chancellery and other key symbols of the Nazi government. When the city was divided into sectors by the occupying Allies at the end of the war, the square found itself on the boundary between the American, British and Soviet sectors. Despite all the devastation, commercial life reappeared in the ruins around Potsdamer Platz within just a few weeks of war's end. The lower floors of a few buildings were patched up enough to allow business of a sort to resume. The U-Bahn and S-Bahn were partially operational again from 2 June 1946, fully from 16 November 1947 (although repairs were not completed until May 1948) and trams by 1952. Part of the Haus Vaterland reopened in 1948 in a much simplified form. The new East German state-owned retail business H.O. ("Handelsorganisation", meaning Trading Organisation), had seized almost all of Wertheim's former assets in the newly created German Democratic Republic but, unable to start up the giant Leipziger Platz store again (it was too badly damaged), it opened a new "Kaufhaus" (department store) on the ground floor of Columbushaus. An office of the "Kasernierte Volkspolizei" (literally "Barracked People's Police") – the military precursor of the "Nationale Volksarmee" (National People's Army), occupied the floor above. Meanwhile, a row of new single-storey shops was erected along Potsdamer Straße. Out on the streets, even the flower-sellers, for whom the area had once been renowned, were doing brisk business again. The area around Potsdamer Platz had also become a focus for black market trading. Since the American, British and Soviet Occupation Zones converged there, people theoretically only had to walk a few paces across sector boundaries to avoid the respective police officials. Meanwhile, friction between the Western Allies and Soviets was steadily rising. The Soviets even took to marking out their border by stationing armed soldiers along it at intervals of a few metres, day and night, in all weathers. Since there was not, as yet, a fixed marker, the borders were prone to abuse, which eventually resulted (in August 1948), in white lines in luminous paint appearing across roads and even through ruined buildings to try to deter the Soviets from making unauthorised incursions into the American and British zones. These measures were only partially successful: after further skirmishes in which shots were fired, barbed wire entanglements were stretched across some roads, a foretaste of things to come. Remembering the effective use of propaganda in the leadup to the second World War, the opposing camps later began berating one another with enormous signs displaying loud political slogans, facing each other across the border zone. That on the western side was erected first, in direct response to the ban on sales of Western newspapers in East Berlin, and comprised an illuminated display board 30 m wide and 1.5 m deep, facing east, supported on three steel lattice towers 25 m high and topped by the words "DIE FREIE BERLINER PRESSE MELDET" (The Free Berlin Press Announces). Important messages were spelt out on the display board using up to 2,000 bulbs. The sign was switched on for the first time on 10 October 1950, watched by a large crowd. A month later, on 18 November, the Communist authorities in the east ordered its destruction using a catapult made from a compressed air hose loaded with pebbles and small pieces of metal. However, the order was not executed and the sign lasted until 1974, an eventual victim of its own high maintenance costs. Not to be outdone, East Berlin had meanwhile erected a sign of its own. This was up and running by 25 November 1950, less than seven weeks after its western counterpart, albeit for a much shorter time period. (It was demolished on 29 January 1953.) Facing towards West Berlin was the proclamation "DER KLUGE BERLINER KAUFT BEI DER H.O." (The Wise Berliner Buys With The HO) Underneath were the words "NÄCHSTE VERKAUFSSTELLEN" (Next Sales Premises), between two arrows pointing left and right, suggesting that large shopping developments were forthcoming in the immediate vicinity, although these never appeared. What was not apparent from the western side however, was that East Berlin's construction boasted its own illuminated display board facing east, whose messages comprised the version of the news that the Communist authorities in the east wanted their citizens to believe. In addition, the East Berlin sign was carefully placed so that, when viewed from further away down Leipziger Straße, its display board obscured the West Berlin sign standing beyond it. Over the next two years, West Berlin would regularly raise or lower its sign to make it more easily visible from the East again – and then East Berlin would raise or lower its own construction to obscure it once more. Furthermore, the East German Government also exploited the huge facade of the nearby Columbushaus as a further propaganda tool. More significantly, living and working conditions in East Germany were rapidly worsening under Communist rule. Tensions finally reached breaking point and a Workers’ Uprising took place on 17 June 1953, to be quickly and brutally crushed when Soviet tanks rolled in, and some of the worst violence occurred around Potsdamer Platz, where several people were killed by the Volkspolizei. No one really knows how many people died during the uprising itself, or by the subsequent death sentences. There are 55 known victims, but other estimates state at least 125. West German estimates were much higher: in 1966 the West German Ministry for Inter-German Affairs claimed that 383 people died in the uprising, including 116 "functionaries of the SED regime", with an additional 106 executed under martial law or later condemned to death, while 1,838 were injured and 5,100 arrested, 1,200 of these later being sentenced to a total of 6,000 years in penal camps. It was also claimed that 17 or 18 Soviet soldiers were executed for refusing to shoot demonstrating workers, but this remains unconfirmed by post-1990 research. Whatever the casualty figures, for the second time in eight years, the "busiest and most famous square in Europe" had been transformed into a bloody battleground. Columbushaus, with its H.O. store on the ground floor and military police station above, had been a prime target in the insurrection and been burnt out yet again, along with the Haus Vaterland and other premises. This time, they were not rehabilitated. As Cold War tensions rose still further during the 1950s, restrictions were placed on travel between the Soviet sector (East Berlin) and the western sectors (West Berlin). For the second time in its history, the Potsdam Gate (or what remained of it), was like a dividing line between two different worlds. Lying on this invisible frontier, Potsdamer Platz was no longer an important destination for Berliners. Similarly, neither East Berlin nor West Berlin regarded their half as a priority area for redevelopment, seeking instead to distance themselves from the traditional heart of the city and develop two new centres for themselves, well away from the troubled border zone. West Berlin inevitably chose the Kurfürstendamm and the area around the Kaiser Wilhelm Memorial Church, while East Berlin built up Alexanderplatz and turned Frankfurter Allee (which they renamed Stalinallee in 1949, Karl-Marx-Allee in 1961), into their own showpiece boulevard. Potsdamer Platz, meanwhile, was more or less left to rot, as one by one the ruined buildings were cleared away, neither side having the will to repair or replace them. On the western side things did improve later on with the development of the Cultural Forum, whose site roughly equates with the former Millionaires' Quarter. With the construction of the Berlin Wall on 13 August 1961, along the intracity frontier, Potsdamer Platz now found itself physically divided in two. What had once been a busy intersection had become totally desolate. With the clearance of most of the remaining bomb-damaged buildings on both sides (on the eastern side, this was done chiefly to give border guards a clear view of would-be escapees and an uninterrupted line of fire), little was left in an area of dozens of hectares. Further demolitions occurred up until 1976 when the Haus Vaterland finally disappeared. After that, only two buildings in the immediate vicinity of Potsdamer Platz still stood – one complete, the other in a half-ruined fragmented form: the Weinhaus Huth's steel skeleton had enabled the building to withstand the pounding of World War II virtually undamaged, and it stood out starkly amid a great levelled wasteland, although now occupied only by groups of squatters. A short distance away stood portions of the former Hotel Esplanade, including the Kaisersaal, used at various times as a much scaled-down hotel, cinema, nightclub and occasional film-set (scenes from Cabaret were shot there). Apart from these, no other buildings remained. Below ground, the U-Bahn section through Potsdamer Platz had closed entirely; although the S-Bahn line itself remained open, it suffered from a quirk of geography in that it briefly passed through East German territory en route from one part of West Berlin to another. Consequently, Potsdamer Platz S-Bahn station became the most infamous of several "Geisterbahnhofe" (ghost stations), through which trains ran without stopping, its previously bustling platforms now decrepit, sealed off from the outside world, and patrolled by armed guards. During its 28 years in limbo, Potsdamer Platz exuded a strange fascination towards many people on the western side, especially tourists and also visiting politicians and heads of state. For the benefit of the former, the row of post-war single-storey shops in Potsdamer Straße now sold a wide variety of souvenir goods, many of which were purchased by coach-loads of curious visitors brought specially to this sad location. An observation platform had been erected, primarily for military personnel and police but used increasingly by members of the public, so that they could gaze over the Wall at the wilderness beyond. Meanwhile, among the many V.I.P.s who came to look were U.S. Senator Robert F. Kennedy (22 February 1962), Prime Minister Harold Wilson of the United Kingdom (6 March 1965), H.M. Queen Elizabeth II of the United Kingdom (27 May 1965), H.R.H. Charles, Prince of Wales (3 November 1972), U.S. President Jimmy Carter (15 July 1978), and U.S. Vice President (later President) George H. W. Bush (George Bush Senior) (1 February 1983). Some scenes of the 1987 Wim Wenders movie "Der Himmel über Berlin" (English title: "Wings of Desire)" were filmed on the old, almost entirely void Potsdamer Platz before the Berlin Wall fell. In one scene an old man named Homer, played by actor Curt Bois, searches in vain for Potsdamer Platz, but finds only rubble, weeds and the graffiti-covered Berlin Wall. The movie thus gives a good impression of the surroundings at the time, which are completely unlike what can be seen today. After the initial opening of the Berlin Wall on 9 November 1989, Potsdamer Platz became one of the earliest locations where the Wall was "breached" to create a new border crossing between East and West Berlin. The crossing began operating on 11 November 1989, earlier than the iconic Brandenburg Gate crossing which opened more than a month later. The crossing required the dismantling of both the inner and outer walls and the clearance of the "death zone" or "no man's land" between the two. A temporary road, lined with barriers, was created across this zone and checkpoints were set up just inside East German territory. Proper dismantling of the entire wall began on 15 May 1990 and all border checks were abolished on 1 July 1990 as East Germany joined West Germany in a currency union. On 21 July 1990, ex-Pink Floyd member Roger Waters staged a gigantic charity concert of his former band's rock extravaganza "The Wall" to commemorate the end of the division between East and West Germany. The concert took place at Potsdamer Platz – specifically an area of the former no man's land just to the north of the Reich Chancellery site, and featured many guest superstars. It was preparations for this concert, rather than historical interest, that brought about the first detailed post-Cold War survey of the area with a view to determining what, if anything, was left of Hitler's bunker and any other underground installations. Although sections of the main Führerbunker were found, partially destroyed or filled in, another bunker complex was found further north that even the East German authorities had apparently missed, plus other cavities beneath land bordering the east side of Ebertstraße, although these turned out to be underground garages belonging to a former SS accommodation block. After 1990, the square became the focus of attention again, as a large (some 60 hectares), attractive location which had suddenly become available in the centre of a major European city. It was widely seen as one of the hottest, most exciting building sites in Europe, and the subject of much debate amongst architects and planners. If Berlin needed to re-establish itself on the world stage, then Potsdamer Platz was one of the key areas where the city had an opportunity to express itself. More than just a building site, Potsdamer Platz was a "statement of intent". In particular, due to its location straddling the erstwhile border between east and west, it was widely perceived as a "linking element," reconnecting the two-halves of the city in a way that was symbolic as well as physical, helping to heal the historical wounds by providing an exciting new mecca attracting Berliners from both sides of the former divide. Whether fairly or unfairly, a great deal was riding on the project, and expectations were high. The Berlin Senate (city government) organised a design competition for the redevelopment of Potsdamer Platz and much of the surrounding area. Eventually attracting 17 entrants, a winning design was announced in October 1991, that from the Munich-based architectural firm of Hilmer & Sattler. They had to fight off some stiff competition though, including a last-minute entry by British architect Richard Rogers. The Berlin Senate then chose to divide the area into four parts, each to be sold to a commercial investor, who then planned new construction according to Hilmer & Sattler's masterplan. During the building phase Potsdamer Platz was the largest building site in Europe. While the resulting development is impressive in its scale and confidence, the quality of its architecture has been praised and criticised in almost equal measure. The largest of the four parts went to Daimler-Benz (later Daimler-Chrysler and now Daimler AG), who charged Italian architect Renzo Piano with creating an overall design for their scheme while sticking to the underlying requirements of Hilmer & Sattler's masterplan. A major development bordering the west side of the former Potsdamer Bahnhof site, some of its 19 individual buildings were then erected by other architects, who submitted their own designs while maintaining Piano's key elements. One of these was Richard Rogers, who played a part in the development after all (his great British rival, Norman Foster, was putting the new dome on the Reichstag at about the same time). The first spade at the start of the Daimler-Benz development was turned by the Mayor of Berlin, Eberhard Diepgen, on 11 October 1993, and the finished complex was officially opened by the Federal President of Germany, Roman Herzog, on 2 October 1998, in a glittering ceremony featuring large-scale celebrations and musical performances. The 19 buildings include the offices of Daimler-Benz themselves (actually their subsidiary "debis", whose 21-storey main tower rises to 106 metres and is the tallest building in the new Potsdamer Platz development), also offices of the major British professional services company PricewaterhouseCoopers, "Berliner Volksbank" (Germany's largest cooperative bank), and the remarkable 25-storey, 103-metre-high Potsdamer Platz No. 1, known as the "Kollhoff Tower" by architect Hans Kollhoff, home to a number of prestigious law firms. Potsdamer Platz No. 1 also houses the "Panoramapunkt" viewing platform, located 100 m above ground level, which is accessed by riding Europe's fastest elevator (8.65 metres per second). From the Panoramapunkt one can see such landmarks as the Brandenburg Gate, Reichstag, Federal Chancellery, Bellevue Palace, Cathedral, Television Tower, Gendarmes Market, Holocaust Memorial and Kaiser Wilhelm Memorial Church. Unfortunately the Kollhoff Tower's facade needed major repairs due to water penetration and frost damage just seven years after completion, and was under scaffolding for many months. The Daimler complex also contains the former Weinhaus Huth, now restored to its former glory and occupied by a restaurant, café, and Daimler AG's own art gallery ("Daimler Contemporary"). The second largest part went to Sony, who erected their new European headquarters on a triangular site immediately to the north of Daimler-Benz and separated from it by the re-routed Potsdamer Straße. This new Sony Centre, designed by Helmut Jahn, is an eye-catching monolith of glass and steel featuring an enormous tent-like conical roof, its shape reportedly inspired by Mount Fuji in Japan, covering an elliptical central public space up to 102 metres across, and thus differing substantially from Hilmer & Sattler's original plan for the site. Its 26-storey, 103-metre-high "Bahn Tower" is so named because it houses the corporate headquarters of Deutsche Bahn AG, the German state railway system. Surviving parts of the former Hotel Esplanade have been incorporated into the north side of the Sony development, including the Kaisersaal which, in a complex and costly operation in March 1996, was moved in one piece (all 1,300 tonnes of it), some 75 metres from its former location, to the spot that it occupies today (it even had to make two right-angled turns during the journey, while maintaining its own orientation). Nearby is a new Café Josty, opened early in 2001, while between the two is "Josty's Bar," which is housed in the Esplanade's former breakfast room. This, like the Kaisersaal, had to be relocated, but here the room was dismantled into some 500 pieces to be reassembled where it stands now. Topped out on 2 September 1998, the Sony Centre was formally opened on 14 June 2000 (although many of its public attractions had been up and running since 20 January), in another grand ceremony with more music – this time with Sony's Japanese chairman Norio Ohga himself conducting the Berlin Philharmonic Orchestra. A keen lover of classical music, he had helped to choose the site because of its close proximity to the orchestra's home in the Cultural Forum. The third part became the Beisheim Center and adjoining buildings, on another triangular site bordered on the east side by Ebertstraße, financed entirely out of his own pocket by the German businessman Otto Beisheim, the founder of the diversified retail and wholesale/cash and carry group Metro AG, based in Germany but with operations throughout Europe and in many other countries around the world. The fourth part is the Park Kolonnaden, a range of buildings running down the east side of the Potsdamer Bahnhof site, parallelling Daimler-Benz. This complex occupies the site of the former Haus Vaterland, and its principal building, which for a few years was the headquarters of the large German trade union ver.di ("Vereinte Dienstleistungsgewerkschaft", meaning United Services Union), rises to 45 metres and has a curving glass facade designed to evoke the shape of that erstwhile landmark. Other developments, more piecemeal in nature, have recreated the octagonal layout of neighbouring Leipziger Platz immediately to the east. One of these is "Kanada Haus", the new Embassy of Canada, on the platz's north-west diagonal. Its turf-cutting ceremony was carried out on 18 February 2002 by the Canadian Prime Minister, Jean Chrétien, and it was officially opened on 29 April 2005. The whole project was subject to much controversy from the start; not everyone approves of how the district was commercialised and replanned. The decision by the Berlin Senate to divide the land between just four investors – while numerous others had submitted bids – provoked scepticism. The remarkably low price Daimler-Benz paid to secure their plot prompted questions from the Berlin Auditor-General's office and the European Union in Brussels, which resulted in Daimler-Benz being billed an additional sum. There were wrangles over land-usage: although a central feature of the Daimler-Benz development is a top shopping mall – the "Arkaden" (Arcades), this did not form part of the plans until the Berlin Senate belatedly insisted that a shopping mall be included. Despite its undoubted success, this in turn led to what many saw as an "Americanisation" of the area, with even its private security force being kitted out in something resembling New York Police uniforms. Further wrangles effectively brought work on the north side of Leipziger Platz to a complete stop for several years; even now there are some "fake facades" where completed new buildings should be, while a long-running dispute over who owned the Wertheim department store site (or had claims to the revenue from its sale by the government), left another large gap in the central Berlin cityscape that is only now finally being redeveloped. This development, known as Leipziger Platz 12, is a large complex with facades in three streets (Leipziger Straße, Wilhelmstraße and Vossstraße) as well as Leipziger Platz itself, and when completed will contain 270 stores, 270 apartments, a hotel, a fitness centre and offices. However, this development brought about the demise (after several stays of execution), of the legendary Tresor nightclub and centre for techno music. Founded on 8 March 1991 in the basement strongrooms of the former Wertheim store's bank, these having survived the decades largely undamaged, the club finally closed on 16 April 2005 (it later reopened on 24 May 2007 in a renovated power plant on Köpenicker Straße). In spite of the controversy, the rebuilt Potsdamer Platz now attracts around 70,000 visitors a day, rising to 100,000 at weekends, and some critics have been surprised by the success of the new quarter. Fears that the streets would be dead after 6 pm have proven false. At almost any time of the day, the place is alive with people. It is a particularly popular attraction for visitors: the "Arkaden" shopping mall is 180 metres in length and contains 133 shops and restaurants on three levels giving a total sales floor area of approx. 40,000 square metres, the lowest (basement) level being a food floor; there are also four major hotels, and Europe's largest casino (the "Spielbank Berlin"). It was also very popular with film fans, as it had three cinemas with nearly 30 screens, including an IMAX screen, showing many films in their original versions (especially English-language films), plus a film academy and a film museum. Since 1 January 2020, the cinema previously owned by CineStar has been closed (including the IMAX screen). There is also a 1,750-seater theatre, the "Theater am Potsdamer Platz," which doubles up as another cinema (the "Berlinale Palast") for two weeks during the Berlin International Film Festival and serves as the principal venue of the festival. This venue sits above a popular night-spot: the "Adagio Nightlife," located entirely underground. After major refurbishment, the S-Bahn line and station reopened on 1 March 1992, followed by the U-Bahn on 13 November 1993. An additional station on the U-Bahn, called Mendelssohn-Bartholdy-Park, was opened immediately north of the Landwehrkanal on 1 October 1998. A new U-Bahn station has also been built at Potsdamer Platz itself, although a decision is still pending on whether to proceed with completion of the line passing through it; in the meantime the station area serves as an impromptu art gallery and exhibition space. A new underground main-line station or "Regionalbahnhof" (Bahnhof Potsdamer Platz) has also been constructed, opened on 26 July 2006. There are also plans to reintroduce trams to Potsdamer Platz. In addition, many bus routes pass through the platz, while for people with their own cars there are some 5,000 parking spaces, 3,500 of which are underground. The annual Berlin Marathon, which takes place in the last weekend of September, was first held in 1974 but due to the division of the city was confined to West Berlin up till and including 1989. Beginning in 1990 the course was re-routed into part of East Berlin, and in 2001 a further adjustment meant that the course has since run through Potsdamer Platz. Typically the leaders will pass through the platz about ten minutes before they cross the finish line. Another annual tradition that began in West Berlin (in 1952) and was re-routed into the east via Potsdamer Platz following German reunification is the "Weihnachtszug" (Christmas train). It now does a regular two-hour round trip at weekends in the run-up to Christmas for families with children, starting and finishing at the Potsdamer Platz S-Bahn station. It did not run in 2009 or 2010 due to equipment problems, but is expected to be operational again in 2011. On 2 March 2008, a statue by the Berlin artist Alexander Polzin dedicated to Italian philosopher, priest, cosmologist, and occultist Giordano Bruno (1548–1600), was erected inside one of the entrances to the Potsdamer Platz Regionalbahnhof. Whilst on the surface the new Potsdamer Platz appears so far to have lived up to its expectations as a futuristic centre of commerce at the heart of Europe's youngest capital city, there has been much debate as to just how successful it really is. Certainly its long term success and viability have become much harder to judge since the recent worldwide economic downturn, a situation compounded by the actions of its two principal owner-occupiers. Daimler and Sony caused a major surprise on 2 October 2007 when both announced that they were putting their respective complexes at Potsdamer Platz on the market. Whilst neither intended to move out, both felt it preferable to rent the space from new owners rather than continue to be the owners themselves (and so be responsible for the buildings' upkeep and maintenance). Daimler had recently come through a painful separation from their former American subsidiary Chrysler and needed a quick injection of cash in order to refocus on automotive production. The announcement came on the ninth anniversary of their complex's official opening, a fact not lost on many people. Sony meanwhile, put their decision down to a need to review their global strategy in the face of a fast-changing worldwide economic climate. The implications for Potsdamer Platz were ominous, with suggestions that overall confidence in the project was faltering, and more pessimistic claims that the development had largely failed in its original intentions. On 17 December 2007, Daimler announced that they were selling their entire complex of 19 buildings at Potsdamer Platz to SEB Asset Management, a Frankfurt-based subsidiary of the Swedish banking group SEB. On 28 February 2008, Sony made a similar announcement, of impending sale to a consortium led by American investment banking giant (now bank holding company) Morgan Stanley. Both deals were finalised by the end of March 2008. Whilst the amounts involved have not been publicly disclosed, it is believed that neither Daimler nor Sony recouped all of their original investments (what Daimler managed to get was reportedly well short). The long-term benefits (or otherwise) of these sales, remain to be seen, but whilst they may have baffled many people at the time, they may turn out to have been a shrewd move, as Daimler and Sony have avoided being saddled with something they might have found much harder to sell at a later date, just when they needed the cash the most. It is unarguable that the development is a considerable commercial success at street level. The numbers of shoppers visiting the Arkaden, guests passing through the doors of the many bars, cafes and restaurants, theatres and cinemas, hotels and casino (not to mention passengers thronging the platforms of the stations), all point to a thriving focal point right at the very heart of Berlin. Detractors however, may draw attention to the floors above and point out the high percentage of office and residential space that allegedly still stands empty more than a decade after its completion. Although examples of "over-provision" like this can be found all over Berlin, it is Potsdamer Platz that, rightly or wrongly, has been used to highlight the problem. The other major sticking point, which is reportedly causing concern at government level, is that the majority of people going to Potsdamer Platz are visitors to the city, implying that the original vision of the development as a linking element attracting Berliners themselves, and Berliners from both sides of the former divide, has not really materialised. There are criticisms that the development does not sit easily with or connect with its surroundings, and as a result Berliners have had difficulty accepting it as "theirs" (despite the fact that the choice of Hilmer & Sattler's masterplan was partly because it was the only one to address the way the development juxtaposed with the Cultural Forum immediately to the west, although the Cultural Forum has itself faced similar criticisms of its own). Another, more psychological factor that has played a part here is that a long-standing mutual distrust or antipathy felt between former East Berliners and West Berliners ("Ossis" and "Wessis" according to the well-known slang terms), is still very much in evidence in the city and elsewhere in Germany, and bold civil engineering projects and architectural statements are not going to make it go away by themselves. Politicians past and present have been accused of short-sightedness in speculating that they would. It was feared that the economic downturn might exacerbate all these problems. On the whole, however, Potsdamer Platz seems to have weathered the storm. Meanwhile, Deutsche Bahn AG were due to relocate to a purpose-built new structure at Berlin's new main train station (Berlin Hauptbahnhof), when the lease on the Sony Center's Bahn Tower expired in 2010. However, in April 2008 Deutsche Bahn announced that they were seeking to extend the lease on the Bahn Tower by another three years. This deal was finalised in late 2009. Since then the lease has been extended to 15 years. Before World War II, Potsdamer Platz had much streetcar traffic. The last remnants were removed in 1991. Unlike, for example, Friedrichstraße station, Potsdamer Platz is not a really important intersection point for the U and S-Bahn system. However, due to its location on the north-south route to the main station, parallel to the aboveground buildings, it was also connected to the regional traffic with a tunnel station. Regional trains of the DB and the ODEG, the S-Bahn (north-south tunnel) and the U2 underground line currently stop at Potsdamer Platz regional train station. Via numerous bus lines, the course can also be reached. In the medium term, a tram connection through the Leipziger Straße is planned, which can be supplemented or even replaced by the long-planned U3 underground line. In the north-south direction, another S-Bahn line (planning name: S21), in particular for better public transport development of the main station, to be built in the long term. Essentially, four major roads, in the east-west direction, Potsdamer Straße and Leipziger Straße, and in the north-south direction, Ebertstraße and Stresemannstraße, lead the motorized individual traffic to Potsdamer Platz. Smaller streets within the individual quarters provide for the connection of the underground parking garages. In addition, in 2006, a connection between the Uferstraße on the Landwehrkanal and the main tunnel was put into operation, the Tunnel Tiergarten Spreebogen is part of the Bundesstraße 96.
https://en.wikipedia.org/wiki?curid=24597
Pointing device A pointing device is an input interface (specifically a human interface device) that allows a user to input spatial (i.e., continuous and multi-dimensional) data to a computer. CAD systems and graphical user interfaces (GUI) allow the user to control and provide data to the computer using physical gestures by moving a hand-held mouse or similar device across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer (or cursor) and other visual changes. Common gestures are point and click and drag and drop. While the most common pointing device by far is the mouse, many more devices have been developed. However, the term "mouse" is commonly used as a metaphor for devices that move the cursor. For most pointing devices, Fitts's law can be used to predict the speed with which users can point at a higher speed. To classify several pointing devices, a certain number of features can be considered. For example, the device's movement, controlling, positioning or resistance. The following points should provide an overview of the different classifications. In case of a direct-input pointing device, the on-screen pointer is at the same physical position as the pointing device (e.g., finger on a touch screen, stylus on a tablet computer). An indirect-input pointing device is not at the same physical position as the pointer but translates its movement onto the screen (e.g., computer mouse, joystick, stylus on a graphics tablet). An absolute-movement input device (e.g., stylus, finger on touch screen) provides a consistent mapping between a point in the input space (location/state of the input device) and a point in the output space (position of pointer on screen). A relative-movement input device (e.g., mouse, joystick) maps displacement in the input space to displacement in the output state. It therefore controls the relative position of the cursor compared to its initial position. An isotonic pointing device is movable and measures its displacement (mouse, pen, human arm) whereas an isometric device is fixed and measures the force which acts on it (trackpoint, force-sensing touch screen). An elastic device increases its force resistance with displacement (joystick). A position-control input device (e.g., mouse, finger on touch screen) directly changes the absolute or relative position of the on-screen pointer. A rate-control input device (e.g., trackpoint, joystick) changes the speed and direction of the movement of the on-screen pointer. Another classification is the differentiation between whether the device is physically translated or rotated. Different pointing devices have different degrees of freedom (DOF). A computer mouse has two degrees of freedom, namely its movement on the x- and y-axis. However the Wiimote has 6 degrees of freedom: x-, y- and z-axis for movement as well as for rotation. As mentioned later in this article, pointing devices have different possible states. Examples for these states are "out of range, tracking or dragging". Examples The following table shows a classification of pointing devices by their number of dimensions (columns) and which property is sensed (rows) introduced by Bill Buxton. The sub-rows distinguish between mechanical intermediary (i.e. stylus) (M) and touch-sensitive (T). It is rooted in the human motor/sensory system. Continuous manual input devices are categorized. Sub-columns distinguish devices that use comparable motor control for their operation. The table is based on the original graphic of Bill Buxton's work on "Taxonomies of Input". This model describes different states that a pointing device can assume. The three common states as described by Buxton are "out of range, tracking and dragging". Not every pointing device can switch to all states. Fitts's law (often cited as Fitts' law) is a predictive model of human movement primarily used in human–computer interaction and ergonomics. This scientific law predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target. Fitts's law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device. In other words, this means for example, that the user needs more time to click on a small button which is distant to the cursor, than he needs to click a large button near the cursor. Thereby it is generally possible to predict the speed which is needed for a selective movement to a certain target. The common metric to calculate the average time to complete the movement is the following: where: This results in the interpretation that, as mentioned before, large and close targets can be reached faster than little, distant targets. As mentioned above, the size and distance of an object influence its selection. Additionally this effects the user experience. Therefore, it is important, that Fitts' Law is considered while designing user interfaces. Below some basic principles are mentioned. The Control-Display Gain (or CD gain) describes the proportion between movements in the control space to the movements in the display space. For example, a hardware mouse moves in another speed or distance than the cursor on the screen. Even if these movements take place in two different spaces, the units for measurement have to be the same in order to be meaningful (e.g. meters instead of pixels). The CD gain refers to the scale factor of these two movements: The CD gain settings can be adjusted in most cases. However, a compromise has to be found: with high gains it is easier to approach a distant target, with low gains this takes longer. High gains hinder the selection of targets, whereas low gains facilitate this process. The operating systems Microsoft Windows, Apple OS X and Xorg have implemented mechanisms in order to adapt the CD gain to the user's needs, e.g. the CD gain increases when the user's movement velocity increases. A mouse is a small handheld device pushed over a horizontal surface. A mouse moves the graphical pointer by being slid across a smooth surface. The conventional roller-ball mouse uses a ball to create this action: the ball is in contact with two small shafts that are set at right angles to each other. As the ball moves these shafts rotate, and the rotation is measured by sensors within the mouse. The distance and direction information from the sensors is then transmitted to the computer, and the computer moves the graphical pointer on the screen by following the movements of the mouse. Another common mouse is the optical mouse. This device is very similar to the conventional mouse but uses visible or infrared light instead of a roller-ball to detect the changes in position. Additionally there is the mini-mouse, which is a small egg-sized mouse for use with laptop computers; usually small enough for use on a free area of the laptop body itself, it is typically optical, includes a retractable cord and uses a USB port to save battery life. A trackball is a pointing device consisting of a ball housed in a socket containing sensors to detect rotation of the ball about two axis, similar to an upside-down mouse: as the user rolls the ball with a thumb, fingers, or palm the pointer on the screen will also move. Tracker balls are commonly used on CAD workstations for ease of use, where there may be no desk space on which to use a mouse. Some are able to clip onto the side of the keyboard and have buttons with the same functionality as mouse buttons. There are also wireless trackballs which offer a wider range of ergonomic positions to the user. Isotonic joysticks are handle sticks where the user can freely change the position of the stick, with more or less constant force. Isometric joysticks are where the user controls the stick by varying the amount of force they push with, and the position of the stick remains more or less constant. Isometric joysticks are often cited as more difficult to use due to the lack of tactile feedback provided by an actual moving joystick. A pointing stick is a pressure-sensitive small nub used like a joystick. It is usually found on laptops embedded between the "G", "H", and "B" keys. It operates by sensing the force applied by the user. The corresponding "mouse" buttons are commonly placed just below the space bar. It is also found on mice and some desktop keyboards. The Wii Remote, also known colloquially as the Wiimote, is the primary controller for Nintendo's Wii console. A main feature of the Wii Remote is its motion sensing capability, which allows the user to interact with and manipulate items on screen via gesture recognition and pointing through the use of accelerometer and optical sensor technology. A finger tracking device tracks fingers in the 3D space or close to the surface without contact with a screen. Fingers are triangulated by technologies like stereo camera, time-of-flight and laser. Good examples of finger tracking pointing devices are LM3LABS' Ubiq'window and AirStrike A graphics tablet or digitizing tablet is a special tablet similar to a touchpad, but controlled with a pen or stylus that is held and used like a normal pen or pencil. The thumb usually controls the clicking via a two-way button on the top of the pen, or by tapping on the tablet's surface. A cursor (also called a puck) is similar to a mouse, except that it has a window with cross hairs for pinpoint placement, and it can have as many as 16 buttons. A pen (also called a stylus) looks like a simple ballpoint pen but uses an electronic head instead of ink. The tablet contains electronics that enable it to detect movement of the cursor or pen and translate the movements into digital signals that it sends to the computer." This is different from a mouse because each point on the tablet represents a point on the screen. A stylus is a small pen-shaped instrument that is used to input commands to a computer screen, mobile device or graphics tablet. The stylus is the primary input device for personal digital assistants and smartphones that require accurate input, although devices featuring multi-touch finger-input with capacitive touchscreens are becoming more popular than stylus-driven devices in the smartphone market. A touchpad or trackpad is a flat surface that can detect finger contact. It is a stationary pointing device, commonly used on laptop computers. At least one physical button normally comes with the touchpad, but the user can also generate a mouse click by tapping on the pad. Advanced features include pressure sensitivity and special gestures such as scrolling by moving one's finger along an edge. It uses a two-layer grid of electrodes to measure finger movement: one layer has vertical electrode strips that handle vertical movement, and the other layer has horizontal electrode strips to handle horizontal movements. A touchscreen is a device embedded into the screen of the TV monitor, or system LCD monitor screens of laptop computers. Users interact with the device by physically pressing items shown on the screen, either with their fingers or some helping tool. Several technologies can be used to detect touch. Resistive and capacitive touchscreens have conductive materials embedded in the glass and detect the position of the touch by measuring changes in electric current. Infrared controllers project a grid of infrared beams inserted into the frame surrounding the monitor screen itself, and detect where an object intercepts the beams. Modern touchscreens could be used in conjunction with stylus pointing devices, while those powered by infrared do not require physical touch, but just recognize the movement of hand and fingers in some minimum range distance from the real screen. Touchscreens are becoming popular with the introduction of palmtop computers like those sold by the Palm, Inc. hardware manufacturer, some high range classes of laptop computers, mobile smartphone like HTC or the Apple Inc. iPhone, and the availability of standard touchscreen device drivers into the Symbian, Palm OS, Mac OS X, and Microsoft Windows operating systems. In contrast to a 3D Joystick, the stick itself doesn't move or just moves very little and is mounted in the device chassis. To move the pointer, the user has to apply force to the stick. Typical representatives can be found on notebook's keyboards between the "G" and "H" keys. By performing pressure on the TrackPoint, the cursor moves on the display.
https://en.wikipedia.org/wiki?curid=24598
Pete Best Randolph Peter Best (né Scanland; 24 November 1941) is an English musician known as having been the Beatles' drummer prior to their achieving worldwide fame. After he was dismissed from the group in 1962, he started his own band, the Pete Best Four, and later joined many other bands over the years. He is one of several people who have been referred to as the Fifth Beatle. Best was born in the city of Madras, then part of British India. After Best's mother, Mona Best (1924–1988), moved to Liverpool in 1945, she opened the Casbah Coffee Club in the cellar of the Bests' house in Liverpool. The Beatles (at the time known as the Quarrymen) played some of their first concerts at the club. The Beatles invited Best to join on 12 August 1960, on the eve of the group's first Hamburg season of club dates. Ringo Starr eventually replaced Best on 16 August 1962, when the group's manager, Brian Epstein, fired Best at the request of John Lennon, Paul McCartney and George Harrison following the band's first recording session at Abbey Road Studios in London. Over 30 years later, Best received a major monetary payout for his work with the Beatles after the release of their 1995 compilation of their early recordings on "Anthology 1", with Best on drums on a number of the album's tracks, including the Decca auditions. After working in a number of commercially unsuccessful groups, Best gave up the music industry to work as a civil servant for 20 years, before starting the Pete Best Band. Best's mother, Mona Best (born Alice Mona Shaw), was born in Delhi, India, and was the daughter of Thomas (an Irish major) and Mary Shaw. Randolph Peter Scanland (later surnamed Best), her first child, was born in Madras (now Chennai), Madras Presidency, British India, on 24 November 1941. Best's biological father was marine engineer Donald Peter Scanland, who subsequently died during World War II. Best's mother was training to become a doctor in the service of the Red Cross when she met Johnny Best, who came from a family of sports promoters in Liverpool who ran Liverpool Stadium. During World War II, Johnny Best was a commissioned officer serving as a Physical Training Instructor in India, and was the Army's middleweight boxing champion. After their marriage on 7 March 1944 at St Thomas's Cathedral, Bombay, Rory Best was born. In 1945, the Best family sailed for four weeks to Liverpool on the "Georgic", the last troop ship to leave India, carrying single and married soldiers who had previously been a part of General Sir William Slim's forces in south-east Asia. The ship docked in Liverpool on 25 December 1945. Best's family lived for a short time at the family home, "Ellerslie" in West Derby, until Best's mother fell out with her sister-in-law, Edna, who resented her brother's choice of wife. The family then moved to a small flat on Cases Street, Liverpool, but Mona Best was always looking for a large house—as she had been used to in India—instead of one of the smaller semi-detached houses prevalent in the area. After moving to 17 Queenscourt Road in 1948 where the Bests lived for nine years, Rory Best saw a large Victorian house for sale at 8 Hayman's Green in 1957 and told Mona about it. The Best family claim that Mona then pawned all her jewellery to place a bet on Never Say Die, a horse that was ridden by Lester Piggott in the 1954 Epsom Derby; it won at 33–1 and she used her winnings to buy the house in 1957. The house had previously been owned by the West Derby Conservative Club and was unlike many other family houses in Liverpool as the house (built around 1860) was set back from the road and had 15 bedrooms and an acre of land. All the rooms were painted dark green or brown and the garden was totally overgrown. Mona later opened The Casbah Coffee Club in its large cellar. The idea for the club first came from Best, as he asked his mother for somewhere his friends could meet and listen to the popular music of the day. As The Quarrymen, John Lennon, Paul McCartney, George Harrison, and Ken Brown played at the club after helping Mona to finish painting the walls. Best passed the eleven plus exam at Blackmoor Park primary school in West Derby, and was studying at the Liverpool Collegiate Grammar School in Shaw Street when he decided he wanted to be in a music group. Mona bought him a drum kit from Blackler's music store and Best formed his own band, the Black Jacks. Chas Newby and Bill Barlow joined the group, as did Ken Brown, but only after he had left the Quarrymen. The Black Jacks later became the resident group at the Casbah, after the Quarrymen cancelled their residency because of an argument about money. During 1960, Neil Aspinall became good friends with the young Best and subsequently rented a room in the Bests' house. During one of the extended business trips of Best's stepfather, Aspinall became romantically involved with Mona. Aspinall fathered a child by Mona—Vincent "Roag" Best, Mona's third son—who is Best's half-brother. Aspinall later became the Beatles' road manager, and denied the story for years before publicly admitting that Roag was indeed his son. In 1960, Allan Williams, the Beatles' manager, arranged a season of bookings in Hamburg, starting on 17 August 1960, but complained that they did not impress him, and hoped that he could find a better act. Having no permanent drummer, Paul McCartney looked for someone to fill the Hamburg position. Best had been seen playing in the Casbah with his own group, the Black Jacks, and it was noted that he was a steady drummer, playing the bass drum on all four beats in the bar, which pushed the rhythm. In Liverpool, his female fans knew him as being "mean, moody, and magnificent", which convinced McCartney he would be good for the group. After the Black Jacks broke up, McCartney persuaded Best to go to Hamburg with the group, by saying they would each earn £15 per week (equivalent to £ in 2020). As Best had passed his school exams (unlike Lennon, McCartney and Harrison, who had failed most of theirs), he had the chance to attend teacher-training college, but he decided that playing in Hamburg would be a better career move. Best had an audition in the Jacaranda Club, which Williams owned, and travelled to Hamburg the next day. Williams later said that the audition with Best was unnecessary, as the group had not found any other drummer willing to travel to Hamburg, but did not tell Best in case he asked for more money. The Beatles first played a full show with Best on 17 August 1960 at the Indra Club in Hamburg, and the group slept in the Bambi Kino cinema in a small, dirty room with bunk beds, a cold and noisy former storeroom directly behind the screen. Upon first seeing the Indra, where they were booked to play, Best remembered it as a depressing place patronised by a few tourists, and having heavy, old, red curtains that made it seem shabby compared to the larger Kaiserkeller. As Best had been the only group member to study O-Level German at school, he could converse with the club's owner, Bruno Koschmider, and the clientele. After the Indra closed following complaints about the noise, the group started a residency in the Kaiserkeller. In October 1960, the group left Koschmider's club to work at the Top Ten Club, which Peter Eckhorn ran, as he offered the group more money and a slightly better place to sleep. In doing so they broke their contract with Koschmider. When Best and McCartney went back to the Bambi Kino to retrieve their belongings they found it in almost total darkness. As a snub to Koschmider, McCartney found a condom, attached it to a nail on the concrete wall of the room, and set it alight. There was no real damage done, but Koschmider reported them both for attempted arson. Best and McCartney spent three hours in a local prison and were subsequently deported, as was George Harrison, for working under the legal age limit, on 30 November 1960. Back in Liverpool, the group members had no contact with each other for two weeks, but Best and his mother made numerous phone calls to Hamburg to recover the group's equipment. Mona arranged all the bookings for the group in Liverpool, after parting company with Williams in late 1961. Chas Newby, the ex-Black Jacks guitarist, was invited to play bass for four concerts, as bassist Stuart Sutcliffe had decided to stay in Hamburg. Newby played with the group at Litherland Town Hall and at the Casbah. He was shocked at the vast improvement in their playing and singing, and remembered Best's drumming to be very powerful, which pushed the group to play harder and louder. It was probably thanks to McCartney that Best developed a loud drumming style, as he often told Best in Hamburg to "crank it up" (play as loud as possible). When the group returned to Hamburg, by which time McCartney had switched to bass, Best was asked to sing a speciality number written by McCartney, "Pinwheel Twist", while McCartney played drums, but he always felt uncomfortable being at the front of the stage. The reunited Beatles returned to Hamburg in April 1961. While they played at the Top Ten Club, singer Tony Sheridan recruited them to act as his backing band on a recording for the German Polydor label, produced by bandleader Bert Kaempfert, who signed the group to a Polydor contract at the first session on 22 June 1961. On 31 October 1961, Polydor released the recording "My Bonnie" (Mein Herz ist bei dir nur/My heart is only for you) which appeared on the German charts under the name "Tony Sheridan and the Beat Brothers"—a generic name used for whoever happened to be in Sheridan's backup band. The song was later released in the UK. There was a second recording session on 23 June that year, and a third in May 1962. Brian Epstein, who had been unofficially managing the Beatles for less than a month, arranged a recording audition at Decca Records in London on New Year's Day, 1962. The group recorded 15 songs, mostly cover versions with three Lennon–McCartney songs. John Lennon later admitted they were "terrified and nervous." A month later, Decca informed Epstein the group had been rejected. The band members were informed of the rejection except for Best. Epstein officially became the manager of the Beatles on 24 January 1962 with the contract signed in Pete's house. Epstein negotiated ownership of the Decca audition tape, which was then transferred to an acetate disc, to promote the band to other record companies in London. In the meantime, Epstein negotiated the release of the Beatles from their recording contract with Bert Kaempfert and Polydor Records in Germany, which expired on 22 June 1962. As a part of this contract, the Beatles recorded at Polydor's Studio Rahlstedt on 24 May 1962 in Hamburg as a sessions band, backing Tony Sheridan. Less than two weeks later the Beatles were recording again at Abbey Road studios in London for EMI. The record producer at EMI, George Martin, met with Epstein on 9 May 1962 at the Abbey Road studios, and was impressed with his enthusiasm. He agreed to sign the Beatles on a recording contract, based on listening to the Decca audition tape, without having met them or having seen them perform live. Soon after the recording contract was signed, the Beatles performed a "commercial test" (i.e. an evaluation of a signed artist) on 6 June 1962 in Studio Two at the Abbey Road studios. Assistant producer Ron Richards and his engineer Norman Smith (later to have hits himself, as Hurricane Smith) recorded four songs: "Bésame Mucho", "P.S. I Love You", "Ask Me Why" and "Love Me Do". The last three songs were the Beatles' own compositions, which was very unusual for bands new to recording. Martin was in the building but not in the studio. Martin was called into the studio by Norman Smith when he heard the band play "Love Me Do". At the end of the session Martin asked the individual Beatles if there was anything they personally did not like, to which George Harrison replied, "Well, there's your tie, for a start." That was the turning point, according to Smith, as Lennon, McCartney, and Best joined in with jokes and comic wordplay. The Beatles were not new to studio recording, and Best's drumming had been found acceptable by Polydor in Hamburg, but Richards had alerted Martin to Best's unsuitability for British studio work. EMI engineer Norman Smith stated in a 2006 video interview that "it was mainly down to what he was playing and not how he was playing," when "Love Me Do" was first recorded, referring to the head arrangement . Martin however found Best's timing inadequate and wanted to replace Best with an experienced studio session drummer for the recordings, a common practice at the time. Martin stated years later: When Lennon, McCartney, and Harrison learned that Martin and the engineers preferred replacing Best with a session drummer for their upcoming recording session on 4 September 1962, they considered using it as a pretext to permanently dismiss Best from the group. Eventually, after a very long delay, they asked Epstein to dismiss Best from the band. Epstein agonised over the decision. As he wrote in his autobiography, "A Cellarful of Noise", he "wasn't sure" about Martin's assessment of Best's drumming and "was not anxious to change the membership of the Beatles at a time when they were developing as personalities … I asked the Beatles to leave the group as it was". Epstein also asked Liverpool DJ Bob Wooler, who knew the Beatles intimately, for advice, to which Wooler replied that it was not a good idea, as Best was very popular with the fans. Part of the dilemma for Epstein that arose at that time, (when the band had not yet achieved national success, but rather local status as a good band with limited income), was that Best was an asset at gigs, particularly with the girl fans, and who put on a good show, ensuring venues would have a solid audience. The counter-argument, however, was the larger consideration of the band's having the best music producible for record sales. John, Paul and George ultimately decided that record production was more important than having a drummer for live stage performances who was more image than substance. In the meantime, Epstein withheld telling Best that EMI had made a recording contract with the band (verbally since June and in writing at the end of July 1962), which meant that a new drummer was now inevitable. There may have been legal issues had Pete known. Epstein ultimately decided that "If the group was to remain happy, Pete Best must go." Epstein summoned Best to his office and dismissed him on Thursday, 16 August, ten weeks and one day after the first recording session. Epstein asked Best to continue to play with the band until Ringo joined on Saturday 18 August. Best played his last two gigs with the Beatles on 15 August at the Cavern Club, Liverpool. He was due to play his last show on 16 August at the Riverpark Ballroom, Chester, but never turned up; Johnny Hutchinson of the Big Three was rushed in as a substitute. "Mersey Beat" magazine's editor, Bill Harry, claimed that Epstein initially offered the vacant drummer position in the group to Hutchinson, whom he also managed. Hutchinson refused the job, saying, "Pete Best is a very good friend of mine. I couldn't do the dirty on him." Best had been good friends with Neil Aspinall since 1961, when Aspinall had rented a room in the house where Best lived with his parents. While still part of the group, Best had asked Aspinall to become the band's road manager and personal assistant; accepting the job, Aspinall bought an old Commer van for £80 (equivalent to £ in 2020). Aspinall was waiting for Best downstairs in Epstein's NEMS record shop after the dismissal meeting. The two went to the "Grapes" pub on Mathew Street, the same street as the Cavern Club, where the group had played. Aspinall was furious at the news, insisting to Best that he would also resign from the Beatles. Best strongly advised him to remain with the group. Aspinall's relationship with Mona Best (and their three-week-old baby, Roag) was ended. At the next concert Aspinall asked Lennon why they had fired Best, to which he replied "It's got nothing to do with you, you're only the driver." Ringo Starr had previously played with Rory Storm and the Hurricanes – the alternating band at the Kaiserkeller – and had been deputised whenever Best was ill or unable to play in Hamburg and Liverpool. Harry reported Best's dismissal on the front page of "Mersey Beat" magazine, upsetting many Beatles fans. The group encountered some jeering and heckling in the street and on stage for weeks afterwards, with some fans shouting, "Pete forever, Ringo never!" One agitated fan headbutted Harrison in The Cavern, giving him a black eye. As Best's replacement, Starr accompanied the band on their second recording session with EMI at Abbey Road studios on 4 September 1962. George Martin initially refused to let Starr play, in that he was unfamiliar with Starr, and wanted to avoid any risk of his drumming not being up to par. On 11 September 1962, at the third EMI recording session, Martin used session musician Andy White on the drums for the whole session instead of Starr, as Martin had already booked White after the first session with Best. Starr played tambourine on some songs, while White played drums. Starr told biographer Hunter Davies years later that he had thought, "That's the end. They're pulling a Pete Best on me." Best was never directly told by anybody involved in his sacking exactly why he was dismissed; the only reason Epstein stated to him was, 'The lads don't want you in the group anymore.' Lennon, McCartney, and Harrison all later stated that they regretted the manner in which Best was sacked. Lennon admitted that 'we were cowards when we sacked him. We made Brian do it.' McCartney stated: 'I do feel sorry for him, because of what he could have been on to.' Harrison said: 'We weren't very good at telling Pete he had to go', and 'historically, it may look like we did something nasty to Pete and it may have been that we could have handled it better.' Starr, on the other hand, feels he has no apology to make: 'I never felt sorry … I was not involved.' In 1968, authorised Beatles biographer Hunter Davies wrote: 'But for the sake of Pete's career, whatever happened to the Beatles afterward, the handling and especially the announcing of the sacking might have been done more neatly and cleanly. He could have been fixed up with a job in another group before the news was announced.' Over twenty years later Mark Lewisohn concluded that Epstein claimed in his autobiography that Lennon, McCartney and Harrison thought that Best was 'too conventional to be a Beatle' and added that 'though he was friendly with John, he was not liked by George and Paul'. It has been documented, notably in Cynthia Lennon's book "John", that while Lennon, McCartney and Harrison usually spent their offstage time together in Hamburg and Liverpool, writing songs or socialising, Best generally went off alone. This left Best on the outside, as he was not privy to many of the group's experiences, references and in-jokes. A German photographer, Astrid Kirchherr, asked if they would not mind letting her take photographs of them in a photo session, which impressed them, as other groups only had snapshots taken by friends. The next morning, Kirchherr took photographs on the Heiligengeistfeld, a municipal event area close to the Reeperbahn. In the afternoon, Kirchherr took them to her mother's house in Altona - minus Best, who decided not to attend. Dot Rhone, McCartney's then-girlfriend who later visited Hamburg, described Best as being very quiet and never taking part in conversations with the group. On their first trip to Hamburg, the group soon realised that the stage suits they wore could not stand up to the hours of sweating and jumping about on stage every night, so they all bought leather jackets, jeans and cowboy boots, which were much tougher. Best initially preferred to play in cooler short sleeves on stage, and so did not match the sartorial style of the group, even though he was later photographed wearing a leather jacket and jeans. Lennon, McCartney, Harrison and Sutcliffe were introduced to recreational drugs in Hamburg. As they played for hours every night, they often took Preludin to keep themselves awake, which they received from German customers or Astrid Kirchherr, whose mother bought them. Lennon often took four or five, but Best always refused. It has been claimed that Epstein became exasperated with Best's refusal to adopt the mop-top-style Beatle haircut as part of their unified look, as he preferred to keep his quiffed hairstyle, though Best later stated that he was never asked to change his hairstyle. In a 1995 BBC Radio Merseyside interview, Kirchherr explained: 'My boyfriend, Klaus Voormann, had this hairstyle, and Stuart [Sutcliffe] liked it very, very much. He was the first one who really got the nerve to get the Brylcreem out of his hair, and asking me to cut his hair for him. Pete Best has really curly hair, and it wouldn't work.' Explaining why Geoff Britton, one-time drummer in his subsequent band Wings, 'didn't last long' in that group, McCartney said: 'It's like in the Beatles, we had Pete Best. He was a really good drummer, but there just was something, he wasn't quite like the rest of us, we had like a sense of humour in common and he was nearly in with it all, but it's a fine line, you know, as to what is exactly in and what is nearly in. So he left the band and we were looking for someone who would fit.' He told Mark Lewisohn, similarly, that when George Martin suggested 'changing' their drummer, the Beatles responded: 'Well, we're quite happy with him, he works great in the clubs', but also that 'Pete had never quite been like the rest of us. We were the wacky trio and Pete was perhaps a little more sensible; he was slightly different from us, he wasn't quite as artsy as we were.' Lennon said that Best was recruited only because they needed a drummer to go to Hamburg. 'We were always going to dump him when we found a decent drummer.' McCartney stated that Best was 'good, but a bit limited'. Harrison said that 'Pete kept being sick and not showing up for gigs' and admitted, 'I was quite responsible for stirring things up. I conspired to get Ringo in for good; I talked to Paul and John until they came round to the idea.' For his part, Starr said: 'I felt I was a much better drummer than he [Best] was.' Martin claimed to be surprised to learn that Best had been fired, hearing the news from Mona via telephone. He denied that he had ever suggested sacking him, saying: McCartney remembered: Musically, Best has been judged to have had a limited rhythmic vocabulary that was seen as holding the other three band members back from their collective musical growth. Martin (see above) deemed Best's drumming to be inadequate for a recording. As stated in a 2005 biography written by Bob Spitz: 'all Pete could do was play Fours', a style of drumming that uses kick drum notes on every quarter note to hold down the beat. Spitz's book also contains an account by engineer Ron Richards of his failed attempts to teach Best somewhat more complicated beats for different songs. Critic Richie Unterberger described Best's drumming at the Decca session as 'thinly textured and rather unimaginative', adding that Best 'pushes the beat a little too fast for comfort'. Unterberger thought Starr to be 'more talented'. Beatles' critic Alan W. Pollack compared the Best, Starr, and White versions of "Love Me Do", and concluded that Best was 'an incredibly unsteady and tasteless drummer' on his version. Beatles' historian Ian MacDonald, recounting the Decca audition, said that 'Best's limitations as a drummer are nakedly apparent'. MacDonald notes, of the EMI recording session on 6 June that '...this audition version [of "Love Me Do"] shows one of the reasons why Best was sacked: in moving to the ride cymbal for the first middle eight, he slows down and the group falters.' Before Epstein took the Beatles on, Mona had been handling most of the management and promotional work. According to promoter and manager Joe Flannery, Mona had done a great deal for the band by arranging a number of important early gigs and lending them a badly needed helping hand when they returned from Hamburg the first time, but this came at the cost of having to contend with her overbearing nature. At this crucial time in the history of the Beatles, Lennon confided to Flannery that he considered Mona 'bossy like [his aunt] Mimi' and believed that she was using the Beatles only for the sake of her son Pete, though this should be weighed against the fact that the Beatles' cordial relations with Mona soon resumed. She often met them while visiting Neil Aspinall at his London home. On these occasions, the Beatles often had small gifts for her which they had acquired on their travels. For her part, Mona allowed them to use her father's military medals in the photo shoot for the "Sgt. Pepper" album cover. Although Epstein's publicly stated reluctance to fire Best quickly became a matter of record in the early biographies, he had found Mona to be the cause of mounting aggravation. Epstein's distaste for her interference in the Beatles' management, including her 'aggressive opinions about his handling of her son's career', was obvious to everyone, and he also reportedly considered Mona a loose cannon who must not be allowed to interfere in his operations. Moreover, the very recent birth of her son Roag further complicated matters. Although Best himself was not personally responsible for this development, it would still have caused a grave scandal had it become generally known, and Epstein may have been horrified at the prospect. Best's popularity with fans was reportedly a source of friction, as many female fans considered him to be the band's best-looking member. Radio Merseyside presenter Spencer Leigh wrote a book chronicling Best's firing, suggesting that the other members, McCartney in particular, were jealous. In an issue of Bill Harry's "Mersey Beat" music publication in Liverpool, dated 31 August 1961, Bob Wooler reported on the Beatles' local musical impact and singled out Best for special praise, calling the group 'musically authoritative and physically magnetic, example the mean, moody magnificence of drummer Pete Best – a sort of teenage Jeff Chandler'. During the "Teenagers' Turn" showcase in Manchester, Lennon, McCartney and Harrison walked on stage to applause, but when Best walked on, the girls screamed. Afterwards, attentive females surrounded Best at the stage door, while the other members were ignored after signing a few autographs. McCartney's father, Jim McCartney, was present at the time and admonished Best: 'Why did you have to attract all the attention? Why didn't you call the other lads back? I think that was very selfish of you.' Lennon called the accusations of jealousy a 'myth'. In 1963, on British television, Mona, with Pete present, said of his dismissal: 'From the point of clash of personalities, well, probably that may be it because Peter did have a terrific fan club, you know, compared to the others. [Interviewer: Too good looking perhaps?] I'll leave that for other people to say but from my point of view we haven't come here to sort of throw sticks and stones at the boys because there is no really hard feeling. There was at first, but it's just the way that it was done that has annoyed us. If it had been done a bit more straightforward it would have been more to the mark.' Davies agreed: 'There is some justification for a little of Mrs. Best's anger. The sacking of Pete Best is one of the few murky incidents in the Beatles' history. There was something sneaky about the way it was done.' Soon after Best was dismissed, Epstein attempted to console him by offering to build another group around him, but Best refused. Feeling let down and depressed, he sat at home for two weeks, not wanting to face anybody or answer the inevitable questions about why he had been sacked. Epstein secretly arranged with his booking agent partner, Joe Flannery, for Best to join Lee Curtis and the All-Stars, which then broke off from Curtis to become Pete Best & the All Stars. They signed to Decca Records, releasing the single "I'm Gonna Knock On Your Door", which was not successful. Best later moved to the United States along with songwriters Wayne Bickerton and Tony Waddington. As the Pete Best Four, and later as the Pete Best Combo (a quintet), they toured the United States with a combination of 1950s songs and original tunes, recording for small labels, but they had little success. They ultimately released an album on Savage Records, "Best of the Beatles"; a play on Best's name, leading to disappointment for record buyers who neglected to read the song titles on the front cover and expected a Beatles compilation. The group disbanded shortly afterwards. Bickerton and Waddington were to find greater success as songwriters in the 1960s and 1970s, writing a series of hits for the American female group the Flirtations and the British group the Rubettes. In 2000, the record label Cherry Red reissued the Pete Best Combo's recordings as a compact disc compilation. Richie Unterberger, reviewing the CD, stated that the music's "energy level is reasonably high," that Bickerton and Waddington's songwriting is "kind of catchy," and that Best's drumming is "ordinary." Best decided to leave show business, and by the time of Hunter Davies' authorised Beatles biography in 1968, he was not willing to talk about his Beatles association. Years later he stated in his autobiography, "the Beatles themselves certainly never held out a helping hand and only contributed to the destruction with their readily printed gossip that I had never really been a Beatle, that I didn't smile, that I was unsociable and definitely not a good mixer. There was not a single friendly word from any one of them". This culminated in a Beatles' interview published in "Playboy" magazine in February 1965 in which Lennon stated that "Ringo used to fill in sometimes if our drummer was ill. With his periodic illness." Starr added: "He took little pills to make him ill." Best sued the Beatles for defamation of character, eventually winning an out-of-court settlement for much less than the $18 million he had sought. Davies recalled that while working with the Beatles on their authorised biography in 1968, "when the subject of Pete Best came up they seemed to cut off, as if he had never touched their lives. They showed little reaction ... I suppose it reminded them not just that they had been rather sneaky in the handling of Pete Best's sacking, never telling him to his face, but that for the grace of God, or Brian Epstein, circumstances might have been different and they could have ended up [like Pete]." During the height of Beatlemania, Best attempted suicide, but his mother, Mona, and his brother, Rory, prevented him from completing it. In 1963, Best married Kathy, a Woolworth's sales clerk whom he met at an early Beatles show; they have remained married and have two daughters, as well as four grandchildren. Best did shift work loading bread into the back of delivery vans, earning £8 a week (equivalent to £ in 2020). His education qualifications subsequently helped him become a civil servant working at the Garston Jobcentre in Liverpool, where he rose from employment officer to training manager for the Northwest of England, and, ironically, remembered "a steady stream of real-life Yosser Hughes types" imploring him to give them jobs. The most he could do, he recalls, was to offer to retrain them in other fields, "which was an emotional issue for people who had done one kind of work all their lives." Eventually, Best began giving interviews to the media, writing about his time with the group and serving as a technical advisor for the television film "Birth of the Beatles". He found a modicum of independent fame, and has admitted to being a fan of his former band's music and owning their records. In 1995, the surviving Beatles released "Anthology 1", which featured a number of tracks with Best as drummer, including songs from the Decca and Parlophone auditions. Best received a substantial windfall – between £1 million and £4 million – from the sales, although he was not interviewed for the book or the documentaries. According to writer Philip Norman, the first time Best knew about the royalties due him for the use of those tracks "was a phone call" from Paul McCartney himself, "the one who'd been so keen to get rid of him – the first time they'd spoken since it happened. "Some wrongs need to be righted," Paul told him. "There's some money here that's owing to you and you can take it or leave it." Best took it." However, Best asserts that it was Neil Aspinall and not McCartney who phoned him. “Paul McCartney claims he called me but he didn’t,” Best told the Irish Times. The collage of torn photographs on the "Anthology 1" album cover includes an early group photo that featured Best, but Best's head was removed, revealing a photo of Starr's head, taken from the "Please Please Me" cover photo (the missing section of the photograph appears on the cover of the album "Haymans Green"). A small photograph of Best can be seen on the left side of the "Anthology" cover. Best appeared in an advertisement for Carlsberg lager that was broadcast during the first commercial break of the first episode of the "Anthology" TV series on ITV in November 1995. The tag line was "Probably the Pete Best lager in the world", a variation of Carlsberg's well-known slogan. In 1988, after twenty years of turning down all requests to play drums in public, Best finally relented, appearing at a Beatles convention in Liverpool. He and his brother Roag performed, and afterwards his wife and mother both told him, "You don't know it, but you're going to go back into show business." Best now regularly tours the world with the Pete Best Band, sharing the drumming with his younger brother Roag. The Pete Best Band's album "Haymans Green", made entirely from original material, was released on 16 September 2008 in the US, 24 October 2008 worldwide, excluding the UK, and 27 October 2008 in the UK. On 6 July 2007, Best was inducted into the "All You Need Is Liverpool" Music Hall of Fame as the debut Charter Member. Best was presented with a framed certificate before his band performed. Liverpool has further honoured Best with the announcement, on 25 July 2011, that two new streets in the city would be named Pete Best Drive and Casbah Close. Best is portrayed in several films about the Beatles. In the 1979 biopic "Birth of the Beatles", for which Best was a technical advisor, he is played by Ryan Michael. In both the 1994 film "Backbeat" and in the 2000 television biopic "", Best is played by Liverpool native Scot Williams. The 2008 Rainn Wilson film "The Rocker", about a drummer kicked out of a glam metal band through no fault of his own, was inspired by Best's termination. Best had a cameo in the movie. "BEST!", a comedy play written by Liverpool playwright Fred Lawless, was staged at the Liverpool Everyman Theatre and the Dublin Theatre Festival in 1995 and 1996. The play, which was mainly fiction, showed a scenario where after Pete Best's sacking, he went on to become a world-famous rock superstar while his ex-group struggled as one hit wonders. The play was critically acclaimed in both the "Liverpool Echo" and also in Spencer Leigh's 1998 book "Drummed Out : The Sacking of Pete Best". Pete Best is a main character in David Harrower's 2001 play "Presence", premiered at the Royal Court Theatre, London, dramatising The Beatles' time in Hamburg. Another "Peter Best" single, "Carousel Of Love"/"Want You" (1967 – Capitol / P 2092) is not by Best, but an Australian performer with the same name.
https://en.wikipedia.org/wiki?curid=24600
Port Adelaide Football Club Port Adelaide Football Club is a professional Australian rules football club based in Alberton, South Australia. The club's senior team plays in the Australian Football League (AFL), where they are nicknamed the Power, whilst its reserves team competes in the South Australian National Football League (SANFL), where they are nicknamed the Magpies. Port Adelaide are the oldest professional sporting club in South Australia and the fifth-oldest club in the AFL. Since the club's founding on 12 May 1870, the club has won 36 South Australian league premierships, including six in a row, these records are unequaled in the SANFL or AFL. The club also won the Champions of Australia competition on a record four occasions. After successfully winning an AFL licence in 1994 the club began competing in the Australian Football League in 1997 as the only pre-existing non-Victorian club—and has subsequently added the 2004 AFL premiership to its achievements. By the late 1860s Port Adelaide's river traffic was growing rapidly. The increasing economic activity around the waterways ultimately resulted in a meeting being organised by young Port Adelaide locals John Rann, Mr. Leicester and Mr. Ireland with the intention to form a sporting club to benefit the growing number of workers associated with the wharfes and surrounding industries. As a result of their meeting the Port Adelaide Football Club was established on 12 May 1870 as part of a joint Australian football and cricket club. The first training session of the newly formed club took place two days later. The Port Adelaide Football Club played its first match against a newly established club from North Adelaide called the Young Australian. The clubs first home ground was the family property of inaugural club president John Hart Jr in Glanville. John Hart Sr would become premier of South Australia the week following the first match. During these early years, football in South Australia was yet to be formally organised by a single body and as a result there were two main sets of rules in use across the state. Port Adelaide's main opponents during the years prior to the foundation of a governing body for the code in South Australia were the now defunct Kensington and Old Adelaide club. The rules of the Old Adelaide club, which more closely resembled the rules used in Melbourne at the time, were ultimately adopted across Adelaide in 1876. In 1877, Port Adelaide joined seven other clubs to form the South Australian Football Association (SAFA), the first ever governing body of Australian rules football. For the first few seasons in the SAFA the club competed in magenta guernseys and white shorts. In 1878, Port Adelaide hosted its first game against the recently established Norwood Football Club with the visitors winning 1–0. A rivalry between these clubs would soon develop into one of the fiercest in Australian sport (See Port Adelaide–Norwood SANFL rivalry). In 1879, the club played reigning Victorian Football Association (VFA) premiers Geelong at Adelaide Oval in what was Port Adelaide's first game against an interstate club. In 1880, Port Adelaide moved to Alberton Oval which remains to this day the club's training and administrative headquarters. In 1881, Port Adelaide played its first game against Carlton at Adelaide Oval. Later that year the club travelled to Victoria and played its first game outside South Australia against the Sale Football Club. During the 1882 season Port Adelaide overcame Norwood for the first time after nine previous attempts winning by 1 goal at Adelaide Oval. On 2 July 1883 Port Adelaide played its first game at the Melbourne Cricket Ground against . In 1884, Port Adelaide won its first SAFA premiership, ending Norwood's run of six premierships. On 25 May 1885, Port Adelaide played at the Melbourne Cricket Ground against South Melbourne, drawing with the eventual VFA premiers in front of 10,000 spectators. In 1887, immense interest led into the round 8 meeting against Norwood, as the previous two matches between the clubs resulted in draws. Norwood won in front of a then-record 11,000 spectators at Adelaide Oval. Attending the match were Chinese Commissioners to the Jubilee Exhibition General Wong Yang Ho and Console-General Yu Chiung who were provided the South Australian premiers private box at Adelaide Oval. During 1889, the club played against the Richmond Football Club at Punt Road, with Port prevailing by a goal. The 1889 SAFA season ended with Port Adelaide and Norwood equal top, leading to the staging of Australia's first grand final. Norwood went on to defeat Port Adelaide by two goals. In 1890 Port Adelaide won its second SAFA premiership and would go on to be crowned "Champions of Australia" for the first time after defeating VFA premiers South Melbourne. In 1891 the club defeated Fitzroy at Adelaide Oval with Indigenous Australian Harry Hewitt playing for Port Adelaide. As the 1890s continued Australia would be affected by a severe depression with many players were being forced to move interstate to find work. This exodus translated into poor on field results for the club. By 1896, the club was in crisis and finished last causing the clubs committee to meet with the aim of revitalising the club. Historian John Devaney suggested that there was a "conscious and deliberate cultivation by both the committee and the team's on field leaders of a revitalised club spirit, whereby playing for Port Adelaide became a genuine source of pride". It had immediate results and in 1897 Port Adelaide won a third premiership finishing the season with a record of 14–2–1 with a scoring record two and a half times its conceded total. This is one of only four occurrences since 1877 that the team that finished last won a premiership the following year. Stan Malin won Port Adelaide's first Magarey Medal in 1899. During the 19th century the club had nicknames including the Cockledivers, the Seaside Men, the Seasiders and the Magentas. In 1900, Port finished bottom in the six-team competition, which it has not done in any senior league since. In 1902, Port Adelaide took the field in black and white guernseys for the first time after it was having trouble finding dyes that would last for its blue and magenta guernseys. The first year in the new guernsey would be a controversial year for the club. After finishing the 1902 season on top of the ladder, Port Adelaide was disqualified from a game with after disputing the use of an unaccredited umpire. The 1902 SAFA premiership would subsequently be awarded to North Adelaide after they defeated South Adelaide in the Grand Final a week later. Port Adelaide offered to play North Adelaide in a premiership deciding match, but the association refused. The first premiership after the dispute came the following year when Port Adelaide defeated South Adelaide 6.6 (42) to 5.5 (35) in the 1903 SAFA Challenge Final. In 1906 Port Adelaide appointed James Hodge as club secretary. Hodge would quickly earn the nickname 'Columbus' after taking the club on trips to play exhibition games all across Australia. That year would also see further premiership when Port defeated North Adelaide 8.12 (60) to 5.9 (39) in the year's Grand Final. During the early stages of the 1907 season, Port Adelaide travelled to Sydney to play a combination of the cities best players. The game was marketed as 'Port Adelaide vs. Sydney' with the harbour city side taking the honours 8.9 (57) to 5.14 (44). Port Adelaide won the SAFL premiership in 1910 defeating Sturt 8.12 (60) to 5.11 (41) in the Grand Final. The club would go on to defeat Collingwood for the 1910 Championship of Australia title. During the 1910 post season, seeking revenge for their loss the year before, Port Adelaide travelled to Western Australia and beat East Fremantle by 12 points. To conclude the trip Port Adelaide played a combination of some of the WAFL's best players and achieved a remarkable victory scoring 6.17 (53) to 6.12 (48), with Sampson Hosking named best on ground. Along with beating the premiers from South Australia, Victoria and Western Australia in 1910, Port Adelaide also invited North Broken Hill, the premier team of New South Wales, to a game at Adelaide Oval. Port Would win this game 14.20 (104) to 5.5 (35). The following two seasons for Port Adelaide would be frustrating dropping only one game during the 1911 minor round and going undefeated the following year in 1912 only to be knocked out of contention by West Adelaide both times, the second of these encounters in front of a pre war South Australian record crowd of 28,500. During the 1912 preseason, Port Adelaide travelled to Tasmania and took on a combination of players from various Tasmanian Football League (TFL) sides. The game would prove to be very competitive with Port Adelaide defeating the TFL combination 7.13 (55) to 6.6 (42). During the 1913 preseason, Port Adelaide travelled back to Western Australia to play East Fremantle again with the local side winning for a second time 6.6 (42) to 4.12 (36). Despite this inauspicious preseason the club would break through in 1913, dropping only two games during the minor round and eventually defeating North Adelaide 7.12 (54) to 5.10 (40) for the SAFL premiership and Fitzroy 13.16 (94) to 4.7 (31) for the 1913 Championship of Australia. The 1914 Port Adelaide Football Club season is widely regarded as one of the best in Australian rules football history. It won all its pre season matches, won all fourteen SAFL games by an average margin of 49 points and the 1914 SAFL Grand Final where it held North Adelaide to a single goal for the match 13.15 (93) to 1.8 (14). The club would then meet VFL premiers Carlton on Adelaide Oval, defeating the Victorian club by 34 points to claim a record fourth Championship of Australia. At the end of the 1914 season, a combined team from the six other SAFL clubs played Port Adelaide and lost to the subsequently dubbed "Invincibles" by 58 points. Key players from this era were Harold Oliver, Angelo Congear and Sampson Hosking who all share the unique distinction of playing in three Championships of Australia together as well all taking part in South Australia's first victorious Australian National Football Carnival in 1911. During World War I the club lost three players—William Boon, Joseph Watson and Albert Chaplin—to the war. A scaled-back competition referred to as the 'Patriotic League' was organised during wartime in which Port Adelaide won the 1916 and 1917 instalments. After World War I, Harold Oliver, arguably the state's best player, was close to retiring from league football playing only 1 game in 1919 and 8 in 1920. However keen supporters of the club hoping to replicate its pre-war success raised funds and bought him a motorbike so he could commute from his farm in Berri for the 1921 season. Oliver would captain the club to the 1921 premiership, winning his fourth in the process. In 1922 after playing only 5 league matches for the season his football career came to an end due to commitments regarding his farm and disputes regarding game compensation. His contract termination meant he was paid £76 of £100 pounds for the season making him one of the highest-paid footballers of the era. At the end of the 1922 SAFA season, Port Adelaide travelled to Sydney and played a combined New South Wales side on the Sydney Cricket Ground winning the match. In following seasons most of Port Adelaide's champion players from before the war started to retire and the club's performances declined. As was the case in the 1890s, the depression of the early 1930s hit the club hard with players moving interstate to secure employment. By the late 1930s, the economy and Port Adelaide's form both recovered and after two narrow grand final losses in 1934 and 1935 the club won premierships in 1936, 1937 and 1939. During 1939, Bob Quinn, in his third year as a player for the club, coached the team to a Grand Final win over West Torrens. Many Port Adelaide players also enlisted for military service during this time. In 1941 Port Adelaide suffered its first player casualties from war since World War I with Lloyd Rudd and Jack Wade both killed on the Allies' front in France. Four more players would be killed through the war: Maxwell Carmichael, George Quinn, Christopher Johnston and Halcombe Brock. Just as had happened in 1914, the league was being hit hard by player losses in World War II. Due to a lack of able men the league's eight teams were reduced to four with Port Adelaide merging with nearby West Torrens from 1942 to 1944. The joint club would play in all three Grand Finals during this period, winning the 1942 instalment but losing the 1943 and 1944 editions to the Norwood-North Adelaide combination. Normal competition resumed in 1945. After finishing his military service Haydn Bunton Sr., now a triple Brownlow and Sandover medallist, joined the club for his final season. However, despite this addition Port Adelaide was unable to regain its pre-war success with West Torrens mounting a remarkable comeback to win the 1945 SANFL Grand Final in what was the only one Bunton's career. The first ever 'All-Australian' side concept was created by Sporting Life magazine in 1947 with Bob Quinn named the sides captain. At the end of the 1949, having missed two finals series in a row, the Port Adelaide Football Club had become desperate to improve its on-field performances. The club's committee subsequently sought out a coach that could win the club its next premiership. Eventually a decision was made which would influence the next 50 years of the Port Adelaide Football Club with Foster Neil Williams, a brilliant rover from West Adelaide, being appointed captain-coach of the club. Williams brought to the club a new coaching style based on success at any cost which was succinctly encapsulated in the legendary club creed he eventually wrote in 1962. During his second season as coach in 1951, Williams led Port to their first official premiership (excluding World War II competition) for 9 seasons, defeating North Adelaide by 11 points. At the end of the 1951 season the VFL premiers Geelong visited South Australia to play the local premiers Port Adelaide on Adelaide Oval. Geelong won the match 8.14 (62) to 6.18 (54) in front of 25,000 people. Port Adelaide would make the Grand Final again in 1953 against local rivals West Torrens in what would be the Eagles last appearance before merging with Woodville. West Torrens would disappoint Port Adelaide, winning the 1953 premiership by 7 points. Port Adelaide's run of disappointment from the 1952 and 1953 seasons would prove to be short lived with the club subsequently going on to win a national record six Grand Finals in a row from 1954 to 1959. The club had a win-loss-draw record of 105–16–1 (86%) over the six-year period. During the 1950s Port Adelaide and Melbourne, often the premiers of South Australian and Victorian leagues, played exhibition matches at Norwood Oval. The most notable game was the 1955 match with an estimated crowd of 23,000. The game being a thriller going down to the last 15 seconds with Frank Adams kicking a behind and sealing the game 9.11 (65) to 9.10 (64) in favour of Norm Smith's demons. The following year Melbourne was full of praise for their cross border challenger with those in the Demons camp agreeing that "Port Adelaide could take their place in the V.F.L. competition and do themselves credit". Geof Motley took over the captain-coaching role at the club in 1959 when Williams left to take a break from the game. That year the club won the premiership setting a national record of sixth consecutive Grand Final victories. Port Adelaide's hope of winning 7 consecutive premierships was brought to an end in the 1960 preliminary final when Norwood won by 27 points. For the following two seasons Port Adelaide would finish third. Fos Williams returned in 1962 and Port Adelaide won three of the next four premierships taking his personal tally to nine and the club's record to 10 of the last 15 premierships. At the end of 1962 the Woodville Football Club, based in a neighbouring suburb to Alberton, was admitted to the SANFL in an attempt to weaken Port Adelaide via taking up some of its suburban recruiting zone. In 1965, Fos Williams coached his last premiership in-front of 62,543 people, the largest ever crowd at Adelaide Oval. In that game Port Adelaide defeat Sturt by 3 points. After the 1965 Grand Final, Port Adelaide would be frustrated by the dominance of Sturt, which won seven premierships over this period under the leadership of Jack Oatey. In all, despite playing in 6 of the next 10 grand finals, Port Adelaide would fail to win a premiership until 1977. One of Port Adelaide's finest players during the Fos Williams era was John Cahill. He eventually became William's protégé and ultimately took over as coach in 1974. In 1975 a dispute between the Port Adelaide City Council and the SANFL over the use of Alberton Oval forced Port Adelaide to move its home matches to Adelaide Oval for two seasons. In 1976 Cahill would subsequently take Port Adelaide to its first Grand Final under his leadership against Sturt with an official attendance of 66,897, the record for football in South Australia. The actual crowd was estimated at 80,000, much bigger than the official figure as Football Park ran out of tickets early and were forced to shut the gates 90 minutes before the bounce as people were being crushed on entry. Sturt won in an upset by 41 points. In 1977 the dispute regarding Alberton Oval was resolved and the club moved back to its home ground and won that years premiership breaking an 11-year drought which at the time was Port Adelaide longest since competing in an organised football competition. The 1980 season was Port Adelaide's most dominant since 1914. All SANFL divisions of the club made finals with both the league and reserve sides winning their respective premierships. Russell Ebert won his record 4th Magarey Medal. Tim Evans set the then-league goal kicking record of 146 goals in a season. The club provided seven players to the state league team (Ebert, Evans, Cunningham, Phillips, Williams, Giles and Faletic). The club set a new record for most points scored during the whole season at 3,421 whilst also having the best defence conceding only 1,851 points. Overall Port Adelaide lost 2 games from 24 for the year. Russell Ebert became coach in 1983 when Cahill left to coach Collingwood for two seasons. This period saw the club fail to reach the grand final. The period also marked the rise of the VFL as Australia's premier football competition. Many SANFL players were moving to the VFL larger salaries. In 1982 the SANFL, Norwood and East Perth all approached the VFL in regards to entering the league. All were ignored at the time. Port Adelaide's report from 1982 showed that the failure of these attempts impacted the understanding of its future. From this point onwards the club restructured in regards to economics, public relations and on-field performance for an attempt to enter the league. There was genuine feeling that failure to do this would result in the club ceasing to exist in the future. Talk of a side from South Australia entering the VFL was fast tracked in 1987 when a team from Western Australia, the West Coast Eagles, and a team from Brisbane, the Brisbane Bears joined the VFL. South Australia was left out as the only mainland state without a team. John Cahill returned as coach for the 1988 season. During that year, one of Fos Williams sons, Anthony, was tragically killed in a building accident. The following day the club played against Norwood and managed to overcome an early deficit to win the emotionally charged game. The club would go on to win the 1988 premiership. In 1989 seven out of ten SANFL clubs were recording losses and the combined income of the SANFL and WAFL had dropped to 40% of that of the VFL. During early 1990 the SANFL decided to wait three years before making any further decision in regards to fielding a South Australian side in the VFL until it could be done without negatively affecting football within the state. Frustrated with lack of progress, Port Adelaide were having secret negotiations in the town of Quorn for entry in 1991. From these discussions Port Adelaide Football Club accepted an invitation from the VFL to join what had now become the AFL. The AFL signed a Heads of Agreement with the club in expectation that Port would enter the competition in 1991, meaning the Port Adelaide Football Club would field two teams, one in the AFL and one in the SANFL. During the 1990 preseason Port Adelaide played a practice match against the Geelong at Football Park in front of 35,000 spectators with Gary Ablett Snr and Gavin Wanganeen prominent. When knowledge of Port Adelaide's negotiations to gain an AFL licence were made public, many in the SANFL saw it as an act of treachery. SANFL clubs urged Justice Olssen to make an injunction against the bid, which he agreed to. The AFL suggested to the SANFL that if they didn't want Port Adelaide to join the AFL, they could put forward a counter bid to enter a composite South Australian side into the AFL. After legal action from all parties, the AFL finally agreed to accept the SANFL's bid and the Adelaide Football Club was born. During December 1994 Max Basher announced that Port Adelaide had won the tender for the second South Australian AFL licence. However a licence did not guarantee entry and although a target year of 1996 was set, this was reliant upon an existing AFL club folding or merging with another. In 1996, the cash-strapped Fitzroy announced it would merge with the Brisbane Bears to form the Brisbane Lions. A spot had finally opened and it was announced that in 1997, one year later than expected, Port Adelaide would enter the AFL. Once an entry date had been confirmed, the Port Adelaide Football Club set about forming a side fit for competition in the AFL. It was announced that existing Port Adelaide coach, John Cahill would make the transition to the AFL and Stephen Williams would take over the SANFL coaching role. Cahill then set about forming a group which would form the inaugural squad. Brownlow Medallist and 1990 Port Adelaide premiership player, Gavin Wanganeen was poached from Essendon and made captain of a team made up of six existing Port Adelaide players, two from the Adelaide Crows, seven players from other SANFL clubs and 14 recruits from interstate. Of the 35 players on Port Adelaide's inaugural AFL list 13 had played for the club before. The AFL's father son rule for the club was set at 200 games for players before 1997. This compared to only 100 for Victorian clubs. On 29 March 1997, Port Adelaide played its first AFL premiership match against Collingwood at the MCG, suffering a 79-point defeat. Port won its first AFL game in round 3 against Geelong, and defeated cross town rivals and eventual premiers Adelaide by 11 points in the first Showdown in round 4. At the conclusion of round 17, the side sat fifth – only one win and percentage off the top spot in what was an unusually close season – but it fell out of the finals after recording only a draw from its final five games. Port Adelaide finished its first season 9th, missing the finals on percentage behind Brisbane. The 1998 season was looking very similar to the previous year as they hovered around ninth position for most of the year and looked like a threat for finals after round 14; but they lost six of their last eight games to finish in 10th place, with a record of 9 wins, 12 losses and 1 draw. In 1999 Mark Williams took over as coach of Port Adelaide. They earned a spot in the AFL finals for the first time. They were eliminated by eventual premier, North Melbourne, by 44 points in the Qualifying Final. After finishing 14th in 2000, Port Adelaide had a very successful 2001 season, starting with a maiden pre-season competition victory, defeating the Brisbane Lions. Port Adelaide finished their 2001 home and away season in third place with 16 wins and six losses. The club travelled to Brisbane for the Qualifying Final, losing by 32 points, then lost its home Semi-final against sixth-placed Hawthorn to be eliminated. Port Adelaide started 2002 strongly, winning the pre-season competition for the second time in a row, defeating Richmond by 9 points. The side built on its success and won its first AFL minor premiership with an 18–4 record. However, they lost to the eventual premiers, the Brisbane Lions, by 56 points in the preliminary final. Port Adelaide continued its minor round dominance in 2003 and again finished top to claim the minor premiership; however like the previous year, Port Adelaide was eliminated in the preliminary final, losing to Collingwood by 44 points. Port Adelaide opened the 2004 season well with four straight wins, but then won only four of its next eight games. From rounds 12–17, Port Adelaide turned their fortunes around and had six consecutive wins, and with five rounds remaining were equal top of the ladder with Brisbane, St Kilda and Melbourne. After losing in round 18 to Essendon, Port Adelaide won its remaining four games – including wins against minor premiership contender Melbourne and cross town rivals Adelaide to claim the minor premiership for the third consecutive year. Port Adelaide easily won its qualifying final against Geelong, earning a home preliminary final. Port Adelaide made it through to its first AFL grand final after defeating St Kilda in a thrilling preliminary final by just six points with Gavin Wanganeen kicking the winning goal with a minute to go. The following week Port Adelaide faced a highly fancied Brisbane side attempting to win a record-equalling fourth straight AFL premiership. Only one point separated the sides at half time, however late in the third quarter Port Adelaide took the ascendency to lead by 17 points at three-quarter time, and dominated the final term to win by 40 points: 17.11 (113) to 10.13 (73). Byron Pickett was awarded with the Norm Smith Medal after being judged the best player in the match, tallying 20 disposals and kicking three goals. After a slow start to the 2005 season, Port finished eighth on the ladder, and defeated the Kangaroos by 87 points in the elimination final. In the semi-final, Port faced minor premiers Adelaide and lost by 83 points. After missing the finals in 2006 Port Adelaide made a strong recovery in 2007, and with strong performances from midfielders Shaun Burgoyne and Chad Cornes and strong debut seasons from Justin Westhoff, Robert Gray and Travis Boak, Port Adelaide finished the minor round second on the ladder with 15–7 record. Port Adelaide started their finals campaign against the West Coast Eagles at Football Park and won by three points. That win gave Port the bye, and they easily defeated the Kangaroos in the preliminary final to win by 87 points. This win delivered Port its second Grand Final berth in four years. However, in the grand final they were defeated by Geelong by an AFL record margin of 119 points, 24.19 (163) to Port Adelaide's 6.8 (44) in a crowd of 97,302. The 2008 season was disappointing one for a Port Adelaide side keen to build on its 2007 grand final appearance, dropping to 13th on the ladder and out of the finals. By 2009 Port Adelaide had accumulated a consolidated debt totaling $5.1 million and was unable to pay its players; they had lost $1.4 million the season before. Financial assistance was denied by the league, with AFL Chief Executive Andrew Demetriou saying that they would have to undergo an intensive application process and work with the SANFL, who owned Port Adelaide's AFL licence. On 20 May, Port were handed $2.5 million in debt relief by the SANFL, and on 15 June were handed a $1 million grant by the AFL commission. The SANFL had announced it would not support Port Adelaide in both the AFL and SANFL. Plans for a re-merging the two teams was rejected by the SANFL. Amidst these off-field struggles, the club finished 10th in 2009. The 2010 season would see Mark Williams step down as senior coach marking the end of the Williams era for the club. Matthew Primus took over as caretaker coach for Port Adelaide after Mark Williams stood down. The club finished the 2010 season with five wins from its last seven games to finish tenth. On 9 September, Matthew Primus was appointed as the senior coach of the club for the next three years. The SANFL sought to take control of Port Adelaide in 2011. Despite underwriting $5 million of Port's debt in 2010, the takeover failed when the SANFL was unable to get a line of credit to cover Port Adelaide's future debts. The AFL announced it would underwrite $1.25 million in debt to protect its $1.25 billion television rights. AFL Chief executive Andrew Demetriou, offered $9 million over the next three years to help the club, ahead of the move to the Adelaide Oval. The AFL gave the money to the SANFL with strict conditions that they give Port Adelaide three million dollars a year, for three years. Statistically, 2011 was Port Adelaide's worst season in 141 years, finishing 16th with only three wins from 22 games. Rounds 20 and 21 saw the club lose to and Hawthorn by record margins of 138 and 165 respectively. The 2012 season was marginally better but a loss against resulted in senior coach Matthew Primus stepping down. Assistant coach, Garry Hocking, took over for the remaining four games, with a draw in the final round against the best result. On 8 October 2012, Ken Hinkley was announced as the new senior coach of the club. This marked the first time that the club had appointed someone not associated with the club before since Fos Williams in 1950. Television personality David Koch was named chairman of the club and numerous board members were replaced. The 2013 preseason also saw Travis Boak succeed Domenic Cassisi as captain of the club. The club finished the home and away season 7th on the ladder, making it the first time that they had qualified for the finals since 2007. Port travelled to Melbourne to play Collingwood at the MCG in an Elimination Final where they won by 24 points; they then lost to Geelong by 16 points the following week in a Semi-final. The 2014 season saw both Port Adelaide and Adelaide move their home ground from Football Park to the redeveloped Adelaide Oval. Port Adelaide signed up a record 55,715 members for the 2014 season and averaged 44,429 at home games, a 65% increase from the previous year. Port Adelaide had its best first half of an AFL season, sitting first with ten wins from eleven matches. They then won only four of their remaining eleven matches to finish fifth on the ladder. They hosted Richmond in the elimination finals, kicking the first seven goals of the game and leading by as much as 87 points before recording a 57-point victory. After defeating in the semi-finals the club's 2014 season ended with a three-point loss to Hawthorn in the preliminary finals. With great expectation, Port Adelaide started the 2015 season playing all of the year's finalists in the opening 10 rounds and entered the mid-season break with a 5–7 record. The club had a better second half of the year, recording 7–3, but would miss out on finals by one win. The club did not fare any better in 2016, winning only ten matches and finishing 10th. In 2017, Port Adelaide had made a massive improvement from the previous 2 seasons, winning 14 of 22 games to finish 5th on the ladder. Port Adelaide's season came to an end in an elimination final loss to by 2 points in extra time. The club would again narrowly miss the finals in 2018 and 2019, finishing 10th in both seasons. The Port Adelaide Football Club adopted the black and white Wharf Pylon / "Prison Bar" guernsey after having difficulty finding magenta and blue dyes that would repeatedly last the rigours of an Australian rules football match. Prior to adopting the Wharf Pylon / "Prison Bar" guernsey the club won 3 premierships over 31 years. After adopting the Wharf Pylon / "Prison Bar" guernsey in 1902 the club would, in controversial circumstances, be disqualified from finals but after would ultimately win 31 premierships and 3 Championships of Australia in the black and white guernsey before being admitted into the AFL in 1997. The "Prison Bar" nickname originated from fans of the Norwood Football Club in the late 1980s and early 1990s in an attempt of deriding the Port Adelaide supporter base, playing on Port Adelaide's strong working class demographic. Supporters of Port Adelaide quickly adopted this insult as their own for the name of the guernsey. The 'Prison Bar' name eventually becoming part of the mystique and intimidation of the guernsey. The number panel on the back of the Port Adelaide guernsey originates from the first decade of the twentieth century when club secretary James Hodge took the club across Australia to play matches against interstate teams During Hodge's time as secretary the club played exhibition matches in Kalgoorlie, Perth, Fremantle, Hobart, Devonport, Melbourne, Bendigo, Ballarat, Sydney, Albury, Wagga Wagga and Broken Hill. As spectators from other states were unfamiliar with the majority of Port Adelaide's players the club attached white squares with black numbers to the reverse of their guernseys and assigned numbers alphabetically so that spectators could identify players on the field with the team lists from newspapers covering the match. The tradition dictating that the captain of the Port Adelaide Football Club wear the number one guernsey started when Clifford Keal wore the number as club captain for the first time in 1924. The tradition was cemented, at least in the eyes of then-secretary Charles Hayter, when in 1929 he received a letter from a junior Kilkenny player requesting a number one Port Adelaide guernsey as he had just become captain of his underage team. Charles Hayter granted the wish of the junior and provided him with a number one Port Adelaide guernsey. There have been exceptions to this tradition since 1924, often involving club captains being injured, however in almost all instances since 1924 the club captain has worn the number one guernsey. The most notable exception to this rule was when Geof Motley followed the captaincy of Fos Williams. Motley wore the number one guernsey in his first game as club captain but the pressure of following in Fos Williams footsteps, who just lead the club to five consecutive premierships, was too much and he requested that he revert to the number 17 for the remainder of his career. When Motley handed the captaincy to John Cahill in 1967, at the insistence of coach Fos Williams, the tradition of Port Adelaide captains wearing the number one guernsey resumed. When co-captains were appointed for the 2019 season the No. 1 guernsey was temporarily retired. The guernsey was designed to be a literal depiction of the wharves and pylons that were prominent along the docks of Port Adelaide at the turn of the 20th century. Upon joining the AFL, Port Adelaide, along with being required to find a new logo, song and nickname, was also forced to replace the Prison Bar guernsey because existing club Collingwood, already using the Magpie logo and nickname, also wore a guernsey with vertical black and white stripes. In 1995, a new guernsey was created incorporating teal. In May 2007, chief executive John James stated that Port Adelaide received more correspondence from its supporters about the heritage guernsey than all other issues and that the club would "fight for its heritage and what is right". Port Adelaide decided not to participate in the 2006 heritage round when the AFL declined the club's 1980s guernsey for its 80s-themed heritage round. In 2007, the club was waiting for confirmation from the AFL that it could wear its 1970s' "Prison Bar" guernsey for a match against the Western Bulldogs, and wanted confirmation it would be able do so in any future heritage rounds. On 14 May 2007, the AFL and Port Adelaide reached an agreement whereby the club could wear its traditional guernsey in the heritage round, with the proviso that in future seasons its players can only wear it in home heritage round games and provided that such a game is not against Collingwood. No heritage rounds have been held since this agreement was reached. Collingwood club president Eddie McGuire has been a vocal opponent of Port Adelaide wearing the "Prison Bar" guernsey, claiming that Collingwood has an exclusive right to wear black and white in the AFL, even in the heritage round. Support for the guernsey remains extremely high, with a limited batch of jumpers for a game against Carlton in 2013 resulting in an increase to the club's revenue of around $1,000,000, according to club CEO Keith Thomas. In 2014 the AFL declined Port Adelaide permission to wear its traditional guernsey for celebrating of 100 years since its 1914 Championship of Australia. There was controversy in 2014 during the lead-up to the final against Richmond when the AFL told Port Adelaide they had to wear their clash guernsey. On 2 September 2014 the AFL cleared them to use the traditional guernsey for the match. Towards the end of 2018, a group of supporters organised to push for the return of the club's traditional guernsey full time from the start of the 2020 AFL season to coincide with the club's 150th anniversary year. Prominent football commentators who have given support to the campaign to reinstate the guernsey include Tony Shaw, Paul Hasleby, Mark Bickley, Dale Lewis along with current Port Adelaide players Travis Boak, Robbie Gray and Ollie Wines. Former South Australian premier Jay Weatherill has also given support to the push to reinstate the guernsey. A supporter petition calling for the reinstatement of the guernsey reached 10,000 signatures. The club was granted permission to wear the guernsey in their two Showdown matches in the 2020 season. On 30 August 2019 Port Adelaide's CEO Keith Thomas stated that there was no agreement in place to prevent the club wearing the guernsey again beyond 2020. On 1 June 2020 Port Adelaide's Chairman David Koch stated that he would like to see the club wear the guernsey for the Showdown in 2020 along with 2021. On 3 June 2020 Eddie McGuire said that David Koch "doesn't have the guts to tell his supporters that it's finished!". The following day on 4 June 2020, David Koch submitted an application on behalf of Port Adelaide for the club to wear the guernsey in all Showdown's going forward arguing that "There is an existing agreement in place from 2007 between Port Adelaide, Collingwood and the AFL that states clearly Port Adelaide has permission to wear the black-and-white Prison Bar guernsey in all home AFL Heritage Rounds thereafter. Ironically, Heritage Rounds ceased to exist from that point. We will argue that Showdowns now represent the heritage of South Australian football and we should therefore be granted permission to wear it on an ongoing basis...In a year like no other when we’ve seen the importance of family, community and heritage we believe any decision not to allow us to wear this guernsey in Showdowns would be nothing short of mean spirited...One of the charters of the AFL is to protect and celebrate the heritage of our great game. We think wearing our black-and-white Prison Bar guernsey in Showdowns does just that." The same night on the 4 June 2020 current Collingwood coach Nathan Buckley came out in support of Port Adelaide's request to wear it's Prison Bar guernsey in Showdowns. On 11 June 2020 current South Australian premier Steven Marshall (joining his predecessor Jay Weatherill) in providing his support to the push for the guernsey to be worn in Showdown's every year. Over the years Port Adelaide has used various songs and music at its games. In its first season during 1870 the club invited local brass bands to play during the club's first games at Glanville. In 1882 a song based on Harry Clifton's "Work, Boys, Work (and be contented)" was written for the club as a tribute to the recently retired player Thomas Smith. From around the end of World War I until 1970 the club song was "The pride of Port Adelaide is my football team". In 1971, Port Adelaide secretary Bob McLean decided to change the club song to "Cheer, Cheer the Black and the White" after hearing the South Melbourne Football Club's song based on the Notre Dame Fighting Irish football team's "Victory March". As the Sydney Swans were already using the Notre Dame Victory March when Port entered the AFL, the club was forced to find a new song. Cheer, Cheer the Black and the White is still used for Port Adelaide's reserves in the SANFL. Upon joining the AFL and requiring a new song, Port Adelaide eventually chose the "Power to Win" which was written for the club by Quentin Eyers and Les Kaczmarek. The song was first played at AFL level after Port's win against Geelong in round 3, 1997 at Football Park. Since 2016, an alternative Pitjantjatjara language version of the song ('Nganana wanangara kanyini' – literally, 'We have the lightning bolt') has been used by the club on occasions such as Indigenous Round. Because the club is not officially known as 'Port Power' but just 'The Power', the line in the song "..til the flag is ours for the taking, Port Power!" was eventually changed, deleting the word 'Port' and the song was re-recorded. This change is also reflected when the team sings the song at the end of a game. Since March 2014, Port Adelaide has used "Never Tear Us Apart" by the Australian band INXS as the club's unofficial anthem leading up to the opening bounce at its new home of Adelaide Oval. The song is used as a reference to the various and unique difficulties the club faced when trying to enter the AFL. Port Adelaide's use of the song stemmed from a trip the club took to Anfield in November 2012 while they were in England to play an exhibition match against the Western Bulldogs. In light of the very positive reviews given by the club's players towards the Anfield crowd's rendition of "You'll Never Walk Alone", Matthew Richardson, Port's general manager of marketing and consumer business, along with the club's management, sought to replicate the pre-match experience they experienced at Anfield. At a meeting in mid-2013, the idea of an anthem was raised; a number of various songs were suggested, including "Power and the Passion" by Midnight Oil and "Power to the People" by John Schumann. Eventually, "Never Tear Us Apart" by INXS was suggested by Port Adelaide's events manager, Tara MacLeod. It was eventually accepted, due to the fact that the song resonated with the club's history. In particular when Port Adelaide's was forced to separate its AFL and SANFL operations forcing its local side to train at Ethelton to ensure they would not gain an advantage using the Alberton training facilities. Australian football historian John Devaney described the forced separation of Port Adelaide's SANFL and AFL operations as being "akin to the enforced splitting up of families associated with military conquest or warfare". Initially the song was introduced to coincide with the 60-second countdown before the start of a match, with the music playing over the top of a video montage. The song proved to be a success among the fans, with them adopting the song as well as raising scarves above their heads as the song was being sung. So successful was the song that by June 2014, the club were forced to print club coloured scarves with the words "Never Tear Us Apart" on them that fans would hold aloft and sing in unison prior to the start of matches. Port Adelaide has adopted different insignia's on several occasions throughout its history. The original club crest, adopted in 1900, featured a tan football and magpies perched on a gum tree with a black and white striped flag on the left and the Australian Red Ensign on the right. The ensign switched to blue sporadically through the 1910s before the flags were dropped in 1928. From 1930 to 1996, the logo always featured a dexter (left-facing) magpie perched upon a gum branch (1930 to 1953) and a fence wire (1954 to 1974). The last Magpies-specific logo, used by the club from 1975 onward in the SANFL, was situated inside a circular disc as was the case at all other SANFL clubs. It made mention of "Magpies" in the logo for the first time and was the longest standing in the club's history, running through to the end of the 2019 SANFL season. Upon entering the AFL in 1997, Port were required to adopt colours and insignia that distinguished it from the existing AFL club already nicknamed the Magpies, Collingwood. The club adopted the "Power" moniker, featuring a silver fist clutching a lightning bolt in front of the wharf pylon/prison bar design, while also showcasing the addition of teal to the club's colour scheme for the first time. The logo was slightly altered in 2001 with the lightning bolt and fist defined and the reference to "Port" dropped. Ahead of the 2020 season, Port Adelaide's 150th anniversary, the club unveiled a commemorative logo to be worn by both the senior AFL team and reserves SANFL team. The current logo features the "PA" acronym, 1870 to acknowledge the foundation year, the black-and-white prison bars, the chevron design of the AFL guernsey and a teal outline. On 15 May 1880, Port Adelaide played its first match at Alberton Oval. In 1881 the decision was made by the club to start leasing the oval from the Port Adelaide Council for the sum of 10 shillings a year. Situated at the eastern end of the suburb of Alberton in Adelaide, the playing surface is surrounded by the Allan Scott club headquarters, the Robert B. Quinn MM Stand, the Fos Williams Family Stand, the Port Adelaide Bowling Club and the N.L. Williams Scoreboard. As well as the facilities facing the oval, along Queen Street there is The Port Club and The Port Store. Fos Williams authored the club's creed in 1962. Port Adelaide has a fierce rivalry with fellow South Australian AFL team Adelaide. Matches between the two teams are known as the Showdown. The rivalry between Adelaide and Port Adelaide is often considered the best, and most bitter, in the Australian Football League with Malcolm Blight, Australian Football Hall of Fame Legend, stating in 2009 that "there is no doubt it is the greatest rivalry in football." The Showdown's intense rivalry can be traced back to Port Adelaide's pre-existing rivalries within the SANFL, particularly Norwood. The Norwood–Port Adelaide rivalry began in 1878 when the two clubs first played one another, however it was not until 1882 that the Norwood–Port Adelaide rivalry grew bitter. That year Port Adelaide's first win over Norwood, held at Adelaide Oval, was controversially overruled by the league, with a follow up game overshadowed by a misunderstanding at the gate which almost prevented Norwood players accessing the venue. The Showdown rivalry also significantly draws upon the bitter, winner take all, competition for the two South Australian licences to join the AFL in the 1980s and early 1990s. The Port Adelaide Football Club has historically drawn its supporter base in and around historical working class Port Adelaide. However, this support has spread to many coastal locations in Adelaide (from Outer Harbour down to West Beach), in much of the inner-Western suburbs, throughout the North-Eastern suburbs in Campbelltown and Tea Tree Gully, in many of the Southern suburbs (such as Aberfoyle Park and Flagstaff Hill), as well as throughout the Adelaide Hills and country South Australia. After historically being the largest football club in South Australia, Port Adelaide has reemerged as one of the largest sporting organisations in Australia, with over 60,000 members and an average attendance nearing 45,000 in 2015. Port Adelaide has many supporter groups, with every state or territory containing at least one supporter group. In addition, many country towns within South Australia have their own supporter group, many of which travel to both home and away games, including the Port Adelaide Cheer Squad, the Outer Army and The Alberton Crowd. On 14 April 2016, Port Adelaide announced a three-year multimillion-dollar partnership with leading Chinese property developer Shanghai Cred. Within this partnership, Port Adelaide will take primary responsibility for developing Australian rules football in China. The partnership will see Port Adelaide hold annual training camps and provide sponsorship in China, as well as producing AFL programs and broadcasting games in the country via China Central Television and other networks. The first AFL game played for premiership points was played in May 2017 between the Gold Coast Suns and Port Adelaide. Port Adelaide also runs an Australian rules football program in over 20 Chinese schools culminating in a football carnival the same week the AFL premiership match is held in Shanghai. The Port Adelaide Football Club has had a long connection to the indigenous community. The Hart family who were founders of the club operated The Adelaide Milling and Mercantile Company in Port Adelaide which employed Kaurna people alongside non-indigenous workers as early as the 1850s. John Hart Sr advocated for other settlers to refrain from killing and eating black swans as they were a totem of the Kaurna people. Harry Hewitt was named in Port Adelaide's side when they defeated Fitzroy by two goals on Adelaide Oval in 1891 and is the clubs first known Indigenous Australian player. In 2008 the club started the Aboriginal Power Cup to help promote academic and healthy outcomes for indigenous students in South Australia. Known Indigenous Port Adelaide players to have represented the senior team in the SANFL, AFL or against interstate clubs include: Ross Agius, Keiran Agius, Corey Ah Chee, Brendon Ah Chee, Karl Amon, Troy Bond, Shane Bond, Richie Bray, Peter Burgoyne Jr, Peter Burgoyne Sr, Trent Burgoyne, Shaun Burgoyne, Michael Clinch, Jason Cocaktoo, Che Cockatoo-Collins, Donald Cockatoo-Collins, Malcolm Cooper, Tobin Cox, Aaron Davey, Alwyn Davey, Fabian Francis, Joel Garner, Brett Goodes, Harry Hewitt, Wilf Huddleston, Jarman Impey, Graham Johncock, Aidyn Johnson, Nathan Krakouer, Peter Lindsay, Jarrod Lienert, Andrew McLeod, Tim Milera, Terry Milera, Harry Miller, Cameron Miller, Daniel Motlop, Marlon Motlop, Steven Motlop, Derek Murray, Allan Murray, Jake Neade, Stephen O'Brien, Michael O'Brien, Ricky O'Loughlin, Brenton Owens, Danyle Pearce, Sam Powell-Pepper, Byron Pickett, Patrick Ryder, Peter Talman, Joel Tessman, Lindsay Thomas, Wade Thompson, Gavin Wanganeen, Elijah Ware, Eugene Warrior Jnr, Eugene Warrior Snr, Luke Wilson, Chad Wingard. See Also † denotes killed in action or died while serving In 1994 the Port Adelaide Football Club obtained an AFL licence, however the club had to wait until 1997 to partake in the competition as the Adelaide Crows had a clause preventing another club entering the national competition from South Australia until the end of 1996. Initially Port Adelaide's 1990 proposed model was to use its SANFL side as its reserves team. By the second licence tender in 1994 the club proposal was to field a single team in the AFL. After winning the AFL licence the SANFL teams backflipped and wanted a Port Adelaide side to remain in the SANFL. However the SANFL clubs did not want the reserves side to gain any use of the senior side in the AFL's resources out of fear its advantages would be too strong in the SANFL. As a result for the first few years after 1997, Port Adelaide's SANFL side was forced to train at Ethelton to ensure they would not gain any advantage using the Alberton training facilities. Australian football historian John Devaney described the forced separation of Port Adelaide's SANFL and AFL operations as being "akin to the enforced splitting up of families associated with military conquest or warfare". On 20 August 2010, the "One Port Adelaide Football Club" movement was launched by former player Tim Ginever to merge Port Adelaide's AFL and SANFL operations. A website was created that claimed 50,000 signatures were needed for the merger. On 15 November 2010, all nine SANFL clubs agreed that the off-field merger between the two operations would proceed. On 10 September 2013, Port Adelaide and the SANFL agreed to a model to allow all its AFL-listed players (not selected to play for Port Adelaide in the AFL) to play for the SANFL side. From 2015 onward, the club lost its recruiting zones and could no longer field sides in the junior SANFL competition. Port Adelaide subsequently started an Academy team composed of 18 to 22-year-olds. Port Adelaide featured in three SANFL Grand Final's in the 2010's though lost each of them. The club will not field a team in the SANFL in the 2020 season due to AFL restrictions imposed during the COVID-19 pandemic, though it has expressed its intention to re-join the competition in 2021. Magarey Medal (SANFL best and fairest) AFLCA Champion Player of the Year AFL Rising Star (Best player under 21) Norm Smith Medal (AFL Grand Final best on ground) Jack Oatey Medal (SANFL Grand Final best on ground) Sporting Life Magazine Interstate carnivals Australian Football League John Cahill Medal (best and fairest) Allan Robert McLean Medal (SANFL best and fairest) Gavin Wanganeen Medal (Best player under 21) John McCarthy Medal (Community Award) As at the end of 2017: Overall win/loss record Best league record against another club Minimum 20 league matches against a current club: Worst league record against another club Minimum 20 league matches against a current club: Highest score Lowest score Greatest winning margin Greatest losing margin Most wins in a season 1 This record is unbreakable under the current SANFL fixturing rules (current maximum is 20 wins: 18 home-and-away plus two finals). Fewest losses in a season Largest home attendances (minor round) Largest away attendances (minor round) Largest finals attendances Longest undefeated streak Longest winless streak Most games played Most games coached Most premierships as player Most premierships as coach Most goals at Port Adelaide Most goals in a match Most goals in a season
https://en.wikipedia.org/wiki?curid=24602
Proteasome Proteasomes are protein complexes which degrade unneeded or damaged proteins by proteolysis, a chemical reaction that breaks peptide bonds. Enzymes that help such reactions are called proteases. Proteasomes are part of a major mechanism by which cells regulate the concentration of particular proteins and degrade misfolded proteins. Proteins are tagged for degradation with a small protein called ubiquitin. The tagging reaction is catalyzed by enzymes called ubiquitin ligases. Once a protein is tagged with a single ubiquitin molecule, this is a signal to other ligases to attach additional ubiquitin molecules. The result is a "polyubiquitin chain" that is bound by the proteasome, allowing it to degrade the tagged protein. The degradation process yields peptides of about seven to eight amino acids long, which can then be further degraded into shorter amino acid sequences and used in synthesizing new proteins. Proteasomes are found inside all eukaryotes and archaea, and in some bacteria. In eukaryotes, proteasomes are located both in the nucleus and in the cytoplasm. In structure, the proteasome is a cylindrical complex containing a "core" of four stacked rings forming a central pore. Each ring is composed of seven individual proteins. The inner two rings are made of seven "β subunits" that contain three to seven protease active sites. These sites are located on the interior surface of the rings, so that the target protein must enter the central pore before it is degraded. The outer two rings each contain seven "α subunits" whose function is to maintain a "gate" through which proteins enter the barrel. These α subunits are controlled by binding to "cap" structures or "regulatory particles" that recognize polyubiquitin tags attached to protein substrates and initiate the degradation process. The overall system of ubiquitination and proteasomal degradation is known as the ubiquitin–proteasome system. The proteasomal degradation pathway is essential for many cellular processes, including the cell cycle, the regulation of gene expression, and responses to oxidative stress. The importance of proteolytic degradation inside cells and the role of ubiquitin in proteolytic pathways was acknowledged in the award of the 2004 Nobel Prize in Chemistry to Aaron Ciechanover, Avram Hershko and Irwin Rose. Before the discovery of the ubiquitin–proteasome system, protein degradation in cells was thought to rely mainly on lysosomes, membrane-bound organelles with acidic and protease-filled interiors that can degrade and then recycle exogenous proteins and aged or damaged organelles. However, work by Joseph Etlinger and Alfred Goldberg in 1977 on ATP-dependent protein degradation in reticulocytes, which lack lysosomes, suggested the presence of a second intracellular degradation mechanism. This was shown in 1978 to be composed of several distinct protein chains, a novelty among proteases at the time. Later work on modification of histones led to the identification of an unexpected covalent modification of the histone protein by a bond between a lysine side chain of the histone and the C-terminal glycine residue of ubiquitin, a protein that had no known function. It was then discovered that a previously identified protein associated with proteolytic degradation, known as ATP-dependent proteolysis factor 1 (APF-1), was the same protein as ubiquitin. The proteolytic activities of this system were isolated as a multi-protein complex originally called the multi-catalytic proteinase complex by Sherwin Wilk and Marion Orlowski. Later, the ATP-dependent proteolytic complex that was responsible for ubiquitin-dependent protein degradation was discovered and was called the 26S proteasome. Much of the early work leading up to the discovery of the ubiquitin proteasome system occurred in the late 1970s and early 1980s at the Technion in the laboratory of Avram Hershko, where Aaron Ciechanover worked as a graduate student. Hershko's year-long sabbatical in the laboratory of Irwin Rose at the Fox Chase Cancer Center provided key conceptual insights, though Rose later downplayed his role in the discovery. The three shared the 2004 Nobel Prize in Chemistry for their work in discovering this system. Although electron microscopy data revealing the stacked-ring structure of the proteasome became available in the mid-1980s, the first structure of the proteasome core particle was not solved by X-ray crystallography until 1994. In 2018, the first atomic structures of the human 26S proteasome holoenzyme in complex with a polyubiquitylated protein substrate were solved by cryogenic electron microscopy, revealing mechanisms by which the substrate is recognized, deubiquitylated, unfolded and degraded by the human 26S proteasome. The proteasome subcomponents are often referred to by their Svedberg sedimentation coefficient (denoted "S"). The proteasome most exclusively used in mammals is the cytosolic 26S proteasome, which is about 2000 kilodaltons (kDa) in molecular mass containing one 20S protein subunit and two 19S regulatory cap subunits. The core is hollow and provides an enclosed cavity in which proteins are degraded; openings at the two ends of the core allow the target protein to enter. Each end of the core particle associates with a 19S regulatory subunit that contains multiple ATPase active sites and ubiquitin binding sites; it is this structure that recognizes polyubiquitinated proteins and transfers them to the catalytic core. An alternative form of regulatory subunit called the 11S particle can associate with the core in essentially the same manner as the 19S particle; the 11S may play a role in degradation of foreign peptides such as those produced after infection by a virus. The number and diversity of subunits contained in the 20S core particle depends on the organism; the number of distinct and specialized subunits is larger in multicellular than unicellular organisms and larger in eukaryotes than in prokaryotes. All 20S particles consist of four stacked heptameric ring structures that are themselves composed of two different types of subunits; α subunits are structural in nature, whereas β subunits are predominantly catalytic. The α subunits are pseudoenzymes homologous to β subunits. They are assembled with their N-termini adjacent to that of the β subunits. The outer two rings in the stack consist of seven α subunits each, which serve as docking domains for the regulatory particles and the alpha subunits N-termini () form a gate that blocks unregulated access of substrates to the interior cavity. The inner two rings each consist of seven β subunits and in their N-termini contain the protease active sites that perform the proteolysis reactions. Three distinct catalytic activities were identified in the purified complex: chymotrypsin-like, trypsin-like and peptidylglutamyl-peptide hydrolyzing. The size of the proteasome is relatively conserved and is about 150 angstroms (Å) by 115 Å. The interior chamber is at most 53 Å wide, though the entrance can be as narrow as 13 Å, suggesting that substrate proteins must be at least partially unfolded to enter. In archaea such as "Thermoplasma acidophilum", all the α and all the β subunits are identical, whereas eukaryotic proteasomes such as those in yeast contain seven distinct types of each subunit. In mammals, the β1, β2, and β5 subunits are catalytic; although they share a common mechanism, they have three distinct substrate specificities considered chymotrypsin-like, trypsin-like, and peptidyl-glutamyl peptide-hydrolyzing (PHGH). Alternative β forms denoted β1i, β2i, and β5i can be expressed in hematopoietic cells in response to exposure to pro-inflammatory signals such as cytokines, in particular, interferon gamma. The proteasome assembled with these alternative subunits is known as the "immunoproteasome", whose substrate specificity is altered relative to the normal proteasome. Recently an alternative proteasome was identified in human cells that lack the α3 core subunit. These proteasomes (known as the α4-α4 proteasomes) instead form 20S core particles containing an additional α4 subunit in place of the missing α3 subunit. These alternative 'α4-α4' proteasomes have been known previously to exist in yeast. Although the precise function of these proteasome isoforms is still largely unknown, cells expressing these proteasomes show enhanced resistance to toxicity induced by metallic ions such as cadmium. The 19S particle in eukaryotes consists of 19 individual proteins and is divisible into two subassemblies, a 9-subunit base that binds directly to the α ring of the 20S core particle, and a 10-subunit lid. Six of the nine base proteins are ATPase subunits from the AAA Family, and an evolutionary homolog of these ATPases exists in archaea, called PAN (Proteasome-Activating Nucleotidase). The association of the 19S and 20S particles requires the binding of ATP to the 19S ATPase subunits, and ATP hydrolysis is required for the assembled complex to degrade folded and ubiquitinated proteins. Note that only the step of substrate unfolding requires energy from ATP hydrolysis, while ATP-binding alone can support all the other steps required for protein degradation (e.g., complex assembly, gate opening, translocation, and proteolysis). In fact, ATP binding to the ATPases by itself supports the rapid degradation of unfolded proteins. However, while ATP hydrolysis is required for unfolding only, it is not yet clear whether this energy may be used in the coupling of some of these steps. In 2012, two independent efforts have elucidated the molecular architecture of the 26S proteasome by single particle electron microscopy. In 2016, three independent efforts have determined the first near-atomic resolution structure of the human 26S proteasome in the absence of substrates by cryo-EM. In 2018, a major effort has elucidated the detailed mechanisms of deubiquitylation, initiation of translocation and processive unfolding of substrates by determining seven atomic structures of substrate-engaged 26S proteasome simultaneously. In the heart of the 19S, directly adjacent to the 20S, are the AAA-ATPases (AAA proteins) that assemble to a heterohexameric ring of the order Rpt1/Rpt2/Rpt6/Rpt3/Rpt4/Rpt5. This ring is a trimer of dimers: Rpt1/Rpt2, Rpt6/Rpt3, and Rpt4/Rpt5 dimerize via their N-terminal coiled-coils. These coiled-coils protrude from the hexameric ring. The largest regulatory particle non-ATPases Rpn1 and Rpn2 bind to the tips of Rpt1/2 and Rpt6/3, respectively. The ubiquitin receptor Rpn13 binds to Rpn2 and completes the base cub-complex. The lid covers one half of the AAA-ATPase hexamer (Rpt6/Rpt3/Rpt4) and, unexpectedly, directly contacts the 20S via Rpn6 and to lesser extent Rpn5. The subunits Rpn9, Rpn5, Rpn6, Rpn7, Rpn3, and Rpn12, which are structurally related among themselves and to subunits of the COP9 complex and eIF3 (hence called PCI subunits) assemble to a horseshoe-like structure enclosing the Rpn8/Rpn11 heterodimer. Rpn11, the deubiquinating enzyme, is placed at the mouth of the AAA-ATPase hexamer, ideally positioned to remove ubiquitin moieties immediately before translocation of substrates into the 20S. The second ubiquitin receptor identified to date, Rpn10, is positioned at the periphery of the lid, near subunits Rpn8 and Rpn9. The 19S regulatory particle within the 26S proteasome holoenzyme has been observed in six strongly differing conformational states in the absence of substrates to date. A hallmark of the AAA-ATPase configuration in this predominant low-energy state is a staircase- or lockwasher-like arrangement of the AAA-domains. In the presence of ATP but absence of substrate three alternative, less abundant conformations of the 19S are adopted primarily differing in the positioning of the lid with respect to the AAA-ATPase module. In the presence of ATP-γS or a substrate, considerably more conformations have been observed displaying dramatic structural changes of the AAA-ATPase module. Some of the substrate-bound conformations bear high similarity to the substrate-free ones, but they are not entirely identical, particularly in the AAA-ATPase module. Prior to the 26S assembly, the 19S regulatory particle in a free form has also been observed in seven conformational states. Notably, all these conformers are somewhat different and present distinct features. Thus, the 19S regulatory particle can sample at least 20 conformational states under different physiological conditions. The 19S regulatory particle is responsible for stimulating the 20S to degrade proteins. A primary function of the 19S regulatory ATPases is to open the gate in the 20S that blocks the entry of substrates into the degradation chamber. The mechanism by which the proteasomal ATPase open this gate has been recently elucidated. 20S gate opening, and thus substrate degradation, requires the C-termini of the proteasomal ATPases, which contains a specific motif (i.e., HbYX motif). The ATPases C-termini bind into pockets in the top of the 20S, and tether the ATPase complex to the 20S proteolytic complex, thus joining the substrate unfolding equipment with the 20S degradation machinery. Binding of these C-termini into these 20S pockets by themselves stimulates opening of the gate in the 20S in much the same way that a "key-in-a-lock" opens a door. The precise mechanism by which this "key-in-a-lock" mechanism functions has been structurally elucidated in the context of human 26S proteasome at near-atomic resolution, suggesting that the insertion of five C-termini of ATPase subunits Rpt1/2/3/5/6 into the 20S surface pockets are required to fully open the 20S gate. 20S proteasomes can also associate with a second type of regulatory particle, the 11S regulatory particle, a heptameric structure that does not contain any ATPases and can promote the degradation of short peptides but not of complete proteins. It is presumed that this is because the complex cannot unfold larger substrates. This structure is also known as PA28, REG, or PA26. The mechanisms by which it binds to the core particle through the C-terminal tails of its subunits and induces α-ring conformational changes to open the 20S gate suggest a similar mechanism for the 19S particle. The expression of the 11S particle is induced by interferon gamma and is responsible, in conjunction with the immunoproteasome β subunits, for the generation of peptides that bind to the major histocompatibility complex. Yet another type of non-ATPase regulatory particle is the Blm10 (yeast) or PA200/PSME4 (human). It opens only one α subunit in the 20S gate and itself folds into a dome with a very small pore over it. The assembly of the proteasome is a complex process due to the number of subunits that must associate to form an active complex. The β subunits are synthesized with N-terminal "propeptides" that are post-translationally modified during the assembly of the 20S particle to expose the proteolytic active site. The 20S particle is assembled from two half-proteasomes, each of which consists of a seven-membered pro-β ring attached to a seven-membered α ring. The association of the β rings of the two half-proteasomes triggers threonine-dependent autolysis of the propeptides to expose the active site. These β interactions are mediated mainly by salt bridges and hydrophobic interactions between conserved alpha helices whose disruption by mutation damages the proteasome's ability to assemble. The assembly of the half-proteasomes, in turn, is initiated by the assembly of the α subunits into their heptameric ring, forming a template for the association of the corresponding pro-β ring. The assembly of α subunits has not been characterized. Only recently, the assembly process of the 19S regulatory particle has been elucidated to considerable extent. The 19S regulatory particle assembles as two distinct subcomponents, the base and the lid. Assembly of the base complex is facilitated by four assembly chaperones, Hsm3/S5b, Nas2/p27, Rpn14/PAAF1, and Nas6/gankyrin (names for yeast/mammals). These assembly chaperones bind to the AAA-ATPase subunits and their main function seems to be to ensure proper assembly of the heterohexameric AAA-ATPase ring. To date it is still under debate whether the base complex assembles separately, whether the assembly is templated by the 20S core particle, or whether alternative assembly pathways exist. In addition to the four assembly chaperones, the deubiquitinating enzyme Ubp6/Usp14 also promotes base assembly, but it is not essential. The lid assembles separately in a specific order and does not require assembly chaperones. Proteins are targeted for degradation by the proteasome with covalent modification of a lysine residue that requires the coordinated reactions of three enzymes. In the first step, a ubiquitin-activating enzyme (known as E1) hydrolyzes ATP and adenylylates a ubiquitin molecule. This is then transferred to E1's active-site cysteine residue in concert with the adenylylation of a second ubiquitin. This adenylylated ubiquitin is then transferred to a cysteine of a second enzyme, ubiquitin-conjugating enzyme (E2). In the last step, a member of a highly diverse class of enzymes known as ubiquitin ligases (E3) recognizes the specific protein to be ubiquitinated and catalyzes the transfer of ubiquitin from E2 to this target protein. A target protein must be labeled with at least four ubiquitin monomers (in the form of a polyubiquitin chain) before it is recognized by the proteasome lid. It is therefore the E3 that confers substrate specificity to this system. The number of E1, E2, and E3 proteins expressed depends on the organism and cell type, but there are many different E3 enzymes present in humans, indicating that there is a huge number of targets for the ubiquitin proteasome system. The mechanism by which a polyubiquitinated protein is targeted to the proteasome is not fully understood. A few high-resolution snapshots of the proteasome bound to a polyubiquitinated protein suggest that ubiquitin receptors might be coordinated with deubiquitinase Rpn11 for initial substrate targeting and engagement. Ubiquitin-receptor proteins have an N-terminal ubiquitin-like (UBL) domain and one or more ubiquitin-associated (UBA) domains. The UBL domains are recognized by the 19S proteasome caps and the UBA domains bind ubiquitin via three-helix bundles. These receptor proteins may escort polyubiquitinated proteins to the proteasome, though the specifics of this interaction and its regulation are unclear. The ubiquitin protein itself is 76 amino acids long and was named due to its ubiquitous nature, as it has a highly conserved sequence and is found in all known eukaryotic organisms. The genes encoding ubiquitin in eukaryotes are arranged in tandem repeats, possibly due to the heavy transcription demands on these genes to produce enough ubiquitin for the cell. It has been proposed that ubiquitin is the slowest-evolving protein identified to date. Ubiquitin contains seven lysine residues to which another ubiquitin can be ligated, resulting in different types of polyubiquitin chains. Chains in which each additional ubiquitin is linked to lysine 48 of the previous ubiquitin have a role in proteasome targeting, while other types of chains may be involved in other processes. After a protein has been ubiquitinated, it is recognized by the 19S regulatory particle in an ATP-dependent binding step. The substrate protein must then enter the interior of the 20S particle to come in contact with the proteolytic active sites. Because the 20S particle's central channel is narrow and gated by the N-terminal tails of the α ring subunits, the substrates must be at least partially unfolded before they enter the core. The passage of the unfolded substrate into the core is called "translocation" and necessarily occurs after deubiquitination. However, the order in which substrates are deubiquitinated and unfolded is not yet clear. Which of these processes is the rate-limiting step in the overall proteolysis reaction depends on the specific substrate; for some proteins, the unfolding process is rate-limiting, while deubiquitination is the slowest step for other proteins. The extent to which substrates must be unfolded before translocation is suggested to be around 20 amino acid residues by the atomic structure of the substrate-engaged 26S proteasome in the deubiquitylation-compatible state, but substantial tertiary structure, and in particular nonlocal interactions such as disulfide bonds, are sufficient to inhibit degradation. The presence of intrinsically disordered protein segments of sufficient size, either at the protein terminus or internally, has also been proposed to facilitate efficient initiation of degradation. The gate formed by the α subunits prevents peptides longer than about four residues from entering the interior of the 20S particle. The ATP molecules bound before the initial recognition step are hydrolyzed before translocation. While energy is needed for substrate unfolding, it is not required for translocation. The assembled 26S proteasome can degrade unfolded proteins in the presence of a non-hydrolyzable ATP analog, but cannot degrade folded proteins, indicating that energy from ATP hydrolysis is used for substrate unfolding. Passage of the unfolded substrate through the opened gate occurs via facilitated diffusion if the 19S cap is in the ATP-bound state. The mechanism for unfolding of globular proteins is necessarily general, but somewhat dependent on the amino acid sequence. Long sequences of alternating glycine and alanine have been shown to inhibit substrate unfolding, decreasing the efficiency of proteasomal degradation; this results in the release of partially degraded byproducts, possibly due to the decoupling of the ATP hydrolysis and unfolding steps. Such glycine-alanine repeats are also found in nature, for example in silk fibroin; in particular, certain Epstein–Barr virus gene products bearing this sequence can stall the proteasome, helping the virus propagate by preventing antigen presentation on the major histocompatibility complex. The proteasome functions as an endoprotease. The mechanism of proteolysis by the β subunits of the 20S core particle is through a threonine-dependent nucleophilic attack. This mechanism may depend on an associated water molecule for deprotonation of the reactive threonine hydroxyl. Degradation occurs within the central chamber formed by the association of the two β rings and normally does not release partially degraded products, instead reducing the substrate to short polypeptides typically 7–9 residues long, though they can range from 4 to 25 residues, depending on the organism and substrate. The biochemical mechanism that determines product length is not fully characterized. Although the three catalytic β subunits have a common mechanism, they have slightly different substrate specificities, which are considered chymotrypsin-like, trypsin-like, and peptidyl-glutamyl peptide-hydrolyzing (PHGH)-like. These variations in specificity are the result of interatomic contacts with local residues near the active sites of each subunit. Each catalytic β subunit also possesses a conserved lysine residue required for proteolysis. Although the proteasome normally produces very short peptide fragments, in some cases these products are themselves biologically active and functional molecules. Certain transcription factors regulating the expression of specific genes, including one component of the mammalian complex NF-κB, are synthesized as inactive precursors whose ubiquitination and subsequent proteasomal degradation converts them to an active form. Such activity requires the proteasome to cleave the substrate protein internally, rather than processively degrading it from one terminus. It has been suggested that long loops on these proteins' surfaces serve as the proteasomal substrates and enter the central cavity, while the majority of the protein remains outside. Similar effects have been observed in yeast proteins; this mechanism of selective degradation is known as "regulated ubiquitin/proteasome dependent processing" (RUP). Although most proteasomal substrates must be ubiquitinated before being degraded, there are some exceptions to this general rule, especially when the proteasome plays a normal role in the post-translational processing of the protein. The proteasomal activation of NF-κB by processing p105 into p50 via internal proteolysis is one major example. Some proteins that are hypothesized to be unstable due to intrinsically unstructured regions, are degraded in a ubiquitin-independent manner. The most well-known example of a ubiquitin-independent proteasome substrate is the enzyme ornithine decarboxylase. Ubiquitin-independent mechanisms targeting key cell cycle regulators such as p53 have also been reported, although p53 is also subject to ubiquitin-dependent degradation. Finally, structurally abnormal, misfolded, or highly oxidized proteins are also subject to ubiquitin-independent and 19S-independent degradation under conditions of cellular stress. The 20S proteasome is both ubiquitous and essential in eukaryotes. Some prokaryotes, including many archaea and the bacterial order Actinomycetales, also share homologs of the 20S proteasome, whereas most bacteria possess heat shock genes hslV and hslU, whose gene products are a multimeric protease arranged in a two-layered ring and an ATPase. The hslV protein has been hypothesized to resemble the likely ancestor of the 20S proteasome. In general, HslV is not essential in bacteria, and not all bacteria possess it, whereas some protists possess both the 20S and the hslV systems. Many bacteria also possess other homologs of the proteasome and an associated ATPase, most notably ClpP and ClpX. This redundancy explains why the HslUV system is not essential. Sequence analysis suggests that the catalytic β subunits diverged earlier in evolution than the predominantly structural α subunits. In bacteria that express a 20S proteasome, the β subunits have high sequence identity to archaeal and eukaryotic β subunits, whereas the α sequence identity is much lower. The presence of 20S proteasomes in bacteria may result from lateral gene transfer, while the diversification of subunits among eukaryotes is ascribed to multiple gene duplication events. Cell cycle progression is controlled by ordered action of cyclin-dependent kinases (CDKs), activated by specific cyclins that demarcate phases of the cell cycle. Mitotic cyclins, which persist in the cell for only a few minutes, have one of the shortest life spans of all intracellular proteins. After a CDK-cyclin complex has performed its function, the associated cyclin is polyubiquitinated and destroyed by the proteasome, which provides directionality for the cell cycle. In particular, exit from mitosis requires the proteasome-dependent dissociation of the regulatory component cyclin B from the mitosis promoting factor complex. In vertebrate cells, "slippage" through the mitotic checkpoint leading to premature M phase exit can occur despite the delay of this exit by the spindle checkpoint. Earlier cell cycle checkpoints such as post-restriction point check between G1 phase and S phase similarly involve proteasomal degradation of cyclin A, whose ubiquitination is promoted by the anaphase promoting complex (APC), an E3 ubiquitin ligase. The APC and the Skp1/Cul1/F-box protein complex (SCF complex) are the two key regulators of cyclin degradation and checkpoint control; the SCF itself is regulated by the APC via ubiquitination of the adaptor protein, Skp2, which prevents SCF activity before the G1-S transition. Individual components of the 19S particle have their own regulatory roles. Gankyrin, a recently identified oncoprotein, is one of the 19S subcomponents that also tightly binds the cyclin-dependent kinase CDK4 and plays a key role in recognizing ubiquitinated p53, via its affinity for the ubiquitin ligase MDM2. Gankyrin is anti-apoptotic and has been shown to be overexpressed in some tumor cell types such as hepatocellular carcinoma. In plants, signaling by auxins, or phytohormones that order the direction and tropism of plant growth, induces the targeting of a class of transcription factor repressors known as Aux/IAA proteins for proteasomal degradation. These proteins are ubiquitinated by SCFTIR1, or SCF in complex with the auxin receptor TIR1. Degradation of Aux/IAA proteins derepresses transcription factors in the auxin-response factor (ARF) family and induces ARF-directed gene expression. The cellular consequences of ARF activation depend on the plant type and developmental stage, but are involved in directing growth in roots and leaf veins. The specific response to ARF derepression is thought to be mediated by specificity in the pairing of individual ARF and Aux/IAA proteins. Both internal and external signals can lead to the induction of apoptosis, or programmed cell death. The resulting deconstruction of cellular components is primarily carried out by specialized proteases known as caspases, but the proteasome also plays important and diverse roles in the apoptotic process. The involvement of the proteasome in this process is indicated by both the increase in protein ubiquitination, and of E1, E2, and E3 enzymes that is observed well in advance of apoptosis. During apoptosis, proteasomes localized to the nucleus have also been observed to translocate to outer membrane blebs characteristic of apoptosis. Proteasome inhibition has different effects on apoptosis induction in different cell types. In general, the proteasome is not required for apoptosis, although inhibiting it is pro-apoptotic in most cell types that have been studied. Apoptosis is mediated through disrupting the regulated degradation of pro-growth cell cycle proteins. However, some cell lines — in particular, primary cultures of quiescent and differentiated cells such as thymocytes and neurons — are prevented from undergoing apoptosis on exposure to proteasome inhibitors. The mechanism for this effect is not clear, but is hypothesized to be specific to cells in quiescent states, or to result from the differential activity of the pro-apoptotic kinase JNK. The ability of proteasome inhibitors to induce apoptosis in rapidly dividing cells has been exploited in several recently developed chemotherapy agents such as bortezomib and . In response to cellular stresses – such as infection, heat shock, or oxidative damage – heat shock proteins that identify misfolded or unfolded proteins and target them for proteasomal degradation are expressed. Both Hsp27 and Hsp90—chaperone proteins have been implicated in increasing the activity of the ubiquitin-proteasome system, though they are not direct participants in the process. Hsp70, on the other hand, binds exposed hydrophobic patches on the surface of misfolded proteins and recruits E3 ubiquitin ligases such as CHIP to tag the proteins for proteasomal degradation. The CHIP protein (carboxyl terminus of Hsp70-interacting protein) is itself regulated via inhibition of interactions between the E3 enzyme CHIP and its E2 binding partner. Similar mechanisms exist to promote the degradation of oxidatively damaged proteins via the proteasome system. In particular, proteasomes localized to the nucleus are regulated by PARP and actively degrade inappropriately oxidized histones. Oxidized proteins, which often form large amorphous aggregates in the cell, can be degraded directly by the 20S core particle without the 19S regulatory cap and do not require ATP hydrolysis or tagging with ubiquitin. However, high levels of oxidative damage increases the degree of cross-linking between protein fragments, rendering the aggregates resistant to proteolysis. Larger numbers and sizes of such highly oxidized aggregates are associated with aging. Dysregulation of the ubiquitin proteasome system may contribute to several neural diseases. It may lead to brain tumors such as astrocytomas. In some of the late-onset neurodegenerative diseases that share aggregation of misfolded proteins as a common feature, such as Parkinson's disease and Alzheimer's disease, large insoluble aggregates of misfolded proteins can form and then result in neurotoxicity, through mechanisms that are not yet well understood. Decreased proteasome activity has been suggested as a cause of aggregation and Lewy body formation in Parkinson's. This hypothesis is supported by the observation that yeast models of Parkinson's are more susceptible to toxicity from α-synuclein, the major protein component of Lewy bodies, under conditions of low proteasome activity. Impaired proteasomal activity may underlie cognitive disorders such as the autism spectrum disorders, and muscle and nerve diseases such as inclusion body myopathy. The proteasome plays a straightforward but critical role in the function of the adaptive immune system. Peptide antigens are displayed by the major histocompatibility complex class I (MHC) proteins on the surface of antigen-presenting cells. These peptides are products of proteasomal degradation of proteins originated by the invading pathogen. Although constitutively expressed proteasomes can participate in this process, a specialized complex composed of proteins, whose expression is induced by interferon gamma, are the primary producers of peptides which are optimal in size and composition for MHC binding. These proteins whose expression increases during the immune response include the 11S regulatory particle, whose main known biological role is regulating the production of MHC ligands, and specialized β subunits called β1i, β2i, and β5i with altered substrate specificity. The complex formed with the specialized β subunits is known as the "immunoproteasome". Another β5i variant subunit, β5t, is expressed in the thymus, leading to a thymus-specific "thymoproteasome" whose function is as yet unclear. The strength of MHC class I ligand binding is dependent on the composition of the ligand C-terminus, as peptides bind by hydrogen bonding and by close contacts with a region called the "B pocket" on the MHC surface. Many MHC class I alleles prefer hydrophobic C-terminal residues, and the immunoproteasome complex is more likely to generate hydrophobic C-termini. Due to its role in generating the activated form of NF-κB, an anti-apoptotic and pro-inflammatory regulator of cytokine expression, proteasomal activity has been linked to inflammatory and autoimmune diseases. Increased levels of proteasome activity correlate with disease activity and have been implicated in autoimmune diseases including systemic lupus erythematosus and rheumatoid arthritis. The proteasome is also involved in Intracellular antibody-mediated proteolysis of antibody-bound virions. In this neutralisation pathway, TRIM21 (a protein of the tripartite motif family) binds with immunoglobulin G to direct the virion to the proteasome where it is degraded. Proteasome inhibitors have effective anti-tumor activity in cell culture, inducing apoptosis by disrupting the regulated degradation of pro-growth cell cycle proteins. This approach of selectively inducing apoptosis in tumor cells has proven effective in animal models and human trials. Lactacystin, a natural product synthesized by "Streptomyces" bacteria, was the first non-peptidic proteasome inhibitor discovered and is widely used as a research tool in biochemistry and cell biology. Lactacystin was licensed to Myogenics/Proscript, which was acquired by Millennium Pharmaceuticals, now part of Takeda Pharmaceuticals. Lactacystin covalently modifies the amino-terminal threonine of catalytic β subunits of the proteasome, particularly the β5 subunit responsible for the proteasome's chymotrypsin-like activity. This discovery helped to establish the proteasome as a mechanistically novel class of protease: an amino-terminal threonine protease. Bortezomib (Boronated MG132), a molecule developed by Millennium Pharmaceuticals and marketed as Velcade, is the first proteasome inhibitor to reach clinical use as a chemotherapy agent. Bortezomib is used in the treatment of multiple myeloma. Notably, multiple myeloma has been observed to result in increased proteasome-derived peptide levels in blood serum that decrease to normal levels in response to successful chemotherapy. Studies in animals have indicated that bortezomib may also have clinically significant effects in pancreatic cancer. Preclinical and early clinical studies have been started to examine bortezomib's effectiveness in treating other B-cell-related cancers, particularly some types of non-Hodgkin's lymphoma. Clinical results also seem to justify use of proteasome inhibitor combined with chemotherapy, for B-cell acute lymphoblastic leukemia Proteasome inhibitors can kill some types of cultured leukemia cells that are resistant to glucocorticoids. The molecule ritonavir, marketed as Norvir, was developed as a protease inhibitor and used to target HIV infection. However, it has been shown to inhibit proteasomes as well as free proteases; to be specific, the chymotrypsin-like activity of the proteasome is inhibited by ritonavir, while the trypsin-like activity is somewhat enhanced. Studies in animal models suggest that ritonavir may have inhibitory effects on the growth of glioma cells. Proteasome inhibitors have also shown promise in treating autoimmune diseases in animal models. For example, studies in mice bearing human skin grafts found a reduction in the size of lesions from psoriasis after treatment with a proteasome inhibitor. Inhibitors also show positive effects in rodent models of asthma. Labeling and inhibition of the proteasome is also of interest in laboratory settings for both "in vitro" and "in vivo" study of proteasomal activity in cells. The most commonly used laboratory inhibitors are lactacystin and the peptide aldehyde MG132 initially developed by Goldberg lab. Fluorescent inhibitors have also been developed to specifically label the active sites of the assembled proteasome. The proteasome and its subunits are of clinical significance for at least two reasons: (1) a compromised complex assembly or a dysfunctional proteasome can be associated with the underlying pathophysiology of specific diseases, and (2) they can be exploited as drug targets for therapeutic interventions. More recently, more effort has been made to consider the proteasome for the development of novel diagnostic markers and strategies. An improved and comprehensive understanding of the pathophysiology of the proteasome should lead to clinical applications in the future. The proteasomes form a pivotal component for the ubiquitin–proteasome system (UPS) and corresponding cellular Protein Quality Control (PQC). Protein ubiquitination and subsequent proteolysis and degradation by the proteasome are important mechanisms in the regulation of the cell cycle, cell growth and differentiation, gene transcription, signal transduction and apoptosis. Subsequently, a compromised proteasome complex assembly and function lead to reduced proteolytic activities and the accumulation of damaged or misfolded protein species. Such protein accumulation may contribute to the pathogenesis and phenotypic characteristics in neurodegenerative diseases, cardiovascular diseases, inflammatory responses and autoimmune diseases, and systemic DNA damage responses leading to malignancies. Several experimental and clinical studies have indicated that aberrations and deregulations of the UPS contribute to the pathogenesis of several neurodegenerative and myodegenerative disorders, including Alzheimer's disease, Parkinson's disease and Pick's disease, amyotrophic lateral sclerosis (ALS), Huntington's disease, Creutzfeldt–Jakob disease, and motor neuron diseases, polyglutamine (PolyQ) diseases, muscular dystrophies and several rare forms of neurodegenerative diseases associated with dementia. As part of the ubiquitin–proteasome system (UPS), the proteasome maintains cardiac protein homeostasis and thus plays a significant role in cardiac ischemic injury, ventricular hypertrophy and heart failure. Additionally, evidence is accumulating that the UPS plays an essential role in malignant transformation. UPS proteolysis plays a major role in responses of cancer cells to stimulatory signals that are critical for the development of cancer. Accordingly, gene expression by degradation of transcription factors, such as p53, c-jun, c-Fos, NF-κB, c-Myc, HIF-1α, MATα2, STAT3, sterol-regulated element-binding proteins and androgen receptors are all controlled by the UPS and thus involved in the development of various malignancies. Moreover, the UPS regulates the degradation of tumor suppressor gene products such as adenomatous polyposis coli (APC) in colorectal cancer, retinoblastoma (Rb). and von Hippel–Lindau tumor suppressor (VHL), as well as a number of proto-oncogenes (Raf, Myc, Myb, Rel, Src, Mos, ABL). The UPS is also involved in the regulation of inflammatory responses. This activity is usually attributed to the role of proteasomes in the activation of NF-κB which further regulates the expression of pro inflammatory cytokines such as TNF-α, IL-β, IL-8, adhesion molecules (ICAM-1, VCAM-1, P-selectin) and prostaglandins and nitric oxide (NO). Additionally, the UPS also plays a role in inflammatory responses as regulators of leukocyte proliferation, mainly through proteolysis of cyclines and the degradation of CDK inhibitors. Lastly, autoimmune disease patients with SLE, Sjögren syndrome and rheumatoid arthritis (RA) predominantly exhibit circulating proteasomes which can be applied as clinical biomarkers.
https://en.wikipedia.org/wiki?curid=24603
The Importance of Being Earnest The Importance of Being Earnest, A Trivial Comedy for Serious People is a play by Oscar Wilde. First performed on 14 February 1895 at the St James's Theatre in London, it is a farcical comedy in which the protagonists maintain fictitious personae to escape burdensome social obligations. Working within the social conventions of late Victorian London, the play's major themes are the triviality with which it treats institutions as serious as marriage, and the resulting satire of Victorian ways. Some contemporary reviews praised the play's humour and the culmination of Wilde's artistic career, while others were cautious about its lack of social messages. Its high farce and witty dialogue have helped make "The Importance of Being Earnest" Wilde's most enduringly popular play. The successful opening night marked the climax of Wilde's career but also heralded his downfall. The Marquess of Queensberry, whose son Lord Alfred Douglas was Wilde's lover, planned to present the writer with a bouquet of rotten vegetables and disrupt the show. Wilde was tipped off and Queensberry was refused admission. Their feud came to a climax in court, where Wilde's homosexuality was revealed to the Victorian public and he was sentenced to imprisonment. Despite the play's early success, Wilde's notoriety caused the play to be closed after 86 performances. After his release from prison, he published the play from exile in Paris, but he wrote no further comic or dramatic work. "The Importance of Being Earnest" has been revived many times since its premiere. It has been adapted for the cinema on three occasions. In "The Importance of Being Earnest" (1952), Dame Edith Evans reprised her celebrated interpretation of Lady Bracknell; "The Importance of Being Earnest" (1992) by Kurt Baker used an all-black cast; and Oliver Parker's "The Importance of Being Earnest" (2002) incorporated some of Wilde's original material cut during the preparation of the original stage production. After the success of Wilde's plays "Lady Windermere's Fan" and "A Woman of No Importance", Wilde's producers urged him to write further plays. In July 1894, he mooted his idea for "The Importance of Being Earnest" to George Alexander, the actor-manager of the St James's Theatre. Wilde spent the summer with his family at Worthing, where he wrote the play quickly in August. His fame now at its peak, he used the working title "Lady Lancing" to avoid preemptive speculation of its content. Many names and ideas in the play were borrowed from people or places the author had known; Lady Queensberry, Lord Alfred Douglas's mother, for example, lived at Bracknell. Wilde scholars agree the most important influence on the play was W. S. Gilbert's 1877 farce "Engaged," from which Wilde borrowed not only several incidents but also "the gravity of tone demanded by Gilbert of his actors". Wilde continually revised the text over the next months. No line was left untouched and the revision had significant consequences. Sos Eltis describes Wilde's revisions as refined art at work. The earliest and longest handwritten drafts of the play labour over farcical incidents, broad puns, nonsense dialogue and conventional comic turns. In revising, "Wilde transformed standard nonsense into the more systemic and disconcerting illogicality which characterises "Earnest's" dialogue". Richard Ellmann argues Wilde had reached his artistic maturity and wrote more surely and rapidly. Wilde hesitated about submitting the script to Alexander, worrying it might be unsuitable for the St James's Theatre, whose typical repertoire was more serious, and explaining it had been written in response to a request for a play "with no real serious interest". When Henry James's "Guy Domville" failed, Alexander agreed to put on Wilde's play. After working with Wilde on stage movements with a toy theatre, Alexander asked the author to shorten the play from four acts to three. Wilde agreed and combined elements of the second and third acts. The largest cut was the removal of the character of Mr. Gribsby, a solicitor who comes from London to arrest the profligate "Ernest" (i.e., Jack) for unpaid dining bills. The four-act version was first played on a BBC radio production and is still sometimes performed. Some consider the three-act structure more effective and theatrically resonant than the expanded published edition. The play was first produced at the St James's Theatre on Valentine's Day 1895. It was freezing cold but Wilde arrived dressed in "florid sobriety", wearing a green carnation. The audience, according to one report, "included many members of the great and good, former cabinet ministers and privy councillors, as well as actors, writers, academics, and enthusiasts". Allan Aynesworth, who played Algernon Moncrieff, recalled to Hesketh Pearson that "In my fifty-three years of acting, I never remember a greater triumph than [that] first night". Aynesworth was himself "debonair and stylish", and Alexander, who played Jack Worthing, "demure". The cast was: The Marquess of Queensberry, the father of Wilde's lover Lord Alfred Douglas (who was on holiday in Algiers at the time), had planned to disrupt the play by throwing a bouquet of rotten vegetables at the playwright when he took his bow at the end of the show. Wilde and Alexander learned of the plan, and the latter cancelled Queensberry's ticket and arranged for policemen to bar his entrance. Nevertheless, he continued harassing Wilde, who eventually launched a private prosecution against the peer for criminal libel, triggering a series of trials ending in Wilde's imprisonment for gross indecency. Alexander tried, unsuccessfully, to save the production by removing Wilde's name from the billing, but the play had to close after only 86 performances. The play's original Broadway production opened at the Empire Theatre on 22 April 1895, but closed after sixteen performances. Its cast included William Faversham as Algy, Henry Miller as Jack, Viola Allen as Gwendolen, and Ida Vernon as Lady Bracknell. The Australian premiere was in Melbourne on 10 August 1895, presented by Dion Boucicault Jr. and Robert Brough, and the play was an immediate success. Wilde's downfall in England did not affect the popularity of his plays in Australia. In contrast to much theatre of the time, the light plot of "The Importance of Being Earnest "does not tackle serious social and political issues, something of which contemporary reviewers were wary. Though unsure of Wilde's seriousness as a dramatist, they recognised the play's cleverness, humour and popularity with audiences. George Bernard Shaw, for example, reviewed the play in the "Saturday Review", arguing that comedy should touch as well as amuse, "I go to the theatre to be "moved" to laughter." Later in a letter he said, the play, though "extremely funny", was Wilde's "first really heartless [one]". In "The World", William Archer wrote that he had enjoyed watching the play but found it to be empty of meaning: "What can a poor critic do with a play which raises no principle, whether of art or morals, creates its own canons and conventions, and is nothing but an absolutely wilful expression of an irrepressibly witty personality?" In "The Speaker", A. B. Walkley admired the play and was one of few to see it as the culmination of Wilde's dramatic career. He denied the term "farce" was derogatory, or even lacking in seriousness, and said "It is of nonsense all compact, and better nonsense, I think, our stage has not seen." H. G. Wells, in an unsigned review for "The Pall Mall Gazette", called "Earnest" one of the freshest comedies of the year, saying "More humorous dealing with theatrical conventions it would be difficult to imagine." He also questioned whether people would fully see its message, "... how Serious People will take this Trivial Comedy intended for their learning remains to be seen. No doubt seriously." The play was so light-hearted that many reviewers compared it to comic opera rather than drama. W. H. Auden later called it "a pure verbal opera", and "The Times" commented, "The story is almost too preposterous to go without music." Mary McCarthy, in "Sights and Spectacles" (1959), however, and despite thinking the play extremely funny, called it "a ferocious idyll"; "depravity is the hero and the only character." "The Importance of Being Earnest" is Wilde's most popular work and is continually revived. Max Beerbohm called the play Wilde's "finest, most undeniably his own", saying that in his other comedies – "Lady Windermere's Fan", "A Woman of No Importance" and "An Ideal Husband" – the plot, following the manner of Victorien Sardou, is unrelated to the theme of the work, while in "Earnest" the story is "dissolved" into the form of the play. "The Importance of Being Earnest" and Wilde's three other society plays were performed in Britain during the author's imprisonment and exile, albeit by small touring companies. A. B. Tapping's company toured "Earnest" between October 1895 and March 1899 (their performance at the Theatre Royal, Limerick, in the last week of October 1895 was almost certainly the first production of the play in Ireland). Elsie Lanham's company also toured 'Earnest' between November 1899 and April 1900. Alexander revived "Earnest" in a small theatre in Notting Hill, outside the West End, in 1901; in the same year he presented the piece on tour, playing Jack Worthing with a cast including the young Lilian Braithwaite as Cecily. The play returned to the West End when Alexander presented a revival at the St James's in 1902. Broadway revivals were mounted in 1902 and again in 1910, each production running for six weeks. A collected edition of Wilde's works, published in 1908 and edited by Robert Ross, helped to restore his reputation as an author. Alexander presented another revival of "Earnest" at the St James's in 1909, when he and Aynesworth reprised their original roles; the revival ran for 316 performances. Max Beerbohm said that the play was sure to become a classic of the English repertory, and that its humour was as fresh then as when it had been written, adding that the actors had "worn as well as the play". For a 1913 revival at the same theatre the young actors Gerald Ames and A. E. Matthews succeeded the creators as Jack and Algy. John Deverell as Jack and Margaret Scudamore as Lady Bracknell headed the cast in a 1923 production at the Haymarket Theatre. Many revivals in the first decades of the 20th century treated "the present" as the current year. It was not until the 1920s that the case for 1890s costumes was established; as a critic in "The Manchester Guardian" put it, "Thirty years on, one begins to feel that Wilde should be done in the costume of his period – that his wit today needs the backing of the atmosphere that gave it life and truth. … Wilde's glittering and complex verbal felicities go ill with the shingle and the short skirt." In Sir Nigel Playfair's 1930 production at the Lyric, Hammersmith, John Gielgud played Jack to the Lady Bracknell of his aunt, Mabel Terry-Lewis. Gielgud produced and starred in a production at the Globe (now the Gielgud) Theatre in 1939, in a cast that included Edith Evans as Lady Bracknell, Joyce Carey as Gwendolen, Angela Baddeley as Cecily and Margaret Rutherford as Miss Prism. "The Times" considered the production the best since the original, and praised it for its fidelity to Wilde's conception, its "airy, responsive ball-playing quality." Later in the same year Gielgud presented the work again, with Jack Hawkins as Algy, Gwen Ffrangcon-Davies as Gwendolen and Peggy Ashcroft as Cecily, with Evans and Rutherford in their previous roles. The production was presented in several seasons during and after the Second World War, with mostly the same main players. During a 1946 season at the Haymarket the King and Queen attended a performance, which, as the journalist Geoffrey Wheatcroft put it, gave the play "a final accolade of respectability." The production toured North America, and was successfully staged on Broadway in 1947. As Wilde's work came to be read and performed again, it was "The Importance of Being Earnest" that received the most productions. By the time of its centenary the journalist Mark Lawson described it as "the second most known and quoted play in English after "Hamlet"." For Sir Peter Hall's 1982 production at the National Theatre the cast included Judi Dench as Lady Bracknell, Martin Jarvis as Jack, Nigel Havers as Algy, Zoë Wanamaker as Gwendolen and Anna Massey as Miss Prism. Nicholas Hytner's 1993 production at the Aldwych Theatre, starring Maggie Smith, had occasional references to the supposed gay subtext. In 2005 the Abbey Theatre, Dublin, produced the play with an all-male cast; it also featured Wilde as a character – the play opens with him drinking in a Parisian café, dreaming of his play. The Melbourne Theatre Company staged a production in December 2011 with Geoffrey Rush as Lady Bracknell. In 2011 the Roundabout Theatre Company produced a Broadway revival based on the 2009 Stratford Shakespeare Festival production featuring Brian Bedford as director and as Lady Bracknell. It opened at the American Airlines Theatre on 13 January and ran until 3 July 2011. The cast also included Dana Ivey as Miss Prism, Paxton Whitehead as Canon Chasuble, Santino Fontana as Algernon, Paul O'Brien as Lane, Charlotte Parry as Cecily, David Furr as Jack and Sara Topham as Gwendolen. It was nominated for three Tony Awards. The play was also presented internationally, in Singapore, in October 2004, by the British Theatre Playhouse, and the same company brought it to London's Greenwich Theatre in April 2005. A 2018 revival was directed by Michael Fentiman for the Vaudeville Theatre, London, as part of a season of four Wilde plays produced by Dominic Dromgoole. The production received largely negative press reviews. The play is set in "The Present" (i.e. 1895). The play opens with Algernon Moncrieff, an idle young gentleman, receiving his best friend, Jack Worthing ('Ernest'). Ernest has come from the country to propose to Algernon's cousin, Gwendolen Fairfax. Algernon refuses to consent until Ernest explains why his cigarette case bears the inscription, "From little Cecily, with her fondest love to her dear Uncle Jack." 'Ernest' is forced to admit to living a double life. In the country, he assumes a serious attitude for the benefit of his young ward, the heiress Cecily Cardew, and goes by the name of John (or Jack), while pretending that he must worry about a wastrel younger brother named Ernest in London. In the city, meanwhile, he assumes the identity of the libertine Ernest. Algernon confesses a similar deception: he pretends to have an invalid friend named Bunbury in the country, whom he can "visit" whenever he wishes to avoid an unwelcome social obligation. Jack refuses to tell Algernon the location of his country estate. Gwendolen and her formidable mother Lady Bracknell now call on Algernon who distracts Lady Bracknell in another room while Jack proposes to Gwendolen. She accepts, but seems to love him in large part because of his name, Ernest. Jack accordingly resolves to himself to be rechristened "Ernest". Discovering them in this intimate exchange, Lady Bracknell interviews Jack as a prospective suitor. Horrified to learn that he was adopted after being discovered as a baby in a handbag at Victoria Station, she refuses him and forbids further contact with her daughter. Gwendolen manages to covertly promise to him her undying love. As Jack gives her his address in the country, Algernon surreptitiously notes it on the cuff of his sleeve: Jack's revelation of his pretty and wealthy young ward has motivated his friend to meet her. Cecily is studying with her governess, Miss Prism. Algernon arrives, pretending to be Ernest Worthing, and soon charms Cecily. Long fascinated by Uncle Jack's hitherto absent black sheep brother, she is predisposed to fall for Algernon in his role of Ernest (a name she is apparently particularly fond of). Therefore, Algernon, too, plans for the rector, Dr. Chasuble, to rechristen him "Ernest". Jack has decided to abandon his double life. He arrives in full mourning and announces his brother's death in Paris of a severe chill, a story undermined by Algernon's presence in the guise of Ernest. Gwendolen now enters, having run away from home. During the temporary absence of the two men, she meets Cecily, each woman indignantly declaring that she is the one engaged to "Ernest". When Jack and Algernon reappear, their deceptions are exposed. Arriving in pursuit of her daughter, Lady Bracknell is astonished to be told that Algernon and Cecily are engaged. The revelation of Cecily's wealth soon dispels Lady Bracknell's initial doubts over the young lady's suitability, but any engagement is forbidden by her guardian Jack: he will consent only if Lady Bracknell agrees to his own union with Gwendolen – something she declines to do. The impasse is broken by the return of Miss Prism, whom Lady Bracknell recognises as the person who, 28 years earlier as a family nursemaid, had taken a baby boy for a walk in a perambulator and never returned. Challenged, Miss Prism explains that she had absent mindedly put the manuscript of a novel she was writing in the perambulator, and the baby in a handbag, which she had left at Victoria Station. Jack produces the very same handbag, showing that he is the lost baby, the elder son of Lady Bracknell's late sister, and thus Algernon's elder brother. Having acquired such respectable relations, he is acceptable as a suitor for Gwendolen after all. Gwendolen, however, insists she can love only a man named Ernest. Lady Bracknell informs Jack that, as the first-born, he would have been named after his father, General Moncrieff. Jack examines the army lists and discovers that his father's name – and hence his own real name – was in fact Ernest. Pretense was reality all along. As the happy couples embrace – Jack and Gwendolen, Algernon and Cecily, and even Dr. Chasuble and Miss Prism – Lady Bracknell complains to her newfound relative: "My nephew, you seem to be displaying signs of triviality." "On the contrary, Aunt Augusta", he replies, "I've now realised for the first time in my life the vital importance of being Earnest." Arthur Ransome described "The Importance..." as the most trivial of Wilde's society plays, and the only one that produces "that peculiar exhilaration of the spirit by which we recognise the beautiful." "It is", he wrote, "precisely because it is consistently trivial that it is not ugly." Ellmann says that "The Importance of Being Earnest" touched on many themes Wilde had been building since the 1880s – the languor of aesthetic poses was well established and Wilde takes it as a starting point for the two protagonists. While "Salome", "An Ideal Husband" and "The Picture of Dorian Gray" had dwelt on more serious wrongdoing, vice in "Earnest" is represented by Algy's craving for cucumber sandwiches. Wilde told Robert Ross that the play's theme was "That we should treat all trivial things in life very seriously, and all serious things of life with a sincere and studied triviality." The theme is hinted at in the play's ironic title, and "earnestness" is repeatedly alluded to in the dialogue, Algernon says in Act II, "one has to be serious about something if one is to have any amusement in life" but goes on to reproach Jack for 'being serious about everything'". Blackmail and corruption had haunted the double lives of Dorian Gray and Sir Robert Chiltern (in "An Ideal Husband"), but in "Earnest" the protagonists' duplicity (Algernon's "bunburying" and Worthing's double life as Jack and Ernest) is undertaken for more innocent purposes – largely to avoid unwelcome social obligations. While much theatre of the time tackled serious social and political issues, "Earnest" is superficially about nothing at all. It "refuses to play the game" of other dramatists of the period, for instance Bernard Shaw, who used their characters to draw audiences to grander ideals. The play repeatedly mocks Victorian traditions and social customs, marriage and the pursuit of love in particular. In Victorian times "earnestness" was considered to be the over-riding societal value, originating in religious attempts to reform the lower classes, it spread to the upper ones too throughout the century. The play's very title, with its mocking paradox (serious people are so because they do "not" see trivial comedies), introduces the theme, it continues in the drawing room discussion, "Yes, but you must be serious about it. I hate people who are not serious about meals. It is so shallow of them," says Algernon in Act 1; allusions are quick and from multiple angles. Wilde managed both to engage with and to mock the genre, while providing social commentary and offering reform. The men follow traditional matrimonial rites, whereby suitors admit their weaknesses to their prospective brides, but the foibles they excuse are ridiculous, and the farce is built on an absurd confusion of a book and a baby. When Jack apologises to Gwendolen during his marriage proposal it is for "not" being wicked: JACK: Gwendolen, it is a terrible thing for a man to find out suddenly that all his life he has been speaking nothing but the truth. Can you forgive me? GWENDOLEN: I can. For I feel that you are sure to change. In turn, both Gwendolen and Cecily have the ideal of marrying a man named Ernest, a popular and respected name at the time. Gwendolen, quite unlike her mother's methodical analysis of John Worthing's suitability as a husband, places her entire faith in a Christian name, declaring in Act I, "The only really safe name is Ernest". This is an opinion shared by Cecily in Act II, "I pity any poor married woman whose husband is not called Ernest" and they indignantly declare that they have been deceived when they find out the men's real names. Wilde embodied society's rules and rituals artfully into Lady Bracknell: minute attention to the details of her style created a comic effect of assertion by restraint. In contrast to her encyclopaedic knowledge of the social distinctions of London's street names, Jack's obscure parentage is subtly evoked. He defends himself against her "A handbag?" with the clarification, "The Brighton Line". At the time, Victoria Station consisted of two separate but adjacent terminal stations sharing the same name. To the east was the ramshackle LC&D Railway, on the west the up-market LB&SCR – the Brighton Line, which went to Worthing, the fashionable, expensive town the gentleman who found baby Jack was travelling to at the time (and after which Jack was named). Queer scholars have argued that the play's themes of duplicity and ambivalence are inextricably bound up with Wilde's homosexuality, and that the play exhibits a "flickering presence-absence of… homosexual desire". On re-reading the play after his release from prison, Wilde said: "It was extraordinary reading the play over. How I used to toy with that Tiger Life." As one scholar has put it, the absolute necessity for homosexuals of the period to "need a public mask is a factor contributing to the satire on social disguise." It has been claimed that the use of the name Earnest may have been a homosexual in-joke. In 1892, three years before Wilde wrote the play, John Gambril Nicholson had published the book of pederastic poetry "Love in Earnest". The sonnet "Of Boys' Names" included the verse: "Though Frank may ring like silver bell / And Cecil softer music claim / They cannot work the miracle / –'Tis Ernest sets my heart a-flame." The word "earnest" may also have been a code-word for homosexual, as in: "Is he earnest?", in the same way that "Is he so?" and "Is he musical?" were employed. Sir Donald Sinden, an actor who had met two of the play's original cast (Irene Vanbrugh and Allan Aynesworth), and Lord Alfred Douglas, wrote to "The Times" to dispute suggestions that "Earnest" held any sexual connotations: Although they had ample opportunity, at no time did any of them even hint that "Earnest" was a synonym for homosexual, or that "bunburying" may have implied homosexual sex. The first time I heard it mentioned was in the 1980s and I immediately consulted Sir John Gielgud whose own performance of Jack Worthing in the same play was legendary and whose knowledge of theatrical lore was encyclopaedic. He replied in his ringing tones: "No-No! Nonsense, absolute nonsense: I would have known". A number of theories have also been put forward to explain the derivation of Bunbury, and Bunburying, which are used in the play to imply a secretive double life. It may have derived from Henry Shirley Bunbury, a hypochondriacal acquaintance of Wilde's youth. Another suggestion, put forward in 1913 by Aleister Crowley, who knew Wilde, was that Bunbury was a combination word: that Wilde had once taken a train to Banbury, met a schoolboy there, and arranged a second secret meeting with him at Sunbury. Bunburying is a stratagem used by people who need an excuse for avoiding social obligations in their daily life. The word "bunburying" first appears in Act I when Algernon explains that he invented a fictional friend, a chronic invalid named "Bunbury", to have an excuse for getting out of events he does not wish to attend, particularly with his Aunt Augusta (Lady Bracknell). Algernon and Jack both use this method to secretly visit their lovers, Cecily and Gwendolen. While Wilde had long been famous for dialogue and his use of language, Raby (1988) argues that he achieved a unity and mastery in "Earnest" that was unmatched in his other plays, except perhaps "Salomé". While his earlier comedies suffer from an unevenness resulting from the thematic clash between the trivial and the serious, "Earnest" achieves a pitch-perfect style that allows these to dissolve. There are three different registers detectable in the play. The dandyish insouciance of Jack and Algernon – established early with Algernon's exchange with his manservant – betrays an underlying unity despite their differing attitudes. The formidable pronouncements of Lady Bracknell are as startling for her use of hyperbole and rhetorical extravagance as for her disconcerting opinions. In contrast, the speech of Dr. Chasuble and Miss Prism is distinguished by "pedantic precept" and "idiosyncratic diversion". Furthermore, the play is full of epigrams and paradoxes. Max Beerbohm described it as littered with "chiselled apophthegms – witticisms unrelated to action or character", of which he found half a dozen to be of the highest order. Lady Bracknell's line, "A handbag?", has been called one of the most malleable in English drama, lending itself to interpretations ranging from incredulous or scandalised to baffled. Edith Evans, both on stage and in the 1952 film, delivered the line loudly in a mixture of horror, incredulity and condescension. Stockard Channing, in the Gaiety Theatre, Dublin in 2010, hushed the line, in a critic's words, "with a barely audible 'A handbag?', rapidly swallowed up with a sharp intake of breath. An understated take, to be sure, but with such a well-known play, packed full of witticisms and aphorisms with a life of their own, it's the little things that make a difference." Though Wilde deployed characters that were by now familiar – the dandy lord, the overbearing matriarch, the woman with a past, the puritan young lady – his treatment is subtler than in his earlier comedies. Lady Bracknell, for instance, embodies respectable, upper-class society, but Eltis notes how her development "from the familiar overbearing duchess into a quirkier and more disturbing character" can be traced through Wilde's revisions of the play. For the two young men, Wilde presents not stereotypical stage "dudes" but intelligent beings who, as Jackson puts it, "speak like their creator in well-formed complete sentences and rarely use slang or vogue-words". Dr Chasuble and Miss Prism are characterised by a few light touches of detail, their old-fashioned enthusiasms, and the Canon's fastidious pedantry, pared down by Wilde during his many redrafts of the text. Ransome argues that Wilde freed himself by abandoning the melodrama, the basic structure which underlies his earlier social comedies, and basing the story entirely on the Earnest/Ernest verbal conceit. Freed from "living up to any drama more serious than conversation" Wilde could now amuse himself to a fuller extent with quips, , epigrams and repartee that really had little to do with the business at hand. The genre of the "Importance of Being Earnest" has been deeply debated by scholars and critics alike who have placed the play within a wide variety of genres ranging from parody to satire. In his critique of Wilde, Foster argues that the play creates a world where "real values are inverted [and], reason and unreason are interchanged". Similarly, Wilde's use of dialogue mocks the upper classes of Victorian England lending the play a satirical tone. Reinhart further stipulates that the use of farcical humour to mock the upper classes "merits the play both as satire and as drama". Wilde's two final comedies, "An Ideal Husband" and "The Importance of Being Earnest", were still on stage in London at the time of his prosecution, and they were soon closed as the details of his case became public. After two years in prison with hard labour, Wilde went into exile in Paris, sick and depressed, his reputation destroyed in England. In 1898, when no one else would, Leonard Smithers agreed with Wilde to publish the two final plays. Wilde proved to be a diligent reviser, sending detailed instructions on stage directions, character listings and the presentation of the book, and insisting that a playbill from the first performance be reproduced inside. Ellmann argues that the proofs show a man "very much in command of himself and of the play". Wilde's name did not appear on the cover, it was "By the Author of "Lady Windermere's Fan"". His return to work was brief though, as he refused to write anything else, "I can write, but have lost the joy of writing". On 19 October 2007, a first edition (number 349 of 1,000) was discovered inside a handbag in an Oxfam shop in Nantwich, Cheshire. Staff were unable to trace the donor. It was sold for £650. "The Importance of Being Earnest"s popularity has meant it has been translated into many languages, though the homophonous pun in the title ("Ernest", a masculine proper name, and "earnest", the virtue of steadfastness and seriousness) poses a special problem for translators. The easiest case of a suitable translation of the pun, perpetuating its sense and meaning, may have been its translation into German. Since English and German are closely related languages, German provides an equivalent adjective ("ernst") and also a matching masculine proper name ("Ernst"). The meaning and tenor of the wordplay are exactly the same. Yet there are many different possible titles in German, mostly concerning sentence structure. The two most common ones are "Bunbury oder ernst / Ernst sein ist alles" and "Bunbury oder wie wichtig es ist, ernst / Ernst zu sein". In a study of Italian translations, Adrian Pablé found thirteen different versions using eight titles. Since wordplay is often unique to the language in question, translators are faced with a choice of either staying faithful to the original – in this case the English adjective and virtue "earnest" – or creating a similar pun in their own language. Four main strategies have been used by translators. The first leaves all characters' names unchanged and in their original spelling: thus the name is respected and readers reminded of the original cultural setting, but the liveliness of the pun is lost. Eva Malagoli varied this source-oriented approach by using both the English Christian names and the adjective "earnest", thus preserving the pun and the English character of the play, but possibly straining an Italian reader. A third group of translators replaced "Ernest" with a name that also represents a virtue in the target language, favouring transparency for readers in translation over fidelity to the original. For instance, in Italian, these versions variously call the play "L'importanza di essere Franco/Severo/Fedele", the given names being respectively the values of honesty, propriety, and loyalty. French offers a closer pun: "" is both a first name and the quality of steadfastness, so the play is commonly known as "De l'importance d'être Constant", though Jean Anouilh translated the play under the title: "Il est important d'être Aimé" ("" is a name which also means "beloved"). These translators differ in their attitude to the original English honorific titles, some change them all, or none, but most leave a mix partially as a compensation for the added loss of Englishness. Lastly, one translation gave the name an Italianate touch by rendering it as "Ernesto"; this work liberally mixed proper nouns from both languages. Apart from several "made-for-television" versions, "The Importance of Being Earnest" has been adapted for the English-language cinema at least three times, first in 1952 by Anthony Asquith who adapted the screenplay and directed it. Michael Denison (Algernon), Michael Redgrave (Jack), Edith Evans (Lady Bracknell), Dorothy Tutin (Cecily), Joan Greenwood (Gwendolen), and Margaret Rutherford (Miss Prism) and Miles Malleson (Canon Chasuble) were among the cast. In 1992 Kurt Baker directed a version using an all-black cast with Daryl Keith Roach as Jack, Wren T. Brown as Algernon, Ann Weldon as Lady Bracknell, Lanei Chapman as Cecily, Chris Calloway as Gwendolen, CCH Pounder as Miss Prism, and Brock Peters as Doctor Chasuble, set in the United States. Oliver Parker, an English director who had previously adapted "An Ideal Husband" by Wilde, made the 2002 film; it stars Colin Firth (Jack), Rupert Everett (Algy), Judi Dench (Lady Bracknell), Reese Witherspoon (Cecily), Frances O'Connor (Gwendolen), Anna Massey (Miss Prism), and Tom Wilkinson (Canon Chasuble). Parker's adaptation includes the dunning solicitor Mr. Gribsby who pursues "Ernest" to Hertfordshire (present in Wilde's original draft, but cut at the behest of the play's first producer). Algernon too is pursued by a group of creditors in the opening scene. In 1960, "Ernest in Love" was staged Off-Broadway. The Japanese all-female musical theatre troupe Takarazuka Revue staged this musical in 2005 in two productions, one by Moon Troupe and the other one by Flower Troupe. In 1963, Erik Chisholm composed an opera from the play, using Wilde's text as the libretto. In 1964, Gerd Natschinski composed the musical "Mein Freund Bunbury" based on the play, 1964 premiered at Metropol Theater Berlin. According to a study by Robert Tanitch, by 2002 there had been least eight adaptations of the play as a musical, though "never with conspicuous success". The earliest such version was a 1927 American show entitled "Oh Earnest". The journalist Mark Bostridge comments, "The libretto of a 1957 musical adaptation, "Half in Earnest", deposited in the British Library, is scarcely more encouraging. The curtain rises on Algy strumming away at the piano, singing 'I can play "Chopsticks", Lane'. Other songs include 'A Bunburying I Must Go'." Gerald Barry created the 2011 opera, "The Importance of Being Earnest", commissioned by the Los Angeles Philharmonic and the Barbican Centre in London. It was premiered in Los Angeles in 2011. The stage premiere was given by the Opéra national de Lorraine in Nancy, France in 2013. In 2017, Odyssey Opera of Boston presented a fully staged production of Mario Castelnuovo-Tedesco’s opera "The Importance of Being Earnest" as part of their Wilde Opera Nights series which was a season-long exploration of operatic works inspired by the writings and world of Oscar Wilde. The opera for two pianos, percussion and singers was composed in 1961-2. It is filled with musical quotes at every turn. The opera was never published, but it was performed twice: the premiere in Monte Carlo (1972 in Italian) and in La Guardia, NY (1975). Odyssey Opera was able to obtain the manuscript from the Library of Congress with the permission of the composer’s granddaughter. After Odyssey's production at the Wimberly Theatre, Calderwood Pavilion at the Boston Center for the Arts on March 17th and 18th, being received with critical acclaim, The Boston Globe stated "Odyssey Opera recognizes ‘The Importance of Being Earnest.’" In 2016 Irish actor/writers Helen Norton and Jonathan White wrote the comic play "To Hell in a Handbag" which retells the story of "Importance" from the point of view of the characters Canon Chasuble and Miss Prism, giving them their own back story and showing what happens to them when they are not on stage in Wilde's play. There have been many radio versions of the play. In 1925 the BBC broadcast an adaptation with Hesketh Pearson as Jack Worthing. Further broadcasts of the play followed in 1927 and 1936. In 1977, BBC Radio 4 broadcast the four-act version of the play, with Fabia Drake as Lady Bracknell, Richard Pasco as Jack, Jeremy Clyde as Algy, Maurice Denham as Canon Chasuble, Sylvia Coleridge as Miss Prism, Barbara Leigh-Hunt as Gwendolen and Prunella Scales as Cecily. The production was later released on CD. To commemorate the centenary of the first performance of the play, Radio 4 broadcast a new adaptation on 13 February 1995; directed by Glyn Dearman, it featured Judi Dench as Lady Bracknell, Michael Hordern as Lane, Michael Sheen as Jack Worthing, Martin Clunes as Algernon Moncrieff, John Moffatt as Canon Chasuble, Miriam Margolyes as Miss Prism, Samantha Bond as Gwendolen and Amanda Root as Cecily. The production was later issued on audio cassette. On 13 December 2000, BBC Radio 3 broadcast a new adaptation directed by Howard Davies starring Geraldine McEwan as Lady Bracknell, Simon Russell Beale as Jack Worthing, Julian Wadham as Algernon Moncrieff, Geoffrey Palmer as Canon Chasuble, Celia Imrie as Miss Prism, Victoria Hamilton as Gwendolen and Emma Fielding as Cecily, with music composed by Dominic Muldowney. The production was released on audio cassette. A 1964 commercial television adaptation starred Ian Carmichael, Patrick Macnee, Susannah York, Fenella Fielding, Pamela Brown and Irene Handl. BBC television transmissions of the play have included a 1974 "Play of the Month" version starring Coral Browne as Lady Bracknell with Michael Jayston, Julian Holloway, Gemma Jones and Celia Bannerman. Stuart Burge directed another adaptation in 1986 with a cast including Gemma Jones, Alec McCowen, Paul McGann and Joan Plowright. It was adapted for Australian TV in 1957. Gielgud's performance is preserved on an EMI audio recording dating from 1952, which also captures Edith Evans's Lady Bracknell. The cast also includes Roland Culver (Algy), Jean Cadell (Miss Prism), Pamela Brown (Gwendolen) and Celia Johnson (Cecily). Other audio recordings include a "Theatre Masterworks" version from 1953, directed and narrated by Margaret Webster, with a cast including Maurice Evans, Lucile Watson and Mildred Natwick; a 1989 version by California Artists Radio Theatre, featuring Dan O'Herlihy Jeanette Nolan, Les Tremayne and Richard Erdman; and one by L.A. Theatre Works issued in 2009, featuring Charles Busch, James Marsters and Andrea Bowen.
https://en.wikipedia.org/wiki?curid=31067
Analogy of the divided line The analogy of the divided line () is presented by the Greek philosopher Plato in the "Republic" (509d–511e). It is written as a dialogue between Glaucon and Socrates, in which the latter further elaborates upon the immediately preceding Analogy of the Sun at the former's request. Socrates asks Glaucon to not only envision this unequally bisected line but to imagine further bisecting each of the two segments. Socrates explains that the four resulting segments represent four separate 'affections' (παθήματα) of the psyche. The lower two sections are said to represent the visible while the higher two are said to represent the intelligible. These affections are described in succession as corresponding to increasing levels of reality and truth from conjecture (εἰκασία) to belief () to thought (διάνοια) and finally to understanding (). Furthermore, this analogy not only elaborates a theory of the psyche but also presents metaphysical and epistemological views. In "The Republic" (509d–510a), Plato describes the divided line this way: Thus AB represents shadows and reflections of physical things, and BC the physical things themselves. These correspond to two kinds of knowledge, the illusion (εἰκασία "eikasia") of our ordinary, everyday experience, and belief (πίστις "pistis") about discrete physical objects which cast their shadows. In the "Timaeus", the category of illusion includes all the "opinions of which the minds of ordinary people are full," while the natural sciences are included in the category of belief. According to some translations, the segment CE, representing the intelligible world, is divided into the same ratio as AC, giving the subdivisions CD and DE (it can be readily verified that CD must have the same length as BC: Plato describes CD, the "lower" of these, as involving mathematical reasoning (διάνοια "dianoia"), where abstract mathematical objects such as geometric lines are discussed. Such objects are outside the physical world (and are not to be confused with the "drawings" of those lines, which fall within the physical world BC). However, they are less important to Plato than the subjects of philosophical understanding (νόησις "noesis"), the "higher" of these two subdivisions (DE): Plato here is using the familiar relationship between ordinary objects and their shadows or reflections in order to illustrate the relationship between the physical world as a whole and the world of Ideas (Forms) as a whole. The former is made up of a series of passing reflections of the latter, which is eternal, more real and "true." Moreover, the knowledge that we have of the Ideas – when indeed we do have it – is of a higher order than knowledge of the mere physical world. In particular, knowledge of the forms leads to a knowledge of the Idea (Form) of the Good. The analogy of the divided line is the cornerstone of Plato's metaphysical framework. This structure illustrates the grand picture of Plato's metaphysics, epistemology, and ethics, all in one. It is not enough for the philosopher to understand the Ideas (Forms), he must also understand the relation of Ideas to all four levels of the structure to be able to know anything at all. In the "Republic", the philosopher must understand the Idea of Justice to live a just life or to organize and govern a just state. The divided line also serves as our guide for most past and future metaphysics. The lowest level, which represents "the world of becoming and passing away" ("Republic", 508d), is the metaphysical model for a Heraclitean philosophy of constant flux and for Protagorean philosophy of appearance and opinion. The second level, a world of fixed physical objects, also became Aristotle's metaphysical model. The third level might be a Pythagorean level of mathematics. The fourth level is Plato's ideal Parmenidean reality, the world of highest level Ideas. Plato holds a very strict notion of knowledge. For example, he does not accept expertise about a subject, nor direct perception (see "Theaetetus"), nor true belief about the physical world (the "Meno") as knowledge. It is not enough for the philosopher to understand the Ideas (Forms), he must also understand the relation of Ideas to all four levels of the structure to be able to know anything at all. For this reason, in most of the earlier Socratic dialogues, Socrates denies knowledge both to himself and others. For the first level, "the world of becoming and passing away," Plato expressly denies the possibility of knowledge. Constant change never stays the same, therefore, properties of objects must refer to different Ideas at different times. Note that for knowledge to be possible, which Plato believed, the other three levels must be unchanging. The third and fourth level, mathematics and Ideas, are already eternal and unchanging. However, to ensure that the second level, the objective, physical world, is also unchanging, Plato, in the "Republic", Book 4 introduces empirically derived axiomatic restrictions that prohibit both motion and shifting perspectives.
https://en.wikipedia.org/wiki?curid=31068
Themistocles Themistocles (; "Themistoklẽs"; "Glory of the Law"; c. 524–459 BC) was an Athenian politician and general. He was one of a new breed of non-aristocratic politicians who rose to prominence in the early years of the Athenian democracy. As a politician, Themistocles was a populist, having the support of lower-class Athenians, and generally being at odds with the Athenian nobility. Elected archon in 493 BC, he convinced the polis to increase the naval power of Athens, a recurring theme in his political career. During the first Persian invasion of Greece he fought at the Battle of Marathon (490 BC) and was possibly one of the ten Athenian "strategoi" (generals) in that battle. In the years after Marathon, and in the run-up to the second Persian invasion of 480–479 BC, Themistocles became the most prominent politician in Athens. He continued to advocate for a strong Athenian Navy, and in 483 BC he persuaded the Athenians to build a fleet of 200 triremes; these proved crucial in the forthcoming conflict with Persia. During the second invasion, he effectively commanded the Greek allied navy at the battles of Artemisium and Salamis in 480 BC. Due to his subterfuge, the Allies successfully lured the Persian fleet into the Straits of Salamis, and the decisive Greek victory there was the turning point of the war. The invasion was conclusively repulsed the following year after the Persian defeat at the land Battle of Plataea. After the conflict ended, Themistocles continued his pre-eminence among Athenian politicians. However, he aroused the hostility of Sparta by ordering the re-fortification of Athens, and his perceived arrogance began to alienate him from the Athenians. In 472 or 471 BC, he was ostracised, and went into exile in Argos. The Spartans now saw an opportunity to destroy Themistocles, and implicated him in the alleged treasonous plot of 478 BC of their own general Pausanias. Themistocles thus fled from Greece. Alexander I of Macedon (r. 498–454 BC) temporarily gave him sanctuary at Pydna before he traveled to Asia Minor, where he entered the service of the Persian king Artaxerxes I (reigned 465–424 BC). He was made governor of Magnesia, and lived there for the rest of his life. Themistocles died in 459 BC, probably of natural causes. His reputation was posthumously rehabilitated, and he was re-established as a hero of the Athenian (and indeed Greek) cause. Themistocles can still reasonably be thought of as "the man most instrumental in achieving the salvation of Greece" from the Persian threat, as Plutarch describes him. His naval policies would have a lasting impact on Athens as well, since maritime power became the cornerstone of the Athenian Empire and golden age. Thucydides assessed Themistocles as "a man who exhibited the most indubitable signs of genius; indeed, in this particular he has a claim on our admiration quite extraordinary and unparalleled". Themistocles was born in the Attic deme of Phrearrhioi around 524 BC, the son of Neocles, who was, in the words of Plutarch "no very conspicuous man". His mother is more obscure; according to Plutarch, she was either a Thracian woman called Abrotonon, or Euterpe, a Carian from Halicarnassus. Like many contemporaries, little is known of his early years. Some authors report that he was unruly as a child and was consequently disowned by his father. Plutarch considers this to be false. Plutarch indicates that, on account of his mother's background, Themistocles was considered something of an outsider; furthermore the family appear to have lived in an immigrant district of Athens, Cynosarges, outside the city walls. However, in an early example of his cunning, Themistocles persuaded "well-born" children to exercise with him in Cynosarges, thus breaking down the distinction between "alien and legitimate". Plutarch further reports that Themistocles was preoccupied, even as a child, with preparing for public life. His teacher is said to have told him: "My boy, you will be nothing insignificant, but definitely something great, either for good or evil." Themistocles left three sons by Archippe, daughter to Lysander of Alopece: Archeptolis, Polyeuctus, and Cleophantus. Plato the philosopher mentions Cleophantus as a most excellent horseman, but otherwise insignificant person. And Themistocles had two sons older than these three, Neocles and Diocles. Neocles died when he was young, bitten by a horse, and Diocles was adopted by his grandfather, Lysander. Themistocles had many daughters: Mnesiptolema, the product of his second marriage, married her step-brother Archeptolis and became priestess of Cybele; Italia was married to Panthoides of Chios; and Sybaris to Nicomedes the Athenian. After Themistocles died, his nephew Phrasicles went to Magnesia and married another daughter, Nicomache (with her brothers' consent). Phrasicles then took charge of her sister Asia, the youngest of all ten children. Themistocles grew up in a period of upheaval in Athens. The tyrant Peisistratos had died in 527 BC, passing power to his sons, Hipparchus and Hippias. Hipparchus was murdered in 514 BC, and in response to this, Hippias became paranoid and started to rely increasingly on foreign mercenaries to keep a hold on power. The head of the powerful, but exiled (according to Herodotus only—the fragmentary Archon List for 525/4 shows a Cleisthenes, an Alcmaeonid, holding office in Athens during this period) Alcmaeonid family, Cleisthenes, began to scheme to overthrow Hippias and return to Athens. In 510 BC, he persuaded the Spartan king Cleomenes I to launch a full-scale attack on Athens, which succeeded in overthrowing Hippias. However, in the aftermath, the other noble ('eupatrid') families of Athens rejected Cleisthenes, electing Isagoras as archon, with the support of Cleomenes. On a personal level, Cleisthenes wanted to return to Athens; however, he also probably wanted to prevent Athens becoming a Spartan client state. Outmaneuvering the other nobles, he proposed to the Athenian people a radical program in which political power would be invested in the people—a "democracy". The Athenian people thus overthrew Isagoras, repelled a Spartan attack under Cleomenes, and invited Cleisthenes to return to Athens, to put his plan into action. The establishment of the democracy was to radically change Athens:"And so it was that the Athenians found themselves suddenly a great power... they gave vivid proof of what equality and freedom of speech might achieve" The new system of government in Athens opened up a wealth of opportunity for men like Themistocles, who previously would have had no access to power. Moreover, the new institutions of the democracy required skills that had previously been unimportant in government. Themistocles was to prove himself a master of the new system; "he could infight, he could network, he could spin... and crucially, he knew how to make himself visible." Themistocles moved to the Ceramicus, a down-market part of Athens. This move marked him out as a 'man of the people', and allowed him to interact more easily with ordinary citizens. He began building up a support base among these newly empowered citizens: "he wooed the poor; and they, not used to being courted, duly loved him back. Touring the taverns, the markets, the docks, canvassing where no politician had thought to canvas before, making sure never to forget a single voter's name, Themistocles had set his eyes on a radical new constituency" However, he took care to ensure that he did not alienate the nobility of Athens. He began to practice law, the first person in Athens to prepare for public life in this way. His ability as attorney and arbitrator, used in the service of the common people, gained him further popularity. Themistocles probably turned 30 in 494 BC, which qualified him to become an archon, the highest of the magistracies in Athens. On the back of his popularity, he evidently decided to run for this office and was elected Archon Eponymous, the highest government office in the following year (493 BC). Themistocles's archonship saw the beginnings of a major theme in his career; the advancement of Athenian sea-power. Under his guidance, the Athenians began the building of a new port at Piraeus, to replace the existing facilities at Phalerum. Although further away from Athens, Piraeus offered three natural harbours, and could be easily fortified. Since Athens was to become an essentially maritime power during the 5th century BC, Themistocles's policies were to have huge significance for the future of Athens, and indeed Greece. In advancing naval power, Themistocles was probably advocating a course of action he thought essential for the long-term prospects of Athens. However, as Plutarch implies, since naval power relied on the mass mobilisation of the common citizens ("thetes") as rowers, such a policy put more power into the hands of average Athenians—and thus into Themistocles's own hands. After Marathon, probably in 489, Miltiades, the hero of the battle, was seriously wounded in an abortive attempt to capture Paros. Taking advantage of his incapacitation, the powerful Alcmaeonid family arranged for him to be prosecuted. The Athenian aristocracy, and indeed Greek aristocrats in general, were loath to see one person pre-eminent, and such maneuvers were commonplace. Miltiades was given a massive fine for the crime of 'deceiving the Athenian people', but died weeks later as a result of his wound. In the wake of this prosecution, the Athenian people chose to use a new institution of the democracy, which had been part of Cleisthenes's reforms, but remained so far unused. This was 'ostracism'—each Athenian citizen was required to write on a shard of pottery ("ostrakon") the name of a politician that they wished to see exiled for a period of ten years. This may have been triggered by Miltiades's prosecution, and used by the Athenians to try to stop such power-games among the noble families. Certainly, in the years (487 BC) following, the heads of the prominent families, including the Alcmaeonids, were exiled. The career of a politician in Athens thus became fraught with more difficulty, since displeasing the population was likely to result in exile. Themistocles, with his power-base firmly established among the poor, moved naturally to fill the vacuum left by Miltiades's death, and in that decade became the most influential politician in Athens. However, the support of the nobility began to coalesce around the man who would become Themistocles's great rival—Aristides. Aristides cast himself as Themistocles's opposite—virtuous, honest and incorruptible—and his followers called him "the just". Plutarch suggests that the rivalry between the two had begun when they competed over the love of a boy: "... they were rivals for the affection of the beautiful Stesilaus of Ceos, and were passionate beyond all moderation." During the decade, Themistocles continued to advocate the expansion of Athenian naval power. The Athenians were certainly aware throughout this period that the Persian interest in Greece had not ended; Darius's son and successor, Xerxes I, had continued the preparations for the invasion of Greece. Themistocles seems to have realised that for the Greeks to survive the coming onslaught required a Greek navy that could hope to face up to the Persian navy, and he therefore attempted to persuade the Athenians to build such a fleet. Aristides, as champion of the "zeugites" (the upper, 'hoplite-class') vigorously opposed such a policy. In 483 BC, a massive new seam of silver was found in the Athenian mines at Laurium. Themistocles proposed that the silver should be used to build a new fleet of 200 triremes, while Aristides suggested it should instead be distributed among the Athenian citizens. Themistocles avoided mentioning Persia, deeming that it was too distant a threat for the Athenians to act on, and instead focused their attention on Aegina. At the time, Athens was embroiled in a long-running war with the Aeginetans, and building a fleet would allow the Athenians to finally defeat them at sea. As a result, Themistocles's motion was carried easily, although only 100 warships of the trireme type were to be built. Aristides refused to countenance this; conversely Themistocles was not pleased that only 100 ships would be built. Tension between the two camps built over the winter, so that the ostracism of 482 BC became a direct contest between Themistocles and Aristides. In what has been characterized as the first referendum, Aristides was ostracised, and Themistocles's policies were endorsed. Indeed, becoming aware of the Persian preparations for the coming invasion, the Athenians voted for the construction of more ships than Themistocles had initially asked for. In the run up to the Persian invasion, Themistocles had thus become the foremost politician in Athens. In 481 BC, a congress of Greek city-states was held, during which 30 or so states agreed to ally themselves against the forthcoming invasion. The Spartans and Athenians were foremost in this alliance, being sworn enemies of the Persians. The Spartans claimed the command of land forces, and since the Greek (hereafter referred to as "Allied") fleet would be dominated by Athens, Themistocles tried to claim command of the naval forces. However, the other naval powers, including Corinth and Aegina refused to give command to the Athenians, and Themistocles pragmatically backed down. Instead, as a compromise, the Spartans (an insignificant naval power), in the person of Eurybiades were to command the naval forces. It is clear from Herodotus, however, that Themistocles would be the real leader of the fleet. The 'congress' met again in the spring of 480 BC. A Thessalian delegation suggested that the allies could muster in the narrow Vale of Tempe, on the borders of Thessaly, and thereby block Xerxes's advance. A force of 10,000 hoplites was dispatched under the command of the Spartan polemarch Euenetus and Themistocles to the Vale of Tempe, which they believed the Persian army would have to pass through. However, once there, Alexander I of Macedon warned them that the vale could be bypassed by several other passes, and that the army of Xerxes was overwhelmingly large, and the Greeks retreated. Shortly afterwards, they received the news that Xerxes had crossed the Hellespont. Themistocles now developed a second strategy. The route to southern Greece (Boeotia, Attica and the Peloponnesus) would require the army of Xerxes to travel through the very narrow pass of Thermopylae. This could easily be blocked by the Greek hoplites, despite the overwhelming numbers of Persians; furthermore, to prevent the Persians bypassing Thermopylae by sea, the Athenian and allied navies could block the straits of Artemisium. However, after the Tempe debacle, it was uncertain whether the Spartans would be willing to march out from the Peloponnesus again. To persuade the Spartans to defend Attica, Themistocles had to show them that the Athenians were willing to do everything necessary for the success of the alliance. In short, the entire Athenian fleet must be dispatched to Artemisium. To do this, every able-bodied Athenian male would be required to man the ships. This in turn meant that the Athenians must prepare to abandon Athens. Persuading the Athenians to take this course was undoubtedly one of the highlights of Themistocles's career. As Holland has it: "What precise heights of oratory he attained, what stirring and memorable phrases he pronounced, we have no way of knowing...only by the effect it had on the assembly can we gauge what surely must have been its electric and vivifying quality—for Themistocles' audacious proposals, when put to the vote, were ratified. The Athenian people, facing the gravest moment of peril in their history, committed themselves once and for all to the alien element of the sea, and put their faith in a man whose ambitions many had long profoundly dreaded." His proposals accepted, Themistocles issued orders for the women and children of Athens to be sent to the city of Troezen, safely inside the Peloponnesus. He was then able to travel to a meeting of the Allies, at which he proposed his strategy; with the Athenian fleet fully committed to the defence of Greece, the other Allies accepted his proposals. Thus, in August 480 BC, when the Persian army was approaching Thessaly, the Allied fleet sailed to Artemisium, and the Allied army marched to Thermopylae. Themistocles himself took command of the Athenian contingent of the fleet, and went to Artemisium. When the Persian fleet finally arrived at Artemisium after a significant delay, Eurybiades, who both Herodotus and Plutarch suggest was not the most inspiring commander, wished to sail away without fighting. At this point Themistocles accepted a large bribe from the local people for the fleet to remain at Artemisium, and used some of it to bribe Eurybiades to remain, while pocketing the rest. From this point on, Themistocles appears to have been more-or-less in charge of the Allied effort at Artemisium. Over three days of battle, the Allies held their own against the much larger Persian fleet, but sustained significant losses. However, the loss of the simultaneous Battle of Thermopylae to the Persians made their continued presence at Artemisium irrelevant, and the Allies thus evacuated. According to Herodotus, Themistocles left messages at every place where the Persian fleet might stop for drinking water, asking the Ionians in the Persian fleet to defect, or at least fight badly. Even if this did not work, Themistocles apparently intended that Xerxes would at least begin to suspect the Ionians, thereby sowing dissension in the Persian ranks. In the aftermath of Thermopylae, Boeotia fell to the Persians, who then began to advance on Athens. The Peloponnesian Allies prepared to now defend the Isthmus of Corinth, thus abandoning Athens to the Persians. From Artemisium, the Allied fleet sailed to the island of Salamis, where the Athenian ships helped with the final evacuation of Athens. The Peloponnesian contingents wanted to sail to the coast of the Isthmus to concentrate forces with the army. However, Themistocles tried to convince them to remain in the Straits of Salamis, invoking the lessons of Artemisium; "battle in close conditions works to our advantage". After threatening to sail with the whole Athenian people into exile in Sicily, he eventually persuaded the other Allies, whose security after all relied on the Athenian navy, to accept his plan. Therefore, even after Athens had fallen to the Persians, and the Persian navy had arrived off the coast of Salamis, the Allied navy remained in the Straits. Themistocles appears to have been aiming to fight a battle that would cripple the Persian navy, and thus guarantee the security of the Peloponnesus. To bring about this battle, Themistocles used a cunning mix of subterfuge and misinformation, psychologically exploiting Xerxes's desire to finish the invasion. Xerxes's actions indicate that he was keen to finish the conquest of Greece in 480 BC, and to do this, he needed a decisive victory over the Allied fleet. Themistocles sent a servant, Sicinnus, to Xerxes, with a message proclaiming that Themistocles was "on king's side and prefers that your affairs prevail, not the Hellenes". Themistocles claimed that the Allied commanders were infighting, that the Peloponnesians were planning to evacuate that very night, and that to gain victory all the Persians needed to do was to block the straits. In performing this subterfuge, Themistocles seems to have been trying to lure the Persian fleet into the Straits. The message also had a secondary purpose, namely that in the event of an Allied defeat, the Athenians would probably receive some degree of mercy from Xerxes (having indicated their readiness to submit). At any rate, this was exactly the kind of news that Xerxes wanted to hear. Xerxes evidently took the bait, and the Persian fleet was sent out to effect the block. Perhaps overconfident and expecting no resistance, the Persian navy sailed into the Straits, only to find that, far from disintegrating, the Allied navy was ready for battle. According to Herodotus, after the Persian navy began its maneuvers, Aristides arrived at the Allied camp from Aegina. Aristides had been recalled from exile along with the other ostracised Athenians on the order of Themistocles, so that Athens might be united against the Persians. Aristides told Themistocles that the Persian fleet had encircled the Allies, which greatly pleased Themistocles, as he now knew that the Persians had walked into his trap. The Allied commanders seem to have taken this news rather uncomplainingly, and Holland therefore suggests that they were party to Themistocles's ruse all along. Either way, the Allies prepared for battle, and Themistocles delivered a speech to the marines before they embarked on the ships. In the ensuing battle, the cramped conditions in the Straits hindered the much larger Persian navy, which became disarrayed, and the Allies took advantage to win a famous victory. Salamis was the turning point in the second Persian invasion, and indeed the Greco-Persian Wars in general. While the battle did not end the Persian invasion, it effectively ensured that all Greece would not be conquered, and allowed the Allies to go on the offensive in 479 BC. A number of historians believe that Salamis is one of the most significant battles in human history. Since Themistocles' long-standing advocacy of Athenian naval power enabled the Allied fleet to fight, and his stratagem brought about the Battle of Salamis, it is probably not an exaggeration to say, as Plutarch does, that Themistocles, "...is thought to have been the man most instrumental in achieving the salvation of Hellas." The Allied victory at Salamis ended the immediate threat to Greece, and Xerxes now returned to Asia with part of the army, leaving his general Mardonius to attempt to complete the conquest. Mardonius wintered in Boeotia and Thessaly, and the Athenians were thus able to return to their city, which had been burnt and razed by the Persians, for the winter. For the Athenians, and Themistocles personally, the winter would be a testing one. The Peloponnesians refused to countenance marching north of the Isthmus to fight the Persian army; the Athenians tried to shame them into doing so, with no success. During the winter, the Allies held a meeting at Corinth to celebrate their success, and award prizes for achievement. However, perhaps tired of the Athenians pointing out their role at Salamis, and of their demands for the Allies to march north, the Allies awarded the prize for civic achievement to Aegina. Furthermore, although the admirals all voted for Themistocles in second place, they all voted for themselves in first place, so that no-one won the prize for individual achievement. In response, realising the importance of the Athenian fleet to their security, and probably seeking to massage Themistocles's ego, the Spartans brought Themistocles to Sparta. There, he was awarded a special prize "for his wisdom and cleverness", and won high praise from all. Furthermore, Plutarch reports that at the next Olympic Games: After returning to Athens in the winter, Plutarch reports that Themistocles made a proposal to the city while the Greek fleet was wintering at Pagasae: "Themistocles once declared to the people [of Athens] that he had devised a certain measure which could not be revealed to them, though it would be helpful and salutary for the city, and they ordered that Aristides alone should hear what it was and pass judgment on it. So Themistocles told Aristides that his purpose was to burn the naval station of the confederate Hellenes, for that in this way the Athenians would be greatest, and lords of all. Then Aristides came before the people and said of the deed which Themistocles purposed to do, that none other could be more advantageous, and none more unjust. On hearing this, the Athenians ordained that Themistocles cease from his purpose." However, as happened to many prominent individuals in the Athenian democracy, Themistocles's fellow citizens grew jealous of his success, and possibly tired of his boasting. It is probable that in early 479 BC, Themistocles was stripped of his command; instead, Xanthippus was to command the Athenian fleet, and Aristides the land forces. Though Themistocles was no doubt politically and militarily active for the rest of the campaign, no mention of his activities in 479 BC is made in the ancient sources. In the summer of that year, after receiving an Athenian ultimatum, the Peloponnesians finally agreed to assemble an army and march to confront Mardonius, who had reoccupied Athens in June. At the decisive Battle of Plataea, the Allies destroyed the Persian army, while apparently on the same day, the Allied navy destroyed the remnants of the Persian fleet at the Battle of Mycale. These twin victories completed the Allied triumph, and ended the Persian threat to Greece. Whatever the cause of Themistocles's unpopularity in 479 BC, it obviously did not last long. Both Diodorus and Plutarch suggest he was quickly restored to the favour of the Athenians. Indeed, after 479 BC, he seems to have enjoyed a relatively long period of popularity. In the aftermath of the invasion and the Destruction of Athens by the Achaemenids, the Athenians began rebuilding their city under the guidance of Themistocles in the autumn of 479 BC. They wished to restore the fortifications of Athens, but the Spartans objected on the grounds that no place north of the Isthmus should be left that the Persians could use as a fortress. Themistocles urged the citizens to build the fortifications as quickly as possible, then went to Sparta as an ambassador to answer the charges levelled by the Spartans. There, he assured them that no building work was on-going, and urged them to send emissaries to Athens to see for themselves. By the time the ambassadors arrived, the Athenians had finished building, and then detained the Spartan ambassadors when they complained about the presence of the fortifications. By delaying in this manner, Themistocles gave the Athenians enough time to fortify the city, and thus ward off any Spartan attack aimed at preventing the re-fortification of Athens. Furthermore, the Spartans were obliged to repatriate Themistocles in order to free their own ambassadors. However, this episode may be seen as the beginning of the Spartan mistrust of Themistocles, which would return to haunt him. Themistocles also now returned to his naval policy, and more ambitious undertakings that would increase the dominant position of his native state. He further extended and fortified the port complex at Piraeus, and "fastened the city [Athens] to the Piraeus, and the land to the sea". Themistocles probably aimed to make Athens the dominant naval power in the Aegean. Indeed, Athens would create the Delian League in 478 BC, uniting the naval power of the Aegean Islands and Ionia under Athenian leadership. Themistocles introduced tax breaks for merchants and artisans, to attract both people and trade to the city to make Athens a great mercantile centre. He also instructed the Athenians to build 20 triremes per year, to ensure that their dominance in naval matters continued. Plutarch reports that Themistocles also secretly proposed to destroy the beached ships of the other Allied navies to ensure complete naval dominance—but was overruled by Aristides and the council of Athens. It seems clear that, towards the end of the decade, Themistocles had begun to accrue enemies, and had become arrogant; moreover his fellow citizens had become jealous of his prestige and power. The Rhodian poet Timocreon was among his most eloquent enemies, composing slanderous drinking songs. Meanwhile, the Spartans actively worked against him, trying to promote Cimon (son of Miltiades) as a rival to Themistocles. Furthermore, after the treason and disgrace of the Spartan general Pausanias, the Spartans tried to implicate Themistocles in the plot; he was, however, acquitted of these charges. In Athens itself, he lost favour by building a sanctuary of Artemis, with the epithet "Aristoboulẽ" ("of good counsel") near his home, a blatant reference to his own role in delivering Greece from the Persian invasion. Eventually, in either 472 or 471 BC, he was ostracised. In itself, this did not mean that Themistocles had done anything wrong; ostracism, in the words of Plutarch, "was not a penalty, but a way of pacifying and alleviating that jealousy which delights to humble the eminent, breathing out its malice into this disfranchisement." Themistocles first went to live in exile in Argos. However, perceiving that they now had a prime opportunity to bring Themistocles down for good, the Spartans again levelled accusations of Themistocles's complicity in Pausanias's treason. They demanded that he be tried by the 'Congress of Greeks', rather than in Athens, although it seems that in the end he was actually summoned to Athens to stand trial. Perhaps realising he had little hope of surviving this trial, Themistocles fled, first to Kerkyra, and thence to Admetus, king of Molossia. Themistocles's flight probably only served to convince his accusers of his guilt, and he was declared a traitor in Athens, his property to be confiscated. Both Diodorus and Plutarch considered that the charges were false, and made solely for the purposes of destroying Themistocles. The Spartans sent ambassadors to Admetus, threatening that the whole of Greece would go to war with the Molossians unless they surrendered Themistocles. Admetus, however, allowed Themistocles to escape, giving him a large sum of gold to aid him on his way. Themistocles then fled from Greece, apparently never to return, thus effectively bringing his political career to an end. From Molossia, Themistocles apparently fled to Pydna, from where he took a ship for Asia Minor. This ship was blown off course by a storm, and ended up at Naxos, which an Athenian fleet was in the process of besieging. Desperate to avoid identification, Themistocles pestered the captain of the ship to continue the journey immediately. According to Thucydides, who wrote within living memory of the events, the ship eventually landed safely at Ephesus, where Themistocles disembarked. Plutarch has the ship docking at Cyme in Aeolia, and Diodorus has Themistocles making his way to Asia in an undefined manner. Diodorus and Plutarch next recount a similar tale, namely that Themistocles stayed briefly with an acquaintance (Lysitheides or Nicogenes) who was also acquainted with the Persian king, Artaxerxes I. Since there was a bounty on Themistocles's head, this acquaintance devised a plan to safely convey Themistocles to the Persian king in the type of covered wagon that the King's concubines travelled in. All three chroniclers agree that Themistocles's next move was to contact the Persian king; in Thucydides, this is by letter, while Plutarch and Diodorus have a face-to-face meeting with the king. The spirit is, however, the same in all three: Themistocles introduces himself to the king and seeks to enter his service: "I, Themistocles, am come to you, who did your house more harm than any of the Hellenes, when I was compelled to defend myself against your father's invasion—harm, however, far surpassed by the good that I did him during his retreat, which brought no danger for me but much for him." (Thucydides) Thucydides and Plutarch say that Themistocles asked for a year's grace to learn the Persian language and customs, after which he would serve the king, and Artaxerxes granted this. Plutarch reports that, as might be imagined, Artaxerxes was elated that such a dangerous and illustrious foe had come to serve him. At some point in his travels, Themistocles's wife and children were extricated from Athens by a friend, and joined him in exile. His friends also managed to send him many of his belongings, although up to 100 talents worth of his goods were confiscated by the Athenians. When, after a year, Themistocles returned to the king's court, he appears to have made an immediate impact, and "he attained...very high consideration there, such as no Hellene has ever possessed before or since". Plutarch recounts that "honors he enjoyed were far beyond those paid to other foreigners; nay, he actually took part in the King's hunts and in his household diversions". Themistocles advised the king on his dealings with the Greeks, although it seems that for a long period, the king was distracted by events elsewhere in the empire, and thus Themistocles "lived on for a long time without concern". He was made governor of the district of Magnesia on the Maeander River in Asia Minor, and assigned the revenues of three cities: Magnesia (about 50 talents per year—"for bread"); Myus ("for opson"); and Lampsacus ("for wine"). According to Plutarch, Neanthes of Cyzicus and Phanias reported two more, the city of Palaescepsis ("for clothes") and the city of Percote ("for bedding and furniture for his house"), both near Lampsacus. Themistocles was one of the several Greeks aristocrats who took refuge in the Achaemenid Empire following reversals at home, other famous ones being Hippias, Demaratos, Gongylos or later Alcibiades. In general, those were generously welcomed by the Achaemenid kings, and received land grants to support them, and ruled on various cities of Asia Minor. Conversely, some Achaemenid satraps were welcomed as exiles in western courts, such as Artabazos II. Coins are the only contemporary documents remaining from the time of Themistocles. Although many of the first coins of Antiquity illustrated the images of various gods or symbols, the first portraiture of actual rulers only appears in the 5th century BC. Themistocles was probably the first ruler ever to issue coinage with his personal portrait, as he became Achaemenid Governor of Magnesia in 465–459 BC. Themistocles may have been in a unique position in which he could transfer the notion of individual portraiture, already current in the Greek world, and at the same time wield the dynastic power of an Achaemenid dynast who could issue his own coins and illustrate them as he wished. Still, there is some doubt that his coins may have represented Zeus rather than himself. During his lifetime, Themistocles is known to have erected two statues to himself, one in Athens, and the other in Magnesia, which would lend credence to the possibility that he also illustrated himself on his coins. The Themistocles statue in Magnesia was illustrated on the reverse of some of the Magnesian coins of Roman Emperor Antonius Pius in the 2nd century CE. The rulers of Lycia followed towards the end of the 5th century as the most prolific and unambiguous producers of coins displaying the portrait of their rulers. From the time of Alexander the Great, portraiture of the issuing ruler would then become a standard, generalized, feature of coinage. Themistocles died at Magnesia in 459 BC, at the age of 65, according to Thucydides, from natural causes. However, perhaps inevitably, there were also rumours surrounding his death, saying that unwilling to follow the Great King's order to make war on Athens, he committed suicide by taking poison, or drinking bull's blood. Plutarch provides the most evocative version of this story: "But when Egypt revolted with Athenian aid...and Cimon's mastery of the sea forced the King to resist the efforts of the Hellenes and to hinder their hostile growth...messages came down to Themistocles saying that the King commanded him to make good his promises by applying himself to the Hellenic problem; then, neither embittered by anything like anger against his former fellow-citizens, nor lifted up by the great honor and power he was to have in the war, but possibly thinking his task not even approachable, both because Hellas had other great generals at the time, and especially because Cimon was so marvelously successful in his campaigns; yet most of all out of regard for the reputation of his own achievements and the trophies of those early days; having decided that his best course was to put a fitting end to his life, he made a sacrifice to the gods, then called his friends together, gave them a farewell clasp of his hand, and, as the current story goes, drank bull's blood, or as some say, took a quick poison, and so died in Magnesia, in the sixty-fifth year of his life...They say that the King, on learning the cause and the manner of his death, admired the man yet more, and continued to treat his friends and kindred with kindness." It was rumored that after his death, Themistocles's bones were transported to Attica in accordance with his wishes, and buried in his native soil in secret, it being illegal to bury an Athenian traitor in Attica. The Magnesians built a "splendid tomb" in their marketplace for Themistocles, which still stood during the time of Plutarch, and continued to dedicate part of their revenues to the family of Themistocles. Nepos in the 1st century BC wrote about a statue of Themistocles visible in the forum of Magnesia. The statue also appears on a coin type of Roman Emperor Antonius Pius minted in Magnesia in the 2nd century CE. Archeptolis, son of Themistocles, became a Governor of Magnesia after his father's death c. 459 BCE. Archeptolis also minted his own silver coinage as he ruled Magnesia, and it is probable that part of his revenues continued to be handed over to the Achaemenids in exchange for the maintenance of their territorial grant. Themistocles and his son formed what some authors have called "a Greek dynasty in the Persian Empire". From a second wife, Themistocles also had a daughter named Mnesiptolema, whom he appointed as priestess of the Temple of Dindymene in Magnesia, with the title of "Mother of the Gods". Mnesiptolema would eventually marry her half-brother Archeptolis, homeopatric (but not homeometric) marriages being permitted in Athens. Themistocles also had several other daughters, named Nicomache, Asia, Italia, Sybaris, and probably Hellas, who married the Greek exile in Persia Gongylos and still had a fief in Persian Anatolia in 399/400 BC as his widow. Themistocles also had three other sons, Diocles, Polyeucteus and Cleophantus, the latter possibly a ruler of Lampsacus. One of the descendants of Cleophantus still issued a decree in Lampsacus around 200 BC mentioning a feast for his own father, also named Themistocles, who had greatly benefited the city. Later, Pausanias wrote that the sons of Themistocles "appear to have returned to Athens", and that they dedicated a painting of Themistocles in the Parthenon and erected a bronze statue to Artemis Leucophryene, the goddess of Magnesia, on the Acropolis. They may have returned from Asia Minor in old age, after 412 BC, when the Achaemenids took again firm control of the Greek cities of Asia, and they may have been expelled by the Achaemenid satrap Tissaphernes sometime between 412 and 399 BC. In effect, from 414 BC, Darius II had started to resent increasing Athenian power in the Aegean and had Tissaphernes enter into an alliance with Sparta against Athens, which in 412 BC led to the Persian conquest of the greater part of Ionia. Plutarch in the 1st century AD indicates that he met in Athens a lineal descendant of Themistocles (also called Themistocles) who was still being paid revenues from Asia Minor, 600 years after the events in question. It is possible to draw some conclusions about Themistocles's character. Perhaps his most evident trait was his massive ambition; "In his ambition he surpassed all men"; "he hankered after public office rather as a man in delirium might crave a cure". He was proud and vain, and anxious for recognition of his deeds. His relationship with power was of a particularly personal nature; while he undoubtedly desired the best for Athens, many of his actions also seem to have been made in self-interest. He also appears to have been corrupt (at least by modern standards), and was known for his fondness of bribes. Yet, set against these negative traits, was an apparently natural brilliance and talent for leadership: Both Herodotus and Plato record variations of an anecdote in which Themistocles responded with subtle sarcasm to an undistinguished man who complained that the great politician owed his fame merely to the fact that he came from Athens. As Herodotus tells it: As Plato tells it, the heckler hails from the small island of Seriphus; Themistocles retorts that it is true that he would not have been famous if he had come from that small island, but that the heckler would not have been famous either if he had been born in Athens. Themistocles was undoubtedly intelligent, but also possessed natural cunning; "the workings of his mind [were] infinitely mobile and serpentine". Themistocles was evidently sociable and appears to have enjoyed strong personal loyalty from his friends. At any rate, it seems to have been Themistocles's particular mix of virtues and vices that made him such an effective politician. Themistocles died with his reputation in tatters, a traitor to the Athenian people; the "saviour of Greece" had turned into the enemy of liberty. However, his reputation in Athens was rehabilitated by Pericles in the 450s BC, and by the time Herodotus wrote his history, Themistocles was once again seen as a hero. Thucydides evidently held Themistocles in some esteem, and is uncharacteristically flattering in his praise for him (see above). Diodorus also extensively praises Themistocles, going as far as to offer a rationale for the length at which he discusses him: "Now on the subject of the high merits of Themistocles, even if we have dwelt over-long on the subject in this digression, we believed it not seemly that we should leave his great ability unrecorded." Indeed, Diodorus goes so far as to say that "But if any man, putting envy aside, will estimate closely not only the man's natural gifts but also his achievements, he will find that on both counts Themistocles holds first place among all of whom we have record. Therefore, one may well be amazed that the Athenians were willing to rid themselves of a man of such genius." Since Diodorus's history includes such luminaries as Alexander the Great and Hannibal, this is high praise indeed. Plutarch offers a more nuanced view of Themistocles, with more of a critique of Themistocles's character. He does not detract from Themistocles's achievements, but also highlights his failings. Undoubtedly the greatest achievement of Themistocles's career was his role in the defeat of Xerxes's invasion of Greece. Against overwhelming odds, Greece survived, and classical Greek culture, so influential in Western civilization, was able to develop unabated. Moreover, Themistocles's doctrine of Athenian naval power, and the establishment of Athens as a major power in the Greek world were of enormous consequence during the 5th century BC. In 478 BC, the Hellenic alliance was reconstituted without the Peloponnesian states, into the Delian League, in which Athens was the dominant power. This was essentially a maritime alliance of Athens and her colonies, the Aegean islands, and the Ionian cities. The Delian league took the war to Persia, eventually invading Persian territory and dominating the Aegean. Under the guidance of Pericles, the Delian league gradually evolved into the Athenian Empire, the zenith of Athenian power and influence. Themistocles seems to have deliberately set Athens up as a rival to Sparta in the aftermath of Xerxes's invasion, basing this strategy on Athenian naval power (contrasted with the power of the Spartan army). Tension grew throughout the century between Athens and Sparta, as they competed to be the leading state in Greece. Finally, in 431 BC, this tension erupted into the Peloponnesian War, the first of a series of conflicts that tore Greece apart for the next century; an unforeseen, if indirect, legacy of Themistocles's. Diodorus provides a rhetorical summary that reflects on Themistocles's achievements: "What other man, while Sparta still had the superior strength and the Spartan Eurybiades held the supreme command of the fleet, could by his single-handed efforts have deprived Sparta of that glory? Of what other man have we learned from history that by a single act he caused himself to surpass all the commanders, his city all the other Greek states, and the Greeks the barbarians? In whose term as general have the resources been more inferior and the dangers they faced greater? Who, facing the united might of all Asia, has found himself at the side of his city when its inhabitants had been driven from their homes, and still won the victory?"
https://en.wikipedia.org/wiki?curid=31069
Toonie The toonie (also spelled twonie or twoonie), formally the Canadian two-dollar coin (, nicknamed or ), was introduced on February 19, 1996, by Minister of Public Works Diane Marleau. , it possesses the highest monetary value of any circulating Canadian coin. The toonie is a bi-metallic coin which on the reverse side bears an image of a polar bear by artist Brent Townsend. The obverse, like all other current Canadian circulation coins, has a portrait of Queen Elizabeth II. It has the words in a different typeface from any other Canadian coin; it is also the only coin to consistently bear its issue date on the obverse. The coin is manufactured using a patented distinctive bi-metallic coin-locking mechanism. The coins are estimated to last 20 years. The discontinued two-dollar bill was less expensive to manufacture, but lasted only one year on average. On April 10, 2012, the Royal Canadian Mint (RCM) announced design changes to the loonie and toonie, which include new security features. Coins minted prior to 2012 consist of an aluminum bronze inner core with a pure nickel outer ring; but in March–May 2012, the composition of the inner core switched to aluminum bronze coated with multi-ply plated brass, and the outer ring switched to steel coated with multi-ply plated nickel. The weight dropped from 7.30 to 6.92g, and the thickness changed from 1.8 to 1.75 mm. The Mint stated that multi-ply plated steel technology, already used in Canada's smaller coinage, produces an electromagnetic signature that is harder to counterfeit than that of regular alloy coins; also, using steel provides cost savings and avoids fluctuations in the price or supply of nickel. "Toonie" is a portmanteau word combining the number "two" with the name of the loonie, Canada's one-dollar coin. It is occasionally spelled "twonie" or "twoonie", but Canadian newspapers and the Royal Canadian Mint use the "toonie" spelling. Jack Iyerak Anawak, member of Parliament from Nunatsiaq (the electoral district representing what is now the territory of Nunavut), suggested the name "Nanuq" [nanook, polar bear] in honour of Canada's Inuit people and their northern culture; however, this culturally meaningful proposal went largely unnoticed beside the popular "toonie". The name "toonie" became so widely accepted that in 2006, the RCM secured the rights to it. A competition to name the bear resulted in the name "Churchill", a reference both to Winston Churchill and to the common polar bear sightings in Churchill, Manitoba. Finance Minister Paul Martin announced the replacement of the $2 banknote with a coin in the 1995 Canadian federal budget speech. The RCM spent to canvass 2000 Canadian households about which of the 10 theme options they preferred. Under the direction of Hieu C. Truong, the RCM engineering division designed the two-dollar coin to be made from two different metals. The metals for the bimetallic coin would be lighter and thinner than those produced anywhere in the world. To join the two parts, the engineering division selected a bimechanical locking mechanism. By the end of 1996, the Winnipeg facility had struck 375 million of these coins. The coin was officially launched at Ben's Deli in Montreal on February 19, 1996. The weight of the coin was originally specified as 112.64 grains, equivalent to 7.299 g. The community of Campbellford, Ontario, home to the coin's designer, constructed an toonie monument, similar to the "Big Loonie" in Echo Bay and the Big Nickel in Sudbury. Unlike the loonie before it, the toonie and the $2 bill were not produced concurrently with each other, as the $2 bill was withdrawn from circulation on February 16, 1996, three days prior to the toonie's introduction. From 2010 to 2015, the Royal Canadian Mint issued a two-dollar coin that depicts a different and unique image of a young animal on the coin's reverse. These special toonies have limited mintages and are available only in the six-coin specimen sets. A failure in the bimetallic locking mechanism in the first batch of toonies caused some coins to separate if struck hard or frozen. Despite media reports of defective toonies, the RCM responded that the odds of a toonie falling apart were about one in 60 million. It is against the law to deliberately attempt to separate a toonie. Defacing coin currency is a summary offence under the Canadian Criminal Code, section 456.
https://en.wikipedia.org/wiki?curid=31074
Tirana Tirana (; ; ) is the capital and largest city by area and population of the Republic of Albania. It is located in the center of Albania enclosed by mountains and hills with Mount Dajt elevating on the east and a slight valley on the northwest overlooking the Adriatic Sea in the distance. Due to its location at the Plain of Tirana and the close proximity to the Mediterranean Sea, the city is particularly influenced by a Mediterranean seasonal climate. It is among the wettest and sunniest cities in Europe, with 2,544 hours of sun per year. Tirana flourished as a city in 1614 but the region that today corresponds to the city's territory has been continuously inhabited since the Iron Age. The city's territory was inhabited by several Illyrian tribes but had no importance within Illyria. Indeed, it was annexed by Rome and became an integral part of the Roman Empire following the Illyrian Wars. The heritage of that period is still evident and represented by the Mosaics of Tirana. Later, in the 5th and 6th centuries, a Paleochristian basilica was built around this site. After the Roman Empire split into East and West in the 4th century, its successor the Byzantine Empire took control over most of Albania, and built the Petrelë Castle in the reign of Justinian I. The city was fairly unimportant until the 20th century, when the Congress of Lushnjë proclaimed it as Albania's capital, after the Albanian Declaration of Independence in 1912. Tirana is the most important economic, financial, political and trade center in Albania due to its significant location in the center of the country and its modern air, maritime, rail and road transportation. It is the seat of power of the Government of Albania, with the official residences of the President and Prime Minister of Albania, and the Parliament of Albania. The city was awarded the title of the European Youth Capital of 2022. Tirana is located in the Plain of Tirana in the center of Albania between the mount of Dajti and the mountains of Mali me Gropa, and a valley to the northwest overlooking the Adriatic Sea. The average altitude is about above sea level, with a maximum of . The city is north of Athens, southeast of Rome, south of Podgorica in Montenegro, southwest of Skopje in North Macedonia and from Pristina in Kosovo. The city is surrounded by two important protected areas: the Dajti National Park and Mali me Gropa-Bizë-Martanesh Protected Landscape. In winter, the mountains are often covered with snow and are a popular retreat for the population of Tirana, which rarely receives snowfalls. In terms of biodiversity, the forests are mainly composed of pine, oak and beech, while its interior relief is dotted with canyons, waterfalls, caves, lakes and other landforms. Thanks to its natural heritage, it is considered the "Natural Balcony of Tirana". The mountain can be reached by a narrow asphalt mountain road onto an area known as Fusha e Dajtit. From this small area there is an excellent view of Tirana and its plain. Tiranë river flows through the city, as does the Lanë river. Tirana is home to several artificial lakes, including Tirana, Farka, Tufina, and Kashar. The present municipality was formed in the 2015 local government reform by the merger of the former municipalities of Baldushk, Bërzhitë, Dajt, Farkë, Kashar, Krrabë, Ndroq, Petrelë, Pezë, Shëngjergj, Tirana, Vaqarr, Zall-Bastar and Zall-Herr, which became municipal units. The seat of the municipality is the city of Tirana. The city is defined by the Köppen climate classification like "Cfa", in other words the city has a humid subtropical climate and receives a commensurably amount of precipitation, during summer, to avoid the mediterranean climate "(Csa)" classification, since every summer month receives more than of rainfall, with hot and moderately dry summers and cool and wet winters. It lies on the boundary between Zone 7 and Zone 9 in terms of the hardiness zone. The average precipitation is about per year. The city receives the majority of precipitation in winter months, which occurs from November to March, and less in summer months from June to September. In terms of precipitation, both rain and snow, the city is ranked among the wettest cities in the European Continent. Temperatures vary throughout the year from an average of in January to in July. Springs and summers are very warm to hot often reaching over from May to September. During autumn and winter, from November to March, the average temperature drops and is not lower than . The city receives approximately 2500 hours of sun. In September 2015, Tirana organized its first vehicle-free day, joining forces with numerous cities across the globe to fight against the existing problem of urban air pollution. This initiative resulted in a considerable drop in both air and noise pollution, encouraging the Municipality to organize a vehicle-free day every month. The city suffers from problems related to overpopulation, such as waste management, high levels of air pollution and significant noise pollution. Over the last decades, air pollution has become a pressing concern as the number of cars has increased. These are mostly 1990s and early 2000s diesel cars, while it is widely believed that the fuel used in Albania contains larger amounts of sulfur and lead than in the European Union. Effective 1 January 2019, the government has imposed an import ban of used vehicles made prior to 2005 in an effort to curb pollution, encourage the buying of new cars from certified domestic dealerships, and to improve overall road safety. Another source of pollution are PM10 and PM2.5 inhaled particulate matter and NO2 gases resulting from rapid growth in the construction of new buildings and expanding road infrastructure. Untreated solid waste is present in the city and outskirts. Additionally, there have been complaints of excessive noise pollution. Despite the problems, the Grand Park at the Artificial Lake has some effect on absorbing CO2 emissions, while over 2.000 trees have been planted around sidewalks. Works for four new large parks have started in the summer of 2015 located in Kashar, Farkë, Vaqarr, and Dajt. These parks are part of the new urban plan striving to increase the concentration of green spaces in the capital. The government has included designated green areas around Tirana as part of the Tirana Greenbelt where construction is not permitted or limited. The discovery of the Pellumbas Cave near Tirana shows that ancient human culture was present in Albania as early as the Paleolithic era. Nonetheless, the oldest discovery within the urban area of Tirana was a Roman house, which was transformed into an aisleless church with a mosaic floor, dating to the 3rd century, with other remains found near a medieval temple at Shengjin Fountain in the eastern suburbs. A castle possibly called "Tirkan", whose remnants are found along Murat Toptani Street, was built by Byzantine Emperor Justinian I and restored by Ahmed Pasha Toptani in the 18th century. The area had no special importance in Illyrian and classical times. Tirana is mentioned in Venetian documents in 1418, one year after the Ottoman conquest of the area: ""...the resident Pjeter, son of late Domenik from the village of Tirana..."". Records of the first land registrations under the Ottomans in 1431–32 show that Tirana consisted of 60 inhabited areas, with nearly 2,028 houses and 7,300 inhabitants. In 1510, Marin Barleti, an Albanian Catholic priest and scholar, in the biography of the Albanian national hero Skanderbeg, "Historia de vita et gestis Scanderbegi Epirotarum principis" ("The story of life and deeds of Skanderbeg, the prince of Epirotes"), referred to this area as a small village, distinguishing between "Little Tirana" and "Great Tirana". It is later mentioned in 1572 as "Borgo di Tirana". According to Hahn, the settlement had already started to develop as a bazaar and included several watermills, even before 1614, when Sulejman Bargjini, a local ruler, built the Old Mosque, a small commercial centre, and a hammam (Turkish bath). This is confirmed by oral sources, which state that there were two earlier mosques 300–400 m from the Old Mosque, towards today's Ali Demi Street. The Mosque of Reç and the Mosque of Mujo were positioned on the left side of the Lana river and were older than the Old Mosque. Later, the Et'hem Bey Mosque, built by Molla Bey of Petrela, was constructed. It employed the best artisans in the country and was completed in 1821 by Molla's son Etëhem, who was also Sulejman Bargjini's great-nephew. In 1800, the first newcomers arrived in the settlement, the so-called "ortodoksit". They were Vlachs from villages near Korçë and Pogradec, who settled around modern day Tirana Park on the Artificial Lake. They started to be known as the "llacifac" and were the first Christians to arrive after the creation of the town. In 1807, Tirana became the center of the Subprefecture of Krujë-Tirana. After 1816, Tirana languished under the control of the "Toptani" family of Krujë. Later, Tirana became a sub-prefecture of the newly created Vilayet of Shkodër and the Sanjak of Durrës. In 1889, the Albanian language started to be taught in Tirana's schools, and the patriotic club Bashkimi was founded in 1908. On 28 November 1912, the national flag was raised in agreement with President Ismail Qemali. During the Balkan Wars, the city was temporarily occupied by the Serbian army and it took part in uprising of the villages led by Haxhi Qamili. In August 1916, the first city map was compiled by the specialists of the Austro-Hungarian army. Following the capture of the town of Debar by Serbia, many of its Albanian inhabitants fled to Turkey, the rest went to Tirana. Of those that ended up in Istanbul, some of their number migrated to Albania, mainly to Tirana where the Dibran community formed an important segment of the city's population from 1920 onward and for some years thereafter. On 8 February 1920, the Congress of Lushnjë proclaimed Tirana as the temporary capital of Albania, which had gained independence in 1912. The city acquired that status permanently on 31 December 1925. In 1923, the first regulatory city plan was compiled by Austrian architects. The centre of Tirana was the project of Florestano Di Fausto and Armando Brasini, well-known architects of the Mussolini period in Italy. Brasini laid the basis for the modern-day arrangement of the ministerial buildings in the city centre. The plan underwent revisions by Albanian architect Eshref Frashëri, Italian architect Castellani and Austrian architects Weiss and Kohler. The modern Albanian parliament building served as an officers' club. It was there that, in September 1928, Zog of Albania was crowned King Zog I, King of the Albanians. Tirana was the venue for the signing of the Pact of Tirana between Fascist Italy and Albania. In 1939, Tirana was captured by Fascist forces, who appointed a puppet government. In the meantime, Italian architect Gherardo Bosio was asked to elaborate on previous plans and introduce a new project in the area of present-day Mother Teresa Square. A failed assassination attempt was made on Victor Emmanuel III of Italy by a local resistance activist during a visit to Tirana. In November 1941, two emissaries of the Communist Party of Yugoslavia (KPJ), Miladin Popović and Dušan Mugoša, called a meeting of three Albanian communist groups and founded the Communist Party of Albania, and Enver Hoxha soon emerged as its leader. The town soon became the center of the Albanian communists, who mobilized locals against Italian fascists and later Nazi Germans, while spreading ideological propaganda. On 17 November 1944, the town was liberated after a fierce battle between the Communists and German forces. The Nazis eventually withdrew and the communists seized power. From 1944 to 1991, massive socialist-style apartment complexes and factories were built, while Skanderbeg Square was redesigned, with a number of buildings demolished. For instance, Tirana's former Old Bazaar and the Orthodox Cathedral were razed to the ground in order to build the Soviet-styled Palace of Culture. The northern portion of the main boulevard was renamed Stalin Boulevard and his statue was erected in the city square. Because private car ownership was banned, mass transportation consisted mainly of bicycles, trucks and buses. After Hoxha's death, a pyramidal museum was constructed in his memory by the government. Before and after the proclamation of Albania's policy of self-imposed isolationism, a number of high-profile figures paid visits to the city, such as Soviet leader Nikita Khrushchev, Chinese Premier Zhou Enlai and East German Foreign Minister Oskar Fischer. In 1985, Enver Hoxha's funeral was held in Tirana. A few years later, Mother Teresa became the first religious figure to visit the country after the end of Albania's long anti-religious atheist stance. She paid respects to her mother and sister resting at a local cemetery. Starting at the campus and ending at Skanderbeg Square with the toppling of Enver Hoxha's statue, the city saw significant demonstrations by University of Tirana students demanding political freedoms in the early 1990s. On the political aspect, the city witnessed a number of events. Personalities visited the capital, such as former U.S. Secretary of State James Baker and Pope John Paul II. The former visit came amidst the historical setting after the fall of communism, as hundreds of thousands were chanting in Skanderbeg Square Baker's famous saying of "Freedom works!". Pope John Paul II became the first major religious leader to visit Tirana, though Mother Teresa had visited few years prior. During the Balkans turmoil in the mid-1990s, the city experienced dramatic events such as the unfolding of the 1997 unrest in Albania and a failed coup d'état on 14 September 1998. In 1999, following the Kosovo War, Tirana Airport became a NATO airbase, serving its mission in the former Yugoslavia. Starting in 2000, former Tirana mayor Edi Rama (mayor from 2000 to 2011) under the Ilir Meta government, undertook a campaign to demolish illegal buildings around the city centre and along the Lana River banks to bring the area to its pre-1990 state. In an attempt to widen roads, Rama authorized the bulldozing of private properties so that they could be paved over, thus widening streets. Most main roads underwent reconstruction, such the Ring Road "(Unaza)", "Kavaja Street" and the main boulevard. Rama led the initiative to paint the façades of Tirana's buildings in bright colours "(known as Edi Rama colours – very bright pink, yellow, green, violet)" although much of their interiors continued to degrade. Rama's critics claimed that he focused too much attention on cosmetic changes without fixing any of the major problems such as shortages of drinking water and electricity. A richer calendar of events was introduced and a Municipal Police force established. Since 2005 the southeast region of Tirana, mainly Farke and Petrela has had a burst becoming the preferred destination with many residence complexes being built and having the current biggest mall in Albania, the Tirana East Gate (TEG). In 2007, U.S. President George W. Bush marked the first time that such a high ranking American official visited Tirana. A central Tirana street was named in his honor. In 2008, the Gërdec explosions were felt in the capital as windows were shattered and citizens shaken. On 21 January 2011, Albanian police clashed with opposition supporters in front of the Government building as cars were set on fire, three persons killed and 150 wounded. Following the 2015 municipal elections, power was transferred from the Democratic Party representative Lulzim Basha, to the Socialist Party candidate Erion Veliaj. The country underwent a territorial reform, in which defunct communes were merged with municipalities, leaving only 61 of them in total. Thirteen of Tirana's former communes were integrated as administrative units joining the existing eleven. Since then, Tirana is undergoing major changes in law enforcement and new projects, as well as continuing the ones started by Veliaj's predecessor. In their first few council meetings, 242 social houses got allocated to families in need. Construction permits were suspended until the capital's development plan is revised and synthesized. In addition the municipality will audit all permits granted in the previous years. In 2016, Skanderbeg Square was redesigned according to an earlier plan brought forward in 2010. This included greater green space areas around the square, underground parking, and the introduction of stone material taken from all corners of Albania and Albanian-inhabited lands. Albania's rich flora were represented by the gardens around the square, while the former garden behind Skanderbeg's monument was restored to its pre-2010 state and named Europe Park. Once the project is completed, the square will serve as a venue for the annual Christmas Village of Festivities, music concerts, and where surrounding institutions would showcase themselves in an open environment concept such as in the yearly Nuit Blanche on 29 November. The New Boulevard () was opened recently north of Zog I Boulevard at the defunct Tirana Rail Station, laying the foundation for the development of Tirana north of Skanderbeg Square and south of the Tirana River. The new headquarters of Tirana City Hall are planned to be built along the New Boulevard together with a central park located nearby. The architect Stefano Boeri was contracted to work on the "General Urban Plan of Tirana" (TR030), which makes a series of interventions to the city's infrastructure. The plan was submitted for approval to the Municipality Council in November 2016. The status of Tirana as the capital of the Republic of Albania is officially mandated by the constitution of the country. Being the capital and primate city, Tirana is seat of the country's national authorities, including the executive, legislative, judiciary, and the headquarters of almost all national political parties. The two principal officers of the executive each possess their own official residences and offices at the Dëshmorët e Kombit Boulevard. Both, the President and Prime Minister reside at the Presidenca and Kryeministria, respectively. Government ministries are located in various parts across of the city, while the Parliament of Albania houses also at the Dëshmorët e Kombit Boulevard. Albania's highest courts maintains their headquarters in Tirana, including the Supreme Court, the Constitutional Court, the Court of Appeal and the Administrative Court. Tirana is also home to more than 45 embassies and representative bodies as an international political actor. The Mayor of Tirana along with the Cabinet of Tirana exercises executive power. The Assembly of Tirana functions as the city parliament and consists of 55 members, serving four-year terms. It primarily deals with budget, global orientations and relations between the city and the Government of Albania. It has 14 committees and its current Chairman is Aldrin Dalipi from the Socialist Party. Each of the members have a specific portfolio such as economy, finance, juridical, education, health care, and several professional services, agencies and institutes. The Municipality of Tirana is divided into 24 administrative units, with an own appointed mayor and council. In 2000, the centre of Tirana from the central campus of University of Tirana in the Mother Teresa Square up to the Skanderbeg Square, was declared the place of Cultural Assembly, and given state protection. The historical core of the capital lies around pedestrian only Murat Toptani Street, while the most prominent city district is Blloku. This neighborhood is the most popular part under the youth of Tirana. It is located in the southern side of Tirana and borders Kombinat and the center of the city. Until recently the city lacked a proper address system. In 2010, the municipality undertook the installing of street name signs and entrance numbers while every apartment entrance was physically stamped. List of twin towns of Tirana. As Tirana, many of them are the most influential and largest or primate cities of their country and political, economical, cultural capital of their country. According to the 2011 census, the estimated population of the municipality of Tirana was 418,495 with a population density of 502 inhabitants per square kilometres, simultaneously constituting the largest municipality in Albania measured by population. The encompassing metropolitan area, consisting of the regions of Tirana and Durrës, includes a combined population of approximately 1 million, which amounts to nearly one third of the country's total population. The population of the municipality of Tirana is composed by a mixture of different cultural and ethnic groups of Southern Europe. The five most populous ethnicities are Albanians (84.10%), Greeks (0.35%), Aromanians (0.11%), Macedonians (0.07%) and Italians (0.03%). Tirana has experienced a steady population increase in the recent years, especially during the fall of communism in the 1990s and the beginnings of the 21st century. The remarkable growth was, and still is, largely fueled by migrants from all over the country often in search of employment and improved living conditions. Between 1820 and 1955, the population of Tirana tenfolded while during the period from 1989 to 2011, the city's population grew annually by approximately 2.7%. In the 19th and 20th centuries, the city had a rate of growth less than 1% annually until the 1973s, then down to less than 8% per year until the middle 20th century figures. In Albania, a secular state with no state religion, the freedom of belief, conscience and religion is explicitly guaranteed in the constitution of Albania. Tirana is religiously diverse and has many places of worship catering to its religious population whom are adherents of Islam, Christianity and Judaism but also of Atheism and Agnosticism. In the 2011 census, 55.7% of the population of the municipality was counted as Muslim, 3.4% as Bektashis and 11.8% as Christian including 5.4% as Roman Catholic and 6.4% as Eastern Orthodox. The remaining 29.1% of the population reported having no religion or did not provided an adequate answer. The census of 2011 did not included specific municipality level data for other religious groups. The Roman Catholic Church is represented in Tirana by the Archdiocese of Tiranë and Durrës, with the St Paul's Cathedral as the current seat of the prelacy. The Albanian Orthodox community is served by the Archbishop of Tirana in the Resurrection Cathedral. Tirana is the heart of the economy of Albania and the most industrialised and economically fastest growing region in Albania. Of the main sectors, the tertiary sector is the most important for the economy of Tirana and employs more than 68% of work force of Tirana. 26% of the working population makes up the secondary sector followed by the primary sector with only 5%. The city began to develop at the beginning of the 16th century as it was part of the Ottoman Empire, when a bazaar was established, and its craftsmen manufactured silk and cotton fabrics, leather, ceramics and iron, silver and gold artefacts. In the 20th century, the city and its surrounding areas expanded rapidly and became the most heavily industrialised region of the country. The most significant contribution is made by the tertiary sector which has developed considerably since the fall of communism in Albania. Forming the financial center of the country, the financial industry is a major component of the city's tertiary sector and remains in good conditions overall due to privatization and the commendable monetary policy. All of the most important financial institutions, such as the Bank of Albania and the Albanian Stock Exchange are centred in Tirana as well as most of the banking companies such as the Banka Kombëtare Tregtare, Raiffeisen Bank, Credins Bank, Intesa Sanpaolo Bank and Tirana Bank. The telecommunication industry represents another major and growing contributor to the sector. A rapid development occurred as well as after the end of communism and decades of isolationism mainly due to the new national policy of reform and opening up sped up the industry's development. Vodafone, Telekom Albania and Eagle are the leading telecommunication providers in Tirana, as in all the country. The tourism industry of the city has expanded in recent years to become a vital component of the economy. Tirana has been officially dubbed as 'The Place Beyond Belief' by local authorities. The increasing number of international arrivals at the Tirana International Airport and Port of Durrës from across Europe, Australia and Asia has rapidly grown the number of foreign visitors in the city. The largest hotels of the city are the Tirana International Hotel, Maritim Plaza Tirana both situated in the heart of the city near Scanderbeg Square, and the Hyatt-owned luxury Mak Hotel Tirana located next to the Air Albania Stadium, where Mariott Tirana Hotel is also planned to open. Other major hotels present in central Tirana include the Rogner Hotel, Hilton Garden Inn Tirana, Xheko Imperial Hotel, Best Western Premier Ark Hotel, and Mondial Hotel. Tirana is served by Nënë Tereza International Airport, which is simultaneously the premier air gateway to the country. The airport was officially named in honour of the Albanian Roman Catholic nun and missionary, Mother Teresa. It connects Tirana with many destinations in different countries across Europe, Africa and Asia. The airport carried more than 3.3 million passengers in 2019 and is also the principal hub for the country's flag carrier, Air Albania. The geographical location of Tirana in the center of Albania has long established the city as an integral terminus for the national road transportation, thus connecting the city to all parts of Albania and the neighbouring countries. The Rruga Shtetërore 1 (SH1) connects Tirana with Shkodër and Montenegro in the north, and constitutes an essential section of the proposed Adriatic–Ionian motorway. The Rruga Shtetërore 2 (SH2) continues in the west and provides direct connection to Durrës on the Adriatic Sea. The Rruga Shtetërore 3 (SH3) is currently being transformed to the Autostrada 3 (A3) and follows the ancient Via Egnatia. It significantly constitutes a major section of the Pan-European Corridor VIII and links the city with Elbasan, Korçë and Greece in the south. Tirana is further connected, through the Milot interchange in the northwest, with Kosovo following as part of the Autostrada 1 (A1). During the communist regime in Albania, a plan for the construction of a ring road around Tirana arose in 1989s with no implementation until the 2010s. It is of major importance, especially concerning the demographic growth of the metropolitan region of Tirana as well as the importance of the economy. Although, constructions for the nowadays completed southern section of the ring road started in 2011, however, the northern and eastern sections are still in the planning process. Rail lines of Hekurudha Shqiptare (HSH) connected Tirana with all of the major cities of Albania, including Durrës, Shkodër and Vlorë. In 2013, the Tirana Railway Station was closed and moved to Kashar by the government of Tirana in order to create space for the Bulevardi i Ri project. The new Tirana Station will be constructed in Laprakë, which is projected to be a multifunctional terminal for rail, tram and bus transportation. Furthermore, a new rail line from Tirana through Nënë Tereza International Airport to Durrës is currently planned to be constructed. In 2012, Tirana municipality published a report according to which a project on the construction of two tram lines was under evaluation. The tram lines would have a total length of . The public transport in Tirana is, for now, focused only in the city centre, so that the people living in the suburbs have fewer or no public transport connections. Under the plan, the two tram lines will intersect in the Skanderbeg Square. The current public transport system in Tirana is made of ten bus lines served by 250 to 260 buses every day. The development of the tram network will provide an easier access to the city centre and beyond to necessary facilities, such as leisure areas or jobs without using personal vehicles. The city of Tirana is served by the Port of Durrës, one of the largest passenger port in the Adriatic Sea, distant from the city. Passenger ferries from Durrës sail to Croatia, Greece, Italy, Montenegro and Slovenia. During the administration of mayor Erion Veliaj, the government of Tirana has significantly increased the creation and expansion of a cycling infrastructure in the city in order to reduce traffic congestion as well as to improve the sustainable transportation. Ecovolis was launched in 2011 offering rental services for bicycles at different centrally located stations for a small fee. The international bicycle sharing system, Mobike, launched its operations on 8 June 2018 by deploying 4000 bicycles in the city. After the fall of communism in Albania, a reorganization plan was announced in 1990, that would extend the compulsory education program from eight to ten years. The following year, major economic and political crisis in Albania, and the ensuing breakdown of public order, plunged the school system into chaos. Widespread vandalism and extreme shortages of textbooks and supplies had a devastating effect on school operations, prompting Italy and other countries to provide material assistance. Many teachers relocated from rural to urban areas, leaving village schools understaffed and swelling the ranks of the unemployed in the cities; about 2,000 teachers fled the country. -The highly controlled environment that the communist regime had forced upon the educational system over the course of more than forty-six years was finally liberated set for improvement. In the late 1990, many schools were rebuilt or reconstructed, to improve learning conditions. Most of the improvements have happened in the larger cities of the country especially in Tirana. In Tirana, there are 64 primary schools and 19 secondary schools. The city is also host to many higher education institutions. This brings many young students from other cities and countries, especially from neighbouring countries, to Tirana. Many private Universities have been opened during the recent years. The French computer science university Epitech is also located in the city. In recent years, foreign students mainly from Southern Italy are being enrolled at Italian-affiliated universities in Tirana in the hope of better preparing themselves for entrance exams in Italy's universities. Tirana is the prime location to several national and international television stations, whose content is distributed throughout Albania, Kosovo and other Albanian-speaking territories. The national broadcaster, Radio Televizioni Shqiptar (RTSH), has its headquarters in the city along with all its television and radio channels. Commercial broadcasters, Televizioni Klan (TV Klan), Top Channel and Vizion Plus, also maintains their headquarters in the city. European broadcaster, Euronews, has a notable franchise in Tirana as well as the American broadcaster CNN. Tirana has the country's largest number of newspapers and publications. Gazeta Shqiptare, one of the oldest Albanian-language newspapers in Albania, has its headquarters in the city. Most nationwide newspapers, including Gazeta Shqip, Gazeta Tema, Koha Jonë and Panorama, are also based in Tirana. The city has a well-established English-language newspaper, notably the daily of Tirana Times. Tirana is home to different architectural styles that represent influential periods in its history dating back to the antiquity. The architecture of Tirana as the capital of the country was marked by two totalitarian regimes, by the fascist regime of Benito Mussolini during World War II and the communist regime. Both have left their mark on the city with their typical architecture. In addition to the objects of the architecture of the totalitarian regimes of the 20th century, Tirana offers a couple of other such objects of both periods. The Palace of Brigades (former Palace of the Albania's King Zog I), the ministries buildings, the government building and the municipality hall are designed by Florestano Di Fausto and Armando Brasini, both well-known architects of the Mussolini period in Italy. The Dëshmorët e Kombit Boulevard was built in 1930 and given the name King Zog I Boulevard. In the communist period, the part from Skanderbeg Square up to the train station was named Stalin Boulevard. The Royal Palace or Palace of Brigades previously served as the official residence of King Zog I. It has been used by different Albanian governments for various purposes. Because of the outbreak of World War II, and the 1939 Italian invasion of Albania, King Zog I fled Albania and never had a chance to see the Palace fully constructed. The Italians finished it and used it as the Army Headquarters. The Palace took its nickname Palace of Brigades because it was taken from the Italians by a people's army brigade. In the 21st century, Tirana turned into a proper modernist city, with large blocks of flats, modern new buildings, new shopping centres and many green spaces. In June 2016, the Mayor of Tirana Erion Veliaj and the Italian architect Stefano Boeri announced the start of the works for the redaction of the Master Plan "Tirana 2030". The city of Tirana is a densely-built area but still offers several public parks throughout its districts, graced with green gardens. With an area of 230 hectare, the Grand Park is the largest park in the city. It is one of most visited areas by local citizens. The park includes many children's playgrounds, sport facilities and landmarks such as the Saint Procopius Church, the Presidential Palace, the Botanical Gardens, the Tirana Zoo, the Amphitheatre, the Monument of the Frashëri Brothers and many others. The Rinia Park was built during the Communist regime in Albania. It bordered by Dëshmorët e Kombit Boulevard to the east, Gjergi Fishta Boulevard and Bajram Curri Boulevard to the south, Rruga Ibrahim Rugova to the west and Rruga Myslym Shyri to the north. The Taivani Center is the main landmark in the park and houses cafés, restaurants, fountains, and a bowling lane in the basement. The Summer Festival takes place every year in the park, to celebrate the end of winter and the rebirth of nature and a rejuvenation of spirit amongst the Albanians. As of the current Mayor of Tirana Erion Veliaj, the Municipality of Tirana will build more green spaces and will plant more trees. Tirana is an important center for music, film, theatre, dance and visual art. The city is host to the largest cultural institutions of the country, such as the National Theatre and the National Theatre of Opera and Ballet, the Natioan Archaeological Museum, the Art Gallery of Albania, the Sciences Museum of Albania and the National Historical Museum. Among the local institutions are the National Library, that keeps more than a million books, periodicals, maps, atlases, microfilms and other library materials. The city has five well-preserved traditional houses (museum-houses), 56 cultural monuments, eight public libraries. Since 2011, a Tourist Information Office was opened, located behind the National Historical Museum, with useful information about Tirana and Albania. There are many foreign cultural institutions in the city, including the German Goethe-Institut, Friedrich Ebert Foundation and the British Council. Other cultural centers in Tirana are, Canadian Institute of Technology, Chinese Confucius Institute, Greek Hellenic Foundation for Culture, Italian Istituto Italiano di Cultura and the French Alliance Française. The Information Office of the Council of Europe was established in Tirana. The three main religions in Albania, which contains Islam, Orthodox and Catholic Christianity, have all their headquarters in Tirana. The Bektashi leadership moved to Albania and established their World Headquarters also in the city of Tirana. One of the major annual events taking place in Tirana each year is the Tirana International Film Festival. It was the first international cinema festival in the country and considered as the most important cinematic event in the country. The most prominent museum in Tirana is the National Historical Museum, which details the history of the country. It keeps some of the best archeological finds in Albania, dating from the prehistoric era to the modern times. In the entrance of the pavilions, there are photos of global personalities who met Mother Teresa, such as Jacques Chirac, Bill Clinton, Tony Blair, Ibrahim Kodra and many other personalities. The personal objects used by her increase the curiosity of thousands of visitors in the museum . Almost 1 million visitors were counted in 2012. Other large museums include the National Archaeological Museum, which was the first museum created after World War II in Albania. The National Art Gallery opened to the public in 1954, preserving over 5000 artworks. Other museums include the Natural Sciences Museum, which has branches in zoology, botany and geology, the former Enver Hoxha Museum and the Bunk'art Museum. In 2017, the Museum of Secret Surveillance (House of leaves) were renovated and re-opened. The historical building from the communist period, aims now portray from the omnipresence of the Albanian communist regime. In recent years, Tirana is becoming a popular hub for events, such as festivals. Their diversity makes possible for people of different tastes to find themselves in a city this small. Festivals provide entertainment for the youth as well as for adults. The Summer Day Festival (Albanian: Dita e Veres) takes place every year on 14 March celebrating the arrival of Spring, and is the country's largest pagan festival. It is widely celebrated in Tirana, Elbasan, and other cities in Albania as well as in the Arbëresh colonies in Italy. In addition, Tirana Municipality organizes several food tastings festivals in rural Tirana in order to promote local organic products and stimulate agri-tourism. Notable events include the Tomato Festival in Shengjergj in Dajti Mt, and Olives Festival in Ndroq. Another major event, the Tirana International Film Festival takes place in Tirana each year bringing a large number of artists to produce a wide range of interesting film works. Other festivals include the Tirana Jazz Festival, Tirana Biennial, Guitar Sounds Festival, Albanian Wine Festival, and sports events like track and field championships, Rally Albania, and mountain biking events. In 2016, the first Telekom Electronic Beats Festival were held in Tirana, bringing the latest trends from the "urban lifestyle" to the Albanian youth. This is in effort to increase the number of tourist visits to Tirana. However, the city is become a popular destination for many young people around the region during the vacation period. In 2016, Albania surpassed Spain by becoming the country with the most coffee houses per capita in the world. In fact, there are 654 coffee houses per 100,000 inhabitants in Albania, a country with only 2.5 million inhabitants. This is due to coffee houses closing down in Spain due to the economic crisis, and the fact that as many cafes open as they close in Albania. In addition, the fact that it was one of the easiest ways to make a living after the fall of communism in Albania, together with the country's Ottoman legacy further reinforce its strong dominance in Albania. Tirana's restaurant scene has evolved recently characterized by stylish interiors and delicious food grown locally. The Tirana region is known for the Fergesa traditional dish made with either peppers or liver, and is found at a number of traditional restaurants in the city and agri-tourism sites on the outskirts of Tirana. Being the capital, Tirana is the center of sport in Albania, where activity is organized across amateur and professional levels. It is home to many major sporting facilities. Starting from 2007, the Tirana Municipality has built up to 80 sport gardens in most of Tirana's neighborhoods. One of the latest projects is the reconstruction of the existing Olympic Park, that will provide infrastructure for most intramural sports. Tirana hosted in the past three major events, the FIBA EuroBasket 2006, 2011 World Mountain Running Championships and the 2013 European Weightlifting Championships. There are two major stadiums, the former Qemal Stafa Stadium and the Selman Stërmasi stadium. The former was demolished in 2016 to make way for the new national stadium. The new stadium called the "Air Albania Stadium" was constructed on the same site of the former Qemal Stafa Stadium and it is planned to open in late 2019. It will have an underground parking, Marriott Tirana Hotel, shops and bars and will be used for entertainment events. Tirana's sports infrastructure is developing fast because of the investments from the municipality and the government. Football is the most widely followed sport in Tirana as well as in the country, having numerous club teams including the KF Tirana, Partizani Tirana, and Dinamo Tirana. It is popular at every level of society, from children to wealthy professionals. In football, as of April 2012, the Tirana-based teams have won a combined 57 championships out of 72 championships organized by the FSHF, i.e. 79% of them. Another popular sport in Albania is basketball, represented in particular by the teams KB Tirana, BC Partizani, BC Dinamo, Ardhmëria and also the women's PBC Tirana. Recently two rugby teams were created: Tirana Rugby Club, founded in 2013 and Ilirët Rugby Club founded in 2016.
https://en.wikipedia.org/wiki?curid=31075
The Wedding Planner The Wedding Planner is a 2001 American romantic comedy film directed by Adam Shankman, written by Michael Ellis and Pamela Falk, and starring Jennifer Lopez and Matthew McConaughey. Ambitious San Francisco wedding planner Mary Fiore is re-introduced to childhood acquaintance Massimo by her father Salvatore, who wants them to marry, but Mary declines. Hoping to persuade her boss, Geri, to make her a partner at their company, Mary is hired to plan catering heiress Fran Donolly’s society wedding to long-term boyfriend "Eddie". When Mary’s shoe becomes stuck in a manhole cover, a man pulls her out of the way of a speeding dumpster, and she manages to thank him before fainting. Waking up in the hospital, Mary meets her rescuer, pediatrician Steve Edison. Her colleague Penny persuades Steve to attend an outdoor movie with them, but makes up an excuse to leave the pair alone together. Mary and Steve dance, but are interrupted by a heavy downpour before they can kiss. At a dance lesson with a client, Mary encounters Fran, who introduces her to her fiancé "Eddie", none other than Steve. Fran leaves them to dance together, and Mary angrily rebukes Steve for leading her on behind Fran's back. Penny persuades Mary that her career is more important than her feelings and to continue planning Fran and Steve's wedding, and Steve's colleague assures him that his connection with Mary is only due to pre-wedding nerves. On a visit to a potential wedding venue in Napa Valley, Massimo appears and, to Mary's horror, introduces himself as her fiancé. While riding through the estate with Fran's parents, Mrs. Donelly's singing frightens Mary's horse. Steve rescues Mary again and admonishes her for condemning his actions when she was also engaged. At home, Mary scolds her father for trying to set her up with Massimo. Salvatore reveals that his marriage to her mother, which Mary has viewed as the perfect relationship, was arranged and only became a loving relationship months later, leaving Mary conflicted. While visiting another potential venue, Fran reveals she is going on a week-long business trip and leaves Mary and Steve to continue preparations. They apologize for their angry words, and soon become friends. They run into Wendy and Keith, whom Mary reveals was once her fiancé until she caught him cheating with Wendy, his secret high-school girlfriend, on the night of their rehearsal dinner. After getting drunk and struggling to get back into her apartment, Mary breaks down, lamenting that Keith is married and expecting a baby while she is alone and miserable. Steve manages to get Mary inside and comforts her as she sobers up, insisting that Keith was a fool to pick Wendy over her. Steve leaves, but quickly returns and confesses his feelings for Mary. She sadly replies that she respects Fran too much to let anything happen between them, and sends Steve away. Fran confesses to Mary that she is unsure if she is still in love with Steve. Ignoring her own heart, Mary convinces Fran to continue with the wedding. At a birthday party, Massimo offers Mary a heartfelt proposal and she reluctantly accepts; the two couples prepare for their same-day weddings. Leaving Penny to coordinate the Donelly wedding, Mary goes to marry Massimo at the town hall. Steve asks Fran if they are doing the right thing, and she admits that she does not want to get married. They part as friends, Fran leaving to enjoy their honeymoon alone. Penny reveals Mary’s marriage plans to Steve, and he rushes to stop her. At the town hall, Massimo and Mary prepare to marry but her father stops the ceremony, realizing the wedding is not what Mary wants. Mary, having given up on true love, insists that life is not a fairy tale and marrying Massimo is the right thing to do, but realizes he is not the one for her and leaves. Steve arrives to find Salvatore and Massimo, who reveals that he could not go ahead with the wedding knowing Mary was actually in love with Steve. Steve reveals his feelings to Salvatore, who tells him to go and get her. Steve and Massimo ride off on Massimo's scooter to the park, where another outdoor movie is starting. Steve finds Mary, asks her to dance, and they kiss. The original actors set to play Mary and Steve were Jennifer Love Hewitt and Brendan Fraser, respectively. They were replaced with Sarah Michelle Gellar and Freddie Prinze Jr. Both couples eventually dropped out due to scheduling conflicts, leaving Lopez and McConaughey to be the eventual stars. Many of the scenes were shot in Golden Gate Park, specifically at the Music Concourse (between the old De Young Museum and the old California Academy of Sciences), the Japanese Tea Garden and The Huntington Library and Gardens. The first wedding ceremony is filmed inside the chapel at Stanford University. "The Wedding Planner" was released on January 26, 2001. "The Wedding Planner" was screened at 2,785 theaters and grossed $13,510,293 on its opening weekend (the Super Bowl weekend), opening at number one at the Box Office. It grossed $60,400,856 domestically and earned a worldwide tally of $94,728,529. "The Wedding Planner" received generally negative reviews from critics. Based on 104 reviews collected by Rotten Tomatoes, the film has a "Rotten" rating from critics, with 16% positive reviews and an average rating of 3.9/10. The consensus reads, " Instead of being light and charming, this romantic comedy is heavy-handed and contrived in its execution. Also, it's too unoriginal." Metacritic gave the film an average score of 33 out of 100, based on 29 reviews from mainstream critics. "Entertainment Weekly"s Lisa Schwarzbaum critically compared the film to "My Best Friend's Wedding", writing that: "Where Julia Roberts turned the world on with her huggability, Lopez's vibe is that of someone afraid to get mussed. And where Rupert Everett was divine as a sidekick, McConaughey is mortally ordinary as a main dish who spends most of his time smiling like a party guest." Kimberly Jones of "The Austin Chronicle" noted that the two leading characters being mistreated was the biggest disappointment from "The Wedding Planner", feeling that while Lopez and McConaughey have "enormous charisma" (referencing Lopez's work on "Out of Sight" (1998) as an example) the "blandness of "The Wedding Planner" burlap-sacks their appeal in an altogether dowdy outing for two stars who deserve much snazzier threads." A writer from "The New York Times" wrote that the charisma of the movie's stars along with their goofiness makes ""The Wedding Planner" more painless than it has a right to be." "Variety"s Robert Koehler described "The Wedding Planner" as: "an attractive bridesmaid but hardly a gorgeous bride among romantic comedies." Michael Thomson from the BBC wrote that: "Unfortunately, after the two leads become less wired in each other's presence, and the sexual tension begins to droop, everyone seems to be reading an autocue." The film was nominated for a Razzie Award for Worst Actress for Lopez.
https://en.wikipedia.org/wiki?curid=31079
The Seekers The Seekers are an Australian folk-influenced pop quartet, originally formed in Melbourne in 1962. They were the first Australian pop music group to achieve major chart and sales success in the United Kingdom and the United States. They were popular during the 1960s with their best-known configuration as: Judith Durham on vocals, piano, and tambourine; Athol Guy on double bass and vocals; Keith Potger on twelve-string guitar, banjo, and vocals; and Bruce Woodley on guitar, mandolin, banjo, and vocals. The group had Top 10 hits in the 1960s with "I'll Never Find Another You", "A World of Our Own", "Morningtown Ride", "Someday, One Day" (written by Paul Simon), "Georgy Girl" (the title song of the film of the same name), and "The Carnival Is Over" by Tom Springfield, the last being an adaptation of the Russian folk song "Stenka Razin". It is still one of the top 50 best-selling singles in the UK. Australian music historian Ian McFarlane described their style as "concentrated on a bright, uptempo sound, although they were too pop to be considered strictly folk and too folk to be rock." In 1967, they were named as joint "Australians of the Year" – the only group thus honoured. In July 1968, Durham left to pursue a solo career and the group disbanded. The band has reformed periodically, and in 1995 they were inducted into the ARIA Hall of Fame. "I'll Never Find Another You" was added to the National Film and Sound Archive of Australia's Sounds of Australia registry in 2011. Woodley's and Dobe Newton's song "I Am Australian", which was recorded by The Seekers, and by Durham with Russell Hitchcock and Mandawuy Yunupingu, has become an unofficial Australian anthem. With "I'll Never Find Another You" and "Georgy Girl", the band also achieved success in the United States, but not nearly at the same level as in the rest of the world. The Seekers have sold over 50 million records worldwide. The Seekers were individually honoured as Officers of the Order of Australia in the Queen's Birthday Honours of June 2014. The Seekers were formed in 1962 in Melbourne by Athol Guy on double bass, Keith Potger on twelve-string guitar and Bruce Woodley on guitar. Guy, Potger and Woodley had all attended Melbourne Boys High School in Victoria. In the late 1950s, Potger led The Trinamics, a rock 'n' roll group, Guy led the Ramblers and, with Woodley, they decided to form a doo-wop music group, the Escorts. The Escorts had Ken Ray as the lead singer and in 1962 they became "The Seekers". Ray left the group to get married. His place was taken by Judith Durham, an established traditional jazz singer who added a distinctive female lead voice. She had earlier recorded an extended play disc on W&G Records with the Melbourne group, Frank Traynor's Jazz Preachers. Durham and Guy had met when they both worked in an advertising agency – initially Durham only sang periodically with the Seekers, when not performing at local jazz clubs. She was replaced in Traynor's jazz ensemble by Margret RoadKnight. The Seekers performed folk-influenced pop music and soon gathered a strong following in Melbourne. Durham's connections with W&G Records led to the group's later signing a recording contract with the label. Their debut album, ""Introducing the Seekers"", was released in 1963. Their debut single was the traditional historic Australian bush ballad from 1894, "Waltzing Matilda", which appeared in November and reached the Melbourne "Top 40" singles chart. and peaked at number 74 on the national chart. When being photographed for the album's cover, Potger was replaced by Ray – his day job with the Australian Broadcasting Commission (ABC) as a radio producer barred him from involvement in a commercial enterprise. The Seekers were offered a twelve-month position as on-board entertainment on the Sitmar Line passenger cruise ship "Fairsky" in March 1964. In May, they travelled to the U.K. and had intended to return to Australia after staying ten weeks, but upon arrival they were offered work by a London booking agency, the Grade Organisation. They signed there with World Record Club and issued a single, "Myra", co-written by the group. The group regularly appeared on a British TV show series, ""Call in on Carroll"", hosted by Ronnie Carroll. After filling in on a bill headlined by folk singer Dusty Springfield, they met her brother, songwriter and producer Tom Springfield, who had experience with writing folk-pop material and lyrics/tunes with the siblings' earlier group The Springfields. He penned "I'll Never Find Another You", which they recorded in November 1964. It was released by EMI Records, on their Columbia Graphophone Company (Columbia) label, in December and was championed by the offshore radio station "Radio Caroline" which frequently played and promoted their music. Despite the fact that the group had not signed a contract with EMI, the single reached the U.K. "Top 50" and began selling well. In February 1965, it reached No.1 in the U.K. and Australia, and No.4 in the United States where it was released on EMI's Capitol Records label. "I'll Never Find Another You" was the seventh biggest-selling single in Britain for 1965 though their own "The Carnival Is Over", released later in the year, would eventually eclipse it – and went on to sell 1.75 million copies worldwide. The Seekers were the first Australian pop group to have a "Top 5" hit in all three countries – Australia, U.K. and U.S.A. Australian music historian, Ian McFarlane described their style as "concentrated on a bright, uptempo sound, although they were too pop oriented to be considered strictly folk and too folk to be rock." The distinctive soprano voice of Durham, the group's vocal harmonies and memorable songs encouraged the British media, including the national broadcasting agency on radio and television, the BBC, to give them exposure, allowing them to appeal to a broad cross-section of the young British folk, pop and rock music audience. The Seekers achieved their first success in the United States in 1965 with their highly popular hit, "I'll Never Find Another You", reaching peaks of No. 4 - Pop and No. 2 - Easy Listening in "Billboard" magazine surveys. They followed "I'll Never Find Another You" with "What Have They Done to the Rain?" in February 1965 which did not chart into the "Top 40". In May, another Tom Springfield composition followed, "A World of Our Own", which reached "Top 3" in Australia and the U.K. and "Top 20" in the U.S. Malvina Reynolds' lullaby "Morningtown Ride" was issued in Australia in July and peaked in the "Top 10". "The Carnival Is Over" (the melody is based on a Russian folk song, while the lyrics were written by Tom Springfield), appeared in November, which reached "No. 1" top status in both Australia and the U.K. At its peak, the single was selling 93,000 copies a day in Great Britain alone. Also in 1965, they met Paul Simon (of the American duo Simon & Garfunkel) who was pursuing a solo career in the U.K. following the initial poor chart success of the duo's debut LP, "Wednesday Morning, 3 A.M.". In 1966, the Seekers released the Simon-penned "Someday One Day", which reached "No. 4" in Australia and "No. 11" in the U.K. Their version was Simon's first U.K. success as a songwriter, and his first solo major hit as a composer outside of his work with Art Garfunkel. Woodley co-wrote some songs with Simon, including "Cloudy", "I Wish You Could Be Here" and "Red Rubber Ball" which became an American "No. 2" single for The Cyrkle. The Seekers' version was provided on their 1966 LP, ""Come the Day"" (released on the album ""Georgy Girl"" in the U.S.A.). Early in 1966, after returning to Australia, the Seekers filmed their first TV special, "At Home with the Seekers". The band were named "Best New Group of 1964" at the April 1965 "New Musical Express" Poll Winners Awards. They appeared at the annual celebratory Wembley Empire Pool concert, on a bill which included the Beatles, the Rolling Stones, Dusty Springfield and the Animals. On 16 November in the same year, they appeared at a Royal Command Performance at the London Palladium, before Queen Elizabeth - the Queen Mother (spouse of the late King George VI). In November, a re-recorded version of "Morningtown Ride" was released in the U.K., which reached "No. 2". The song had been recorded earlier as an Australian single from the 1964 album ""Hide and Seekers"" and appeared on the 1965 American debut, ""The New Seekers"". In February 1967, "Morningtown Ride" reached the "Top 50" in the U.S. In December 1966 they issued "Georgy Girl", which became their highest charting American hit when it reached "No. 2" on the "Billboard" Hot 100 and "No. 1" on the "Cashbox" "Top 100" in February 1967. It was the title song and theme for the British film of the same name starring Lynn Redgrave and James Mason and sold 3.5 million copies worldwide. The band was awarded a gold record certificate by the Recording Industry Association of America. Meanwhile, it was "No. 3" in the U.K., and "No. 1" on the top of the charts in Australia. Its writers, Jim Dale and Tom Springfield, were nominated for the 1967 Academy Award for Best Original Song of 1966, but lost out that year for the "Oscar" to the title song from the film, ""Born Free"". In March 1967, The Seekers returned to Australia for a homecoming tour, which included a performance at Music for the People, at the Sidney Myer Music Bowl in Melbourne, attended by an estimated audience of 200,000. "Guinness Book of World Records" (1968) listed it as the greatest attendance at a concert in the Southern Hemisphere. Melburnians were celebrating the annual Moomba Festival, a free community festival, and many thousands were enjoying other attractions but are included in the crowd estimate. The Seekers were accompanied during their 20-minute set by the Australian Symphony Orchestra, conducted by Hector Crawford. Film of their appearance was incorporated into their 1967 Australian television special "The Seekers Down Under", which was screened on Channel 7 and drew a then record audience of over 6 million. It was also screened in the UK on BBC1 on 24 June 1968, and repeated on 27 December 1968. In January 1968, on Australia Day, in recognition of its achievements, the group was named joint Australians of the Year – the only group to have this honour bestowed upon it. They personally accepted their awards from John Gorton, the Prime Minister of Australia, during their tour. During this visit, the group filmed another TV special, "The World of the Seekers", which was screened in cinemas before being screened nationally on Channel 9 to high ratings and is in the Top 10 most watched TV shows of the 20th century in Australia. During the New Zealand tour on 14 February 1968, Durham approached the other group members to announce that she was leaving The Seekers to pursue a solo career and the group subsequently disbanded. Their final performance, on Tuesday 9 July, was screened by the BBC as a special called "Farewell the Seekers", with an audience of more than 10 million viewers. The special had been preceded by a week-long season at London's Talk of the Town nightclub and a live recording of one of their shows was released as a live LP record, "Live at the Talk of the Town". It reached No. 2 on the UK charts. Also in July, the compilation album "The Seekers' Greatest Hits" was released and spent 17 weeks at No. 1 in Australia. It was released as "The Best of The Seekers" in the UK and spent 6 weeks at No. 1 in 1969, managing to knock "The Beatles (White Album)" off the top of the charts and preventing The Rolling Stones' "Beggars Banquet" from reaching the top spot. The album spent 125 weeks in the charts in the UK. Following the Seekers' split, Durham pursued a solo career. She released a Christmas album called "For Christmas with Love" (recorded in Hollywood, California) and later signed with A&M Records, releasing more albums including "Gift of Song" and "Climb Ev'ry Mountain". Guy hosted his own TV show in Australia, "A Guy Called Athol", before entering politics in 1973. In 1969, Potger formed and managed another group, the New Seekers in the UK, which were more pop-oriented. Woodley released several solo albums and focused on songwriting, including co-writing the patriotic song "I Am Australian" with Dobe Newton (of the Bushwackers) in 1987. From 1972, Guy, Potger and Woodley planned on reforming the Seekers without Durham. By 1975 they had recruited Louisa Wisseling, a semi-professional folk singer formerly with Melbourne group the Settlers. They had a top 10 Australian hit with the Woodley-penned "The Sparrow Song". Woodley left the group in June 1977 and was replaced by Buddy England, a former 1960s pop singer and member of the Mixtures. In 1978, Guy was replaced by Peter Robinson (ex-Strangers) and Cheryl Webb replaced Wisseling as lead vocalist, leaving only Keith Potger from the original Seekers line-up. In 1980 the group released an album, "A little bit of Country" and toured periodically until the mid '80s. In 1988, Guy, Potger and Woodley reformed the Seekers with Julie Anthony, a popular cabaret singer. In May, the group sang "The Carnival Is Over" at the World Expo 88 in Brisbane. In March 1989, the group released the album "Live On", which peaked in the top 30 on the Australian Recording Industry Association (ARIA) Albums Chart. In June 1990, Anthony left and was replaced by Karen Knowles, a former teen pop singer on "Young Talent Time". However the unique timbre of Durham's voice was missing from their sound and the group split again. The Seekers reunited late in 1992, with the classic line-up of Durham, Guy, Potger and Woodley. In March 1992, all four met together, for the first time in 20 years, at a restaurant in Toorak, an inner suburb of Melbourne. Before then they had never talked about reforming, they just wanted to get to know each other again. It was two months later that they decided to do a reunion concert, which led to a 102 date tour. The 25-Year Silver Jubilee Reunion Celebration tour in 1993 was sufficiently successful that the group has continued to perform and record together, on and off, ever since. They staged several sell-out tours of Australia, New Zealand and the UK. The reformed group issued several albums, including new studio albums "Future Road" in October 1997 (which peaked at No. 4 on the ARIA Albums Chart) and "Morningtown Ride to Christmas" (which reached the top 20 in 2001). and both albums were certified platinum. In 1995, the group were inducted into the ARIA Hall of Fame. In the buildup to the Sydney 2000 Summer Olympics, an ABC TV satire, "The Games", parodied the Seekers in the final episode, "The End". Durham had suffered a broken hip and sang "The Carnival Is Over" in a wheelchair at the closing ceremony of the related Paralympic Games on 29 October. "Long Way to the Top" was a 2001 Australian Broadcasting Corporation six-part documentary on the history of Australian rock and roll from 1956 to the modern era. The Seekers featured on the second episode, "Ten Pound Rocker 1963–1968", broadcast on 22 August, in which Durham and Woodley discussed their early work on a cruise ship, meeting Tom Springfield and their success in Britain. Four of their songs were played during the episode: "I'll Never Find Another You", "The Carnival Is Over", "A World of Our Own" and "Georgy Girl". In October 2002, on the 40th anniversary of their formation, they were the subjects of a special issue of Australian postage stamps. On 1 September 2006, they were presented with the Key to the City by Melbourne's Lord Mayor, John So. In February 2009, SBS TV program "RocKwiz" hosted a 50th anniversary concert at the Myer Music Bowl, "RocKwiz Salutes the Bowl", which included "World of Our Own" performed by Rebecca Barnard and Billy Miller and "The Carnival Is Over" by Durham. In 2004 a DVD, "The Seekers at Home and Down Under", was released. It consists of a 1966 television documentary on the Seekers and a 1967 special. The cover includes a photo from the 1966 documentary. In October 2010, "The Best of the Seekers" (1968) was listed in the book "100 Best Australian Albums". Also in October, they were scheduled to tour various Australian cities in support of violinist André Rieu and his orchestra. However, the tour was postponed when Rieu was taken ill. They released another "Greatest Hits" compilation in May 2011 which peaked in the top 40. That month they supported Rieu on the rescheduled Australian tour. "I'll Never Find Another You" was added to the National Film and Sound Archive of the Sounds of Australia registry in 2011. "The Seekers' Golden Jubilee Tour" kicked off 2013 in May, celebrating fifty years since the group had formed in December 1962. Performing in Sydney, Brisbane, Newcastle and Melbourne, they received rave reviews to sold-out audiences. However, Judith Durham suffered a brain hemorrhage after their first concert in Melbourne. The rest of the Australian tour and later-to-be-staged UK tour were postponed; the former continued in November, while the UK tour took place in May and June 2014, ending with two performances at the Royal Albert Hall, London. In November 2015, during a tour of Guy's new band 'Athol Guy and Friends' featuring Jenny Blake on vocals, Guy's band was joined by his two Melbourne Boys High School colleagues Potger and Woodley for a one-performance fundraiser hosted by the school. The performance featured many of the Seekers' hits as well as other songs that had influenced them over the years. The performance closed with a performance of "I Am Australian", which Guy introduced as a song that was pertinent given "what was happening around the world" at the time. In April 2019, The Seekers released "Farewell" - a live recording from their 50th Anniversary tour of 2013. Following Durham's retirement from live performance, the band continued on as The Original Seekers with the addition of long-time producer and guitarist/singer Michael Cristiano as the band's "fourth voice". In June 2019, the group released a new studio album titled "Back to Our Roots". The album see Athol, Keith and Bruce join with Michael Cristiano to record songs they had sung pre Judith, and a few they wish they had recorded. The album was released as under the title of The Original Seekers. On 28 April 2020, Universal Music Australia announced a trilogy of The Seekers compilation albums will be released over the next twelve months under the title "Hidden Treasures", featuring rarities and lost classics. "Hidden Treasures - Volume 1" was released on 22 May 2020 and debuted at number 39 on the ARIA Charts. The following recordings by the Seekers were each certified as having sold over one million copies: "I'll Never Find Another You", "A World of Our Own", "The Carnival Is Over" and "Georgy Girl". They were each awarded a gold disc. The Seekers have sold over 50 million records worldwide. General Specific
https://en.wikipedia.org/wiki?curid=31085
Timothy Leary Timothy Francis Leary (October 22, 1920 – May 31, 1996) was an American psychologist and writer known for his strong advocacy of psychedelic drugs. Opinions of Leary are polarized, ranging from bold oracle to publicity hound. He was "a hero of American consciousness," according to Allen Ginsberg, and Tom Robbins called him a "brave neuronaut." But to Louis Menand, it was a put-on: "The only things Leary was serious about were pleasure and renown." Leary was not a seeker of truth, according to Menand: "He liked women, he liked being the center of attention, and he liked to get high." As a clinical psychologist at Harvard University, Leary worked on the Harvard Psilocybin Project from 1960–62 (LSD and psilocybin were still legal in the United States at the time), resulting in the Concord Prison Experiment and the Marsh Chapel Experiment. The scientific legitimacy and ethics of his research were questioned by other Harvard faculty because he took psychedelics along with research subjects and pressured students to join in. Leary and his colleague, Richard Alpert (who later became known as Ram Dass), were fired from Harvard University in May 1963. Most people first heard of psychedelics after the Harvard scandal. Leary believed that LSD showed potential for therapeutic use in psychiatry. He used LSD himself and developed a philosophy of mind expansion and personal truth through LSD. After leaving Harvard, he continued to publicly promote the use of psychedelic drugs and became a well-known figure of the counterculture of the 1960s. He popularized catchphrases that promoted his philosophy, such as "turn on, tune in, drop out", "set and setting", and "think for yourself and question authority". He also wrote and spoke frequently about transhumanist concepts of space migration, intelligence increase, and life extension (SMI²LE). Leary developed the eight-circuit model of consciousness in his book "Exo-Psychology" (1977) and gave lectures, occasionally billing himself as a "performing philosopher". During the 1960s and 1970s, he was arrested often enough to see the inside of 36 prisons worldwide. President Richard Nixon once described Leary as "the most dangerous man in America". Leary was born on October 22, 1920 in Springfield, Massachusetts, the only child in an Irish Catholic household. His father, Timothy "Tote" Leary, was a dentist who left his wife Abigail Ferris when Leary was 14. He graduated from Classical High School in the western Massachusetts city. He attended the College of the Holy Cross in Worcester, Massachusetts from 1938 to 1940. Under pressure from his father, he became a cadet in the United States Military Academy at West Point, New York. In the first months as a "plebe", he received numerous demerits for rule infractions and then got into serious trouble for failing to report rule breaking by cadets he supervised. He was also accused of going on a drinking binge and failing to admit it, and was asked by the Honor Committee to resign. He refused and was "silenced" — that is, shunned, by fellow cadets. He was acquitted by a court-martial, but the silencing continued, as well as the onslaught of demerits for small rule infractions. In his sophomore year his mother appealed to a family friend, United States Senator David I. Walsh, head of the Senate Naval Affairs Committee, who investigated personally. The Honor Committee quietly revised its position and announced that it would abide by the court-martial verdict. Leary then resigned and was honorably discharged by the Army. Almost 50 years later, he said that it was "the only fair trial I've had in a court of law". To the chagrin of his family, Leary transferred to the University of Alabama in late 1941 because it admitted him so expeditiously. He enrolled in the university's ROTC program, maintained top grades, and began to cultivate academic interests in psychology (under the aegis of the Middlebury and Harvard-educated Donald Ramsdell) and biology. Leary was expelled a year later for spending a night in the female dormitory and lost his student deferment in the midst of World War II. Leary was drafted into the United States Army and received basic training at Fort Eustis in 1943. He remained in the non-commissioned officer track while enrolled in the psychology subsection of the Army Specialized Training Program, including three months of study at Georgetown University and six months at Ohio State University. With limited need for officers late in the war, Leary was briefly assigned as a private first class to the Pacific War-bound 2d Combat Cargo Group (which he later characterized as "a suicide command... whose main mission, as far as I could see, was to eliminate the entire civilian branch of American aviation from post-war rivalry") at Syracuse Army Air Base in Mattydale, New York. After a fateful reunion with Ramsdell (who was assigned to Deshon General Hospital in Butler, Pennsylvania as chief psychologist) in Buffalo, New York, he was promoted to corporal and reassigned to his mentor's command as a staff psychometrician. He remained in Deshon's deaf rehabilitation clinic for the remainder of the war. While stationed in Butler, Leary courted Marianne Busch; they married in April 1945. Leary was discharged at the rank of sergeant in January 1946, having earned the Good Conduct Medal, the American Defense Service Medal, the American Campaign Medal, and the World War II Victory Medal. Leary was reinstated at the University of Alabama and received credit for his Ohio State psychology coursework. He completed his degree via correspondence courses and graduated in August, 1945. After receiving his undergraduate degree, Leary pursued an academic career. In 1946, he received a M.S. in psychology at the State College of Washington, where he studied under educational psychologist Lee Cronbach. His M.S. thesis was on clinical applications of the Wechsler Adult Intelligence Scale. In 1947, Marianne gave birth to their first child, Susan. A son, Jack, arrived two years later. In 1950, Leary received a Ph.D. in clinical psychology from the University of California, Berkeley. Like many social scientists of the postwar era, Leary was galvanized by the objectivity of modern physics; his doctoral dissertation ("The Social Dimensions of Personality: Group Structure and Process") approached group therapy as a "psychlotron" from which behavioral characteristics could be derived and quantified in a manner analogous to the periodic table, presaging his later development of the interpersonal circumplex. The new Ph.D. stayed on in the Bay Area as an assistant clinical professor of medical psychology at the University of California, San Francisco; concurrently, Leary co-founded Kaiser Hospital's psychology department in Oakland, California and maintained a private consultancy. In 1952, the Leary family spent a year in Spain, subsisting on a research grant. According to Berkeley colleague Marv Freedman, "Something had been stirred in him in terms of breaking out of being another cog in society..." Despite professional success, his marriage was strained by infidelity and mutual alcohol abuse. Marianne eventually committed suicide in 1955, leaving him to raise their son and daughter alone. He described himself during this period as "an anonymous institutional employee who drove to work each morning in a long line of commuter cars and drove home each night and drank martinis ...like several million middle-class, liberal, intellectual robots." From 1954 or 1955 to 1958, Leary directed psychiatric research at the Kaiser Family Foundation. In 1957, Leary published "The Interpersonal Diagnosis of Personality". The Annual Review of Psychology exuberantly dubbed it the 'most important book on psychotherapy of the year.' In 1958 the National Institute of Mental Health terminated Leary's research grant after he failed to meet with a NIMH investigator. Leary and his children relocated to Europe where he attempted to write his next book while subsisting on small grants and insurance policies. His stay in Florence was unproductive and indigent, prompting a return to academia. In late 1959 he started as a lecturer in clinical psychology at Harvard University at the behest of Frank Barron (a colleague from Berkeley) and David McClelland. Leary and his children lived in nearby Newton, Massachusetts. In addition to teaching, Leary was affiliated with the Harvard Center for Research in Personality under McClelland. He oversaw the Harvard Psilocybin Project and conducted experiments in conjunction with assistant professor Richard Alpert. In 1963, Leary was terminated for failing to attend scheduled class lectures, though he maintained that he had met his teaching obligations. The decision to dismiss him may have been influenced by his promotion of psychedelic drug use among Harvard students and faculty. The drugs were legal at the time. His work in academic psychology expanded on the research of Harry Stack Sullivan and Karen Horney, which sought to better understand interpersonal processes to help diagnose disorders. Leary's dissertation developed the interpersonal circumplex model, later published in "The Interpersonal Diagnosis of Personality". The book demonstrated how psychologists could use Minnesota Multiphasic Personality Inventory (MMPI) scores to predict how respondents might react to various interpersonal situations. Leary's research was an important harbinger of transactional analysis, directly prefiguring the popular work of Eric Berne. On May 13, 1957, "Life" magazine published an article by R. Gordon Wasson about the use of psilocybin mushrooms in religious rites of the indigenous Mazatec people of Mexico. Anthony Russo, a colleague of Leary's, had experimented with psychedelic "Psilocybe mexicana" mushrooms on a trip to Mexico and told Leary about it. In August 1960, Leary traveled to Cuernavaca, Mexico with Russo and consumed psilocybin mushrooms for the first time, an experience that drastically altered the course of his life. In 1965, Leary commented that he had "learned more about ... (his) brain and its possibilities ... [and] more about psychology in the five hours after taking these mushrooms than ... in the preceding 15 years of studying and doing research." Back at Harvard, Leary and his associates (notably Richard Alpert, later known as Ram Dass) began a research program known as the Harvard Psilocybin Project. The goal was to analyze the effects of psilocybin on human subjects (first prisoners, and later Andover Newton Theological Seminary students) from a synthesized version of the drug (which was legal at the time), one of two active compounds found in a wide variety of hallucinogenic mushrooms, including "Psilocybe mexicana". Psilocybin was produced in a process developed by Albert Hofmann of Sandoz Pharmaceuticals, who was famous for synthesizing LSD. Beat poet Allen Ginsberg heard about the Harvard research project and asked to join. Leary was inspired by Ginsberg's enthusiasm, and the two shared an optimism that psychedelics could help people discover a higher level of consciousness. They began introducing psychedelics to intellectuals and artists including Jack Kerouac, Maynard Ferguson, Charles Mingus and Charles Olson. Leary argued that psychedelic substances—in proper doses, a stable setting, and under the guidance of psychologists could benefit behavior in ways not easily obtained by regular therapy. He experimented in treating alcoholism and reforming criminals, and many of his subjects said they had profound mystical and spiritual experiences which permanently improved their lives. The Concord Prison Experiment evaluated the application of psilocybin and psychotherapy in the rehabilitation of released prisoners. Thirty-six prisoners were reported to have repented and sworn off criminality after being guided through the psychedelic experience by Leary and his associates. The overall recidivism rate for American prisoners was 60 percent, whereas the rate for those in Leary's project reportedly dropped to 20 percent. The experimenters concluded that long-term reduction in criminal recidivism could be effected with a combination of psilocybin-assisted group psychotherapy (inside the prison) along with a comprehensive post-release follow-up support program modeled on Alcoholics Anonymous. The Concord conclusions were contested in a follow-up study on the basis of time differences monitoring the study group vs. the control group, and differences between subjects re-incarcerated for parole violations and those imprisoned for new crimes. The researchers concluded that statistically only a slight improvement could be attributed to psilocybin in contrast to the significant improvement reported by Leary and his colleagues. Rick Doblin suggested that Leary had fallen prey to the Halo Effect, skewing the results and clinical conclusions. Doblin further accused Leary of lacking "a higher standard" or "highest ethical standards in order to regain the trust of regulators". Ralph Metzner rebuked Doblin for these assertions: "In my opinion, the existing accepted standards of honesty and truthfulness are perfectly adequate. We have those standards, not to curry favor with regulators, but because it is the agreement within the scientific community that observations should be reported accurately and completely. There is no proof in any of this re-analysis that Leary unethically manipulated his data." Leary and Alpert founded the International Federation for Internal Freedom (IFIF) in 1962 in Cambridge, Massachusetts, in order to carry out studies in the religious use of psychedelic drugs. This was run by Lisa Bieberman (now known as Licia Kuenning), a friend of Leary. "The Harvard Crimson" described her as a "disciple" who ran a Psychedelic Information Center out of her home and published a national LSD newspaper. That publication was actually Leary and Alpert's journal "Psychedelic Review", and Bieberman (a graduate of the Radcliffe Institute for Advanced Study at Harvard, who had volunteered for Leary as a student) was its circulation manager. Leary and Alpert's research attracted so much public attention that many who wanted to participate in the experiments had to be turned away due to the high demand. To satisfy the curiosity of those who were turned away, a black market for psychedelics sprang up near the Harvard campus. Other professors in the Harvard Center for Research in Personality raised concerns about the legitimacy and safety of the experiments. Leary and Alpert taught a class that was required for graduation and colleagues felt they were abusing their power by pressuring graduate students to take hallucinogens in the experiments. Leary and Alpert also went against policy by giving psychedelics to undergraduate students, and did not select participants through random sampling. It was also problematic that the researchers sometimes took hallucinogens along with the subjects they were supposed to be studying. These concerns were printed in "The Harvard Crimson" leading the university to halt the experiments. The Massachusetts Department of Public Health launched an investigation that was later dropped, but the university eventually fired Leary and Alpert. According to Andrew Weil, Leary was fired for missing scheduled lectures, while Alpert was dismissed for allegedly giving psilocybin to an undergraduate in an off-campus apartment. Harvard University President Nathan Marsh Pusey released a statement on May 27, 1963 reporting that Leary had left campus without authorization and "failed to keep his classroom appointments." His salary was terminated April 30, 1963. Leary's psychedelic experimentation attracted the attention of three heirs to the Mellon fortune, siblings Peggy, Billy, and Tommy Hitchcock. In 1963 they gave Leary and his associates access to a rambling 64-room mansion on an estate in Millbrook, New York, where they continued their psychedelic sessions. Peggy Hitchcock directed the International Federation for Internal Freedom (IFIF)'s New York branch, and her brother Billy rented the estate to IFIF. Leary and Alpert set up a communal group with former Psilocybin Project members at the Hitchcock Estate (commonly known as "Millbrook"). The IFIF was reconstituted as the Castalia Foundation after the intellectual colony in Herman Hesse's The Glass Bead Game). The Castalia group's journal was the "Psychedelic Review". The core group at Millbrook wanted to cultivate the divinity within each person and regularly joined LSD sessions facilitated by Leary. The Castalia Foundation also hosted non-drug weekend retreats for meditation, yoga, and group therapy. Leary later wrote: We saw ourselves as anthropologists from the 21st century inhabiting a time module set somewhere in the dark ages of the 1960s. On this space colony we were attempting to create a new paganism and a new dedication to life as art. The Millbrook estate was later described by Luc Sante of "The New York Times" as: the headquarters of Leary and gang for the better part of five years, a period filled with endless parties, epiphanies and breakdowns, emotional dramas of all sizes, and numerous raids and arrests, many of them on flimsy charges concocted by the local assistant district attorney, G. Gordon Liddy. Others contest the characterization of Millbrook as a party house. In "The Electric Kool-Aid Acid Test", Tom Wolfe portrays Leary as using psychedelics only for research, not recreation. When Ken Kesey's Merry Pranksters visited the estate, they received a frosty reception. Leary himself had the flu and did not play host. He later met Ken Kesey and Ken Babbs quietly in his room and promised to remain allies in the years ahead. In 1964, Leary coauthored a book with Alpert and Ralph Metzner called "The Psychedelic Experience" based on the "Tibetan Book of the Dead". In it, they wrote: A psychedelic experience is a journey to new realms of consciousness. The scope and content of the experience is limitless, but its characteristic features are the transcendence of verbal concepts, of spacetime dimensions, and of the ego or identity. Such experiences of enlarged consciousness can occur in a variety of ways: sensory deprivation, yoga exercises, disciplined meditation, religious or aesthetic ecstasies, or spontaneously. Most recently they have become available to anyone through the ingestion of psychedelic drugs such as LSD, psilocybin, mescaline, DMT, etc. Of course, the drug does not produce the transcendent experience. It merely acts as a chemical key — it opens the mind, frees the nervous system of its ordinary patterns and structures. Leary married model Birgitte Caroline "Nena" von Schlebrügge in 1964 at Millbrook. Both Nena and her brother Bjorn were friends of the Hitchcocks. D. A. Pennebaker, also a Hitchcock friend, and cinematographer Nicholas Proferes documented the event in the short film "You're Nobody Till Somebody Loves You". Charles Mingus played piano. The marriage lasted a year before von Schlebrügge divorced Leary in 1965 — she married Indo-Tibetan Buddhist scholar and ex-monk Robert Thurman in 1967 and gave birth to Ganden Thurman that same year. Actress Uma Thurman, her second child, was born in 1970. Leary met Rosemary Woodruff in 1965 at a New York City art exhibit, and invited her to Millbrook. After moving in, she co-edited the manuscript for Leary's 1966 book "Psychedelic Prayers: And Other Meditations" with Ralph Metzner and Michael Horowitz. The poems in the book were inspired by the "Tao Te Ching", and meant to be used as an aid to LSD trips. Woodruff helped Leary prepare weekend multimedia workshops simulating the psychedelic experience, which were presented around the East Coast. In September 1966, Leary said in a famous "Playboy" magazine interview that LSD could cure homosexuality. According to him, a lesbian became heterosexual after using the drug. Like most of the psychiatric field, he later decided that homosexuality was not an illness in need of a cure. By 1966, use of psychedelics by America's youth had reached such proportions that serious concerns about the nature of these drugs and the impact their use was having on American culture were expressed in the national press and halls of government. In response to these concerns, Senator Thomas Dodd of Connecticut convened Senate subcommittee hearings in order to try to better understand the drug-use phenomenon, eventually with the intention of "stamping out" such usage through the criminalizing of these drugs. Leary was one of several expert witnesses called to testify at these hearings. In his testimony, Leary asserted that "the challenge of the psychedelic chemicals is not just how to control them, but how to use them." He implored the subcommittee not to criminalize psychedelic drug use, which he felt would only serve to exponentially increase its usage among America's youth while removing the safeguards that controlled "set and setting" provided. When subcommittee member Senator Ted Kennedy of Massachusetts asked Leary if LSD usage was "extremely dangerous," Leary replied, "Sir, the motor car is dangerous if used improperly...Human stupidity and ignorance is the only danger human beings face in this world." To conclude his testimony, Leary suggested that legislation be enacted that would require LSD users to be adults who were competently trained and licensed, so that such individuals could use LSD "for serious purposes, such as spiritual growth, pursuit of knowledge, or their own personal development." He presciently noted that without such licensing, the United States would be faced with "another era of prohibition." Leary's testimony proved ineffective; on October 6, 1966, just months after the subcommittee hearings, LSD was banned in California, and by October 1968 LSD was banned in all states as a result of the passage of the Staggers-Dodd Bill. In 1966, Folkways Records recorded Leary reading from his book "The Psychedelic Experience", and released the album "The Psychedelic Experience: Readings from the Book "The Psychedelic Experience. A Manual Based on the Tibetan..."." On September 19, 1966, Leary reorganized the IFIF/Castalia Foundation under the nomenclature of the League for Spiritual Discovery, a religion with LSD as its holy sacrament, in part as an unsuccessful attempt to maintain legal status for the use of LSD and other psychedelics for the religion's adherents, based on a "freedom of religion" argument. Leary incorporated the League for Spiritual Discovery as a religious organization in New York State, and their belief structure was based on Leary's mantra: "drop out, turn on, tune in." (The Brotherhood of Eternal Love subsequently considered Leary their spiritual leader, but The Brotherhood did not develop out of International Federation for Internal Freedom.) Nicholas Sand, the clandestine chemist for the Brotherhood of Eternal Love, followed Leary to Millbrook and joined the League for Spiritual Discovery. Sand was designated the "alchemist" of the new religion. At the end of 1966, Nina Graboi, a friend and colleague of Leary's who had spent time with him at Millbrook, became the director of the Center for the League of Spiritual Discovery in Greenwich Village. The Center opened in March 1967. Leary and Alpert gave free weekly talks at the center, and other guest speakers included Ralph Metzner and Allen Ginsberg. Leary's papers at the New York Public Library include complete records of the International Federation for Internal Freedom (IFIF), the Castalia Foundation, and the League for Spiritual Discovery. During late 1966 and early 1967, Leary toured college campuses presenting a multimedia performance entitled "The Death of the Mind", attempting an artistic replication of the LSD experience. He said that the League for Spiritual Discovery was limited to 360 members and was already at its membership limit, but he encouraged others to form their own psychedelic religions. He published a pamphlet in 1967 called "Start Your Own Religion" to encourage people to do so. Leary was invited to attend the January 14, 1967 Human Be-In by Michael Bowen, the primary organizer of the event, a gathering of 30,000 hippies in San Francisco's Golden Gate Park. In speaking to the group, Leary coined the famous phrase "Turn on, tune in, drop out." In a 1988 interview with Neil Strauss, he said that this slogan was "given to him" by Marshall McLuhan when the two had lunch in New York City, adding, "Marshall was very much interested in ideas and marketing, and he started singing something like, 'Psychedelics hit the spot / Five hundred micrograms, that's a lot,' to the tune of [the well-known Pepsi 1950s singing commercial]. Then he started going, 'Tune in, turn on, and drop out.'" Though the more popular "turn on, tune in, drop out" became synonymous with Leary, his actual definition with the League for Spiritual Discovery was: ""Drop Out" – detach yourself from the external social drama which is as dehydrated and ersatz as TV. "Turn On" – find a sacrament which returns you to the temple of God, your own body. Go out of your mind. Get high. "Tune In" – be reborn. Drop back in to express it. Start a new sequence of behavior that reflects your vision." Repeated FBI raids ended the Millbrook era. Leary told author and Prankster Paul Krassner regarding a 1966 raid by Liddy, "He was a government agent entering our bedroom at midnight. We had every right to shoot him. But I've never owned a weapon in my life. I have never had and never will have a gun around." In November 1967, Leary engaged in a televised debate on drug use with MIT professor Jerry Lettvin. At the end of 1967, Leary moved to Laguna Beach, California and made many friends in Hollywood. "When he married his third wife, Rosemary Woodruff, in 1967, the event was directed by Ted Markland of "Bonanza". All the guests were on acid." In the late 1960s and early 1970s, Leary formulated his eight-circuit model of consciousness in collaboration with writer Brian Barritt. The essay "The Seven Tongues of God" claimed that human brains have seven circuits producing seven levels of consciousness. An eighth circuit was added in the 1973 pamphlet "Neurologic", written with Joanna Leary while he was in prison. This eighth-circuit idea was not exhaustively formulated until the publication of "Exo-Psychology" by Leary and Robert Anton Wilson's "Cosmic Trigger" in 1977. Wilson contributed to the model after befriending Leary in the early 1970s, and used it as a framework for further exposition in his book "Prometheus Rising", among other works. Leary believed that the first four of these circuits ("the Larval Circuits" or "Terrestrial Circuits") are naturally accessed by most people at transition points in life such as puberty. The second four circuits ("the Stellar Circuits" or "Extra-Terrestrial Circuits"), Leary wrote, were "evolutionary offshoots" of the first four that would be triggered at transition points as humans evolve further. These circuits, according to Leary, would equip humans to live in space and expand consciousness for further scientific and social progress. Leary suggested that some people might trigger these circuits sooner through meditation, yoga, or psychedelic drugs specific to each circuit. He suggested that the feelings of floating and uninhibited motion sometimes experienced with marijuana demonstrated the purpose of the higher four circuits. The function of the fifth circuit was to accustom humans to life at a zero gravity environment. Leary did not specify the location of the eight circuits in any brain structures, neural organization, or chemical pathways. Leary wrote that the eight circuits were given to humans by a higher intelligence "located in interstellar nuclear-gravitational-quantum structures." A "U.F.O. message" was encoded in human DNA. In the academic community, many researchers believed that Leary provided little scientific evidence for his claims. Even before he began working on psychedelics he was known as a theoretician rather than a data collector. His most ambitious pre-psychedelic work was "Interpersonal Diagnosis Of Personality". The reviewer for "The British Medical Journal" wrote that Leary created a confusing and overly broad rubric for testing psychiatric conditions. "Perhaps the worst failing of the book is the omission of any kind of proof for the validity and reliability of the diagnostic system," wrote reviewer H. J. Eysenck. "It is simply not enough to say" that the accuracy of the system "can be checked by the reader" in clinical practice. When Leary still wrote for an academic audience he co-edited "The Psychedelic Reader" in 1965. Penn State psychology researcher Jerome E. Singer reviewed the book and singled out Leary as the worst offender in a work containing "melanges of hucksterism." In place of scientific data about the effects of LSD, Leary used metaphors about "galaxies spinning" faster than the speed of light and a cerebral cortex "turned on to a much higher voltage." Leary's first run-in with the law came on December 23, 1965, when he was arrested for possession of marijuana. Leary took his two children, Jack and Susan, and his girlfriend Rosemary Woodruff to Mexico for an extended stay to write a book. On their return from Mexico to the United States, a US Customs Service official found marijuana in Susan's underwear. They had crossed into Nuevo Laredo, Mexico in the late afternoon and discovered that they would have to wait until morning for the appropriate visa for an extended stay. They decided to cross back into Texas to spend the night, and were on the US-Mexico bridge when Rosemary remembered that she had a small amount of marijuana in her possession. It was impossible to throw it out on the bridge, so Susan put it in her underwear. After taking responsibility for the controlled substance, Leary was convicted of possession under the Marihuana Tax Act of 1937 on March 11, 1966, sentenced to 30 years in prison, fined $30,000, and ordered to undergo psychiatric treatment. He appealed the case on the basis that the Marihuana Tax Act was unconstitutional, as it required a degree of self-incrimination in blatant violation of the Fifth Amendment. On December 26, 1968, Leary was arrested again in Laguna Beach, California, this time for the possession of two marijuana "roaches". Leary alleged that they were planted by the arresting officer, but was convicted of the crime. On May 19, 1969, The Supreme Court concurred with Leary in "Leary v. United States", declared the Marihuana Tax Act unconstitutional, and overturned his 1965 conviction. On that same day, Leary announced his candidacy for Governor of California against the Republican incumbent, Ronald Reagan. His campaign slogan was "Come together, join the party." On June 1, 1969, Leary joined John Lennon and Yoko Ono at their Montreal Bed-In, and Lennon subsequently wrote Leary a campaign song called "Come Together". On January 21, 1970, Leary received a 10-year sentence for his 1968 offense, with a further 10 added later while in custody for a prior arrest in 1965, for a total of 20 years to be served consecutively. On his arrival in prison, he was given psychological tests used to assign inmates to appropriate work details. Having designed some of these tests himself (including the "Leary Interpersonal Behavior Inventory"), Leary answered them in such a way that he seemed to be a very conforming, conventional person with a great interest in forestry and gardening. As a result, he was assigned to work as a gardener in a lower-security prison from which he escaped in September 1970, saying that his non-violent escape was a humorous prank and leaving a challenging note for the authorities to find after he was gone. For a fee of $25,000, paid by The Brotherhood of Eternal Love, the Weathermen smuggled Leary out of prison in a pickup truck driven by Clayton Van Lydegraf. The truck met Leary after he had escaped over the prison wall by climbing along a telephone wire. The Weathermen then helped both Leary and Rosemary out of the US (and eventually into Algeria). He sought the patronage of Eldridge Cleaver for $10,000 and the remnants of the Black Panther Party's "government in exile" in Algeria, but after a short stay with them said that Cleaver had attempted to hold him and his wife hostage. Cleaver had put Leary and his wife under "house arrest" due to exasperation with their socialite lifestyle. In 1971, the couple fled to Switzerland, where they were sheltered and effectively imprisoned by a high-living arms dealer, Michel Hauchard, who claimed he had an "obligation as a gentleman to protect philosophers"; Hauchard intended to broker a surreptitious film deal, and forced Leary to assign his future earnings (which Leary eventually won back). In 1972, President Richard Nixon's attorney general, John Mitchell, persuaded the Swiss government to imprison Leary, which it did for a month, but refused to extradite him to the United States. Leary and Rosemary separated later that year; she traveled widely, then moved back to the United States where she lived as a fugitive until the 1990s. Shortly after his separation from Rosemary in 1972, Leary became involved with Swiss-born British socialite Joanna Harcourt-Smith, a stepdaughter of financier Árpád Plesch and ex-girlfriend of Hauchard. The couple "married" in a hotel under the influence of cocaine and LSD two weeks after they were first introduced, and Harcourt-Smith would use his surname until their breakup in early 1977. They traveled to Vienna, then Beirut, and finally ended up in Kabul, Afghanistan in 1972; according to Luc Sante, "Afghanistan had no extradition treaty with the United States, but this stricture did not apply to American airliners." That interpretation of the law was used by American authorities to interdict the fugitive. "Before Leary could deplane, he was arrested by an agent of the federal Bureau of Narcotics and Dangerous Drugs." Leary asserted a different story on appeal before the California Court of Appeal for the Second District, namely: His bail was set at $5 million. The judge at his remand hearing stated, "If he is allowed to travel freely, he will speak publicly and spread his ideas," Facing a total of 95 years in prison, Leary hired criminal defense attorney Bruce Margolin. Leary mostly directed his own defense strategy, which proved to be unsuccessful, as the jury convicted him after deliberating for less than two hours. The Brotherhood drug conspiracy charges were dropped for lack of evidence, but Leary received five years for his prison escape added to his original 10-year sentence. In 1973, he was sent to Folsom Prison in California, and put in solitary confinement. While in Folsom, he was placed in a cell right next to Charles Manson, and though they could not see each other, they could talk together. In their discussions, Manson was surprised and found it difficult to understand why Leary had given people LSD without trying to control them. At one point, Manson said to Leary, "They took you off the streets so that I could continue with your work." Leary became an informant for the FBI in order to shorten his prison sentence and he entered the witness protection program upon his release in 1976. He claimed that he feigned cooperation with the FBI investigation of Weathermen by providing information that they already had or which was of little consequence. The FBI gave him the code name "Charlie Thrush". In a 1974 news conference, Allen Ginsberg, Ram Dass, and Leary's 25-year-old son Jack denounced Leary, calling Leary a "cop informant," a "liar," and a "paranoid schizophrenic." No prosecutions stemmed from his FBI reporting. In 1999, a letter from 22 'Friends of Timothy Leary' sought to soften impressions of the FBI episode. It was signed by authors such as Douglas Rushkoff, Ken Kesey, and Robert Anton Wilson. Susan Sarandon, Genesis P-Orridge and Leary's goddaughter Winona Ryder also signed. The letter said that Leary had smuggled a message to the Weather Underground informing it "that he was considering making a deal with the FBI" and he then "waited for their approval." The reported reply was, "We understand." The letter writers did not provide confirmation that the Weather Underground okayed his cooperation with the FBI. While in prison, Leary was sued by the parents of Vernon Powell Cox, who had jumped from a third story window of a Berkeley apartment while under the influence of LSD. Cox had taken the drug after attending a lecture, given by Leary, favoring LSD use. Leary was unable to be present due to his incarceration, and unable to arrange for legal representation; a default judgement was entered against him in the amount of $100,000. Leary was released from prison on April 21, 1976 by Governor Jerry Brown. He moved to Laurel Canyon, where he continued to write books and appear as a lecturer and "stand-up philosopher". In 1978, he married filmmaker Barbara Blum, also known as Barbara Chase, sister of actress Tanya Roberts. He adopted Blum's son Zachary and said he raised him as his own. He also took on several godchildren, including actress Winona Ryder (the daughter of his archivist Michael Horowitz) and MIT Media Lab director Joi Ito. Leary developed an improbable partnership with former foe G. Gordon Liddy, the Watergate burglar and conservative radio talk-show host. They toured the lecture circuit in 1982 as ex-cons debating a range of issues, including gay rights, abortion, welfare, and the environment. Leary generally espoused left-wing views, while Liddy was right-wing. The tour generated massive publicity and considerable funds for both. Leary resumed personal appearances and appeared in the documentary "Return Engagement" which chronicled the tour, and the release of the autobiography "Flashbacks". In 1988, he held a fundraiser for Libertarian presidential candidate Ron Paul. Leary's extensive touring on the lecture circuit ensured him a comfortable lifestyle by the mid-1980s. He hung with famous people including Robert Anton Wilson, science fiction writers William Gibson and Norman Spinrad, and rock musicians David Byrne and John Frusciante. In addition, he appeared in Johnny Depp's and Gibby Haynes's 1994 film "Stuff", which showed Frusciante's squalid living conditions at that time. Leary continued to take drugs frequently in private, but stayed away from proselytizing psychedelics. Instead, he preached about space colonization and extension of the human lifespan. He expounded on the eight-circuit model of consciousness in books such as "Info-Psychology: A Re-Vision of Exo-Psychology" and others. He invented the acronym "SMI²LE" as a succinct summary of his pre-transhumanist agenda: SM (Space Migration) + I² (intelligence increase) + LE (Life extension). Leary's space colonization plan evolved over the years. Initially, 5,000 of Earth's most virile and intelligent individuals would be launched on a vessel (Starseed 1) equipped with luxurious amenities. This idea was inspired by musician Paul Kantner's concept album "Blows Against The Empire", which was derived from Robert A. Heinlein's Lazarus Long series. Leary was jailed in Folsom Prison during the winter of 1975–76, and he became enamored by Gerard O'Neill's plans to construct giant Eden-like High Orbital Mini-Earths, as documented in the Robert Anton Wilson lecture "H.O.M.E.s on LaGrange", using raw materials from the moon, orbital rock, and obsolete satellites. In the 1980s, Leary became fascinated by computers, the internet, and virtual reality. He proclaimed that "the PC is the LSD of the 1990s" and admonished bohemians to "turn on, boot up, jack in". He became a promoter of virtual reality systems, and sometimes demonstrated a prototype of the Mattel Power Glove as part of his lectures (as in "From Psychedelics to Cybernetics"). He befriended a number of notable people in the field, such as Jaron Lanier and Brenda Laurel, a pioneer in virtual environments and human–computer interaction. With the rise of cyberdelic counter-culture, he served as consultant to Billy Idol in the production of the 1993 album "Cyberpunk". In 1990, his daughter Susan (age 42) was arrested in Los Angeles for shooting her boyfriend in the head as he slept. She was ruled mentally unfit to stand trial for murder on two occasions. After years of mental instability, she committed suicide in jail by hanging herself with a shoelace. Leary and Barbara divorced in 1992, and he ensconced himself in a circle of artists and cultural figures as diverse as Johnny Depp, Susan Sarandon, Dan Aykroyd, Zach Leary, author Douglas Rushkoff, and "Spin" magazine publisher Bob Guccione, Jr Despite declining health, he maintained a regular schedule of public appearances through 1994. In the same year, he was the subject of a symposium of the American Psychological Association. From 1989 on, Leary had begun to re-establish his connection to unconventional religious movements with an interest in altered states of consciousness. In 1989, he appeared with friend and book collaborator Robert Anton Wilson in a dialog entitled "The Inner Frontier" for the Association for Consciousness Exploration, a Cleveland-based group that had been responsible for his first Cleveland appearance in 1979. After that, he appeared at the Starwood Festival, a major Neo-Pagan event run by ACE in 1992 and 1993 although his planned 1994 WinterStar Symposium appearance was cancelled due to his declining health. In front of hundreds of Neo-Pagans in 1992 he declared, "I've always considered myself a Pagan." He also collaborated with Eric Gullichsen on "Load and Run High-tech Paganism: Digital Polytheism". Shortly before his death on May 31, 1996, he recorded the "Right to Fly" album with Simon Stokes which was released in July 1996. In January 1995, Leary was diagnosed with inoperable prostate cancer. He then notified Ram Dass and other old friends, and began the process of directed dying, which he termed "designer dying." Leary did not reveal the condition to the press at that time, but did so after the death of Jerry Garcia in August. Leary and Ram Dass reunited before Leary's death in May 1996, as seen in the documentary film "". Leary's last book before he died was "Chaos and Cyber Culture", published in 1994. In it he wrote, "The time has come to talk cheerfully and joke sassily about personal responsibility for managing the dying process." His book "Design for Dying", which tried to give a new perspective on death and dying, was published posthumously. Leary wrote about his belief that death is "a merging with the entire life process." His website team, led by Chris Graves, updated his website on a daily basis as a sort of proto-blog. The website noted his daily intake of various illicit and legal chemical substances with a predilection for nitrous oxide, LSD and other psychedelic drugs. He was noted for his strong views against the use of drugs which "dull the mind" such as heroin, morphine and (more than occasional) alcohol, and also for his trademark "Leary Biscuits" (a snack cracker with cheese and a small marijuana bud, briefly microwaved). At his request, his sterile house was redecorated by the staff with an array of surreal ornamentation. In his final months, thousands of visitors, well-wishers and old friends visited him in his California home. Until his last weeks, he gave many interviews discussing his new philosophy of embracing death. Leary was reportedly excited for a number of years by the possibility of freezing his body in cryonic suspension, and he publicly announced in September 1988 that he had signed up with Alcor for such treatment after having appeared at Alcor's grand opening the year before. He did not believe he would be resurrected in the future, but did believe that cryonics had important possibilities even though he thought it had only "one chance in a thousand". He called it his "duty as a futurist", and helped publicize the process and hoped it would work for his children and grandchildren if not for him, although he said he was "lighthearted" about it. He was connected with two cryonic organizations, first Alcor and then CryoCare, one of which delivered a cryonic tank to his house in the months before his death. Leary initially announced he would freeze his entire body, but due to lack of funds decided to freeze his head only. He then changed his mind again, and requested that his body be cremated, with his ashes scattered in space. Leary died at 75 on May 31, 1996. His death was videotaped for posterity at his request, by Denis Berry and Joey Cavella, capturing his final words. Berry was the trustee of Leary's archives, and Cavella had filmed Leary during his later years. According to his son Zachary, during his final moments, he clenched his fist and said, "Why?", and then unclenching his fist, he said, "Why not?". He uttered the phrase repeatedly, in different intonations, and died soon after. His last word, according to Zach, was "beautiful." The film "Timothy Leary's Dead" (1996) contains a simulated sequence in which he allows his bodily functions to be suspended for the purposes of cryonic preservation. His head is removed, and placed on ice. The film ends with a sequence showing the creation of the artificial head used in the film. Seven grams of Leary's ashes were arranged by his friend at Celestis to be buried in space aboard a rocket carrying the remains of 23 others, including Gene Roddenberry (creator of "Star Trek"), Gerard O'Neill (space physicist), and Krafft Ehricke (rocket scientist). A Pegasus rocket containing their remains was launched on April 21, 1997, and remained in orbit for six years until it burned up in the atmosphere. Leary's ashes were also given to close friends and family. In 2015, Susan Sarandon brought some of his ashes to the Burning Man festival in Black Rock City, Nevada, and put them into an art installation there. The ashes were burned, along with the installation, on September 6, 2015. Leary was married five times, and fathered two children. Timothy Leary was an early influence on game theory applied to psychology, having introduced the concept to the International Association of Applied Psychology in 1961 at their annual conference in Copenhagen. He was also an early influence on transactional analysis. His concept of the four life scripts, dating back to 1951, became an influence on transactional analysis by the late 1960s, popularised by Thomas Harris in his book, "I'm OK, You're OK". Many consider Leary one of the most prominent figures during the counterculture of the 1960s, and since those times has remained influential on pop culture, literature, television, film and, especially, music. Leary coined the influential term reality tunnel, by which he means a kind of representative realism. The theory states that, with a subconscious set of mental filters formed from their beliefs and experiences, every individual interprets the same world differently, hence "Truth is in the eye of the beholder". His ideas influenced the work of his friend Robert Anton Wilson. This influence went both ways, and Leary admittedly took just as much from Wilson. Wilson's book "Prometheus Rising" was an in-depth, highly detailed and inclusive work documenting Leary's eight-circuit model of consciousness. Although the theory originated in discussions between Leary and a Hindu holy man at Millbrook, Wilson was one of the most ardent proponents of it and introduced the theory to a mainstream audience in 1977's bestselling "Cosmic Trigger". In 1989, they appeared together on stage in a dialog entitled "The Inner Frontier" hosted by the Association for Consciousness Exploration, (the same group that had hosted Leary's first Cleveland appearance in 1979). World religion scholar Huston Smith was "turned on" by Leary after being introduced to him by Aldous Huxley in the early 1960s. The experience was interpreted as a deeply religious one by Smith, and is described in detailed religious terms in Smith's later work "Cleansing of the Doors of Perception". Smith asked Leary, to paraphrase, whether he knew the power and danger of what he was conducting research with. In "Mother Jones Magazine", 1997, Smith commented: First, I have to say that during the three years I was involved with that Harvard study, LSD was not only legal but respectable. Before Tim went on his unfortunate careening course, it was a legitimate research project. Though I did find evidence that, when recounted, the experiences of the Harvard group and those of mystics were impossible to tell apart — descriptively indistinguishable — that's not the last word. There is still a question about the truth of the disclosure. The movie "Fear and Loathing in Las Vegas" (1998), adapted from a 1971 novel of Hunter S. Thompson, portrays heavy psychedelic drug use and mentions Leary when the protagonist ponders the meaning of the acid wave of the sixties: In the movie "The Men Who Stare at Goats", Lt. Col Bill Django decides to lace the food and drinking water with LSD after claiming, "I just saw Timothy Leary". In the 1968 "Dragnet" episode "The Big Prophet", Liam Sullivan played Brother William Bentley, leader of the Temple of the Expanded Mind, a thinly disguised portrayal of Timothy Leary. Brother Bentley held forth for the entire half-hour on the rights of the individual and the benefits of LSD and marijuana while Joe Friday argued the contrary. The 1979 musical "Hair" and the stage performance it is based on make multiple references to Timothy Leary. Months before his death, Leary appeared in the Feminist Science Fiction feature film Conceiving Ada (Dir. Lynn Hershmann Leeson, 1997), which has been described as a "heady, challenging film". He plays the character of 'Sims', advisor and mentor to the protagonist Emmy Coer, a scientist who makes a breakthrough by communicating across time with the Victorian pioneer of computer programming Ada Lovelace. This ground-breaking film imagined the possibility of transferring memory and experience across bodies and across time, a topic which interested Leary. Some critics felt the film had enough interesting ideas "for several films", including the protagonist's "relationship with her mentor-played by Timothy Leary-". Leary authored and co-authored more than twenty books and was featured on more than a dozen audio recordings. His acting career included over a dozen appearances in movies and television shows in various roles, over thirty appearances as himself. He also produced and/or collaborated with others in the creation of multimedia presentations and computer games. In June 2011, "The New York Times" reported that the New York Public Library had acquired Leary's personal archives, including papers, videotapes, photographs and other archival material from the Leary estate, including correspondence and documents relating to Allen Ginsberg, Aldous Huxley, William Burroughs, Jack Kerouac, Ken Kesey, Arthur Koestler, G. Gordon Liddy and other prominent cultural figures. The collection became available in September 2013.
https://en.wikipedia.org/wiki?curid=31088
They Might Be Giants They Might Be Giants (often abbreviated as TMBG) is an American alternative rock band formed in 1982 by John Flansburgh and John Linnell. During TMBG's early years, Flansburgh and Linnell frequently performed as a duo, often accompanied by a drum machine. In the early 1990s, TMBG expanded to include a backing band. The duo's current backing band consists of Marty Beller, Dan Miller, and Danny Weinkauf. The group is known for their uniquely experimental and absurdist style of alternative music, typically utilising surreal, humorous lyrics and unconventional instruments in their songs. Over their career, they have found success on the modern rock and college radio charts. They have also found success in children's music, and in theme music for several television programs and films. TMBG have released 22 studio albums. "Flood" has been certified platinum and their children's music albums "Here Come the ABCs", "Here Come the 123s", and "Here Comes Science" have all been certified gold. The band has won two Grammy Awards. They were nominated for a Tony Award for Best Original Score (Music and/or Lyrics) Written for the Theatre for "SpongeBob SquarePants: The Broadway Musical". The band has sold over 4 million records. Linnell and Flansburgh first met as teenagers growing up in Lincoln, Massachusetts. They began writing songs together while attending Lincoln-Sudbury Regional High School but did not form a band at that time. The two attended separate colleges after high school and Linnell joined The Mundanes, a new wave group from Rhode Island. The two reunited in 1981 after moving to Brooklyn (to the same apartment building on the same day) to continue their career. At their first concert, They Might Be Giants performed under the name El Grupo De Rock and Roll (Spanish for "the Rock and Roll Band"), because the show was a Sandinista rally in Central Park, and a majority of the audience members spoke Spanish. Soon discarding this title, the band assumed the name of a 1971 film "They Might Be Giants" (starring George C. Scott and Joanne Woodward), which is in turn taken from a "Don Quixote" passage about how Quixote mistook windmills for evil giants. According to Dave Wilson, in his book "Rock Formations", the name They Might Be Giants had been used and subsequently discarded by a friend of the band who had a ventriloquism act. The name was then adopted by the band, who had been searching for a suitable name. A common misconception is that the name of the band is a reference to themselves and an allusion to future success. In an interview, John Flansburgh said that the words "they might be giants" are just a very outward-looking forward thing which they liked. He clarified this in the documentary movie "Gigantic (A Tale of Two Johns)" by explaining that the name refers to the outside world of possibilities that they saw as a fledgling band. In an earlier radio interview, John Linnell described the phrase as "something very paranoid sounding". The duo began performing their own music in and around New York City – Flansburgh on guitar, Linnell on accordion and saxophone and accompanied by a drum machine or prerecorded backing track on audio cassette. Their atypical instrumentation, along with their songs which featured unusual subject matter and clever wordplay, soon attracted a strong local following. Their performances also featured absurdly comical stage props such as oversized fezzes and large cardboard cutout heads of newspaper editor William Allen White. Many of these props would later turn up in their first music videos. From 1984–1987, They Might Be Giants were the house-band at Darinka, a Lower East Side performance club. One weekend a month they played on the stage there and by the end of their three-year stint sold out every performance. On March 30, 1985, TMBG released their 7" flexi-disc, dubbed "Wiggle Diskette" at Darinka. The disc included demos of the songs "Everything Right Is Wrong" and "You'll Miss Me". At one point, Linnell broke his wrist in a biking accident, and Flansburgh's apartment was burgled, stopping them from performing for a time. During this hiatus, they began recording their songs onto an answering machine, and then advertising the phone number in local newspapers such as "The Village Voice", using the moniker "Dial-A-Song". They also released a demo cassette, which earned them a review in "People" magazine. The review caught the attention of Bar/None Records, who signed them to a recording deal. Through the 1980s until 1998, Dial-A-Song consisted of an answering machine with a tape of the band playing various songs. The machine played one track at a time, ranging from demos and uncompleted work to mock advertisements the band had created. It was often difficult to access due to the popularity of the service and the dubious quality of the machines used. In reference to this, one of Dial-A-Song's many slogans over the years was the tongue-in-cheek "Always Busy, Often Broken". The number, (718) 387-6962, was a local Brooklyn number and was charged accordingly, but the band advertised it with the line: "Free when you call from work". At one point in 1988, the Dial-A-Song answering machine recorded a conversation between two people who had listened to Dial-A-Song, then questioned how they made money out of it. An excerpt from the conversation has been included as a hidden track on the EP for "(She Was A) Hotel Detective". In the late 90s, TMBG started switching to a digital unit to update the format for Dial-A-Song but due to frequent crashes, the band returned to the original format. In March 2000, TMBG started the website dialasong.com, which was more reliable than the original, phone-based version, as it used a Flash document to stream the songs. In 2002, Dial-A-Song's answering machine broke down, and fans responded by sending new similar models. In the following year, Dial-A-Song resumed service with a new answering machine. By 2005, a computer system from TechTV was provided to maintain the system but technical difficulties started bringing the system to an end. In 2006, Dial-A-Song became increasingly difficult to maintain as a result of unreliable answering machines that had to be replaced. The stress placed upon the answering machine in addition to its age caused excessive wear, and the machine broke down soon after. In August, Dial-A-Song ceased production and because fans started taking advantage of the internet, it was replaced with a page promoting the They Might Be Giants podcasts. John Linnell stated in an interview in early 2008 that Dial-A-Song had died of a technical crash, and that the Internet had taken over where the machine left off. On November 15, 2008, the Dial-A-Song number was officially disconnected, though the number has at times been re-used in a similar style by other independent artists. The duo released their self-titled debut album in 1986, which became a college radio hit. The video for "Don't Let's Start", filmed in the New York State Pavilion built for the 1964 New York World's Fair in Queens, became a hit on MTV in 1987, earning them a broader following. In 1988, they released their second album, "Lincoln", named after the duo's hometown. It featured the song "Ana Ng" which reached No. 11 on the US Modern Rock chart. Both albums were produced on 8-track tape at Dubway Studios in New York City. In 1989, They Might Be Giants signed with Elektra Records, and released their third album "Flood" the following year. "Flood" earned them a platinum album, largely thanks to the success of "Birdhouse in Your Soul" which reached number three on the US Modern Rock chart, as well as "Istanbul (Not Constantinople)", a cover of a song originally by The Four Lads. In 1990, "Throttle" magazine interviewed They Might Be Giants and clarified the meaning of the song "Ana Ng": John Flansburgh said, "Ng is a Vietnamese name. The song is about someone who's thinking about a person on the exact opposite side of the world. John looked at a globe and figured out that if Ana Ng is in Vietnam and the person is on the other side of the world, then it must be written by someone in Peru". Further interest in the band was generated when two cartoon music videos were created by Warner Bros. Animation for "Tiny Toon Adventures": "Istanbul" and "Particle Man". The videos reflected TMBG's high "kid appeal", resulting from their often absurd songs and poppy melodies. In 1991, Bar/None Records released the B-sides compilation "Miscellaneous T". The title referred to the section of the record store where TMBG releases were often found as well as to the overall eclectic nature of the tracks. Though consisting of previously released material (save for the "Purple Toupee" b-sides, which were not available publicly), it gave new fans a chance to hear the Johns' earlier non-album work without having to hunt down the individual EPs. In early 1992, They Might Be Giants released "Apollo 18". The heavy space theme coincided with TMBG being named Musical Ambassadors for International Space Year. Singles from the album included "The Statue Got Me High", "I Palindrome I", and "The Guitar (The Lion Sleeps Tonight)". "Apollo 18" was also notable for being one of the first albums to take advantage of the CD player's shuffle feature. The song "Fingertips" actually comprised 21 separate tracks — short snippets that not only acted together to make the song but that when played in random order would be interspersed between the album's full-length songs. Due to mastering errors, the UK and Australian versions of "Apollo 18" contained "Fingertips" as one track. Following "Apollo 18", Flansburgh and Linnell decided to move away from the guitar & accordion (or sax) plus backing tracks on tape nature of their live show, and recruited a supporting band that consisted of live musicians (Kurt Hoffman of The Ordinaires on reeds and keyboards, longtime Pere Ubu bassist Tony Maimone, and drummer Jonathan Feinberg). "John Henry" was released in 1994. Influenced by their more conventional lineup, this album marked a departure from their previous releases with more of a guitar-heavy sound. It was released to mixed reviews amongst fans and critics alike. Their next album, "Factory Showroom", was released in 1996 to little fanfare. The band had quickly moved away from the feel of "John Henry", and "Factory Showroom" returns to the more diverse sounds of their earlier albums, despite the inclusion of two guitarists, the second being Eric Schermerhorn who provided several guitar solos. They left Elektra after the duo refused to do a publicity show, amongst other exposure-related disputes. In 1998, they released a mostly-live album "Severe Tire Damage" from which came the single "Doctor Worm", a studio recording. Around this same time period, Danny Weinkauf (bass) and Dan Miller (guitar) were recruited for their recording and touring band. Both had been members of the bands Lincoln and Candy Butchers which were previous opening acts for TMBG. Weinkauf and Miller continue to work with the band to the present day. For most of their career, TMBG has made innovative use of the Internet. As early as 1992, the band was sending news updates to their fans via Usenet newsgroups. In 1999, They Might Be Giants became the first major-label recording artist to release an entire album exclusively in mp3 format. The album, "Long Tall Weekend", is sold through Emusic. Also, in 1999, the band contributed the song "Dr. Evil" to the motion picture "". Over their career, the band has performed on numerous movie and television soundtracks, including "The Oblongs", the ABC News miniseries "Brave New World" and "Ed and His Dead Mother". They also performed the theme music "Dog on Fire", composed by Bob Mould, for "The Daily Show with Jon Stewart". They composed and performed the music for the TLC series "Resident Life", the theme song for the Disney Channel program "Higglytown Heroes", and songs about the cartoons "Dexter's Laboratory" and "Courage the Cowardly Dog". During this time, the band also worked on a project for McSweeney's, a publishing company and literary journal. The band wrote a McSweeney's theme song and forty-four songs for an album that was meant to be listened to with the journal, with each track corresponding to a particular story or piece of artwork. Labeled "They Might Be Giants vs. McSweeney's", the disk appears in issue No. 6 of "Timothy McSweeney's Quarterly Concern". Contributing the single "Boss of Me" as the theme song to the hit television series "Malcolm in the Middle", as well as to the show's compilation CD, brought a new audience to the band. Not only did the band contribute the theme, songs from all of the Giants' previous albums were used on the show: for example, the infamous punching-the-kid-in-the-wheelchair scene from the first episode was done to the strains of "Pencil Rain" from "Lincoln". Another song to feature in the series was "Spiraling Shape". "Boss of Me" became the band's second top-40 hit in the UK which they performed on long-running UK television programme Top of the Pops, and in 2002, won the duo a Grammy Award. On September 11, 2001, they released the album "Mink Car" on Restless Records. It was their first full album release of new studio material since 1996 and their first since parting ways with Elektra. The making of that album, including a record signing event at a Manhattan Tower Records, was included in a documentary directed by AJ Schnack titled "Gigantic (A Tale of Two Johns)". The film was released on DVD in 2003. In 2002, they released "No!", their first album "for the entire family". Using the enhanced CD format, it included an interactive animation for most of the songs. They followed it up in 2003 with their first book, an illustrated children's book with an included EP, "Bed, Bed, Bed". In 2004, the band created one of the first artist-owned online music stores, at which customers could purchase and download MP3 copies of their music, both new releases and many previously released albums. By creating their own store, the band could keep money that would otherwise go to record companies. With the redesign of the band's website in 2010, the store was reincarnated. Also, in 2004, the band released its first new "adult" rock work since the release of "No!", the EP "Indestructible Object". This was followed by a new album, "The Spine", and an associated EP, "The Spine Surfs Alone". It was at this time that Dan Hickey was replaced by Marty Beller, who had previously collaborated with TMBG. For the album's first single, "Experimental Film", TMBG teamed up with Homestar Runner creators Matt and Mike Chapman to create an animated music video. The band's collaboration with the Brothers Chaps also included several Puppet Jam segments with puppet Homestar and the music for a Strong Bad email titled "Different Town". In 2006 they recorded a track for the 200th Strong Bad e-mail, where Linnell provided the voice of The Poopsmith. TMBG also contributed a track to the 2004 "Future Soundtrack For America" compilation, a project compiled by John Flansburgh with the help of Spike Jonze and Barsuk Records. The band contributed "Tippecanoe and Tyler Too", a political campaign song from the presidential election of 1840. The compilation was released by Barsuk and featured indie, alternative, and high-profile acts such as Death Cab for Cutie, The Flaming Lips, and Bright Eyes. All proceeds went to progressive organizations such as Music for America and MoveOn.org. Flansburgh and Linnell made a guest appearance in "", the January 11, 2004, episode of the animated sitcom "Home Movies". They voice both a pair of camp counselors and members of a strange hooded male bonding cult. On May 10, 2004, they made a guest star appearance on episode 141 of "Blue's Clues" called "Bluestock" alongside several other stars, such as Toni Braxton, Macy Gray, and India.Arie. They Might Be Giants were in a letter for Joe and Blue. Following the "Spine on the Hiway Tour" of 2004, the band announced that they would take an extended hiatus from touring to focus on other projects, such as a musical produced by Flansburgh and written by his wife, Robin "Goldie" Goldwasser, titled "People Are Wrong!". 2005 saw the release of "Here Come the ABCs", TMBG's follow-up to the successful children's album "No!". The Disney Sound label released the CD and DVD separately on February 15, 2005. To promote the album, Flansburgh and Linnell along with drummer Marty Beller embarked on a short tour, performing for free at many Borders Bookstore locations. In November 2005, "Venue Songs" was released as a two-disc CD/DVD set narrated by John Hodgman. It is a concept album based on all of the "venue songs" from their 2004 tour. TMBG covered the "Devo" song "Through Being Cool" in the 2005 "Disney" movie, "Sky High". From 2005 to 2014, They Might Be Giants made podcasts on a monthly, sometimes bi-monthly, basis. Each edition included remixes of previous songs, rarities, covers, and new songs and skits recorded specifically for the podcast. The band contributed 14 original songs for the 2006 Dunkin' Donuts ad campaign, "America Runs on Dunkin'", including "Things I Like to Do", "Pleather", and "Fritalian". In the aired advertisement, Flansburgh sings "Fritalian" along with his wife, Robin Goldwasser. In a 2008 commercial, "Moving" is played. The band has produced and performed three original songs for Playhouse Disney series: one for "Higglytown Heroes" and two for "Mickey Mouse Clubhouse". The Mickey Mouse Clubhouse features two original songs performed by group, including the opening theme song, in which a variant of a Mickey Mouse Club chant ("Meeska Mooska Mickey Mouse!") is used to summon the Clubhouse, and "Hot Dog!", the song used at the end of the show. The song references Mickey's first spoken words in the 1929 short The Karnival Kid. They also recorded a cover of the Disney song, "There's a Great Big Beautiful Tomorrow" for the movie "Meet the Robinsons" and wrote and performed the theme song for "The Drinky Crow Show". The band was recruited to provide original songs for the Henry Selick-directed movie of Neil Gaiman's children's book "Coraline" but were dropped because their music was not "creepy" enough. Only one song, titled "Other Father Song", was kept for the film with Linnell singing as the titular "Other Father". Their twelfth album, "The Else", was released July 10, 2007, on Idlewild Recordings (and distributed by Zoë Records for the CD version), with an earlier digital release on May 15 at the iTunes Store. Advance copies were made available to stations by mid-June 2007. The album was produced by Pat Dillett (David Byrne) and The Dust Brothers (Beck, Beastie Boys). On February 12, 2009, They Might Be Giants performed the song "The Mesopotamians" from the album on "Late Night with Conan O'Brien". In the rest of 2007, They Might Be Giants wrote a commissioned piece for Brooklyn-based robotic music outfit League of Electronic Musical Urban Robots and performed for three dates at the event, and covered the Pixies "Havalina" for American Laundromat Records "Dig For Fire - a tribute to PIXIES" compilation. The band's 13th album, "Here Come the 123s", a DVD/CD follow-up to 2005's critically acclaimed "Here Come the ABCs" children's project, was released on February 5, 2008. On April 10, 2008, They Might Be Giants performed the song "Seven" from the album on "Late Night with Conan O'Brien". In 2009, the album won the Grammy Award for "Best Musical Album For Children" during the 51st Annual Grammy Awards. The band's fourteenth album, "Here Comes Science", a science-themed children's album. This album introduced listeners to natural, formal, social, and applied sciences. It was released on September 1, 2009, and nominated for a Grammy Award on December 1, 2010. On November 3, They Might Be Giants sent out a newsletter stating "The Avatars of They", a set of sock puppets the Johns manipulate for shows, will have an album in 2012, suggesting another children's album. However, a new adult album titled "Join Us" was released on July 19, 2011. On October 3, 2011, Artix Entertainment announced that the band would be performing in-game for a special musical event to commemorate the 3rd birthday of their popular MMORPG AdventureQuest Worlds. They were featured in AdventureQuest World's special third birthday event as John and John. On March 5, 2013, the band released their sixteenth adult studio album, "Nanobots", on their Idlewild Recordings label in the US and on British indie label Lojinx in Europe. The live album "Flood Live in Australia" was made available for free digital download by the band in 2015. Also in 2015, the band reactivated its Dial-A-Song service under the banner of Dial-A-Song-Direct, promising to release one new song every week for the entire year, beginning with the track "Erase" on January 5. Several of these songs were planned to be collected on a new studio rock album entitled "Glean" on April 21, 2015. The band released their newest children's album, "Why?", on November 27, 2015. It was their fifth children's album and the first children's album to be released under their own label, Idlewild Recordings. In a video released on December 20, 2015, John Flansburgh announced that the band would be taking a temporary break following their 2016 U.S. tour. Dial-A-Song was revived in 2015, with a new phone number ((844) 387-6962), the website, and a radio network. In late 2017, the band announced via Twitter that Dial-A-Song would return again, in a modified format, starting in January 2018. On March 8, 2016, the band released "Phone Power", their nineteenth studio album and the third containing songs from the 2015 revival of their Dial-a-Song service. This was the first TMBG album to be sold as a "pay what you want" download, available ahead of the physical release on June 10. The band's twentieth album, "I Like Fun" was released on January 19, 2018. Their twenty-first and twenty-second studio albums, "My Murdered Remains" and "The Escape Team", were both released on December 10, 2018. "My Murdered Remains" contains songs from the 2015 and 2018 iterations of Dial-A-Song. They made a song "I'm Not a Loser" for the SpongeBob SquarePants musical in 2016. In October 2019, the band recorded a new version of their song "Hot Dog" to promote the upcoming third season of the Disney Channel preschool series Mickey and the Roadster Racers, re-titled as "Mickey Mouse: Mixed Up Adventures" for that season. It premiered on Disney Junior on October 14, 2019. Throughout their career, They Might Be Giants have released 22 studio albums, 10 compilations, 10 live albums, 8 EPs, 7 videos and 11 singles. The band has released 25 main music videos for songs from their rock albums. All of their children's albums have also included video content or run alongside DVD releases. The band also has videos for each of the Dial-A-Song tracks from 2015 and 2018 on their main YouTube channel, ParticleMen. In 1999, They Might Be Giants released "Direct from Brooklyn", a VHS compilation of their music videos from 1986 up to that point. It was reissued on DVD in 2003. The following music videos were included:
https://en.wikipedia.org/wiki?curid=31089
Titanite Titanite, or sphene (from the Greek "sphenos" (σφηνώ), meaning wedge), is a calcium titanium nesosilicate mineral, CaTiSiO5. Trace impurities of iron and aluminium are typically present. Also commonly present are rare earth metals including cerium and yttrium; calcium may be partly replaced by thorium. The International Mineralogical Association Commission on New Minerals and Mineral Names (CNMMN) adopted the name titanite and "discredited" the name sphene as of 1982, although commonly papers and books initially identify the mineral using both names. Sphene was the most commonly used name until the IMA decision, although both were well known. Some authorities think it is less confusing as the word is used to describe any chemical or crystal with oxidized titanium such as the rare earth titanate pyrochlores series and many of the minerals with the perovskite structure. The name sphene continues to be publishable in peer-reviewed scientific literature, e.g. a paper by Hayden et al. was published in early 2008 in the journal Contributions to Mineralogy and Petrology. Sphene persists as the informal name for titanite gemstones. Titanite, which is named for its titanium content, occurs as translucent to transparent, reddish brown, gray, yellow, green, or red monoclinic crystals. These crystals are typically sphenoid in habit and are often twinned. Possessing a subadamantine tending to slightly resinous luster, titanite has a hardness of 5.5 and a weak cleavage. Its specific gravity varies between 3.52 and 3.54. Titanite's refractive index is 1.885–1.990 to 1.915–2.050 with a strong birefringence of 0.105 to 0.135 (biaxial positive); under the microscope this leads to a distinctive high relief which combined with the common yellow-brown colour and lozenge-shape cross-section makes the mineral easy to identify. Transparent specimens are noted for their strong trichroism, the three colours presented being dependent on body colour. Owing to the quenching effect of iron, sphene exhibits no fluorescence under ultraviolet light. Some titanite has been found to be metamict, in consequence of structural damage due to radioactive decomposition of the often significant thorium content. When viewed in thin section with a petrographic microscope, pleochroic halos can be observed in minerals surrounding a titanite crystal. Titanite occurs as a common accessory mineral in intermediate and felsic igneous rocks and associated pegmatites. It also occurs in metamorphic rocks such as gneiss and schists and skarns. Source localities include: Pakistan; Italy; Russia; China; Brazil; Tujetsch, St. Gothard, Switzerland; Madagascar; Tyrol, Austria; Renfrew County, Ontario, Canada; Sanford, Maine, Gouverneur, Diana, Rossie, Fine, Pitcairn, Brewster, New York and California in the US. Titanite is a source of titanium dioxide, TiO2, used in pigments. As a gemstone, titanite is usually some shade of chartreuse, but can be brown or black. Hue depends on Fe content, with low Fe content causing green and yellow colours, and high Fe content causing brown or black hues. Zoning is typical in titanite. It is prized for its exceptional dispersive power (0.051, B to G interval) which exceeds that of diamond. Jewelry use of titanite is limited, both because the stone is uncommon in gem quality and is relatively soft. Titanite can also be used as a U-Pb geochronometer, specifically in metamorphic terranes.
https://en.wikipedia.org/wiki?curid=31091
Time management Time management is the process of planning and exercising conscious control of time spent on specific activities, especially to increase effectiveness, efficiency, and productivity. It involves a juggling act of various demands upon a person relating to work, social life, family, hobbies, personal interests and commitments with the finiteness of time. Using time effectively gives the person "choice" on spending/managing activities at their own time and expediency. Time management may be aided by a range of skills, tools, and techniques used to manage time when accomplishing specific tasks, projects, and goals complying with a due date. Initially, time management referred to just business or work activities, but eventually the term broadened to include personal activities as well. A time management system is a designed combination of processes, tools, techniques, and methods. Time management is usually a necessity in any project development as it determines the project completion time and scope. It is also important to understand that both technical and structural differences in time management exist due to variations in cultural concepts of time. The major themes arising from the literature on time management include the following: Time management is related to different concepts such as: Organizational time management is the science of identifying, valuing and reducing time cost wastage within organizations. It identifies, reports and financially values sustainable time, wasted time and effective time within an organization and develops the business case to convert wasted time into productive time through the funding of products, services, projects or initiatives at a positive return on investment. Differences in the way a culture views time can affect the way their time is managed. For example, a "linear time" view is a way of conceiving time as flowing from one moment to the next in a linear fashion. This linear perception of time is predominant in America along with most Northern European countries such as, Germany, Switzerland, and England. People in these cultures tend to place a large value on productive time management, and tend to avoid decisions or actions that would result in wasted time. This linear view of time correlates to these cultures being more “monochronic”, or preferring to do only one thing at a time. Generally speaking, this cultural view leads to a better focus on accomplishing a singular task and hence, more productive time management. Another cultural time view is "multi-active time" view. In multi-active cultures, most people feel that the more activities or tasks being done at once the happier they are. Multi-active cultures are “polychronic” or prefer to do multiple tasks at once. This multi-active time view is prominent in most Southern European countries such as Spain, Portugal, and Italy. In these cultures, the people often tend to spend time on things they deem to be more important such as placing a high importance on finishing social conversations. In business environments, they often pay little attention to how long meetings last, rather, the focus is on having high quality meetings. In general, the cultural focus tends to be on synergy and creativity over efficiency. A final cultural time view is a "cyclical time" view. In cyclical cultures, time is considered neither linear nor event related. Because days, months, years, seasons, and events happen in regular repetitive occurrences, time is viewed as cyclical. In this view, time is not seen as wasted because it will always come back later, hence, there is an unlimited amount of it. This cyclical time view is prevalent throughout most countries in Asia including Japan, China, and Tibet. It is more important in cultures with cyclical concepts of time to complete tasks correctly, therefore, most people will spend more time thinking about decisions and the impact they will have before acting on their plans. Most people in cyclical cultures tend to understand that other cultures have different perspectives of time and are cognizant of this when acting on a global stage. Some time-management literature stresses tasks related to the creation of an environment conducive to "real" effectiveness. These strategies include principles such as: In addition, the timing of tackling tasks is important as tasks requiring high levels of concentration and mental energy are often done in the beginning of the day when a person is more refreshed. Literature also focuses on overcoming chronic psychological issues such as procrastination. Excessive and chronic inability to manage time effectively may result from Attention deficit hyperactivity disorder (ADHD) or attention deficit disorder (ADD). Diagnostic criteria include a sense of underachievement, difficulty getting organized, trouble getting started, trouble managing many simultaneous projects, and trouble with follow-through. Some authors focus on the prefrontal cortex which is the most recently evolved part of the brain. It controls the functions of attention span, impulse control, organization, learning from experience and self-monitoring, among others. Some authors argue that changing the way the prefrontal cortex works is possible and offer a solution. Time management strategies are often associated with the recommendation to set personal goals. The literature stresses themes such as: These goals are recorded and may be broken down into a project, an action plan, or a simple task list. For individual tasks or for goals, an importance rating may be established, deadlines may be set, and priorities assigned. This process results in a plan with a task list, schedule, or calendar of activities. Authors may recommend a daily, weekly, monthly or other planning periods associated with different scope of planning or review. This is done in various ways, as follows. A technique that has been used in business management for a long time is the categorization of large data into groups. These groups are often marked A, B, and C—hence the name. Activities are ranked by these general criteria: Each group is then rank-ordered by priority. To further refine the prioritization, some individuals choose to then force-rank all "B" items as either "A" or "C". ABC analysis can incorporate more than three groups. ABC analysis is frequently combined with Pareto analysis. "See also:" Pareto analysis The Pareto Principle is the idea that 80% of tasks can be completed in 20% of the given time, and the remaining 20% of tasks will take up 80% of the time. This principle is used to sort tasks into two parts. According to this form of Pareto analysis it is recommended that tasks that fall into the first category be assigned a higher priority. The 80-20-rule can also be applied to increase productivity: it is assumed that 80% of the productivity can be achieved by doing 20% of the tasks. Similarly, 80% of results can be attributed to 20% of activity. If productivity is the aim of time management, then these tasks should be prioritized higher. The "Eisenhower Method" stems from a quote attributed to Dwight D. Eisenhower: "I have two kinds of problems, the urgent and the important. The urgent are not important, and the important are never urgent." Note that Eisenhower does not claim this insight for his own, but attributes it to an (unnamed) "former college president." Using the Eisenhower Decision Principle, tasks are evaluated using the criteria important/unimportant and urgent/not urgent, and then placed in according quadrants in an Eisenhower Matrix (also known as an "Eisenhower Box" or "Eisenhower Decision Matrix"). Tasks are then handled as follows: Tasks in This method is inspired by the above quote from U.S. President Dwight D. Eisenhower. Note, however, that Eisenhower seems to say that things are never both important and urgent, or neither: So he has two kinds of problems, the urgent and the important. POSEC is an acronym for "Prioritize by Organizing, Streamlining, Economizing and Contributing". The method dictates a template which emphasizes an average individual's immediate sense of emotional and monetary security. It suggests that by attending to one's personal responsibilities first, an individual is better positioned to shoulder collective responsibilities. Inherent in the acronym is a hierarchy of self-realization, which mirrors Abraham Maslow's hierarchy of needs: Time management also covers how to eliminate tasks that do not provide value to the individual or organization. According to Sandberg, task lists "aren't the key to productivity [that] they're cracked up to be". He reports an estimated "30% of listers spend more time managing their lists than [they do] completing what's on them". Hendrickson asserts that rigid adherence to task lists can create a "tyranny of the to-do list" that forces one to "waste time on unimportant activities". Any form of stress is considered to be debilitative for learning and life, even if adaptability could be acquired its effects are damaging. But stress is an unavoidable part of daily life and Reinhold Niebuhr suggests to face it, as if having "the serenity to accept the things one cannot change and having the courage to change the things one can." Part of setting priorities and goals is the emotion "worry," and its function is to ignore the present to fixate on a future that never arrives, which leads to the fruitless expense of one's time and energy. It is an unnecessary cost or a false aspect that can interfere with plans due to human factors. The Eisenhower Method is a strategy used to compete worry and dull-imperative tasks. Worry as stress, is a reaction to a set of environmental factors; understanding this is not a part of the person gives the person possibilities to manage them. Athletes under a coach call this management as "putting the game face." Change is hard and daily life patterns are the most deeply ingrained habits of all. To eliminate non-priorities in study time it is suggested to divide the tasks, capture the moments, review task handling method, postpone unimportant tasks (understood by its current relevancy and sense of urgency reflects wants of the person rather than importance), control life balance (rest, sleep, leisure), and cheat leisure and non productive time (hearing audio taping of lectures, going through presentations of lectures when in queue, etc.). Certain unnecessary factors that affect time management are habits, lack of task definition (lack of clarity), over-protectiveness of the work, guilt of not meeting objectives and subsequent avoidance of present tasks, defining tasks with higher expectations than their worth (over-qualifying), focusing on matters that have an apparent positive outlook without assessing their importance to personal needs, tasks that require support and time, sectional interests and conflicts, etc. A habituated systematic process becomes a device that the person can use with ownership for effective time management. A task list (also called a to-do list or "things-to-do") is a list of tasks to be completed, such as chores or steps toward completing a project. It is an inventory tool which serves as an alternative or supplement to memory. Task lists are used in self-management, business management, project management, and software development. It may involve more than one list. When one of the items on a task list is accomplished, the task is checked or crossed off. The traditional method is to write these on a piece of paper with a pen or pencil, usually on a note pad or clip-board. Task lists can also have the form of paper or software checklists. Writer Julie Morgenstern suggests "do's and don'ts" of time management that include: Numerous digital equivalents are now available, including personal information management (PIM) applications and most PDAs. There are also several web-based task list applications, many of which are free. Task lists are often diarised and tiered. The simplest tiered system includes a general to-do list (or task-holding file) to record all the tasks the person needs to accomplish, and a daily to-do list which is created each day by transferring tasks from the general to-do list. An alternative is to create a "not-to-do list", to avoid unnecessary tasks. Task lists are often prioritized: Various writers have stressed potential difficulties with to-do lists such as the following: Many companies use time tracking software to track an employee's working time, billable hours etc., e.g. law practice management software. Many software products for time management support multiple users. They allow the person to give tasks to other users and use the software for communication. Task list applications may be thought of as lightweight personal information manager or project management software. Modern task list applications may have built-in task hierarchy (tasks are composed of subtasks which again may contain subtasks), may support multiple methods of filtering and ordering the list of tasks, and may allow one to associate arbitrarily long notes for each task. In contrast to the concept of allowing the person to use multiple filtering methods, at least one software product additionally contains a mode where the software will attempt to dynamically determine the best tasks for any given moment. Time management systems often include a time clock or web-based application used to track an employee's work hours. Time management systems give employers insights into their workforce, allowing them to see, plan and manage employees' time. Doing so allows employers to control labor costs and increase productivity. A time management system automates processes, which eliminates paper work and tedious tasks. Getting Things Done was created by David Allen. The basic idea behind this method is to finish all the small tasks immediately and a big task is to be divided into smaller tasks to start completing now. The reasoning behind this is to avoid the information overload or "brain freeze" which is likely to occur when there are hundreds of tasks. The thrust of GTD is to encourage the user to get their tasks and ideas out and on paper and organized as quickly as possible so they're easy to manage and see. Francesco Cirillo's "Pomodoro Technique" was originally conceived in the late 1980s and gradually refined until it was later defined in 1992. The technique is the namesake of a pomodoro (Italian for tomato) shaped kitchen timer initially used by Cirillo during his time at university. The "Pomodoro" is described as the fundamental metric of time within the technique and is traditionally defined as being 30 minutes long, consisting of 25 minutes of work and 5 minutes of break time. Cirillo also recommends a longer break of 15 to 30 minutes after every four Pomodoros. Through experimentation involving various work groups and mentoring activities, Cirillo determined the "ideal Pomodoro" to be 20–35 minutes long. Book: Systems: Psychology/neuroscience Psychiatry
https://en.wikipedia.org/wiki?curid=31092
Turing Award The ACM A.M. Turing Award is an annual prize given by the Association for Computing Machinery (ACM) to an individual selected for contributions "of lasting and major technical importance to the computer field". The Turing Award is generally recognized as the highest distinction in computer science. The award is named after Alan Turing, who was a British mathematician and reader in mathematics at the University of Manchester. Turing is often credited as being the key founder of theoretical computer science and artificial intelligence. From 2007 to 2013, the award was accompanied by an additional prize of US$250,000, with financial support provided by Intel and Google. Since 2014, the award has been accompanied by a prize of US$1 million, with financial support provided by Google. The first recipient, in 1966, was Alan Perlis, of Carnegie Mellon University. The first female recipient was Frances E. Allen of IBM in 2006.
https://en.wikipedia.org/wiki?curid=31093
The Damned (band) The Damned are an English rock band formed in London, England in 1976 by lead vocalist Dave Vanian, guitarist Brian James, bassist (and later guitarist) Captain Sensible, and drummer Rat Scabies. They were the first punk rock band from the United Kingdom to release a single, "New Rose" (1976), release an album, "Damned Damned Damned" (1977), and tour the United States. They have nine singles that charted on the UK Singles Chart Top 40. The band briefly broke up after "Music for Pleasure" (1977), the follow-up to their debut album, was critically dismissed. They quickly reformed without Brian James, and released "Machine Gun Etiquette" (1979). In the 1980s they released four studio albums, "The Black Album" (1980), "Strawberries" (1982), "Phantasmagoria" (1985), and "Anything" (1986), which saw the band moving towards a gothic rock style. The latter two albums did not feature Captain Sensible, who had left the band in 1984. In 1988, James and Sensible rejoined to play a series of reunion gigs. One of which was released the next year as the live album "Final Damnation". The Damned again reformed for a tour in 1991. In 1995, they released a new album, "Not of This Earth", which was Scabies's last with the band. This was followed by "Grave Disorder" (2001), "So, Who's Paranoid?" (2008), and their most recent album, and the first to crack the United Kingdom's Official Charts' top 10 list, landing at #7, "Evil Spirits" (2018). Despite going through numerous lineup changes, the formation of Vanian, Sensible, keyboardist Monty Oxymoron, drummer Pinch and bassist Stu West had been together from 2004 until 2017, when West left the band and former bassist Paul Gray rejoined. As one of the first gothic rock bands, The Damned were a major influence on the goth subculture with lead singer Vanian's vampire-themed costume, baritone singing voice and dark lyrics being major influences. They also influenced future hardcore punk bands with their fast-paced, energetic playing style and attitude. Dave Vanian (David Lett), Captain Sensible (Raymond Burns) and Rat Scabies (Chris Millar) had been members of the band Masters of the Backside, which also included future Pretenders frontwoman Chrissie Hynde. Brian James (Brian Robertson) had been a member of London SS, who never played live, but in addition to James included musicians who later found fame in The Clash and Generation X. Scabies knew James through a failed audition as drummer for London SS. When the two decided to start their own band, with James on guitar and Scabies on drums, they invited Sid Vicious and Dave Vanian to audition to be the singer. Only Vanian showed up, and got the part. Sensible became the band's bassist, and the four called themselves The Damned. Chrissie Hynde would later write that "Without me, they were probably the most musically accomplished punk outfit in town". The Damned played their first show on 6 July 1976, supporting the Sex Pistols at the 100 Club. A lo-fi recording of the show was later released as "Live at the 100 Club". As part of London's burgeoning punk scene, The Damned again played the club on 20 September, for the 100 Club Punk Festival. On 22 October, five weeks before the release of the Sex Pistols' "Anarchy in the U.K.", Stiff Records put out The Damned's first single, "New Rose", thus making them the first UK punk band to release a single. The single's B-side was a fast paced cover of The Beatles' "Help!". "New Rose" was described by critic Ned Raggett as a "deathless anthem of nuclear-strength romantic angst". When the Sex Pistols released their single, they took The Damned, along with The Clash and Johnny Thunders & The Heartbreakers, as openers for their December "Anarchy Tour of the UK". Many of the tour dates were cancelled by organizers or local authorities, with only seven of approximately twenty scheduled shows taking place. The Damned were kicked off the tour before it ended by Sex Pistols manager Malcolm McLaren. The Damned released their first album, "Damned Damned Damned", on 18 February 1977. Produced by Nick Lowe, it was the first full-length album released by a British punk band, and included a new single, "Neat Neat Neat". The band went on tour to promote the album, in March opening for T. Rex on their final tour. Later that spring, they became the first British punk band to tour the United States. According to Brendan Mullen, founder of the Los Angeles club The Masque, their first tour of the U.S. found them favouring very fast tempos, helping to inspire the first wave of west coast hardcore punk. That August, Lu Edmonds was added as a second guitarist. This expanded line-up unsuccessfully tried to recruit the reclusive Syd Barrett to produce their second album. Unable to get Barrett, they settled for his former Pink Floyd bandmate, Nick Mason. In December, this album was released as "Music For Pleasure", and was quickly dismissed by critics. Its failure led to the band being dropped from Stiff Records. Scabies was also displeased with the album, and quit the band after the recording. He was replaced by future Culture Club drummer Jon Moss, who played with The Damned until they decided to break up in February 1978. The former members of the band worked on a series of brief side projects and solo recordings, all making little commercial impact. Scabies formed a one-off band called "Les Punks" for a late 1978 gig: Les Punks was a quasi-reunion of The Damned (without Brian James or Lu) that featured Scabies, Vanian, Sensible and bassist Lemmy of Hawkwind and Motörhead. The Damned tentatively reformed with the "Les Punks" line-up in early 1979, but originally performed as "The Doomed" to avoid potential trademark problems. Captain Sensible switched to guitar and keyboards, and after a brief period with Lemmy on bass for studio demos and a handful of live appearances, and a slightly longer period with Henry Badowski on bass, the bassist position was filled by Algy Ward, formerly of The Saints. During a December 1978 tour of Scotland, Gary Holton filled in for Vanian. The band officially went by The Damned again, playing their first gig under that name in April 1979, and signing a deal with Chiswick Records. They went back to the studio and released the charting singles, "Love Song" and "Smash It Up", followed by 1979's "Machine Gun Etiquette", and then a cover of Jefferson Airplane's "White Rabbit". Vanian's vocals had by now expanded from the high-baritone of the early records to a smoother crooning style. "Machine Gun Etiquette" featured a strong 1960s garage rock influence, with Farfisa organ in several songs. Recording at Wessex Studios at the same time as The Clash were there to record "London Calling", Joe Strummer and Mick Jones made an uncredited vocal appearance on the title track. Fans and critics were pleasantly surprised, and "Machine Gun Etiquette" received largely positive reviews; Ira Robbins and Jay Pattyn described it as "a great record by a band many had already counted out". Ward left the group in 1980, to be replaced by Paul Gray, formerly of Eddie and the Hot Rods. "The Black Album" was released later that year, produced by the band themselves apart from one track produced by Hans Zimmer, with three sides of the double album consisting of studio tracks, including the theatrical 17-minute song "Curtain Call". Side 4 featured a selection of live tracks recorded at Shepperton. It was their last album for Chiswick. In 1981, The Damned released "Friday 13th", a four song E.P. on which featured original tracks "Disco Man", "Billy Bad Breaks", "Limit Club", and a cover of The Rolling Stones song "Citadel". In 1982, The Damned released their only album for Bronze Records, "Strawberries". The band had now expanded to a quintet, with the addition of new full-time keyboardist Roman Jugg. At this time, Sensible was splitting his time between The Damned and his own solo career, which had seen success in the UK with the number one hit "Happy Talk" in 1982. Consequently, the group's next album was a one-off side project recorded without the unavailable Sensible: a soundtrack to an imaginary 1960s movie called "Give Daddy the Knife, Cindy". This limited-run album of 1960s cover songs had the band billed as Naz Nomad and the Nightmares. In 1984, The Damned made a live performance on the BBC Television show "The Young Ones" performing their song "Nasty", featuring new bassist Bryn Merrick (replacing Gray) and both Jugg and Sensible on guitar. Sensible played a last concert with the band at Brockwell Park, before leaving to pursue his solo career. From the beginnings of the band, Vanian had adopted a vampire-like appearance onstage, with chalk-white makeup and formal dress. With Sensible gone, Vanian's image became more characteristic of the band as a whole. The Damned signed a contract with major label MCA, and the "Phantasmagoria" album followed in July 1985, preceded by the UK No. 21 single "Grimly Fiendish". Other hits from the same album were "The Shadow of Love" with its gloomy gothic sound, and the lighter "Is It A Dream?". In January 1986, the non-album single "Eloise", a cover of a 1968 hit by Barry Ryan, was a No. 3 chart success in the UK, the group's highest chart placing to date. However, "Phantasmagoria"'s November 1986 follow-up, "Anything", was a commercial failure, although MCA did include one of its tracks ("In Dulce Decorum") on the soundtrack release of "Miami Vice II". The cover of Love's "Alone Again Or" was also released as a single. Late in 1987 The Damned began to work on a new album for MCA, but the result of these sessions remain unreleased as the record contract was dissolved. Two of the new songs ("Gunning for Love" and "The Loveless and The Damned") were later re-recorded by the Dave Vanian and the Phantom Chords side project. James and Sensible rejoined the group temporarily for a few live appearances including a concert at the London Town and Country Club in June 1988 which was released the following year as "Final Damnation – The Damned Reunion Concert". Following a farewell concert at London's Brixton Academy supported by The Milk Monitors, Horse, and Claytown Troupe, the band disbanded again. During this era, the only comprehensive book about The Damned was released: "The Book of The Damned, The Light At the End of the Tunnel, The Official Biography", by Carol Clerk (Omnibus Press, 1987, ). Although officially on hiatus, the group issued two singles in 1990. The first, "Fun Factory", was a song recorded in 1982 by the Sensible/Vanian/Scabies/Gray line-up; intended for single release at the time, the bankruptcy of their record company prevented the issue of the record for 9 years. The year's second single, "Prokofiev", was recorded by Scabies, Vanian and Brian James, and was sold on a 1991 reunion tour of the US. In 1993 the group reformed again with a new line-up featuring Scabies, Vanian, guitarists Kris Dollimore (formerly of The Godfathers), Alan Lee Shaw, and bassist Moose Harris (formerly of New Model Army). Around this time, two prominent modern rock groups each covered a Damned song: Guns N' Roses recorded "New Rose" for their ""The Spaghetti Incident?"" (1993), while The Offspring covered "Smash It Up" for the "Batman Forever" soundtrack (1995). Both cover versions enjoyed major label distribution and created more exposure to the Damned sound, sometimes to a younger audience unfamiliar with the group. The reformed Damned toured regularly for about two years and released a new full-length album, "Not of This Earth" in late 1995. Promoted with a series of long tours prior to its release, by the time the album was released The Damned had yet again split, partly as the result of legal battles: Vanian and Sensible accused Scabies of releasing "Not of This Earth" without proper authorization. Sensible rejoined Vanian in 1996 and yet another formation of The Damned appeared. This initially featured bassist Paul Gray, who was later replaced by Patricia Morrison, previously of Bags, The Gun Club and The Sisters of Mercy. By 2000, The Damned consisted of Vanian, Sensible, Morrison and new recruits Monty Oxymoron on keyboards and Andy (Pinch) Pinching, a founding member of English Dogs, on drums. Garrie Dreadful, another recruit from Sensible's solo band, played drums from 1997 to 1999 then gave way to Pinch. In 2001, the band released the album Grave Disorder, on Dexter Holland's Nitro Records label and promoted it with continual touring. A spring tour of the United States was planned in 2002 supporting Rob Zombie. However, the band dropped off after a few shows with Captain Sensible saying, "gothic punk was completely lost on the predominantly metal crowds". In the summer they played the Vans Warped Tour in the US. Morrison and Vanian eventually married and had a daughter, Emily, born on 9 February 2004. Around this time, Morrison 'retired' from performing with the band, though she remained involved with The Damned as the band's manager. Her replacement on bass was Stu West. In 2006, The Damned released the single "Little Miss Disaster", and a live DVD "MGE25" documenting a 2004 Manchester concert celebrating the 25th anniversary of "Machine Gun Etiquette". On 21 October 2006, BBC Radio 2 broadcast an hour-long documentary titled "Is She Really Going Out With Him?" concerning the recording of the Damned's first single "New Rose" and the group's place in the 1976 London punk scene. Featuring interviews with James, Sensible, Scabies, Glen Matlock, Don Letts and Chrissie Hynde, the programme discussed the bands and personalities around the scene, particularly the Anarchy in the U.K. tour. On 28 October 2008, The Damned released for download their tenth studio album, "So, Who's Paranoid?", followed by a conventional release on the English Channel label on 10 November (UK) and 9 December (US). To promote the album, the band made back-to-back appearances performing on the CBS network TV broadcasts in the US on Halloween eve and Halloween on "The Late Late Show with Craig Ferguson". The band undertook a 23-date UK tour to promote their new album, supported by Devilish Presley and Slicks Kitchen. The band then played a set and conducted a short interview on the Cherry Blossom Clinic on WFMU on 16 May 2009. In November 2009, the band supported heavy metal band Motörhead on the UK leg of their world tour. Continual touring occurred throughout the UK and Europe over the next few years. In 2012, they played South America for the first time, with dates in São Paulo (Brazil) and Buenos Aires (Argentina). They returned to the Rhythm Festival, one of only four headline acts to return over the festival's seven-year history. In 2012, The Damned announced that they would return for 2013's Rebellion festival alongside The Exploited, The Casualties and others. On 7 November 2014 Captain Sensible and Dave Vanian appeared on Ken Reid's "TV Guidance Counselor" Podcast. In 2015, The Damned were featured in a documentary by director Wes Orshoski called "". The documentary charts the history of the band against a backdrop of interviews and tour footage from 2011 to 2014, and was edited together rough to make the film feel more like The Damned's first album. After the release of the film, on 12 September 2015, former bassist Bryn Merrick died of throat cancer. Merrick had played on "Phantasmagoria" and "Anything". At the time of his death he had been playing in a Ramones tribute band, the Shamones. In May 2016 the band played a 40th anniversary show at the Royal Albert Hall. In the summer of 2017 "Neat Neat Neat" was prominently featured in the movie "Baby Driver" and its soundtrack. On 11 September 2017 the band announced that Stu West was leaving the band and former bassist Paul Gray who had played on 2 Warfare songs in 2016 for Evo would be returning for the new album. "Evil Spirits", the band's eleventh album and first in ten years, was released on 13 April 2018. It peaked at #7 on the UK Album chart, their highest ever chart position, topping their previous high of #11 in 1985 ("Phantasmagoria"). The album was recorded in November 2017 in New York City and produced by Tony Visconti who is best known for his work with David Bowie. To get the album made, it was largely crowdfunded through Pledge Music. The album was preceded by the first single, "Standing On the Edge of Tomorrow" in January 2018 along with the singles "Devil in Disguise" and "Look Left" in March 2018 and "Procrastination" in April 2018. Starting on 23 May 2019 The Damned will be on tour performing their third studio album, Machine Gun Etiquette, which they haven’t played in full since its release in 1979. The tour will include such venues as the House of Blues, Punk Rock Bowling and Music Festival and Rebellion Festival. On 25 June 2019, "The New York Times Magazine" listed The Damned (band) among hundreds of artists whose material was reportedly destroyed in the 2008 Universal fire. On 25 October 2019 Pinch announced that he would be departing the band after 20 years. His last gig was The Damned's show at the London Palladium on 27 October 2019. Current members
https://en.wikipedia.org/wiki?curid=31098
Tupolev Tu-144 The Tupolev Tu-144 (; NATO reporting name: Charger) is a Soviet supersonic passenger airliner designed by Tupolev in operation from 1968 to 1999. The Tu-144 was the world's first commercial supersonic transport aircraft with its prototype's maiden flight from Zhukovsky Airport on 31 December 1968, three months before the British-French Concorde. The Tu-144 was a product of the Tupolev Design Bureau, an OKB headed by aeronautics pioneer Alexei Tupolev, and 16 aircraft were manufactured by the Voronezh Aircraft Production Association in Voronezh. The Tu-144 conducted 102 commercial flights, of which only 55 carried passengers, at an average service altitude of and cruised at a speed of around (Mach 1.6). The Tu-144 first went supersonic on 5 June 1969, four months before Concorde, and on 26 May 1970 became the world's first commercial transport to exceed Mach 2. The Tu-144 suffered from reliability and developmental issues, and with the 1973 Paris Air Show crash, restricted the viability for regular use. The Tu-144 was introduced into passenger service with Aeroflot between Moscow and Almaty on 26 December 1975, but withdrawn less than three years later after a second Tu-144 crashed and retired on 1 June 1978. The Tu-144 remained in commercial service as a cargo aircraft until cancellation of the Tu-144 program in 1983. The Tu-144 was later used by the Soviet space program to train pilots of the Buran spacecraft, and by NASA for supersonic research until 1999. The Tu-144 made its final flight on 26 June 1999 and surviving aircraft were put on display across the world or into storage. The Soviet government published the concept of the Tu-144 in an article in the January 1962 issue of the magazine "Technology of Air Transport". The air ministry started development of the Tu-144 on 26 July 1963, 10 days after the design was approved by the Council of Ministers. The plan called for five flying prototypes to be built in four years, with the first aircraft to be ready in 1966. The MiG-21I (1968; Izdeliye 21–11; "Analog") I = Imitator ("Simulator") was a testbed for the wing design of the Tu-144. Despite the similarity in appearance of the Tu-144 to the Anglo-French supersonic aircraft, there were significant differences between two aircraft. The Tu-144 is bigger and faster than the Concorde (M2.15 vs. M2.04). Concorde used an electronic engine control package from Lucas, which Tupolev was not permitted to purchase for the Tu-144 as it could also be used on military aircraft. Concorde's designers used fuel as coolant for the cabin air conditioning and for the hydraulic system (see Concorde for details). Tupolev also used fuel/hydraulic heat exchangers but used cooling turbines for the cabin air. The Tu-144 prototype was a full-scale demonstrator aircraft with the very different production aircraft being developed in parallel. While both Concorde and the Tu-144 prototype had ogival delta wings, the Tu-144's wing lacked Concorde's conical camber. Production Tu-144s replaced this wing with a double delta wing including spanwise and chordwise camber. They also added two small retractable surfaces called a moustache canard, with fixed double-slotted leading-edge slats and retractable double-slotted flaps. These were fitted just behind the cockpit and increased lift at low speeds. Moving the elevons downward in a delta-wing aircraft increases the lift (force), but also pitches its nose downward. The canards cancel out this nose-downwards moment, thus reducing the landing speed of the production Tu-144s to , still faster than that of Concorde. The NASA study lists final approach speeds during Tu-144LL test flights as . An FAA circular lists Tu-144S approach speed as , as opposed to Concorde's approach speed of , based on the characteristics declared by the manufacturers to Western regulatory bodies. It is open to argument how stable the Tu-144S was at the listed airspeed. In any event, when NASA subcontracted Tupolev bureau in the 1990s to convert one of the remaining Tu-144D to a Tu-144LL standard, the procedure set by Tupolev for landing defined the Tu-144LL "final approach speed... on the order of 360 km/h depending on fuel weight." Brian Calvert, Concorde's technical flight manager and its first commercial pilot in command for several inaugural flights, cites final approach speed of a typical Concorde landing to be . The lower landing speed compared to Tu-144 is due to Concorde's more refined design of the wing profile that provides higher lift at low speeds without degrading supersonic cruise performance – a feature often mentioned in Western publications on Concorde and acknowledged by Tupolev designers as well. Along with early Tu-134s, the Tu-144 was one of the last commercial aircraft with a braking parachute. The prototypes were also the only passenger jets ever fitted with ejection seats, albeit only for the crew and not the passengers. SSTs for M2.2 had been designed in the Soviet Union before Tupolev was tasked with developing one. Design studies for the Myasischev SST had shown that a cruise specific fuel consumption (SFC) of not more than 1.2 kg/kgp hr would be required. The only engine available in time with the required thrust and suitable for testing and perfecting the aircraft was the afterburning Kuznetsov NK-144 turbofan with a cruise SFC of 1.58 kg/kgp hr. Development of an alternative engine to meet the SFC requirement, a non-afterburning turbojet, the Kolesov RD-36-51A, began in 1964. It took a long time for this engine to achieve acceptable SFC and reliability. In the meantime the NK-144 high SFC gave a limited range of about , far less than Concorde. A maximum speed of (Mach 2.29) was reached with the afterburner. Afterburners were added to Concorde to meet its take-off thrust requirement and were not necessary for supersonic cruise; the Tu-144 used maximum afterburner for take-off and minimum for cruise. The Tu-144S, of which nine were produced, was fitted with the Kuznetsov NK-144A turbofan to address lack of take-off thrust and surge margin. SFC at M2.0 was 1.81 kg/kgp hr. A further improvement, the NK-144V, achieved the required SFC, but too late to influence the decision to use the Kolesov RD-36-51. The Tu-144D, of which five were produced (plus one uncompleted), was powered by the Kolesov RD-36-51 turbojet with an SFC of 1.22 kg/kgp hr. The range with full payload increased to 5,330 km compared to 6,470 km for Concorde. Plans for an aircraft with a range in excess of range were never implemented. The engine intakes had variable ramps and bypass flaps with positions controlled automatically to suit the engine airflow. They were very long to help prevent surging; twice as long as those on Concorde. Jean Rech (Sud Aviation) states the need for excessive length was based on the misconception that length was required to attenuate inlet distortion. The intakes were to be shortened by 10 feet on the projected Tu-144M. The Kolesov RD-36-51 had an unusual variable con-di nozzle for the nozzle pressure ratios at supersonic speeds. Without an afterburner there was no variable nozzle already available. A translating plug nozzle was used. Sixteen airworthy Tu-144 airplanes were built: Although its last commercial passenger flight was in 1978, production of the Tu-144 did not cease until 1983, when construction of the airframe was stopped and left partially complete. The last production aircraft, Tu-144D number 77116, was not completed and was left derelict for many years on Voronezh East airfield. There was at least one ground test airframe for static testing in parallel with the development of prototype 68001. The Tu-144S went into service on 26 December 1975, flying mail and freight between Moscow and Alma-Ata in preparation for passenger services, which commenced on 1 November 1977. The type certificate was issued by the USSR Gosaviaregister on 29 October 1977. The passenger service ran a semi-scheduled service until the first Tu-144D experienced an in-flight failure during a pre-delivery test flight, crash-landing on 23 May 1978 with two crew fatalities. The Tu-144's 55th and last scheduled passenger flight occurred on 1 June 1978. An Aeroflot freight-only service recommenced using the new production variant "Tu-144D" ("D" for "Dal'nyaya" – "long range") aircraft on 23 June 1979, including longer routes from Moscow to Khabarovsk made possible by the more efficient Kolesov RD-36-51 turbojet engines, which also increased the maximum cruising speed to Mach 2.15. There were only 103 scheduled flights before the Tu-144 was removed from commercial service. The Tu-144 programme was cancelled by a Soviet government decree on 1 July 1983 that also provided for future use of the remaining Tu-144 aircraft as airborne laboratories. In 1985, Tu-144D were used to train pilots for the Soviet Buran space shuttle. In 1986–1988 Tu-144D No. 77114, built in 1981, was used for medical and biological research of high-altitude atmosphere radiological conditions. Further research was planned but not completed, due to lack of funding. In the early 1990s, a wealthy businesswoman, Judith DePaul, and her company IBP Aerospace negotiated an agreement with Tupolev, NASA, Rockwell and later Boeing. They offered a Tu-144 as a testbed for its High Speed Commercial Research program, intended to design a second-generation supersonic jetliner called the High Speed Civil Transport. In 1995, Tu-144D No. 77114 (with only 82.5 hours of flight time) was taken out of storage and after extensive modification at a cost of US$350million, designated the "Tu-144LL" (where LL is a Russian abbreviation for Flying Laboratory, ). The aircraft made 27 flights in Russia during 1996 and 1997. Though regarded as a technical success, the project was cancelled for lack of funding in 1999. This aircraft was reportedly sold in June 2001 for $11M via an on-line auction, but the aircraft sale did not proceed. Tejavia Systems, the company handling the transaction, reported in September 2003 that the deal was not signed as the replacement Kuznetsov NK-321 engines from a Tupolev Tu-160 bomber were military hardware and the Russian government would not allow them to be exported. In 2003, after the retirement of Concorde, there was renewed interest from several wealthy individuals who wanted to use the Tu-144LL for a transatlantic record attempt, despite the high cost of a flight readiness overhaul even if military authorities would authorize the use of NK-321 engines outside Russian Federation airspace. The last two aircraft remain in Gromov Flight Research Institute in Zhukovsky, Nos. 77114 (the Tu-144LL) and 77115. In March 2006, it was reported that both aircraft would be preserved, with one erected on a pedestal near Zhukovsky City Council or above the Gromov Flight Research Institute entrance from Tupolev avenue. Early flights in scheduled service indicated the Tu-144S was extremely unreliable. During 102 flights and 181 hours of freight and passenger flight time, the Tu-144S suffered more than 226 failures, 80 of them in flight. (The list was included in the Tu-144 service record provided by the USSR to British Aircraft Corporation-Aérospatiale in late 1978, when requesting Western technological aid with the Tu-144, and probably incomplete.) Eighty of these failures were serious enough to cancel or delay the flight. After the inaugural flight, two subsequent flights, during the next two weeks, were cancelled and the third flight rescheduled. The official reason given by Aeroflot for cancellation was bad weather at Alma-Ata; however when the journalist called the Aeroflot office in Alma-Ata about local weather, the office said that the weather there was perfect and one aircraft had already arrived that morning. Failures included decompression of the cabin in flight on 27 December 1977, and engine-exhaust duct overheating causing the flight to be aborted and returned to the takeoff airport on 14 March 1978. Alexei Tupolev, Tu-144 chief designer, and two USSR vice-ministers (of aviation industry and of civil aviation) had to be personally present in Domodedovo airport before each scheduled Tu-144 departure to review the condition of the aircraft and make a joint decision on whether it could be released into flight. Subsequently, flight cancellations became less common, as several Tu-144s were docked at Moscow's Domodedovo International Airport. Tu-144 pilot Aleksandr Larin remembers a troublesome flight around 25 January 1978. The flight with passengers suffered the failure of 22 to 24 onboard systems. Seven to eight systems failed before takeoff, but given the large number of foreign TV and radio journalists and also other foreign notables aboard the flight, it was decided to proceed with the flight to avoid the embarrassment of cancellation. After takeoff, failures continued to multiply. While the aircraft was supersonic en route to the destination airport, Tupolev bureau's crisis centre predicted that the front and left landing gear would not extend and that the aircraft would have to land on the right gear alone, at a landing speed of over . Due to expected political fallout, Soviet leader Leonid Brezhnev was personally notified of what was going on in the air. With the accumulated failures, an alarm siren went off immediately after takeoff, with sound and volume similar to that of a civil defence warning. The crew could not figure a way to switch it off so the siren stayed on throughout the remaining 75 minutes of the flight. Eventually, the captain ordered the navigator to borrow a pillow from the passengers and stuff it inside the siren's horn. After all the suspense, all landing gear was extended and the aircraft was able to land. The final passenger flight of Tu-144 on around 30 May 1978 involved valve failure on one of the fuel tanks. Only one commercial route, Moscow to Alma-Ata (present-day Almaty), was ever used and flights were limited to one a week, despite there being eight Tu-144S certified aircraft available and a number of other routes suitable for supersonic flights, suggesting that the Soviet decision-makers had little confidence in the Tu-144 when passenger service began in 1977. Considering the high rate of technical failures their reasoning was sound. Bookings were limited to 70–80 passengers or fewer a flight, falling well below both the Tu-144's seating capacity and the demand for seats. On its 55 scheduled flights, Tu-144s transported 3,194 passengers, an average of 58 passengers per flight. With officials acutely aware of the aircraft's poor reliability and fearful of possible crashes, Soviet decision-makers deliberately limited flight frequency to as few as would allow them to claim to be offering a regular service, and they also limited passenger load to minimize the impact and political fallout of a possible crash. A serious problem was discovered when two Tu-144S airframes suffered structural failures during laboratory testing just prior to the Tu-144 entering passenger service. Details are included in a chapter in Fridlyander's memoirs and mentioned by Bliznyuk et al. The problem, discovered in 1976, may have been known prior to this testing; a large crack was discovered in the airframe of the prototype Tu-144 (aircraft 68001) during a stopover in Warsaw following its appearance at the 1971 Paris Air Show. The aircraft was assembled from parts machined from large blocks and panels, many over long and wide. While at the time, this approach was heralded as an advanced feature of the design, it turned out that large whole-moulded and machined parts contained defects in the alloy's structure that caused cracking at stress levels below that which the part was supposed to withstand. Once a crack started to develop, it spread quickly for many metres, with no crack-arresting design feature to stop it. In 1976, during repeat-load and static testing at TsAGI (Russia's "Central Aerohydrodynamic Institute"), a Tu-144S airframe cracked at 70% of expected flight stress with cracks running many metres in both directions from their origin. Later the same year, a test airframe was subjected to a test simulating the temperatures and pressures during a flight. The Tu-144 was placed in a hyperbaric chamber and heated to . Contraction and expansion happened because of the cooling during ascent and descent, heating during supersonic acceleration and cruise and because of the pressure change from high altitude (low outside pressure causing the airframe to expand) to ground-level pressure (causing it to contract). The airframe cracked in a similar way to that during TsAGI load testing. While fatigue cracks of an acceptable length are normal in aircraft, they are usually found during routine inspections or stopped at a crack-arresting feature. Aircraft fly with acceptable cracks until they are repaired. The Tu-144 design was the opposite of standard practice, allowing a higher incidence of defects in the alloy structure, leading to crack formation and propagation to many metres. The Soviet leadership made a political decision to enter the Tu-144 into passenger service in November 1977 despite receiving testing reports indicating that the Tu-144 airframe was unsafe and not airworthy for regular service. Aeroflot appears to have thought so little of the aircraft that it did not mention it in its five-year plan for 1976–1980. However, it was not the airline executives' decision and Aeroflot reluctantly put the Tu-144 into passenger service on 1 November 1977. Though the decision to cancel the Tu-144S passenger service came a few days after the Tu-144D crashed during the test flight on 23 May 1978, this crash was regarded as the last straw over mounting concerns about the reliability of the Tu-144. Even the fact that the technical reason for the crash was specific to the Tu-144D fuel pump system and did not apply to the Tu-144S did not help. The decision to pull the Tu-144S out of passenger service after merely 55 flights is thus more likely to be attributable to high incidence of failures during and before the scheduled flights. A problem for passengers was the very high level of noise inside the cabin, measured on average at least 90–95 dB. The noise came from the engine; unlike Concorde, it could only sustain supersonic speeds using afterburners, like military aircraft. In addition, the unique active heat insulation system used for the air conditioning, which used a flow of spent cabin air, was described as excessively noisy. Passengers seated next to each other could have a conversation only with difficulty, and those seated two seats apart could not hear each other even when screaming and had to pass hand-written notes instead. Noise in the back of the aircraft was unbearable. Alexei Tupolev acknowledged the problem to foreign passengers and promised to fix it, but never had the means to do so. There were unprecedented Soviet requests for Western technological aid with the development of the Tu-144. The request was made despite obviously not helping to foster Soviet technological prestige, which was one of the key purposes of the Tu-144 programme. In 1977, the USSR approached Lucas Industries, a designer of the engine control system for Concorde, requesting help with the design of the electronic management system of the Tu-144 engines, and also asked BAC-Aérospatiale for assistance in improving the Tu-144 air intakes. (The design of air intakes' variable geometry and their control system was one of the most intricate features of Concorde, contributing to its fuel efficiency. Over half of the wind-tunnel time during Concorde development was spent on the design of air intakes and their control system.) In late 1978, the USSR requested a wide range of Concorde technologies, evidently reflecting the broad spectrum of unresolved Tu-144 technical issues. The list included de-icing equipment for the leading edge of the air intakes, fuel-system pipes and devices to improve durability of these pipes, drain valves for fuel tanks, fireproof paints, navigation and piloting equipment, systems and techniques for acoustical loading of airframe and controls (to test against acoustic fatigue caused by high jet-noise environment), ways to reinforce the airframe to withstand damage, firefighting equipment, including warning devices and lightning protection, emergency power supply, and landing gear spray guards (a.k.a. water deflectors or "mud flaps" that increase engine efficiency when taking off from wet airstrips). These requests were denied after the British government vetoed them on the ground that the same technologies, if transferred, could be also employed in Soviet bombers. Soviet approaches were also reported in British mainstream press of the time, such as "The Times". On 31 August 1980, Tu-144D (77113) suffered an uncontained compressor disc failure in supersonic flight which damaged part of the airframe structure and systems. The crew was able to perform an emergency landing at Engels-2 strategic bomber base. On 12 November 1981, a Tu-144D's RD-36-51 engine was destroyed during bench tests, leading to a temporary suspension of all Tu-144D flights. One of the Tu-144Ds (77114, a.k.a. aircraft 101) suffered a crack across the bottom panel of its wing. Finally, the higher oil prices of the 1970s were starting to catch up with the Soviet Union. Much later than in the West, but since the late 1970s, commercial efficiency was starting to become a factor in aviation development decision-making even in the USSR. The Tu-144 disappeared from Aeroflot published prospects, replaced by the Ilyushin Il-86, a jumbo jet that was to become the Soviet flagship airliner. In the late 1970s, Soviet insiders were intensely hopeful in conversations with Western counterparts of reintroducing Tu-144 passenger service for the 1980 Moscow Olympic games, even perhaps for flights to Western Europe, given the aircraft's high visibility, but apparently the technical condition of the aircraft weighed against such re-introduction even for token flights. As discussed in Howard Moon's book "Soviet SST" (1989), economic efficiency alone would not have doomed the Tu-144 altogether; continuation of token flights for reasons of political prestige would have been possible, if only the aircraft itself would have allowed for it, but it did not. The Tu-144 was to a large extent intended to be and trumpeted as a symbol of Soviet technological prestige and superiority. The decision to cease Tu-144D production was issued on 7 January 1982, followed by a USSR government decree dated 1 July 1983 to cease the whole Tu-144 programme and to use produced Tu-144 aircraft as flying laboratories. In retrospect, it is apparent that the Tu-144 suffered from a rush in the design process to the detriment of thoroughness and quality, and this rush to get airborne exacted a heavy penalty later. The rush is apparent even in outward timing: the 1963 government decree launching the Tu-144 programme defined that the Tu-144 should fly in 1968; it first flew on "the last day" of 1968 (31 December) to fulfill government goals set five years earlier. (By the way of comparison, Concorde's first flight was originally scheduled for February 1968, but was pushed back several times until March 1969 in order to iron out problems and test components more thoroughly). Unlike Concorde development, the Tu-144 project was also strongly driven by ideologically and politically motivated haste of Soviet self-imposed racing against Concorde; Aleksei Poukhov, one of Tupolev's designers, reminiscences: "For the Soviet Union to allow the West to get ahead and leave it behind at that time was quite unthinkable. We not only had to prevent the West from getting ahead, but had to compete and leapfrog them, if necessary. This was the task Khrushchev set us... We knew that when Concorde's maiden flight had been set for February or March, 1969, we would have to get our aircraft up and flying by the end of 1968." The introduction of the Tu-144 into passenger service was timed to the 60th anniversary of the Communist revolution, as was duly noted in Soviet officials' speeches delivered at the airport before the inaugural flight – whether the aircraft was actually ready for passenger service was deemed of secondary importance. Even the outward details of the inaugural Tu-144 flight betrayed the haste of its introduction into service: several ceiling panels were ajar, service trays stuck, window shades dropped without being pulled, reading lights did not work, not all toilets worked and a broken ramp delayed departure half an hour. On arrival to Alma-Ata, the Tu-144 was towed back and forth for 25 minutes before it could be aligned with the exit ramp. Equally telling is the number of hours spent on flight testing. Whereas Concorde had been subjected to 5,000 hours of testing by the time it was certified for passenger flight, making it the most tested aircraft ever, total flight testing time of the Tu-144 by the time of its introduction into passenger service was fewer than 800 hours. Flight testing time logged on the prototype (68001) was 180 hours; flight testing time for the Tu-144S until the completion of state acceptance tests was 408 hours; service tests until the commencement of passenger service were 96 hours of flight time; altogether totalling 756 hours. It is unclear why the Minister of Aviation Industry and the Minister of Civil Aviation did not endorse the protocols of state acceptance tests for four months after the tests completion. One reason could be the change of the guard – Minister Dementiev, who was one of the chief backers of Tu-144, died a day before the tests completed – but it might also had something to do with aircraft reliability record uncovered during the tests that was no better than the subsequent dismal service record. Fridlyander points out that in addition to the Tu-144, Tupolev's bureau had to work on other projects, including the Tu-154 passenger aircraft and the Tu-22M bomber. Despite large and high-priority resource investment in the Tu-144 development programme and the fact that a large part of the whole Soviet R&D infrastructure was subordinated to the Tu-144 project, parallel project development overwhelmed the bureau causing it to lose focus and make design errors. (Design errors affected not only the Tu-144, but the Tu-154 as well). The first batch of 120 Tu-154s suffered from wing destruction due to excessive structural load and had to be withdrawn. The rushed introduction to service of poorly tested aircraft happened previously with another Tupolev project that had high political visibility and prestige: the Tu-104 passenger jet-liner was the first successful Soviet passenger jet in service. In a decision-making similar to the Tu-144-story, the Soviet government introduced the Tu-104 into passenger service before satisfactory stability and controllability had been achieved. During high-altitude and high-speed flight the aircraft was prone to longitudinal instability, and also at high altitudes, it had a narrow range of angle of attack separating the aircraft from stalls known as coffin corner. These problems created the preconditions for spin dives, that happened twice before the Tu-104 was eventually properly tested and the problem was resolved. This politically motivated rush, along with the fact that the project was essentially ideologically motivated rather than driven by intrinsic needs of the Soviet society, and with general technological insufficiencies of Soviet industrial base, contributed to the final undoing of the Tu-144 project. (Alexander Poukhov, one of the Tu-144 design engineers who subsequently rose to be one of the bureau's senior designers, estimated in 1998 that the Tu-144 project was 10–15 years beyond the USSR's capabilities at that time). Moon suggests that subordination of available Soviet R&D resource allocation to the Tu-144 programme significantly slowed down the development of other Soviet aircraft projects, such as the Il-86 wide-body jet, and stagnated Soviet aviation development for almost a decade. After ceasing the Tu-144 programme, Tu-144D No. 77114 (aircraft 101 or 08-2) carried out test flights between the 13–20 July 1983 to establish 13 world records registered with the Fédération Aéronautique Internationale (FAI). These records established an altitude of with a range of loads up to 30 tonnes, and a sustained speed of over a closed circuit of up to with similar loads. To put the numbers in perspective, Concorde's service ceiling under a typical Transatlantic flight payload of 10 tonnes is , and this is higher than the record set by the Tu-144D. According to unverified sources, during a 26 March 1974 test flight a Concorde reached its maximum speed ever of (Mach 2.23) at an altitude of , and during subsequent test flights reached maximum altitude of . It is unclear why Tu-144D's maximum achievable altitude would be lower than Concorde's even regular flight altitude, given that Tupolev's data claim better lift-to-drag ratio for the Tu-144 (over 8.0 for Tu-144D vs Concorde's 7.3–7.7 at Mach 2.x) and the thrust of the Tu-144D's RD-36-51 engines is higher than Concorde's Olympus 593 engines. Concorde was originally designed for cruising speeds up to Mach 2.2, but its regular service speed was limited to Mach 2.02 to reduce fuel consumption, extend airframe life and provide a higher safety margin. One of Tupolev's web site pages states that "TU-144 and TU-160 aircraft operation has demonstrated expediency of limitation of cruise supersonic speed of M=2.0 to provide structure service life and to limit cruising altitude". The aircraft was designed for a 30,000-hour service life over 15 years. Airframe heating and the high temperature properties of the primary structural materials, which were aluminum alloys, set the maximum speed at Mach 2.2. 15% by weight was titanium and 23% non-metallic materials. Titanium or stainless steel were used for the leading edges, elevons, rudder and the rear fuselage engine-exhaust heat shield. A project study, assigned the number Tu-144DA, increased the wing area and the take-off weight, and replaced the engines with the RD-36-61 which had 5% more thrust. The Tu-144DA increased fuel capacity from 98,000 kg to 125,000 kg with a higher maximum certified take-off weight (MCTOW) of 235,000 kg. and range up to 7,500 km. Early configurations of the Tu-144 were based on the unbuilt Tupolev Tu-135 bomber, retaining the latter aircraft's canard layout, wings and nacelles. Deriving from the Tu-135 bomber, early Tupolev's design for supersonic passenger airplane was code-named Tu-135P before acquiring the Tu-144 project code. Over the course of the Tu-144 project, the Tupolev bureau created designs of a number of military versions of Tu-144 but none were ever built. In the early 1970s, Tupolev was developing the Tu-144R intended to carry and air-launch up to three solid-fueled ICBMs. The launch was to be performed from within Soviet air space, with the aircraft accelerating to its maximum speed before releasing the missiles. The original design was based on the Tu-144S, but later changed to be derived from the Tu-144D. Another version of the design was to carry air-launched long-range cruise missiles similar to the Kh-55. The study of this version envisioned the use of liquid hydrogen for the afterburners. In the late 1970s. Tupolev contemplated the development of a long-range heavy interceptor (DP-2) based on the Tu-144D also able to escort bombers on long-range missions. Later this project evolved into an aircraft for electronic countermeasures (ECM) to suppress enemy radars and facilitate bomber's penetration through enemy air defenses (Tu-144PP). In the early 1980s this functionality was supplanted with theatre and strategic reconnaissance (Tu-144PR). The dimmer civil prospects for Tu-144 were becoming apparent the more Tupolev tried to "sell" the aircraft to the military. One of the last attempts to sell a military version of the Tu-144 was the Tu-144MR, a project for a long-range reconnaissance aircraft for the Soviet Navy intended to provide targeting information to the Navy's ships and submarines on sea and oceanic theaters of operations. Another proposed navy version was to have a strike capability (two Kh-45 air-to-surface cruise missiles), along with a reconnaissance function. The Tu-144MR was also to have served as a carrier aircraft for the Tupolev Voron reconnaissance drone, designed to compete with the Lockheed D-21 and influenced by it, but the project never materialised. The military was unreceptive to Tupolev's approaches. Vasily Reshetnikov, the commander of Soviet strategic aviation and subsequently, a vice-commander of the Soviet Air Force, remembers how, in 1972, he was dismayed by Tupolev's attempts to offer for military use the aircraft that "fell short of its performance target, was beset by reliability problems, fuel-thirsty and difficult to operate". Reshetnikov goes on to remember: The development and construction of the supersonic airliner, the future Tu-144, was included in the five-year plan and was under the auspices of the influential D.F. Ustinov (then Soviet minister of defence and confidant of Brezhnev, who represented interests of defence industries lobby in opposition to the military) who regarded this mission as a personal responsibility – not so much to his country and people as to "dear Leonid Il'ych" (Brezhnev) whom he literally worshipped, sometimes to the point of shamelessness... Yet the supersonic passenger jet was apparently not making headway and, to the dismay of its curator, it looked as though Brezhnev might be disappointed. It was then that Dmitry Fedorovich (Ustinov) jumped at someone's idea to foist Aeroflot's "bride in search of a wedding" on the military. After it had been rejected in bomber guise, Ustinov used the Military Industrial Commission (one of the most influential Soviet government bodies) to promote the aircraft to the Strategic Aviation as a reconnaissance or ECM platform, or both. It was clear to me that these aircraft could not possibly work in concert with any bomber or missile carrier formation; likewise I could not imagine them operating solo as "Flying Dutchmen" in a war scenario, therefore I resolutely turned down the offer. Naval Aviation Commander Aleksandr Alekseyevich Mironenko, followed suit. Ustinov could not be put off that easily. He managed to persuade the Navy C-in-C (admiral) S.G. Gorshkov who agreed to accept the Tu-144 for Naval Aviation service as a long-range reconnaissance aircraft without consulting anyone on the matter. Mironenko rebelled against this decision, but the commander-in-chief would not hear of heed – the issue is decided, period. On learning of this I was extremely alarmed: if Mironenko had been pressured into taking the Tu-144, this meant I was going to be next. I made a phone call to Aleksandr Alekseyevich, urging him to take radical measures; I needn't have called because even without my urging Mironenko was giving his C-in-C a hard time. Finally Ustinov got wind of the mutiny and summoned Mironenko to his office. They had a long and heated discussion but eventually Mironenko succeeded in proving that Ustinov's ideas were unfounded. That was the last time we heard of Tu-144. While several Tu-144s were donated to museums in Moscow Monino, Samara and Ulyanovsk, at least two Tu-144D remained in open storage in Moscow Zhukovsky. As of June 2010, two aircraft (tail numbers 77114 and 77115) are located outdoors at LII aircraft testing facility, Zhukovsky (at coordinates and ). Previously, they were constantly on display at MAKS Airshows. Tail number 77115 was bought in 2005 by the Heros Club of Zhukovsky and still on display at MAKS as of 2019. In 2019, tail number 77114 was repainted in Aeroflot livery and put on display outside of Zhukovsky International Airport. Tu-144S, tail number 77106, is on display at Central Air Force Museum of Russia in Monino. Maiden flight was on 4 March 1975, the final one on 29 February 1980. The aircraft was used to assess the effectiveness of the air-conditioning systems and to solve some problems on the fuel system. It can be considered the first production aircraft, being the first to be equipped for commercial use and delivered to Aeroflot. The first operational flight was on 26 December 1975 between Moscow and Alma-Ata carrying cargo and mail. This aircraft was the first SST to land on a dirty runway when she was retired to Monino. Another Tu-144, tail number 77107, is on open display in Kazan and located at . The aircraft was constructed in 1975 and was a production model intended for passenger use. However, it was only used during test flights. On 29 March 1976 it made its last flight to Kazan. This aircraft was put on sale on eBay in 2017. TU-144S, tail number 77108, is on display in the museum of Samara State Aerospace University (). It made its maiden flight on 12 December 1975, and its final flight on 27 August 1987. Development works on navigation system were made in this aircraft as well as flight-director approach. TU-144S, tail number 77110, is on display at the Museum of Civil Aviation in Ulyanovsk. Maiden flight occurred on 14 February 1977, the final Flight on 1 June 1984. This aircraft was the second of the two aircraft used for regular passengers' flights on Moscow – Alma-Ata route. In 1977 it flew to Paris to take part in the XXXII Paris Air Show at Le Bourget Airport. This was the last appearance of a Tu-144 in West Europe. CCCP-77110 was the last aircraft produced of the model Tu-144S, powered with Kuznetsov NK-144A engines. In the first half of 2008 the cabin was open for visits and between August and September was restored and painted in the original Aeroflot livery. The only Tu-144 on display outside the former Soviet Union, tail number 77112, was acquired by the Auto & Technikmuseum Sinsheim in Germany, where it was shipped – not flown – in 2001 and where it now stands, in its original Aeroflot livery, on display next to an Air France Concorde. As of 2017, the Technikmuseum Sinsheim remains the only museum in the world where the Tu-144 and Concorde are on display together. At the Paris Air Show on 3 June 1973, the development program of the Tu-144 suffered severely when the first "Tu-144S" production airliner (reg 77102) crashed. At the end of the officially-approved demonstration flight, which was an exact repeat of the previous day's display, instead of landing as expected the aircraft entered a very steep climb before making a violent downwards manoeuvre. As it tried to recover the aircraft broke apart and crashed, destroying 15 houses and killing all six people on board the Tu-144 and eight more on the ground. Gordon et al. state that the flight crew had departed from the approved flight profile for the display, a serious offense in itself. They were under instructions to outperform the Concorde display by all means. During the unapproved, and therefore unrehearsed manoeuvres, the stability and control augmentation system was not operating normally. If it had been it would have prevented the loads that caused the port wing to fail. A popular Russian theory for the crash was that the Tu-144 tried to avoid a French Mirage chase-plane that was attempting to photograph its canards, which were very advanced for the time, and the French and Soviet governments colluded with each other, to cover up such details. The flight of the Mirage was denied in the original French report of the incident, perhaps because it was engaged in industrial espionage. More recent reports have admitted the existence of the Mirage (and the fact that the Russian crew was not told about the Mirage's flight) though not its role in the crash. The official press release did state: "though the inquiry established that there was no real risk of collision between the two aircraft, the Soviet pilot was likely to have been surprised." Another theory relates to deliberate misinformation on the part of the Anglo-French design-team. The main point of this theory being the Anglo-French team knew the Soviet team was planning to steal the design plans of Concorde, and the Soviets were allegedly passed ersatz (substituted) blueprints with a flawed design. The case, it is claimed, contributed to the imprisonment by the Soviets of Greville Wynne in 1963 for spying. Wynne was imprisoned on 11 May 1963 and the development of the Tu-144 was not sanctioned until 16 July 1963. On 23 May 1978, the Tupolev 144 supersonic passenger jet was to make a test flight before delivery to Aeroflot. At an altitude of 3000m, a fire started at the APU located in the right delta-shaped wing. A turn was made to return to the airport and both engines located in the right wing (engines no. 3 and 4) were shut down and the aircraft began to lose height. Fire trailed the aircraft and the cockpit filled with smoke. Then one of the remaining two engines failed. The crew managed to belly land the aircraft in a field near Yegoryevsk, six minutes from the moment of the beginning of the fire. On impact the nose cone collapsed under the fuselage, penetrating the compartment in which two flight engineers were seated. It appeared that 27 minutes prior to the ignition, a fuel line ruptured, causing eight tons of fuel to leak, entering several compartments of the right wing. The fuel readings were judged incorrect by the flight engineers and were thus not reported to the commander.
https://en.wikipedia.org/wiki?curid=31102
Turing (programming language) Turing is a Pascal-like programming language developed in 1982 by Ric Holt and James Cordy, then of University of Toronto, in Toronto, Ontario, Canada. Turing is a descendant of Euclid, Pascal and SP/k that features a clean syntax and precise machine-independent semantics. Turing 4.1.0 is the latest stable version of Turing. Turing 4.1.1 and Turing 4.1.2 do not allow for stand alone .EXE files to be created and versions before Turing 4.1.0 have outdated syntax and outdated functions. Named after British computer scientist Alan Turing, Turing is used primarily as a teaching language at the high school and university level. Two other versions exist, Object-Oriented Turing and Turing Plus, a systems programming variant. In September 2001, "Object Oriented Turing" was renamed "Turing" and the original Turing was renamed "Classic Turing". Turing is no longer supported by Holt Software Associates in Toronto, Ontario. Currently, Microsoft Windows is the only supported platform. Turing is widely used in high schools in Ontario as an introduction to programming. On November 28, 2007, Turing, which was previously a commercial programming language, became freeware, available to download from the developer's website free of charge for personal, commercial, and educational use. The makers of Turing, Holt Software Associates, have since ceased operations, and Turing has seen no further development since November 25, 2007. Turing is designed to have a very lightweight, readable, intuitive syntax. Here is the entire Hello World! program in Turing with syntax highlighting: Turing avoids semicolons and braces, using explicit end markers for most language constructs instead, and allows declarations anywhere. Here is a complete program defining and using the traditional recursive function to calculate a factorial. Currently, there are two open source alternative implementations of Turing: Open Turing, an open source version of the original interpreter, and TPlus, a native compiler for the concurrent systems programming language variant Turing Plus. OpenT, a project to develop a compiler for Turing, is no longer in development. Open Turing is an open-source implementation of the original Turing interpreter for Windows written by Tristan Hume. It includes speed improvements, new features such as OpenGL 3D and a new code editor. It is fully backwards compatible with the closed-source implementation. TPlus is an open-source implementation of original (non-Object-Oriented) Turing with systems programming extensions developed at the University of Toronto and ported to Linux, Solaris and Mac OS X at Queen's University in the late 1990s. TPlus implements Turing+ (Turing Plus), a concurrent systems programming language based on the original Turing programming language. Some, but not all, of the features of Turing Plus were eventually subsumed into the present Object-Oriented Turing language. Turing Plus extends original Turing with processes and monitors (as specified by C.A.R. Hoare) as well as language constructs needed for systems programming such as binary input-output, separate compilation, variables at absolute addresses, type converters and other features. OpenT is an abandoned open-source language, compiler, and IDE that was being developed by the members of the dTeam of Computer Science Canada. It shares many similarities with Turing, and is fully backwards compatible with it. As an addition to the usual graphics drawing functions, Turing features special functions for drawing maple leaves to allow easier drawing of the Canadian flag. Turing+ (Turing Plus) is a concurrent systems programming language based on the Turing programming language designed by James Cordy and Ric Holt, then at the University of Toronto, Canada, in 1987. Some, but not all, of the features of Turing+ were eventually subsumed into Object-Oriented Turing. Turing+ extended original Turing with processes and monitors (as specified by C.A.R. Hoare) as well as language constructs needed for systems programming such as binary input-output, separate compilation, variables at absolute addresses, type converters and other features. Turing+ was explicitly designed to replace Concurrent Euclid in systems-programming applications. The TUNIS operating system, originally written in Concurrent Euclid, was recoded to Turing+ in its MiniTunis implementation. Turing+ has been used to implement several production software systems, including the TXL programming language. Object-Oriented Turing is an extension of the Turing programming language and a replacement for Turing Plus created by Ric Holt of the University of Toronto, Canada, in 1991. It is imperative, object-oriented, and concurrent. It has modules, classes, single inheritance, processes, exception handling, and optional machine-dependent programming. There is an integrated development environment under the X Window System and a demo version. Versions exist for Sun-4, MIPS, RS-6000, NeXTSTEP, Windows 95 and others.
https://en.wikipedia.org/wiki?curid=31105
Trackball A trackball is a pointing device consisting of a ball held by a socket containing sensors to detect a rotation of the ball about two axes—like an upside-down mouse with an exposed protruding ball. Users roll the ball to position the on-screen pointer, using their thumb, fingers, or commonly the palm of the hand while using the fingertips to press the mouse buttons. With most trackballs, operators have to lift their finger, thumb or hand and reposition in on the ball to continue rolling, whereas a mouse would have to be lifted itself and re-positioned. Some trackballs have notably low friction, as well as being made of a dense material such as glass, so they can be spun to make them coast. The trackball's buttons may be situated to that of a mouse or to a unique style that suits the user. Large trackballs are common on CAD workstations for easy precision. Before the advent of the touchpad, small trackballs were common on portable computers (such as the BlackBerry Tour) where there may be no desk space on which to run a mouse. Some small "thumbballs" are designed to clip onto the side of the keyboard and have integral buttons with the same function as mouse buttons. The trackball was invented as part of a post-World War II-era radar plotting system named Comprehensive Display System (CDS) by Ralph Benjamin when working for the British Royal Navy Scientific Service. Benjamin's project used analog computers to calculate the future position of target aircraft based on several initial input points provided by a user with a joystick. Benjamin felt that a more elegant input device was needed and invented a "ball tracker" system called the "roller ball" for this purpose in 1946. The device was patented in 1947, but only a prototype using a metal ball rolling on two rubber-coated wheels was ever built and the device was kept as a military secret. Production versions of the CDS used joysticks. The CDS system had also been viewed by a number of engineers from Ferranti Canada, who returned to Canada and began development of the Royal Canadian Navy's DATAR system in 1952. Designed primarily by Tom Cranston, Fred Longstaff and Kenyon Taylor, they chose the trackball as the primary input, using a standard five-pin bowling ball as the roller. DATAR was similar in concept to Benjamin's display, but used a digital computer to calculate tracks, and sent the resulting data to other ships in a task force using pulse-code modulation radio signals. DATAR's trackball used four disks to pick up motion, two each for the X and Y directions. Several additional rollers provided mechanical support. When the ball was rolled, the pickup discs spun and contacts on their outer rim made periodic contact with wires, producing pulses of output with each movement of the ball. By counting the pulses, the physical movement of the ball could be determined. Since 1966, the American company "Orbit Instrument Corporation" produced a device named "X-Y Ball Tracker", a trackball, which was embedded into radar flight control desks. A similar trackball device at the German ' was constructed by a team around of Telefunken as part of the development for the Telefunken computer infrastructure around the main frame , process computer TR 86 and video terminal SIG 100-86, which began in 1965. This trackball was called ' (German for "rolling ball"). Somewhat later, the idea of "reversing" this device led to the introduction of the first computer ball mouse (still named "", model RKS 100-86), which was offered as an alternative input device to light pens and trackballs for Telefunken's computer systems since 1968. In later trackball models the electrical contacts were replaced by a "chopper wheel" which had small slots cut into it in the same locations as the contacts. An LED shone light through the slots to an optical sensor, As the disk rotated the slots alternately lined up and then blocked the light from the LED, causing pulses to be produced in the sensor. The operation was otherwise similar. Mice used the same basic system for determining motion, but had the problem that the ball was in contact with the desk or mousepad. In order to provide smooth motion the balls were often covered with an anti-slip surface treatment, which was, by design, sticky. Rolling the mouse tended to pick up any dirt and drag it into the system where it would clog the chopper wheels, demanding cleanup. In contrast the trackball is in contact only with the user's hand, which tends to be cleaner. In the late 1990s both mice and trackballs began using direct optical tracking which follows dots on the ball, avoiding the need for anti-slip surface treatment. As with modern mice, most trackballs now have an auxiliary device primarily intended for scrolling. Some have a scroll wheel like most mice, but the most common type is a “scroll ring” which is spun around the ball. Kensington's SlimBlade Trackball similarly tracks the ball itself in three dimensions for scrolling. In September 2017 Logitech announced release of MX-Ergo Mouse, which was released after 6 years of its last trackball mouse. Large trackballs are sometimes seen on computerized special-purpose workstations, such as the radar consoles in an air-traffic control room or sonar equipment on a ship or submarine. Modern installations of such equipment may use mice instead, since most people now already know how to use one. However, military mobile anti-aircraft radars, commercial airliners (such as Airbus A380) and submarine sonars tend to continue using trackballs, since they can be made more durable and more fit for fast emergency use. Large and well made ones allow easier high precision work, for which reason they may still be used in these applications (where they are often called "tracker balls") and in computer-aided design. Trackballs have appeared in computer and video games, particularly early arcade games (see a "List of trackball arcade games") notably Atari's "Centipede" and "Missile Command" – though Atari spelled it "trak-ball". "Football", by Atari, released in 1978, is commonly misunderstood to be the first arcade game to use a trackball, but in "The Ultimate History of Video Games" by Steven L. Kent the designer of "Football", Dave Stubben, claims they copied the design from a Japanese game, Soccer (Taito, 1973). Console trackballs, now fairly rare, were common in the early 1980s: the Atari 2600 and 5200 consoles, as well as the competing ColecoVision console, though using a joystick as their standard controller, each had one as an optional peripheral. The Apple Pippin, a console introduced in 1996, had a trackball built into its gamepad as standard. Trackballs were occasionally used in e-sports prior to the mainstreaming of optical mice in the early 2000s because they were more reliable than ball mice, but now they are extremely rare because optical mice offer superior speed and precision. A trackball requires no mousepad and enables the player to aim swiftly (in first person shooters). Trackballs remain in use in pub golf machines (such as Golden Tee) to simulate swinging the club. Computer gamers have been able to successfully use trackballs in most modern computer games, including FPS, RPG, and RTS genres, with any slight loss of speed compensated for with an increase in precision. Many trackball gamers are competent at "throwing" their cursor rapidly across the screen, by spinning the trackball, enabling (with practice) much faster motion than can be achieved with a ball-less mouse and arm motion . However, many gamers are deterred by the time it takes to 'get used to' the different style of hand control that a trackball requires. Trackballs have also been regarded as excellent complements to analog joysticks, as pioneered by the Assassin 3D 1996 trackball with joystick pass-through capability. This combination provides for two-hand aiming and a high accuracy and consistency replacement for the traditional mouse and keyboard combo generally used on first-person shooter games. Many such games natively support joysticks and analog player movement, like Valve's "Half-Life" and id Software's "Quake" series. Trackballs are provided as the pointing device in some public internet access terminals. Unlike a mouse, a trackball can easily be built into a console, and cannot be ripped away or easily vandalised. Two examples are the Internet browsing consoles provided in some UK McDonald's outlets, and the BT Broadband Internet public phone boxes. This simplicity and ruggedness also makes them ideal for use in industrial computers. Because trackballs for personal computers are stationary, they may require less space for operation than a mouse, and may simplify use in confined or cluttered areas such as a small desk or a rack-mounted terminal. They are generally preferred in laboratory setting for the same reason. An advantage of the trackball is that it takes less space to move than a mouse. A trackball was often included in laptop computers, but since the late 1990s these have switched to track pads. Track balls can still be used as separate input devices with standard desktop computers but this application is also moving to trackpads due to the prevalence of multi touch gesture control in new desktop operating systems. People with a mobility impairment use trackballs as an assistive technology input device. Access to an alternative pointing device has become even more important for them with the dominance of graphically-oriented operating systems. There are many alternative systems to be considered. The control surface of a trackball is easier to manipulate and the buttons can be activated without affecting the pointer position. Trackball users also often state that they are not limited to using the device on a flat desk surface. Trackballs can be used whilst browsing a laptop in bed, or wirelessly from an armchair to a PC playing a movie. They are also useful for computing on boats or other unstable platforms where a rolling deck could produce undesirable input. Trackballs are generally either thumb-operated, with a ball about an inch in diameter or smaller moved by one digit (almost always the thumb) and the buttons clicked by others, or finger-operated, with a ball over two inches in diameter operated by the middle fingers and the buttons by the thumb and little finger. Users favour one format or another for reasons of comfort, mobility, precision, or because it reduces strain on one part of the hand/wrist. Most, but not all, finger-operated designs are symmetrical in design, making them usable by both hands, while thumb-operated designs are by their nature asymmetric or “handed,” allowing the smallest examples to be held in the air. Thumb-operated trackballs are not generally available in left-handed configurations, due to small demand. Some computer users prefer a trackball over the more common mouse for ergonomic reasons. There seems to be no conclusive evidence from studies performed to determine which type of pointing device works best for most applications. Application users are encouraged to test different devices, and to maintain proper posture and scheduled breaks for comfort. Some disabled users find trackballs easier since they only have to move their thumb relative to their hand, instead of moving the whole hand, while others incur unacceptable fatigue of the thumb. Elderly people sometimes have difficulty holding a mouse still while double-clicking; the trackball allows them to let go of the ball while using the button. At times when a user is browsing menus or websites rather than typing, it is also possible to hold a trackball in the right hand like a television remote control, operating the ball with the right thumb and pressing the buttons with the left thumb, thus giving the fingers a rest. Some mobile devices have trackballs, including those in the BlackBerry range, the T-Mobile Sidekick 3, and many early HTC smartphones. These miniature trackballs are made to fit within the thickness of a mobile device, and are controlled by the tip of a finger or thumb. These have mostly been replaced on smartphones by touch screens. In lieu of a scroll wheel, some mice include a tiny trackball sometimes called a scroll ball. A popular example is Apple's Mighty Mouse.
https://en.wikipedia.org/wiki?curid=31106
Tape drive A tape drive is a data storage device that reads and writes data on a magnetic tape. Magnetic tape data storage is typically used for offline, archival data storage. Tape media generally has a favorable unit cost and a long archival stability. A tape drive provides sequential access storage, unlike a hard disk drive, which provides direct access storage. A disk drive can move to any position on the disk in a few milliseconds, but a tape drive must physically wind tape between reels to read any one particular piece of data. As a result, tape drives have very large average access times. However, tape drives can stream data very quickly off a tape when the required position has been reached. For example, Linear Tape-Open (LTO) supported continuous data transfer rates of up to 140 MB/s, a rate comparable to hard disk drives. Magnetic tape drives with capacities less than one megabyte were first used for data storage on mainframe computers in the 1950s. , capacities of 10 terabytes or higher of uncompressed data per cartridge were available. In early computer systems, magnetic tape served as the main storage medium because although the drives were expensive, the tapes were inexpensive. Some computer systems ran the operating system on tape drives such as DECtape. DECtape had fixed-size indexed blocks that could be rewritten without disturbing other blocks, so DECtape could be used like a slow disk drive. Data tape drives may use advanced data integrity techniques such as multilevel forward error correction, shingling, and linear serpentine layout for writing data to tape. Tape drives can be connected to a computer with SCSI, Fibre Channel, SATA, USB, FireWire, FICON, or other interfaces. Tape drives are used with autoloaders and tape libraries which automatically load, unload, and store multiple tapes, increasing the volume of data which can be stored without manual intervention. In the early days of home computing, floppy and hard disk drives were very expensive. Many computers had an interface to store data via an audio tape recorder, typically on Compact Cassettes. Simple dedicated tape drives, such as the professional DECtape and the home ZX Microdrive and Rotronics Wafadrive, were also designed for inexpensive data storage. However, the drop in disk drive prices made such alternatives obsolete. As some data can be compressed to a smaller size than the original files, it has become commonplace when marketing tape drives to state the capacity with the assumption of a 2:1 compression ratio; thus a tape with a capacity of 80 GB would be sold as "80/160". The true storage capacity is also known as the native capacity or the raw capacity. The compression ratio actually achievable depends on the data being compressed. Some data has little redundancy; large video files, for example, already use compression and cannot be compressed further. A database with repetitive entries, on the other hand, may allow compression ratios better than 10:1. A disadvantageous effect termed occurs during read/write if the data transfer rate falls below the minimum threshold at which the tape drive heads were designed to transfer data to or from a continuously running tape. In this situation, the modern fast-running tape drive is unable to stop the tape instantly. Instead, the drive must decelerate and stop the tape, rewind it a short distance, restart it, position back to the point at which streaming stopped and then resume the operation. If the condition repeats, the resulting back-and-forth tape motion resembles that of shining shoes with a cloth. Shoe-shining decreases the attainable data transfer rate, drive and tape life, and tape capacity. In early tape drives, non-continuous data transfer was normal and unavoidable. Computer processing power and available memory were usually insufficient to provide a constant stream, so tape drives were typically designed for "start-stop" operation. Early drives used very large spools, which necessarily had high inertia and did not start and stop moving easily. To provide high start, stop and seek performance, several feet of loose tape was played out and pulled by a suction fan down into two deep open channels on either side of the tape head and capstans. The long thin loops of tape hanging in these "vacuum columns" had far less inertia than the two reels and could be rapidly started, stopped and repositioned. The large reels would move as required to keep the slack tape in the vacuum columns. Later, most tape drives of the 1980s introduced the use of an internal data buffer to somewhat reduce start-stop situations. These drives are often referred to as "tape streamers". The tape was stopped only when the buffer contained no data to be written, or when it was full of data during reading. As faster tape drives became available, despite being buffered, the drives started to suffer from the shoe-shining sequence of stop, rewind, start. Most recently, drives no longer operate at a single fixed linear speed, but have several speeds. Internally, they implement algorithms that dynamically match the tape speed level to the computer's data rate. Example speed levels could be 50 percent, 75 percent and 100 percent of full speed. A computer that streams data slower than the lowest speed level (e.g. at 49 percent) will still cause shoe-shining. Magnetic tape is commonly housed in a casing known as a cassette or cartridge—for example, the 4-track cartridge and the Compact Cassette. The cassette contains magnetic tape to provide different audio content using the same player. The outer shell, made of plastic, sometimes with metal plates and parts, permits ease of handling of the fragile tape, making it far more convenient and robust than having spools of exposed tape. Simple analog cassette audio tape recorders were commonly used for data storage and distribution on home computers at a time when floppy disk drives were very expensive. The Commodore Datasette was a dedicated data version using the same media. Manufacturers often specify the capacity of tapes using data compression techniques; compressibility varies for different data (commonly 2:1 to 8:1), and the specified capacity may not be attained for some types of real data. , tape drives capable of higher capacity were still being developed. In 2011, Fujifilm and IBM announced that they had been able to record 29.5 billion bits per square inch with magnetic tape media developed using the BaFe particles and nanotechnologies, allowing drives with true (uncompressed) tape capacity of 35 TB. The technology was not expected to be commercially available for at least a decade. In 2014, Sony and IBM announced that they had been able to record 148 gigabits per square inch with magnetic tape media developed using a new vacuum thin-film forming technology able to form extremely fine crystal particles, allowing true tape capacity of 185 TB.
https://en.wikipedia.org/wiki?curid=31109
Tesseract In geometry, the tesseract is the four-dimensional analogue of the cube; the tesseract is to the cube as the cube is to the square. Just as the surface of the cube consists of six square faces, the hypersurface of the tesseract consists of eight cubical cells. The tesseract is one of the six convex regular 4-polytopes. The tesseract is also called an eight-cell, C8, (regular) octachoron, octahedroid, cubic prism, and tetracube. It is the four-dimensional hypercube, or 4-cube as a part of the dimensional family of hypercubes or measure polytopes. Coxeter labels it the formula_1 polytope. According to the "Oxford English Dictionary", the word "tesseract" was coined and first used in 1888 by Charles Howard Hinton in his book "A New Era of Thought", from the Greek ("téssereis aktines", "four rays"), referring to the four lines from each vertex to other vertices. In this publication, as well as some of Hinton's later work, the word was occasionally spelled "tessaract". The tesseract can be constructed in a number of ways. As a regular polytope with three cubes folded together around every edge, it has Schläfli symbol {4,3,3} with hyperoctahedral symmetry of order 384. Constructed as a 4D hyperprism made of two parallel cubes, it can be named as a composite Schläfli symbol {4,3} × { }, with symmetry order 96. As a 4-4 duoprism, a Cartesian product of two squares, it can be named by a composite Schläfli symbol {4}×{4}, with symmetry order 64. As an orthotope it can be represented by composite Schläfli symbol { } × { } × { } × { } or { }4, with symmetry order 16. Since each vertex of a tesseract is adjacent to four edges, the vertex figure of the tesseract is a regular tetrahedron. The dual polytope of the tesseract is called the regular hexadecachoron, or 16-cell, with Schläfli symbol {3,3,4}, with which it can be combined to form the compound of tesseract and 16-cell. The standard tesseract in Euclidean 4-space is given as the convex hull of the points (±1, ±1, ±1, ±1). That is, it consists of the points: A tesseract is bounded by eight hyperplanes ("x"i = ±1). Each pair of non-parallel hyperplanes intersects to form 24 square faces in a tesseract. Three cubes and three squares intersect at each edge. There are four cubes, six squares, and four edges meeting at every vertex. All in all, it consists of 8 cubes, 24 squares, 32 edges, and 16 vertices. The construction of hypercubes can be imagined the following way: It is possible to project tesseracts into three- and two-dimensional spaces, similarly to projecting a cube into two-dimensional space. Projections on the 2D-plane become more instructive by rearranging the positions of the projected vertices. In this fashion, one can obtain pictures that no longer reflect the spatial relationships within the tesseract, but which illustrate the connection structure of the vertices, such as in the following examples: A tesseract is in principle obtained by combining two cubes. The scheme is similar to the construction of a cube from two squares: juxtapose two copies of the lower-dimensional cube and connect the corresponding vertices. Each edge of a tesseract is of the same length. This view is of interest when using tesseracts as the basis for a network topology to link multiple processors in parallel computing: the distance between two nodes is at most 4 and there are many different paths to allow weight balancing. This configuration matrix represents the tesseract. The rows and columns correspond to vertices, edges, faces, and cells. The diagonal numbers say how many of each element occur in the whole tesseract. The nondiagonal numbers say how many of the column's element occur in or at the row's element. formula_3 The long radius (center to vertex) of the tesseract is equal to its edge length; thus its diagonal through the center (vertex to opposite vertex) is 2 edge lengths. Only a few polytopes have this property, including the four-dimensional tesseract and 24-cell, the three-dimensional cuboctahedron, and the two-dimensional hexagon. In particular, the tesseract is the only hypercube with this property. The longest vertex-to-vertex diameter of an "n"-dimensional hypercube of unit edge length is , so for the square it is , for the cube it is , and only for the tesseract it is , exactly 2 edge lengths. The tesseract, like all hypercubes, tessellates Euclidean space. The self-dual tesseractic honeycomb consisting of 4 tesseracts around each face has Schläfli symbol {4,3,3,4}. Hence, the tesseract has a dihedral angle of 90°. The tesseract's radial equilateral symmetry makes its tessellation the unique regular body-centered cubic lattice of equal-sized spheres, in any number of dimensions. The tesseract itself can be decomposed into smaller polytopes. For instance, it can be triangulated into 4-dimensional simplices that share their vertices with the tesseract. It is known that there are 92487256 such triangulations and that the least number of 4-dimensional simplices in any of them is 16. The regular complex polytope 4{4}2, , in formula_4 has a real representation as a tesseract or 4-4 duoprism in 4-dimensional space. 4{4}2 has 16 vertices, and 8 4-edges. Its symmetry is 4[4]2, order 32. It also has a lower symmetry construction, , or 4{}×4{}, with symmetry 4[2]4, order 16. This is the symmetry if the red and blue 4-edges are considered distinct. As a uniform duoprism, the tesseract exists in a : {"p"}×{4}. The regular tesseract, along with the 16-cell, exists in a set of 15 uniform 4-polytopes with the same symmetry. The tesseract {4,3,3} exists in a sequence of regular 4-polytopes and honeycombs, {"p",3,3} with tetrahedral vertex figures, {3,3}. The tesseract is also in a sequence of regular 4-polytope and honeycombs, {4,3,"p"} with cubic cells. Since their discovery, four-dimensional hypercubes have been a popular theme in art, architecture, and science fiction. Notable examples include: The word "tesseract" was later adopted for numerous other uses in popular culture, including as a plot device in works of science fiction, often with little or no connection to the four-dimensional hypercube of this article. See Tesseract (disambiguation).
https://en.wikipedia.org/wiki?curid=31112
Top-level domain A top-level domain (TLD) is one of the domains at the highest level in the hierarchical Domain Name System of the Internet. The top-level domain names are installed in the root zone of the name space. For all domains in lower levels, it is the last part of the domain name, that is, the last label of a fully qualified domain name. For example, in the domain name www.example.com, the top-level domain is com. Responsibility for management of most top-level domains is delegated to specific organizations by the Internet Corporation for Assigned Names and Numbers (ICANN), which operates the Internet Assigned Numbers Authority (IANA), and is in charge of maintaining the DNS root zone. Originally, the top-level domain space was organized into three main groups: "Countries", "Categories", and "Multiorganizations". An additional "temporary" group consisted of only the initial DNS domain, arpa, and was intended for transitional purposes toward the stabilization of the domain name system. As of 2015, IANA distinguishes the following groups of top-level domains: Countries are designated in the Domain Name System by their two-letter ISO country code; there are exceptions, however (e.g., .uk). This group of domains is therefore commonly known as country-code top-level domains (ccTLD). Since 2009, countries with non–Latin-based scripts may apply for internationalized country code top-level domain names, which are displayed in end-user applications in their language-native script or alphabet, but use a Punycode-translated ASCII domain name in the Domain Name System. Generic top-level domains (formerly Categories) initially consisted of gov, edu, com, mil, org, and net. More generic TLDs have been added, such as info. The authoritative list of currently existing TLDs in the root zone is published at the IANA website at https://www.iana.org/domains/root/db/. An internationalized country code top-level domain (IDN ccTLD) is a top-level domain with a specially encoded domain name that is displayed in an end user application, such as a web browser, in its language-native script or alphabet (such as the Arabic alphabet), or a non-alphabetic writing system (such as Chinese characters). IDN ccTLDs are an application of the internationalized domain name (IDN) system to top-level Internet domains assigned to countries, or independent geographic regions. ICANN started to accept applications for IDN ccTLDs in November 2009, and installed the first set into the Domain Names System in May 2010. The first set was a group of Arabic names for the countries of Egypt, Saudi Arabia, and the United Arab Emirates. By May 2010, 21 countries had submitted applications to ICANN, representing 11 scripts. The domain arpa was the first Internet top-level domain. It was intended to be used only temporarily, aiding in the transition of traditional ARPANET host names to the domain name system. However, after it had been used for reverse DNS lookup, it was found impractical to retire it, and is used today exclusively for Internet infrastructure purposes such as in-addr.arpa for IPv4 and ip6.arpa for IPv6 reverse DNS resolution, uri.arpa and urn.arpa for the Dynamic Delegation Discovery System, and e164.arpa for telephone number mapping based on NAPTR DNS records. For historical reasons, arpa is sometimes considered to be a generic top-level domain. A set of domain names is reserved by the Internet Engineering Task Force as special-use domain names per authority of Request for Comments (RFC) 6761. The practice originated in RFC 1597 for reserved address allocations in 1994, and reserved top-level domains in RFC 2606 of 1999. RFC 6761 reserves the following four top-level domain names to avoid confusion and conflict. Any such reserved usage of those TLDs should not occur in production networks that utilize the global domain name system: RFC 6762 reserves the use of .local for link-local host names that can be resolved via the Multicast DNS name resolution protocol. RFC 7686 reserves the use of .onion for the self-authenticating names of Tor hidden services. These names can only be resolved by a Tor client because of the use of onion routing to protect the anonymity of users. Internet-Draft draft-wkumari-dnsop-internal-00 proposes reserving the use of .internal for "names which do not have meaning in the global context but do have meaning in a context internal to their network", and for which the RFC 6761 reserved names are semantically inappropriate. In the late 1980s, InterNIC created the nato domain for use by NATO. NATO considered none of the then existing TLDs as adequately reflecting their status as an international organization. Soon after this addition, however, InterNIC also created the int TLD for the use by international organizations in general, and persuaded NATO to use the second level domain "nato.int" instead. The "nato" TLD, no longer used, was finally removed in July 1996. Other historical TLDs are cs for Czechoslovakia (now using cz for Czech Republic and sk for Slovakia), dd for East Germany (using de after reunification of Germany), yu for SFR Yugoslavia and Serbia and Montenegro (now using ba for Bosnia and Herzegovina, hr for Croatia, me for Montenegro, mk for North Macedonia, rs for Serbia and si for Slovenia), and zr for Zaire (now cd for the Democratic Republic of the Congo). In contrast to these, the TLD su has remained active despite the demise of the Soviet Union that it represents. Under the chairmanship of Nigel Roberts, ICANN's ccNSO is working on a policy for retirement of ccTLDs that have been removed from ISO 3166. Around late 2000, ICANN discussed and finally introduced aero, biz, coop, info, museum, name, and pro TLDs. Site owners argued that a similar TLD should be made available for adult and pornographic websites to settle the dispute of obscene content on the Internet, to address the responsibility of US service providers under the US Communications Decency Act of 1996. Several options were proposed including xxx, "sex" and "adult". The .xxx domain went live in 2011. An older proposal consisted of seven new gTLDs: arts, firm, info, nom, rec, shop, and web. Later biz, info, museum, and name covered most of these old proposals. During the 32nd International Public ICANN Meeting in Paris in 2008, ICANN started a new process of TLD naming policy to take a "significant step forward on the introduction of new generic top-level domains". This program envisioned the availability of many new or already proposed domains, as well as a new application and implementation process. Observers believed that the new rules could result in hundreds of new gTLDs being registered. On 13 June 2012, ICANN announced nearly 2,000 applications for top-level domains, which began installation throughout 2013. Donuts Inc. invested $57 million in more than 300 applications while Famous Four Media applied for 61 new domains. The first seven – "bike", "clothing", "guru", "holdings", "plumbing", "singles", and "ventures" – were released in 2014. ICANN's slow progress in creating new generic top-level domains, and the high application costs associated with TLDs, contributed to the creation of alternate DNS roots with different sets of top-level domains. Such domains may be accessed by configuration of a computer with alternate or additional (forwarder) DNS servers or plugin modules for web browsers. Browser plugins detect alternate root domain requests and access an alternate domain name server for such requests. Several networks, such as BITNET, CSNET, and UUCP, existed that were in widespread use among computer professionals and academic users, but were not interoperable directly with the Internet and exchanged mail with the Internet via special email gateways. For relaying purposes on the gateways, messages associated with these networks were labeled with suffixes such as bitnet, oz, csnet, or uucp, but these domains did not exist as top-level domains in the public Domain Name System of the Internet. Most of these networks have long since ceased to exist, and although UUCP still gets significant use in parts of the world where Internet infrastructure has not yet become well established, it subsequently transitioned to using Internet domain names, and pseudo-domains now largely survive as historical relics. One notable exception is the 2007 emergence of SWIFTNet Mail, which uses the swift pseudo-domain. The anonymity network Tor formerly used the top-level pseudo-domain onion for Tor hidden services, which can only be reached with a Tor client because it uses the Tor onion routing protocol to reach the hidden service to protect the anonymity of users. However, the pseudo-domain became officially reserved in October 2015. i2p provides a similar hidden pseudo-domain, .i2p. BT hubs use the top-level pseudo-domain home for local DNS resolution of routers, modems and gateways.
https://en.wikipedia.org/wiki?curid=31115
Temple of Set The Temple of Set is an occult initiatory order founded in 1975. A new religious movement and form of Western esotericism, the Temple espouses a religion known as Setianism, whose practitioners are called Setians. This is sometimes identified as a form of Satanism, although this term is not often embraced by Setians and is contested by some academics. The Temple was established in the United States in 1975 by Michael Aquino, an American political scientist, military officer, and a high-ranking member of Anton LaVey's Church of Satan. Dissatisfied with the direction in which LaVey was taking the Church, Aquino resigned and – according to his own claim – embarked on a ritual to invoke Satan, who revealed to him a sacred text called "The Book of Coming Forth by Night". According to Aquino, in this work Satan revealed his true name to be that of Set, which had been the name used by his followers in ancient Egypt. Aquino was joined in establishing the Temple by a number of other dissatisfied members of LaVey's Church, and soon various Setian groups were established across the United States. Setians believe that Set is the one real god and that he has aided humanity by giving them a questioning intellect, the "Black Flame", which distinguishes them from other animal species. Set is held in high esteem as a teacher whose example is to be emulated but he is not worshipped as a deity. Highly individualistic in basis, the Temple promotes the idea that practitioners should seek self-deification and thus attain an immortality of consciousness. Setians believe in the existence of magic as a force which can be manipulated through ritual, however the nature of these rituals is not prescribed by the Temple. Specifically, Aquino described Setian practices as "black magic", a term which he defines idiosyncratically. Following initiation into the Temple, a Setian can proceed along a series of six degrees, each of which requires greater responsibilities to the group; as a result, most members remain in the first two degrees. Governed by a high priest or high priestess and a wider Council of Nine, the Temple is also divided into groups known as "pylons", through which Setians can meet or correspond in order to advance their magical work in a particular area. Pylons of the Temple are now present in the United States, Australia, and Europe, with estimates placing the Temple's membership between 200 and 500. The Temple of Set is a new religious movement, and draws upon earlier forms of Western esotericism. Among academic scholars of religious studies, there has been some debate as to whether the Temple of Set can be characterised as "Satanism" or not. The religious studies scholars Asbjorn Dyrendal, Massimo Introvigne, James R. Lewis, and Jesper Aa. Petersen describe the Temple of Set as a Satanic group, despite its reluctance to use the term "Satanism", because of it is an offshoot of the Church of Satan which continues to use satanic mythology. Conversely, the scholar Kennet Granholm argued that it should not be considered a form of Satanism because it does not place an emphasis on the figure of Satan. Granholm acknowledged that it was an "actor in the Satanic milieu" and part of the wider Left-Hand Path group of esoteric traditions. He suggested that it could also be seen as a form of "Post-Satanism", thereby continuing to reflect its historical origins within religious Satanism. The Temple of Set is far more rooted in esoteric ideas than the Church of Satan had been. It has thus been termed "Esoteric Satanism", a term used to contrast it with the "Rational Satanism" found in LaVeyan Satanism. Accordingly, it has been labelled the "intellectual wing of esoteric Satanism", with the Temple presenting itself as an intellectual religion. Aquino possessed a P.h.D. in political science and this formal education was reflected in the way that he presented his arguments, in which he draws broadly upon Western philosophy and science. Born in 1946, Michael Aquino was a military intelligence officer specialising in psychological warfare. In 1969 he joined Anton LaVey's Church of Satan and rose rapidly through the group's ranks. In 1970, while he was serving with the U.S. military during the Vietnam War, Aquino was stationed in Bến Cát in South Vietnam when he wrote a tract titled "Diabolicon" in which he reflected upon his growing divergence from the Church of Satan's doctrines. In this tract, teachings about the creation of the world, God, and humanity are presented, as is the dualistic idea that Satan complements God. The character of Lucifer is presented as bringing insight to human society, a perspective that was inherited from the depiction of Lucifer in John Milton's seventeenth-century epic poem "Paradise Lost". By 1971 Aquino was ranked as a Magister Caverns of the IV° within the Church's hierarchy, was editor of its publication "The Cloven Hoof", and sat on its governing Council of Nine. In 1973 he rose to the previously unattained rank of Magister Templi of IV°. According to the scholars of Satanism Per Faxneld and Jesper Petersen, Aquino had become LaVey's "right-hand man". He had nevertheless developed concerns about the Church of Satan, feeling that it had attracted many "fad-followers, egomaniacs and assorted oddballs whose primary interests in becoming Satanists was to flash their membership cards for cocktail-party notoriety". When in 1975 LaVey abolished the system of regional groups, or "grottos", and declared that in future all degrees would be given in exchange for financial or other contributions to the Church, Aquino became increasingly disaffected; he resigned from the organisation on June 10, 1975. While LaVey seems to have held a pragmatic and practical view of the degrees and of the Satanic priesthood, intending them to reflect the social role of the degree holder within the organization, Aquino and his supporters viewed the priesthood as being spiritual, sacred and irrevocable. Dyrendal, Lewis, and Petersen describe Aquino as, in effect, accusing LaVey of the sacrilege of simony. Aquino then provided what has been described as a "foundation myth" for his Setian religion. Having departed the Church, he embarked on a ritual intent on asking Satan for advice on what to do next. According to his account, at Midsummer 1975, Satan appeared and revealed that he wanted to be known by his true name, Set, which had been the name used by his worshippers in ancient Egypt. Aquino produced a religious text, "The Book of Coming Forth by Night", which he alleged had been revealed to him by Set through a process of automatic writing. According to Aquino, "there was nothing overtly sensational, supernatural, or melodramatic about "The Book of Coming Forth By Night" working. I simply sat down and wrote it." The book proclaimed Aquino to be the Magus of the new Aeon of Set and the heir to LaVey's "infernal mandate". Aquino later stated that the revelation that Satan was Set necessitated his own exploration of Egyptology, a subject about which he had previously known comparatively little. In this account, the direct word of Set was appealed to as a source of legitimation. Moreover, by drawing connections between itself and ancient Egypt, this young religion adopted a legitimisation strategy that tried to antedate both Judaism and Christianity. Aquino's "Book of Coming Forth by Night" makes reference to "The Book of the Law", a similarly 'revealed' text produced by the occultist Aleister Crowley in 1904 which provided the basis for Crowley's religion of Thelema. In Aquino's book, "The Book of the Law" was presented as a genuine spiritual text given to Crowley by preternatural sources, but it was also declared that Crowley had misunderstood both its origin and message. In making reference to "The Book of the Law", Aquino presented himself as being as much Crowley's heir as LaVey's, and Aquino's work would engage with Crowley's writings and beliefs to a far greater extent than LaVey ever did. In establishing the Temple, Aquino was joined by other ex-members of LaVey's Church, and soon Setian groups, or "pylons", were established in various parts of the United States. The structure of the Temple was based largely on those of the ceremonial magical orders of the late nineteenth century, such as the Hermetic Order of the Golden Dawn and Ordo Templi Orientis. Aquino has stated that he believed LaVey not to be merely a charismatic leader but to have been actually appointed by Satan himself (referring to this charismatic authority as the "Infernal Mandate") to found the Church. After the split of 1975, Aquino believed LaVey had lost the mandate, which the "Prince of Darkness" then transferred to Aquino and a new organization, the Temple of Set. According to both the historian of religion Mattias Gardell and journalist Gavin Baddeley, Aquino displayed an "obsession" with LaVey after his departure from the Church, for instance by publicly releasing court documents that reflected negatively on his former mentor, among them restraining orders, divorce proceedings, and a bankruptcy filing. In turn, LaVey lampooned the new Temple as "Laurel and Hardy's "Sons of the Desert"". In 1975, the Temple incorporated as a non-profit Church in California, receiving state and federal recognition and tax-exemption later that year. Many members of the Temple had raised concerns about Aquino's authoritarian position within it. Aquino relinquished his office of High Priest in 1979 to Ronald Keith Barrett, who produced an inspired text of his own, titled "The Book of Opening the Way". Barrett's approach was later criticized as "more mystical than magical" by Temple members. Barrett's leadership was also criticised as authoritarian, resulting in a decline in the Temple's membership. Barrett resigned his office and severed ties with the organization in May 1982. He subsequently established his own Temple of Anubis, which he led until his 1998 death; it survived until the early 2010s. After Barrett's departure, Aquino retook leadership of the Temple of Set. During this period, the sociologist Gini Graham Scott clandestinely participated in the Temple, using her observations as the basis for her 1983 book "The Magicians: A Study of the Use of Power in a Black Magic Group". After receiving his PhD in political science from the University of California, Santa Barbara in 1980, Aquino worked as an adjunct professor at Golden Gate University until 1986 while continuing to serve in the United States Army as an Active Guard Reserve officer at the Presidio of San Francisco. He was fascinated with the connections between occultism and Nazism, resulting in some accusations that he was sympathetic to Nazi ideology. In 1983, he performed a solitary rite at Walhalla, the subterranean section of the Wewelsburg castle in Germany that was utilised as a ceremonial space by the Schutzstaffel's Ahnenerbe group during the Nazi period. This resulted in his formation of the Order of the Trapezoid, a Setian group whose members understood themselves as a chivalric order of knights. From 1987 through to 1995, the Grand Master of the Order of the Trapezoid was Edred Thorsson, who had joined the Temple of Set in 1984 and risen to the Fifth Degree in 1990. Thorsson exerted a "discernible influence" over the Setian community through his books, in which he combined aspects of Satanic philosophy with the modern Pagan religion of Heathenry. In 1980 he founded the Texas-based Rune-Gild, which shared many of the Temple's key philosophical tenets but with a focus on the study of runes and their applications in magical practice. In the 1980s, Aquino attracted greater publicity for his Temple through appearances on television talk shows like "The Oprah Winfrey Show" and "Geraldo". In 1987, during the Satanic ritual abuse hysteria, the three-year-old daughter of a Christian clergyman accused Aquino of sexually abusing her during Satanic rites held at his Russian Hill home. Responding to the allegations, police raided Aquino's home, however—after no evidence was found to substantiate the allegation and it was revealed that Aquino was living in Washington D.C. at the time of the alleged abuse—the police decided not to charge him with any felony. Aquino attempted to bring formal charges against the chaplain and psychiatrist who had encouraged the girl's claims, although he was more successful in bringing legal action against two books—Carl A. Raschke's "Painted Black" and Linda Blood's "The New Satanists"—that had suggested that he was guilty. He then left the Presidio and was transferred to St. Louis. In 1994, Aquino retired from active service in the Army, being honourably transferred to the Retired Reserve and awarded the Meritorious Service Medal. While the Satanic ritual abuse hysteria declined, Aquino continued to be a figure of prominence in "mind control" conspiracy theories because of his career as a psychological warfare officer in the US Army. In the United Kingdom during this same period, tabloids like the "News of the World" and "Sunday Mirror" published sensationalist articles about the Temple. In the mid-1990s a group of British Setians approached the religious studies scholar Graham Harvey and encouraged him to conduct research into the group so as to combat misconceptions about them. The Temple first registered a website in 1997, the same year as the Church of Satan. It would also establish its own intranet, allowing for communication between Setians in different parts of the world. One member of the Temple was the New Zealander Kerry Bolton, who split to form his own Order of the Left Hand Path in 1990. In 1995, another couple who joined were LaVey's daughter Zeena Schreck and her husband Nikolas Schreck, both of whom were vocal critics of Zeena's father. In 1996, Don Webb became the high priest of the Temple, a position that he held until 2002. He was replaced by Zeena Schreck, but she resigned after six weeks and was replaced by Aquino, who took charge once more. In that year, Zeena led a schism within the organisation, establishing her own Berlin-based group, The Storm, which she later renamed the Sethian Liberation Movement. Aquino stood down as Supreme Priest again in 2004, to be replaced by Patricia Hardy, who was elected to the position of Supreme Priestess. Although no longer in charge of the organisation, he nevertheless remained its most visible spokesperson. In addition to the "Book of Coming Forth by Night", in which Set himself is purported to speak, the Temple's philosophy and teachings are revealed in a series of occult writings titled the "Jeweled Tablets of Set". Each tablet is keyed to a specific degree in the Temple hierarchy. Only the introduction to the first tablet ("Crystal Tablet of Set"), titled "Black Magic", is available for non-members. The "Ruby Tablet", which is available for second-degree members, is the lengthiest and most diverse of the tablets. The private Temple literature is not regarded as secret per se, but is kept restricted because it contains materials which, according to the Temple, may be dangerous to the non-initiated. The human individual is at the centre of Setian philosophy. The Temple places great emphasis on the development of the individual, postulating self-deification as the ultimate goal. The realization of the true nature of the Setian is termed "becoming" or "coming into being" and is represented by the Egyptian hieroglyphic term "kheper", or "Xeper" (a phonetic of _Xpr_), as the Temple of Set prefers to write it. This term is described in "The Book of Coming Forth by Night" as "the Word of the Aeon of Set". Members attempt "to preserve and strengthen" their "isolate, psyche-centric existence" through adherence to the left-hand path. This idea is in opposition to the traditional goal of Hermetic and Western mystical practices, which is the surrendering of the ego into a union with either God or the universe. The Temple teaches that the true self, or "essence", is immortal, and "Xeper" is the ability to align consciousness with this essence. Aquino taught that there is an afterlife for those who have reached the necessary level of individual development. This afterlife could occur in the individual's subjective universe. Those unable to reach this level dissolve into non-existence when the physical body dies. Self-initiation is knowledge understood as a conjunction of intellect and intuition. In keeping with its emphasis on the individual, the Temple encourages their members to celebrate their own birthday, and does not prescribe any other calendar of religious festivities. Barrett presented "Xem" as his Aeonic Word to the Setians, presenting it as a natural and necessary further development of Aquino's "Xeper". Aquino later acknowledged "Xem" as a worthwhile magical concept for Setians to explore, but found Barrett's insistence on its exclusivity as incompatible with the Temple's individualistic philosophy. Aquino's understanding of Satan differed from the atheistic interpretation promoted by LaVey, and the Temple's theology differs from that of the Church of Satan. The Temple presents the view that the name "Satan" was originally a corruption of the name "Set". The Temple of Set promotes the idea that Set is a real entity, and accordingly has been described as being "openly theistic". It further argues that Set is ageless and is the only real god in existence, with all others having been created by the human imagination. Set is described as having given humanity—through the means of non-natural evolution—the "Black Flame" or the "Gift of Set". This refers to humanity's questioning intellect which sets the species apart from other animals and gives it an "isolate self-consciousness" and the possibility to attain divinity. Aquino argued that the idea of the Gift of Set was inadvertently promoted to a wider audience in Stanley Kubrick's 1968 film "". According to Aquino, the black monolith which imparts human-level intelligence onto prehistoric apes in the film was a symbol of Set. While Setians are expected to revere Set, they do not worship him, instead regarding him as a teacher and a guide. He is portrayed as a role model on which Setians can base their own deification. According to Webb, "we do not worship Set - only our own potential. Set was and is the patron of the magician who seeks to increase his existence through expansion." Embracing the idea of aeons from Crowley's Thelema, Aquino adopted the Crowleyan tripartite division between the Aeon of Isis, Aeon of Osiris, and Aeon of Horus, but added to that the Aeon of Satan, which he dates from 1966 to 1975, and then the Aeon of Set, which he dated from 1975 onward. Despite presenting these chronological parameters, Aquino also portrayed the aeons less as time periods and more as mind-sets that can co-exist alongside one another. Thus, he stated that "A Jew, Christian or Muslim exists in the Æon of Osiris, a Wiccan in that of Isis, and a Thelemite in that of Horus". Aquino placed an emphasis on what he deemed to be the division between the objective and subjective universes. In the Setian religion, the objective world is understood as representing the natural world and humanity's collective meaning systems, while the subjective universe is understood as the individually experienced world and individual meaning systems. Following earlier esotericists like Crowley, Aquino characterised magic as "causing change in accordance with will". Unlike LaVey, Aquino expressed belief in the division between black magic and white magic. He described white magic as "a highly-concentrated form of conventional religious ritual", characterising it as being "more versatile", "less difficult" and "less dangerous" than black magic. However he criticised white magic as "fraud and/or self-delusion" which deceives the consciousness into thinking that it has been accepted in the objective universe. Aquino divided black magic into two forms, lesser black magic and greater black magic. He stated that lesser black magic entails "impelling" things that exist in the "objective universe" into doing a desired act by using "obscure physical or behavioural laws" and into this category he placed stage magic, psychodramas, politics, and propaganda. Conversely, he used the term greater black magic in reference to changes in the subjective universe of the magician, allowing them to realize their self in accordance with the principle of "Xeper". Among Setians it is accepted that there may be changes in the objective universe as a result of greater black magic, however such effects are considered secondary to the impact that they have on the subjective universe. Within the Temple, rituals are typically known as "workings", and are most often carried out alone. Stressing the religion's individualist nature, there are no rituals that are specifically prescribed by the Temple. Aquino also emphasized that in order to achieve a desired aim, the Setian should carry out physical actions alongside magical ones. There are no regular occasions which are marked by fixed rituals, and the Temple holds to no calendar of festivals. The Temple uses an inverted pentagram as its insignia, known as the "Pentagram of Set" to Setians. The use of the geometric shape is derived from the Sigil of Baphomet used by the Church of Satan, although stripped of its overtly Satanic add-ons. The Temple explains the meaning and significance of the pentagram by referring to Pythagorean ideas and "mathematical perfection". At Setian gatherings, members wear the pentagram as a medallion. The medallion is colored according to the initiation degree of the Setian. Both the Church of Satan and the Temple of Set also use the trapezoid symbol. The version used by the Church includes flames, a pitchfork and the number 666, while the trapezoid of the Temple has a left-facing Egyptian sceptre, and the number 666 stylized in geometric shapes rather than as clear numbers. The internal structure of the Temple of Set is closely reminiscent of that of the Church of Satan as it existed in the 1970s. All members of the Temple must be affiliated with a pylon, and thus membership is by application, requiring contact with a Setian priestess or priest followed by an evaluation period. The participation of non-initiated in the Temple's rituals are forbidden, due to the belief that their rituals would be dangerous in the wrong hands. The Temple of Set recognizes several stages or degrees of initiation. The degrees indicate the individual Setian's development and skill in magic. The degree structure is based on that of the Church of Satan, which in turn was based on the degrees of a nineteenth-century occult group, the Hermetic Order of the Golden Dawn. The Temple terms the progression through degrees as "recognitions", because the organization's philosophy sees that the individual member initiates themselves and that the Temple merely acknowledges this by granting the degree. These degrees are: The priesthood of the Temple of Set consists of members holding the third degree or higher; those in the first and second degrees are considered "lay members" of the Temple. The first degree serves as a space for mutual evaluation, in which the Temple assesses whether the individual is appropriate for the group, and the individual decides whether they wish to further their involvement with it. Full membership comes with recognition to the second degree. Many members do not advance beyond the second degree, nor is this expected of them, as while the first and second degree members use the organization's teachings and tools for their own development, the priesthood involves greater responsibilities towards the organization, such as being its official representatives. Recognition is performed by members of the priesthood. The fourth degree, which is acknowledged by the high priest/priestess, entails that the individual is so advanced in their magical skills that they are able to found their own school of magic, represented in the different orders of the Temple. The fifth degree can only be awarded by the unanimous decision of the Council of Nine and by the approval of the Temple's high priest/priestess. A fifth degree member has the power to utter and define a concept which somehow affects the philosophy of the organization, such as the concept of "Xeper" defined by Aquino in 1975. Only a handful of members have attained this degree and most "fifth-degree" concepts defined in such a manner are no longer studied in the organization. The final sixth degree represents a Magus "whose Task is complete". This degree is held by a very select few in the Temple, although any fifth-degree member can assume the sixth degree based on their own assessment. The organization is led by a high priest/priestess, who is also the public face of the Temple. The high priest is chosen among fourth or higher degree members by the chairman of the Council of Nine. This ruling council has nine members chosen from the priesthood (third degree or higher), whose mandate lasts for nine years with a new member being elected every year. The chairman of the council is chosen from among the council members each year. The council has the ultimate ruling power in the Temple and even the high priest is responsible to it. The Temple also has an executive director, whose task is to deal with administrative issues. Since its founding in 1975, the temple has had the following high priests/priestesses: In addition to the international organization, the Temple sponsors initiatory "Orders" and "Elements" and local groups called "Pylons". Pylons are intended to facilitate the initiatory work of the Temple's members by conducting meetings where discussions and magical works take place. The purpose of a pylon is to provide a space in which the Setian can focus on their religion, aided by like-minded individuals. Pylons typically meet in a member's home. Members usually join the Pylon located geographically closest to them. Correspondence- or Internet-based Pylons also exist, with Harvey noting that this online networking is more common than in-person interaction. A Pylon is led a by a second-degree (or higher) member who is called a "Sentinel". The term "pylon" derives from the architectural features which served as fortified gateways to ancient Egyptian temples. One Finnish Setian informed Granholm that the relationship between the orders and the temple was like that of different departments in a university. "Elements" are loosely structured interest groups, where specific themes and issues are addressed. They can be open for non-members and are commonly in operation only for short periods. Topics of interest include, for example, animal rights, which was the subject of the Arkte element operated by Aquino's wife Lilith. There are sections of the Temple known as "Orders", each of which focus on a particular theme, for instance ancient Egypt, Norse culture, Tantric Hinduism, or vampirism. Others focus on a particular skill, for instance the Order of Uart focuses on the visual arts and the Order of Taliesin on music. Orders can be understood as schools of different aspects of magic providing different paths of initiation. Orders are led by grand masters, who will usually be the founder of the order. In longer-lived orders the founder may have a successive grand master. Orders are founded by members of the fourth degree. When members reaches the second degree of initiation, they are expected to join an order of their own choosing. In normal circumstances, a Setian is only permitted to join one order, however special dispensation can be obtained for a practitioner to join two. Setians also hold annual International Conclaves. First Degree Initiates who obtain sponsorship by a member of the Priesthood are permitted to attend the International Conclave and Regional Gatherings. In 2000, the Temple had thirteen pylons, which were operating in the United States, Australia, Germany, and across Sweden and Finland. The extent of the Temple's membership has not been publicly revealed by the group; however, in 2005 Petersen noted that academic estimates for the Temple's membership varied from between 300 and 500, and Granholm suggested that in 2007 the Temple contained circa 200 members. The Temple's members come from a variety of racial backgrounds. In 1999, the anthropologist Jean La Fontaine suggested that in Britain there were 100 members of the Temple at most, and possibly "considerably fewer". In 2001 the scholar Gareth Medway posited that the group had 70 to 80 members in the United Kingdom, adding that it was the largest Satanic group then active in the country. In 2009, Harvey concurred with La Fontaine's assessment, although still believed that it was the largest Satanic group in the United Kingdom. He noted that most members were male, between the ages of twenty and fifty, and that—despite his expectation that they might be political extremists—they endorsed mainstream political positions, with all those whom he communicated with stating that they had voted for either the Conservative Party, Labour Party, or Liberal Democrats.
https://en.wikipedia.org/wiki?curid=31117
Tate Modern Tate Modern is a modern art gallery located in London. It is Britain's national gallery of international modern art and forms part of the Tate group (together with Tate Britain, Tate Liverpool, Tate St Ives and Tate Online). It is based in the former Bankside Power Station, in the Bankside area of the London Borough of Southwark. Tate holds the national collection of British art from 1900 to the present day and international modern and contemporary art. Tate Modern is one of the largest museums of modern and contemporary art in the world. As with the UK's other national galleries and museums, there is no admission charge for access to the collection displays, which take up the majority of the gallery space, while tickets must be purchased for the major temporary exhibitions. The gallery is a highly visited museum, with 5,868,562 visitors in 2018, making it the sixth-most visited art museum in the world, and the most visited in Britain. Tate Modern is housed in the former Bankside Power Station, which was originally designed by Sir Giles Gilbert Scott, the architect of Battersea Power Station, and built in two stages between 1947 and 1963. It is directly across the river from St Paul's Cathedral. The power station closed in 1981. Prior to redevelopment, the power station was a long, steel framed, brick clad building with a substantial central chimney standing . The structure was roughly divided into three main areas each running east–west – the huge main Turbine Hall in the centre, with the boiler house to the north and the switch house to the south. For many years after closure Bankside Power station was at risk of being demolished by developers. Many people campaigned for the building to be saved and put forward suggestions for possible new uses. An application to list the building was refused. In April 1994 the Tate Gallery announced that Bankside would be the home for the new Tate Modern. In July of the same year, an international competition was launched to select an architect for the new gallery. Jacques Herzog and Pierre de Meuron of Herzog & de Meuron were announced as the winning architects in January 1995. The £134 million conversion to the Tate Modern started in June 1995 and completed in January 2000. The most obvious external change was the two-story glass extension on one half of the roof. Much of the original internal structure remained, including the cavernous main turbine hall, which retained the overhead travelling crane. An electrical substation, taking up the Switch House in the southern third of the building, remained on-site and owned by the French power company EDF Energy while Tate took over the northern Boiler House for Tate Modern's main exhibition spaces. The history of the site as well as information about the conversion was the basis for a 2008 documentary "Architects Herzog and de Meuron: Alchemy of Building & Tate Modern". This challenging conversion work was carried by Carillion. Tate Modern was opened by the Queen on 11 May 2000. Tate Modern received 5.25 million visitors in its first year. The previous year the three existing Tate galleries had received 2.5 million visitors combined. Tate Modern had attracted more visitors than originally expected and plans to expand it had been in preparation since 2004. These plans focused on the south west of the building with the intention of providing 5,000 m2 of new display space, almost doubling the amount of display space. The southern third of the building was retained by the French State owned power company EDF Energy as an electrical substation. In 2006, the company released the western half of this holding and plans were made to replace the structure with a tower extension to the museum, initially planned to be completed in 2015. The tower was to be built over the old oil storage tanks, which would be converted to a performance art space. Structural, geotechnical, civil, and façade engineering and environmental consultancy was undertaken by Ramboll between 2008 and 2016. This project was initially costed at £215 million. Of the money raised, £50 million came from the UK government; £7 million from the London Development Agency; £6 million from philanthropist John Studzinski; and donations from, among others, the Sultanate of Oman and Elisabeth Murdoch. In June 2013, international shipping and property magnate Eyal Ofer pledged £10m to the extension project, making it to 85% of the required funds. Eyal Ofer, chairman of London-based Zodiac Maritime Agencies, said the donation made through his family foundation would enable "an iconic institution to enhance the experience and accessibility of contemporary art". The Tate director, Nicholas Serota, praised the donation saying it would help to make Tate Modern a "truly twenty-first-century museum". The first phase of the expansion involved the conversion of three large, circular, underground oil tanks originally used by the power station into accessible display spaces and facilities areas. These opened on 18 July 2012 and closed on 28 October 2012 as work on the tower building continued directly above. They reopened following the completion of the Switch House extension on 17 June 2016. Two of the Tanks are used to show live performance art and installations while the third provides utility space. Tate describes them as "the world's first museum galleries permanently dedicated to live art". A ten-storey tower, 65 metres high from ground level, was built above the oil tanks. The original western half of the Switch House was demolished to make room for the tower and then rebuilt around it with large gallery spaces and access routes between the main building and the new tower on level 1 (ground level) and level 4. The new galleries on level 4 have natural top lighting. A bridge built across the turbine hall on level 4 provides an upper access route. The new building opened to the public on 17 June 2016. The design, again by Herzog & de Meuron, has been controversial. It was originally designed with a glass stepped pyramid, but this was amended to incorporate a sloping façade in brick latticework (to match the original power-station building) despite planning consent to the original design having been previously granted by the supervising authority. The extension provides 22,492 square metres of additional gross internal area for display and exhibition spaces, performance spaces, education facilities, offices, catering and retail facilities as well as a car parking and a new external public space. In May 2017, the Switch House was formally renamed the Blavatnik Building, after Anglo-Ukrainian billionaire Sir Leonard Blavatnik, who contributed a "substantial" amount of the £260m cost of the extension. Sir Nicholas Serota commented "Len Blavatnik's enthusiastic support ensured the successful realisation of the project and I am delighted that the new building now bears his name". The collections in Tate Modern consist of works of international modern and contemporary art dating from 1900 until today. Levels 2, 3 and 4 contain gallery space. Each of those floors is split into a large east and west wing with at least 11 rooms in each. Space between these wings is also used for smaller galleries on levels 2 and 4. The Boiler House shows art from 1900 to the present day. The Switch House has eleven floors, numbered 0 to 10. Levels 0, 2, 3 and 4 contain gallery space. Level 0 consists of the Tanks, spaces converted from the power station's original fuel oil tanks, while all other levels are housed in the tower extension building constructed above them. The Switch House shows art from 1960 to the present day. The Turbine Hall is a single large space running the whole length of the building between the Boiler House and the Switch House. At six stories tall it represents the full height of the original power station building. It is cut by bridges between the Boiler House and the Switch House on levels 1 and 4 but the space is otherwise undivided. The western end consists of a gentle ramp down from the entrance and provides access to both sides on level 0. The eastern end provides a very large space that can be used to show exceptionally large artworks due its unusual height. The main collection displays consist of 8 areas with a named theme or subject. Within each area there are some rooms that change periodically showing different works in keeping with the overall theme or subject. The themes are changed less frequently. There is no admission charge for these areas. As of June 2016 the themed areas were: There is also an area dedicated to displaying works from the Artist Rooms collection. Since the Tate Modern first opened in 2000, the collections have not been displayed in chronological order but have been arranged thematically into broad groups. Prior to the opening of the Switch House there were four of these groupings at a time, each allocated a wing on levels 3 and 5 (now levels 2 and 4). The initial hanging from 2000 to 2006: The first rehang at Tate Modern opened in May 2006. It eschewed the thematic groupings in favour of focusing on pivotal moments of twentieth-century art. It also introduced spaces for shorter exhibitions in between the wings. The layout was: In 2012, there was a partial third rehang. The arrangement was: The Turbine hall, which once housed the electricity generators of the old power station, is five storeys tall with 3,400 square metres of floorspace. It is used to display large specially-commissioned works by contemporary artists, between October and March each year. From 2000 until 2012, the series was named after its corporate sponsor, Unilever. In this time the company provided £4.4m sponsorship in total including a renewal deal of £2.2m for a period of five years agreed in 2008. This series was planned to last the gallery's first five years, but the popularity of the series led to its extension until 2012. The artists who have exhibited commissioned work in the Turbine Hall as part of the Unilever series are: In 2013, Tate Modern signed a sponsorship deal worth around £5 million with Hyundai to cover a ten-year program of commissions, then considered the largest amount of money ever provided to an individual gallery or museum in the United Kingdom. The first commission for the Hyundai series is Mexican artist, Abraham Cruzvillegas. The artists who have exhibited commissioned work in the Turbine Hall as part of the Hyundai series thus far are: When there is no series running, the Turbine Hall is used for occasional events and exhibitions. In 2011 it was used to display Damien Hirst's "For The Love of God". A sell-out show by Kraftwerk in February 2013 crashed the ticket hotline and website, causing a backlash from the band's fans. In 2018 the Turbine Hall was used for two performances of Messiaen's "Et exspecto resurrectionem mortuorum" and Stockhausen's "Gruppen". Two wings of the Boiler House are used to stage the major temporary exhibitions for which an entry fee is charged. These exhibitions normally run for three or four months. When they were located on a single floor, the two exhibition areas could be combined to host a single exhibition. This was done for the Gilbert and George retrospective due to the size and number of the works. Currently the two wings used are on level 3. It is not known if this arrangement is permanent. Each major exhibition has a dedicated mini-shop selling books and merchandise relevant to the exhibition. A 2014 show of Henri Matisse provided Tate Modern with London's best-attended charging exhibition, and with a record 562,622 visitors overall, helped by a nearly five-month-long run. In 2018, Joan Jonas had a retrospective exhibition. The Tanks, located on level 0, are three large underground oil tanks, connecting spaces and side rooms originally used by the power station and refurbished for use by the gallery. One tank is used to display installation and video art specially commissioned for the space while smaller areas are used to show installation and video art from the collection. The Tanks have also been used as a venue for live music. The Project Space (formerly known as the Level 2 Gallery) was a smaller gallery located on the north side of the Boiler House on level 1 which housed exhibitions of contemporary art in collaboration with other international art organisations. Its exhibitions typically ran for 2–3 months and then travelled to the collaborating institution for display there. The space was only accessible by leaving the building and re-entering using a dedicated entrance. It is no longer used as gallery space. Works are also sometimes shown in the restaurants and members' rooms. Other locations that have been used in the past include the mezzanine on Level 1 and the north facing exterior of the Boiler House building. In addition to exhibition space there are a number of other facilities: The closest station is Blackfriars via its new south entrance. Other nearby stations include Southwark, as well as St Paul's and Mansion House north of the river which can be reached via the Millennium Bridge. The lampposts between Southwark tube station and Tate Modern are painted orange to show pedestrian visitors the route. There is also a riverboat pier just outside the gallery called Bankside Pier, with connections to the Docklands and Greenwich via regular passenger boat services (commuter service) and the Tate to Tate service, which connects Tate Modern with Tate Britain. To the west of Tate Modern lie the sleek stone and glass Ludgate House, the former headquarters of Express Newspapers and Sampson House, a massive late Brutalist office building. Frances Morris' appointment as director was announced in January 2016. Since 2010 there have been 14 protest art performances by the art collective "Liberate Tate" demanding the Tate to "disengage from BP as a sponsor, and stop allowing Tate to be used to deflect attention away from the devastating impacts that BP has around the world." BP is criticised for operations in relation with Petroleum exploration in the Arctic, the Deepwater Horizon oil spill, Oil sands and climate change. The artists involved in the protests are referring to a deal between BP and the Tate: BP pays £224,000 a year to the Tate. The Tate presents the brand BP in return. In June 2015 a group of artists occupied Tate Modern for 25 hours.
https://en.wikipedia.org/wiki?curid=31119
Thomas Gainsborough Thomas Gainsborough (14 May 1727 (baptised) – 2 August 1788) was an English portrait and landscape painter, draughtsman, and printmaker. Along with his rival Sir Joshua Reynolds, he is considered one of the most important British artists of the second half of the 18th century. He painted quickly, and the works of his maturity are characterised by a light palette and easy strokes. Despite being a prolific portrait painter, Gainsborough gained greater satisfaction from his landscapes. He is credited (with Richard Wilson) as the originator of the 18th-century British landscape school. Gainsborough was a founding member of the Royal Academy. He was born in Sudbury, Suffolk, the youngest son of John Gainsborough, a weaver and maker of woolen goods, and his wife, the sister of the Reverend Humphry Burroughs. One of Gainsborough's brothers, Humphrey, had a faculty for mechanics and was said to have invented the method of condensing steam in a separate vessel, which was of great service to James Watt; another brother, John, was known as "Scheming Jack" because of his passion for designing curiosities. The artist spent his childhood at what is now Gainsborough's House, on Gainsborough Street, Sudbury. He later resided there, following the death of his father in 1748 and before his move to Ipswich. The building still survives and is now a house-museum dedicated to his life and art. When he was still a boy he impressed his father with his drawing and painting skills, and he almost certainly had painted heads and small landscapes by the time he was ten years old, including a miniature self-portrait. Gainsborough was allowed to leave home in 1740 to study art in London, where he trained under engraver Hubert Gravelot but became associated with William Hogarth and his school. He assisted Francis Hayman in the decoration of the supper boxes at Vauxhall Gardens, and contributed one image to the decoration of what is now the Thomas Coram Foundation for Children. In 1746, Gainsborough married Margaret Burr, an illegitimate daughter of the Duke of Beaufort, who had settled a £200 annuity on her. The artist's work, then mostly consisting of landscape paintings, was not selling well. He returned to Sudbury in 1748–1749 and concentrated on painting portraits. While still in Suffolk, Gainsborough painted a portrait of the Rev. John Chaffy Playing a Violoncello in a Landscape, c.1750-52. Tate Gallery, London. In 1752, he and his family, now including two daughters, moved to Ipswich. Commissions for portraits increased, but his clientele included mainly local merchants and squires. He had to borrow against his wife's annuity. Towards the end of his time in Ipswich, he painted a self-portrait, now in the permanent collection of the National Portrait Gallery, London. In 1759, Gainsborough and his family moved to Bath, living at number 17 The Circus. There, he studied portraits by van Dyck and was eventually able to attract a fashionable clientele. In 1761, he began to send work to the Society of Arts exhibition in London (now the Royal Society of Arts, of which he was one of the earliest members); and from 1769 he submitted works to the Royal Academy's annual exhibitions. The exhibitions helped him enhance his reputation, and he was invited to become a founding member of the Royal Academy in 1769. His relationship with the Academy was not an easy one and he stopped exhibiting his paintings in 1773. Despite Gainsborough's increasing popularity and success in painting portraits for fashionable society, he expressed frustration during his Bath period at the demands of such work and that it prevented him from pursuing his preferred artistic interests. In a letter to a friend in the 1760s Gainsborough wrote: "I'm sick of Portraits and wish very much to take my Viol da Gamba and walk off to some sweet Village where I can paint Landskips [landscapes] and enjoy the fag End of Life in quietness and ease". Of the men he had to deal with as patrons and admirers, and their pretensions, he wrote:... damn Gentlemen, there is not such a set of Enemies to a real artist in the world as they are, if not kept at a proper distance. They think ... that they reward your merit by their Company & notice; but I ... know that they have but one part worth looking at, and that is their Purse; their Hearts are seldom near enough the right place to get a sight of it.Gainsborough was so keen a viol da gamba player that he had at this stage five of the instruments, three made by Henry Jaye and two by Barak Norman. In 1774, Gainsborough and his family moved to London to live in Schomberg House, Pall Mall. A commemorative blue plaque was put on the house in 1951. In 1777, he again began to exhibit his paintings at the Royal Academy, including portraits of contemporary celebrities, such as the Duke and Duchess of Cumberland. Exhibitions of his work continued for the next six years. About this time, Gainsborough began experimenting with printmaking using the then-novel techniques of aquatint and soft-ground etching. During the 1770s and 1780s Gainsborough developed a type of portrait in which he integrated the sitter into the landscape. An example of this is his portrait of Frances Browne, Mrs John Douglas (1746–1811) which can be seen at Waddesdon Manor. The sitter has withdrawn to a secluded and overgrown corner of a garden to read a letter, her pose recalling the traditional representation of Melancholy. Gainsborough emphasised the relationship between Mrs Douglas and her environment by painting the clouds behind her and the drapery billowing across her lap with similar silvery violet tones and fluid brushstrokes. This portrait was included in his first private exhibition at Schomberg House in 1784. In 1776, Gainsborough painted a portrait of Johann Christian Bach, the youngest son of Johann Sebastian Bach. Bach's former teacher Padre Martini of Bologna, Italy, was assembling a collection of portraits of musicians, and Bach asked Gainsborough to paint his portrait as part of this collection. The portrait now hangs in the National Portrait Gallery in London. In 1780, he painted the portraits of King George III and Queen Charlotte and afterwards received other royal commissions. In 1784, Principal Painter in Ordinary Allan Ramsay died and the King was obliged to give the job to Gainsborough's rival and Academy president, Joshua Reynolds. Gainsborough remained the Royal Family's favorite painter, however. In his later years, Gainsborough often painted landscapes. With Richard Wilson, he was one of the originators of the eighteenth-century British landscape school; though simultaneously, in conjunction with Reynolds, he was the dominant British portraitist of the second half of the 18th century. William Jackson in his contemporary essays said of him "to his intimate friends he was sincere and honest and that his heart was always alive to every feeling of honour and generosity". Gainsborough did not particularly enjoy reading but letters written to his friends were penned in such an exceptional conversational manner that the style could not be equalled. As a letter writer Henry Bate-Dudley said of him "a selection of his letters would offer the world as much originality and beauty as is ever traced in his paintings". In the 1780s, Gainsborough used a device he called a "Showbox" to compose landscapes and display them backlit on glass. The original box is on display in the Victoria & Albert Museum with a reproduction transparency. He died of cancer on 2 August 1788 at the age of 61. According to his daughter Peggy, his last words were "van Dyck". He is interred in the churchyard St. Anne's Church, Kew, Surrey, (located on Kew Green). It was his express wish to be buried near his friend Joshua Kirby. Later his wife and nephew, Gainsborough Dupont, were interred with him. Coincidentally Johan Zoffany and Franz Bauer are also buried in the graveyard. As of 2011, an appeal is underway to pay the costs of restoration of his tomb. A street in Kew, Gainsborough Road, is named after him. The art historian Michael Rosenthal described Gainsborough as "one of the most technically proficient and, at the same time, most experimental artists of his time". He was noted for the speed with which he applied paint, and he worked more from observations of nature (and of human nature) than from application of formal academic rules. The poetic sensibility of his paintings caused Constable to say, "On looking at them, we find tears in our eyes and know not what brings them." Gainsborough's enthusiasm for landscapes is shown in the way he merged figures of the portraits with the scenes behind them. His landscapes were often painted at night by candlelight, using a tabletop arrangement of stones, pieces of mirrors, broccoli, and the like as a model. His later work was characterised by a light palette and easy, economical strokes. Gainsborough's only known assistant was his nephew, Gainsborough Dupont. In the last year of his life he collaborated with John Hoppner in painting a full-length portrait of Lady Charlotte Talbot. His most famous works, "Portrait of Mrs. Graham"; "Mary and Margaret: The Painter's Daughters"; "William Hallett and His Wife Elizabeth, nee Stephen", known as "The Morning Walk"; and "Cottage Girl with Dog and Pitcher", display the unique individuality of his subjects. Joshua Reynolds considered "Girl with Pigs" "the best picture he (Gainsborough) ever painted or perhaps ever will". Gainsborough's works became popular with collectors from the 1850s on, after Lionel de Rothschild began buying his portraits. The rapid rise in the value of pictures by Gainsborough and also by Reynolds in the mid 19th century was partly because the Rothschild family, including Ferdinand de Rothschild began collecting them. In 2011, Gainsborough's portrait of "Miss Read" (Mrs Frances Villebois) was sold by Michael Pearson, 4th Viscount Cowdray, for a record price of £6.54M, at Christie’s in London. She was a matrilineal descendant of Cecily Neville, Duchess of York.
https://en.wikipedia.org/wiki?curid=31126
Trust Territory of the Pacific Islands The Trust Territory of the Pacific Islands (TTPI) was a United Nations trust territory in Micronesia administered by the United States from 1947 to 1994. Spain initially claimed the islands that later comprised the territory of the Trust Territory of the Pacific Islands (TTPI). Subsequently, Germany established competing claims over the islands. The competing claims were eventually resolved in favor of Germany when Spain, following its loss of several possessions to the United States during the Spanish–American War, ceded its claims over the islands to Germany pursuant to the German–Spanish Treaty (1899). Germany, in turn, continued to retain possession until the islands were captured by Japan during World War I. The League of Nations formally placed the islands in the former South Seas Mandate, a mandate that authorized Japanese administration of the islands. The islands then remained under Japanese control until captured by the United States in 1944 during World War II. The TTPI entered UN trusteeship pursuant to Security Council Resolution 21 on July 18, 1947, and was designated a "strategic area" in its 1947 trusteeship agreement. Article 83 of the UN Charter provided that, as such, its formal status as a UN trust territory could be terminated only by the Security Council, and not by the General Assembly as with other trust territories. The United States Navy controlled the TTPI from a headquarters in Guam until 1951, when the United States Department of the Interior took over control, administering the territory from a base in Saipan. The Territory contained 100,000 people scattered over a water area the size of continental United States. It was subdivided into six districts, and represented a variety of cultures, with nine spoken languages. The Ponapeans and Kusaieans, Marshallese and Palauans, Trukese, Yapese and Chamorros had little in common, except they were in the same general area of the Pacific Ocean. The large distances between people, lack of an economy, language and cultural barriers, all worked against the union. The six district centers became upscale slums, containing deteriorated Japanese-built roads, with electricity, modern music and distractions, which led to alienated youth and elders. The remainder of the islands maintained their traditional way of life and infrastructure. A Congress of Micronesia first levied an income tax in 1971. It affected mainly foreigners working at military bases in the region. On October 21, 1986, the U.S. ended its administration of the Marshall Islands District. The termination of U.S. administration of the Chuuk, Yap, Kosrae, Pohnpei, and the Mariana Islands districts of the TTPI soon followed on November 3, 1986. The Security Council formally ended the trusteeship for the Chuuk, Yap, Kosrae, Pohnpei, Mariana Islands, and Marshall Islands districts on December 22, 1990 pursuant to Security Council Resolution 683. On May 25, 1994, the Council ended the trusteeship for the Palau District pursuant to Security Council Resolution 956, after which the U.S. and Palau agreed to establish the latter's independence on October 1. In 1969, the 100 occupied islands comprised over an area of of sea. The latter area was comparable in size to the continental United States. The water area is about 5% of the Pacific Ocean. The population of the islands was 200,000 in the latter part of the 19th century. The population decreased to 100,000 by 1969 due to emigration, war, and disease. At that time, the population inhabited less than 100 out of 2,141 of the Marshall, Mariana, and Caroline Islands. In 1947 the Mariana Islands' Teacher Training School (MITTS), a normal school serving all areas of the Trust Territory, opened in Guam. It moved to Chuuk in 1948, to be more central in the Trust Territory, and was renamed Pacific Islands' Teacher Training School (PITTS). It transitioned from being a normal school to a comprehensive secondary school, so it was renamed the Pacific Islands Central School (PICS). The school moved to Pohnpei in 1959. At the time it was a three-year institution housing students who graduated from intermediate schools. The school, later known as Pohnpei Island Central School (PICS), is now Bailey Olter High School. Palau Intermediate School, established in 1946, became Palau High School in 1962 as it added senior high grades. From the late 1960s to the middle of the 1970s, several public high schools were built and/or received additions in the Trust Territory. They included Jaluit High School, Kosrae High School, Marshall Islands High School in Majuro, Palau High, PICS, and Truk High School (now Chuuk High School). The Micronesian Occupational College in Koror, Palau was also built. It later merged with the Kolonia-based Community College of Micronesia, which began operations in 1969, into the College of Micronesia-FSM in 1976. The former area is now (2020) divided into four jurisdictions: The following sovereign states have become freely associated with the United States under the Compact of Free Association (COFA).
https://en.wikipedia.org/wiki?curid=31127
Theobromine Theobromine, formerly known as xantheose, is a bitter alkaloid of the cacao plant, with the chemical formula C7H8N4O2. It is found in chocolate, as well as in a number of other foods, including the leaves of the tea plant, and the kola nut. It is classified as a xanthine alkaloid, others of which include theophylline and caffeine. Caffeine differs from the compounds in that it has an extra methyl group (see under Pharmacology section). Despite its name, the compound contains no bromine—"theobromine" is derived from "Theobroma", the name of the genus of the cacao tree (which itself is made up of the Greek roots "theo" ("god") and "broma" ("food"), meaning "food of the gods") with the suffix "-ine" given to alkaloids and other basic nitrogen-containing compounds. Theobromine is a slightly water-soluble (330 mg/L), crystalline, bitter powder. Theobromine is white or colourless, but commercial samples can be yellowish. It has an effect similar to, but lesser than, that of caffeine in the human nervous system, making it a lesser homologue. Theobromine is an isomer of theophylline, as well as paraxanthine. Theobromine is categorized as a dimethyl xanthine. Theobromine was first discovered in 1841 in cacao beans by Russian chemist Aleksandr Voskresensky. Synthesis of theobromine from xanthine was first reported in 1882 by Hermann Emil Fischer. Theobromine is the primary alkaloid found in cocoa and chocolate. Cocoa powder can vary in the amount of theobromine, from 2% theobromine, up to higher levels around 10%. Cocoa butter only contains trace amounts of theobromine. There are usually higher concentrations in dark than in milk chocolate. Theobromine can also be found in trace amounts in the kola nut , the guarana berry, yerba mate ("Ilex paraguariensis"), Ilex vomitoria, Ilex guayusa, and the tea plant. of milk chocolate contains approximately of theobromine, while the same amount of dark chocolate contains about . Cocoa beans naturally contain approximately 1% theobromine. Plant species and components with substantial amounts of theobromine are: The mean theobromine concentrations in cocoa and carob products are: Theobromine is a purine alkaloid derived from xanthosine, a nucleoside. Cleavage of the ribose and N-methylation yields 7-methylxanthosine. 7-Methylxanthosine in turn is the precursor to theobromine, which in turn is the precursor to caffeine. Even without dietary intake, theobromine may occur in the body as it is a product of the human metabolism of caffeine, which is metabolised in the liver into 12% theobromine, 4% theophylline, and 84% paraxanthine. In the liver, theobromine is metabolized into xanthine and subsequently into methyluric acid. Important enzymes include CYP1A2 and CYP2E1. "The main mechanism of action for methylxanthines has long been established as an inhibition of adenosine receptors". Its effect as a phosphodiesterase inhibitor is thought to be small. It is not currently used as a prescription drug. The amount of theobromine found in chocolate is small enough that chocolate can, in general, be safely consumed by humans. At doses of 0.8–1.5 g/day (50–100 g cocoa), sweating, trembling and severe headaches were noted, with limited mood effects found at 250 mg/day. Theobromine and caffeine are similar in that they are related alkaloids. Theobromine is weaker in both its inhibition of cyclic nucleotide phosphodiesterases and its antagonism of adenosine receptors. The potential inhibitory effect of theobromine on phosphodiesterases is seen only at amounts much higher than what people normally would consume in a typical diet including chocolate. Animals that metabolize theobromine (found in chocolate) more slowly, such as dogs, can succumb to theobromine poisoning from as little as of milk chocolate for a smaller dog and , or around nine small milk chocolate bars, for an average-sized dog. The concentration of theobromine in dark chocolates (approximately ) is up to 10 times that of milk chocolate () – meaning dark chocolate is far more toxic to dogs per unit weight or volume than milk chocolate. The same risk is reported for cats as well, although cats are less likely to ingest sweet food, with most cats having no sweet taste receptors. Complications include digestive issues, dehydration, excitability, and a slow heart rate. Later stages of theobromine poisoning include epileptic-like seizures and death. If caught early on, theobromine poisoning is treatable. Although not common, the effects of theobromine poisoning can be fatal. In 2014, four American black bears were found dead at a bait site in New Hampshire. A necropsy and toxicology report performed at the University of New Hampshire in 2015 confirmed they died of heart failure caused by theobromine after they consumed of chocolate and doughnuts placed at the site as bait. A similar incident killed a black bear cub in Michigan in 2011.
https://en.wikipedia.org/wiki?curid=31128
Thuringia Thuringia (; ), officially the Free State of Thuringia ( ), is a state of Germany. Thuringia is located in central Germany covering an area of and a population of 2.15 million inhabitants, making it the sixth smallest German state by area and the fifth smallest by population. Erfurt is the state capital and largest city, while other major cities include Jena, Gera, and Weimar. Thuringia is surrounded by the states of Bavaria, Hesse, Lower Saxony, Saxony-Anhalt, and Saxony. Most of Thuringia is within the watershed of the Saale, a left tributary of the Elbe, and has been known as "the green heart of Germany" () from the late 19th century due to the dense forest covering the land. Thuringia is home to the Rennsteig, Germany's best-known hiking trail, and the winter resort of Oberhof, making it a well-known winter sports destination with half of Germany's 136 Winter Olympic gold medals won through 2014 having been won by Thuringian athletes. Thuringia has also been home to prominent German intellectuals and creative artists, including Johann Sebastian Bach, Johann Wolfgang von Goethe, and Friedrich Schiller, and is the location of the University of Jena, the Ilmenau University of Technology, the University of Erfurt, and the Bauhaus University of Weimar. Thuringia was established in 1920 as a state of the Weimar Republic from a merger of the Ernestine duchies, except for Saxe-Coburg, but can trace its origins to the Frankish Duchy of Thuringia established around 631 AD by King Dagobert I. After World War II, Thuringia came under the Soviet occupation zone in Allied-occupied Germany, and its borders altered to become contiguous. Thuringia became part of the German Democratic Republic in 1949, but was dissolved in 1952 during administrative reforms, and its territory divided into the districts of Erfurt, Suhl and Gera. Thuringia was re-established in 1990 following German reunification, with slightly different borders, and became one of the Federal Republic of Germany's new states. The name "Thuringia" or "Thüringen" derives from the Germanic tribe Thuringii, who emerged during the Migration Period. Their origin is largely unknown. An older theory claims that they were successors of the Hermunduri, but later research rejected the idea. Other historians argue that the Thuringians were allies of the Huns, came to central Europe together with them, and lived before in what is Galicia today. Publius Flavius Vegetius Renatus first mentioned the Thuringii around 400; during that period, the Thuringii were famous for their excellent horses. The Thuringian Realm existed until after 531, the Landgraviate of Thuringia was the largest state in the region, persisting between 1131 and 1247. Afterwards the state known as Thuringia ceased to exist; nevertheless the term commonly described the region between the Harz mountains in the north, the White Elster river in the east, the Franconian Forest in the south and the Werra river in the west. After the Treaty of Leipzig, Thuringia had its own dynasty again, the Ernestine Wettins. Their various lands formed the Free State of Thuringia, founded in 1920, together with some other small principalities. The Prussian territories around Erfurt, Mühlhausen and Nordhausen joined Thuringia in 1945. The coat of arms of Thuringia shows the lion of the Ludowingian Landgraves of 12th-century origin. The eight stars around it represent the eight former states which formed Thuringia. The flag of Thuringia is a white-red bicolor, derived from the white and red stripes of the Ludowingian lion. The coat of arms and flag of Hesse are quite similar to the Thuringian ones, since they are also derived from the Ludowingian symbols. Symbols of Thuringia in popular culture are the "Bratwurst" and the Forest, because a large amount of the territory is forested. Named after the Thuringii tribe who occupied it around AD 300, Thuringia came under Frankish domination in the 6th century. Thuringia became a landgraviate in 1130 AD. After the extinction of the reigning Ludowingian line of counts and landgraves in 1247 and the War of the Thuringian Succession (1247–1264), the western half became independent under the name of "Hesse", never to become a part of Thuringia again. Most of the remaining Thuringia came under the rule of the Wettin dynasty of the nearby Margraviate of Meissen, the nucleus of the later Electorate and Kingdom of Saxony. With the division of the house of Wettin in 1485, Thuringia went to the senior Ernestine branch of the family, which subsequently subdivided the area into a number of smaller states, according to the Saxon tradition of dividing inheritance amongst male heirs. These were the "Saxon duchies", consisting, among others, of the states of Saxe-Weimar, Saxe-Eisenach, Saxe-Jena, Saxe-Meiningen, Saxe-Altenburg, Saxe-Coburg, and Saxe-Gotha. Thuringia generally accepted the Protestant Reformation, and Roman Catholicism was suppressed as early as 1520; priests who remained loyal to it were driven away and churches and monasteries were largely destroyed, especially during the German Peasants' War of 1525. In Mühlhausen and elsewhere, the Anabaptists found many adherents. Thomas Müntzer, a leader of some non-peaceful groups of this sect, was active in this city. Within the borders of modern Thuringia the Roman Catholic faith only survived in the Eichsfeld district, which was ruled by the Archbishop of Mainz, and to a small degree in Erfurt and its immediate vicinity. The modern German black-red-gold tricolour flag's first appearance anywhere in a German-ethnicity sovereign state, within what today comprises Germany, occurred in 1778 as the state flag of the Principality of Reuss-Greiz, a principality whose lands were located within modern Thuringian borders. Some reordering of the Thuringian states occurred during the German Mediatisation from 1795 to 1814, and the territory was included within the Napoleonic Confederation of the Rhine organized in 1806. The 1815 Congress of Vienna confirmed these changes and the Thuringian states' inclusion in the German Confederation; the Kingdom of Prussia also acquired some Thuringian territory and administered it within the Province of Saxony. The Thuringian duchies which became part of the German Empire in 1871 during the Prussian-led unification of Germany were Saxe-Weimar-Eisenach, Saxe-Meiningen, Saxe-Altenburg, Saxe-Coburg-Gotha, Schwarzburg-Sondershausen, Schwarzburg-Rudolstadt and the two principalities of Reuss Elder Line and Reuss Younger Line. In 1920, after World War I, these small states merged into one state, called Thuringia; only Saxe-Coburg voted to join Bavaria instead. Weimar became the new capital of Thuringia. The coat of arms of this new state was simpler than those of its predecessors. In 1930 Thuringia was one of the free states where the Nazis gained real political power. Wilhelm Frick was appointed Minister of the Interior for the state of Thuringia after the Nazi Party won six delegates to the Thuringia Diet. In this position he removed from the Thuringia police force anyone he suspected of being a republican and replaced them with men who were favourable towards the Nazi Party. He also ensured that whenever an important position came up within Thuringia, he used his power to ensure that a Nazi was given that post. After being controlled briefly by the US, from July 1945, the state of Thuringia came under the Soviet occupation zone, and was expanded to include parts of Prussian Saxony, such as the areas around Erfurt, Mühlhausen, and Nordhausen. Erfurt became the new capital of Thuringia. Ostheim, an exclave of "Landkreis" (roughly equivalent to a county in the English-speaking world) Eisenach, was ceded to Bavaria. In 1952, the German Democratic Republic dissolved its states, and created districts () instead. The three districts that shared the former territory of Thuringia were Erfurt, Gera and Suhl. Altenburg Kreis was part of Leipzig Bezirk. The State of Thuringia was recreated with slightly altered borders during German reunification in 1990. From the northwest going clockwise, Thuringia borders on the German states of Lower Saxony, Saxony-Anhalt, Saxony, Bavaria and Hesse. The landscapes of Thuringia are quite diverse. The far north is occupied by the Harz mountains, followed by the Goldene Aue, a fertile floodplain around Nordhausen with the Helme as most important river. The north-west includes the Eichsfeld, a hilly and sometimes forested region, where the Leine river emanates. The central and northern part of Thuringia is defined by the 3000 km² wide Thuringian Basin, a very fertile and flat area around the Unstrut river and completely surrounded by the following hill chains (clockwise from the north-west): Dün, Hainleite, Windleite, Kyffhäuser, Hohe Schrecke, Schmücke, Finne, Ettersberg, Steigerwald, Thuringian Forest, Hörselberge and Hainich. Within the Basin the smaller hill chains Fahner Höhe and Heilinger Höhen. South of the Thuringian Basin is the Land's largest mountain range, marked by the Thuringian Forest in the north-west, the Thuringian Highland in the middle and the Franconian Forest in the south-east. Most of this range is forested and the Großer Beerberg (983 m) is Thuringia's highest mountain. To the south-west, the Forest is followed up by Werra river valley, dividing it from the Rhön Mountains in the west and the Grabfeld plain in the south. Eastern Thuringia, commonly described as the area east of Saale and Loquitz valley, is marked by a hilly landscape, rising slowly from the flat north to the mountainous south. The Saale in the west and the White Elster in the east are the two big rivers running from south to north and forming densely settled valleys in this area. Between them lies the flat and forested Holzland in the north, the flat and fertile Orlasenke in the middle and the Vogtland, a hilly but in most parts non-forested region in the south. The far eastern region (east of White Elster) is the Osterland or Altenburger Land along Pleiße river, a flat, fertile and densely settled agricultural area. There are two large rivers in Thuringia. The Saale, a tributary of the Elbe, with its tributaries the Unstrut, Ilm and White Elster, drains the most part of Thuringia. The Werra - the headwater of the Weser - drains the south-west and west of the Land. Furthermore, some small areas on the southern border are drained by tributaries of the Main, itself a tributary of the Rhine. There are no large natural lakes in Thuringia, but it does have some of Germany's biggest dams, including the Bleiloch Dam and the Hohenwarte Dam on the River Saale, as well as the Leibis-Lichte Dam and the Goldisthal Pumped Storage Station in the Thuringian Highlands. Thuringia is Germany's only state with no connection to navigable waterways. The geographic centre of the Federal Republic is located in Thuringia, within the municipality of Vogtei next to Mühlhausen. The centre of Thuringia is located only eight kilometres south of the capital's Cathedral, in the municipality of Rockhausen. Thuringia's climate is temperate with humid westerly winds predominating. Increasingly as one moves from the north-west to the south-east, the climate shows continental features: winters can be cold for long periods, and summers can become warm. Dry periods are often recorded, especially within the Thuringian Basin, situated leeward to mountains in all directions. It is Germany's driest area, with annual precipitation of only 400 to 500 mm. Artern, in the north-east, is warm and dry, with a mean annual temperature of 8.5 °C and mean precipitation of 450 mm; contrast this with wet, cool Oberhof, in the Thuringian Forest, where temperature averages only 4.4 °C and mean annual precipitation reaches 1300 mm. Due to many centuries of intensive settlement, most of the area is shaped by human influence. The original natural vegetation of Thuringia is forest with beech as its predominant species, as can still be found in the Hainich mountains today. In the uplands, a mixture of beech and spruce would be natural. However, most of the plains have been cleared and are in intensive agricultural use while most of the forests are planted with spruce and pine. Since 1990, Thuringia's forests have been managed aiming for a more natural and tough vegetation more resilient to climate change as well as diseases and vermin. In comparison to the forest, agriculture is still quite conventional and dominated by large structures and monocultures. Problems here are caused especially by increasingly prolonged dry periods during the summer months. Environmental damage in Thuringia has been reduced to a large extent after 1990. The condition of forests, rivers and air was improved by modernizing factories, houses (decline of coal heating) and cars, and contaminated areas such as the former Uranium surface mines around Ronneburg have been remediated. Today's environmental problems are the salination of the Werra river, caused by discharges of K+S salt mines around Unterbreizbach and overfertilisation in agriculture, damaging the soil and small rivers. Environment and nature protection has been of growing importance and attention since 1990. Large areas, especially within the forested mountains, are protected as natural reserves, including Thuringia's first national park within the Hainich mountains, founded in 1997, the Rhön Biosphere Reserve, the Thuringian Forest Nature Park and the South Harz Nature Park. During the Middle Ages, Thuringia was situated at the border between Germanic and Slavic territories, marked by the Saale river. The main Slavic tribe in what is now Thuringia were the Sorbs proper, who unified all tribes in what is now southern half of Eastern Germany. The Ostsiedlung movement led to the assimilation of Slavic people between the 11th and the 13th century under German rule. The population growth increased during the 18th century and stayed high until World War I, before it slowed within the 20th century and changed to a decline since 1990. Since the beginning of Urbanisation around 1840, the Thuringian cities have higher growth rates resp. smaller rates of decline than rural areas (many villages lost half of their population since 1950, whereas the biggest cities (Erfurt and Jena) kept growing). Largest migrant communities by 31.12.2017 The current population is 2,170,000 (in 2012) with an annual rate of decrease of about 0.5%, which varies widely between the local regions. In 2012, 905,000 Thuringians lived in a municipality with more than 20,000 inhabitants, this is an urbanization rate of 42% which continues to rise. In July 2013, there were 41,000 non-Germans by citizenship living in Thuringia (1.9% of the population − among the smallest proportions of any state in Germany). Nevertheless, the number rose from 33,000 in July 2011, an increase of 24% in only two years. About 4% of the population are migrants (including persons that already received the German citizenship). The biggest groups of foreigners by citizenship are (as of 2012): Russians (3,100), Poles (3,000), Turks (2,100) and Ukrainians (2,000). The number of foreigners varies between regions: the college towns Erfurt, Jena, Weimar and Ilmenau have the highest rates, whereas there are almost no migrants living in the most rural smaller municipalities. The Thuringian population has a significant sex ratio gap, caused by the emigration of young women, especially in rural areas. Overall, there are 115 to 120 men per 100 women in the 25–40 age group ("family founders") which has negative consequences for the birth ratio. Furthermore, the population is getting older and older with some rural municipalities recording more than 30% of over-65s (pensioners). This is a problem for the regional labour market, as there are twice as many people leaving as entering the job market annually. The birth rate was about 1.8 children per women in the 1970s and 1980s, shrinking to 0.8 in 1994 during the economic crisis after the reunification and rose again to more than 1.4 children in 2010, which is a higher level than in West Germany. Nevertheless, there are only 17,000 births compared to 27,000 deaths per year, so that the annual natural change of the Thuringian population is about −0.45%. In 2015 there were 17.934 births, the highest number since 1990. Migration plays an important role in Thuringia. The internal migration shows a strong tendency from rural areas towards the big cities. From 2008 to 2012, there was a net migration from Thuringia to Erfurt of +6,700 persons (33 per 1000 inhabitants), +1,800 to Gera (19 per 1000), +1,400 to Jena (14 per 1000), +1,400 to Eisenach (33 per 1000) and +1,300 to Weimar (21 per 1000). Between Thuringia and the other German states, the balance is negative: In 2012, Thuringia lost 6,500 persons to other federal states, the most to Bavaria, Saxony, Hesse and Berlin. Only with Saxony-Anhalt and Brandenburg the balance is positive. The international migration is fluctuating heavily. In 2009, the balance was +700, in 2010 +1,800, in 2011 +2,700 and in 2012 +4,800. The most important countries of origin of the Thuringia migrants from 2008 to 2012 were Poland (+1,700), Romania (+1,200), Afghanistan (+1,100) and Serbia/Montenegro/Kosovo (+1,000), whereas the balance was negative with Switzerland (−2,800) and Austria (−900). Of the approximately 850 municipalities of Thuringia, 126 are classed as towns (within a district) or cities (forming their own urban district). Most of the towns are small with a population of less than 10,000; only the ten biggest ones have a population greater than 30,000. The first towns emerged during the 12th century, whereas the latest ones received town status only in the 20th century. Today, all municipalities within districts are equal in law, whether they are towns or villages. Independent cities (i.e. urban districts) have greater powers (the same as any district) than towns within a district. * Average annual change in percent within the last three years (2009-12-31 until 2012-12-31), adjusted from incorporations and the 2011 Census results. Since the Protestant Reformation, the most prominent Christian denomination in Thuringia has been Lutheranism. During the GDR period, church membership was discouraged and has continued shrinking since the reunification in 1990. Today over two thirds of the population is non-religious. The Protestant Evangelical Church in Germany has had the largest number of members in the state, adhered to by 20.8% of the population in 2018. Members of the Catholic Church formed 7.6% of the population, while 71.6% of Thuringians were non-religious or adhere to other faiths. The highest Protestant concentrations are in the small villages of southern and western Thuringia, whereas the bigger cities are even more non-religious (up to 88% in Gera). Catholic regions are Eichsfeld in the northwest and parts of the Rhön Mountains around Geisa in the southwest. Protestant church membership is shrinking rapidly, whereas the Catholic Church is somewhat more stable because of Catholic migration from Poland, Southern Europe and West Germany. Other religions play no significant role in Thuringia. There are only a few thousand Muslims (largely migrants) and about 750 Jews (mostly migrants from Russia) living in Thuringia. Furthermore, there are some Orthodox communities of Eastern European migrants and some traditional Protestant Free churches in Thuringia without any societal influence. The Protestant parishes of Thuringia belong to the Evangelical Church in Central Germany or to the Evangelical Church of Hesse Electorate-Waldeck (Schmalkalden region). Catholic dioceses are Erfurt (most of Thuringia), Dresden-Meissen (eastern parts) and Fulda (Rhön around Geisa in the very west). Summary of the 27 October 2019 preliminary election results for the Landtag of Thuringia ! colspan="2" | Party ! Votes ! +/- ! Seats ! Seats % ! colspan=8| ! align="right" colspan=2| Valid votes ! align="right" | 1,108,388 ! align="right" | 98.8 ! align="right" | 0.2 ! align="right" | ! align="right" | ! align="right" | ! align="right" colspan=2| Blank and invalid votes ! align="right" | 13,426 ! align="right" | 1.2 ! align="right" | 0.2 ! align="right" | ! align="right" | ! align="right" | ! align="right" colspan=2| Total ! align="right" | 1,121,814 ! align="right" | 100.0 ! align="right" | ! align="right" | 90 ! align="right" | 1 ! align="right" | ! colspan="2" | Electorate / voter turnout !1,729,242 !64.9 !12.2 ! Thuringia is divided into 17 districts ("Landkreise"): Furthermore, there are six urban districts, indicated on the map by letters: Thuringia's economy is marked by the economic transition that happened after the German reunification and led to the closure of most of the factories within the Land. The unemployment rate reached a peak in 2005. Since that year, the economy has seen an upturn and the general economic situation has improved. Agriculture and forestry have declined in importance over the decades. Nevertheless, they are more important than in the most other areas of Germany, especially within rural regions. 54% of Thuringia's territory is in agricultural use. The fertile basins such as the large Thuringian Basin or the smaller Goldene Aue, Orlasenke and Osterland are in intensive use for growing cereals, vegetables, fruits and energy crops. Important products are apples, strawberries, cherries and plums in the fruit sector, cabbage, potatoes, cauliflower, tomatoes (grown in greenhouses), onions, cucumbers and asparagus in the vegetable sector, as well as maize, rapeseed, wheat, barley and sugar beets in the crop sector. Meat production and processing is also an important activity, with swine, cattle, chickens and turkeys in focus. Furthermore, there are many milk and cheese producers, as well as laying hens. Trout and carp are traditionally bred in aquaculture in many villages. Most agricultural enterprises are large cooperatives, founded as Landwirtschaftliche Produktionsgenossenschaft during the GDR period, and meat producers are part of multinational companies. Traditional private peasant agriculture is an exception, as is organic farming. Thuringia's only wine-growing district is situated around Bad Sulza north of Weimar and Jena along the Ilm and Saale valley. Its production is marketed as Saale-Unstrut wines. Forestry plays an important role in Thuringia because 32% of the Thuringian territory is forested. The most common trees are spruce, pine and beech. There are many wood and pulp-paper factories near the forested areas. Like most other regions of central and southern Germany, Thuringia has a significant industrial sector reaching back to the mid-19th-century industrialisation. The economic transition after the German reunification in 1990 led to the closure of most large-scale factories and companies, leaving small and medium-sized ones to dominate the manufacturing sector. Well-known industrial centres are Jena (a world centre for optical instruments with companies like Carl Zeiss, Schott and Jenoptik) and Eisenach, where BMW started its car production in the 1920s and an Opel factory is based today. The most important industrial branches today are engineering and metalworking, vehicle production and food industries. Especially the small and mid-sized towns in central and southwestern Thuringia (e.g. Arnstadt, Schmalkalden and Ohrdruf) are highly industrialised, whereas there are fewer industrial companies in the northern and eastern parts of the Land. Traditional industries like production of glass, porcelain and toys collapsed during the economic crises between 1930 and 1990. Mining was important in Thuringia since the later Middle Ages, especially within the mining towns of the Thuringian Forest such as Schmalkalden, Suhl and Ilmenau. Following the industrial revolution, the old iron, copper and silver mines declined because the competition from imported metal was too strong. On the other hand, the late 19th century brought new types of mines to Thuringia: the lignite surface mining around Meuselwitz near Altenburg in the east of the Land started in the 1870s, and two potash mining districts were established around 1900. These are the "Südharzrevier" in the north of the state, between Bischofferode in the west and Roßleben in the east with Sondershausen at its centre, and the "Werrarevier" on the Hessian border around Vacha and Bad Salzungen in the west. Together, they accounted for a significant part of the world's potash production in the mid-20th century. After the reunification, the "Südharzrevier" was abandoned, whereas K+S took over the mines in the "Werrarevier". Between 1950 and 1990, uranium mining was also important to cover the Soviet Union's need for this metal. The centre was Ronneburg near Gera in eastern Thuringia and the operating company Wismut was under direct Soviet control. The GDP of Thuringia is below the national average, in line with the other former East German Lands. Until 2004, Thuringia was one of the weakest regions within the European Union. The accession of several new countries, the crisis in southern Europe and the sustained economic growth in Germany since 2005 has brought the Thuringian GDP close to the EU average since then. The high economic subsidies granted by the federal government and the EU after 1990 are being reduced gradually and will end around 2020. The unemployment rate reached its peak of 17.1% in 2005. Since then, it has decreased to 5.3% in 2019, which is only slightly above the national average. The decrease is caused on the one hand by the emergence of new jobs and on the other by a marked decrease in the working-age population, caused by emigration and low birth rates for decades. The wages in Thuringia are low compared to rich bordering Lands like Hesse and Bavaria. Therefore, many Thuringians are working in other German Lands and even in Austria and Switzerland as weekly commuters. Nevertheless, the demographic transition in Thuringia leads to a lack of workers in some sectors. External immigration into Thuringia has been encouraged by the government since about 2010 to counter this problem. The economic progress is quite different between the regions of Thuringia. The big cities along the A4 motorway such as Erfurt, Jena and Eisenach and their surroundings are booming, whereas nearly all the rural regions, especially in the north and east, have little economic impetus and employment, which is a big issue in regional planning. Young people in these areas often have to commute long distances, and many emigrate soon after finishing school. The unemployment rate stood at 5.3% in 2019 and was higher than the German average. As Germany's most central state, Thuringia is an important hub of transit traffic. The transportation infrastructure was in very poor condition after the GDR period. Since 1990, many billions of Euros have been invested to improve the condition of roads and railways within Thuringia. During the 1930s, the first two motorways were built across the Land, the A4 motorway as an important east-west connection in central Germany and the main link between Berlin and south-west Germany, and the A9 motorway as the main north-south route in eastern Germany, connecting Berlin with Munich. The A4 runs from Frankfurt in Hesse via Eisenach, Gotha, Erfurt, Weimar, Jena and Gera to Dresden in Saxony, connecting Thuringia's most important cities. At Hermsdorf junction it is connected with the A9. Both highways were widened from four to six lanes (three each way) after 1990, including some extensive re-routing in the Eisenach and Jena areas. Furthermore, three new motorways were built during the 1990s and 2000s. The A71 crosses the Land in southwest-northeast direction, connecting Würzburg in Bavaria via Meiningen, Suhl, Ilmenau, Arnstadt, Erfurt and Sömmerda with Sangerhausen and Halle in Saxony-Anhalt. The crossing of the Thuringian Forest by the A71 has been one of Germany's most expensive motorway segments with various tunnels (including Germany's longest road tunnel, the Rennsteig Tunnel) and large bridges. The A73 starts at the A71 south of Erfurt in Suhl and runs south towards Nuremberg in Bavaria. The A38 is another west-east connection in the north of Thuringia running from Göttingen in Lower Saxony via Heiligenstadt and Nordhausen to Leipzig in Saxony. Furthermore, there is a dense network of federal highways complementing the motorway network. The upgrading of federal highways is prioritised in the federal trunk road programme 2015 ("Bundesverkehrswegeplan" 2015). Envisaged projects include upgrades of the B247 from Gotha to Leinefelde to improve Mühlhausen's connection to the national road network, the B19 from Eisenach to Meiningen to improve access to Bad Salzungen and Schmalkalden, and the B88 and B281 for strengthening the Saalfeld/Rudolstadt region. The first railways in Thuringia had been built in the 1840s and the network of main lines was finished around 1880. By 1920, many branch lines had been built, giving Thuringia one of the densest rail networks in the world before World War II with about 2,500 km of track. Between 1950 and 2000 most of the branch lines were abandoned, reducing Thuringia's network by half compared to 1940. On the other hand, most of the main lines were refurbished after 1990, resulting in improved speed of travel. The most important railway lines at present are the Thuringian Railway, connecting Halle and Leipzig via Weimar, Erfurt, Gotha and Eisenach with Frankfurt and Kassel and the Saal Railway from Halle/Leipzig via Jena and Saalfeld to Nuremberg. The former has an hourly ICE/IC service from Dresden to Frankfurt while the latter is served hourly by ICE trains from Berlin to Munich. In 2017, a new high speed line will be opened, diverting long-distance services from these mid-19th century lines. Both ICE routes will then use the Erfurt–Leipzig/Halle high-speed railway, and the Berlin-Munich route will continue via the Nuremberg–Erfurt high-speed railway. Only the segment west of Erfurt of the Frankfurt-Dresden line will continue to be used by ICE trains after 2017, with an increased line speed of 200 km/h (currently 160 km/h). Erfurt's central station, which was completely rebuilt for this purpose in the 2000s (decade), will be the new connection between both ICE lines. The most important regional railway lines in Thuringia are the Neudietendorf–Ritschenhausen railway from Erfurt to Würzburg and Meiningen, the Weimar–Gera railway from Erfurt to Chemnitz, the Sangerhausen–Erfurt railway from Erfurt to Magdeburg, the Gotha–Leinefelde railway from Erfurt to Göttingen, the Halle–Kassel railway from Halle via Nordhausen to Kassel and the Leipzig–Hof railway from Leipzig via Altenburg to Zwickau and Hof. Most regional and local lines have hourly service, but some run only every other hour. There are a few small airports in Thuringia but the only one with public aviation is Erfurt–Weimar Airport. It is used for charter flights to the Mediterranean and other holiday destinations. The most important airports for scheduled flights are Frankfurt Airport, Berlin Brandenburg Airport and Munich Airport, all located in adjacent states. Leipzig–Altenburg Airport was served by Ryanair from 2003 to 2011. Thuringia is Germany's only Land without a connection to waterways because its rivers are too small to be navigable. The traditional energy supply of Thuringia is lignite, mined in the bordering Leipzig region. Since 2000, the importance of environmentally unfriendly lignite combustion has declined in favour of renewable energies, which reached an amount of 40% (in 2013), and more clean gas combustion, often carried out as Cogeneration in the municipal power stations. The most important forms of renewable energies are Wind power and Biomass, followed by Solar energy and Hydroelectricity. Furthermore, Thuringia hosts two big pumped storage stations: the Goldisthal Pumped Storage Station and the Hohenwarte Dam. The water supply is granted by the big dams, like the Leibis-Lichte Dam, within the Thuringian Forest and the Thuringian Highland, making a drinking water exporter of Thuringia. Health care provision in Thuringia improved after 1990, as did the level of general health. Life expectancy rose, nevertheless it is still a bit lower than the German average. This is caused by a relatively unhealthy lifestyle of the Thuringians, especially in high consumption of grains, industrial seed oils, refined carbohydrates and alcohol, which led to significant higher rates of obesity compared to the German average. Health care in Thuringia is currently undergoing a concentration process. Many smaller hospitals in the rural towns are closing, whereas the bigger ones in centres like Jena and Erfurt get enlarged. Overall, there is an oversupply of hospital beds, caused by rationalisation processes in the German health care system, so that many smaller hospitals generate losses. On the other hand, there is a lack of family doctors, especially in rural regions with increased need of health care provision because of overageing. In Germany, the educational system is part of the sovereignty of the Lands; therefore each Land has its own school and college system. The Thuringian school system was developed after the reunification in 1990, combining some elements of the former GDR school system with the Bavarian school system. Most German school rankings attest that Thuringia has one of the most successful education systems in Germany, resulting in high-quality outcomes. Early-years education is quite common in Thuringia. Since the 1950s, nearly all children have been using the service, whereas early-years education is less developed in western Germany. Its inventor Friedrich Fröbel lived in Thuringia and founded the world's first Kindergartens there in the 19th century. The Thuringian primary school takes four years and most primary schools are all-day schools offering optional extracurricular activities in the afternoon. At the age of ten, pupils are separated according to aptitude and proceed to either the Gymnasium or the Regelschule. The former leads to the Abitur exam after a further eight years and prepares for higher education, while the latter has a more vocational focus and finishes with exams after five or six years, comparable to the Hauptschule and Realschule found elsewhere in Germany. The German higher education system comprises two forms of academic institutions: universities and polytechnics (Fachhochschule). The University of Jena is the biggest amongst Thuringia's four universities and offers nearly every discipline. It was founded in 1558, and today has 21,000 students. The second-largest is the Technische Universität Ilmenau with 7,000 students, founded in 1894, which offers many technical disciplines such as engineering and mathematics. The University of Erfurt, founded in 1392, has 5,000 students today and an emphasis on humanities and teacher training. The Bauhaus University Weimar with 4,000 students is Thuringia's smallest university, specialising in creative subjects such as architecture and arts. It was founded in 1860 and came to prominence as Germany's leading art school during the inter-war period, the Bauhaus. The polytechnics of Thuringia are based in Erfurt (4,500 students), Jena (5,000 students), Nordhausen (2,500 students) and Schmalkalden (3,000 students). In addition, there is a civil service college in Gotha with 500 students, the College of Music "Franz Liszt" in Weimar (800 students) as well as two private colleges, the Adam-Ries-Fachhochschule in Erfurt (500 students) and the SRH College for nursing and allied medical subjects ("SRH Fachhochschule für Gesundheit Gera") in Gera (500 students). Finally, there are colleges for those studying for a technical qualification while working in a related field ("Berufsakademie") at Eisenach (600 students) and Gera (700 students). Thuringia's leading research centre is Jena, followed by Ilmenau. Both focus on technology, in particular life sciences and optics at Jena and information technology at Ilmenau. Erfurt is a centre of Germany's horticultural research, whereas Weimar and Gotha with their various archives and libraries are centres of historic and cultural research. Most of the research in Thuringia is publicly funded basic research due to the lack of large companies able to invest significant amounts in applied research, with the notable exception of the optics sector at Jena.
https://en.wikipedia.org/wiki?curid=31130
Theodosius I Theodosius I (Flavius Theodosius Augustus; ; 11 January 347 – 17 January 395), also known as Theodosius the Great, was a Roman Emperor from 379 to 395. On accepting his elevation, he campaigned against Goths and other barbarians who had invaded the Empire. His resources were not sufficient to destroy them or drive them out, which had been Roman policy for centuries in dealing with invaders. By treaty, which followed his indecisive victory at the end of the Gothic War, they were established as "foederati", autonomous allies of the Empire, south of the Danube, in Illyricum, within the Empire's borders. They were given lands and allowed to remain under their own leaders, a grave departure from Roman hegemonic ways. This turn away from traditional policies was accommodationist and had enormous consequences for the Western Empire from the beginning of the fifth century, as the Romans found themselves with the impossible task of defending the borders and dealing with unruly federates within. Theodosius I was obliged to fight two destructive civil wars, successively defeating the usurpers Magnus Maximus in 387–388 and Eugenius in 394, though not without material cost to the power of the Empire. He issued decrees that effectively made Nicene Christianity the official state church of the Roman Empire. He neither prevented nor punished the destruction of prominent Hellenistic temples of classical antiquity, including the Temple of Apollo in Delphi and the Serapeum in Alexandria. He dissolved the Order of the Vestal Virgins in Rome. In 393, he banned the pagan rituals of the Olympics in Ancient Greece. After his death, Theodosius's young sons Arcadius and Honorius inherited the east and west halves of the empire respectively, effectively partitioning the Roman Empire in two for the next 80 years. Theodosius is considered a saint by the Armenian Apostolic Church and Eastern Orthodox Church, and his feast day is on January 17. Flavius Theodosius was born in Cauca, Carthaginensis, Hispania (according to Hydatius and Zosimus) or in Italica, Baetica, Hispania (according to Themistius, Claudius Claudianus, or Marcellinus Comes), to a senior military officer, Theodosius the Elder and his wife Thermantia. Theodosius learned his military lessons by campaigning on his father's staff in Britannia where he went to help quell the Great Conspiracy in 368. In about 373, Theodosius was given the title "dux Moesiae Primae" (military governor of Upper Moesia) replacing his father who was sent to put down a revolt in Africa. In 374, the Quadi and their allies the Sarmatians overran the province of Valeria in Illyricum. Theodosius drove the Sarmatians out of the Roman territory and then defeated the Quadi. He is reported to have defended his province with marked ability and success. The death of emperor Valentinian I in 375 created political pandemonium. The sudden disgrace and execution of Theodosius' father, Theodosius the Elder, in 376 remains unexplained. At about the same time Theodosius abruptly retired to his family estates in the province of Gallaecia (present day Galicia, Spain and northern Portugal) where he adopted the life of a provincial aristocrat. The reason for his retirement, and the relationship (if any) between it and his father's death is uncertain, though probable. From 364 to 375, the Roman Empire had been governed by two co-emperors, the brothers Valentinian I and Valens; when Valentinian died in 375, his sons, Valentinian II and Gratian, succeeded him as rulers of the Western Roman Empire. In 378, after the disastrous Battle of Adrianople where Valens was killed, Gratian invited Theodosius to take command of the Illyrian army. As Valens had no successor, Gratian's appointment of Theodosius amounted to a "de facto" invitation for Theodosius to become "co-Augustus" of the eastern half of the Empire. After Gratian was killed in a rebellion in 383, Theodosius appointed his own elder son, Arcadius, to be his co-ruler in the East. His management of the Empire was marked by heavy tax exactions, by the compulsory establishment of Christianity, by a court in which "everything was for sale", and by victories in two disastrous civil wars. After the death in 392 of Valentinian II, whom Theodosius had supported against a variety of usurpations, Theodosius ruled as sole Emperor, appointing his younger son Honorius Augustus as his co-ruler of the West (in Milan, on 23 January 393) and by defeating the usurper Eugenius on 6 September 394, at the Battle of the Frigidus (Vipava river, modern Slovenia) he restored peace. A few months later he died, leaving the Empire nominally in the hands of his two incapable sons. By his first wife, the probably Spanish Aelia Flaccilla Augusta, he had two sons, Arcadius and Honorius, and a daughter, Aelia Pulcheria; Arcadius was his heir in the East and Honorius in the West. Both Aelia Flaccilla and Pulcheria died in 385. His second wife (but never declared "Augusta") was Galla, daughter of the emperor Valentinian I and his second wife Justina. Theodosius and Galla had a son, Gratian, born in 388 and who died young, and a daughter, Aelia Galla Placidia (392–450). Placidia was the only child who survived to adulthood and later became an Empress. The Goths and their allies (Vandals, Taifals, Bastarnae and the native Carpians) entrenched in the provinces of Dacia and eastern Pannonia Inferior consumed Theodosius's attention. The Gothic crisis was so dire that his co-Emperor Gratian relinquished control of the Illyrian provinces and retired to Trier in Gaul to let Theodosius operate without hindrance. A major weakness in the Roman position after the defeat at Adrianople was the recruiting of barbarians to fight against other barbarians. In order to reconstruct the Roman Army of the East, Theodosius needed to find able bodied soldiers and so he turned to the most capable men readily at hand: the barbarians recently settled in the Empire. This caused many difficulties in the battle against barbarians since the newly recruited fighters had little or no loyalty to Theodosius. It did not help that Theodosius himself was dangerously ill during many months after his elevation, being confined to his bed in Thessalonica during much of 379. Gratian suppressed the incursions into dioceses of Illyria (Pannonia and Dalmatia) by Alathaeus and Saphrax in 380. He succeeded in convincing both to agree to a treaty and be settled in Pannonia. Theodosius was able finally to enter Constantinople in November 380, after two seasons in the field, having ultimately prevailed by offering highly favorable terms to the Gothic chiefs. His task was rendered much easier when Athanaric, an aged and cautious leader, accepted Theodosius's invitation to a conference in the capital, Constantinople, and the splendor of the imperial city reportedly awed him and his fellow-chiefs into accepting Theodosius' offers. Athanaric himself died soon after, but his followers were impressed by the honorable funeral arranged for him by Theodosius, and agreed to defending the border of the empire. The final treaties with the remaining Gothic forces, signed 3 October 382, permitted large contingents of barbarians, primarily Thervingian Goths, to settle in Thrace south of the Danube frontier. The Goths now settled within the Empire would largely fight for the Romans as a national contingent, as opposed to being fully integrated into the Roman forces. In 390 the population of Thessalonica rioted in complaint against the presence of the local Gothic garrison. The garrison commander was killed in the violence, so Theodosius ordered the Goths to kill all the spectators in the circus as retaliation; Theodoret, a contemporary witness to these events, reports: Theodosius was excommunicated by the bishop of Milan, Saint Ambrose, for the massacre. Ambrose told Theodosius to imitate David in his repentance as he had imitated him in guilt; Ambrose readmitted the emperor to the Eucharist only after several months of penance. In the last years of Theodosius's reign, one of the emerging leaders of the Goths, named Alaric, participated in Theodosius's campaign against Eugenius in 394, only to resume his rebellious behavior against Theodosius's son and eastern successor, Arcadius, shortly after Theodosius' death. In 383, the usurper Magnus Maximus had deposed and executed Gratian, proclaiming himself emperor of the west. Theodosius, unable to do much about Maximus due to his still inadequate military capability, opened negotiations with the Sasanid Emperor Shapur III. In an attempt to curb Maximus's ambitions, Theodosius appointed Flavius Neoterius as Praetorian Prefect of Italy. The armies of Theodosius and Maximus fought at the Battle of the Save in 388, which saw Maximus defeated. On 28 August 388 Maximus was executed. Now the "de facto" ruler of the Western empire as well, Theodosius celebrated his victory in Rome on June 13, 389 and stayed in Milan until 391, installing his own loyalists in senior positions including the new "magister militum" of the West, the Frankish general Arbogast. Trouble arose again, after Valentinian quarreled publicly with Arbogast, and was found hanging in his room. Arbogast announced that this had been a suicide. Arbogast, unable to assume the role of Emperor because of his non-Roman background, elected his creature Eugenius, a former teacher of rhetoric whom he had made Valentinian's "master of offices". Eugenius made some limited concessions to the Roman religion; like Maximus he sought Theodosius's recognition in vain. In January 393, Theodosius gave his son Honorius the full rank of "Augustus" in the West, citing Eugenius' illegitimacy. Theodosius gathered a large army, including the Goths whom he had settled in the Eastern empire as Foederati, as well as Caucasian and Saracen auxiliaries, and marched against Eugenius. The two armies faced at the Battle of Frigidus in September 394. The battle began on 5 September 394, with Theodosius' full frontal assault on Eugenius's forces. Theodosius was repulsed on the first day, and Eugenius thought the battle to be all but over. In Theodosius's camp, the loss of the day decreased morale. It is said that Theodosius was visited by two "heavenly riders all in white" who gave him courage. The next day, the battle began again and Theodosius's forces were aided by a natural phenomenon known as the Bora, which produces cyclonic winds. The Bora blew directly against the forces of Eugenius and disrupted the line. Eugenius's camp was stormed; Arbogast committed suicide and Eugenius was captured and soon after executed. Thus Theodosius became sole Emperor. Theodosius oversaw the removal in 390 of an Egyptian obelisk from Alexandria to Constantinople. It is now known as the obelisk of Theodosius and still stands in the Hippodrome, the long racetrack that was the center of Constantinople's public life and scene of political turmoil. Re-erecting the monolith was a challenge for the technology that had been honed in the construction of siege engines. The obelisk, still recognizably a solar symbol, had been moved from Karnak to Alexandria with what is now the Lateran obelisk by Constantius II. The Lateran obelisk was shipped to Rome soon afterwards, but the other one then spent a generation lying at the docks due to the difficulty involved in attempting to ship it to Constantinople. Eventually, the obelisk was cracked in transit. The white marble base is entirely covered with bas-reliefs documenting the imperial household and the engineering feat of removing it to Constantinople. Theodosius and the imperial family are separated from the nobles among the spectators in the imperial box, with a cover over them as a mark of their status. The naturalism of traditional Roman art in such scenes gave way in these reliefs to conceptual art: the "idea" of order, decorum and respective ranking, expressed in serried ranks of faces. This is seen as evidence of formal themes beginning to oust the transitory details of mundane life, celebrated in Roman portraiture. The "Forum Tauri" in Constantinople was renamed and redecorated as the Forum of Theodosius, including a column and a triumphal arch in his honour. In 325, Constantine I convened the Council of Nicaea, which affirmed the doctrine that Jesus, the Son, was equal to God the Father and "of one substance" with the Father ("homoousios" in Greek). The Council condemned the teachings of Arius, who believed Jesus to be inferior to the Father. Despite the council's ruling, controversy continued for decades, with several christological alternatives to the Nicene Creed being brought forth. Theologians attempted to bypass the Christological debate by saying that Jesus was merely like ("homoios" in Greek) God the father, without speaking of substance ("ousia"). These non-Nicenes were frequently labeled as Arians (i.e., followers of Arius) by their opponents, though not all would necessarily have identified themselves as such. The Emperor Valens had favored the group who used the "homoios" formula; this theology was prominent in much of the East and had under Constantius II gained a foothold in the West, being ratified by the Council of Ariminum, though it was later abjured by a majority of the western bishops (after Constantius II's death in 361). The death of Valens damaged the standing of the Homoian faction, especially since his successor Theodosius steadfastly held to the Nicene Creed which was the interpretation that predominated in the West and was held by the important Alexandrian church. On 27 February 380, together with Gratian and Valentinian II, Theodosius issued the decree ""Cunctos populos"", the so-called Edict of Thessalonica, recorded in the Codex Theodosianus . This declared the Nicene Trinitarian Christianity to be the only legitimate imperial religion and the only one entitled to call itself Catholic; but the other religions or those who did not support the Trinity, he described as "foolish madmen". He also ended official state support for the traditional polytheist religions and customs. On 26 November 380, two days after he had arrived in Constantinople, Theodosius expelled the Homoian bishop, Demophilus of Constantinople, and appointed Meletius patriarch of Antioch, and Gregory of Nazianzus, one of the Cappadocian Fathers from Cappadocia (today in Turkey), patriarch of Constantinople. Theodosius had just been baptized, by bishop Ascholius of Thessalonica, during a severe illness. In May 381, Theodosius summoned a new ecumenical council at Constantinople to repair the schism between East and West on the basis of Nicene orthodoxy. The council went on to define orthodoxy, including the Third Person of the Trinity, the Holy Spirit, as equal to the Father and 'proceeding' from Him, whereas the Son was 'begotten' of Him. The council also "condemned the Apollonarian and Macedonian heresies, clarified jurisdictions of the bishops according to the civil boundaries of dioceses and ruled that Constantinople was second in precedence to Rome." The persecution of pagans under Theodosius I began in 381, after the first couple of years of his reign in the Eastern Roman Empire. In the 380s, Theodosius I reiterated Constantine's ban on some practices of Roman religion, prohibited haruspicy on pain of death, decreed magistrates who did not enforce laws against polytheism were subject to criminal prosecution, broke up some pagan associations and tolerated attacks on Roman temples. Between 389–392 he promulgated the Theodosian decrees (instituting a major change in his religious policies), which removed non-Nicene Christians from church office and abolished the last remaining expressions of Roman religion by making its holidays into workdays, banning blood sacrifices, closing Roman temples, confiscating Temple endowments and disbanding the Vestal Virgins. The practices of taking auspices and witchcraft were punished. Theodosius refused to restore the Altar of Victory in the Senate House, as asked by non-Christian senators. In 392 he became sole emperor. From this moment till the end of his reign in 395, while non-Christians continued to request toleration, he ordered, authorized, or at least failed to punish, the closure or destruction of many temples, holy sites, images and objects of piety throughout the empire. In 393 he issued a comprehensive law that prohibited any public non-Christian religious customs, and was particularly oppressive to Manicheans. He is likely to have discontinued the ancient Olympic Games, whose last record of celebration was in 393, though archeological evidence indicates that some games were still held after this date. Theodosius died, after suffering from a disease involving severe edema, in Milan on 17 January 395. Ambrose delivered a panegyric titled "De Obitu Theodosii" before Stilicho and Honorius in which Ambrose praised the suppression of paganism by Theodosius. Theodosius was finally buried in the Church of the Holy Apostles in Constantinople on 8 November 395, in a porphyry sarcophagus that was described in the 10th century by Constantine VII Porphyrogenitus in the "De Ceremoniis". Theodosius's army rapidly dissolved after his death, with Gothic contingents raiding as far as Constantinople. As his heir in the Eastern Roman Empire he left Arcadius, who was about eighteen years old, and in the Western Roman Empire Honorius, who was ten. Neither ever showed any sign of fitness to rule, and their reigns were marked by a series of disasters. As their guardians Theodosius left Stilicho, who ruled in the name of Honorius in the Western Empire, and Flavius Rufinus who was the actual power behind the throne in the East. Several historians mark the day of Theodosius' death as the beginning of the Middle Ages.
https://en.wikipedia.org/wiki?curid=31131
Tswana language The Tswana language () is a Bantu language spoken in Southern Africa by about five million speakers. It is a Bantu language belonging to the Niger–Congo language family within the Sotho-Tswana branch of Zone S (S.30), and is closely related to the Northern and Southern Sotho languages, as well as the Kgalagadi language and the Lozi language. Setswana is in addition, sometimes referred to as Western Sotho, as a differentiation from its other sister 'Sotho' languages in Southern Africa. Tswana is an official language and lingua franca of Botswana and South Africa. Tswana-speakers are found in the north-west of South Africa, where four million people speak the language. An urbanised variety, which is part slang and not the formal Setswana, known as Pretoria Sotho, is the principal unique language of city of Pretoria. Which is a mixture of all Sotho languages. The three South African provinces with the most speakers are Gauteng (circa 11%), Northern Cape, and North West (over 70%). Until 1994, South African Tswana people were notionally citizens of Bophuthatswana, one of the bantustans of the apartheid regime. The Setswana language in the Northwest Province has variations in which it is spoken according to the tribes found in the Tswana culture (Bakgatla, Barolong, Bakwena, Batlhaping, Bahurutshe, Bafokeng, Batlokwa, Bataung, Bakgatla, Bapo, to name a few); the written language remains the same. A small number of speakers are also found in Zimbabwe (unknown number) and Namibia (about 10,000 people). The first European to describe the language was the German traveller Hinrich Lichtenstein, who lived among the Tswana people Batlhaping in 1806 although his work was not published until 1930. He mistakenly regarded Tswana as a dialect of the Xhosa, and the name that he used for the language ""Beetjuana"" may also have covered the Northern and Southern Sotho languages. The first major work on Tswana was carried out by the British missionary Robert Moffat, who had also lived among the Batlhaping, and published "Bechuana Spelling Book" and "A Bechuana Catechism" in 1826. In the following years, he published several other books of the Bible, and in 1857, he was able to publish a complete translation of the Bible . The first grammar of Tswana was published in 1833 by the missionary James Archbell although it was modelled on a Xhosa grammar. The first grammar of Tswana which regarded it as a separate language from Xhosa (but still not as a separate language from the Northern and Southern Sotho languages) was published by the French missionary E. Casalis in 1841. He changed his mind later, and in a publication from 1882, he noted that the Northern and Southern Sotho languages were distinct from Tswana. Solomon Plaatje, a South African intellectual and linguist, was one of the first writers to extensively write in and about the Tswana language. The vowel inventory of Tswana can be seen below. Some dialects have two additional vowels, the close-mid vowels and . The consonant inventory of Tswana can be seen below. The consonant is merely an allophone of , when the latter is followed by the vowels or . Two more sounds, v and z , exist only in loanwords. Tswana also has three click consonants, but these are only used in interjections or ideophones, and tend only to be used by the older generation, and are therefore falling out of use. The three click consonants are the dental click , orthographically ; the lateral click , orthographically ; and the palatal click , orthographically . There are some minor dialectal variations among the consonants between speakers of Tswana. For instance, is realised as either or by many speakers; is realised as in most dialects; and and are realised as and in northern dialects. Stress is fixed in Tswana and thus always falls on the penult of a word, although some compounds may receive a secondary stress in the first part of the word. The syllable on which the stress falls is lengthened. Thus, mosadi (woman) is realised as . Tswana has two tones, high and low, but the latter has a much wider distribution in words than the former. Tones are not marked orthographically, which may lead to ambiguity. An important feature of the tones is the so-called spreading of the high tone. If a syllable bears a high tone, the following two syllables will have high tones unless they are at the end of the word. Nouns in Tswana are grouped into nine noun classes and one subclass, each having different prefixes. The nine classes and their respective prefixes can be seen below, along with a short note regarding the common characteristics of most nouns within their respective classes. Some nouns may be found in several classes. For instance, many class 1 nouns are also found in class 1a, class 3, class 4, and class 5.
https://en.wikipedia.org/wiki?curid=31133
Nikolai Trubetzkoy Prince Nikolai Sergeyevich Trubetzkoy (; 16 April 1890 – 25 June 1938) was a Russian linguist and historian whose teachings formed a nucleus of the Prague School of structural linguistics. He is widely considered to be the founder of morphophonology. He was also associated with the Russian Eurasianists. Trubetzkoy was born into privilege. His father, Sergei Nikolaevich Trubetskoy, came from a Gediminid princely family. In 1908, he enrolled at the Moscow University. While spending some time at the University of Leipzig, Trubetzkoy was taught by August Leskien, a pioneer of research into sound laws. Having graduated from the Moscow University (1913), Trubetzkoy delivered lectures there until the Revolution. Thereafter he moved first to the University of Rostov-on-Don, then to the University of Sofia (1920–22), and finally took the chair of Professor of Slavic Philology at the University of Vienna (1922–1938). He died from a heart attack attributed to Nazi persecution following his publishing an article highly critical of Hitler's theories. Trubetzkoy's chief contributions to linguistics lie in the domain of phonology, in particular in analyses of the phonological systems of individual languages and in the search for general and universal phonological laws. His magnum opus, "Grundzüge der Phonologie" "(Principles of Phonology)" was issued posthumously. In this book he defined the phoneme as the smallest distinctive unit within the structure of a given language. This work was crucial in establishing phonology as a discipline separate from phonetics. Trubetzkoy also wrote as a literary critic. In "Writings on Literature", a brief collection of translated articles, he analyzed Russian literature beginning with the Old Russian epic "The Tale of Igor's Campaign", then proceeding to 19th century Russian poetry and Dostoevsky. It is sometimes hard to distinguish Trubetzkoy's views from those of his friend Roman Jakobson, who should be credited with spreading the Prague School views on phonology after Trubetzkoy's death. In his biography of the mathematical collective Nicolas Bourbaki, Amir Aczel described Trubetzkoy as a pioneer in structuralism, an interdisciplinary outgrowth of structural linguistics which would be applied in mathematics by the Bourbaki group—as in the notion of a mathematical structure—and in anthropology by Claude Lévi-Strauss, who sought to describe rules governing human behavior. According to Aczel, Trubetzkoy's focus in "Principles of Phonology" was the study of phonemes and their opposing aspects, in order to describe rules of language—the goal of describing general, underlying rules being the common goal of structuralism.
https://en.wikipedia.org/wiki?curid=31134
Trekkies (film) Trekkies is a 1997 documentary film directed by Roger Nygard about the devoted fans of Gene Roddenberry's "Star Trek". It is the first film released by Paramount Vantage, then known as Paramount Classics, and stars Denise Crosby (best known for her portrayal of Security Chief Tasha Yar on the first season of ""). The film contains interviews with "Star Trek" devotees, more commonly known as Trekkies. The fans range from people who dress as Klingons to members of Brent Spiner fan clubs and includes a club that is producing a "Star Trek" movie of their own. "Trekkies" includes many "Star Trek" actors and fans including Barbara Adams, the Whitewater scandal trial juror who arrived in court in her Starfleet uniform. Another prominent profilee was Gabriel Köerner, who attained minor celebrity status as a result of his role in the film. After she worked with director Roger Nygard in his television film "High Strung" (1991), former "" actress Denise Crosby suggested that they should work together on a documentary regarding "Star Trek" fandom. Nygard thought it was a good idea and was surprised that it had not been done before. The first filming session took place at the Fantasticon science fiction convention at the Hilton Hotel in Los Angeles organised by William Campbell. Nygard felt that the footage from this first convention was good enough to warrant continuing with the project. "Trekkies" received a national release in the United States on May 21, 1999. It was in direct competition with "". Colin Covert gave three and a half stars in his review for the "Star Tribune", adding that it was likely to shake up the viewers opinions of "Star Trek" fans. He praised the structure of the film, with the initial "shock value" of each fan being followed up with further background to round out the portrait. Renee Graham at the "Boston Globe" gave a similar opinion, saying that despite the expectation of the reviewer, none of the opinions put forward by "Trekkies" felt like a "put-down". She summed up the documentary by saying "The Trekkie phenomenon may fall short of common definitions of normalcy, but as a film, "Trekkies" sure beats sleeping outside for days to see a mediocre movie about some galaxy far, far away." Bob Stauss, the film critic for the "Los Angeles Daily News" gave two and a half stars, saying that the documentary goes on too long and did not delve into the "real psychological issues raised by such obsessive interest". He added that the fans were "amusing for about 15 minutes, tops" and that the cumulative effect of watching the fans for that long was "numbing". James Verniere for the "Boston Herald" gave three stars, suggesting that Nyland may have found his niche as the "Michael Moore of weirdness" with this and his following film, "Six Days in Roswell". Verniere suggested that "Trekkies" wasn't as definitive as the film "Free Enterprise" (1999) but described it as an "interesting, often hilarious piece of sociological comedy". Alicia Potter, also at the "Boston Herald", described the film as "surreally funny". In 2004, a sequel was released, titled "Trekkies 2". This documentary travels throughout the world, mainly in Europe, to show fans of "Star Trek" from outside the United States. It also revisits memorable fans featured in the previous film. Nygard sought to answer some of the criticism received from Trekkies in the sequel, in that he was accused of not showing "normal" fans. With this in mind, he attempted to include some degree of a description of what was normal fan behaviour and what was more unusual. In an interview with TrekNews.net at the "Star Trek Mission: New York" convention in September 2016, Crosby talked about the potential for another documentary about "Star Trek": We do want to make a Trekkies 3. In earnest, we’re hoping to get started in two more years. We have to time it with the rights and all the legal stuff. There’s a lot of politics involved. We approached CBS/Viacom/Paramount [who own various rights to Star Trek] already and there’s interest. So fingers crossed. When producing these documentaries, I’m always delighted by the surprise at something we didn’t expect, such as the stories people tell and interacting with the fans, hearing stories. I never get tired of it..."
https://en.wikipedia.org/wiki?curid=31136
The Goodies The Goodies were a trio of British comedians: Tim Brooke-Taylor, Graeme Garden, and Bill Oddie. The trio created, wrote for and performed in their eponymous television comedy show from 1970 until 1982, combining sketches and situation comedy. The three actors met each other as undergraduates at Cambridge University, where Brooke-Taylor was studying law, Garden was studying medicine, and Oddie was studying English. Their contemporaries included Graham Chapman, John Cleese, and Eric Idle, who later became members of Monty Python, and with whom they became close friends. Brooke-Taylor and Cleese studied together and swapped lecture notes as they were both law students, but at different colleges within the university. All three Goodies became members of the Cambridge University Footlights Club, with Brooke-Taylor becoming president in 1963, and Garden succeeding him as president in 1964. Garden himself was succeeded as Footlights Club president in 1965 by Idle, who had initially become aware of the Footlights when he auditioned for a Pembroke College "smoker" for Brooke-Taylor and Oddie. Brooke-Taylor, Garden and Oddie were cast members of the 1960s BBC radio comedy show "I'm Sorry, I'll Read That Again", which also featured John Cleese, David Hatch and Jo Kendall, and lasted until 1973. "I'm Sorry, I'll Read That Again" resulted from the 1963 Cambridge University Footlights Club revue "A Clump of Plinths". After having its title changed to "Cambridge Circus", the revue went on to play at West End in London, England, followed by a tour of New Zealand and Broadway in New York, US (including an appearance on the "Ed Sullivan Show"). They also took part in various TV shows with other people, including Brooke-Taylor in "At Last the 1948 Show" (with Cleese, Chapman and Marty Feldman). Brooke-Taylor also took part in "Marty" (with Marty Feldman, John Junkin and Roland MacLeod). In 1968 Brooke-Taylor appeared with Cleese, Michael Palin and Graham Chapman in "How to Irritate People." Garden and Oddie took part in "Twice a Fortnight" (with Michael Palin, Terry Jones and Jonathan Lynn), before Brooke-Taylor, Garden, and Oddie worked on the late 1960s TV show "Broaden Your Mind" (of which only about ten minutes survives). The original BBC television series ran from November 1970 to February 1980 on BBC 2, with 67 half-hour episodes, and two forty-five-minute Christmas specials. The series was created by Tim Brooke-Taylor, Graeme Garden and Bill Oddie, and originally co-written by all three, with Oddie providing the music for the show. Later episodes were co-written by Garden and Oddie. It was one of the first shows in the UK to use chroma key and one of the first to use stop-motion techniques in a live action format. Other effects include hand editing for repeated movement, mainly used to make animals "talk" or "sing", and play speed effects as used in the episode "Kitten Kong". In the series, the threesome travelled around on, and frequently fell off, a three-seater bicycle called the trandem. In September 1978, the trio appeared in character in an episode of the BBC One television game show "Star Turn Challenge", presented by Bernard Cribbins, in which teams of celebrities competed in acting games. Their opponents were three members of the cast of "The Liver Birds", Nerys Hughes, Elizabeth Estensen and Michael Angelis. They also presented the Christmas 1976 edition of "Disney Time" from the toy department of Selfridges store in London, broadcast on BBC1 on Boxing Day at 5.50 pm. The Goodies never had a formal contract with the BBC, and when the BBC Light Entertainment budget for 1980 was exhausted by the production of "The Hitchhiker's Guide to the Galaxy" TV series, the Goodies signed a contract with London Weekend Television (LWT) for ITV. However, after one half-hour Christmas special ("Snow White 2") in 1981, and a six-part series in early 1982, the series was cancelled. In later interviews the cast suggest the reasons were mainly economic, and that a typical Goodies sketch was more expensive than it appeared. A special episode, which was based on the "original" 1971 Goodies' "Kitten Kong" episode, was called "Kitten Kong: Montreux '72 Edition", and was first broadcast in 1972. The Goodies won the Silver Rose in 1972 for this special episode at the Festival Rose d'Or, held in Montreux, Switzerland. In the first episode of the next series, "The New Office", Tim Brooke-Taylor can be seen painting the trophy gold. The Goodies also won the Silver Rose in 1975 at the Festival Rose d'Or for their episode "The Movies". "The Goodies" was nominated for a BAFTA award in 1975, as the Best Light Entertainment Programme, but lost out to "Fawlty Towers". "The Goodies" were also nominated for an EMMY award. Unlike many long-running BBC comedy series, "The Goodies" has not enjoyed extensive repeats on terrestrial television in the UK. In 1986 BBC2 broadcast the episode "Kitten Kong" during a week of programmes screened under the banner "TV-50", when the BBC celebrated 50 years of broadcasting. In the late 1980s, the pan-European satellite-channel Super Channel broadcast a couple of episodes and the short-lived Comedy Channel broadcast some of the later "Goodies" episodes in the early 1990s. Later UK Gold screened many of the earlier episodes, often with commercial timing cuts. The same episodes subsequently aired on UK Arena, also cut. When UK Arena became UK Drama, later UKTV Drama, "The Goodies" was dropped along with its other comedy and documentary shows. The cast finally took matters into their own hands and arranged with Network Video for the release of a digitally-remastered 'best of' selection entitled "The Goodies ... At Last" on VHS and Region 0 DVD in April 2003. A second volume, "The Goodies ... At Last a Second Helping" was released on Region 2 in February 2005. Series 9 (including the Xmas special) was released on Region 2 as "The Goodies – The Complete LWT Series" on 26 March 2007 and a fourth volume "The Goodies ... At Last Back for More, Again" was released on region 2 in 2010 as well as a DVD box set containing all four volumes to celebrate 40 years of "The Goodies". In 2004, an episode of the BBC documentary series "Comedy Connections" was devoted to the Goodies. During Christmas that year, Channel 5 repeated the classic 1973 episode "The Goodies and the Beanstalk". Christmas 2005 saw a 90-minute "Goodies" special, a documentary about the series, "Return of the Goodies", broadcast on BBC Two. Early in 2006, a single episode ("Winter Olympics") was broadcast on BBC Two. In February 2007, the 1982 LWT series was repeated on pay-TV channel Paramount 2. In December 2010 BBC Two showed selected late night repeats of the BBC series, which ran nightly from 23–30 December. This apparent gesture followed years of campaigning by The Goodies that the shows had not been repeated like other BBC shows such as "Dad's Army", "Only Fools and Horses" and "Some Mothers Do 'Ave 'Em". The episodes shown were: "Bunfight at the O.K. Tea Rooms" / "Earthanasia" / "The Goodies and the Beanstalk" / "Kitten Kong" / "Lighthouse Keeping Loonies" / "Saturday Night Grease" / "The Baddies" (a.k.a. "Double Trouble") and "The Stone Age", although "Scoutrageous", "Kung Fu Kapers" and "Scotland" (a.k.a. "Loch Ness Monster") were originally billed as episodes 1, 2 and 7 of the repeat run. The episodes garnered good ratings given their time slot, and the first six episodes were taken from the BBC's own master tapes, rather than the digital remasters, the rights to which are currently owned by Network Video, "The Baddies" and "The Stone Age" have never been digitally remastered. On Sunday 8 June 2014, during 70s weekend, BBC Two repeated the Montreux '72 Edition of "Kitten Kong" once again, however this has been the only episode to be repeated twice, and no full series have been repeated since. In September 2018, Network released a box set titled "The Goodies: The Complete BBC Collection". This set contains every single episode from 1970-1980 (excepting the lost, original version of "Kitten Kong") and, as a bonus feature, a one-hour edit of the show "An Audience with the Goodies," hosted by Stewart Lee and filmed live at Leicester Square in June 2018. In Australia, the series has had continued popularity. It was especially popular when it was repeated through the 1970s and 1980s by the ABC. As the show was typically broadcast in the 25-minute 6:00 pm children's timeslot, portions often had to be cut. The 1981-82 LWT series was played once on the Seven Network in the early 1980s. The ABC screened the BBC episodes again in the early 1990s, but skipped several stories due to either political correctness, or a lack of colour prints at the time. The BBC episodes were then heavily edited to allow time for commercials when repeated on Network Ten in the 1990s, before moving to the pay television channel UK.TV during the late 1990s and early 2000s, where they were screened in full. ABC2 ran re-runs of the series, beginning in 2010. Three of the Goodies DVDs are available in Australia under different titles to the UK releases: "The Goodies: 8 Delicious Episodes", "The Goodies: A Tasty Second Helping" and "The Goodies: The Final Episodes", respectively. The Goodies' DVDs are also available in a boxed set with a commemorative booklet ("The Goodies: The Tasty Box"). This collection contains the same 16 episodes as the original two DVD releases but with additional material such as commentaries on several episodes and the original scripts of some episodes in PDF format. Picture quality has been greatly improved using digital restoration techniques and the episode "Come Dancing", which was originally thought to only have survived as a black and white film recording, is presented in colour from a 625 line low-band broadcast standard PAL VT recording, made for training purposes, which has had the low level colour boosted. (The original Australian DVD release, "The Goodies – A Tasty Second Helping" (2 disc set), and "The Goodies – A Second Helping: 4 tasty serves" (1 Disc), featured the b/w telerecording of this episode.) In Canada, the series was shown in on the CBC national broadcast network during the late 1970s and early 1980s, in the traditional "after school" time slot, later a Friday night 10 pm slot, and occasionally in a midnight slot. Several episodes were also shown on the CTV Television Network. In the mid-1970s it was shown on TVOntario on Saturday evenings, repeated on Thursday evenings, until being replaced by Doctor Who in 1976. In Germany in 1972, German TV screened the 13 part variety show "Engelbert and the Young Generation" a co-production between the BBC and German station ZDF in which The Goodies appeared in short 3 to 4-minute film sequences. The first six films were culled from the first and second series of "The Goodies", "Pets" (from "Kitten Kong"), "Pop Festival" (from "The Music Lovers"), "Keep Fit" (from "Commonwealth Games"), "Post Office" (from "Radio Goodies"), "Sleepwalking" (from "Snooze") and "Factory Farm" (from "Fresh Farm Foods") and seven new film sequences, "Good Deed Day", "The Gym", "The Country Code", "Street Entertainers", "Plum Pudding", "Bodyguards" and "Pan's Grannies", these also featured intro sequences with host Engelbert Humperdinck visiting the Goodies at their office. The shows were dubbed into German and because the Goodies part of the shows were more visual than dialogue based, it translated very well. Five of these new films were also cut together, with a new story involving "The Goodies" filling out their "Tax Evasion" form, as a special 25-minute Goodies compilation episode, "A Collection of Goodies" was first broadcast on BBC1 at 8.15 pm on 24 September 1972, and was produced by Jim Franklin, "The Country Code" and "Bodyguards" were not used. In New Zealand, the series was originally shown in full by the NZBC (later TV One) during the 1970s and 1980s. Since then, it has been re-run on SKY Network Television's Comedy Central. In Spain, a couple of episodes of "The Goodies" were shown as part of a season of television-award-winning programmes (the Goodies were Montreux Festival winners) on TVE 2 entitled "Festival TV" in 1981. In the US, the series was shown widely in syndication during the late 1970s and early 1980s, but has been little seen since. It was shown also on PBS stations, sometimes in tandem with "Monty Python's Flying Circus". In their heyday The Goodies also produced successful books: "All Things Bright and Beautiful" was released as a single credited to The Goodies in 1973, although it had been recorded in 1966 when they were part of "I'm Sorry, I'll Read That Again". The first true Goodies album, "The Goodies Sing Songs From The Goodies" was released in 1973 and reissued as "The World of the Goodies" in 1974. "The Goodies Theme" was released as a single in 1973. They had a string of successful chart singles penned by Bill Oddie. In 1974–75, they chalked up five hit singles in twelve months: "The Inbetweenies", "Black Pudding Bertha", "Nappy Love" and "The Funky Gibbon" (all performed during the episode "The Goodies – Almost Live"), and "Make a Daft Noise for Christmas". "The Funky Gibbon" was their biggest hit, reached number 4 in the UK Singles Chart. The Goodies made an appearance on "Top of the Pops" with the song. They also performed it during the Amnesty International show "A Poke in the Eye (with a Sharp Stick)". "The Funky Gibbon" became a favourite in the United States on Dr. Demento's radio shows and reached number 79 on the "Billboard" Hot 100 in 1975. "The New Goodies LP", which featured most of the hit singles, reached number 25 on the UK Albums Chart in 1975. Three variations of the Goodies Theme were used on the opening titles for the 1970–1982 television series. Apart from the original Goodies Theme, used from 1970–1972 and released as a single, two other variations surfaced, one, with a contemporary feel from 1973–1974, sung by Bill and then the third and final theme for the rest of the series from 1975 onwards, again sung by Bill. This variation lasted for the rest of the TV series and also surfaced on later Goodies LPs and eventually singles. Tim Brooke-Taylor was a writer/performer on the television comedy series "At Last the 1948 Show" (which also included John Cleese, Graham Chapman and Marty Feldman in the cast), in which Eric Idle and Bill Oddie guest starred in some of the episodes. The famous "Four Yorkshiremen" sketch was co-written by the four writers/performers of the series – Tim Brooke-Taylor, John Cleese, Graham Chapman and Marty Feldman. Tim Brooke-Taylor was a cast member of the television comedy series "Marty" with Marty Feldman and John Junkin – a compilation of the two series of "Marty" has been released on a DVD with the title of "It's Marty". Brooke-Taylor was also a cast member of John Cleese's special "How to Irritate People". Along with John Junkin and Barry Cryer, Brooke-Taylor was a regular cast member of the long-running Radio 2 comedy sketch show "Hello, Cheeky!", which ran from 1973 to 1979. The series also transferred to Yorkshire Television for two series in 1975 and 1976. Tim Brooke-Taylor also appeared on BBC's hospital comedy "TLC", as well as the sitcoms "You Must Be The Husband" (with Diane Keen and Sheila Steafel), and "Me and My Girl" (with Richard O'Sullivan and Joan Sanderson). He also played in a televised pro-celebrity golf match opposite Bruce Forsyth. Graeme Garden and Bill Oddie were writers/performers on the television comedy series "Twice a Fortnight" (which also included Terry Jones, Michael Palin and Jonathan Lynn in the cast). Tim Brooke-Taylor and Graeme Garden were writers/performers on the television comedy series "Broaden Your Mind", with Bill Oddie joining them for the second series. The three writers and performers also collaborated on the 1983 animated children's programme "Bananaman", where they played various voice roles. Bill Oddie has occasionally appeared on the BBC Radio 4 panel game "I'm Sorry I Haven't a Clue", on which Garden and Brooke-Taylor are regular panellists. Graeme Garden and Bill Oddie worked on the television comedy "Doctor in the House": they co-wrote most of the first series and all of the second. Garden also appeared as a television interviewer in the series, in the episode titled "On the Box". During 1981-1983 Garden and Oddie wrote, but did not perform in, a science fiction sitcom called "Astronauts" for Central and ITV. The show was set in a British space station in the near future. Garden was a regular team captain on the political satire game show "If I Ruled the World". Brooke-Taylor appeared as a guest in one episode, and during the game "I Couldn't Disagree More" he proposed that it was high time "The Goodies" episodes were repeated. Garden was obliged by the rules of the game to refute this statement, and replied "I couldn't disagree more...it was time to repeat them ten, fifteen years ago." This was followed by uproarious applause from the studio audience. In 2004, Garden and Brooke-Taylor were co-presenters of Channel 4's daytime game show "Beat the Nation", in which they indulged in usual game show "banter", but took the quiz itself seriously. Oddie hosts a very successful series of nature programmes for the BBC. On 9 October 2019, "Audible" UK launched a Goodies podcast free to members with, initially, a single hour-long scripted episode of a new show featuring the original cast members plus guests, recorded in front of a live audience. The trio reunited in Australia for "The Goodies (Still A) Live on Stage" as part of Sydney's Big Laugh Comedy Festival in March 2005. The show toured the country, visiting Melbourne, Brisbane and Canberra and selling out most of the 13 performances. A further Australian tour by the Goodies, sans Bill, took place during November and December 2005. Tim Brooke-Taylor and Graeme Garden took their Goodies Live show to the 2006 Edinburgh Fringe festival. The show was similar to the second leg of the Goodies Australian tour, with Bill Oddie participating via video (due to his many filming commitments). The show was also performed at the Paramount Comedy Festival in Brighton in October 2006. Brooke-Taylor and Garden performed the show at 22 further UK venues in 2007. Tim Brooke-Taylor and Graeme Garden appeared at Sydney's Riverside Theatre (Parramatta) on 15 October 2009 and the World's Funniest Island comedy festival on Cockatoo Island, Sydney Harbour on 17–18 October 2009. The show was hosted by Andrew Hansen of Australian comedy team The Chaser. The Goodies were once again reunited when the BBC1 show, entitled "The One Show", brought them back. It concluded with Tim riding a tandem alone while the others stared / watched. Show air date was 4 November 2010. Bill Oddie toured Australia, to present a series of one-man shows, "An Oldie but a Goodie", during June 2013. The Australian tour took in Brisbane, Sydney, Melbourne, Adelaide and Perth. A video with the three Goodies was shown during the shows. On 19 June 2013, Oddie made personal appearances on both "The Project" and the "Adam Hills Tonight" television shows in conjunction with the tour. "The Mighty Boosh" was started when Julian Barratt asked Noel Fielding if he wanted to make a modern-day Goodies. The official Goodies fan club's (Goodies Rule-OK!) newsletter, is called the "Clarion & Globe". It was named after the newspaper in The Goodies' episode "Fleet Street Goodies" (a.k.a. "Cunning Stunts"). During the 1970s, "Cor!!" comic, released by Fleetway publications, had a "Goodies" comics strip. When the comic later merged with "Buster", the "Goodies" did not move across, although the TV show was still running. Australian rock band Spiderbait released their 1993 album and EP that had a rocked up fast cover version of the Goodies song "Run". Australian theatre company Shaolin Punk produced a short play titled "A Record or an OBE", written by Melbourne comedian and actor Ben McKenzie, and featuring Tim and Graeme as characters. Set in 1975, the two remaining Goodies struggle to carry on after Bill leaves the group to pursue a music career. The play premiered in the 2007 Melbourne Fringe Festival, where it was highly commended in the Comedy category. Later seasons were also performed for the Adelaide Fringe and Melbourne International Comedy Festival in 2008. U.S. rock band The White Stripes named their 6th album "Icky Thump" in reference to The Goodies sketch "The Battle of Ecky Thump". The name was changed from "Ecky Thump" to "Icky Thump" to make the title more palatable to an American teenage audience. All three Goodies were awarded OBEs. Bill Oddie received his OBE in 2003 for wildlife conservation, while Tim Brooke-Taylor and Graeme Garden received their OBEs in 2011 for services to light entertainment. The show often mocked OBEs, in particular a running joke was that Tim desperately wanted to receive one. All three Goodies have been regular attendees at Slapstick Festival in Bristol, and in 2011 they were awarded the Aardman Slapstick Visual Comedy Legend Award at the festival for the significant contributions to the field of visual comedy they've made over their lifetime. On 24 March 1975, Alex Mitchell, a 50-year-old bricklayer from King's Lynn, literally died laughing while watching an episode of "The Goodies". According to his wife, who was a witness, Mitchell was unable to stop laughing whilst watching a sketch in the episode "Kung Fu Kapers" in which Tim Brooke-Taylor, dressed as a kilted Scotsman, used a set of bagpipes to defend himself from a black pudding-wielding Bill Oddie (master of the ancient Lancastrian martial art "Ecky-Thump") in a demonstration of the Scottish martial art of "Hoots-Toot-ochaye." After twenty-five minutes of continuous laughter Mitchell finally slumped on the settee and died from heart failure. His widow later sent the Goodies a letter thanking them for making Mitchell's final moments so pleasant. On 1 November 1977, Seema Bakewell, a 32-year-old housewife from Leicester, went into labour whilst laughing at a sketch in "The Goodies" episode "Alternative Roots". She refused to leave home for the hospital until the episode had finished. Thirty years later, she visited the 2007 UK reunion tour with "her baby, Ayesha, and the baby's husband" and recounted the story to Graeme Garden.
https://en.wikipedia.org/wiki?curid=31139
Divine Comedy The Divine Comedy ( ) is a long Italian narrative poem by Dante Alighieri, begun c. 1308 and completed in 1320, a year before his death in 1321. It is widely considered to be the pre-eminent work in Italian literature and one of the greatest works of world literature. The poem's imaginative vision of the afterlife is representative of the medieval world-view as it had developed in the Western Church by the 14th century. It helped establish the Tuscan language, in which it is written (also in most present-day Italian-market editions), as the standardized Italian language. It is divided into three parts: "Inferno", "Purgatorio", and "Paradiso". The narrative takes as its literal subject the state of souls after death and presents an image of divine justice meted out as due punishment or reward, and describes Dante's travels through Hell, Purgatory, and Paradise or Heaven, while allegorically the poem represents the soul's journey towards God, beginning with the recognition and rejection of sin ("Inferno"), followed by the penitent Christian life ("Purgatorio"), which is then followed by the soul's ascent to God ("Paradiso"). Dante draws on medieval Roman Catholic theology and philosophy, especially Thomistic philosophy derived from the "Summa Theologica" of Thomas Aquinas. Consequently, the "Divine Comedy" has been called "the "Summa" in verse". In Dante's work, the pilgrim Dante is accompanied by three guides: Virgil (who represents human reason), Beatrice (who represents divine revelation, theology, faith, and grace), and Saint Bernard of Clairvaux (who represents contemplative mysticism and devotion to Mary). The work was originally simply titled Comedìa (; so also in the first printed edition, published in 1472), Tuscan for "Comedy", later adjusted to the modern Italian . The adjective was added by Giovanni Boccaccio, and the first edition to name the poem "Divina Comedia" in the title was that of the Venetian humanist Lodovico Dolce, published in 1555 by Gabriele Giolito de' Ferrari. The "Divine Comedy" is composed of 14,233 lines that are divided into three "cantiche" (singular "cantica") – "Inferno" (Hell), "Purgatorio" (Purgatory), and "Paradiso" (Paradise) – each consisting of 33 cantos (Italian plural "canti"). An initial canto, serving as an introduction to the poem and generally considered to be part of the first "cantica", brings the total number of cantos to 100. It is generally accepted, however, that the first two cantos serve as a unitary prologue to the entire epic, and that the opening two cantos of each "cantica" serve as prologues to each of the three "cantiche". The number three is prominent in the work (alluding to the Trinity), represented in part by the number of "cantiche" and their lengths. Additionally, the verse scheme used, "terza rima", is hendecasyllabic (lines of eleven syllables), with the lines composing tercets according to the rhyme scheme "aba, bcb, cdc, ded, ...". The total number of syllables in each tercet is thus 33, the same as the number of cantos in each "cantica". Written in the first person, the poem tells of Dante's journey through the three realms of the dead, lasting from the night before Good Friday to the Wednesday after Easter in the spring of 1300. The Roman poet Virgil guides him through Hell and Purgatory; Beatrice, Dante's ideal woman, guides him through Heaven. Beatrice was a Florentine woman he had met in childhood and admired from afar in the mode of the then-fashionable courtly love tradition, which is highlighted in Dante's earlier work "La Vita Nuova". The structure of the three realms follows a common numerical pattern of 9 plus 1, for a total of 10: 9 circles of the Inferno, followed by Lucifer contained at its bottom; 9 rings of Mount Purgatory, followed by the Garden of Eden crowning its summit; and the 9 celestial bodies of Paradiso, followed by the Empyrean containing the very essence of God. Within each group of 9, 7 elements correspond to a specific moral scheme, subdivided into three subcategories, while 2 others of greater particularity are added to total nine. For example, the seven deadly sins of the Catholic Church that are cleansed in Purgatory are joined by special realms for the late repentant and the excommunicated by the church. The core seven sins within Purgatory correspond to a moral scheme of love perverted, subdivided into three groups corresponding to excessive love (Lust, Gluttony, Greed), deficient love (Sloth), and malicious love (Wrath, Envy, Pride). In central Italy's political struggle between Guelphs and Ghibellines, Dante was part of the Guelphs, who in general favored the Papacy over the Holy Roman Emperor. Florence's Guelphs split into factions around 1300the White Guelphs and the Black Guelphs. Dante was among the White Guelphs who were exiled in 1302 by the Lord-Mayor Cante de' Gabrielli di Gubbio, after troops under Charles of Valois entered the city, at the request of Pope Boniface VIII, who supported the Black Guelphs. This exile, which lasted the rest of Dante's life, shows its influence in many parts of the "Comedy," from prophecies of Dante's exile to Dante's views of politics, to the eternal damnation of some of his opponents. The last word in each of the three "cantiche" is "stelle" ("stars"). The poem begins on the night before Good Friday in 1300, "halfway along our life's path" ("Nel mezzo del cammin di nostra vita"). Dante is thirty-five years old, half of the biblical lifespan of 70 (Psalms 89:10, Vulgate), lost in a dark wood (understood as sin), assailed by beasts (a lion, a leopard, and a she-wolf) he cannot evade and unable to find the "straight way" ("diritta via") – also translatable as "right way" – to salvation (symbolized by the sun behind the mountain). Conscious that he is ruining himself and that he is falling into a "low place" ("basso loco") where the sun is silent ("'l sol tace"), Dante is at last rescued by Virgil, and the two of them begin their journey to the underworld. Each sin's punishment in "Inferno" is a "contrapasso", a symbolic instance of poetic justice; for example, in Canto XX, fortune-tellers and soothsayers must walk with their heads on backwards, unable to see what is ahead, because that was what they had tried to do in life: Allegorically, the "Inferno" represents the Christian soul seeing sin for what it really is, and the three beasts represent three types of sin: the self-indulgent, the violent, and the malicious. These three types of sin also provide the three main divisions of Dante's Hell: Upper Hell, outside the city of Dis, for the four sins of indulgence (lust, gluttony, avarice, anger); Circle 7 for the sins of violence; and Circles 8 and 9 for the sins of fraud and treachery. Added to these are two unlike categories that are specifically spiritual: Limbo, in Circle 1, contains the virtuous pagans who were not sinful but were ignorant of Christ, and Circle 6 contains the heretics who contradicted the doctrine and confused the spirit of Christ. Having survived the depths of Hell, Dante and Virgil ascend out of the undergloom to the Mountain of Purgatory on the far side of the world. The Mountain is on an island, the only land in the Southern Hemisphere, created by the displacement of rock which resulted when Satan's fall created Hell (which Dante portrays as existing underneath Jerusalem). The mountain has seven terraces, corresponding to the seven deadly sins or "seven roots of sinfulness." The classification of sin here is more psychological than that of the "Inferno", being based on motives, rather than actions. It is also drawn primarily from Christian theology, rather than from classical sources. However, Dante's illustrative examples of sin and virtue draw on classical sources as well as on the Bible and on contemporary events. Love, a theme throughout the "Divine Comedy", is particularly important for the framing of sin on the Mountain of Purgatory. While the love that flows from God is pure, it can become sinful as it flows through humanity. Humans can sin by using love towards improper or malicious ends (Wrath, Envy, Pride), or using it to proper ends but with love that is either not strong enough (Sloth) or love that is too strong (Lust, Gluttony, Greed). Below the seven purges of the soul is the Ante-Purgatory, containing the Excommunicated from the church and the Late repentant who died, often violently, before receiving rites. Thus the total comes to nine, with the addition of the Garden of Eden at the summit, equaling ten. Allegorically, the "Purgatorio" represents the Christian life. Christian souls arrive escorted by an angel, singing "In exitu Israel de Aegypto". In his "Letter to Cangrande", Dante explains that this reference to Israel leaving Egypt refers both to the redemption of Christ and to "the conversion of the soul from the sorrow and misery of sin to the state of grace." Appropriately, therefore, it is Easter Sunday when Dante and Virgil arrive. The "Purgatorio" is notable for demonstrating the medieval knowledge of a spherical Earth. During the poem, Dante discusses the different stars visible in the southern hemisphere, the altered position of the sun, and the various time zones of the Earth. At this stage it is, Dante says, sunset at Jerusalem, midnight on the River Ganges, and sunrise in Purgatory. After an initial ascension, Beatrice guides Dante through the nine celestial spheres of Heaven. These are concentric and spherical, as in Aristotelian and Ptolemaic cosmology. While the structures of the "Inferno" and "Purgatorio" were based on different classifications of sin, the structure of the "Paradiso" is based on the four cardinal virtues and the three theological virtues. The seven lowest spheres of Heaven deal solely with the cardinal virtues of Prudence, Fortitude, Justice and Temperance. The first three spheres involve a deficiency of one of the cardinal virtues – the Moon, containing the inconstant, whose vows to God waned as the moon and thus lack fortitude; Mercury, containing the ambitious, who were virtuous for glory and thus lacked justice; and Venus, containing the lovers, whose love was directed towards another than God and thus lacked Temperance. The final four incidentally are positive examples of the cardinal virtues, all led on by the Sun, containing the prudent, whose wisdom lighted the way for the other virtues, to which the others are bound (constituting a category on its own). Mars contains the men of fortitude who died in the cause of Christianity; Jupiter contains the kings of Justice; and Saturn contains the temperate, the monks who abided by the contemplative lifestyle. The seven subdivided into three are raised further by two more categories: the eighth sphere of the fixed stars that contain those who achieved the theological virtues of faith, hope and love, and represent the Church Triumphant – the total perfection of humanity, cleansed of all the sins and carrying all the virtues of heaven; and the ninth circle, or Primum Mobile (corresponding to the Geocentricism of Medieval astronomy), which contains the angels, creatures never poisoned by original sin. Topping them all is the Empyrean, which contains the essence of God, completing the 9-fold division to 10. Dante meets and converses with several great saints of the Church, including Thomas Aquinas, Bonaventure, Saint Peter, and St. John. The "Paradiso" is consequently more theological in nature than the "Inferno" and the "Purgatorio". However, Dante admits that the vision of heaven he receives is merely the one his human eyes permit him to see, and thus the vision of heaven found in the Cantos is Dante's personal vision. The "Divine Comedy" finishes with Dante seeing the Triune God. In a flash of understanding that he cannot express, Dante finally understands the mystery of Christ's divinity and humanity, and his soul becomes aligned with God's love: According to the Italian Dante Society, no original manuscript written by Dante has survived, although there are many manuscript copies from the 14th and 15th centuries – some 800 are listed on their site. The first printed edition was published in Foligno, Italy, by Johann Numeister and Evangelista Angelini da Trevi on 1472. Of the 300 copies printed, fourteen still survive. The original printing press is on display in the "Oratorio della Nunziatella" in Foligno. The "Divine Comedy" can be described simply as an allegory: each canto, and the episodes therein, can contain many alternative meanings. Dante's allegory, however, is more complex, and, in explaining how to read the poem – see the "Letter to Cangrande" – he outlines other levels of meaning besides the allegory: the historical, the moral, the literal, and the anagogical. The structure of the poem is also quite complex, with mathematical and numerological patterns distributed throughout the work, particularly threes and nines, which are related to the Holy Trinity. The poem is often lauded for its particularly human qualities: Dante's skillful delineation of the characters he encounters in Hell, Purgatory, and Paradise; his bitter denunciations of Florentine and Italian politics; and his powerful poetic imagination. Dante's use of real characters, according to Dorothy Sayers in her introduction to her translation of the "Inferno", allows Dante the freedom of not having to involve the reader in description, and allows him to "[make] room in his poem for the discussion of a great many subjects of the utmost importance, thus widening its range and increasing its variety." Dante called the poem "Comedy" (the adjective "Divine" was added later, in the 16th century) because poems in the ancient world were classified as High ("Tragedy") or Low ("Comedy"). Low poems had happy endings and were written in everyday language, whereas High poems treated more serious matters and were written in an elevated style. Dante was one of the first in the Middle Ages to write of a serious subject, the Redemption of humanity, in the low and "vulgar" Italian language and not the Latin one might expect for such a serious topic. Boccaccio's account that an early version of the poem was begun by Dante in Latin is still controversial. Although the "Divine Comedy" is primarily a religious poem, discussing sin, virtue, and theology, Dante also discusses several elements of the science of his day (this mixture of science with poetry has received both praise and criticism over the centuries). The "Purgatorio" repeatedly refers to the implications of a spherical Earth, such as the different stars visible in the southern hemisphere, the altered position of the sun, and the various time zones of the Earth. For example, at sunset in Purgatory it is midnight at the Ebro, dawn in Jerusalem, and noon on the River Ganges: Dante travels through the centre of the Earth in the "Inferno", and comments on the resulting change in the direction of gravity in Canto XXXIV (lines 76–120). A little earlier (XXXIII, 102–105), he queries the existence of wind in the frozen inner circle of hell, since it has no temperature differentials. Inevitably, given its setting, the "Paradiso" discusses astronomy extensively, but in the Ptolemaic sense. The "Paradiso" also discusses the importance of the experimental method in science, with a detailed example in lines 94–105 of Canto II: A briefer example occurs in Canto XV of the "Purgatorio" (lines 16–21), where Dante points out that both theory and experiment confirm that the angle of incidence is equal to the angle of reflection. Other references to science in the "Paradiso" include descriptions of clockwork in Canto XXIV (lines 13–18), and Thales' theorem about triangles in Canto XIII (lines 101–102). Galileo Galilei is known to have lectured on the "Inferno", and it has been suggested that the poem may have influenced some of Galileo's own ideas regarding mechanics. In 1919, Miguel Asín Palacios, a Spanish scholar and a Catholic priest, published "La Escatología musulmana en la Divina Comedia" ("Islamic Eschatology in the Divine Comedy"), an account of parallels between early Islamic philosophy and the "Divine Comedy". Palacios argued that Dante derived many features of and episodes about the hereafter from the spiritual writings of Ibn Arabi and from the Isra and Mi'raj or night journey of Muhammad to heaven. The latter is described in the "ahadith" and the "Kitab al Miraj" (translated into Latin in 1264 or shortly before as "Liber Scalae Machometi", "The Book of Muhammad's Ladder"), and has significant similarities to the "Paradiso", such as a sevenfold division of Paradise, although this is not unique to the "Kitab al Miraj" or Islamic cosmology. Some "superficial similarities" of the "Divine Comedy" to the "Resalat Al-Ghufran" or "Epistle of Forgiveness" of Al-Ma'arri have also been mentioned in this debate. The "Resalat Al-Ghufran" describes the journey of the poet in the realms of the afterlife and includes dialogue with people in Heaven and Hell, although, unlike the "Kitab al Miraj", there is little description of these locations, and it is unlikely that Dante borrowed from this work. Dante did, however, live in a Europe of substantial literary and philosophical contact with the Muslim world, encouraged by such factors as Averroism ("Averrois, che'l gran comento feo" Commedia, Inferno, IV, 144, meaning "Averrois, who wrote the great comment") and the patronage of Alfonso X of Castile. Of the twelve wise men Dante meets in Canto X of the "Paradiso", Thomas Aquinas and, even more so, Siger of Brabant were strongly influenced by Arabic commentators on Aristotle. Medieval Christian mysticism also shared the Neoplatonic influence of Sufis such as Ibn Arabi. Philosopher Frederick Copleston argued in 1950 that Dante's respectful treatment of Averroes, Avicenna, and Siger of Brabant indicates his acknowledgement of a "considerable debt" to Islamic philosophy. Although this philosophical influence is generally acknowledged, many scholars have not been satisfied that Dante was influenced by the "Kitab al Miraj". The 20th century Orientalist expressed skepticism regarding the claimed similarities, and the lack of evidence of a vehicle through which it could have been transmitted to Dante. Even so, while dismissing the probability of some influences posited in Palacios' work, Gabrieli conceded that it was "at least possible, if not probable, that Dante may have known the "Liber Scalae" and have taken from it certain images and concepts of Muslim eschatology". Shortly before her death, the Italian philologist pointed out that, during his stay at the court of Alfonso X, Dante's mentor Brunetto Latini met Bonaventura de Siena, a Tuscan who had translated the "Kitab al Miraj" from Arabic into Latin. Corti speculates that Brunetto may have provided a copy of that work to Dante. René Guénon, a Sufi convert and scholar of Ibn Arabi, rejected in "The Esoterism of Dante" the theory of his influence (direct or indirect) on Dante. Palacios' theory that Dante was influenced by Ibn Arabi was satirized by the Turkish academic Orhan Pamuk in his novel "The Black Book". The "Divine Comedy" was not always as well-regarded as it is today. Although recognized as a masterpiece in the centuries immediately following its publication, the work was largely ignored during the Enlightenment, with some notable exceptions such as Vittorio Alfieri; Antoine de Rivarol, who translated the "Inferno" into French; and Giambattista Vico, who in the "Scienza nuova" and in the "Giudizio su Dante" inaugurated what would later become the romantic reappraisal of Dante, juxtaposing him to Homer. The "Comedy" was "rediscovered" in the English-speaking world by William Blake – who illustrated several passages of the epic – and the Romantic writers of the 19th century. Later authors such as T. S. Eliot, Ezra Pound, Samuel Beckett, C. S. Lewis and James Joyce have drawn on it for inspiration. The poet Henry Wadsworth Longfellow was its first American translator, and modern poets, including Seamus Heaney, Robert Pinsky, John Ciardi, W. S. Merwin, and Stanley Lombardo, have also produced translations of all or parts of the book. In Russia, beyond Pushkin's translation of a few tercets, Osip Mandelstam's late poetry has been said to bear the mark of a "tormented meditation" on the "Comedy". In 1934, Mandelstam gave a modern reading of the poem in his labyrinthine "Conversation on Dante". In T. S. Eliot's estimation, "Dante and Shakespeare divide the world between them. There is no third." For Jorge Luis Borges the "Divine Comedy" was "the best book literature has achieved". New English translations of the "Divine Comedy" continue to be published regularly. Notable English translations of the complete poem include the following. A number of other translators, such as Robert Pinsky, have translated the "Inferno" only. The "Divine Comedy" has been a source of inspiration for countless artists for almost seven centuries. There are many references to Dante's work in literature. In music, Franz Liszt was one of many composers to write works based on the "Divine Comedy". In sculpture, the work of Auguste Rodin includes themes from Dante, and many visual artists have illustrated Dante's work, as shown by the examples above. There have also been many references to the "Divine Comedy" in cinema, television, digital arts, comics and video games.
https://en.wikipedia.org/wiki?curid=31140
Transport for London Transport for London (TfL) is a local government body responsible for the transport system in Greater London, England. TfL has responsibility for London's network of principal road routes, for various rail networks including the London Underground, London Overground, Docklands Light Railway and TfL Rail. It does not control National Rail services in London, however, but does control London's trams, buses and taxis, cycling provision and river services. The underlying services are provided by a mixture of wholly owned subsidiary companies (principally London Underground), by private sector franchisees (the remaining rail services, trams and most buses) and by licensees (some buses, taxis and river services). TfL is also responsible, jointly with the national Department for Transport (DfT), for commissioning the construction of the new Crossrail line, and will be responsible for franchising its operation once completed. In 2019–20, TfL had a budget of £10.3 billion, 47% of which came from fares. The rest came from grants, mainly from the Greater London Authority (33%), borrowing (8%), congestion charging and other income (12%). Direct central government funding for operations ceased in 2018. In 2020, during the COVID-19 pandemic in the United Kingdom, TfL sought urgent government support as fare revenues dropped 90%, and proposed near-40% cuts in capital expenditure. TfL was created in 2000 as part of the Greater London Authority (GLA) by the Greater London Authority Act 1999. It gained most of its functions from its predecessor London Regional Transport in 2000. The first Commissioner of TfL was Bob Kiley. The first chair was then-Mayor of London Ken Livingstone, and the first deputy chair was Dave Wetzel. Livingstone and Wetzel remained in office until the election of Boris Johnson as Mayor in 2008. Johnson took over as chairman, and in February 2009 fellow-Conservative Daniel Moylan was appointed as his deputy. TfL did not take over responsibility for the London Underground until 2003, after the controversial public-private partnership (PPP) contract for maintenance had been agreed. Management of the Public Carriage Office had previously been a function of the Metropolitan Police. Transport for London Group Archives holds business records for TfL and its predecessor bodies and transport companies. Some early records are also held on behalf of TfL Group Archives at the London Metropolitan Archives. After the bombings on the underground and bus systems on 7 July 2005, many staff were recognised in the 2006 New Year honours list for the work they did. They helped survivors out, removed bodies, and got the transport system up and running, to get the millions of commuters back out of London at the end of the workday. On 1 June 2008, the drinking of alcoholic beverages was banned on Tube and London Overground trains, buses, trams, Docklands Light Railway and all stations operated by TfL across London but not those operated by other rail companies. Carrying open containers of alcohol was also banned on public transport operated by TfL. The Mayor of London and TfL announced the ban with the intention of providing a safer and more pleasant experience for passengers. There were "Last Round on the Underground" parties on the night before the ban came into force. Passengers refusing to observe the ban may be refused travel and asked to leave the premises. The GLA reported in 2011 that assaults on London Underground staff had fallen by 15% since the introduction of the ban. TfL commissioned a survey in 2013 which showed that 15% of women using public transport in London had been the subject of some form of unwanted sexual behaviour but that 90% of incidents were not reported to the police. In an effort to reduce sexual offences and increase reporting, TfL—in conjunction with the British Transport Police, Metropolitan Police Service, and City of London Police—launched Project Guardian. In 2014, Transport for London launched the 100 years of women in transport campaign in partnership with the Department for Transport, Crossrail, Network Rail, Women's Engineering Society and the Women's Transportation Seminar (WTS). The programme was a celebration of the significant role that women had played in transport over the previous 100 years, following the centennial anniversary of the First World War, when 100,000 women entered the transport industry to take on the responsibilities held by men who enlisted for military service. In 2020, during the COVID-19 pandemic in the United Kingdom, TfL services were reduced. All Night Overground and Night Tube services, as well as all services on the Waterloo & City line, were suspended from 20 March, and 40 tube stations were closed on the same day. The Mayor of London and TfL urged people to only use public transport if absolutely essential, so that it could be used by critical workers. The London Underground brought in new measures on 25 March to combat the spread of the virus, by slowing the flow of passengers onto platforms. Measures included the imposition of queuing at ticket gates and turning off some escalators. In April, TfL trialled changes encouraging passengers to board London buses by the middle doors to lessen the risks to drivers, after the deaths of 14 TfL workers including nine drivers. This measure was extended to all routes on 20 April, and passengers were no longer required to pay, so that they did not need to use the card reader near the driver. On 22 April, London mayor Sadiq Khan warned that TfL could run out of money to pay staff by the end of April unless the government stepped in. Two days later, TfL announced it was furloughing around 7,000 employees, about a quarter of its staff, to help mitigate a 90% reduction in fare revenues. Since London entered lockdown on 23 March, Tube journeys had fallen by 95% and bus journeys by 85%, though TfL continued to operate limited services to allow "essential travel" for key workers. Without government financial support for TfL, London Assembly members warned that Crossrail, the Northern line extension and other projects such as step-free schemes at tube stations could be delayed. On 7 May, it was reported that TfL had requested £2 billion in state aid to keep services running until September 2020. On 12 May, TfL documents warned it expected to lose £4bn due to the pandemic and said it needed £3.2bn to balance a proposed emergency budget for 2021, having lost 90% of its overall income. Without an agreement with the government, deputy mayor for transport Heidi Alexander said TfL might have to issue a 'Section 114 notice' - the equivalent of a public body going bust. On 14 May, the UK Government agreed £1.6bn in emergency funding to keep Tube and bus services running until September - a bailout condemned as "a sticking plaster" by Khan who called for agreement on a new longer-term funding model. On 1 June 2020, TfL released details of its emergency budget for 2020-2021 revealing it planned to reduce capital investment by 39% from £1.3bn to £808m, and to cut maintenance and renewal spending by 38% to £201m. TfL is controlled by a board whose members are appointed by the Mayor of London, a position held by Sadiq Khan since May 2016. The Commissioner of Transport for London reports to the Board and leads a management team with individual functional responsibilities. The body is organised in two main directorates and corporate services, each with responsibility for different aspects and modes of transport. The two main directorates are: TfL owns and operates the London Transport Museum in Covent Garden, a museum that conserves, explores and explains London's transport system heritage over the last 200 years. It both explores the past, with a retrospective look at past days since 1800, and the present-day transport developments and upgrades. The museum also has an extensive depot, situated at Acton, that contains material impossible to display at the central London museum, including many additional road vehicles, trains, collections of signs and advertising materials. The depot has several open weekends each year. There are also occasional heritage train runs on the Metropolitan line. TfL's Surface Transport and Traffic Operations Centre (STTOC) was officially opened by Prince Andrew, Duke of York in November 2009. The centre monitors and coordinates official responses to traffic congestion, incidents and major events in London. London Buses Command and Control Centre (CentreComm), London Streets Traffic Control Centre (LSTCC) and the Metropolitan Police Traffic Operation Control Centre (MetroComm) were brought together under STTOC. STTOC played an important part in the security and smooth running of the 2012 Summer Olympics. The London Underground Network Operations Centre is now located on the fifth floor of Palestra and not within STTOC. The centre featured in the 2013 BBC Two documentary series "". Transport for London introduced the "Connect" project for radio communications during the 2000s, to improve radio connections for London Underground staff and the emergency services. The system replaced various separate radio systems for each tube line, and was funded under a private finance initiative. The supply contract was signed in November 1999 with Motorola as the radio provider alongside Thales. Citylink's shareholders are Thales Group (33 per cent), Fluor Corporation (18%), Motorola (10%), Laing Investment (19.5%) and HSBC (19.5%). The cost of the design, build and maintain contract was £2 billion over twenty years.Various subcontractors were used for the installation work, including Brookvex and Fentons. A key reasoning for the introduction of the system was in light of the King's Cross fire disaster, where efforts by the emergency services were hampered by a lack of radio coverage below ground. Work was due to be completed by the end of 2002, although suffered delays due to the necessity of installing the required equipment on an ageing railway infrastructure with no disruption to the operational railway. On 5 June 2006 the London Assembly published the 7 July Review Committee report, which urged TfL to speed up implementation of the Connect system. The East London line was chosen as the first line to receive the TETRA radio in February 2006, as it was the second smallest line and is a mix of surface and sub surface. In the same year it was rolled out to the District, Circle, Hammersmith & City, Metropolitan and Victoria lines, with the Bakerloo, Piccadilly, Jubilee, Waterloo & City and Central lines following in 2007. The final line, the Northern, was handed over in November 2008. The 2010 TfL investment programme included the project "LU-PJ231 LU-managed Connect communications", which provided Connect with a new transmission and radio system comprising 290 cell sites with two to three base stations, 1,400 new train mobiles, 7,500 new telephone links and 180 CCTV links. Most of the transport modes that come under the control of TfL have their own charging and ticketing regimes for single fare. Buses and trams share a common fare and ticketing regime, and the DLR, Overground, Underground, and National Rail services another. Rail service fares in the capital are calculated by a zonal fare system. London is divided into eleven fare zones, with every station on the London Underground, London Overground, Docklands Light Railway and, since 2007, on National Rail services, being in one, or in some cases, two zones. The zones are mostly concentric rings of increasing size emanating from the centre of London. They are (in order): Superimposed on these mode-specific regimes is the Travelcard system, which provides zonal tickets with validities from one day to one year, and off-peak variants. These are accepted on the DLR, buses, railways, trams, and the Underground, and provide a discount on many river services fares. The Oyster card is a contactless smart card system introduced for the public in 2003, which can be used to pay individual fares (pay as you go) or to carry various Travelcards and other passes. It is used by scanning the card at a yellow card reader. Such readers are found on ticket gates where otherwise a paper ticket could be fed through, allowing the gate to open and the passenger to walk through, and on stand-alone Oyster validators, which do not operate a barrier. Since 2010, Oyster Pay as you go has been available on all National Rail services within London. Oyster Pay as you go has a set of daily maximum charges that are the same as buying the nearest equivalent Day Travelcard. Virtually all contactless Visa, Maestro, MasterCard and American Express debit and credit cards issued in the UK, and also most international cards supporting contactless payment, are accepted for travel on London Underground, London Overground, Docklands Light Railway, most National Rail, London Tramlink and Bus services. This works in the same way for the passenger as an Oyster card, including the use of capping and reduced fares compared to paper tickets. The widespread use of contactless payment - around 25 million journeys each week - has meant that TfL is now one of Europe's largest contactless merchants, with one in 10 contactless transactions in the UK taking place on the TfL network. Mobile payments - such as Apple Pay, Google Pay and Samsung Pay - are accepted in the same way as contactless payment cards. The fares are the same as those charged on a debit or credit card, including the same daily capping. In 2020, one in five journeys are made using mobile devices instead of using contactless bank cards, and TfL had become the most popular Apple Pay merchant in the UK. TfL's expertise in contactless payments has led other cities such as New York, Sydney and Boston to licence the technology from TfL and Cubic. Each of the main transport units has its own corporate identity, formed by differently coloured versions of the standard roundel logo and adding appropriate lettering across the horizontal bar. The roundel rendered in blue without any lettering represents TfL as a whole (see Transport for London logo), as well as used in situations where lettering on the roundel is not possible (such as bus receipts, where a logo is a blank roundel with the name "London Buses" to the right). The same range of colours is also used extensively in publicity and on the TfL website. Transport for London has always mounted advertising campaigns to encourage use of the Underground. For example, in 1999, they commissioned artist Stephen Whatley to paint an interior – 'The Grand Staircase' – which he did on location inside Buckingham Palace. This painting was reproduced on posters and displayed all over the London Underground. In 2010 they commissioned artist Mark Wallinger to assist them in celebrating the 150th anniversary of the Underground, by creating the Labyrinth Project, with one enamel plaque mounted permanently in each of the Tube's 270 stations. In 2015, in partnership with the London Transport Museum and sponsored by Exterion Media, TfL launched Transported by Design, an 18-month programme of activities. The intention is to showcase the importance of both physical and service design across London's transport network. In October 2015, after two months of public voting, the black cab topped the list of favourite London transport icons, which also included the original Routemaster bus and the Tube map, among others. In 2016, the programme held exhibitions, walks and a festival at Regent Street on 3 July. In May 2019, TfL banned advertising from Saudi Arabia, Pakistan and the United Arab Emirates due to their poor human rights records. This brought the number of countries to 11 from which TfL has banned adverts, due to them having the death penalty for homosexuals. Countries previously banned from advertising were Iran, Nigeria, Saudi Arabia, Somalia, Sudan and Yemen.
https://en.wikipedia.org/wiki?curid=31145
Transfer function In engineering, a transfer function (also known as system function or network function) of an electronic or control system component is a mathematical function which theoretically models the device's output for each possible input. In its simplest form, this function is a two-dimensional graph of an independent scalar input versus the dependent scalar output, called a transfer curve or characteristic curve. Transfer functions for components are used to design and analyze systems assembled from components, particularly using the block diagram technique, in electronics and control theory. The dimensions and units of the transfer function model the output response of the device for a range of possible inputs. For example, the transfer function of a two-port electronic circuit like an amplifier might be a two-dimensional graph of the scalar voltage at the output as a function of the scalar voltage applied to the input; the transfer function of an electromechanical actuator might be the mechanical displacement of the movable arm as a function of electrical current applied to the device; the transfer function of a photodetector might be the output voltage as a function of the luminous intensity of incident light of a given wavelength. The term "transfer function" is also used in the frequency domain analysis of systems using transform methods such as the Laplace transform; here it means the amplitude of the output as a function of the frequency of the input signal. For example, the transfer function of an electronic filter is the voltage amplitude at the output as a function of the frequency of a constant amplitude sine wave applied to the input. For optical imaging devices, the optical transfer function is the Fourier transform of the point spread function (hence a function of spatial frequency). Transfer functions are commonly used in the analysis of systems such as single-input single-output filters in the fields of signal processing, communication theory, and control theory. The term is often used exclusively to refer to linear time-invariant (LTI) systems. Most real systems have non-linear input/output characteristics, but many systems, when operated within nominal parameters (not "over-driven") have behavior close enough to linear that LTI system theory is an acceptable representation of the input/output behavior. The descriptions below are given in terms of a complex variable, formula_1, which bears a brief explanation. In many applications, it is sufficient to define formula_2 (thus formula_3), which reduces the Laplace transforms with complex arguments to Fourier transforms with real argument ω. The applications where this is common are ones where there is interest only in the steady-state response of an LTI system, not the fleeting turn-on and turn-off behaviors or stability issues. That is usually the case for signal processing and communication theory. Thus, for continuous-time input signal formula_4 and output formula_5, the transfer function formula_6 is the linear mapping of the Laplace transform of the input, formula_7, to the Laplace transform of the output formula_8: or In discrete-time systems, the relation between an input signal formula_4 and output formula_5 is dealt with using the z-transform, and then the transfer function is similarly written as formula_13 and this is often referred to as the pulse-transfer function. Consider a linear differential equation with constant coefficients where "u" and "r" are suitably smooth functions of "t", and "L" is the operator defined on the relevant function space, that transforms "u" into "r". That kind of equation can be used to constrain the output function "u" in terms of the "forcing" function "r". The transfer function can be used to define an operator formula_15 that serves as a right inverse of "L", meaning that formula_16. Solutions of the "homogeneous", constant-coefficient differential equation formula_17 can be found by trying formula_18. That substitution yields the characteristic polynomial The inhomogeneous case can be easily solved if the input function "r" is also of the form formula_20. In that case, by substituting formula_21 one finds that formula_22 if we define Taking that as the definition of the transfer function requires careful disambiguation between complex vs. real values, which is traditionally influenced by the interpretation of "abs(H(s))" as the gain and "-atan(H(s))" as the phase lag. Other definitions of the transfer function are used: for example formula_24 A general sinusoidal input to a system of frequency formula_25 may be written formula_26. The response of a system to a sinusoidal input beginning at time formula_27 will consist of the sum of the steady-state response and a transient response. The steady-state response is the output of the system in the limit of infinite time, and the transient response is the difference between the response and the steady state response (It corresponds to the homogeneous solution of the above differential equation.) The transfer function for an LTI system may be written as the product: where "sPi" are the "N" roots of the characteristic polynomial and will therefore be the poles of the transfer function. Consider the case of a transfer function with a single pole formula_29 where formula_30. The Laplace transform of a general sinusoid of unit amplitude will be formula_31. The Laplace transform of the output will be formula_32 and the temporal output will be the inverse Laplace transform of that function: The second term in the numerator is the transient response, and in the limit of infinite time it will diverge to infinity if "σP" is positive. In order for a system to be stable, its transfer function must have no poles whose real parts are positive. If the transfer function is strictly stable, the real parts of all poles will be negative, and the transient behavior will tend to zero in the limit of infinite time. The steady-state output will be: The frequency response (or "gain") "G" of the system is defined as the absolute value of the ratio of the output amplitude to the steady-state input amplitude: which is just the absolute value of the transfer function formula_36 evaluated at formula_37. This result can be shown to be valid for any number of transfer function poles. Let formula_38 be the input to a general linear time-invariant system, and formula_39 be the output, and the bilateral Laplace transform of formula_38 and formula_39 be Then the output is related to the input by the transfer function formula_36 as and the transfer function itself is therefore In particular, if a complex harmonic signal with a sinusoidal component with amplitude formula_46, angular frequency formula_47 and phase formula_48, where arg is the argument is input to a linear time-invariant system, then the corresponding component in the output is: Note that, in a linear time-invariant system, the input frequency formula_52 has not changed, only the amplitude and the phase angle of the sinusoid has been changed by the system. The frequency response formula_53 describes this change for every frequency formula_52 in terms of "gain": and "phase shift": The phase delay (i.e., the frequency-dependent amount of delay introduced to the sinusoid by the transfer function) is: The group delay (i.e., the frequency-dependent amount of delay introduced to the envelope of the sinusoid by the transfer function) is found by computing the derivative of the phase shift with respect to angular frequency formula_52, The transfer function can also be shown using the Fourier transform which is only a special case of the bilateral Laplace transform for the case where formula_60. While any LTI system can be described by some transfer function or another, there are certain "families" of special transfer functions that are commonly used. Some common transfer function families and their particular characteristics are: In control engineering and control theory the transfer function is derived using the Laplace transform. The transfer function was the primary tool used in classical control engineering. However, it has proven to be unwieldy for the analysis of multiple-input multiple-output (MIMO) systems, and has been largely supplanted by state space representations for such systems. In spite of this, a transfer matrix can always be obtained for any linear system, in order to analyze its dynamics and other properties: each element of a transfer matrix is a transfer function relating a particular input variable to an output variable. A useful representation bridging state space and transfer function methods was proposed by Howard H. Rosenbrock and is referred to as Rosenbrock system matrix. In optics, modulation transfer function indicates the capability of optical contrast transmission. For example, when observing a series of black-white-light fringes drawn with a specific spatial frequency, the image quality may decay. White fringes fade while black ones turn brighter. The modulation transfer function in a specific spatial frequency is defined by where modulation (M) is computed from the following image or light brightness: Transfer functions do not properly exist for many non-linear systems. For example, they do not exist for relaxation oscillators; however, describing functions can sometimes be used to approximate such nonlinear time-invariant systems.
https://en.wikipedia.org/wiki?curid=31146
Twiglets Twiglets are a wheat-based snack with a "distinctive knobbly shape" similar to that of a small twig. The taste of Twiglets derives from the yeast extract used in its coating, and has been compared to that of Marmite. They are marketed in the United Kingdom and packaged in 24 g, 45 g, 105 g and 150 g bags, and in 200 g tubs. Twiglets were invented in 1929 by a French biscuit maker named J. Rondalin, a technical manager at Peek Freans' Bermondsey factory, who added brewer's yeast to a leftover batch of Vitawheat dough. They were first launched onto the consumer market in 1932 by Peek Freans. Today, Twiglets are manufactured in Aintree by United Biscuits subsidiary Jacob's. During the Christmas season, Twiglets were traditionally sold in drum-shaped tin boxes as a high class cocktail accompaniment from the 1930s until the 1970s. In modern times, the tin boxes have been substituted with large cardboard tubes decorated with seasonal themes. In the early 1990s, a range of tangy Worcester Sauce Twiglets was introduced. Jacobs also released a curry flavoured edition from 1999–2001, in collaboration with several Indian restaurant chains in Northern England. From 2010 until 2012, Tangy Twiglets were briefly re-released as a limited edition to commemorate Twiglets' 80th anniversary. Twiglets used to be manufactured partly from grain prepared using hammer milling machinery located at the Parker Brothers Lark Roller Mills in Mildenhall. The machinery was powered by water turbines fed by water from the river Lark and this process created the broken grains that give twiglets their crunchy irregular shape and texture. This water powered mill operated until the last decade of the 20th century in this capacity. To celebrate the 85th anniversary of Twiglets in August 2014, United Biscuits hosted an event known as Camp Twiglet on the Cotswolds farm of Blur's Alex James. This included three wigwams made from Twiglets which were attributed to a local artist named Mrs Cakehead.
https://en.wikipedia.org/wiki?curid=31149
Lagrange's theorem (group theory) Lagrange's theorem, in the mathematics of group theory, states that for any finite group "G", the order (number of elements) of every subgroup "H" of "G" divides the order of "G". The theorem is named after Joseph-Louis Lagrange. This can be shown using the concept of left cosets of in . The left cosets are the equivalence classes of a certain equivalence relation on and therefore form a partition of . Specifically, and in are related if and only if there exists in such that "x = yh". If we can show that all cosets of have the same number of elements, then each coset of has precisely || elements. We are then done since the order of times the number of cosets is equal to the number of elements in , thereby proving that the order of divides the order of . To show any two left cosets have the same cardinality, it suffices to demonstrate a bijection between them. Suppose and are two left cosets of . Then define a map by setting . This map is bijective because it has an inverse given by formula_1 This proof also shows that the quotient of the orders || / || is equal to the index (the number of left cosets of in ). If we allow and to be infinite, and write this statement as then, seen as a statement about cardinal numbers, it is equivalent to the axiom of choice. A consequence of the theorem is that the order of any element "a" of a finite group (i.e. the smallest positive integer number "k" with "a""k" = "e," where "e" is the identity element of the group) divides the order of that group, since the order of "a" is equal to the order of the cyclic subgroup generated by "a". If the group has "n" elements, it follows This can be used to prove Fermat's little theorem and its generalization, Euler's theorem. These special cases were known long before the general theorem was proved. The theorem also shows that any group of prime order is cyclic and simple. This in turn can be used to prove Wilson's theorem, that if "p" is prime then "p" is a factor of formula_4. Lagrange's theorem can also be used to show that there are infinitely many primes: if there were a largest prime "p", then a prime divisor "q" of the Mersenne number formula_5 would be such that the order of "2" in the multiplicative group formula_6 (see modular arithmetic) divides the order of formula_7, which is formula_8. Hence formula_9, contradicting the assumption that "p" is the largest prime. Lagrange's theorem raises the converse question as to whether every divisor of the order of a group is the order of some subgroup. This does not hold in general: given a finite group "G" and a divisor "d" of |"G"|, there does not necessarily exist a subgroup of "G" with order "d". The smallest example is "A"4 (the alternating group of degree 4), which has 12 elements but no subgroup of order 6. A "Converse of Lagrange's Theorem" (CLT) group is a finite group with the property that for every divisor of the order of the group, there is a subgroup of that order. It is known that a CLT group must be solvable and that every supersolvable group is a CLT group. However, there exist solvable groups that are not CLT (for example, "A"4) and CLT groups that are not supersolvable (for example, "S"4, the symmetric group of degree 4). There are partial converses to Lagrange's theorem. For general groups, Cauchy's theorem guarantees the existence of an element, and hence of a cyclic subgroup, of order any prime dividing the group order. Sylow's theorem extends this to the existence of a subgroup of order equal to the maximal power of any prime dividing the group order. For solvable groups, Hall's theorems assert the existence of a subgroup of order equal to any unitary divisor of the group order (that is, a divisor coprime to its cofactor). The converse of Lagrange's theorem states that if formula_10 is a divisor of the order of a group formula_11, then there exists a subgroup formula_12 where formula_13. We will examine the group formula_14, the set of even permutations as the subgroup of the Symmetric group formula_15. formula_17 so the divisors are formula_18. Assume to the contrary that there exists a subgroup formula_12 in formula_14 with formula_21. Let formula_22 be the non-cyclic subgroup of formula_14 called the Klein four-group. Let formula_25. Since both formula_12 and formula_22 are subgroups of formula_14, formula_29 is also a subgroup of formula_14. From Lagrange's theorem, the order of formula_29 must divide both formula_32 and formula_33, the orders of formula_12 and formula_22 respectively. The only two positive integers that divide both formula_32 and formula_33 are formula_38 and formula_39. So formula_40 or formula_39. Assume formula_40, then formula_43. If formula_12 does not share any elements with formula_22, then the 5 elements in formula_12 besides the Identity element formula_47 must be of the form formula_48 where formula_49 are distinct elements in formula_50. Since any element of the form formula_48 squared is formula_52, and formula_53, any element of formula_12 in the form formula_48 must be paired with its inverse. Specifically, the remaining 5 elements of formula_12 must come from distinct pairs of elements in formula_14 that are not in formula_22. This is impossible since pairs of elements must be even and cannot total up to 5 elements. Thus, the assumptions that formula_43 is wrong, so formula_60. Then, formula_61 where formula_62, formula_63 must be in the form formula_64 where formula_65 are distinct elements of formula_50. The other four elements in formula_12 are cycles of length 3. Note that the cosets generated by a subgroup of a group is a partition of the group. The cosets generated by a specific subgroup are either identical to each other or disjoint. The index of a subgroup in a group formula_68 is the number of cosets generated by that subgroup. Since formula_69 and formula_70, formula_12 will generate two left cosets, one that is equal to formula_12 and another, formula_73, that is of length 6 and includes all the elements in formula_14 not in formula_12. Since there are only 2 distinct cosets generated by formula_12, then formula_12 must be normal. Because of that, formula_78. In particular, this is true for formula_79. Since formula_80. Without loss of generality, assume that formula_81. Then formula_82. Transforming back, we get formula_83. Because formula_22 contains all disjoint transpositions in formula_14, formula_86. Hence, formula_87. Since formula_88, we have demonstrated that there is a third element in formula_29. But earlier we showed that formula_60, so we have a contradiction. Therefore, our original assumption that there is a subgroup of order 6 is not true and consequently there is no subgroup of order 6 in formula_14 and the converse of Lagrange's theorem is not necessarily true. Lagrange did not prove Lagrange's theorem in its general form. He stated, in his article "Réflexions sur la résolution algébrique des équations", that if a polynomial in variables has its variables permuted in all ways, the number of different polynomials that are obtained is always a factor of . (For example, if the variables , , and are permuted in all 6 possible ways in the polynomial then we get a total of 3 different polynomials: . Note that 3 is a factor of 6.) The number of such polynomials is the index in the symmetric group of the subgroup of permutations that preserve the polynomial. (For the example of , the subgroup in contains the identity and the transposition .) So the size of divides . With the later development of abstract groups, this result of Lagrange on polynomials was recognized to extend to the general theorem about finite groups which now bears his name. In his "Disquisitiones Arithmeticae" in 1801, Carl Friedrich Gauss proved Lagrange's theorem for the special case of formula_92, the multiplicative group of nonzero integers modulo , where is a prime. In 1844, Augustin-Louis Cauchy proved Lagrange's theorem for the symmetric group . Camille Jordan finally proved Lagrange's theorem for the case of any permutation group in 1861.
https://en.wikipedia.org/wiki?curid=31150
The Princess Bride (film) The Princess Bride is a 1987 American fantasy adventure comedy film directed and co-produced by Rob Reiner, starring Cary Elwes, Robin Wright, Mandy Patinkin, Chris Sarandon, Wallace Shawn, André the Giant, and Christopher Guest. Adapted by William Goldman from his 1973 novel "The Princess Bride", it tells the story of a farmhand named Westley, accompanied by companions befriended along the way, who must rescue his true love Princess Buttercup from the odious Prince Humperdinck. The film essentially preserves the novel's narrative style by presenting the story as a book being read by a grandfather (Peter Falk) to his sick grandson (Fred Savage). The film was first released in the United States on September 25, 1987, and was well-received by critics at the time, but was only a modest box office success. Over time the film has become a cult film. The film is number 50 on the Bravo's "100 Funniest Movies", number 88 on The American Film Institute's (AFI) "AFI's 100 Years...100 Passions" list of the 100 greatest film love stories, and 46 in Channel 4's 50 Greatest Comedy Films list. In 2016, the film was inducted into the National Film Registry, being deemed as "culturally, historically or aesthetically significant". The film is an enactment of a book read to a sick boy from Chicago, who is initially dismissive of the story, by his grandfather, with occasional interruptions of the scenes in this frame story. A beautiful young woman named Buttercup lives on a farm in the fictional country of Florin. Whenever she instructs the farmhand Westley, he complies and answers, "As you wish". She eventually realizes that he truly means "I love you" and that she loves him in return. He leaves to seek his fortune so they can marry, but his ship is attacked by the Dread Pirate Roberts, who is infamous for never leaving survivors, and Westley is believed dead. Five years later, Buttercup is forced into marriage to Prince Humperdinck, heir to the throne of Florin. Before the wedding, she is kidnapped by three outlaws: a short Sicilian boss named Vizzini, a giant from Greenland named Fezzik, and a Spanish fencing master named Inigo Montoya, who seeks revenge against a six-fingered man who killed his father. The outlaws are pursued separately by a masked man in black and Prince Humperdinck with a complement of soldiers. The man in black catches up to the outlaws at the top of the Cliffs of Insanity. He defeats Inigo in a duel and knocks him unconscious, chokes Fezzik until he passes out, and kills Vizzini by tricking him into drinking from a cup containing poison. He takes Buttercup prisoner and they flee, stopping to rest at the edge of a gorge. When Buttercup correctly guesses that he is the Dread Pirate Roberts she becomes enraged at him for killing Westley. When Buttercup sees Humperdinck and his men appearing, she shoves Roberts down a hill and wishes death upon him. As he tumbles down, he shouts, "As you wish!" Realizing Roberts is Westley, she throws herself into the gorge after him and they are reunited. Westley explains the Dread Pirate Roberts is actually a title passed on by subsequent holders: he had taken the title so the previous Roberts could retire. They pass through the dangerous Fire Swamp, which is inhabited by rodents of unusual size (ROUS), but are captured as they leave by Humperdinck and his sadistic vizier Count Rugen, who is revealed to be Inigo's father's killer. Buttercup agrees to return with Humperdinck in exchange for Westley's release, but Humperdinck secretly orders Rugen to lock Westley in his torture chamber, the Pit of Despair. When Buttercup expresses unhappiness at marrying Humperdinck, he promises to search for Westley. However, his real plan is to start a war with the neighboring country of Guilder by killing Buttercup and framing Guilder for her death (he had hired Vizzini to kill her for that same purpose). Meanwhile, Inigo and Fezzik reunite when Humperdinck orders the thieves arrested in the nearby forest, and Fezzik tells Inigo about Rugen. Inigo decides that they need Westley's help to get into the castle. Buttercup berates Humperdinck after learning that he has not tried to find Westley; enraged, Humperdinck imprisons Buttercup in her chambers, and tortures Westley to death. Inigo and Fezzik follow the cries of anguish through the forest. They find Westley's body and bring him to a folk healer, Miracle Max, whom Humperdinck has recently fired. He discovers that Westley is only "mostly dead" due to being sustained by his love for Buttercup, and revives him to a state of heavy paralysis. After Westley, Inigo and Fezzik invade the castle, Humperdinck panics and orders the wedding ceremony shortened. Inigo finds and kills Rugen in a duel, repeatedly taunting him with his greeting of vengeance: "Hello, my name is Inigo Montoya. You killed my father. Prepare to die". Westley finds Buttercup, who is about to commit suicide, assuring her that the marriage is invalid because she never said "I do". Still partly paralyzed, he bluffs his way out of a duel with Humperdinck and they flee the castle. Westley rides away with Buttercup, Inigo and Fezzik before sharing a passionate kiss with Buttercup. Back in the boy's bedroom, the boy eagerly asks his grandfather to read the story to him again the next day, to which the grandfather replies, "As you wish". Rob Reiner, who had been enamored with Goldman's book ever since he was given it as a gift from his father, Carl Reiner, realized he wanted to make the film adaptation after successfully demonstrating his filmmaking skill with the release of "This Is Spinal Tap" in 1984. During production of "Stand by Me", released in 1986, Reiner had spoken to an executive at Paramount Pictures regarding what his next film would be, and suggested the adaptation of "The Princess Bride". He was told they could not, leading Reiner to discover that several studios had previously attempted to bring Goldman's book to the big screen without success. Those previous attempts included 20th Century Fox, which paid Goldman $500,000 for the film rights and to do a screenplay in 1973. Richard Lester was signed to direct and the movie was almost made, but the head of production at Fox was fired and the project was put on hiatus. Goldman subsequently bought back the film rights to the novel with his own money. Other directors had also attempted to adapt the book, including François Truffaut, Robert Redford and Norman Jewison, and at one point, Christopher Reeve was interested in playing Westley in one planned adaption. Reiner found success by gaining financial support from Norman Lear, who Reiner knew from "All in the Family" and who had funded production of "This is Spinal Tap", with the production to be distributed by 20th Century Fox. Reiner worked closely with Goldman to adapt the book for the screenplay. Reiner had quickly decided on Cary Elwes for Westley, based on his performance in "Lady Jane"; however, during the casting period in Los Angeles, Elwes was in Germany on set for "Maschenka". Reiner flew out to Munich to meet with Elwes, confirming his appropriateness for the role. While Reiner and casting director Jane Jenkins auditioned other actors for Westley, they knew Elwes was perfect for the part. Elwes had read the book in his childhood and associated himself with the character of Westley, but never believed he would have the opportunity to play him. Robin Wright was cast late in the process, about a week before filming; Reiner and Jenkins had auditioned a number of English actresses but had not found their ideal Buttercup. Wright's agent had heard of the casting call and encouraged Wright to audition. Though initially shy, Wright impressed Jenkins, and later Reiner. They invited Wright to come meet Goldman at his house. Jenkins recalls: "The doorbell rang. Rob went to the door, and literally, as he opened the door, [Wright] was standing there in this little white summer dress, with her long blonde hair, and she had a halo from the sun. She was backlit by God. And Bill Goldman looked across the room at her, and he said, 'Well, that's what I wrote.' It was the most perfect thing." Mandy Patinkin and Wallace Shawn were early choices for the cast; Shawn in particular was chosen as Vizzini due to his diminutive size to contrast that of the giant Fezzik. When Goldman originally shopped his novel in the early 1970s, his first choice for Fezzik was Andre the Giant, whose wrestling schedule left him unavailable for filming. Goldman's second choice was Arnold Schwarzenegger, who at that time was almost unknown as an actor. However, by the time "The Princess Bride" was finally green-lit, Schwarzenegger was a major film star and the studio could not afford him. Jenkins contacted the World Wrestling Federation to ask about hiring Andre, but were told that the filming conflicted with a wrestling match in Tokyo that would pay him $5 million. Jenkins auditioned other tall men, including Kareem Abdul-Jabbar, Lou Ferrigno and Carel Struycken, but these did not pan out. Near the end of casting, the World Wrestling Federation told Jenkins that Andre's match in Tokyo had been cancelled, clearing him to play the role of Fezzik. The film was shot in various locations in Great Britain and Ireland in late 1986: The framing story scenes, the last to be filmed, were recorded at Shepperton Studios in Surrey. Reiner rented a house in England near these sites and frequently invited the cast over for meals and light-hearted get-togethers. Many cast members believed this helped to create a sense of "family" that helped to improve their performances for the film. Cary Elwes and Mandy Patinkin learned to fence (both left- and right-handed) for the film, and performed these scenes themselves, outside of the two somersaults, which were performed by stunt doubles. They were trained by fencing instructors Bob Anderson and Peter Diamond, both of whom had also worked on training the actors in the original "Star Wars" trilogy. Elwes and Patinkin spent about three weeks prior to filming learning to fence, and spent most of their off-camera free time practicing. Anderson encouraged the two to learn the other's choreography for the fight to help them anticipate the movements and avoid an accident. They also watched as many sword fights from previous films to see how they could improve on those. Popular professional wrestler André the Giant had undergone major back surgery prior to filming and, despite his great size and strength, could not support the weight of Elwes during their fight scene or Wright for a scene at the end of the film. For the wrestling scene, when Elwes hangs on André's back, he was actually walking on a series of ramps below the camera during close-ups. For the wide shots, a stunt double took the place of André. When he was apparently carrying Wright, she was actually suspended by cables. Billy Crystal and Carol Kane spent time before traveling to England to work out the backstory between Miracle Max and his wife, and develop a rapport for their characters. Once on set, Reiner allowed the pair to improvise some of their lines. The original soundtrack album was composed by Mark Knopfler of Dire Straits, and released by Warner Bros. Records in the United States and Vertigo Records internationally in November 1987. The album contains the song "Storybook Love", performed by Willy DeVille and produced by Mark Knopfler. It was nominated for an Academy Award for Best Original Song at the 60th Academy Awards. In his audio commentary of the film on the special edition DVD, director Rob Reiner said that only Knopfler could create a soundtrack to capture the film's quirky yet romantic nature. Reiner was an admirer of Knopfler's work but did not know him before working on the film. He sent the script to him hoping he would agree to score the film. Knopfler agreed on one condition: that somewhere in the film Reiner would include the baseball cap (which had been modified to say "USS Ooral Sea OV-4B") he wore as Marty DiBergi in "This Is Spinal Tap". Reiner was unable to produce the original cap, but did include a similar cap in the grandson's room. Knopfler later said he was joking. The film was initially a modest success, grossing $30.8 million at the United States and Canada box office, on a $16 million production budget. "The Princess Bride" received critical acclaim. On Rotten Tomatoes, the film holds a 97% "Certified Fresh" rating, based on 74 reviews, with an average rating of 8.34/10. The site's consensus states: "A delightfully postmodern fairy tale, "The Princess Bride" is a deft, intelligent mix of swashbuckling, romance, and comedy that takes an age-old damsel-in-distress story and makes it fresh." On Metacritic, the film holds a score of 77 out of 100, based on 20 critics, indicating "generally favorable reviews." Audiences surveyed by CinemaScore gave the film a grade "A+" on scale of A to F. Gene Siskel and Roger Ebert gave the film a "two thumbs up" rating on their television program. Ebert also wrote a very favorable print review in his column for the "Chicago Sun-Times". Richard Corliss of "Time" said the film was fun for the whole family, and later, "Time" listed the film as one of the "Best of '87". Janet Maslin of "The New York Times" praised the cast and the sweetness of the film. "The Princess Bride" was not a major box-office success, but it became a cult classic after its release to the home video market. The film is widely regarded as eminently quotable. Elwes noted in 2017, on the film's 30th anniversary, that fans still frequently come up to him and quote lines from the movie. According to him Wallace Shawn had it "worse", in that any time Wallace made a small error like dropping his keys, people would shout "Inconceivable!" to him. In 2000, readers of "Total Film" magazine voted "The Princess Bride" the 38th greatest comedy film of all time. In 2006, William Goldman's screenplay was selected by the Writers Guild of America as the 84th best screenplay of all time; it earned the same ranking in the Guild's 2013 update. The film was selected number 88 on The American Film Institute's (AFI) "AFI's 100 Years... 100 Passions" listing the 100 greatest film love stories of all time. BBC Radio 5's resident film critic, Mark Kermode, is a fan of the film, frequently considering it a model to which similar films aspire. In December 2011, director Jason Reitman staged a live dramatic reading of "The Princess Bride" script at the Los Angeles County Museum of Art (LACMA), with Paul Rudd as Westley; Mindy Kaling as Buttercup; Patton Oswalt as Vizzini; Kevin Pollak as Miracle Max; Goran Visnjic as Inigo Montoya; Cary Elwes (switching roles) as Humperdinck; director Rob Reiner as the grandfather; and Fred Savage reprising his role as the grandson. In 2013, director Ari Folman released a live-action animated film titled "The Congress", which directly referenced "The Princess Bride". Folman's film starred Robin Wright, playing both a live and animated version of herself, as a digitally cloned actress. In 2014, Cary Elwes wrote "As You Wish: Inconceivable Tales From the Making of The Princess Bride," a behind-the-scenes account of the film's production, co-written with Joe Layden. To help Elwes recall the production, Lear sent him a bound copy of the filming's call sheets. The book debuted at #3 on the New York Times Bestseller list. In addition to a foreword by director Rob Reiner and a limited edition poster, the book includes exclusive photos and interviews with the cast members from the 25th anniversary cast reunion, as well as unique stories and set secrets from the making of the film. In 2018, Savage reprised his role as The Grandson in a PG-13 version of "Deadpool 2" entitled "Once Upon a Deadpool", with Deadpool taking the role of The Narrator and reading the "Deadpool 2"'s story to him at bedtime and skipping over the more adult parts from the R-rated version. In 2020, a bar themed after the film, named "As You Wish," opened in Chicago. The menu features 16 themed cocktails. In North America, the film was released on VHS and Laserdisc in 1988 by Nelson Entertainment, the latter being a "bare bones" release in unmatted full screen. New Line Home Video reissued the VHS in 1994. The film was also released on Video CD by Philips. The Criterion Collection released a matted widescreen version, bare bones version on laserdisc in 1989, supplementing it with liner notes. In 1997 Criterion re-released the Laserdisc as a "special edition". This edition was widescreen and included an audio commentary by Rob Reiner, William Goldman, Andrew Scheinman, Billy Crystal and Peter Falk (this commentary would also later appear on the Criterion Blu-ray and DVD release); excerpts from the novel read by Rob Reiner; behind the scenes footage; a production scrapbook by unit photographer Clive Coote; design sketches by production designer Norman Garwood; and excerpts from the television series "Morton and Hayes", directed by Christopher Guest. By 2000, MGM had acquired the US home video rights to the film (as part of the "pre-1996 PolyGram film library" package) and released the film on VHS and DVD. The DVD release featured the soundtrack remastered in Dolby Digital 5.1 with the film in wide and full screen versions, and included the original US theatrical trailer. The next year MGM re-released the film in another widescreen "special edition", this time with two audio commentaries—one by Rob Reiner, the other by William Goldman—"As You Wish", "Promotional", and "Making Of" featurettes; a "Cary Elwes Video Diary"; the US and UK theatrical trailers; four television spots; a photo gallery; and a collectible booklet. In 2006, MGM and Sony Pictures Home Entertainment released a two-disc set with varying covers—the "Dread Pirate" and "Buttercup" editions. Each featured their respective character, but had identical features: in addition to the features in the previous release were, the "Dread Pirate Roberts: Greatest Legend of the Seven Seas", "Love is Like a Storybook Story", and "Miraculous Make Up" featurettes, "The Quotable Battle of Wits" game, and Fezzik's "Guide to Florin" booklet. Another year later, to commemorate the 20th anniversary of the film, MGM and 20th Century Fox Home Entertainment (whose corporate parent The Walt Disney Company holds all US rights to the film except for US home video rights) released the film with flippable cover art featuring the title displayed in an ambigram. This DVD did not include any of the bonus features from the older editions, but had new short featurettes and a new game. A Blu-ray Disc was released on March 17, 2009, encoded in 5.1 DTS-HD Master Audio. Special features include two audio commentaries, the original theatrical trailer and eight featurettes. In 2007, the film was released for download in the iTunes Store. The film is also available in Region 2 where it is published by Lions Gate Entertainment. Its extras are the theatrical trailer and text filmographies. The Criterion Collection released the film on Blu-ray and DVD on October 30, 2018. It included a new 4K digital transfer, the same audio commentary from the Criterion LaserDisc release, an edited 1987 audiobook reading of Goldman's novel by director Rob Reiner, new programs on William Goldman's screenplay and tapestry, a new interview with art director Richard Holland, an essay by author Sloane Crosley, and a Blu-ray exclusive book highlighting four screenplays, as well as Goldman's introduction to the 1995 screenplay. "The Princess Bride" has been made available on May 1, 2020 on The Walt Disney Company's streaming service Disney+. It was announced that composer Adam Guettel was working with William Goldman on a musical adaptation of "The Princess Bride" in 2006. The project was abandoned in February 2007 after Goldman reportedly demanded 75 percent of the author's share, even though Guettel was writing both the music and the lyrics. In late 2013, Disney Theatrical Productions announced that they would adapt a stage musical adaptation of "The Princess Bride". A website was launched a couple of months later. In 2016, Rob Reiner said the project was still in development despite "roadblocks" and that Marc Shaiman, Randy Newman and John Mayer had all been approached to write songs, but had turned them down. In 2018, "The Princess Bride" was adapted by players of a virtual reality social game, "Rec Room", into what is likely to be the world's first full-length virtual reality stage production. The duration of the production was approximately 80 minutes, and ran for a total of four shows. In 2008, PlayRoom Entertainment released "The Princess Bride: Storming the Castle", a board game based on the film. "The Princess Bride Game" is a casual video game developed and published by New York game development studio Worldwide Biggies. In June 2020, a "fan made" recreation of "The Princess Bride" was released on Quibi called "Home Movie: The Princess Bride". It was produced by Jason Reitman during the COVID-19 pandemic quarantine in March 2020 with help from an ensemble cast who filmed themselves recreating the various scenes at their homes to raise money for the World Central Kitchen charity. Reitman received backing from Jeffrey Katzenberg for the project, as well as the rights to stream the film on his Quibi service. The "fan-made" film also had approval from Norman Lear and the estate of William Goldman, and Mark Knopfler permitted the use of his music. Rob Reiner approved of the project, even briefly stepping in to play the grandfather. In a September 2019 biographical article on Norman Lear in "Variety", Sony Pictures Entertainment CEO Tony Vinciquerra, speaking of Lear's works and interest in remaking them, stated "Very famous people whose names I won’t use, but they want to redo "The Princess Bride."" The reaction to this via social media was very negative, with fans of the film asserting that a remake would be a bad idea and, in reference to the film, "inconceivable". Elwes paraphrased the film saying "There’s a shortage of perfect movies in this world. It would be a pity to damage this one." Jamie Lee Curtis, Guest's wife, stated "there is only ONE "The Princess Bride" and it's William Goldman and [Reiner]'s".
https://en.wikipedia.org/wiki?curid=31153
Taxil hoax The Taxil hoax was an 1890s hoax of exposure by Léo Taxil intended to mock not only Freemasonry but also the Catholic Church's opposition to it. Léo Taxil was the pen name of Marie Joseph Gabriel Antoine Jogand-Pagès, who had been accused earlier of libel regarding a book he wrote called "The Secret Loves of Pope Pius IX". On April 20, 1884, Pope Leo XIII published an encyclical, "Humanum genus", that said that the human race was After this encyclical, Taxil underwent a public, feigned conversion to Roman Catholicism and announced his intention of repairing the damage he had done to the true faith. The first book produced by Taxil after his conversion was a four-volume history of Freemasonry, which contained fictitious eyewitness verifications of their participation in Satanism. With a collaborator who published as "Dr. Karl Hacks", Taxil wrote another book called "The Devil in the Nineteenth Century", which introduced a new character, Diana Vaughan, a supposed descendant of the Rosicrucian alchemist Thomas Vaughan. The book contained many tales about her encounters with incarnate demons, one of whom was supposed to have written prophecies on her back with its tail, and another who played the piano while in the shape of a crocodile. Diana was supposedly involved in Satanic freemasonry but was redeemed when one day she professed admiration for Joan of Arc, at whose name the demons were put to flight. As Diana Vaughan, Taxil published a book called "Eucharistic Novena", a collection of prayers which were praised by the Pope. On April 19, 1897, Taxil called a press conference at which he said he would introduce Diana Vaughan to the press. He instead announced that his revelations about the Freemasons were fictitious. He thanked the clergy for their assistance in giving publicity to his wild claims. The confession was printed, in its entirety, in the Parisian newspaper "Le Frondeur", on April 25, 1897, titled: "Twelve Years Under the Banner of the Church, The Prank Of Palladism. Miss Diana Vaughan–The Devil At The Freemasons. A Conference held by M. Léo Taxil, at the Hall of the Geographic Society in Paris". The hoax material is still used to this day. Chick Publications publishes such a tract called "The Curse of Baphomet" and Randy Noblitt's book on satanic ritual abuse, "Cult and Ritual Abuse", also cites the Taxil hoax. In the magazine "National Magazine, an Illustrated American Monthly", Volume XXIV: April – September, 1906, pages 228 and 229, Taxil is quoted as giving his true reasons behind the hoax. Ten months later, on March 31, 1907, Taxil died. A series of paragraphs about Lucifer are frequently associated with the Taxil hoax. They read: While this quotation was published by Abel Clarin de la Rive in his "Woman and Child in Universal Freemasonry", it does not appear in Taxil's writings proper, though it is sourced in a footnote to Diana Vaughan, Taxil's creation.
https://en.wikipedia.org/wiki?curid=31155
Taiwan independence movement The Taiwan independence movement is a political and social movement that aims to establish an officially independent sovereign state and new country on the archipelagic territory of "Formosa and Pescadores", based on a unique "Taiwanese national identity". Currently, Taiwan's political status is highly ambiguous and heavily disputed. All of the island territories (aside from the Japan-controlled Senkaku/Diaoyutai islands) that are generally considered to collectively constitute a single "Taiwan region" are under the control of the Republic of China (ROC), a polity that conducts official diplomatic relations with and is recognized by fifteen United Nations-recognized countries. Taiwanese independence is opposed by pro-Chinese unification political parties in Taiwan as well as by the government of the People's Republic of China (PRC), a state that administers the territory of mainland China, which it officially claimed as part of the territory of the ROC (in addition to the territory of Taiwan, which is actually administered by the ROC). These groups oppose Taiwanese independence since they believe that Taiwan and mainland China comprise two portions of a single country's territory, that country being "China" (whether the "ROC" or the "PRC"). The PRC's government has formulated a "One-China principle", whereby foreign countries may only conduct official diplomatic relations with the PRC on the condition that they surrender all official diplomatic relations with and formal recognition of the ROC. Due to the PRC's economic clout, it has successfully pressured many countries into withdrawing official recognition of the ROC. All countries that officially recognize the PRC effectively acknowledge and/or recognize the "One-China policy". The United Nations, a prominent intergovernmental organization, seemingly acknowledges the One-China policy. The United Nations formally designates the territory of Taiwan as "Taiwan, Province of China", as of 2019. At the conclusion of the First Sino-Japanese War in 1895, Taiwan was ceded by the Chinese Qing Empire to the Empire of Japan via the Treaty of Shimonoseki. At the conclusion of World War II and the Second Sino-Japanese War in 1945, Taiwan was placed under the control of the Republic of China (ROC) on behalf of the WWII Allies. The ROC, then the generally recognized government of both China and Taiwan, declared Taiwan to have been "restored" to China; this is argued to have been an illegal act. In 1949–1950, the Communist Party of China (CPC) drove the ROC government out of China and into Taiwan (plus some minor Chinese islands), during the events of the Chinese Civil War. At the time, no treaty had yet been signed to officially transfer Taiwan to China. The ROC selected Taipei as the provisional capital (of China) and declared "martial law" in 1949. The supposedly democratic institutions of the ROC were "temporarily" suspended. With democracy suspended in ROC-controlled Taiwan, the Kuomintang (Chinese Nationalist Party) of the ROC, in reality, developed Taiwan into a dictatorship. The period of martial law that existed in Taiwan from 1949 until 1987 resulted in the unlawful convictions and occasional executions of thousands of Taiwanese and Chinese democracy activists and other dissidents. This period has become colloquially known as the "White Terror". In 1987, the Kuomintang released its hold on power and ended martial law in Taiwan. This was due not only to pressure from democracy/independence activists within Taiwan but also pressure from the United States due to its citizen Henry Liu having been assassinated by criminal triad members hired by the Republic of China military intelligence. From hereafter, independence-oriented parties were now able to gain control of Taiwan. Democratic activism within Taiwan gave birth to a range of independence-oriented political parties. Most notable out of these is the Democratic Progressive Party (DPP), which has been democratically elected into power three times. The governing body of Taiwan still continues to identify as the "Republic of China", but many institutions have been occupied and occasionally changed by the DPP, which has led to a theory that "the ROC is Taiwan". It is a point of contention as to whether Taiwan has already achieved "de facto" independence under the Constitution of the Republic of China amended in 2005. The PRC and the Kuomintang continue to argue that "the Chinese Civil War hasn't yet ended". These two political camps have developed a "1992 Consensus" in order to cement Taiwan's status as a province of "China". In retaliation, the DPP has been trying to develop a "Taiwan Consensus". The polity that exercises real control over Taiwan is a collection of political parties that variously refer to their country as either "Taiwan (Republic of China)" or "China (Republic of China)". There is no real consensus within the country over the fundamental status of the country itself, with the country being divided between two main factions known as the "Pan-Blue Coalition" and the "Pan-Green Coalition". The Pan-Blue Coalition, led by the Kuomintang (Chinese Nationalist Party or KMT), believes that their country (including Taiwan) is China and does not acknowledge the legitimacy of the People's Republic of China (PRC), which they view to be an occupation of the rest of China by rebel forces; they refer to Taiwan, the place where they actually live, as "Taiwan, free area of the Republic of China". On the other hand, the Pan-Green Coalition, currently led by the Democratic Progressive Party (DPP), believes that their country is limited to the geographical definition of Taiwan (including Taiwan's satellite islands and the Penghu Islands), as well as perhaps some minor outlying islands, and does not actively claim sovereignty over China. Furthermore, the territorial dispute over Taiwan is connected to various other territorial disputes in East Asia, especially the Senkaku/Diaoyutai Islands dispute and the various South China Sea Islands disputes. For the former, this is because both the PRC and the Pan-Blue Coalition believe that the Senkaku/Diaoyutai Islands are part of the geographical definition of Taiwan, although they are currently under the control of Japan and have been under Japanese rule since the late 19th century; hence, the Chinese claim to the Senkaku/Diaoyutai Islands is simply an extension of the Chinese claim to Taiwan. Meanwhile, regarding the latter, Taiwan/ROC maintains control over a few islands of the South China Sea, and the Pan-Blue Coalition further claims sovereignty over all of the other islands of the South China Sea. Finally, another crucial detail of the territorial dispute over Taiwan is the fact that Taiwan/ROC maintains control over a few other non-Taiwanese islands assigned to China; the islands of Kinmen (Quemoy) and Matsu, which are under Taiwan/ROC control, are geographically defined as being parts of Fujian Province, China (within Taiwan/ROC, they are governed as parts of the Pan-Blue Coalition's own definition of Fujian Province, China). Taiwan independence is supported by the Pan-Green Coalition in Taiwan, led by the Democratic Progressive Party (DPP), but opposed by the Pan-Blue Coalition, led by the Kuomintang (KMT). The former coalition aims to eventually achieve full sovereign independence for Taiwan. Whereas, the latter coalition aims to improve relations with the Beijing government (PRC) — which it refers to as "mainland China" — and eventually "reunify" at some point. Both parties have long been forced to precariously dance around the so-called "status quo" of Taiwan's political status. The DPP is unable to immediately declare independence due to pressure from the PRC and the KMT, whereas the KMT and PRC are unable to immediately achieve Chinese unification due to pressure from the DPP and its unofficial allies (including political factions within the United States (US), Japan, and the European Union (EU)). The 1895 Treaty of Shimonoseki and 1951 Treaty of San Francisco are oft-cited as the main bases for Taiwan independence in international law, if such things as "self-determination" and the Montevideo Convention (on the Rights and Duties of States) are to be disregarded. These two treaties are not recognized by the Beijing government and the Pan-Blue Coalition of Taiwan. Whereas the PRC usually dismisses self-determination and the Montevideo Convention as conspiracies against Chinese sovereignty, the two aforementioned treaties have strong legal bases in international law and have been recognized by numerous countries across the globe. Notably, the Treaty of San Francisco forms the primary basis of modern Japan's independence (from the WWII Allies), and largely dictates Japan's modern geopolitics. The premise of citing these two treaties is that: a) Japan gained sovereignty over Taiwan in 1895, b) Japan lost sovereignty over Taiwan in 1951-1952, and c) Japan never indicated the "successor state" on Taiwan thereafter. Therefore, according to certain activists, this means that Taiwan is only controlled by the Republic of China on behalf of the WWII Allies, and does not constitute a part of the ROC's sovereign territory. The Beijing government disregards these two treaties, claiming that: a) the Treaty of Shimonoseki has been nullified and b) the Treaty of San Francisco was illegal. Furthermore, the Potsdam Declaration and Cairo Communique are oft-cited as indisputable bases for Chinese sovereignty over Taiwan. The PRC is also adamant on emphasizing the fact that the United Nations (UN) refers to Taiwan as "Taiwan, Province of China". However, this point is dubious given that it has a huge amount of influence over the UN as one of five permanent members of the UN Security Council. The Beijing government also claims that the majority of countries recognize Taiwan as a province of China, though this is only a half-truth. People's Republic of China authorities also accuse the US, Japan, and the EU of interfering in "Chinese internal affairs", claiming that the United States is responsible for separating Taiwan from China, and is responsible for manufacturing "artificial" pro-independence sentiments within Taiwan. Most governments, including the U.S. government, claim to adhere to a so-called "One-China Policy", which is based on the Chinese "One China Principle". Most "developed" and "Western" countries consider Taiwan to be a self-governing state in reality, but claim that they consider this political reality to be illegal/illegitimate. However, since recognizing the existence of a "de facto independent Taiwan/ROC" provides some form of grounds for officially recognising Taiwan independence, China (PRC) usually rejects the main premise of the Montevideo Convention, which is that "there are certain realities that determine statehood" (irrespective of international recognition). Within the Pan-Green Coalition of Taiwan, there are two main factions. The faction that is currently in power aims to attain official international recognition for the reality of "two Chinas", where the PRC and the ROC can coexist; later, the ROC can gradually "transform" itself into a Taiwanese state whilst avoiding a major conflict with the PRC. Whereas, the other faction aims to directly achieve Taiwan independence through a more abrupt and complete overthrowal of ROC institutions within Taiwan, which the faction views to be illegitimate. The use of ""independence"" for Taiwan can be ambiguous. If some supporters articulate that they agree to the independence of Taiwan, they may either be referring to the notion of formally creating an independent Taiwanese state, or to the notion that Taiwan has become synonymous with the current Republic of China from Resolution on Taiwan's Future and that ROC-Taiwan is already independent (as reflected in the evolving concept from Four Noes and One Without to One Country on Each Side); both of these ideas run counter to the claims of China (PRC). Many supporters of independence for Taiwan view the history of Taiwan since the 17th century as a continuous struggle for independence and use it as an inspiration for the current political movement. According to this view, the people indigenous to Taiwan and those who have taken up residence there have been repeatedly occupied by groups including the Dutch, the Spanish, the Ming, Koxinga and the Ming loyalists, the Qing, the Japanese and finally the Chinese Nationalists led by the Kuomintang. From a pro-independence supporter's point of view, the movement for Taiwan independence began under Qing rule in the 1680s which led to a well known saying those days, "Every three years an uprising, every five years a rebellion". Taiwan Independence supporters compared Taiwan under Kuomintang rule to South Africa under apartheid. The Taiwan independence movement under Japan was supported by Mao Zedong in the 1930s as a means of freeing Taiwan from Japanese rule. With the end of World War II in 1945, by issuing "General Order No. 1" to the Supreme Commander for the Allied Powers, the Allies agreed that the Republic of China Army under the Kuomintang would "temporarily occupy Taiwan, on behalf of the Allied forces." Modern-day political movement for Taiwan independence dates back to the Japanese colonial period but only became a viable political force within Taiwan in the 1990s. Taiwanese independence was advocated periodically during the Japanese colonial period, but was suppressed by the Japanese government. These efforts were the goal of the Taiwanese Communist Party of the late 1920s. Unlike current formulations, and in line with the thinking of the Comintern, such a state would have been a proletarian one. With the end of World War II in 1945, Japanese rule ended, but the subsequent autocratic rule of the ROC's Kuomintang (KMT) later revived calls for local rule. However, it was a movement supported by the Chinese students who were born on the Island and not associated with KMT. It found its roots in the US and Japan. In the 1950s a Republic of Taiwan Provisional Government was set up in Japan. Thomas Liao was nominally the President. At one time it held quasi-official relations with the newly independent Indonesia. This was possible mainly through the connections between Sukarno and the Provisional Government's Southeast Asian liaison, Chen Chih-hsiung, who had assisted in Indonesia's local resistance movements against the Japanese rule. After the Kuomintang began to rule the island, the focus of the movement was as a vehicle for discontent from the native Taiwanese against the rule of "mainlanders" (i.e. mainland Chinese-born people who fled to Taiwan with KMT in the late 1940s). The February 28 Incident in 1947 and the ensuing martial law that lasted until 1987 contributed to the period of White Terror on the island. In 1979, the Kaohsiung Incident, occurred as the movement for democracy and independence intensified. Between 1949 and 1991, the official position of the ROC government on Taiwan was that it was the legitimate government of all of China and it used this position as justification for authoritarian measures such as the refusal to vacate the seats held by delegates elected on the mainland in 1947 for the Legislative Yuan. The Taiwan independence movement intensified in response to this and presented an alternative vision of a sovereign and independent Taiwanese state. This vision was represented through a number of symbols such as the use of Taiwanese in opposition to the school-taught Mandarin Chinese. Several scholars drafted various versions of a constitution, as both political statement or vision and as intellectual exercise. Most of these drafts favor a bicameral parliamentary rather than presidential system. In at least one such draft, seats in the upper house would be divided equally among Taiwan's established ethnicities. In the 1980s the Chinese Nationalist government considered publication of these ideas criminal. In the most dramatic case, it decided to arrest the pro-independence publisher Cheng Nan-jung for publishing a version in his Tang-wai magazine, "Liberty Era Weekly" (自由時代週刊). Rather than giving himself up, Cheng self-immolated in protest. Other campaigns and tactics toward such a State have included soliciting designs from the public for a new national flag (see images to the right) and anthem (for example, "Taiwan the Formosa"). More recently the Taiwan Name Rectification Campaign (台灣正名運動) has played an active role. More traditional independence supporters, however, have criticized name rectification as merely a superficial tactic devoid of the larger vision inherent in the independence agenda. Various overseas Taiwan Independence movements, such as the Formosan Association, World United Formosans for Independence, United Young Formosans for Independence (Japan), Union for Formosa's Independence in Europe, United Formosans in America for Independence, Committee for Human Rights in Formosa (Toronto, Ont.), published "The Independent Formosa" in several volumes with the publisher "Formosan Association." In "The Independent Formosa, Volumes 2-3", they tried to justify Taiwanese collaboration with Japan during World War II by saying that the "atmosphere covered the whole Japanese territories, including Korea and Formosa, and the Japanese mainlands as well", when Taiwanese publications supported Japan's "holy war", and that the people who did it were not at fault. The Anti-communist Kuomintang leader Chiang Kai-shek, President of the Republic of China on Taiwan, believed the Americans were going to plot a coup against him along with Taiwan Independence. In 1950, Chiang Ching-kuo became director of the secret police, which he remained until 1965. Chiang also considered some people who were friends to Americans to be his enemies. An enemy of the Chiang family, Wu Kuo-chen, was kicked out of his position of governor of Taiwan by Chiang Ching-kuo and fled to America in 1953. Chiang Ching-kuo, educated in the Soviet Union, initiated Soviet style military organization in the Republic of China Military, reorganizing and Sovietizing the political officer corps, surveillance, and Kuomintang party activities were propagated throughout the military. Opposed to this was Sun Li-jen, who was educated at the American Virginia Military Institute. Chiang orchestrated the controversial court-martial and arrest of General Sun Li-jen in August 1955, for plotting a coup d'état with the American CIA against his father Chiang Kai-shek and the Kuomintang. The CIA allegedly wanted to help Sun take control of Taiwan and declare its independence. During the martial law era lasting until 1987, discussion of Taiwan independence was forbidden in Taiwan, at a time when recovery of the mainland and national unification were the stated goals of the ROC. During that time, many advocates of independence and other dissidents fled overseas, and carried out their advocacy work there, notably in Japan and the United States. Part of their work involved setting up think tanks, political organizations, and lobbying networks in order to influence the politics of their host countries, notably the United States, the ROC's main ally at the time, though they would not be very successful until much later. Within Taiwan, the independence movement was one of many dissident causes among the intensifying democracy movement of the 1970s, which culminated in the 1979 Kaohsiung Incident. The Democratic Progressive Party (DPP) was eventually formed to represent dissident causes. After the lifting of martial law in 1987, and the acceptance of multi-party politics, the Democratic Progressive Party became increasingly identified with Taiwan independence, which entered its party platform in 1991. At the same time, many overseas independence advocates and organizations returned to Taiwan and for the first time openly promoted their cause in Taiwan, gradually building up political support. Many had previously fled to the US or Europe and had been on a blacklist held by KMT, which had held them back from going back to Taiwan. Where they had fled, they built many organisations like European Federation of Taiwanese Associations or Formosan Association for Public Affairs. By the late 1990s, DPP and Taiwan independence have gained a solid electoral constituency in Taiwan, supported by an increasingly vocal and hardcore base. As the electoral success of the DPP, and later, the DPP-led Pan-Green Coalition grew in recent years, the Taiwan independence movement shifted focus to identity politics by proposing many plans involving symbolism and social engineering. The interpretation of historical events such as the February 28 Incident, the use of broadcast language and mother tongue education in schools, the official name and flag of the ROC, slogans in the army, orientation of maps all have been issues of concern to the present-day Taiwan independence movement. The movement, at its peak in the 70s through the 90s in the form of the Taiwan literature movement and other cultural upheavals, has moderated in recent years with the assimilation of these changes. Friction between "mainlander" and "native" communities on Taiwan has decreased due to shared interests: increasing economic ties with mainland China, continuing threats by the PRC to invade, and doubts as to whether or not the United States would support a unilateral declaration of independence. Since the late 1990s many supporters of Taiwan independence have argued that Taiwan, as the ROC, is already independent from the mainland, making a formal declaration unnecessary. In May 1999, the Democratic Progressive Party formalized this position in its "Resolution on Taiwan's Future". In 1995, Taiwanese president Lee Teng-hui was given permission to speak at Cornell University about his dream of Taiwanese independence, the first time a Taiwanese leader had been allowed to visit the United States. This led to a military response from China that included buying Russian submarines and conducting missile tests near Taiwan. In February 2007, President Chen Shui-bian initiated changes to names of state-owned enterprises, and the nation's embassies and overseas representative offices. As a result, Chunghwa Post Co. (中華郵政) was renamed Taiwan Post Co (臺灣郵政) and Chinese Petroleum Corporation (中國石油) is now called "CPC Corporation, Taiwan" (臺灣中油) and the signs in Taiwan's embassies now display the word "Taiwan" in brackets after "Republic of China". In 2007, the Taiwan Post Co. issued stamps bearing the name "Taiwan" in remembrance of the February 28 Incident. However, the name of the post office was reverted to 'Chunghwa Post Co.' following the inauguration of Kuomintang president Ma Ying-jeou in 2008. The Pan-Blue camp voiced its opposition to the changes and the former KMT Chairman Ma Ying-jeou said that it would generate diplomatic troubles and cause cross-strait tensions. It also argued that without a change in the relevant legislation pertaining to state-owned enterprises, the name changes of these enterprises could not be valid. As the Pan-Blue camp held only a slim parliamentary majority throughout the administration of President Chen, the Government's motion to change the law to this effect were blocked by the opposition. Later, U.S. Department of State spokesman Sean McCormack said that the U.S. does not support administrative steps that would appear to change Taiwan's status or move toward independence. Former president Lee Teng-hui has stated that he never pursued Taiwanese independence. Lee views Taiwan as already an independent state, and that the call for "Taiwanese independence" could even confuse the international community by implying that Taiwan once viewed itself as part of China. From this perspective, Taiwan is independent even if it remains unable to enter the UN. Lee said the most important goals are to improve the people's livelihoods, build national consciousness, make a formal name change and draft a new constitution that reflects the present reality so that Taiwan can officially identify itself as a country. Legislative elections were held on 12 January 2008, resulting in a supermajority (86 of the 113 seats) in the legislature for the Kuomintang (KMT) and the Pan-Blue Coalition. President Chen Shui-bian's Democratic Progressive Party was handed a heavy defeat, winning only the remaining 27 seats. The junior partner in the Pan-Green Coalition, the Taiwan Solidarity Union, won no seats. Two months later, the election for the 12th-term President and Vice-President of the Republic of China was held on Saturday, 22 March 2008. Kuomintang (KMT) nominee Ma Ying-jeou won, with 58% of the vote, ending eight years of Democratic Progressive Party rule. Along with the 2008 legislative election, Ma's landslide victory brought the Kuomintang back to power in Taiwan. On 1 August 2008, the Board of Directors of Taiwan Post Co. resolved to reverse the name change and restored the name "Chunghwa Post". The Board of Directors, as well as resolving to restore the name of the corporation, also resolved to re-hire the chief executive dismissed in 2007, and to withdraw defamation proceedings against him. On 2 September 2008, President Ma defined the relations between Taiwan and mainland China as "special", but "not that between two states" - they are relations based on two areas of one state, with Taiwan considering that state to be the Republic of China, and mainland China considering that state to be the People's Republic of China. Ma's approach with the mainland is conspicuously evasive of political negotiations that may lead to unification which is the mainland's ultimate goal. The National Unification Guidelines remain “frozen” and Ma has precluded any discussion of reunification during his term by his “three no’s” (no unification, no independence, and no use of force). The Democratic Progressive Party, led by Tsai Ing-wen, won a landslide victory over the Kuomintang on 20 May 2016. Her administration has stated she seeks to maintain the current political status of Taiwan. The PRC government continues to criticize the Taiwanese government, as the DPP administration has refused to officially recognize the 1992 Consensus and the One-China policy. Domestically, the issue of independence has dominated Taiwanese politics for the past few decades. This is also a grave issue for mainland China. The creation of a Taiwanese state is formally the goal of the Taiwan Solidarity Union and former President Lee Teng-hui. Although the Democratic Progressive Party was originally also an advocate for both the idea of a Taiwanese state and Taiwan independence, they now take a middle line in which a sovereign, independent Taiwan is identified with the "Republic of China (Taiwan)" and its symbols. This movement also has international significance, because the PRC has stated, or implied, that it will force reunification by taking military action against Taiwan under one of these five conditions: (1) Taiwan makes a formal declaration of independence, (2) Taiwan forges a military alliance with any foreign power, (3) internal turmoil arises in Taiwan, (4) Taiwan gains weapons of mass destruction, (5) Taiwan shows no will to negotiate on the basis of “one China.” The PRC government warned that if the situation in Taiwan were to become “worse,” it will not look on “indifferently.” Such a military action would pose the threat of a superpower conflict in East Asia. Under the terms of Taiwan Relations Act, United States shall provide Taiwan with arms of a defensive character. However, Taiwan Relations Act does not oblige US to provide military intervention. While so, military intervention could still be sought should a formal declaration of war be made by the President of the United States in an act of Congress signed by the President. The questions of independence and the island's relationship to mainland China are complex and inspire very strong emotions among Taiwanese people. There are some who continue to maintain the KMT's position, which states that the ROC is the sole legitimate government for "all" of China (of which they consider Taiwan to be a part), and that the aim of the government should be eventual reunification of the mainland and Taiwan under the rule of the ROC. Some argue that Taiwan has been, and should continue to be, completely independent from China and should become a Taiwanese state with a distinct name. Then, there are numerous positions running the entire spectrum between these two extremes, as well as differing opinions on how best to manage either situation should it ever be realized. On 25 October 2004, in Beijing, the U.S. Secretary of State Colin Powell said Taiwan is “not sovereign,” provoking strong comments from both the Pan-Green and Pan-Blue coalitions – but for very different reasons. From the DPP's side, President Chen declared that "Taiwan is definitely a sovereign, independent country, a great country that absolutely does not belong to the People's Republic of China". The TSU (Taiwan Solidarity Union) criticized Powell, and questioned why the US sold weapons to Taiwan if it were not a sovereign state. From the KMT, then Chairman Ma Ying-jeou announced, “the Republic of China has been a sovereign state ever since it was formed [in 1912].” The pro-unification PFP Party Chairman, James Soong, called it “Taiwan's biggest failure in diplomacy.” The first view considers the move for Taiwan independence as a nationalist movement. Historically, this was view of such pro-independence groups as the tangwai movement (which later grew into the Democratic Progressive Party) who argued that the ROC under the Kuomintang had been a "foreign regime" forcibly imposed on Taiwan. Since the 1990s, supporters of Taiwan independence no longer actively make this argument. Instead, the argument has been that, in order to survive the growing power of the PRC, Taiwan must view itself as a separate and distinct entity from “China.” Such a change in view involves: (1) removing the name of “China” from official and unofficial items in Taiwan, (2) changes in history books, which now portrays Taiwan as a central entity, (3) promoting the use of Taiwanese language in the government and in the education system, (4) reducing economic links with mainland China, and (5) promoting the general thinking that Taiwan is a separate entity. The goal of this movement is the eventual creation of a country where China is a "foreign" entity, and Taiwan is an internationally recognized "country" separate from any concept of “China." The proposed "state of Taiwan" will exclude areas such as Quemoy and Matsu off the coast of Fujian, and some of the islands in the South China Sea, which historically were not part of Taiwan. Some supporters of Taiwan independence argue that the Treaty of San Francisco justifies Taiwan independence by not explicitly granting Taiwan to either the ROC or the PRC, even though neither the PRC nor the ROC government accepts such legal justification. It is also thought that if formal independence were declared, Taiwan's foreign policies would lean further towards Japan and the United States, and the desirable option of United Nations Trusteeship Council is also considered. The Taiwan Independence Party won a single seat in the Legislative Yuan in the 1998 legislative election. The Taiwan Solidarity Union was formed in 2001, and is also supportive of independence. Though it gained more legislative support than TAIP in elections, the TSU's legislative representation has dropped over time. In 2018, political parties and organizations demanding a referendum on Taiwan's independence formed an alliance to further their objective. The Formosa Alliance was established on 7 April 2018, prompted by a sense of crisis in the face of growing pressure from China for unification. The alliance wanted to hold a referendum on Taiwan's independence in April 2019, and change the island's name from the “Republic of China” to “Taiwan,” and apply for membership in the United Nations. In August 2019, another party supportive of independence, the Taiwan Action Party Alliance was founded. A second view is that Taiwan is already an independent nation with the official name “Republic of China,” which has been independent (i.e. de facto separate from mainland China) since the end of the Chinese Civil War in 1949, when the ROC lost control of mainland China, with only Taiwan (including the Penghu islands), Kinmen (Quemoy), the Matsu Islands off the coast of Fujian Province, and some of the islands in the South China Sea remaining under its administration. Although previously no major political faction adopted this pro-status quo viewpoint, because it is a "compromise" in face of PRC threats and American warnings against a unilateral declaration of independence, the DPP combined it with their traditional belief to form their latest official policy. This viewpoint has not been adopted by more radical groups such as the Taiwan Solidarity Union, which favor only the third view described above and are in favor of a Republic or State of Taiwan. In addition, many members of the Pan-Blue Coalition are rather suspicious of this view, fearing that adopting this definition of Taiwan independence is merely an insincere stealth tactical effort to advance desinicization and the third view of Taiwan independence. As a result, supporters of Pan-Blue tend to make a clear distinction between Taiwan "independence" and Taiwan "sovereignty", while supporters of Pan-Green tend to try to blur the distinction between the two. Most Taiwanese and political parties of the ROC support the status quo, and recognize that this is de facto independence through sovereign self-rule. Even among those who believe Taiwan is and should remain independent, the threat of war from PRC softens their approach, and they tend to support maintaining the status quo rather than pursuing an ideological path that could result in war with the PRC. When President Lee Teng-hui put forth the two-states policy, he received 80% support. A similar situation arose when President Chen Shui-bian declared that there was "one country on each side" of the Taiwan Strait. To this day, the parties disagree, sometimes bitterly, on such things as territory, name (R.O.C. or Taiwan), future policies, and interpretations of history. The Pan-Blue Coalition and the PRC believe that Lee Teng-hui and Chen Shui-bian are intent on publicly promoting a moderate form of Taiwan independence in order to advance secretly deeper forms of Taiwan independence, and that they intend to use popular support on Taiwan for political separation to advance notions of cultural and economic separation. The third view, put forward by the government of the PRC and Nationalists of the KMT, defines Taiwan independence as "splitting Taiwan from China, causing division of the nation and the people." What PRC claims by this statement is somewhat ambiguous according to supporters of Taiwanese independence, as some statements by the PRC seem to identify China solely and uncompromisingly with the PRC. Others propose a broader and more flexible definition suggesting that both mainland China and Taiwan are parts that form one cultural and geographic entity, although divided politically as a vestige of the Chinese Civil War. The PRC considers itself the sole legitimate government of all China, and the ROC to be a defunct entity replaced in the Communist revolution that succeeded in 1949. Therefore, assertions that the ROC is a sovereign state are construed as support for Taiwan independence, so are proposals to change the name of the ROC. Such a name change is met with even more disapproval since it rejects Taiwan as part of the greater China entity (as one side of a still-unresolved Chinese civil war). The ROC used to be recognized by the UN as the sole legal government of China until 1971. In that year, the UN Resolution 2758 was passed, and the PRC became recognized as the legal government of China by the UN. During PRC President Hu Jintao's visit to the United States on 20 April 2006, U.S. President George W. Bush reaffirmed to the world that the U.S. would uphold its "one China" policy. Chinese nationalists have called the Taiwan independence movement and its supporters to be hanjian (traitors). According to opinion poll conducted in Taiwan by Mainland Affairs Council in 2019, 27.7% of respondents supported Taiwan's independence: 21.7% said that status quo has to be maintained for now but Taiwan should become independent in the future, while 6% said that independence must be declared as soon as possible. 31% respondents supported the current situation as it is, and 10.3% agreed to unification with the mainland with 1.4% saying that it should happen as soon as possible. When the government of the Republic of China (under the Kuomintang) was forced to retreat to Formosa and the Pescadores (Taiwan and Penghu) in 1949, several Chinese (i.e. not Japanese) islands still remained under Kuomintang control. Because the Chinese Communist Party never gained control of the Kinmen, Wuqiu, and Matsu Islands, they are now governed by the Republic of China on Taiwan as Kinmen County (Kinmen, Wuqiu) and Lienchiang County (Matsu) within a streamlined Fujian Province. The islands are often referred to collectively as Quemoy and Matsu or as "Golden Horse". Historically, Kinmen County ('Quemoy') and Lienchiang County ('Matsu') served as important defensive strongholds for the Kuomintang during the 1950–1970s, symbolizing the frontline of Kuomintang resistance against the Communist rebellion. They represented the last Kuomintang presence in "mainland China". The islands received immense coverage from Western (especially United States) media during the First Taiwan Strait Crisis of 1954–1955 and the Second Taiwan Strait Crisis of 1958. They were very significant in the context of the Cold War, a period from 1946 until 1991 of geopolitical tension between the Soviet Union (and its allies) and the United States (and its allies). Ever since the transition into multi-party politics (i.e. "Democratization") during the 1990s, Kinmen and Lienchiang counties have now essentially developed into two electorates that can be contested through democratic elections. Currently the two electorates are "strongholds" for the Kuomintang due mainly to popular opinion within the electorates, rather than brute control (as in the past). The two electorates have recently developed close relations with the mainland, which lies only around 2–9 km west from the islands, whereas Taiwan lies around 166–189 km east from the islands. Quemoy and Matsu are unique and important for several reasons. Reportedly, the local government of Kinmen County supports stronger business and cultural ties with mainland China, similarly to the Kuomintang, and views itself as an important (representative) or (focal point) for improving Cross-Strait relations (that is, in the favour of Chinese unification). In January 2001, direct travel between Kinmen County (and Lienchiang County) and mainland China re-opened under the "mini Three Links". As of 2015, Kinmen has plans to become a "special economic zone (of China)", similarly to the neighbouring mainland Chinese city of Xiamen. This might be accomplished in part by building a huge bridge connecting Kinmen to Xiamen, via the island of Lesser Kinmen (Lieyu); already, a bridge is being constructed between Greater Kinmen and Lesser Kinmen. Additionally, Kinmen has plans to become a "university island". In 2010, "National Kinmen Institute of Technology" was upgraded to "National Quemoy University". Kinmen County plans to establish several branches of mainland Chinese universities in Kinmen, and has bargained with the central Taiwanese (ROC) government so that universities in Kinmen don't have to bounded by the same quotas as other Taiwanese universities in terms of admitting mainland Chinese students. In 2018, the local government of Kinmen County unveiled a new undersea pipeline linking Kinmen to mainland China, through which drinking-water can be imported. This business deal caused controversy in Taiwan and resulted in a "stand-off" between Kinmen County and the Mainland Affairs Council of Taiwan (ROC). Within Taiwan, one camp believes that Kinmen County (Quemoy) and Lienchiang County (Matsu) should be abandoned from a potential independent and sovereign Taiwanese state. This view aligns with the aforementioned treaties and acts that do not define Kinmen and Matsu as being part of Taiwan. This same camp also believes that the PRC has only "allowed" the ROC to continue controlling Kinmen and Matsu in order to "tether" Taiwan to mainland China. The fact that the PRC propagandizes Kinmen and Matsu is evidence that this is true to at least a certain degree. In a hypothetical scenario where Kinmen and Matsu are abandoned by the Taiwanese state, they would likely be "ceded" to the People's Republic of China via a peace treaty, officially ending the Chinese Civil War. Also within Taiwan, a second camp believes that Quemoy and Matsu belong to Taiwan. This camp believes that the ROC and Taiwan have become one and the same. By this logic, Taiwan effectively owns all of the same territories that the ROC is said to own. Among these territories is Quemoy and Matsu. If a potential Taiwanese state were to be created, this camp believes that the new country will actually be the successor state to the ROC, rather than an entirely new country. Therefore, if Taiwan independence were to be successfully achieved, then the islands of Quemoy and Matsu would hypothetically cease to be administered as "Fujian Province", and would instead simply be classified as "satellite islands of Taiwan" (much in the same way as Penghu). Despite the differing views of these two camps, there is a general understanding throughout Taiwan that Quemoy and Matsu are not part of the historical region of "Taiwan", due to having never been governed under the following regimes: Dutch Formosa, Spanish Formosa, Kingdom of Tungning, Republic of Formosa, and Japanese Formosa. Additionally, Quemoy and Matsu experienced a unique history for several years as military outposts of the ROC, further separating the islands from Taiwan in terms of culture.
https://en.wikipedia.org/wiki?curid=31156
Trident (missile) The Trident missile is a submarine-launched ballistic missile (SLBM) equipped with multiple independently targetable reentry vehicles (MIRV). Originally developed by Lockheed Missiles and Space Corporation, the missile is armed with thermonuclear warheads and is launched from nuclear-powered ballistic missile submarines (SSBNs). Trident missiles are carried by fourteen United States Navy s, with American warheads, as well as four Royal Navy s, with British warheads. The missile is named after the mythological trident of Neptune. In 1971, the US Navy began studies of an advanced Undersea Long-range Missile System (ULMS). A Decision Coordinating Paper (DCP) for the ULMS was approved on 14 September 1971. ULMS program outlined a long-term modernization plan, which proposed the development of a longer-range missile termed ULMS II, which was to achieve twice the range of the existing Poseidon (ULMS I) missile. In addition to a longer-range missile, a larger submarine was proposed to replace the , and -class SSBNs in 1978. The ULMS II missile system was designed to be retrofitted to the existing SSBNs, while also being fitted to the proposed . In May 1972, the term ULMS II was replaced with Trident. The Trident was to be a larger, higher-performance missile with a range capacity greater than 6000 mi. Trident I (designated as "C4") was deployed in 1979 and retired in 2005. Its objective was to achieve performance similar to Poseidon (C3) but at extended range. Trident II (designated "D5") had the objective of improved circular error probable (CEP), or accuracy, and was first deployed in 1990, and was planned to be in service for the thirty-year life of the submarines, until 2027. Trident missiles are provided to the United Kingdom under the terms of the 1963 Polaris Sales Agreement which was modified in 1982 for Trident. British Prime Minister Margaret Thatcher wrote to President Carter on 10 July 1980, to request that he approve supply of Trident I missiles. However, in 1982 Thatcher wrote to President Reagan to request the United Kingdom be allowed to procure the Trident II system, the procurement of which had been accelerated by the US Navy. This was agreed upon in March 1982. Under the agreement, the United Kingdom paid an additional 5% of their total procurement cost of $2.5 billion to the US government as a research and development contribution. The total cost of the Trident program thus far came to $39.546 billion in 2011, with a cost of $70 million per missile. In 2009 the United States upgraded the D5 missiles with an arming, fuzing and firing (AF&F) system that allows them to target hardened silos and bunkers more accurately. The launch from the submarine occurs below the sea surface. The missiles are ejected from their tubes by igniting an explosive charge in a separate container which is separated by seventeen titanium alloy pinnacles activated by a double alloy steam system. The energy from the blast is directed to a water tank, where the water is flash-vaporized to steam. The subsequent pressure spike is strong enough to eject the missile out of the tube and give it enough momentum to reach and clear the surface of the water. The missile is pressurized with nitrogen to prevent the intrusion of water into any internal spaces, which could damage the missile or add weight, destabilizing the missile. Should the missile fail to breach the surface of the water, there are several safety mechanisms that can either deactivate the missile before launch or guide the missile through an additional phase of launch. Inertial motion sensors are activated upon launch, and when the sensors detect downward acceleration after being blown out of the water, the first-stage motor ignites. The aerospike, a telescoping outward extension that halves aerodynamic drag, is then deployed, and the boost phase begins. When the third-stage motor fires, within two minutes of launch, the missile is traveling faster than 20,000 ft/s (6,000 m/s), or 13,600 mph (21,600 km/h) Mach 18. Minutes after launch, the missile is exo-atmospheric and on a sub-orbital trajectory. The Guidance System for the missile was developed by the Charles Stark Draper Laboratory and is maintained by a joint Draper/General Dynamics Mission Systems facility. It is an Inertial Guidance System with an additional Star-Sighting system (this combination is known as astro-inertial guidance), which is used to correct small position and velocity errors that result from launch condition uncertainties due to errors in the submarine navigation system and errors that may have accumulated in the guidance system during the flight due to imperfect instrument calibration. GPS has been used on some test flights but is assumed not to be available for a real mission. The fire control system was designed and continues to be maintained by General Dynamics Mission Systems. Once the star-sighting has been completed, the "bus" section of the missile maneuvers to achieve the various velocity vectors that will send the deployed multiple independent reentry vehicles to their individual targets. The downrange and crossrange dispersion of the targets remains classified. The Trident was built in two variants: the I (C4) UGM-96A and II (D5) UGM-133A; however, these two missiles have little in common. While the C4, formerly known as EXPO (Extended Range Poseidon), is just an improved version of the Poseidon C-3 missile, the Trident II D-5 has a completely new design (although with some technologies adopted from the C-4). The C4 and D5 designations put the missiles within the "family" that started in 1960 with Polaris (A1, A2 and A3) and continued with the 1971 Poseidon (C3). Both Trident versions are three-stage, solid-propellant, inertially guided missiles, and both guidance systems use a star sighting to improve overall weapons system accuracy. The first eight "Ohio"-class submarines were built with the Trident I missiles. The second variant of the Trident is more sophisticated and can carry a heavier payload. It is accurate enough to be a first strike, counterforce, or second strike weapon. All three stages of the Trident II are made of graphite epoxy, making the missile much lighter. The Trident II was the original missile on the British "Vanguard"-class and American "Ohio"-class SSBNs from "Tennessee" on. The D5 missile is currently carried by fourteen "Ohio"-class and four "Vanguard"-class SSBNs. There have been 172 successful test flights of the D5 missile since design completion in 1989, the most recent being from in May 2019. There have been fewer than 10 test flights that were failures, the most recent being from , one of Britain's four nuclear-armed submarines, off the coast of Florida in June 2016. The Royal Navy operates their missiles from a shared pool, together with the Atlantic squadron of the U.S. Navy "Ohio"-class SSBNs at King's Bay, Georgia. The pool is 'co-mingled' and missiles are selected at random for loading on to either nation's submarines. In 2002, the United States Navy announced plans to extend the life of the submarines and the D5 missiles to the year 2040. This requires a D5 Life Extension Program (D5LEP), which is currently underway. The main aim is to replace obsolete components at minimal cost by using commercial off the shelf (COTS) hardware; all the while maintaining the demonstrated performance of the existing Trident II missiles. In 2007, Lockheed Martin was awarded a total of $848 million in contracts to perform this and related work, which also includes upgrading the missiles' reentry systems. On the same day, Draper Labs was awarded $318 million for upgrade of the guidance system. Then-British Prime Minister Tony Blair was quoted as saying the issue would be fully debated in Parliament prior to a decision being taken. Blair outlined plans in Parliament on 4 December 2006 to build a new generation of submarines (Dreadnought-class) to carry existing Trident missiles, and join the D5LE project to refurbish them. The first flight test of a D-5 LE subsystem, the MK 6 Mod 1 guidance system, in Demonstration and Shakedown Operation (DASO)-23, took place on on 22 February 2012. This was almost exactly 22 years after the first Trident II missile was launched from "Tennessee" in February 1990. The Pentagon proposed the Conventional Trident Modification program in 2006 to diversify its strategic options, as part of a broader long-term strategy to develop worldwide rapid strike capabilities, dubbed "Prompt Global Strike". The US$503 million program would have converted existing Trident II missiles (presumably two missiles per submarine) into conventional weapons, by fitting them with modified Mk4 reentry vehicles equipped with GPS for navigation update and a reentry guidance and control (trajectory correction) segment to perform 10 m class impact accuracy. No explosive is said to be used since the reentry vehicle's mass and hypersonic impact velocity provide sufficient mechanical energy and "effect". The second conventional warhead version is a fragmentation version that would disperse thousands of tungsten rods which could obliterate an area of 3000 square feet. (approximately 280 square meters). It offered the promise of accurate conventional strikes with little warning and flight time. The primary drawback of using conventionally tipped ballistic missiles is that they are virtually impossible for radar warning systems to distinguish from nuclear-tipped missiles. This leaves open the likelihood that other nuclear-armed countries might mistake it for a nuclear launch which could provoke a counterattack. For that reason among others, this project raised a substantial debate before US Congress for the FY07 Defense budget, but also internationally. Russian President Vladimir Putin, among others, warned that the project would increase the danger of accidental nuclear war. "The launch of such a missile could ... provoke a full-scale counterattack using strategic nuclear forces," Putin said in May 2006.
https://en.wikipedia.org/wiki?curid=31158
Tsunami A ( ) is a series of waves in a water body caused by the displacement of a large volume of water, generally in an ocean or a large lake. Earthquakes, volcanic eruptions and other underwater explosions (including detonations, landslides, glacier calvings, meteorite impacts and other disturbances) above or below water all have the potential to generate a tsunami. Unlike normal ocean waves, which are generated by wind, or tides, which are generated by the gravitational pull of the Moon and the Sun, a tsunami is generated by the displacement of water. Tsunami waves do not resemble normal undersea currents or sea waves because their wavelength is far longer. Rather than appearing as a breaking wave, a tsunami may instead initially resemble a rapidly rising tide. For this reason, it is often referred to as a tidal wave, although this usage is not favoured by the scientific community because it might give the false impression of a causal relationship between tides and tsunamis. Tsunamis generally consist of a series of waves, with periods ranging from minutes to hours, arriving in a so-called "wave train". Wave heights of tens of metres can be generated by large events. Although the impact of tsunamis is limited to coastal areas, their destructive power can be enormous, and they can affect entire ocean basins. The 2004 Indian Ocean tsunami was among the deadliest natural disasters in human history, with at least 230,000 people killed or missing in 14 countries bordering the Indian Ocean. The Ancient Greek historian Thucydides suggested in his 5th century BC "History of the Peloponnesian War" that tsunamis were related to submarine earthquakes, but the understanding of tsunamis remained slim until the 20th century and much remains unknown. Major areas of current research include determining why some large earthquakes do not generate tsunamis while other smaller ones do; accurately forecasting the passage of tsunamis across the oceans; and forecasting how tsunami waves interact with shorelines. The term "tsunami" is a borrowing from the Japanese "tsunami" , meaning "harbour wave". For the plural, one can either follow ordinary English practice and add an "s", or use an invariable plural as in the Japanese. Some English speakers alter the word's initial to an by dropping the "t", since English does not natively permit /ts/ at the beginning of words, though the original Japanese pronunciation is . Tsunamis are sometimes referred to as "tidal waves". This once-popular term derives from the most common appearance of a tsunami, which is that of an extraordinarily high tidal bore. Tsunamis and tides both produce waves of water that move inland, but in the case of a tsunami, the inland movement of water may be much greater, giving the impression of an incredibly high and forceful tide. In recent years, the term "tidal wave" has fallen out of favour, especially in the scientific community, because the causes of tsunamis have nothing to do with those of tides, which are produced by the gravitational pull of the moon and sun rather than the displacement of water. Although the meanings of "tidal" include "resembling" or "having the form or character of" the tides, use of the term "tidal wave" is discouraged by geologists and oceanographers. A 1969 episode of the TV crime show "Hawaii Five-O" entitled "Forty Feet High and It Kills!" used the terms "tsunami" and "tidal wave" interchangeably. The term "seismic sea wave" is also used to refer to the phenomenon, because the waves most often are generated by seismic activity such as earthquakes. Prior to the rise of the use of the term "tsunami" in English, scientists generally encouraged the use of the term "seismic sea wave" rather than "tidal wave". However, like "tsunami", "seismic sea wave" is not a completely accurate term, as forces other than earthquakes – including underwater landslides, volcanic eruptions, underwater explosions, land or ice slumping into the ocean, meteorite impacts, and the weather when the atmospheric pressure changes very rapidly – can generate such waves by displacing water. While Japan may have the longest recorded history of tsunamis, the sheer destruction caused by the 2004 Indian Ocean earthquake and tsunami event mark it as the most devastating of its kind in modern times, killing around 230,000 people. The Sumatran region is also accustomed to tsunamis, with earthquakes of varying magnitudes regularly occurring off the coast of the island. Tsunamis are an often underestimated hazard in the Mediterranean Sea and parts of Europe. Of historical and current (with regard to risk assumptions) importance are the 1755 Lisbon earthquake and tsunami (which was caused by the Azores–Gibraltar Transform Fault), the 1783 Calabrian earthquakes, each causing several tens of thousands of deaths and the 1908 Messina earthquake and tsunami. The tsunami claimed more than 123,000 lives in Sicily and Calabria and is among the most deadly natural disasters in modern Europe. The Storegga Slide in the Norwegian Sea and some examples of tsunamis affecting the British Isles refer to landslide and meteotsunamis predominantly and less to earthquake-induced waves. As early as 426 BC the Greek historian Thucydides inquired in his book "History of the Peloponnesian War" about the causes of tsunami, and was the first to argue that ocean earthquakes must be the cause. The cause, in my opinion, of this phenomenon must be sought in the earthquake. At the point where its shock has been the most violent the sea is driven back, and suddenly recoiling with redoubled force, causes the inundation. Without an earthquake I do not see how such an accident could happen. The Roman historian Ammianus Marcellinus ("Res Gestae" 26.10.15–19) described the typical sequence of a tsunami, including an incipient earthquake, the sudden retreat of the sea and a following gigantic wave, after the 365 AD tsunami devastated Alexandria. The principal generation mechanism of a tsunami is the displacement of a substantial volume of water or perturbation of the sea. This displacement of water is usually attributed to either earthquakes, landslides, volcanic eruptions, glacier calvings or more rarely by meteorites and nuclear tests. Tsunamis can be generated when the sea floor abruptly deforms and vertically displaces the overlying water. Tectonic earthquakes are a particular kind of earthquake that are associated with the Earth's crustal deformation; when these earthquakes occur beneath the sea, the water above the deformed area is displaced from its equilibrium position. More specifically, a tsunami can be generated when thrust faults associated with convergent or destructive plate boundaries move abruptly, resulting in water displacement, owing to the vertical component of movement involved. Movement on normal (extensional) faults can also cause displacement of the seabed, but only the largest of such events (typically related to flexure in the outer trench swell) cause enough displacement to give rise to a significant tsunami, such as the 1977 Sumba and 1933 Sanriku events. Tsunamis have a small wave height offshore, and a very long wavelength (often hundreds of kilometres long, whereas normal ocean waves have a wavelength of only 30 or 40 metres), which is why they generally pass unnoticed at sea, forming only a slight swell usually about above the normal sea surface. They grow in height when they reach shallower water, in a wave shoaling process described below. A tsunami can occur in any tidal state and even at low tide can still inundate coastal areas. On April 1, 1946, the 8.6 Aleutian Islands earthquake occurred with a maximum Mercalli intensity of VI ("Strong"). It generated a tsunami which inundated Hilo on the island of Hawaii with a surge. Between 165 and 173 were killed. The area where the earthquake occurred is where the Pacific Ocean floor is subducting (or being pushed downwards) under Alaska. Examples of tsunamis originating at locations away from convergent boundaries include Storegga about 8,000 years ago, Grand Banks in 1929, and Papua New Guinea in 1998 (Tappin, 2001). The Grand Banks and Papua New Guinea tsunamis came from earthquakes which destabilised sediments, causing them to flow into the ocean and generate a tsunami. They dissipated before travelling transoceanic distances. The cause of the Storegga sediment failure is unknown. Possibilities include an overloading of the sediments, an earthquake or a release of gas hydrates (methane etc.). The 1960 Valdivia earthquake ("M"w 9.5), 1964 Alaska earthquake ("M"w 9.2), 2004 Indian Ocean earthquake ("M"w 9.2), and 2011 Tōhoku earthquake ("M"w9.0) are recent examples of powerful megathrust earthquakes that generated tsunamis (known as teletsunamis) that can cross entire oceans. Smaller ("M"w 4.2) earthquakes in Japan can trigger tsunamis (called local and regional tsunamis) that can devastate stretches of coastline, but can do so in only a few minutes at a time. In the 1950s, it was discovered that tsunamis larger than had previously been believed possible can be caused by giant submarine landslides. These rapidly displace large water volumes, as energy transfers to the water at a rate faster than the water can absorb. Their existence was confirmed in 1958, when a giant landslide in Lituya Bay, Alaska, caused the highest wave ever recorded, which had a height of . The wave did not travel far, as it struck land almost immediately. The wave struck three boats — each with two people aboard — anchored in the bay. One boat rode out the wave, but the wave sank the other two, killing both people aboard one of them. Another landslide-tsunami event occurred in 1963 when a massive landslide from Monte Toc entered the reservoir behind the Vajont Dam in Italy. The resulting wave surged over the -high dam by and destroyed several towns. Around 2,000 people died. Scientists named these waves megatsunamis. Some geologists claim that large landslides from volcanic islands, e.g. Cumbre Vieja on La Palma in the Canary Islands, may be able to generate megatsunamis that can cross oceans, but this is disputed by many others. In general, landslides generate displacements mainly in the shallower parts of the coastline, and there is conjecture about the nature of large landslides that enter the water. This has been shown to subsequently affect water in enclosed bays and lakes, but a landslide large enough to cause a transoceanic tsunami has not occurred within recorded history. Susceptible locations are believed to be the Big Island of Hawaii, Fogo in the Cape Verde Islands, La Reunion in the Indian Ocean, and Cumbre Vieja on the island of La Palma in the Canary Islands; along with other volcanic ocean islands. This is because large masses of relatively unconsolidated volcanic material occurs on the flanks and in some cases detachment planes are believed to be developing. However, there is growing controversy about how dangerous these slopes actually are. Some meteorological conditions, especially rapid changes in barometric pressure, as seen with the passing of a front, can displace bodies of water enough to cause trains of waves with wavelengths comparable to seismic tsunamis, but usually with lower energies. These are essentially dynamically equivalent to seismic tsunamis, the only differences being that meteotsunamis lack the transoceanic reach of significant seismic tsunamis and that the force that displaces the water is sustained over some length of time such that meteotsunamis cannot be modelled as having been caused instantaneously. In spite of their lower energies, on shorelines where they can be amplified by resonance, they are sometimes powerful enough to cause localised damage and potential for loss of life. They have been documented in many places, including the Great Lakes, the Aegean Sea, the English Channel, and the Balearic Islands, where they are common enough to have a local name, "rissaga". In Sicily they are called "marubbio" and in Nagasaki Bay, they are called "abiki". Some examples of destructive meteotsunamis include 31 March 1979 at Nagasaki and 15 June 2006 at Menorca, the latter causing damage in the tens of millions of euros. Meteotsunamis should not be confused with storm surges, which are local increases in sea level associated with the low barometric pressure of passing tropical cyclones, nor should they be confused with setup, the temporary local raising of sea level caused by strong on-shore winds. Storm surges and setup are also dangerous causes of coastal flooding in severe weather but their dynamics are completely unrelated to tsunami waves. They are unable to propagate beyond their sources, as waves do. There have been studies of the potential of the induction of and at least one actual attempt to create tsunami waves as a tectonic weapon. In World War II, the New Zealand Military Forces initiated Project Seal, which attempted to create small tsunamis with explosives in the area of today's Shakespear Regional Park; the attempt failed. There has been considerable speculation on the possibility of using nuclear weapons to cause tsunamis near an enemy coastline. Even during World War II consideration of the idea using conventional explosives was explored. Nuclear testing in the Pacific Proving Ground by the United States seemed to generate poor results. "Operation Crossroads" fired two bombs, one in the air and one underwater, above and below the shallow () waters of the Bikini Atoll lagoon. Fired about from the nearest island, the waves there were no higher than upon reaching the shoreline. Other underwater tests, mainly "Hardtack I/Wahoo" (deep water) and "Hardtack I/Umbrella" (shallow water) confirmed the results. Analysis of the effects of shallow and deep underwater explosions indicate that the energy of the explosions does not easily generate the kind of deep, all-ocean waveforms which are tsunamis; most of the energy creates steam, causes vertical fountains above the water, and creates compressional waveforms. Tsunamis are hallmarked by permanent large vertical displacements of very large volumes of water which do not occur in explosions. Tsunamis cause damage by two mechanisms: the smashing force of a wall of water travelling at high speed, and the destructive power of a large volume of water draining off the land and carrying a large amount of debris with it, even with waves that do not appear to be large. While everyday wind waves have a wavelength (from crest to crest) of about and a height of roughly , a tsunami in the deep ocean has a much larger wavelength of up to . Such a wave travels at well over , but owing to the enormous wavelength the wave oscillation at any given point takes 20 or 30 minutes to complete a cycle and has an amplitude of only about . This makes tsunamis difficult to detect over deep water, where ships are unable to feel their passage. The velocity of a tsunami can be calculated by obtaining the square root of the depth of the water in metres multiplied by the acceleration due to gravity (approximated to 10 m/s2). For example, if the Pacific Ocean is considered to have a depth of 5000 metres, the velocity of a tsunami would be the square root of √(5000 × 10) = √50000 = ~224 metres per second (735 feet per second), which equates to a speed of ~806 kilometres per hour or about 500 miles per hour. This is the formula used for calculating the velocity of shallow-water waves. Even the deep ocean is shallow in this sense because a tsunami wave is so long (horizontally from crest to crest) by comparison. The reason for the Japanese name "harbour wave" is that sometimes a village's fishermen would sail out, and encounter no unusual waves while out at sea fishing, and come back to land to find their village devastated by a huge wave. As the tsunami approaches the coast and the waters become shallow, wave shoaling compresses the wave and its speed decreases below . Its wavelength diminishes to less than and its amplitude grows enormously – in accord with Green's law. Since the wave still has the same very long period, the tsunami may take minutes to reach full height. Except for the very largest tsunamis, the approaching wave does not break, but rather appears like a fast-moving tidal bore. Open bays and coastlines adjacent to very deep water may shape the tsunami further into a step-like wave with a steep-breaking front. When the tsunami's wave peak reaches the shore, the resulting temporary rise in sea level is termed "run up". Run up is measured in metres above a reference sea level. A large tsunami may feature multiple waves arriving over a period of hours, with significant time between the wave crests. The first wave to reach the shore may not have the highest run-up. About 80% of tsunamis occur in the Pacific Ocean, but they are possible wherever there are large bodies of water, including lakes. They are caused by earthquakes, landslides, volcanic explosions, glacier calvings, and bolides. All waves have a positive and negative peak; that is, a ridge and a trough. In the case of a propagating wave like a tsunami, either may be the first to arrive. If the first part to arrive at the shore is the ridge, a massive breaking wave or sudden flooding will be the first effect noticed on land. However, if the first part to arrive is a trough, a drawback will occur as the shoreline recedes dramatically, exposing normally submerged areas. The drawback can exceed hundreds of metres, and people unaware of the danger sometimes remain near the shore to satisfy their curiosity or to collect fish from the exposed seabed. A typical wave period for a damaging tsunami is about twelve minutes. Thus, the sea recedes in the drawback phase, with areas well below sea level exposed after three minutes. For the next six minutes, the wave trough builds into a ridge which may flood the coast, and destruction ensues. During the next six minutes, the wave changes from a ridge to a trough, and the flood waters recede in a second drawback. Victims and debris may be swept into the ocean. The process repeats with succeeding waves. As with earthquakes, several attempts have been made to set up scales of tsunami intensity or magnitude to allow comparison between different events. The first scales used routinely to measure the intensity of tsunamis were the "Sieberg-Ambraseys scale" (1962), used in the Mediterranean Sea and the "Imamura-Iida intensity scale" (1963), used in the Pacific Ocean. The latter scale was modified by Soloviev (1972), who calculated the tsunami intensity ""I"" according to the formula: where formula_2 is the "tsunami height", averaged along the nearest coastline, with the tsunami height defined as the rise of the water level above the normal tidal level at the time of occurrence of the tsunami. This scale, known as the "Soloviev-Imamura tsunami intensity scale", is used in the global tsunami catalogues compiled by the NGDC/NOAA and the Novosibirsk Tsunami Laboratory as the main parameter for the size of the tsunami. This formula yields: In 2013, following the intensively studied tsunamis in 2004 and 2011, a new 12-point scale was proposed, the Integrated Tsunami Intensity Scale (ITIS-2012), intended to match as closely as possible to the modified ESI2007 and EMS earthquake intensity scales. The first scale that genuinely calculated a magnitude for a tsunami, rather than an intensity at a particular location was the ML scale proposed by Murty & Loomis based on the potential energy. Difficulties in calculating the potential energy of the tsunami mean that this scale is rarely used. Abe introduced the "tsunami magnitude scale formula_7", calculated from, where "h" is the maximum tsunami-wave amplitude (in m) measured by a tide gauge at a distance "R" from the epicentre, "a", "b" and "D" are constants used to make the Mt scale match as closely as possible with the moment magnitude scale. Several terms are used to describe the different characteristics of tsunami in terms of their height: Drawbacks can serve as a brief warning. People who observe drawback (many survivors report an accompanying sucking sound), can survive only if they immediately run for high ground or seek the upper floors of nearby buildings. In 2004, ten-year-old Tilly Smith of Surrey, England, was on Maikhao beach in Phuket, Thailand with her parents and sister, and having learned about tsunamis recently in school, told her family that a tsunami might be imminent. Her parents warned others minutes before the wave arrived, saving dozens of lives. She credited her geography teacher, Andrew Kearney. In the 2004 Indian Ocean tsunami drawback was not reported on the African coast or any other east-facing coasts that it reached. This was because the initial wave moved downwards on the eastern side of the megathrust and upwards on the western side. The western pulse hit coastal Africa and other western areas. A tsunami cannot be precisely predicted, even if the magnitude and location of an earthquake is known. Geologists, oceanographers, and seismologists analyse each earthquake and based on many factors may or may not issue a tsunami warning. However, there are some warning signs of an impending tsunami, and automated systems can provide warnings immediately after an earthquake in time to save lives. One of the most successful systems uses bottom pressure sensors, attached to buoys, which constantly monitor the pressure of the overlying water column. Regions with a high tsunami risk typically use tsunami warning systems to warn the population before the wave reaches land. On the west coast of the United States, which is prone to Pacific Ocean tsunami, warning signs indicate evacuation routes. In Japan, the community is well-educated about earthquakes and tsunamis, and along the Japanese shorelines the tsunami warning signs are reminders of the natural hazards together with a network of warning sirens, typically at the top of the cliff of surroundings hills. The Pacific Tsunami Warning System is based in Honolulu, Hawaii. It monitors Pacific Ocean seismic activity. A sufficiently large earthquake magnitude and other information triggers a tsunami warning. While the subduction zones around the Pacific are seismically active, not all earthquakes generate a tsunami. Computers assist in analysing the tsunami risk of every earthquake that occurs in the Pacific Ocean and the adjoining land masses. As a direct result of the Indian Ocean tsunami, a re-appraisal of the tsunami threat for all coastal areas is being undertaken by national governments and the United Nations Disaster Mitigation Committee. A tsunami warning system is being installed in the Indian Ocean. Computer models can predict tsunami arrival, usually within minutes of the arrival time. Bottom pressure sensors can relay information in real time. Based on these pressure readings and other seismic information and the seafloor's shape (bathymetry) and coastal topography, the models estimate the amplitude and surge height of the approaching tsunami. All Pacific Rim countries collaborate in the Tsunami Warning System and most regularly practise evacuation and other procedures. In Japan, such preparation is mandatory for government, local authorities, emergency services and the population. Along the United States west coast, in addition to sirens, warnings are sent on television and radio via the National Weather Service, using the Emergency Alert System. Some zoologists hypothesise that some animal species have an ability to sense subsonic Rayleigh waves from an earthquake or a tsunami. If correct, monitoring their behaviour could provide advance warning of earthquakes and tsunamis. However, the evidence is controversial and is not widely accepted. There are unsubstantiated claims about the Lisbon quake that some animals escaped to higher ground, while many other animals in the same areas drowned. The phenomenon was also noted by media sources in Sri Lanka in the 2004 Indian Ocean earthquake. It is possible that certain animals (e.g., elephants) may have heard the sounds of the tsunami as it approached the coast. The elephants' reaction was to move away from the approaching noise. By contrast, some humans went to the shore to investigate and many drowned as a result. In some tsunami-prone countries, earthquake engineering measures have been taken to reduce the damage caused onshore. Japan, where tsunami science and response measures first began following a disaster in 1896, has produced ever-more elaborate countermeasures and response plans. The country has built many tsunami walls of up to high to protect populated coastal areas. Other localities have built floodgates of up to high and channels to redirect the water from an incoming tsunami. However, their effectiveness has been questioned, as tsunamis often overtop the barriers. The Fukushima Daiichi nuclear disaster was directly triggered by the 2011 Tōhoku earthquake and tsunami, when waves exceeded the height of the plant's sea wall. Iwate Prefecture, which is an area at high risk from tsunami, had tsunami barriers walls (Taro sea wall) totalling long at coastal towns. The 2011 tsunami toppled more than 50% of the walls and caused catastrophic damage. The which struck Okushiri Island of Hokkaidō within two to five minutes of the earthquake on July 12, 1993, created waves as much as tall—as high as a 10-storey building. The port town of Aonae was completely surrounded by a tsunami wall, but the waves washed right over the wall and destroyed all the wood-framed structures in the area. The wall may have succeeded in slowing down and moderating the height of the tsunami, but it did not prevent major destruction and loss of life.
https://en.wikipedia.org/wiki?curid=31161
Tower of London The Tower of London, officially Her Majesty's Royal Palace and Fortress of the Tower of London, is a historic castle on the north bank of the River Thames in central London. It lies within the London Borough of Tower Hamlets, which is separated from the eastern edge of the square mile of the City of London by the open space known as Tower Hill. It was founded towards the end of 1066 as part of the Norman Conquest of England. The White Tower, which gives the entire castle its name, was built by William the Conqueror in 1078 and was a resented symbol of oppression, inflicted upon London by the new ruling elite. The castle was also used as a prison from 1100 (Ranulf Flambard) until 1952 (Kray twins), although that was not its primary purpose. A grand palace early in its history, it served as a royal residence. As a whole, the Tower is a complex of several buildings set within two concentric rings of defensive walls and a moat. There were several phases of expansion, mainly under kings Richard I, Henry III, and Edward I in the 12th and 13th centuries. The general layout established by the late 13th century remains despite later activity on the site. The Tower of London has played a prominent role in English history. It was besieged several times, and controlling it has been important to controlling the country. The Tower has served variously as an armoury, a treasury, a menagerie, the home of the Royal Mint, a public record office, and the home of the Crown Jewels of England. From the early 14th century until the reign of Charles II, a procession would be led from the Tower to Westminster Abbey on the coronation of a monarch. In the absence of the monarch, the Constable of the Tower is in charge of the castle. This was a powerful and trusted position in the medieval period. In the late 15th century, the castle was the prison of the Princes in the Tower. Under the Tudors, the Tower became used less as a royal residence, and despite attempts to refortify and repair the castle, its defences lagged behind developments to deal with artillery. The peak period of the castle's use as a prison was the 16th and 17th centuries, when many figures who had fallen into disgrace, such as Elizabeth I before she became queen, Sir Walter Raleigh, and Elizabeth Throckmorton, were held within its walls. This use has led to the phrase "sent to the Tower". Despite its enduring reputation as a place of torture and death, popularised by 16th-century religious propagandists and 19th-century writers, only seven people were executed within the Tower before the World Wars of the 20th century. Executions were more commonly held on the notorious Tower Hill to the north of the castle, with 112 occurring there over a 400-year period. In the latter half of the 19th century, institutions such as the Royal Mint moved out of the castle to other locations, leaving many buildings empty. Anthony Salvin and John Taylor took the opportunity to restore the Tower to what was felt to be its medieval appearance, clearing out many of the vacant post-medieval structures. In the First and Second World Wars, the Tower was again used as a prison and witnessed the executions of 12 men for espionage. After the Second World War, damage caused during the Blitz was repaired, and the castle reopened to the public. Today, the Tower of London is one of the country's most popular tourist attractions. Under the ceremonial charge of the Constable of the Tower, and operated by the Resident Governor of the Tower of London and Keeper of the Jewel House, the property is cared for by the charity Historic Royal Palaces and is protected as a World Heritage Site. The Tower was orientated with its strongest and most impressive defences overlooking Saxon London, which archaeologist Alan Vince suggests was deliberate. It would have visually dominated the surrounding area and stood out to traffic on the River Thames. The castle is made up of three "wards", or enclosures. The innermost ward contains the White Tower and is the earliest phase of the castle. Encircling it to the north, east, and west is the inner ward, built during the reign of Richard I (1189–1199). Finally, there is the outer ward which encompasses the castle and was built under Edward I. Although there were several phases of expansion after William the Conqueror founded the Tower of London, the general layout has remained the same since Edward I completed his rebuild in 1285. The castle encloses an area of almost with a further around the Tower of London constituting the Tower Liberties – land under the direct influence of the castle and cleared for military reasons. The precursor of the Liberties was laid out in the 13th century when Henry III ordered that a strip of land adjacent to the castle be kept clear. Despite popular fiction, the Tower of London never had a permanent torture chamber, although the basement of the White Tower housed a rack in later periods. Tower Wharf was built on the bank of the Thames under Edward I and was expanded to its current size during the reign of Richard II (1377–1399). The White Tower is a keep (also known as a donjon), which was often the strongest structure in a medieval castle, and contained lodgings suitable for the lord – in this case, the king or his representative. According to military historian Allen Brown, "The great tower [White Tower] was also, by virtue of its strength, majesty and lordly accommodation, the donjon "par excellence"". As one of the largest keeps in the Christian world, the White Tower has been described as "the most complete eleventh-century palace in Europe". The White Tower, not including its projecting corner towers, measures at the base, and is high at the southern battlements. The structure was originally three storeys high, comprising a basement floor, an entrance level, and an upper floor. The entrance, as is usual in Norman keeps, was above ground, in this case on the south face, and accessed via a wooden staircase which could be removed in the event of an attack. It was probably during Henry II's reign (1154–1189) that a forebuilding was added to the south side of the tower to provide extra defences to the entrance, but it has not survived. Each floor was divided into three chambers, the largest in the west, a smaller room in the north-east, and the chapel taking up the entrance and upper floors of the south-east. At the western corners of the building are square towers, while to the north-east a round tower houses a spiral staircase. At the south-east corner there is a larger semi-circular projection which accommodates the apse of the chapel. As the building was intended to be a comfortable residence as well as a stronghold, latrines were built into the walls, and four fireplaces provided warmth. The main building material is Kentish rag-stone, although some local mudstone was also used. Caen stone was imported from northern France to provide details in the Tower's facing, although little of the original material survives as it was replaced with Portland stone in the 17th and 18th centuries. As most of the Tower's windows were enlarged in the 18th century, only two original – albeit restored – examples remain, in the south wall at the gallery level. The tower was terraced into the side of a mound, so the northern side of the basement is partially below ground level. As was typical of most keeps, the bottom floor was an undercroft used for storage. One of the rooms contained a well. Although the layout has remained the same since the tower's construction, the interior of the basement dates mostly from the 18th century when the floor was lowered and the pre-existing timber vaults were replaced with brick counterparts. The basement is lit through small slits. The entrance floor was probably intended for the use of the Constable of the Tower, Lieutenant of the Tower of London and other important officials. The south entrance was blocked during the 17th century, and not reopened until 1973. Those heading to the upper floor had to pass through a smaller chamber to the east, also connected to the entrance floor. The crypt of St John's Chapel occupied the south-east corner and was accessible only from the eastern chamber. There is a recess in the north wall of the crypt; according to Geoffrey Parnell, Keeper of the Tower History at the Royal Armouries, "the windowless form and restricted access, suggest that it was designed as a strong-room for safekeeping of royal treasures and important documents". The upper floor contained a grand hall in the west and residential chamber in the eastboth originally open to the roof and surrounded by a gallery built into the walland St John's Chapel in the south-east. The top floor was added in the 15th century, along with the present roof. St John's Chapel was not part of the White Tower's original design, as the apsidal projection was built after the basement walls. Due to changes in function and design since the tower's construction, except for the chapel little is left of the original interior. The chapel's current bare and unadorned appearance is reminiscent of how it would have been in the Norman period. In the 13th century, during Henry III's reign, the chapel was decorated with such ornamentation as a gold-painted cross, and stained glass windows that depicted the Virgin Mary and the Holy Trinity. The innermost ward encloses an area immediately south of the White Tower, stretching to what was once the edge of the River Thames. As was the case at other castles, such as the 11th-century Hen Domen, the innermost ward was probably filled with timber buildings from the Tower's foundation. Exactly when the royal lodgings began to encroach from the White Tower into the innermost ward is uncertain, although it had happened by the 1170s. The lodgings were renovated and elaborated during the 1220s and 1230s, becoming comparable with other palatial residences such as Windsor Castle. Construction of Wakefield and Lanthorn Towers – located at the corners of the innermost ward's wall along the river – began around 1220. They probably served as private residences for the queen and king respectively. The earliest evidence for how the royal chambers were decorated comes from Henry III's reign: the queen's chamber was whitewashed, and painted with flowers and imitation stonework. A great hall existed in the south of the ward, between the two towers. It was similar to, although slightly smaller than, that also built by Henry III at Winchester Castle. Near Wakefield Tower was a postern gate which allowed private access to the king's apartments. The innermost ward was originally surrounded by a protective ditch, which had been filled in by the 1220s. Around this time, a kitchen was built in the ward. Between 1666 and 1676, the innermost ward was transformed and the palace buildings removed. The area around the White Tower was cleared so that anyone approaching would have to cross open ground. The Jewel House was demolished, and the Crown Jewels moved to Martin Tower. The inner ward was created during Richard the Lionheart's reign, when a moat was dug to the west of the innermost ward, effectively doubling the castle's size. Henry III created the ward's east and north walls, and the ward's dimensions remain to this day. Most of Henry's work survives, and only two of the nine towers he constructed have been completely rebuilt. Between the Wakefield and Lanthorn Towers, the innermost ward's wall also serves as a curtain wall for the inner ward. The main entrance to the inner ward would have been through a gatehouse, most likely in the west wall on the site of what is now Beauchamp Tower. The inner ward's western curtain wall was rebuilt by Edward I. The 13th-century Beauchamp Tower marks the first large-scale use of brick as a building material in Britain, since the 5th-century departure of the Romans. The Beauchamp Tower is one of 13 towers that stud the curtain wall. Clockwise from the south-west corner they are: Bell, Beauchamp, Devereux, Flint, Bowyer, Brick, Martin, Constable, Broad Arrow, Salt, Lanthorn, Wakefield, and the Bloody Tower. While these towers provided positions from which flanking fire could be deployed against a potential enemy, they also contained accommodation. As its name suggests, Bell Tower housed a belfry, its purpose to raise the alarm in the event of an attack. The royal bow-maker, responsible for making longbows, crossbows, catapults, and other siege and hand weapons, had a workshop in the Bowyer Tower. A turret at the top of Lanthorn Tower was used as a beacon by traffic approaching the Tower at night. As a result of Henry's expansion, St Peter ad Vincula, a Norman chapel which had previously stood outside the Tower, was incorporated into the castle. Henry decorated the chapel by adding glazed windows, and stalls for himself and his queen. It was rebuilt by Edward I at a cost of over £300 and again by Henry VIII in 1519; the current building dates from this period, although the chapel was refurbished in the 19th century. Immediately west of Wakefield Tower, the Bloody Tower was built at the same time as the inner ward's curtain wall, and as a water-gate provided access to the castle from the River Thames. It was a simple structure, protected by a portcullis and gate. The Bloody Tower acquired its name in the 16th century, as it was believed to be the site of the murder of the Princes in the Tower. Between 1339 and 1341, a gatehouse was built into the curtain wall between Bell and Salt Towers. During the Tudor period, a range of buildings for the storage of munitions was built along the inside of the north inner ward. The castle buildings were remodelled during the Stuart period, mostly under the auspices of the Office of Ordnance. In 1663 just over £4,000 was spent building a new storehouse (now known as the New Armouries) in the inner ward. Construction of the Grand Storehouse north of the White Tower began in 1688, on the same site as the dilapidated Tudor range of storehouses; it was destroyed by fire in 1841. The Waterloo Block, a former barracks in the castellated Gothic Revival style with Domestic Tudor details, was built on the site and remains to this day, housing the Crown Jewels on the ground floor. A third ward was created during Edward I's extension to the Tower, as the narrow enclosure completely surrounded the castle. At the same time a bastion known as Legge's Mount was built at the castle's northwest corner. Brass Mount, the bastion in the northeast corner, was a later addition. The three rectangular towers along the east wall apart were dismantled in 1843. Although the bastions have often been ascribed to the Tudor period, there is no evidence to support this; archaeological investigations suggest that Legge's Mount dates from the reign of Edward I. Blocked battlements (also known as crenellations) in the south side of Legge's Mount are the only surviving medieval battlements at the Tower of London (the rest are Victorian replacements). A new moat was dug beyond the castle's new limits; it was originally deeper in the middle than it is today. With the addition of a new curtain wall, the old main entrance to the Tower of London was obscured and made redundant; a new entrance was created in the southwest corner of the external wall circuit. The complex consisted of an inner and an outer gatehouse and a barbican, which became known as the Lion Tower as it was associated with the animals as part of the Royal Menagerie since at least the 1330s. The Lion Tower itself no longer survives. Edward extended the south side of the Tower of London onto land that had previously been submerged by the River Thames. In this wall, he built St Thomas's Tower between 1275 and 1279; later known as Traitors' Gate, it replaced the Bloody Tower as the castle's water-gate. The building is unique in England, and the closest parallel is the now demolished water-gate at the Louvre in Paris. The dock was covered with arrowslits in case of an attack on the castle from the River; there was also a portcullis at the entrance to control who entered. There were luxurious lodgings on the first floor. Edward also moved the Royal Mint into the Tower; its exact location early on is unknown, although it was probably in either the outer ward or the Lion Tower. By 1560, the Mint was located in a building in the outer ward near Salt Tower. Between 1348 and 1355, a second water-gate, Cradle Tower, was added east of St Thomas's Tower for the king's private use. Victorious at the Battle of Hastings on 14 October 1066, the invading Duke of Normandy, William the Conqueror, spent the rest of the year securing his holdings by fortifying key positions. He founded several castles along the way, but took a circuitous route toward London; only when he reached Canterbury did he turn towards England's largest city. As the fortified bridge into London was held by Saxon troops, he decided instead to ravage Southwark before continuing his journey around southern England. A series of Norman victories along the route cut the city's supply lines and in December 1066, isolated and intimidated, its leaders yielded London without a fight. Between 1066 and 1087, William established 36 castles, although references in the Domesday Book indicate that many more were founded by his subordinates. The new ruling elite undertook what has been described as "the most extensive and concentrated programme of castle-building in the whole history of feudal Europe". They were multi-purpose buildings, serving as fortifications (used as a base of operations in enemy territory), centres of administration, and residences. William sent an advance party to prepare the city for his entrance, to celebrate his victory and found a castle; in the words of William's biographer, William of Poitiers, "certain fortifications were completed in the city against the restlessness of the huge and brutal populace. For he [William] realised that it was of the first importance to overawe the Londoners". At the time, London was the largest town in England; the foundation of Westminster Abbey and the old Palace of Westminster under Edward the Confessor had marked it as a centre of governance, and with a prosperous port it was important for the Normans to establish control over the settlement. The other two castles in London – Baynard's Castle and Montfichet's Castle – were established at the same time. The fortification that would later become known as the Tower of London was built onto the south-east corner of the Roman town walls, using them as prefabricated defences, with the River Thames providing additional protection from the south. This earliest phase of the castle would have been enclosed by a ditch and defended by a timber palisade, and probably had accommodation suitable for William. Most of the early Norman castles were built from timber, but by the end of the 11th century a few, including the Tower of London, had been renovated or replaced with stone. Work on the White Tower – which gives the whole castle its name – is usually considered to have begun in 1078, however the exact date is uncertain. William made Gundulf, Bishop of Rochester, responsible for its construction, although it may not have been completed until after William's death in 1087. The White Tower is the earliest stone keep in England, and was the strongest point of the early castle. It also contained grand accommodation for the king. At the latest, it was probably finished by 1100 when Bishop Ranulf Flambard was imprisoned there. Flambard was loathed by the English for exacting harsh taxes. Although he is the first recorded prisoner held in the Tower, he was also the first person to escape from it, using a smuggled rope secreted in a butt of wine. He was held in luxury and permitted servants, but on 2 February 1101 he hosted a banquet for his captors. After plying them with drink, when no one was looking he lowered himself from a secluded chamber, and out of the Tower. The escape came as such a surprise that one contemporary chronicler accused the bishop of witchcraft. The "Anglo-Saxon Chronicle" records that in 1097 King William II ordered a wall to be built around the Tower of London; it was probably built from stone as a replacement for the timber palisade that arced around the north and west sides of the castle, between the Roman wall and the Thames. The Norman Conquest of London manifested itself not only with a new ruling class, but in the way the city was structured. Land was confiscated and redistributed amongst the Normans, who also brought over hundreds of Jews, for financial reasons. The Jews arrived under the direct protection of the Crown, as a result of which Jewish communities were often found close to castles. The Jews used the Tower as a retreat, when threatened by anti-Jewish violence. The death in 1135 of Henry I left England with a disputed succession; although the king had persuaded his most powerful barons to swear support for the Empress Matilda, just a few days after Henry's death Stephen of Blois arrived from France to lay claim to the throne. The importance of the city and its Tower is marked by the speed at which he secured London. The castle, which had not been used as a royal residence for some time, was usually left in the charge of a Constable, a post held at this time by Geoffrey de Mandeville. As the Tower was considered an impregnable fortress in a strategically important position, possession was highly valued. Mandeville exploited this, selling his allegiance to Matilda after Stephen was captured in 1141 at the Battle of Lincoln. Once her support waned, the following year he resold his loyalty to Stephen. Through his role as Constable of the Tower, Mandeville became "the richest and most powerful man in England". When he tried the same ploy again, this time holding secret talks with Matilda, Stephen had him arrested, forced him to cede control of his castles, and replaced him with one of his most loyal supporters. Until then the position had been hereditary, originally held by Geoffrey de Mandeville (a friend of William the Conqueror's and ancestor of the Geoffrey that Stephen and Matilda dealt with), but the position's authority was such that from then on it remained in the hands of an appointee of the monarch. The position was usually given to someone of great importance, who might not always be at the castle due to other duties. Although the Constable was still responsible for maintaining the castle and its garrison, from an early stage he had a subordinate to help with this duty: the Lieutenant of the Tower. Constables also had civic duties relating to the city. Usually they were given control of the city and were responsible for levying taxes, enforcing the law and maintaining order. The creation in 1191 of the position of Lord Mayor of London removed many of the Constable's civic powers, and at times led to friction between the two. The castle probably retained its form as established by 1100 until the reign of Richard I (1189–1199). The castle was extended under William Longchamp, King Richard's Lord Chancellor and the man in charge of England while he was on crusade. The Pipe Rolls record £2,881 1s 10d spent at the Tower of London between 3 December 1189 and 11 November 1190, from an estimated £7,000 spent by Richard on castle building in England. According to the contemporary chronicler Roger of Howden, Longchamp dug a moat around the castle and tried in vain to fill it from the Thames. Longchamp was also Constable of the Tower, and undertook its expansion while preparing for war with King Richard's younger brother, Prince John, who in Richard's absence arrived in England to try to seize power. As Longchamp's main fortress, he made the Tower as strong as possible. The new fortifications were first tested in October 1191, when the Tower was besieged for the first time in its history. Longchamp capitulated to John after just three days, deciding he had more to gain from surrender than prolonging the siege. John succeeded Richard as king in 1199, but his rule proved unpopular with many of his barons, who in response moved against him. In 1214, while the king was at Windsor Castle, Robert Fitzwalter led an army into London and laid siege to the Tower. Although under-garrisoned, the Tower resisted and the siege was lifted once John signed the Magna Carta. The king reneged on his promises of reform, leading to the outbreak of the First Barons' War. Even after the Magna Carta was signed, Fitzwalter maintained his control of London. During the war, the Tower's garrison joined forces with the barons. John was deposed in 1216 and the barons offered the English throne to Prince Louis, the eldest son of the French king. However, after John's death in October 1216, many began to support the claim of his eldest son, Henry III. War continued between the factions supporting Louis and Henry, with Fitzwalter supporting Louis. Fitzwalter was still in control of London and the Tower, both of which held out until it was clear that Henry III's supporters would prevail. In the 13th century, Kings Henry III (1216–1272) and Edward I (1272–1307) extended the castle, essentially creating it as it stands today. Henry was disconnected from his barons, and a mutual lack of understanding led to unrest and resentment towards his rule. As a result, he was eager to ensure the Tower of London was a formidable fortification; at the same time Henry was an aesthete and wished to make the castle a comfortable place to live. From 1216 to 1227 nearly £10,000 was spent on the Tower of London; in this period, only the work at Windsor Castle cost more (£15,000). Most of the work was focused on the palatial buildings of the innermost ward. The tradition of whitewashing the White Tower (from which it derives its name) began in 1240. Beginning around 1238, the castle was expanded to the east, north, and north-west. The work lasted through the reign of Henry III and into that of Edward I, interrupted occasionally by civil unrest. New creations included a new defensive perimeter, studded with towers, while on the west, north, and east sides, where the wall was not defended by the river, a defensive ditch was dug. The eastern extension took the castle beyond the bounds of the old Roman settlement, marked by the city wall which had been incorporated into the castle's defences. The Tower had long been a symbol of oppression, despised by Londoners, and Henry's building programme was unpopular. So when the gatehouse collapsed in 1240, the locals celebrated the setback. The expansion caused disruption locally and £166 was paid to St Katherine's Hospital and the prior of Holy Trinity in compensation. Henry III often held court at the Tower of London, and held parliament there on at least two occasions (1236 and 1261) when he felt that the barons were becoming dangerously unruly. In 1258, the discontented barons, led by Simon de Montfort, forced the King to agree to reforms including the holding of regular parliaments. Relinquishing the Tower of London was among the conditions. Henry III resented losing power and sought permission from the pope to break his oath. With the backing of mercenaries, Henry installed himself in the Tower in 1261. While negotiations continued with the barons, the King ensconced himself in the castle, although no army moved to take it. A truce was agreed with the condition that the King hand over control of the Tower once again. Henry won a significant victory at the Battle of Evesham in 1265, allowing him to regain control of the country and the Tower of London. Cardinal Ottobuon came to England to excommunicate those who were still rebellious; the act was deeply unpopular and the situation was exacerbated when the cardinal was granted custody of the Tower. Gilbert de Clare, 6th Earl of Hertford, marched on London in April 1267 and laid siege to the castle, declaring that custody of the Tower was "not a post to be trusted in the hands of a foreigner, much less of an ecclesiastic". Despite a large army and siege engines, Gilbert de Clare was unable to take the castle. The Earl retreated, allowing the King control of the capital, and the Tower experienced peace for the rest of Henry's reign. Although he was rarely in London, Edward I undertook an expensive remodelling of the Tower, costing £21,000 between 1275 and 1285, over double that spent on the castle during the whole of Henry III's reign. Edward I was a seasoned castle builder, and used his experience of siege warfare during the crusades to bring innovations to castle building. His programme of castle building in Wales heralded the introduction of the widespread use of arrowslits in castle walls across Europe, drawing on Eastern influences. At the Tower of London, Edward filled in the moat dug by Henry III and built a new curtain wall along its line, creating a new enclosure. A new moat was created in front of the new curtain wall. The western part of Henry III's curtain wall was rebuilt, with Beauchamp Tower replacing the castle's old gatehouse. A new entrance was created, with elaborate defences including two gatehouses and a barbican. In an effort to make the castle self-sufficient, Edward I also added two watermills. Six hundred Jews were imprisoned in the Tower of London in 1278, charged with coin clipping. Persecution of the country's Jewish population under Edward began in 1276 and culminated in 1290 when he issued the Edict of Expulsion, forcing the Jews out of the country. During Edward II's reign (1307–1327) there was relatively little activity at the Tower of London. However, it was during this period that the Privy Wardrobe was founded. The institution was based at the Tower and responsible for organising the state's arms. In 1321, Margaret de Clare, Baroness Badlesmere became the first woman imprisoned in the Tower of London after she refused Queen Isabella admittance to Leeds Castle and ordered her archers to fire upon Isabella, killing six of the royal escort. Generally reserved for high-ranking inmates, the Tower was the most important royal prison in the country. However it was not necessarily very secure, and throughout its history people bribed the guards to help them escape. In 1323 Roger Mortimer, Baron Mortimer, was aided in his escape from the Tower by the Sub-Lieutenant of the Tower who let Mortimer's men inside. They hacked a hole in his cell wall and Mortimer escaped to a waiting boat. He fled to France where he encountered Edward's Queen. They began an affair and plotted to overthrow the King. One of Mortimer's first acts on entering England in 1326 was to capture the Tower and release the prisoners held there. For four years he ruled while Edward III was too young to do so himself; in 1330, Edward and his supporters captured Mortimer and threw him in the Tower. Under Edward III's rule (1312–1377) England experienced renewed success in warfare after his father's reign had put the realm on the backfoot against the Scots and French. Amongst Edward's successes were the battles of Crécy and Poitiers where King John II of France was taken prisoner, and the capture of the King David II of Scotland at Neville's Cross. During this period, the Tower of London held many noble prisoners of war. Edward II had allowed the Tower of London to fall into a state of disrepair, and by the reign of Edward III the castle was an uncomfortable place. The nobility held captive within its walls were unable to engage in activities such as hunting which were permissible at other royal castles used as prisons, for instance Windsor. Edward III ordered that the castle should be renovated. When Richard II was crowned in 1377, he led a procession from the Tower to Westminster Abbey. This tradition began in at least the early 14th century and lasted until 1660. During the Peasants' Revolt of 1381 the Tower of London was besieged with the King inside. When Richard rode out to meet with Wat Tyler, the rebel leader, a crowd broke into the castle without meeting resistance and looted the Jewel House. The Archbishop of Canterbury, Simon Sudbury, took refuge in St John's Chapel, hoping the mob would respect the sanctuary. However, he was taken away and beheaded on Tower Hill. Six years later there was again civil unrest, and Richard spent Christmas in the security of the Tower rather than Windsor as was more usual. When Henry Bolingbroke returned from exile in 1399, Richard was imprisoned in the White Tower. He abdicated and was replaced on the throne by Bolingbroke, who became King Henry IV. In the 15th century, there was little building work at the Tower of London, yet the castle still remained important as a place of refuge. When supporters of the late Richard II attempted a coup, Henry IV found safety in the Tower of London. During this period, the castle also held many distinguished prisoners. The heir to the Scottish throne, later King James I of Scotland, was kidnapped while journeying to France in 1406 and held in the Tower. The reign of Henry V (1413–1422) renewed England's fortune in the Hundred Years' War against France. As a result of Henry's victories, such as the Battle of Agincourt, many high-status prisoners were held in the Tower of London until they were ransomed. Much of the latter half of the 15th century was occupied by the Wars of the Roses between the claimants to the throne, the houses of Lancaster and York. The castle was once again besieged in 1460, this time by a Yorkist force. The Tower was damaged by artillery fire but only surrendered when Henry VI was captured at the Battle of Northampton. With the help of Richard Neville, 16th Earl of Warwick (nicknamed "the Kingmaker") Henry recaptured the throne for a short time in 1470. However, Edward IV soon regained control and Henry VI was imprisoned in the Tower of London, where he was probably murdered. During the wars, the Tower was fortified to withstand gunfire, and provided with loopholes for cannons and handguns: an enclosure was created for this purpose to the south of Tower Hill, although it no longer survives. Shortly after the death of Edward IV in 1483, the notorious murder of the Princes in the Tower is traditionally believed to have taken place. The incident is one of the most infamous events associated with the Tower of London. Edward V's uncle Richard, Duke of Gloucester was declared Lord Protector while the prince was too young to rule. Traditional accounts have held that the 12-year-old Edward was confined to the Tower of London along with his younger brother Richard. The Duke of Gloucester was proclaimed King Richard III in June. The princes were last seen in public in June 1483; it has traditionally been thought that the most likely reason for their disappearance is that they were murdered late in the summer of 1483. Bones thought to belong to them were discovered in 1674 when the 12th-century forebuilding at the entrance to the White Tower was demolished; however, the reputed level at which the bones were found () would put the bones at a depth similar to that of the recently discovered Roman graveyard found underneath the Minories a few hundred yards to the north. Opposition to Richard escalated until he was defeated at the Battle of Bosworth Field in 1485 by the Lancastrian Henry Tudor, who ascended to the throne as Henry VII. The beginning of the Tudor period marked the start of the decline of the Tower of London's use as a royal residence. As 16th-century chronicler Raphael Holinshed said the Tower became used more as "an armouries and house of munition, and thereunto a place for the safekeeping of offenders than a palace roiall for a king or queen to sojourne in". The Yeoman Warders have been the Royal Bodyguard since at least 1509. During the reign of Henry VIII, the Tower was assessed as needing considerable work on its defences. In 1532, Thomas Cromwell spent £3,593 on repairs and imported nearly 3,000 tons of Caen stone for the work. Even so, this was not sufficient to bring the castle up to the standard of contemporary military fortifications which were designed to withstand powerful artillery. Although the defences were repaired, the palace buildings were left in a state of neglect after Henry's death. Their condition was so poor that they were virtually uninhabitable. From 1547 onwards, the Tower of London was only used as a royal residence when its political and historic symbolism was considered useful, for instance each of Edward VI, Mary I, and Elizabeth I briefly stayed at the Tower before their coronations. In the 16th century, the Tower acquired an enduring reputation as a grim, forbidding prison. This had not always been the case. As a royal castle, it was used by the monarch to imprison people for various reasons, however these were usually high-status individuals for short periods rather than common citizenry as there were plenty of prisons elsewhere for such people. Contrary to the popular image of the Tower, prisoners were able to make their life easier by purchasing amenities such as better food or tapestries through the Lieutenant of the Tower. As holding prisoners was originally an incidental role of the Tower – as would have been the case for any castle – there was no purpose-built accommodation for prisoners until 1687 when a brick shed, a "Prison for Soldiers", was built to the north-west of the White Tower. The Tower's reputation for torture and imprisonment derives largely from 16th-century religious propagandists and 19th-century romanticists. Although much of the Tower's reputation is exaggerated, the 16th and 17th centuries marked the castle's zenith as a prison, with many religious and political undesirables locked away. The Privy Council had to sanction the use of torture, so it was not often used; between 1540 and 1640, the peak of imprisonment at the Tower, there were 48 recorded cases of the use of torture. The three most common forms used were the infamous rack, the Scavenger's daughter, and manacles. The rack was introduced to England in 1447 by the Duke of Exeter, the Constable of the Tower; consequentially it was also known as the Duke of Exeter's daughter. One of those tortured at the Tower was Guy Fawkes, who was brought there on 6 November 1605; after torture he signed a full confession to the Gunpowder Plot. Among those held and executed at the Tower was Anne Boleyn. Although the Yeoman Warders were once the Royal Bodyguard, by the 16th and 17th centuries their main duty had become to look after the prisoners. The Tower was often a safer place than other prisons in London such as the Fleet, where disease was rife. High-status prisoners could live in conditions comparable to those they might expect outside; one such example was that while Walter Raleigh was held in the Tower his rooms were altered to accommodate his family, including his son who was born there in 1605. Executions were usually carried out on Tower Hill rather than in the Tower of London itself, and 112 people were executed on the hill over 400 years. Before the 20th century, there had been seven executions within the castle on Tower Green; as was the case with Lady Jane Grey, this was reserved for prisoners for whom public execution was considered dangerous. After Lady Jane Grey's execution on 12 February 1554, Queen Mary I imprisoned her sister Elizabeth, later Queen Elizabeth I, in the Tower under suspicion of causing rebellion as Sir Thomas Wyatt had led a revolt against Mary in Elizabeth's name. The Office of Ordnance and Armoury Office were founded in the 15th century, taking over the Privy Wardrobe's duties of looking after the monarch's arsenal and valuables. As there was no standing army before 1661, the importance of the royal armoury at the Tower of London was that it provided a professional basis for procuring supplies and equipment in times of war. The two bodies were resident at the Tower from at least 1454, and by the 16th century they had moved to a position in the inner ward. The Board of Ordnance (successor to these Offices) had its headquarters in the White Tower and used surrounding buildings for storage. In 1855 the Board was abolished; its successor (the Military Store Department of the War Office) was also based there until 1869, after which its headquarters staff were relocated to the Royal Arsenal in Woolwich (where the recently closed Woolwich Dockyard was converted into a vast ordnance store). Political tensions between Charles I and Parliament in the second quarter of the 17th century led to an attempt by forces loyal to the King to secure the Tower and its valuable contents, including money and munitions. London's Trained Bands, a militia force, were moved into the castle in 1640. Plans for defence were drawn up and gun platforms were built, readying the Tower for war. The preparations were never put to the test. In 1642, Charles I attempted to arrest five members of parliament. When this failed he fled the city, and Parliament retaliated by removing Sir John Byron, the Lieutenant of the Tower. The Trained Bands had switched sides, and now supported Parliament; together with the London citizenry, they blockaded the Tower. With permission from the King, Byron relinquished control of the Tower. Parliament replaced Byron with a man of their own choosing, Sir John Conyers. By the time the English Civil War broke out in November 1642, the Tower of London was already in Parliament's control. The last monarch to uphold the tradition of taking a procession from the Tower to Westminster to be crowned was Charles II in 1661. At the time, the castle's accommodation was in such poor condition that he did not stay there the night before his coronation. Under the Stuart kings the Tower's buildings were remodelled, mostly under the auspices of the Office of Ordnance. Just over £4,000 was spent in 1663 on building a new storehouse, now known as the New Armouries in the inner ward. In the 17th century there were plans to enhance the Tower's defences in the style of the "trace italienne", however they were never acted on. Although the facilities for the garrison were improved with the addition of the first purpose-built quarters for soldiers (the "Irish Barracks") in 1670, the general accommodations were still in poor condition. When the Hanoverian dynasty ascended the throne, their situation was uncertain and with a possible Scottish rebellion in mind, the Tower of London was repaired. Gun platforms added under the Stuarts had decayed. The number of guns at the Tower was reduced from 118 to 45, and one contemporary commentator noted that the castle "would not hold out four and twenty hours against an army prepared for a siege". For the most part, the 18th-century work on the defences was spasmodic and piecemeal, although a new gateway in the southern curtain wall permitting access from the wharf to the outer ward was added in 1774. The moat surrounding the castle had become silted over the centuries since it was created despite attempts at clearing it. It was still an integral part of the castle's defences, so in 1830 the Constable of the Tower, the Duke of Wellington, ordered a large-scale clearance of several feet of silt. However this did not prevent an outbreak of disease in the garrison in 1841 caused by poor water supply, resulting in several deaths. To prevent the festering ditch posing further health problems, it was ordered that the moat should be drained and filled with earth. The work began in 1843 and was mostly complete two years later. The construction of the Waterloo Barracks in the inner ward began in 1845, when the Duke of Wellington laid the foundation stone. The building could accommodate 1,000 men; at the same time, separate quarters for the officers were built to the north-east of the White Tower. The building is now the headquarters of the Royal Regiment of Fusiliers. The popularity of the Chartist movement between 1828 and 1858 led to a desire to refortify the Tower of London in the event of civil unrest. It was the last major programme of fortification at the castle. Most of the surviving installations for the use of artillery and firearms date from this period. During the First World War, eleven men were tried in private and shot by firing squad at the Tower for espionage. During the Second World War, the Tower was once again used to hold prisoners of war. One such person was Rudolf Hess, Adolf Hitler's deputy, albeit just for four days in 1941. He was the last state prisoner to be held at the castle. The last person to be executed at the Tower was German spy Josef Jakobs who was shot on 15 August 1941. The executions for espionage during the wars took place in a prefabricated miniature rifle range which stood in the outer ward and was demolished in 1969. The Second World War also saw the last use of the Tower as a fortification. In the event of a German invasion, the Tower, together with the Royal Mint and nearby warehouses, was to have formed one of three "keeps" or complexes of defended buildings which formed the last-ditch defences of the capital. The Tower of London has become established as one of the most popular tourist attractions in the country. It has been a tourist attraction since at least the Elizabethan period, when it was one of the sights of London that foreign visitors wrote about. Its most popular attractions were the Royal Menagerie and displays of armour. The Crown Jewels also garner much interest, and have been on public display since 1669. The Tower steadily gained popularity with tourists through the 19th century, despite the opposition of the Duke of Wellington to visitors. Numbers became so high that by 1851 a purpose-built ticket office was erected. By the end of the century, over 500,000 were visiting the castle every year. Over the 18th and 19th centuries, the palatial buildings were slowly adapted for other uses and demolished. Only the Wakefield and St Thomas's Towers survived. The 18th century marked an increasing interest in England's medieval past. One of the effects was the emergence of Gothic Revival architecture. In the Tower's architecture, this was manifest when the New Horse Armoury was built in 1825 against the south face of the White Tower. It featured elements of Gothic Revival architecture such as battlements. Other buildings were remodelled to match the style and the Waterloo Barracks were described as "castellated Gothic of the 15th century". Between 1845 and 1885 institutions such as the Mint which had inhabited the castle for centuries moved to other sites; many of the post-medieval structures left vacant were demolished. In 1855, the War Office took over responsibility for manufacture and storage of weapons from the Ordnance Office, which was gradually phased out of the castle. At the same time, there was greater interest in the history of the Tower of London. Public interest was partly fuelled by contemporary writers, of whom the work of William Harrison Ainsworth was particularly influential. In "The Tower of London: A Historical Romance" he created a vivid image of underground torture chambers and devices for extracting confessions that stuck in the public imagination. Ainsworth also played another role in the Tower's history, as he suggested that Beauchamp Tower should be opened to the public so they could see the inscriptions of 16th- and 17th-century prisoners. Working on the suggestion, Anthony Salvin refurbished the tower and led a further programme for a comprehensive restoration at the behest of Prince Albert. Salvin was succeeded in the work by John Taylor. When a feature did not meet his expectations of medieval architecture Taylor would ruthlessly remove it; as a result, several important buildings within the castle were pulled down and in some cases post-medieval internal decoration removed. Although only one bomb fell on the Tower of London in the First World War (it landed harmlessly in the moat), the Second World War left a greater mark. On 23 September 1940, during the Blitz, high-explosive bombs damaged the castle, destroying several buildings and narrowly missing the White Tower. After the war, the damage was repaired and the Tower of London was reopened to the public. A 1974 Tower of London bombing in the White Tower Mortar Room left one person dead and 41 injured. No one claimed responsibility for the blast, but the police investigated suspicions that the IRA was behind it. In the 21st century, tourism is the Tower's primary role, the remaining routine military activities, under the Royal Logistic Corps, having wound down in the latter half of the 20th century and moved out of the castle. However, the Tower is still home to the ceremonial regimental headquarters of the Royal Regiment of Fusiliers, and the museum dedicated to it and its predecessor, the Royal Fusiliers. Also, a detachment of the unit providing the Queen's Guard at Buckingham Palace still mounts a guard at the Tower, and with the Yeomen Warders, takes part in the Ceremony of the Keys each day. On several occasions through the year gun salutes are fired from the Tower by the Honourable Artillery Company, these consist of 62 rounds for royal occasions, and 41 on other occasions. Since 1990, the Tower of London has been cared for by an independent charity, Historic Royal Palaces, which receives no funding from the Government or the Crown. In 1988, the Tower of London was added to the UNESCO list of World Heritage Sites, in recognition of its global importance and to help conserve and protect the site. However, recent developments, such as the construction of skyscrapers nearby, have pushed the Tower towards being added to the United Nations' Heritage in Danger List. The remains of the medieval palace have been open to the public since 2006. Visitors can explore the chambers restored to their former glory, once used by past kings and queens. Although the position of Constable of the Tower remains the highest position held at the Tower, the responsibility of day-to-day administration is delegated to the Resident Governor. The Constable is appointed for a five-year term; this is primarily a ceremonial post today but the Constable is also a trustee of Historic Royal Palaces and of the Royal Armouries. General Sir Nick Houghton was appointed Constable in 2016. At least six ravens are kept at the Tower at all times, in accordance with the belief that if they are absent, the kingdom will fall. They are under the care of the Ravenmaster, one of the Yeoman Warders. As well as having ceremonial duties, the Yeoman Warders provide guided tours around the Tower. Over 2.8 million people visited the Tower of London in 2017. The Yeomen Warders provided the permanent garrison of the Tower, but the Constable of the Tower could call upon the men of the Tower Hamlets to supplement them when necessary. The Tower Hamlets, aka Tower Division was an area, significantly larger than the modern London Borough of the same name, which owed military service to the Constable in his "ex officio" role as Lord Lieutenant of the Tower Hamlets. The tradition of housing the Crown Jewels in the Tower of London probably dates from the reign of Henry III (1216–1272). The Jewel House was built specifically to house the royal regalia, including jewels, plate, and symbols of royalty such as the crown, sceptre, and sword. When money needed to be raised, the treasure could be pawned by the monarch. The treasure allowed the monarch independence from the aristocracy, and consequently was closely guarded. A new position for "keeper of the jewels, armouries and other things" was created, which was well rewarded; in the reign of Edward III (1327–1377) the holder was paid 12d a day. The position grew to include other duties including purchasing royal jewels, gold, and silver, and appointing royal goldsmiths and jewellers. In 1649, during the English Civil War, the contents of the Jewel House were disposed of along with other royal properties, as decreed by Cromwell. Metal items were sent to the Mint to be melted down and re-used, and the crowns were "totallie broken and defaced". When the monarchy was restored in 1660, the only surviving items of the coronation regalia were a 12th-century spoon and three ceremonial swords. (Some pieces that had been sold were later returned to the Crown.) Detailed records of old regalia survived, and replacements were made for the coronation of Charles II in 1661 based on drawings from the time of Charles I. For the coronation of Charles II, gems were rented because the treasury could not afford to replace them. In 1669, the Jewel House was demolished and the Crown Jewels moved into Martin Tower (until 1841). They were displayed here for viewing by the paying public. This was exploited two years later when Colonel Thomas Blood attempted to steal them. Blood and his accomplices bound and gagged the Jewel House keeper. Although they laid their hands on the Imperial State Crown, Sceptre and Orb, they were foiled when the keeper's son turned up unexpectedly and raised the alarm. Since 1994, the Crown Jewels have been on display in the Jewel House in the Waterloo Block. Some of the pieces are used regularly by the Queen. The display includes 23,578 gemstones, the 800-year-old Coronation Spoon, St. Edward's Crown (worn during all crownings at Westminster Abbey) and the Imperial State Crown. There is evidence that King John (1166–1216) first started keeping wild animals at the Tower. Records of 1210–1212 show payments to lion keepers. The Royal Menagerie is frequently referenced during the reign of Henry III. Holy Roman Emperor Frederick II presented Henry with three leopards, circa 1235, which were kept in the Tower. In 1252, the sheriffs were ordered to pay fourpence a day towards the upkeep of the King's polar bear, a gift from Haakon IV of Norway in the same year; the bear attracted a great deal of attention from Londoners when it went fishing in the Thames while tied to the land by a chain. In 1254 or 1255, Henry III received an African elephant from Louis IX of France depicted by Matthew Paris in his "Chronica Majora". A wooden structure was built to house the elephant, 12.2 m (40 ft) long by 6.1 m (20 ft) wide. The animal died in 1258, possibly because it was given red wine, but also perhaps because of the cold climate of England. In 1288, Edward I added a lion and a lynx and appointed the first official Keeper of the animals. Edward III added other types of animals, two lions, a leopard and two wildcats. Under subsequent kings, the number of animals grew to include additional cats of various types, jackals, hyenas, and an old brown bear, Max, gifted to Henry VIII by Emperor Maximilian. In 1436, during the time of Henry VI, all the lions died and the employment of Keeper William Kerby was terminated. Historical records indicate that a semi-circular structure or barbican was built by Edward I in 1277; this area was later named the Lion Tower, to the immediate west of the Middle Tower. Records from 1335 indicate the purchase of a lock and key for the lions and leopards, also suggesting they were located near the western entrance of the Tower. By the 1500s that area was called the Menagerie. Between 1604 and 1606 the Menagerie was extensively refurbished and an exercise yard was created in the moat area beside the Lion Tower. An overhead platform was added for viewing of the lions by the royals, during lion baiting, for example in the time of James I. Reports from 1657 include mention of six lions, increasing to 11 by 1708, in addition to other types of cats, eagles, owls and a jackal. By the 18th century, the menagerie was open to the public; admission cost three half-pence or the supply of a cat or dog to be fed to the lions. By the end of the century, that had increased to 9 pence. A particularly famous inhabitant was Old Martin, a large grizzly bear given to George III by the Hudson's Bay Company in 1811. An 1800 inventory also listed a tiger, leopards, a hyena, a large baboon, various types of monkeys, wolves and "other animals". By 1822, however, the collection included only a grizzly bear, an elephant and some birds. Additional animals were then introduced. In 1828 there were over 280 representing at least 60 species as the new keeper Alfred Copps was actively acquiring animals. After the death of George IV in 1830, a decision was made to close down the Menagerie on the orders of the Duke of Wellington. In 1831, most of the stock was moved to the London Zoo which had opened in 1828. This decision was made after an incident, although sources vary as to the specifics: either a lion was accused of biting a soldier, or a sailor, Ensign Seymour, had been bitten by a monkey. The last of the animals left in 1835, relocated to Regent's Park. The Menagerie buildings were removed in 1852 but the Keeper of the Royal Menagerie was entitled to use the Lion Tower as a house for life. Consequently, even though the animals had long since left the building, the tower was not demolished until the death of Copps, the last keeper, in 1853. In 1999, physical evidence of lion cages was found, one being 2x3 metres (6.5x10 feet) in size, very small for a lion that can grow to be 2.5 meters (approximately 8 feet) long. In 2008, the skulls of two male Barbary lions (now extinct in the wild) from northwest Africa were found in the moat area of the Tower. Radiocarbon tests dated them from 1280–1385 and 1420–1480. In 2011, an exhibition was hosted at the Tower with fine wire sculptures by Kendra Haste. Anne Boleyn was beheaded in 1536 for treason against Henry VIII; her ghost supposedly haunts the Church of St Peter ad Vincula in the Tower, where she is buried, and has been said to walk around the White Tower carrying her head under her arm. This haunting is commemorated in the 1934 comic song "With Her Head Tucked Underneath Her Arm". Other reported ghosts include Henry VI, Lady Jane Grey, Margaret Pole, and the Princes in the Tower. In January 1816, a sentry on guard outside the Jewel House claimed to have witnessed an apparition of a bear advancing towards him, and reportedly died of fright a few days later. In October 1817, a tubular, glowing apparition was claimed to have been seen in the Jewel House by the Keeper of the Crown Jewels, Edmund Lenthal Swifte. He said that the apparition hovered over the shoulder of his wife, leading her to exclaim: "Oh, Christ! It has seized me!" Other nameless and formless terrors have been reported, more recently, by night staff at the Tower.
https://en.wikipedia.org/wiki?curid=31165
Thar Desert The Thar Desert, also known as the Great Indian Desert, is a large arid region in the northwestern part of the Indian subcontinent that covers an area of and forms a natural boundary between India and Pakistan. It is the world's 17th largest desert, and the world's 9th largest subtropical desert. About 85% of the Thar Desert is located within India, with the remaining 15% in Pakistan. In India, it covers about , and the remaining of the desert is within Pakistan. The Thar desert forms approximately 5% (c. 4.56%) of the total geographic area of India. More than 60% of the desert lies in the Indian state of Rajasthan, and it extends into the states of Gujarat, Punjab, and Haryana, and the Pakistani province of Sindh. Within Pakistan's Punjab province, the Thar continues as the Cholistan Desert. The desert comprises a very dry part, the Marusthali region in the west, and a semidesert region in the east with fewer sand dunes and slightly more precipitation. The Thar Desert extends between the Aravalli Hills in the north-east, the Great Rann of Kutch along the coast and the alluvial plains of the Indus River in the west and north-west. Most of the desert area is covered by huge shifting sand dunes that receive sediments from the alluvial plains and the coast. The sand is highly mobile due to strong winds occurring before the onset of the monsoon. The Luni River is the only river integrated into the desert. Rainfall is limited to per year, mostly falling from July to September. Salt water lakes within the Thar Desert include the Sambhar, Kuchaman, Didwana, Pachpadra and Phalodi in Rajasthan and Kharaghoda in Gujarat. These lakes receive and collect rain water during monsoon and evaporate during the dry season. The salt is derived by the weathering of rocks in the region. Lithic tools belonging to the prehistoric Aterian culture of the Maghreb have been discovered in Middle Paleolithic deposits in the Thar Desert. The soil of the Thar Desert remains dry for much of the year and is prone to wind erosion. High velocity winds blow soil from the desert, depositing some on neighboring fertile lands, and causing shifting sand dunes within the desert. Sand dunes are stabilised by erecting micro-windbreak barriers with scrub material and subsequent afforestation of the treated dunes with seedlings of shrubs such as phog, senna, castor oil plant and trees such as gum acacia, "Prosopis juliflora" and lebbek tree. The long Indira Gandhi Canal brings fresh water to the Thar Desert. It was conceived to halt spreading of the desert to fertile areas. There are few local tree species suitable for planting in the desert, which are slow growing. Therefore, exotic tree species were introduced for plantation. Many species of "Eucalyptus", "Acacia", "Cassia" and other genera from Israel, Australia, US, Russia, Zimbabwe, Chile, Peru and Sudan have been tried in Thar Desert. "Acacia tortilis" has proved to be the most promising species for desert afforestation and the jojoba is another promising species of economic value found suitable for planting in these areas. There are several protected areas in the Thar Desert. Stretches of sand in the desert are interspersed by hillocks and sandy and gravel plains. Due to the diversified habitat and ecosystem, the vegetation, human culture and animal life in this arid region is very rich in contrast to the other deserts of the world. About 23 species of lizard and 25 species of snakes are found here and several of them are endemic to the region. Some wildlife species, which are fast vanishing in other parts of India, are found in the desert in large numbers such as the blackbuck ("Antilope cervicapra"), chinkara ("Gazella bennettii") and Indian wild ass ("Equus hemionus khur") in the Rann of Kutch. They have evolved excellent survival strategies, their size is smaller than other similar animals living in different conditions, and they are mainly nocturnal. There are certain other factors responsible for the survival of these animals in the desert. Due to the lack of water in this region, transformation of the grasslands into cropland has been very slow. The protection provided to them by a local community, the Bishnois, is also a factor. Other mammals of the Thar Desert include a subspecies of red fox ("Vulpes vulpes pusilla") and the caracal. The region is a haven for 141 species of migratory and resident birds of the desert. One can see eagles, harriers, falcons, buzzards, kestrel and vultures. There are short-toed eagles ("Circaetus gallicus"), tawny eagles ("Aquila rapax"), greater spotted eagles ("Aquila clanga"), laggar falcons ("Falco jugger") and kestrels. There are also a number of reptiles. The Indian peafowl is a resident breeder in the Thar region. The peacock is designated as the national bird of India and the provincial bird of the Punjab (Pakistan). It can be seen sitting on khejri or pipal trees in villages or Deblina. The natural vegetation of this dry area is classed as Northwestern thorn scrub forest occurring in small clumps scattered more or less openly. Density and size of patches increase from west to east following the increase in rainfall. The natural vegetation of the Thar Desert is composed of the following tree, shrub and herb species: The endemic floral species include "Calligonum polygonoides", "Prosopis cineraria", "Acacia nilotica", "Tamarix aphylla", "Cenchrus biflorus". The Thar Desert is the most widely populated desert in the world, with a population density of 83 people per km2. In India, the inhabitants comprise Hindus, Jains, Sikhs and Muslims. In Pakistan, inhabitants also include both Muslims and Hindus. About 40% of the total population of Rajasthan live in the Thar Desert. The main occupation of the people is agriculture and animal husbandry. A colourful culture rich in tradition prevails in this desert. The people have a great passion for folk music and folk poetry. Jodhpur, the largest city in the region, lies in the scrub forest zone. Bikaner and Jaisalmer are located in the desert proper. A large irrigation and power project has reclaimed areas of the northern and western desert for agriculture. The small population is mostly pastoral, and hide and wool industries are prominent. The desert's part in Pakistan also has a rich multifaceted culture, heritage, traditions, folk tales, dances and music due to its inhabitants who belong to different religions, sects and castes. In the years 1965 and 1971, population exchanges took place in the Thar between India and Pakistan. 3,500 Muslims shifted from the Indian section of the Thar to Pakistani Thar whilst thousands of Hindu families also migrated from Pakistani Thar to the Indian section of the Thar. The Sarasvati River is one of the chief Rigvedic rivers mentioned in ancient Hindu texts. The Nadistuti hymn in the Rigveda mentions the Sarasvati between the Yamuna in the east and the Sutlej in the west, and later Vedic texts like Tandya and Jaiminiya Brahmanas as well as the Mahabharata mention that the Sarasvati dried up in a desert. Most scholars agree that at least some of the references to the Sarasvati in the Rigveda refer to the Ghaggar-Hakra River. There is also a small present-day Sarasvati River ("Sarsuti") that joins the Ghaggar. The epic "Mahabharata" mentions the Kamyaka Forest situated on the western boundary of the Kuru Kingdom (Kuru Proper and Kurujangala), on the banks of the Sarasvati River to the west of the Kurukshetra plain, which contained a lake known as Kamyaka. The Kamyaka forest is mentioned as being situated at the head of the Thar Desert, near Lake Trinavindu. The Pandavas, on their way to exile in the woods, left Pramanakoti on the banks of the Ganges and went towards Kurukshetra, travelling in a western direction and crossing the Yamuna and Drishadvati rivers. They finally reached the banks of the Sarasvati River where they saw the forest of Kamyaka, the favourite haunt of ascetics, situated on a level and wild plain on the banks of the Sarasvati abounding in birds and deer. There the Pandavas lived in an ascetic asylum. It took three days for the Pandavas to reach the Kamyaka forest, setting out from Hastinapura, on their chariots. In the Rigveda there is also mention of a river named "Aśvanvatī" along with the river Drishadvati. Some scholars consider both the Sarasvati and Aśvanvatī to be the same river. Human habitations on the banks of Sarasvati and Drishadvati had shifted to the east and south directions prior to the "Mahabharata" period. At that time the present day Bikaner and Jodhpur areas were known as Kurujangala and Madrajangala provinces. The Desert National Park in Jaisalmer district has a collection of animal fossils and plants 180 million years old. The Thar is one of the most heavily populated desert areas in the world with the main occupations of its inhabitants being agriculture and animal husbandry. Agriculture is not a dependable proposition in this area because after the rainy season, at least one third of crops fail. Animal husbandry, trees and grasses, intercropped with vegetables or fruit trees, is the most viable model for arid, drought-prone regions. The region faces frequent droughts. Overgrazing due to high animal populations, wind and water erosion, mining and other industries have resulted in serious land degradation. Agricultural production is mainly from kharif crops, which are grown in the summer season and seeded in June and July. These are then harvested in September and October and include bajra, pulses such as guar, jowar ("Sorghum vulgare"), maize ("zea mays"), sesame and groundnuts. Over the past few decades the development of irrigation features including canals and tube wells have changed the crop pattern with desert districts in Rajasthan now producing rabi crops including wheat, mustard and cumin seed along with cash crops. The Thar region of Rajasthan is a major opium production and consumption area. The Indira Gandhi Canal irrigates northwestern Rajasthan while the Government of India has started a centrally sponsored Desert Development Program based on watershed management with the objective of preventing the spread of desert and improving the living conditions of people in the desert. In the last 15–20 years, the Rajasthan desert has seen many changes, including a manifold increase of both the human and animal population. Animal husbandry has become popular due to the difficult farming conditions. At present, there are ten times more animals per person in Rajasthan than the national average, and overgrazing is also a factor affecting climatic and drought conditions. A large number of farmers in the Thar Desert depend on animal husbandry for their livelihood. Cows, buffalo, sheep, goats, camels, and oxen consists of major cattle population. Barmer district has the highest cattle population out of which sheep and goats are in majority. Some of the best breeds of bullocks such as Kankrej (Sanchori) and Nagauri are from the desert region. Thar region of Rajasthan is the biggest wool-producing area in India. Chokla, Marwari, Jaisalmeri, Magra, Malpuri, Sonadi, Nali and Pungal breeds of sheep are found in the region. Of the total wool production in India, 40-50% comes from Rajasthan. The sheep-wool from Rajasthan is considered to be the best in the world for the carpet making industry. The wool of the Chokla breed of sheep is considered to be of high quality. Breeding centres have been developed for Karakul and Merino sheep at Suratgarh, Jaitsar and Bikaner. Some important mills for making woolen thread established in the desert are: Jodhpur Woolen Mill, Jodhpur; Rajasthan Woolen Mill, Bikaner and India Woolen Mill, Bikaner. Bikaner is the biggest "mandi" (market place) of wool in Asia. The live stock depends for grazing on common lands in villages. During famine years the nomadic Rebari people move with large herds of sheep and camel to the forested areas of south Rajasthan or nearby states like Madhya Pradesh for grazing their cattle. The importance of animal husbandry can be understood from the organization of large number of cattle fairs in the region. Cattle fairs are normally named after the folk-deities. Some of major cattle fairs held are Ramdevji cattle fair at Manasar in Nagaur district, Tejaji cattle fair at Parbatsar in Nagaur district, Baldeo cattle fair at Merta city in Nagaur district, Mallinath cattle fair at Tilwara in Barmer district. Livestock is very important to the Thar Desert people. Forestry has an important part to play in the amelioration of the conditions in semi-arid and arid lands. If properly planned, forestry can make an important contribution to the general welfare of the people living in desert areas. The living standard of the people in the desert is low. They can not afford other fuels like gas and kerosene. Firewood is their main fuel, of the total consumption of wood about 75 percent is firewood. The forest cover in the desert is low. Rajasthan has a forest area of 31150 km2. which is about 9% of the geographical area. The forest area is mainly in southern districts of Rajasthan like Udaipur and Chittorgarh. The minimum forest area is in Churu district at only 80 km2. Thus the forest is insufficient to fulfill the needs of firewood and grazing in desert districts. This diverts the much needed cattle dung from the field to the hearth. This in turn results into a decrease in agricultural production. Agroforestry model is best suited to the people of desert. The scientists of the Central Arid Zone Research Institute (CAZRI), have successfully developed and improved dozens of traditional and non-traditional crops and fruits, such as Ber trees (like plums) that produce much larger fruits than before and can thrive with minimal rainfall. These trees have become a profitable option for farmers. One example from a case study of horticulture showed that in a situation of budding in 35 plants of Ber and Guar (Gola, Seb and Mundia variety developed by CAZRI), using only one hectare of land, yielded 10,000 kg. of Ber and 250 kg. of Guar, which translates into double or even triple the profit. Arid Forest Research Institute, (AFRI) situated at Jodhpur is another national level institute in the region. It is one of the institutes of the Indian Council of Forestry Research and Education (ICFRE) working under the Indian Ministry of Environment & Forests. The objective of the Institute is to carry out scientific research in forestry in order to provide technologies that increase the vegetative cover and to conserve the biodiversity in the hot arid and semi arid region of Rajasthan, Gujarat and Dadara and Nagar Haveli union territory. The most important tree species in terms of providing a livelihood in Thar Desert communities is "Prosopis cineraria". "Prosopis cineraria" provides wood of construction grade. It is used for house-building, chiefly as rafters, posts scantlings, doors and windows, and for well construction water pipes, the upright posts of Persian wheels, agricultural implements and shafts, spokes, fellows and cart yokes. It can also be used for small turning work and tool-handles. Container manufacturing is another important wood-based industry, which depends heavily on desert-grown trees. "Prosopis cineraria" is much valued as a fodder tree. The trees are heavily lopped particularly during winter months when no other green fodder is available in the dry tracts. There is a popular saying that death will not visit a man, even at the time of a famine, if he has a "Prosopis cineraria", a goat and a camel, since the three together are some what said to sustain a man even under the most trying condition. The forage yield per tree varies a great deal. On an average, the yield of green forage from a full grown tree is expected to be about 60 kg with complete lopping having only the central leading shoot, 30 kg when the lower two third crown is lopped and 20 kg when the lower one third crown is lopped. The leaves are of high nutritive value. Feeding of the leaves during winter when no other green fodder is generally available in rain-fed areas is thus profitable. The pods have a sweetish pulp and are also used as fodder for livestock. "Prosopis cineraria" is most important top feed species providing nutritious and highly palatable green as well as dry fodder, which is readily eaten by camels, cattle, sheep and goats, constituting a major feed requirement of desert livestock. Locally it is called "Loong". Pods are locally called "sangar" or "sangri". The dried pods locally called "kho-kha" are eaten. Dried pods also form rich animal feed, which is liked by all livestock. Green pods also form rich animal feed, which is liked by drying the young boiled pods. They are also used as famine food and known even to prehistoric man. Even the bark, having an astringent bitter taste, was reportedly eaten during the severe famine of 1899 and 1939. Pod yield is nearly 1.4 quintals of pods/ha with a variation of 10.7% in dry locations. "Prosopis cineraria" wood is reported to contain high calorific value and provide high quality fuel wood. The lopped branches are good as fencing material. Its roots also encourage "nitrogen" fixation, which produces higher crop yields. "Tecomella undulata" is one more tree species, locally known as Rohida, which is found in the Thar Desert regions of northwest and western India. It is another important medium-sized tree of great use in agroforestry, that produces quality timber and is the main source of timber amongst the indigenous tree species of desert regions. The trade name of the tree species is Desert teak or Marwar teak. "Tecomella undulata" is mainly used as a source of timber. Its wood is strong, tough and durable. It takes a fine finish. Heartwood contains quinoid. The wood is excellent for firewood and charcoal. Cattle and goats eat leaves of the tree. Camels, goats and sheep consume flowers and pods. "Tecomella undulata" plays an important role in the desert ecology. It acts as a soil-binding tree by spreading a network of lateral roots on the top surface of the soil. It also acts as a windbreak and helps in stabilizing shifting sand dunes. It is considered as the home of birds and provides shelter for other desert wildlife. Shade of tree crown is shelter for the cattle, goats and sheep during summer days. "Tecomella undulata" has medicinal properties as well. The bark obtained from the stem is used as a remedy for syphilis. It is also used in curing urinary disorders, enlargement of spleen, gonorrhoea, leucoderma and liver diseases. Seeds are used against abscess. Desert safaris on camels have become increasingly popular around Jaisalmer. Domestic and international tourists frequent the desert seeking adventure on camels for anything from a day to several days. This ecotourism industry ranges from cheaper backpacker treks to plush Arabian night style campsites replete with banquets and cultural performances. During the treks tourists are able to view the fragile and beautiful ecosystem of the Thar Desert. This form of tourism provides income to many operators and camel owners in Jaisalmer as well as employment for many camel trekkers in the desert villages nearby. People from various parts of the world come to see the Pushkar ka Mela (Pushkar Fair) and oases. Rajasthan is pre-eminent in quarrying and mining in India. The Taj Mahal was built with white marble mined from Makrana in Nagaur district. The state is the second largest source of cement in India. It has rich salt deposits at Sambhar. Jodhpur sandstone is mostly used in monuments, important buildings, and residential buildings. This stone is termed "chittar patthar". Jodhpur also has mines of red stone locally known as "ghatu patthar" used in construction. Sandstone is found in Jodhpur and Naguar districts. Jalore is biggest centre of granite processing units. Lignite coal deposits are there at places Giral, Kapuradi, Jalipa, Bhadka in Barmer district; Plana, Gudha, Bithnok, Barsinghpur, Mandla Charan, Raneri Hadla in Bikaner district and Kasnau, Merta, Lunsar etc., in Nagaur district. A lignite based thermal power plant has been established at Giral in Barmer district. Jindal group is working on 1080 Megawatt power project in private sector at Bhadaresh village in Barmer district. "Neyeli Lignite Barsinghpur Project" is in progress to establish two thermal power units of capacity 125 megawatts each at Barsinghpur in Bikaner district. Reliance Energy is working on establishing power generation through an underground gasification technique in Barmer district with an outlay of about 30 billion rupees. There is a large amount of good quality petroleum in Jaisalmer and Barmer districts. The main places with deposits of petroleum are Baghewal, Kalrewal, and Tawariwal in Jaisalmer district and Gudha Malani area in Barmer district. Barmer district has started petroleum production on commercial scale. Barmer district is in the news due to its large oil basin. The British exploration company Cairn Energy started production of oil on a large scale. Mangala, Bhagyam and Aishwariya are the major oil fields in the district. This is India's biggest oil discovery in 22 years. This promises to transform the local economy, which has long suffered from the harshness of the desert. The Government of India initiated departmental exploration for oil in 1955-56 in the Jaisalmer area, Oil India Limited discovered natural gas in 1988 in the Jaisalmer basin. Also known for their fine leather messenger bags made from wild camels native to the area. The Thar desert seems an ideal place for generation of electricity from wind power. According to an estimate Rajasthan state has got a potential of 5500 Megawatt wind power generation as such it is in the priority of the state govt. Rajasthan State Power Corporation has established its first wind power-based power plant at Amarsagar in Jaisalmer district. Some leading companies in the field are working on establishing wind mills in Barmer, Jaisalmer and Bikaner districts. Solar energy also has a great potential in this region as most of the days during a year are cloud free. A solar energy based plant has been established at Bhaleri in Churu district to convert hard water into drinking water. There are a number of salt water lakes in the Thar Desert. These are Sambhar, Pachpadra, Tal Chhapar, Falaudi and Lunkaransar where sodium chloride salt is produced from salt water. The Didwana lake produces sodium sulphate salt. Ancient archaeological evidences of habitations have been recovered from Sambhar and Didwana lakes which shows their antiquity and historical importance. Water scarcity plays an important role in shaping life in all parts of the Thar. Small, intermittent ponds, whether natural (tobas) or man-made (johads), are often the only source of water for animals and humans in the true desert areas. The lack of a constant water supply causes much of the local population to live as nomads. Most human settlements are found near the two seasonal streams of the Karon-Jhar hills. Potable groundwater is also rare in the Thar Desert. Supplies are often sour due to dissolved minerals, and are only available deep underground. Wells that successfully bear sweet water attract nearby settlement, but are difficult to dig, possibly claiming the lives of the well-diggers. According to 1980 housing census in Pakistan, there were 241,326 housing units of one or two very small rooms. The degree of crowding was six persons per housing unit and three persons per room. For most of the housing units (approximately 76 per cent), the main construction material of outer walls is unbaked bricks whereas wood is used in 10 percent and baked bricks or stones with mud bonding in 8 percent housing units. A large number of families still live in jhugis or huts which are housing units formed with straws and thin wood-sticks. These jhugis are susceptible to damage from the occasional high winds. But the poverty leaves no other option to these jhugiwalas (people living in jhugis). The river Luni is the only natural water source that drains inside a lake in the desert. It originates in the Pushkar valley of the Aravalli Range, near Ajmer and ends in the marshy lands of Rann of Kutch in Gujarat, after travelling a distance of 530 km. The Luni flows through part of Ajmer, Barmer, Jalor, Jodhpur, Nagaur, Pali, and Sirohi districts and Mithavirana Vav Radhanpur region of Banaskantha North Gujarat. Its major tributaries are the Sukri, Mithri, Bandi, Khari, Jawai, Guhiya and Sagi from the left, and the Jojari River from the right. The Ghaggar is another intermittent river in India, flowing during the monsoon rains. It originates in the Shivalik Hills of Himachal Pradesh and flows through Punjab and Haryana to Rajasthan; just southwest of Sirsa, Haryana and by the side of "talwara jheel" in Rajasthan. This seasonal river feeds two irrigation canals that extend into Rajasthan. It terminates in Hanumangarh district. The Rajasthan Canal system is the major irrigation scheme of the Thar Desert and is conceived to reclaim it and also to check spreading of the desert to fertile areas. It is world's largest irrigation which is being extended in an attempt to make the desert arable. It runs south-southwest in Punjab and Haryana but mainly in Rajasthan for a total of 650 kilometers and ends near Jaisalmer, in Rajasthan. After the construction of the Indira Gandhi Canal, irrigation facilities were available over an area of 6770 km² in Jaisalmer district and 37 km² in Barmer district. Irrigation had already been provided in an area of 3670 km² in Jaisalmer district. The canal has transformed the barren deserts of this district into fertile fields. Crops of mustard, cotton, and wheat now flourish in this semi-arid western region replacing the sand there previously. Besides providing water for agriculture, the canal will supply drinking water to hundreds of people in far-flung areas. As the second stage of work on the canal progresses rapidly, there is hope that it will enhance the living standards of the people of the state. The Thar Desert provides recreational value in terms of desert festivals organized every year. Rajasthan desert festivals are celebrated with great zest and zeal. This festival is held once a year during winters. Dressed in brilliantly hued costumes, the people of the desert dance and sing haunting ballads of valor, romance and tragedy. The fair has snake charmers, puppeteers, acrobats and folk performers. Camels, of course, play a starring role in this festival, where the rich and colorful folk culture of Rajasthan can be seen. Camels are an integral part of the desert life and the camel events during the Desert Festival confirm this fact. Special efforts go into dressing the animal for entering the competition of the best-dressed camel. Other interesting competitions on the fringes are the moustache and turban tying competitions, which not only demonstrate tradition but also inspire its preservation. Both the turban and the moustache have been centuries old symbols of honor in Rajasthan. Evenings are meant for the main shows of music and dance. Continuing till late into the night, the number of spectators swells up each night and the grand finale, on the night of a full moon, takes place by sand dunes.
https://en.wikipedia.org/wiki?curid=31174
Tobin tax A Tobin tax was originally defined as a tax on all spot conversions of one currency into another. It was suggested by James Tobin, an economist who won the Nobel Memorial Prize in Economic Sciences. Tobin's tax was originally intended to penalize short-term financial round-trip excursions into another currency. By the late 1990s, the term Tobin tax was being incorrectly used to apply to all forms of short term transaction taxation, whether across currencies or not. Another term for these broader tax schemes is Robin Hood tax, due to tax revenues from the (presumably richer) speculator funding general revenue (of whom the primary beneficiaries are less wealthy). More exact terms, however, apply to different scopes of tax. Tobin suggested his currency transaction tax in 1972 in his Janeway Lectures at Princeton, shortly after the Bretton Woods system of monetary management ended in 1971. Prior to 1971, one of the chief features of the Bretton Woods system was an obligation for each country to adopt a monetary policy that maintained the exchange rate of its currency within a fixed value—plus or minus one percent—in terms of gold. Then, on August 15, 1971, United States President Richard Nixon announced that the United States dollar would no longer be convertible to gold, effectively ending the system. This action created the situation whereby the U.S. dollar became the sole backing of currencies and a reserve currency for the member states of the Bretton Woods system, leading the system to collapse in the face of increasing financial strain in that same year. In that context, Tobin suggested a new system for international currency stability, and proposed that such a system include an international charge on foreign-exchange transactions. In 2001, in another context, just after "the nineties' crises in Mexico, Southeast Asia and Russia," which included the 1994 economic crisis in Mexico, the 1997 Asian Financial Crisis, and the 1998 Russian financial crisis, Tobin summarized his idea: Tobin: Ich hatte vorgeschlagen, die Einnahmen der Weltbank zur Verfügung zu stellen. Aber darum ging es mir gar nicht. Die Devisenumsatzsteuer war dafür gedacht, Wechselkursschwankungen einzudämmen. Die Idee ist ganz simpel: Bei jedem Umtausch von einer Währung in die andere würde eine kleine Steuer fällig, sagen wir von einem halben Prozent des Umsatzes. So schreckt man Spekulanten ab. Denn viele Investoren legen ihr Geld sehr kurzfristig in Währungen an. Wird dieses Geld plötzlich zurückgezogen, müssen die Länder die Zinsen drastisch anheben, damit die Währung attraktiv bleibt. Hohe Zinsen aber sind oft desaströs für die heimische Wirtschaft, wie die Krisen in Mexiko, Südostasien und Russland während der neunziger Jahre gezeigt haben. Meine Steuer würde Notenbanken kleiner Länder Handlungsspielraum zurückgeben und dem Diktat der Finanzmärkte etwas entgegensetzen." - Though James Tobin suggested the rate as "let's say 0.5%", in that interview setting, others have tried to be more precise in their search for the optimum rate. Economic literature of the period 1990s-2000s emphasized that variations in the terms of payment in trade-related transactions (so-called "swaps" for instance) provided a ready means of evading a tax levied on currency only. Accordingly, most debate on the issue has shifted towards a general financial transaction tax which would capture such proxies. Other measures to avoid punishing hedging (a form of insurance for cashflows) were also proposed. By the 2010s the Basel II and Basel III frameworks required reporting that would help to differentiate them. and economic thought was tending to reject the belief that they could not be differentiated, or (as the "Chicago School" had held) should not be. In March 2016 China drafted rules to impose a genuine currency transaction tax and this was referred to in financial press as a Tobin tax . This was widely viewed as a warning to curb shorting of its currency the yuan. It was however expected to keep this tax at 0% initially, calculating potential revenue from different rate schemes and exemptions, and not to impose the actual tax unless speculation increased. Also in 2016 US Democratic Party POTUS nominee Hillary Clinton included in her platform a vow to "Impose a tax on high-frequency trading. The growth of high-frequency trading has unnecessarily placed stress on our markets, created instability, and enabled unfair and abusive trading strategies. Hillary would impose a tax on harmful high-frequency trading and reform rules to make our stock markets fairer, more open, and transparent.". However, the term "high-frequency" implied that only a few large volume transaction players engaged in arbitrage would likely be affected. Clinton referred separately to "Impose a risk fee on the largest financial institutions. Big banks and financial companies would be required to pay a fee based on their size and their risk of contributing to another crisis." The calculations of such fees would necessarily depend on financial risk management criteria (see Basel II and Basel III). Because of its restriction to so-called "harmful high-frequency trading" rather than to inter-currency transactions, neither of Clinton's proposals could be considered a true Tobin tax though international exposure would be a factor in the "risk fee". Critics of all financial transaction taxes and currency transaction taxes emphasize the financial risk management difficulty of differentiating hedging from speculation, and the economic argument (attributed to the "Chicago School") that they cannot in principle be differentiated. However, advocates of such taxes considered these problems manageable, especially in context of broader financial transaction tax. Briefly, the differences are: James Tobin's purpose in developing his idea of a currency transaction tax was to find a way to manage exchange-rate volatility. In his view, "currency exchanges transmit disturbances originating in international financial markets. National economies and national governments are not capable of adjusting to massive movements of funds across the foreign exchanges, without real hardship and without significant sacrifice of the objectives of national economic policy with respect to employment, output, and inflation." Tobin saw two solutions to this issue. The first was to move "toward a common currency, common monetary and fiscal policy, and economic integration." The second was to move "toward greater financial segmentation between nations or currency areas, permitting their central banks and governments greater autonomy in policies tailored to their specific economic institutions and objectives." Tobin's preferred solution was the former one but he did not see this as politically viable so he advocated for the latter approach: "I therefore regretfully recommend the second, and my proposal is to throw some sand in the wheels of our excessively efficient international money markets." Tobin's method of "throwing sand in the wheels" was to suggest a tax on all spot conversions of one currency into another, proportional to the size of the transaction. In the development of his idea, Tobin was influenced by the earlier work of John Maynard Keynes on general financial transaction taxes. Keynes' concept stems from 1936 when he proposed that a transaction tax should be levied on dealings on Wall Street, where he argued that excessive speculation by uninformed financial traders increased volatility. For Keynes (who was himself a speculator) the key issue was the proportion of 'speculators' in the market, and his concern that, if left unchecked, these types of players would become too dominant. The most common variations on Tobin's idea are a general currency transaction tax, a more general financial transaction tax and (the most general) Robin Hood tax on transactions only richer investors can afford to engage in. A key issue with Tobin's tax was "avoidance by change of product mix... market participants would have an incentive to substitute out of financial instruments subject to the tax and into instruments not subject to it. In this fashion, markets would innovate so as to avoid the tax... [so] focusing on just spot currency markets would clearly induce a huge shifting of transactions into futures and derivatives markets. Thus, the real issue is how to design a tax that takes account of all the methods and margins of substitution that investors have for changing their patterns of activity to avoid the tax. Taking account of these considerations implies a Tobin tax that is bigger in scope, and pushes the design toward a generalized securities transaction tax that resembles the tax suggested by Pollin et al. (1999). There are four benefits to this. First, it is likely to generate significantly greater revenues. Second, it maintains a level playing field across financial markets so that no individual financial instrument is arbitrarily put at a competitive disadvantage versus another. Third, it is likely to enhance domestic financial market stability by discouraging domestic asset speculation. Fourth, to the extent that advanced economies already put too many real resources into financial dealings, it would cut back on this resource use, freeing these resources for other productive uses [Fourth] such substitution is costly both in resource use, and because alternative instruments do not provide exactly the same services [thus] just as the market provides an incentive to avoid a Tobin tax, so too it automatically sets in motion forces that deter excessive avoidance." - Palley, 2000 Pollin, Palley and Baker (2000) emphasize that transaction taxes "have clearly not prevented the efficient functioning of these markets. " According to Paul Bernd Spahn in 1995, "Analysis has shown that the Tobin tax as originally proposed is not viable and should be laid aside for good." On September 19, 2001, retired speculator George Soros put forward a proposal based on the IMF's existing special drawing rights (SDRs) mechanism. In Soros' scheme, rich countries would pledge SDRs (which are denominated as a basket of multiple 'hard' currencies) for the purpose of providing international assistance. Soros was not necessarily dismissing the Tobin tax idea. He stated, "I think there is a case for a Tobin tax ... (but) it is not at all clear to me that a Tobin tax would reduce volatility in the currency markets. It is true that it may discourage currency speculation but it would also reduce the liquidity of the marketplace." In this Soros appeared to agree with the Chicago School. The term "Tobin tax" has sometimes been used interchangeably with a specific currency transaction tax (CTT) in the manner of Tobin's original idea, and other times it has been used interchangeably with the various different ideas of a more general financial transaction tax (FTT). In both cases, the various ideas proposed have included both national and multinational concepts. Examples of associating Tobin's tax with these: It was originally assumed that the Tobin tax would require multilateral implementation, since one country acting alone would find it very difficult to implement this tax. Many people have therefore argued that it would be best implemented by an international institution. It has been proposed that having the United Nations manage a Tobin tax would solve this problem and would give the UN a large source of funding independent from donations by participating states. However, there have also been initiatives of national dimension about the tax. (This is in addition to the many countries that have foreign exchange controls.) Whilst finding some support in countries with strong left-wing political movements such as France and Latin America, the Tobin tax proposal came under much criticism from economists and governments, especially those with liberal markets and a large international banking sector, who said it would be impossible to implement and would destabilise foreign exchange markets. Most of the actual implementation of Tobin taxes, whether in the form of a specific currency transaction tax, or a more general financial transaction tax, has occurred at a national level. In July, 2006, analyst Marion G. Wrobel examined the international experiences of various countries with financial transaction taxes. The EU financial transaction tax (EU FTT) is a proposal made by the European Commission in September 2011 to introduce a financial transaction tax within the 27 member states of the European Union by 2014. The tax would only impact financial transactions between financial institutions charging 0.1% against the exchange of shares and bonds and 0.01% across derivative contracts. According to the European Commission it could raise €57 billion every year, of which around €10bn (£8.4bn) would go to Great Britain, which hosts Europe's biggest financial center. It is unclear whether a financial transaction tax is compatible with European law. If implemented the tax must be paid in the European country where the financial operator is established. This "R plus I" (residence plus issuance) solution means the EU-FTT would cover all transactions that involve a single European firm, no matter if these transactions are carried out in the EU or elsewhere in the world. The scheme makes it impossible for say French or German banks to avoid the tax by moving their transactions offshore, unless they give up all their European customers. Being faced with stiff resistance from some non-eurozone EU countries, particularly United Kingdom and Sweden, a group of eleven states began pursuing the idea of utilizing enhanced co-operation to implement the tax in states which wish to participate. Opinion polls indicate that 41 percent of the British people are in favour of some forms of FTT (see section: Public opinion). The proposal supported by the eleven EU member states, was approved in the European Parliament in December 2012, and by the Council of the European Union in January 2013. The formal agreement on the details of the EU FTT still need to be decided upon and approved by the European Parliament. Wrobel's paper highlighted the Swedish experience with financial transaction taxes. In January 1984, Sweden introduced a 0.5% tax on the purchase or sale of an equity security. Thus a round trip (purchase and sale) transaction resulted in a 1% tax. In July 1986 the rate was doubled. In January 1989, a considerably lower tax of 0.002% on fixed-income securities was introduced for a security with a maturity of 90 days or less. On a bond with a maturity of five years or more, the tax was 0.003%. The revenues from taxes were disappointing; for example, revenues from the tax on fixed-income securities were initially expected to amount to 1,500 million Swedish kronor per year. They did not amount to more than 80 million Swedish kronor in any year and the average was closer to 50 million. In addition, as taxable trading volumes fell, so did revenues from capital gains taxes, entirely offsetting revenues from the equity transactions tax that had grown to 4,000 million Swedish kronor by 1988. On the day that the tax was announced, share prices fell by 2.2%. But there was leakage of information prior to the announcement, which might explain the 5.35% price decline in the 30 days prior to the announcement. When the tax was doubled, prices again fell by another 1%. These declines were in line with the capitalized value of future tax payments resulting from expected trades. It was further felt that the taxes on fixed-income securities only served to increase the cost of government borrowing, providing another argument against the tax. Even though the tax on fixed-income securities was much lower than that on equities, the impact on market trading was much more dramatic. During the first week of the tax, the volume of bond trading fell by 85%, even though the tax rate on five-year bonds was only 0.003%. The volume of futures trading fell by 98% and the options trading market disappeared. On 15 April 1990, the tax on fixed-income securities was abolished. In January 1991 the rates on the remaining taxes were cut in half and by the end of the year they were abolished completely. Once the taxes were eliminated, trading volumes returned and grew substantially in the 1990s and 2000s. The Swedish experience of a transaction tax was with purchase or sale of equity securities, fixed income securities and derivatives. In global international currency trading, however, the situation could, some argue, look quite different. Wrobel's studies do not address the global economy as a whole, as James Tobin did when he spoke of "the nineties' crises in Mexico, South East Asia and Russia," which included the 1994 economic crisis in Mexico, the 1997 Asian Financial Crisis, and the 1998 Russian financial crisis. An existing example of a Financial Transaction Tax (FTT) is Stamp Duty Reserve Tax (SDRT) and stamp duty. Stamp duty was introduced as an ad valorem tax on share purchases in 1808, preceding by over 150 years the Tobin tax on currency transactions. Changes were made in 1963. In 1963 the rate of the UK Stamp Duty was 2%, subsequently fluctuating between 1% and 2%, until a process of its gradual reduction started in 1984, when the rate was halved, first from 2% to 1%, and then once again in 1986 from 1% to the current level of 0.5%. The changes in Stamp Duty rates in 1974, 1984, and 1986 provided researchers with "natural experiments", allowing them to measure the impact of transaction taxes on market volume, volatility, returns, and valuations of UK companies listed on the London Stock Exchange. Jackson and O'Donnel (1985), using UK quarterly data, found that the 1% cut in the Stamp Duty in April 1984 from 2% to 1% lead to a "dramatic 70% increase in equity turnover". Analyzing all three Stamp Duty rate changes, Saporta and Kan (1997) found that the announcements of tax rate increases (decreases) were followed by negative (positive) returns, but even though these results were statistically significant, they were likely to be influenced by other factors, because the announcements were made on Budget Days. Bond et al. (2005) confirmed the findings of previous studies, noting also that the impact of the announced tax rate cuts was more beneficial (increasing market value more significantly) in case of larger firms, which had higher turnover, and were therefore more affected by the transaction tax than stocks of smaller companies, less frequently traded. Because the UK tax code provides exemptions from the Stamp Duty Reserve Tax for all financial intermediaries, including market makers, investment banks and other members of the LSE, and due to the strong growth of the contract for difference (CFD) industry, which provides UK investors with untaxed substitutes for LSE stocks, according to the Oxera (2007) report, more than 70% percent of the total UK stock market volume, including the entire institutional volume remained (in 2005) exempt from the Stamp Duty, in contrast to the common perception of this tax as a "tax on bank transactions" or a "tax on speculation". On the other hand, as much as 40% of the Stamp Duty revenues come from taxing foreign residents, because the tax is "chargeable whether the transaction takes place in the UK or overseas, and whether either party is resident in the UK or not." In 2005, the Tobin tax was developed into a modern proposal by the United Kingdom NGO Stamp Out Poverty. It simplified the two-tier tax in favour of a mechanism designed solely as a means for raising development revenue. The currency market by this time had grown to $2,000 billion a day. To investigate the feasibility of such a tax they hired the City of London firm Intelligence Capital, who found that a tax on the pound sterling wherever it was traded in the world, as opposed to a tax on all currencies traded in the UK, was indeed feasible and could be unilaterally implemented by the UK government. The Sterling Stamp Duty, as it became known, was to be set at a rate 200 times lower than Tobin had envisaged in 2001, which "pro Tobin tax" supporters claim wouldn't have affected currency markets and could still raise large sums of money. The global currency market grew to $3,200 billion a day in 2007, or £400,000 billion per annum with the trade in sterling, the fourth most traded currency in the world, worth £34,000 billion a year. A sterling stamp duty set at 0.005% as some claim would have raised in the region of £2 billion a year in 2007. The All Party Parliamentary Group for Debt, Aid and Trade published a report in November 2007 into financing for development in which it recommended that the UK government undertake rigorous research into the implementation of a 0.005% stamp duty on all sterling foreign exchange transactions, to provide additional revenue to help bridge the funding gap required to pay for the Millennium Development Goals. In 1996, the United Nations Development Programme sponsored a comprehensive feasibility and cost-benefit study of the Tobin tax: Haq, Mahbub ul; Kaul, Inge; Grunberg, Isabelle (August 1996). The Tobin Tax: Coping with Financial Volatility. Oxford University Press. . In late 2001, a Tobin tax amendment was adopted by the French National Assembly. However, it was overturned by March 2002 by the French Senate. On June 15, 2004, the Commission of Finance and Budget in the Belgian Federal Parliament approved a bill implementing a Spahn tax. According to the legislation, Belgium will introduce the Tobin tax once all countries of the eurozone introduce a similar law. In July 2005 former Austrian chancellor Wolfgang Schüssel called for a European Union Tobin tax to base the communities' financial structure on more stable and independent grounds. However, the proposal was rejected by the European Commission. On November 23, 2009, the President of the European Council, Herman Van Rompuy, after attending a meeting of the Bilderberg Group argued for a European version of the Tobin tax. This tax would go beyond just financial transactions: "all shopping and petrol would be taxed.". Countering him was his sister, Christine Van Rompuy, who said, "any new taxes would directly affect the poor". On June 29, 2011, the European Commission called for Tobin-style taxes on the EU's financial sector to generate direct revenue starting from 2014. At the same time it suggested to reduce existing levies coming from the 27 member states. The first nation in the G20 group to formally accept the Tobin tax was Canada. On March 23, 1999, the House of Commons of Canada passed a resolution directing the government to "enact a tax on financial transactions in concert with the international community." However, ten years later, in November 2009, at the G20 finance ministers summit in Scotland, the representatives of the minority government of Canada spoke publicly on the world stage in opposition to that House of Commons of Canada resolution. In September 2009, French president Nicolas Sarkozy brought up the issue of a Tobin tax once again, suggesting it be adopted by the G20. On November 7, 2009, prime minister Gordon Brown said that G-20 should consider a tax on speculation, although did not specify that it should be on currency trading alone. The BBC reported that there was a negative response to the plan among the G20. By December 11, 2009, European Union leaders expressed broad support for a Tobin tax in a communiqué sent to the International Monetary Fund. For supporters of a Tobin tax, there is a wide range of opinion on who should administer a global Tobin tax and what the revenue should be used for. There are some who think that it should take the form of an insurance: In early November 2009, at the G20 finance ministers summit in Scotland, the British Prime Minister "Mr. Brown and Nicolas Sarkozy, France's president, suggested that revenues from the Tobin tax could be devoted to the world's fight against climate change, especially in developing countries. They suggested that funding could come from "a global financial transactions tax." However British officials later argued the main point of a financial transactions tax would be provide insurance for the global taxpayer against a future banking crisis." John Dillon contends that it is not necessary to have unanimous agreement on the feasibility of an international FTT before moving forward. He proposes that it could be introduced gradually, beginning probably in Europe where support is strongest. The first stage might involve a levy on financial instruments within a few countries. Stephan Schulmeister of the Austrian Institute for Economic Re-search has suggested that initially Britain and Germany could implement a tax on a range of financial instruments since about 97% of all transactions on European Union exchanges occur in these two countries This scenario is possible, given the events in May and June, 2010: On June 28, 2010, the European Union's executive said it will study whether the European Union should go alone in imposing a tax on financial transactions after G20 leaders failed to agree on the issue. The financial transaction tax would be "separate" from a bank levy, or a resolution levy, which some governments are "also" proposing to impose on banks to insure them against the costs of any future bailouts. EU leaders instructed their finance ministers, in May, 2010, to work out by the end of October 2010, details for the banking levy, but any financial transaction tax remains much more controversial. In early November 2007, a regional Tobin tax was adopted by the Bank of the South, after an initiative of Presidents Hugo Chávez from Venezuela and Néstor Kirchner from Argentina. According to Stephen Spratt, "the revenues raised could be used for ... international development objectives ... such as meeting the Millennium Development Goals."(, p. 19) These are eight international development goals that 192 United Nations member states and at least 23 international organizations have agreed (in 2000) to achieve by the year 2015. They include reducing extreme poverty, reducing child mortality rates, fighting disease epidemics such as AIDS, and developing a global partnership for development. At the UN September 2001 World Conference against Racism, when the issue of compensation for colonialism and slavery arose in the agenda, Fidel Castro, the President of Cuba, advocated the Tobin Tax to address that issue. (According to Cliff Kincaid, Castro advocated it "specifically in order to generate U.S. financial reparations to the rest of the world," however a closer reading of Castro's speech shows that he never did mention "the rest of the world" as being recipients of revenue.) Castro cited Holocaust reparations as a previously established precedent for the concept of reparations. Tobin's more specific concept of a "currency transaction tax" from 1972 lay dormant for more than 20 years but was revived by the advent of the 1997 Asian Financial Crisis. In December, 1997 Ignacio Ramonet, editor of "Le Monde Diplomatique", renewed the debate around the Tobin tax with an editorial titled "Disarming the markets". Ramonet proposed to create an association for the introduction of this tax, which was named ATTAC (Association for the Taxation of financial Transactions for the Aid of Citizens). The tax then became an issue of the global justice movement or alter-globalization movement and a matter of discussion not only in academic institutions but even in streets and in parliaments in the UK, France, and around the world. In an interview given to the Italian independent radio network Radio Popolare in July 2001 James Tobin distanced himself from the global justice movement. «"There are agencies and groups in Europe that have used the Tobin Tax as an issue of broader campaigns, for reasons that go far beyond my proposal. My proposal was made into a sort of milestone for an antiglobalization program"». James Tobin's interview with Radio Popolare was quoted by the Italian foreign minister at the time, former director-general of the World Trade Organization Renato Ruggiero, during a Parliamentary debate on the eve of the G8 2001 summit in Genoa. Afterwards James Tobin distanced himself from the global justice movement. Tobin observed that, while his original proposal had only the goal of "putting a brake on the foreign exchange trafficking", the antiglobalization movement had stressed "the income from the taxes with which they want to finance their projects to improve the world". He declared himself not contrary to this use of the tax's income, but stressed that it was not the important aspect of the tax. ATTAC and other organizations have recognized that while they still consider Tobin's original aim as paramount, they think the tax could produce funds for development needs in the South (such as the Millennium Development Goals), and allow governments, and therefore citizens, to reclaim part of the democratic space conceded to the financial markets. In March, 2002, London School of Economics Professor Willem Buiter, who studied under James Tobin, wrote an obituary for the man, but also remarked that, "This [Tobin Tax] ... was in recent years adopted by some of the most determined enemies of trade liberalisation, globalisation and the open society." Buiter added, "The proposal to use the Tobin tax as a means of raising revenues for development assistance was rejected by Tobin, and he forcefully repudiated the anti-globalisation mantra of the Seattle crowd." In September 2009, Buiter also wrote in the "Financial Times", "Tobin was a genius ... but the Tobin tax was probably his one daft idea". In those same "years" that Buiter spoke of, the Tobin tax was also "adopted" or supported in varying degrees by the people who were not, as he put it, "enemies of trade liberalisation." Among them were several supporters from 1990 to 1999, including Larry Summers and several from 2000 to 2004, including lukewarm support from George Soros. In 1972, Tobin examined the global monetary system that remained after the Bretton Woods monetary system was abandoned. This examination was subsequently revisited by other analysts, such as Ellen Frank, who, in 2002 wrote: "If by globalization we mean the determined efforts of international businesses to build markets and production networks that are truly global in scope, then the current monetary system is in many ways an endless headache whose costs are rapidly outstripping its benefits." She continues with a view on how that monetary system stability is appealing to many players in the world economy, but is being undermined by volatility and fluctuation in exchange rates: "Money scrambles around the globe in quest of the banker's holy grail – sound money of stable value – while undermining every attempt by cash-strapped governments to provide the very stability the wealthy crave." Frank then corroborates Tobin's comments on the problems this instability can create (e.g. high interest rates) for developing countries such as Mexico (1994), countries in South East Asia (1997), and Russia (1998). She writes, "Governments of developing countries try to peg their currencies, only to have the peg undone by capital flight. They offer to dollarize or euroize, only to find themselves so short of dollars that they are forced to cut off growth. They raise interest rates to extraordinary levels to protect investors against currency losses, only to topple their economies and the source of investor profits. ... IMF bailouts provide a brief respite for international investors but they are, even from the perspective of the wealthy, a short-term solution at best ... they leave countries with more debt and fewer options." One of the main economic hypotheses raised in favor of financial transaction taxes is that such taxes reduce return volatility, leading to an increase of long-term investor utility or more predictable levels of exchange rates. The impact of such a tax on volatility is of particular concern because the main justification given for this tax by Tobin was to improve the autonomy of macroeconomic policy by curbing international currency speculation and its destabilizing effect on national exchange rates. Most studies of the likely impact of the Tobin tax on financial markets volatility have been "theoretical"—researches conducted laboratory simulations or constructed economic models. Some of these theoretical studies have concluded that a transaction tax could reduce volatility by crowding out speculators or eliminating individual 'noise traders' but that it 'would not have any impact on volatility in case of sufficiently deep global markets such as those in major currency pairs, unlike in case of less liquid markets, such as those in stocks and (especially) options, where volatility would probably increase with reduced volumes. Behavioral finance theoretical models, such as those developed by Wei and Kim (1997) or Westerhoff and Dieci (2006) suggest that transaction taxes can reduce volatility, at least in the foreign exchange market. In contrast, some papers find a positive effect of a transaction tax on market volatility. Lanne and Vesala (2006) argue that a transaction tax "is likely to amplify, not dampen, volatility in foreign exchange markets", because such tax penalises informed market participants disproportionately more than uninformed ones, leading to volatility increases. In most of the available "empirical" studies however, no statistically significant causal link has been found between an increase in transaction costs (transaction taxes or government-controlled minimum brokerage commissions) and a reduction in volatility—in fact a frequent unintended consequence observed by 'early adopters' after the imposition of a financial transactions tax (see Werner, 2003) has been an "increase in the volatility" of stock market returns, usually coinciding with significant declines in liquidity (market volume) and thus in taxable revenue (Umlauf, 1993). For a recent evidence to the contrary, see, e.g., Liu and Zhu (2009), which may be affected by selection bias given that their Japanese sample is subsumed by a research conducted in 14 Asian countries by Hu (1998), showing that "an increase in tax rate reduces the stock price but has no significant effect on market volatility". As Liu and Zhu (2009) point out, [...] the different experience in Japan highlights the comment made by Umlauf (1993) that it is hazardous to generalize limited evidence when debating important policy issues such as the STT [securities transaction tax] and brokerage commissions." When James Tobin was interviewed by "Der Spiegel" in 2001, the tax rate he suggested was 0.5%. His use of the phrase "let's say" ("sagen wir") indicated that he was not, at that point, in an interview setting, trying to be precise. Others have tried to be more precise or practical in their search for the Tobin tax rate. According to Garber (1996), competitive pressure on transaction costs (spreads) in currency markets has reduced these costs to fractions of a basis point. For example, the EUR.USD currency pair trades with spreads as tight as 1/10 of a basis point, i.e. with just a 0.00001 difference between the bid and offer price, so "a tax on transactions in foreign exchange markets imposed unilaterally, 6/1000 of a basis point (or 0.00006%) is a realistic maximum magnitude." Similarly Shvedov (2004) concludes that "even making the unrealistic assumption that the rate of 0.00006% causes no reduction of trading volume, the tax on foreign currency exchange transactions would yield just $4.3 billion a year, despite an annual turnover in dozens of trillion dollars." Accordingly, one of the modern Tobin tax versions, called the "Sterling Stamp Duty", sponsored by certain UK charities, has a rate of 0.005% "in order to avoid market distortions", i.e., 1/100 of what Tobin himself envisaged in 2001. Sterling Stamp Duty supporters argue that this tax rate would not adversely affect currency markets and could still raise large sums of money. The same rate of 0.005% was proposed for a currency transactions tax (CTT) in a report prepared by Rodney Schmidt for The North-South Institute (a Canadian NGO whose "research supports global efforts to [..] improve international financial systems and institutions"). Schmidt (2007) used the observed negative relationship between bid–ask spreads and transactions volume in foreign exchange markets to estimate the maximum "non-disruptive rate" of a currency transaction tax. A CTT tax rate designed with a pragmatic goal of raising revenue for various development projects, rather than to fulfill Tobin's original goals (of "slowing the flow of capital across borders" and "preventing or managing exchange rate crises"), should avoid altering the existing "fundamental market behavior", and thus, according to Schmidt, must not exceed 0.00005, i.e., the observed levels of currency transaction costs (bid-ask spreads). The mathematician Paul Wilmott has pointed out that while perhaps some trading ought to be discouraged, trading for the hedging of derivatives is generally considered a good thing in that it can reduce risk, and this should not be punished. He estimates that any financial tax should be at most one basis point so as to have negligible effect on hedging. Assuming that all currency market participants incur the same maximum level of transaction costs (the full cost of the bid-ask spread), as opposed to earning them in their capacity of market makers, and assuming that no untaxed substitutes exist for spot currency markets transactions (such as currency futures and currency exchange-traded funds), Schmidt (2007) finds that a CTT rate of 0.00005 would be nearly volume-neutral, reducing foreign exchange transaction volumes by only 14%. Such volume-neutral CTT tax would raise "relatively little revenue" though, estimated at around $33 bn annually, i.e., an order of magnitude less than the "carbon tax [which] has by far the greatest revenue-raising potential, estimated at $130-750 bn annually." The author warns however that both these market-based revenue estimates "are necessarily speculative", and he has more confidence in the revenue-raising potential of "The International Finance Facility (IFF) and International Finance Facility for Immunisation (IFFIm)." Although Tobin had said his own tax idea was unfeasible in practice, Joseph Stiglitz, former Senior Vice President and Chief Economist of the World Bank, said, on October 5, 2009, that modern technology meant that was no longer the case. Stiglitz said, the tax is "much more feasible today" than a few decades ago, when Tobin recanted. However, on November 7, 2009, at the G20 finance ministers summit in Scotland, Dominique Strauss-Khan, head of the International Monetary Fund, said "transactions are very difficult to measure and so it's very easy to avoid a transaction tax." Nevertheless, in early December 2009, economist Stephany Griffith-Jones agreed that the "greater centralisation and automisation of the exchanges and banks clearing and settlements systems ... makes avoidance of payment more difficult and less desirable." In January 2010, feasibility of the tax was supported and clarified by researcher Rodney Schmidt, who noted "it is technically easy to collect a financial tax from exchanges ... transactions taxes can be collected by the central counterparty at the point of the trade, or automatically in the clearing or settlement process." (All large-value financial transactions go through three steps. First dealers agree to a trade; then the dealers' banks match the two sides of the trade through an electronic central clearing system; and finally, the two individual financial instruments are transferred simultaneously to a central settlement system. Thus a tax can be collected at the few places where all trades are ultimately cleared or settled.) Based on digital technology, a new form of taxation, levied on bank transactions, was successfully used in Brazil from 1993 to 2007 and proved to be evasion-proof, more efficient and less costly than orthodox tax models. In his book, "Bank transactions: pathway to the single tax ideal", Marcos Cintra carries out a qualitative and quantitative in-depth comparison of the efficiency, equity and compliance costs of a bank transactions tax relative to orthodox tax systems, and opens new perspectives for the use of modern banking technology in tax reform across the world. There has been debate as to whether one single nation could unilaterally implement a "Tobin tax." In the year 2000, "eighty per cent of foreign-exchange trading [took] place in just seven cities. Agreement [to implement the tax] by [just three cities,] London, New York and Tokyo alone, would capture 58 per cent of speculative trading." In July, 2006, analyst Marion G. Wrobel examined the actual international experiences of various countries in implementing financial transaction taxes. Wrobel's paper highlighted the Swedish experience with financial transaction taxes. In January 1984, Sweden introduced a 0.5% tax on the purchase or sale of an equity security. Thus a round trip (purchase and sale) transaction resulted in a 1% tax. In July 1986 the rate was doubled. In January 1989, a considerably lower tax of 0.002% on fixed income securities was introduced for a security with a maturity of 90 days or less. On a bond with a maturity of five years or more, the tax was 0.003%. The revenues from taxes were disappointing; for example, revenues from the tax on fixed-income securities were initially expected to amount to 1,500 million Swedish kronor per year. They did not amount to more than 80 million Swedish kronor in any year and the average was closer to 50 million. In addition, as taxable trading volumes fell, so did revenues from capital gains taxes, entirely offsetting revenues from the equity transactions tax that had grown to 4,000 million Swedish kronor by 1988. On the day that the tax was announced, share prices fell by 2.2%. But there was leakage of information prior to the announcement, which might explain the 5.35% price decline in the 30 days prior to the announcement. When the tax was doubled, prices again fell by another 1%. These declines were in line with the capitalized value of future tax payments resulting from expected trades. It was further felt that the taxes on fixed-income securities only served to increase the cost of government borrowing, providing another argument against the tax. Even though the tax on fixed-income securities was much lower than that on equities, the impact on market trading was much more dramatic. During the first week of the tax, the volume of bond trading fell by 85%, even though the tax rate on five-year bonds was only 0.003%. The volume of futures trading fell by 98% and the options trading market disappeared. On 15 April 1990, the tax on fixed-income securities was abolished. In January 1991 the rates on the remaining taxes were cut in half and by the end of the year they were abolished completely. Once the taxes were eliminated, trading volumes returned and grew substantially in the 1990s. The Swedish experience of a transaction tax was with purchase or sale of equity securities, fixed income securities and derivatives. In global international currency trading, however, the situation could, some argue, look quite different. Wrobel's studies do not address the global economy as a whole, as James Tobin did when he spoke of "the nineties' crises in Mexico, South East Asia and Russia," which included the 1994 economic crisis in Mexico, the 1997 Asian Financial Crisis, and the 1998 Russian financial crisis. The APEC Business Advisory Council, the business representatives' body in APEC, which is the forum for facilitating economic growth, cooperation, trade and investment in the Asia-Pacific region, expressed its views in a letter to the IMF on 15 February 2010. In addition, ABAC expressed further concerns in the letter: Note - APEC's 21 Member Economies are Australia, Brunei Darussalam, Canada, Chile, People's Republic of China, Hong Kong, China, Indonesia, Japan, Republic of Korea, Malaysia, Mexico, New Zealand, Papua New Guinea, Peru, The Republic of the Philippines, The Russian Federation, Singapore, Chinese Taipei, Thailand, United States of America, Viet Nam. The International Trade Union Confederation/Asia-Pacific Labour Network (ITUC/APLN), the informal trade union body of the Asia-Pacific, supported the Tobin Tax in their Statement to the 2010 APEC Economic Leaders Meeting. The representatives of APEC's national trade unions centers also met with the Japanese Prime Minister, Naoto Kan, the host Leader of APEC for 2010, and called for the Prime Minister's support on the Tobin Tax. The ITUC shares its support for Tobin Tax with the Trade Union Advisory Council (TUAC), the official OECD trade union body, in a research on the feasibility, strengths and weaknesses of a potential Tobin Tax. ITUC, APLN and TUAC refer to Tobin Tax as the Financial Transactions Tax. An economist speaking out against the common belief that investment banks would bear the burden of a Tobin tax is Simon Johnson, Professor of Economics at the MIT and a former Chief Economist at the IMF, who in a BBC Radio 4 interview discussing banking system reforms presented his views on the Tobin tax. In 2009, U.S. Representative Peter DeFazio of Oregon proposed a financial transaction tax in his "Let Wall Street Pay for the Restoration of Main Street Bill". (This was proposed domestically for the United States only.) Schwabish (2005) examined the potential effects of introducing a stock transaction (or "transfer") tax in a single city (New York) on employment not only in the securities industry, but also in the supporting industries. A financial transactions tax would lead to job losses also in non-financial sectors of the economy through the so-called multiplier effect forwarding in a magnified form any taxes imposed on Wall Street employees through their reduced demand to their suppliers and supporting industries. The author estimated the ratios of financial- to non-financial job losses of between 10:1 to 10:4, that is "a 10 percent decrease in securities industry employment would depress employment in the retail, services, and restaurant sectors by more than 1 percent; in the business services sector by about 4 percent; and in total private jobs by about 1 percent." It is also possible to estimate the impact of a reduction in stock market volume caused by taxing stock transactions on the rise in the overall unemployment rate. For every 10 percent decline in stock market volume, elasticities estimated by Schwabish implied that a stock transaction ("transfer") tax could cost New York City between 30,000 and 42,000 private-sector jobs, and if the stock market volume reductions reached levels observed by Umlauf (1993) in Sweden after a stock FTT was introduced there ("By 1990, more than 50% of all Swedish trading had moved to London") then according to Schwabish (2005), following an introduction of a FTT tax, there would be 150,000-210,000 private-sector jobs losses in the New York alone. The cost of currency hedges—and thus "certainty what importers and exporters' money is worth"—has nothing to do with volatility whatsoever, as this cost is exclusively determined by the interest rate differental between two currencies. Nevertheless, as Tobin said, "If ... [currency] is suddenly withdrawn, countries have to drastically increase interest rates for their currency to still be attractive." Financial transaction tax rates of the magnitude of 0.1%-1% have been proposed by normative economists, without addressing the practicability of implementing a tax at these levels. In positive economics studies however, where due reference was paid to the prevailing market conditions, the resulting tax rates have been significantly lower. For instance, Edwards (1993) concluded that if the transaction tax revenue from taxing the futures markets were to be maximized (see Laffer curve), with the tax rate not leading to a prohibitively large increase in the marginal cost of market participants, the rate would have to be set so low that "a tax on futures markets will not achieve any important social objective and will not generate much revenue." Opinions are divided between those who applaud that the Tobin tax could protect countries from spillovers of financial crises, and those who claim that the tax would also constrain the effectiveness of the global economic system, increase price volatility, widen bid–ask spreads for end users such as investors, savers and hedgers, and destroy liquidity. Lack of "direct" supporting evidence for stabilizing (volatility-reducing) properties of Tobin-style transaction taxes in econometric research is acknowledged by some of the Tobin tax supporters: These Tobin tax proponents propose on "indirect" evidence in their favor, reinterpreting studies which do not deal directly with volatility, but instead with trading volume (with volume being generally reduced by transaction taxes, though it constitutes their tax base, see: negative feedback loop). This allows these Tobin tax proponents to state that "some studies show (implicitly) that higher transaction costs might dampen price volatility. This is so because these studies report that a reduction of trading activities is associated with lower price volatility." So if a study finds that reducing trading volume or trading frequency reduces volatility, these Tobin tax supporters combine it with the observation that Tobin-style taxes are volume-reducing, and thus should also indirectly reduce volatility ("this finding implies a negative relationship between [..] transaction tax [..] and volatility, because higher transaction costs will 'ceteris paribus' always dampen trading activities)." (Schulmeister et al., 2008, p. 18). Some Tobin tax supporters argue that volatility is better defined as a "long-term overshooting of speculative prices" than by standard statistical definitions (e.g., conditional variance of returns) ) which are typically used in empirical studies of volatility. The lack of empirical evidence to support or clearly refute the Tobin tax proponents' claim it will reduce "excess" volatility is due in part to a lack of an agreed definition of "excess" volatility that allows to be distinguished and formally measured. The Tobin tax rests on the premise that speculators ought to be, as Tobin puts it, "dissuaded." This premise itself is a matter of debate: "See Speculation." On the other side of the debate were the leaders of Germany who, in May 2008, planned to propose a worldwide ban on oil trading by speculators, blaming the 2008 oil price rises on manipulation by hedge funds. At that time India, with similar concerns, had already suspended futures trading of five commodities. On December 3, 2009, US Congressman Peter DeFazio stated, "The American taxpayers bailed out Wall Street during a crisis brought on by reckless speculation in the financial markets, ... This [ proposed financial transaction tax ] legislation will force Wall Street to do their part and put people displaced by that crisis back to work." On January 21, 2010, President Barack Obama endorsed the Volcker Rule which deals with proprietary trading of investment banks and restricts banks from making certain speculative kinds of investments if they are not on behalf of their customers. Former U.S. Federal Reserve Chairman Paul Volcker, President Obama's advisor, has argued that such speculative activity played a key role in the financial crisis of 2007–2010. Volcker endorsed only the UK's tax on bank bonuses, calling it "interesting", but was wary about imposing levies on financial market transactions, because he is "instinctively opposed" to any tax on financial transactions. In February 2010, Tim Harford, writing in the Undercover Economist column of the "Financial Times", commented directly on the claims of Keynes and Tobin that 'taxes on financial transactions would reduce financial volatility'. In 2003, researchers like Aliber et al. proposed that empirical evidence on the observed effects of the already introduced and abolished stock transaction taxes and a hypothetical CTT (Tobin) can probably be treated "interchangeably." They did not find any evidence on the differential effects of introducing or removing, stock transactions taxes or a hypothetical currency (Tobin) tax on any subset of markets or all markets. Researchers have used models belonging to the GARCH family to describe both the volatility behavior of stock market returns and the volatility behavior of foreign exchange rates. This is used as evidence that the similarity between currencies and stocks in the context of a tax designed to curb volatility such as a CTT (or FTT in general) can be inferred from the almost identical (statistically indistinguishable) behavior of the volatilities of equity and exchange rate returns. Hanke et al. state, "The economic consequences of introducing a [currency-only] Tobin Tax are [...] completely unknown, as such a tax has not been introduced on any real foreign exchange market so far". At the same time, even in the case of stock transaction taxes, where some empirical evidence is available, researchers warn that "it is hazardous to generalize limited evidence when debating important policy issues such as the transaction taxes". According to Stephan Schulmeister, Margit Schratzenstaller, and Oliver Picek (2008), from the practical viewpoint it is no longer possible to introduce a non-currency transactions tax (even if foreign exchange transactions were formally exempt) since the advent of currency derivatives and currency exchange-traded funds. All of these would have to be taxed together under a "non-currency" financial transactions tax (such as under certain proposals in the U.S. in 2009 which, although not intending to tax currencies directly, would still do so due to taxation of currency futures and currency exchange traded funds). Because these three groups of instruments are nearly perfect substitutes, if at least one of these groups were to be exempt, it would likely attract most market volume from the taxed alternatives. According to Stephan Schulmeister, Margit Schratzenstaller, and Oliver Picek (2008), restricting the financial transactions tax to foreign exchange only (as envisaged originally by Tobin) would not be desirable. Any "general FTT seems...more attractive than a specific transaction tax" (such as a currency-only Tobin tax), because it could reduce tax avoidance (i.e., substitution of similar untaxed instruments), could significantly increase the tax base and could be implemented more easily on organized exchanges than in a dealership market like the global foreign exchange market. (See also the discussion of tax avoidance as it relates to a currency transaction tax.) On October 5, 2009, Joseph Stiglitz said that any new tax should be levied on all asset classes – not merely foreign exchange, and would be based on the gross value of the assets, thereby helping to discourage the creation of asset bubbles. One non-tax regulatory equivalent of Tobin's (very narrow original) tax is to require "non-interest bearing deposit requirements on all open foreign exchange positions.". If these deposit requirements result in forfeits or losses if a currency suddenly declines due to speculation, they act as inhibitions against deliberate speculative shorts of a currency. However, they would not raise funds for other purposes, so are not a tax.
https://en.wikipedia.org/wiki?curid=31175
The Parent Trap (1961 film) The Parent Trap is a 1961 Walt Disney Technicolor romantic comedy film directed by David Swift. It stars Hayley Mills (in a dual role), Maureen O'Hara and Brian Keith in a story about teenage twins on a quest to reunite their divorced parents. The screenplay by the film's director David Swift was based upon the 1949 book "Lottie and Lisa" (German: ) by Erich Kästner. "The Parent Trap" was nominated for two Academy Awards, was broadcast on television, saw three television sequels, was remade in 1998 with Lindsay Lohan, and has been released on digital stereo LaserDisc format in 1986 as well as VHS and DVD in 2002. "The Parent Trap" was Hayley Mills's second film in the series of six for Disney. Identical twins Sharon McKendrick and Susan Evers (Hayley Mills) meet at Miss Inch's Summer Camp for Girls, unaware that they are sisters. Their identical appearance initially creates rivalry, and they pull pranks on each other, culminating in the camp dance being ruined. As punishment, Miss Inch decides that they must live together in the isolated "Serendipity" cabin (and eat together at an "Isolation Table") for the remainder of the camp season. After discovering that they both come from single-parent homes, they soon realize they are twin sisters and that their parents, Mitchell "Mitch" Evers (Brian Keith) and Margaret "Maggie" McKendrick (Maureen O'Hara), divorced shortly after their birth, with each parent having custody of one of them. The twins, each eager to meet the parent she never knew, decide to switch places. Susan gives Sharon a matching haircut and teaches her how to bite her nails, and they also take a crash-course getting to know each other's personalities and character traits so as to fool the parents. While Susan is in Boston at their grandparents' house pretending to be Sharon, Sharon goes to California to their father's house, pretending to be Susan. Sharon learns their father is engaged to a child-hating gold digger named Vicky Robinson (Joanna Barnes). Sharon calls Susan to tell her that their father is planning to marry Vicky, who is beautiful and dangerous, and that she must bring their mother to California immediately. Susan eventually reveals to their mother and grandparents the truth about their switching places. They are extremely happy to see Susan again, and Maggie and Susan fly to California. After Mitch and Maggie are reunited, they argue and the twins make their surprise appearance together. Mitch is extremely happy to see Sharon again, and after he tells Vicky the truth about the twins, she is shocked and furious — especially after learning that Maggie plans to spend the night at his house. The girls recreate their parents' first date at the Italian restaurant Martinelli's with a gypsy violinist. The former spouses are gradually drawn together, but have another fight about why they divorced in the first place, with Maggie telling Mitch that she and Sharon are leaving in the morning, and that she wishes him the best of everything with Vicky. Susan and Sharon try to find a way to delay their return to Boston, so the twins dress and talk alike so their parents are unable to tell them apart. They will reveal who is who only after returning from the annual family camping trip. Mitch and Maggie reluctantly agree. Vicky is furious, so Maggie tricks her into taking her place and letting her know it would give her a chance to get to know the twins better. Mitch is an outdoorsman, but Vicky is not, and she is not used to climbing mountains and being in the woods, so the twins decide to play tricks on her. Vicky spends her time swatting mosquitoes after unknowingly using sugared water instead of mosquito repellent. She is also awakened by two bear cubs licking honey, placed by the twins, off her feet. Exasperated, Vicky finally has a shouting tantrum, destroying everything in her path. The tantrum culminates in Vicky angrily slapping Susan and Sharon, leaving Mitch with a whole new-found view of her. When Vicky escapes back to the city in a great huff, Mitch seems none too worried to be rid of her. Back at the house, the twins apologize for what they did to Vicky, and are forgiven. Maggie makes dinner, and Mitch talks about what their life was like when they were married. They realize they still love each other, and do not want to grow into a couple of old and lonely people. They share a kiss, and decide to remarry. The novel was discovered by Disney's story editor Bill Dover who recommended the studio buy it. In March 1960 Disney announced that Hayley Mills would star in "His and Hers" to be written and directed by David Swift. Swift and Mills had just made "Pollyanna" for Disney. It was also known as "Petticoats and Blue Jeans" and was the first in a five-film contract Mills signed with Disney, to make one each summer. Maureen O'Hara signed in June. She wrote in her memoirs that Disney offered her a third of her normal fee of $75,000 but that she held out for her quote and got it. O'Hara said her contract gave her top billing but that Disney decided to give that to Mills; she says this caused tension with the studio and was why she never worked with Disney again. Production started in July under the title of "We Belong Together" and went until September. The film originally called for only a few trick photography shots of Hayley Mills in scenes with herself; the bulk of the film was to be shot using a body double. The film used Disney's proprietary sodium vapor process for compositing rather than the usual chroma key technique. When Walt Disney saw how seamless the processed shots were, he ordered the script reconfigured to include more of the special effect. Disney also wanted Mills to appear on camera as much as possible, knowing that she was having growth spurts during filming. The film was shot mostly at various locales in California. The summer camp scenes were filmed at Bluff Lake Camp (then owned by the Pasadena YMCA, now by Habonim Dror's Camp Gilboa) and the family camping scenes later in the movie at Cedar Lake Camp, both in the San Bernardino Mountains near the city of Big Bear Lake in Southern California. The Monterey scenes were filmed in various California locations, including millionaire Stuyvesant Fish's ranch in Carmel and Monterey's Pebble Beach golf course. The scenes at the Monterey house were shot at the studio's Golden Oak Ranch in Placerita Canyon, where Mitch's ranch was built. It was the design of this set that proved the most popular, and to this day the Walt Disney Archives receives requests for plans of the home's interior design. In fact, there never was such a house; the set was simply various rooms built on a sound stage. Camp Inch was based on a real girls' camp called Camp Crestridge for Girls at the Ridgecrest Baptist Conference Center near Asheville, North Carolina. Richard and Robert Sherman provided the songs, which, besides the title song "The Parent Trap", includes "For Now, For Always", and "Let's Get Together". "Let's Get Together" (sung by Annette Funicello) is heard playing from a record player at the summer camp; the tune is reprised by the twins when they restage their parents' first date and that version is sung double-tracked by Hayley Mills. (Hayley's own single of the song, credited to "Hayley Mills and Hayley Mills," reached #8 on the US charts.) The film's title song was performed by Tommy Sands and Annette Funicello, who were both on the studio lot shooting "Babes in Toyland" at the time. Bosley Crowther of "The New York Times" wrote that "it should be most appealing to adults, as well as to children, because of the cheerfully persuasive dual performance of Hayley Mills." "Variety" stated that the film was "absolutely predictable from the outset," but was still "a winner" thanks to the performance of Mills, who "seems to have an instinctive sense of comedy and an uncanny ability to react in just the right manner. Her contribution to the picture is virtually infinite." Charles Stinson of the "Los Angeles Times" declared it "a comedy unusually well designed for the entire family — enough sight gags to keep the children screaming and enough clever dialogue to amuse their parents." "Harrison's Reports" graded the film as "Very Good," and Richard L. Coe of "The Washington Post" called it "charmingly lively" even though "the terrain is familiar." The film currently holds a score of 90% on the review aggregator site Rotten Tomatoes based on 20 reviews. The film was nominated for two Academy Awards: one for Sound by Robert O. Cook, and the other for Film Editing by Philip W. Anderson. The film and its editor, Philip W. Anderson, won the inaugural 1962 Eddie Award of the American Cinema Editors. In 1961 a comic book version of the film was published, adapted and illustrated by Dan Spiegle. The film was theatrically re-released in 1968 and earned $1.8 million in rentals. The Disney Studios produced three television sequels "The Parent Trap II" (1986), "Parent Trap III" (1989) and "" (1989). The original was remade in 1998 starring Lindsay Lohan, Dennis Quaid and Natasha Richardson. Joanna Barnes also made an appearance as Vicki, the mother of Dennis Quaid's character's fiancée, Meridith. Vicki is the same name as Barnes' character in the 1961 film, hinting at the fate of her original character. In February 2018, it was reported that another remake of "The Parent Trap" is in development for Walt Disney Studios' upcoming streaming service Disney+. In India, there have been several films inspired by "The Parent Trap". In 1965, a Tamil language version of the story called "Kuzhandaiyum Deivamum", starring Kutty Padmini was released. The following year, it was remade into Telugu as "Leta Manasulu" also starring Kutty Padmini. A Hindi version "Do Kaliyaan" starring Neetu Singh in the double role was made in 1968. The 1987 film "Pyar Ke Kabil" also has a similar storyline, as does the 2001 film "Kuch Khatti Kuch Meethi" which has Kajol playing the double role of 23-year-old twins. The film was released on a 2-disc special edition DVD in 2002, as part of the Vault Disney collection, with a new digital remaster by THX. In 2005, the film was once again released in a 2-Movie Collection, which also contained the made-for-television sequel, "The Parent Trap II" (1986), plus the original film trailer and other bonus features. On April 24, 2018, the film was released for the first time on Blu-ray, but as a Disney Movie Club exclusive. The 1998 remake was also released on Blu-ray the same day.
https://en.wikipedia.org/wiki?curid=31176
Torpoint Ferry The Torpoint Ferry is a car and pedestrian chain ferry connecting the A374 which crosses the Hamoaze, a stretch of water at the mouth of the River Tamar, between Devonport in Plymouth and Torpoint in Cornwall. The service was established in 1791 and chain ferry operations were introduced by James Meadows Rendel in 1832. The route is currently served by three ferries, built by Ferguson Shipbuilders Ltd at Port Glasgow and named after three rivers in the area: "Tamar II", "Lynher II" and "Plym II". Each ferry carries 73 cars and operates using its own set of slipways and parallel chains, with a vehicle weight limit of The ferry boats are propelled across the river by pulling themselves on the chains; the chains then sink to the bottom to allow shipping movements in the river. An intensive service is provided, with service frequencies ranging from every 10 minutes (3 ferries in service) at peak times, to half-hourly (1 ferry in service) at night. Services operate 24 hours a day, every day (including throughout Christmas and all other holiday periods), with service frequency never falling below half-hourly. The ferries, along with the nearby Tamar Bridge, are operated by the "Tamar Bridge and Torpoint Ferry Joint Committee", which is jointly owned by Plymouth City Council and Cornwall Council. Tolls are payable in the Torpoint to Devonport eastbound direction only, except for motorcyclists who pay westbound only. The toll is £2.00 for cars and motorcycle riders are charged 40p; there is no additional charge for a pillion passenger. Frequent users can reduce the fare by half by purchasing top-ups online for a machine-readable windscreen-mounted digital payment tag, called "TamarTag", which is also usable on the bridge. The toll increase of 50% in March 2010 was the first rise for nearly 15 years before a further increase in November 2019. The ferry takes around 10 minutes as opposed to a 20 mile, 30 minute trip around the mainland; rush hour times will differ. A ferry route between Torpoint and Plymouth Dock (now called Devonport) was created by an Act of Parliament in 1790 and the Earl of Mount Edgcumbe began to run ferries the following year. In 1826 the ferry operations were taken over by the Torpoint Steamboat Company, which built landing piers on both sides of the Tamar. The company also built the steam ferry "Jemima" which entered service in 1831. The steamer was unable to hold a course in the strong tidal flow of the Hamoaze, so it was soon withdrawn and the older ferryboats returned to service. The steamboat company approached James Meadows Rendel in 1832 and asked him to design a steam-powered floating bridge for the route. Two ferries were built in 1834 and 1835 and provided a continuous service, operating in alternate months. The tolls varied between 2d for a horse and 5s for a coach with 4 horses, with a double fare charged on Sundays. The original ferries were replaced by two new ferries built in 1871 and 1878. As a result of increasing traffic, the ferry company investigated twin ferry operations in 1905. Both the Admiralty and Devonport Corporation opposed this as the company would need to expand the landing beach in Devonport. An experimental two ferry service with the existing shore installations had to be abandoned due to the strain on the equipment. A supplementary steamer service was also introduced in 1902, with the "Volta" and "Lady Beatrice" linking Torpoint to two locations in Devonport on a triangular route. Cornwall County Council acquired both the ferry and the steamers in 1922 for £42,000. The "Volta" was immediately sold for breaking and two new ferries were ordered, which entered service in 1925 and 1926. These were the first ferries on the route designed to carry motor vehicles and could carry 800 passengers and 16 cars. Land was acquired on both sides of the rivers to lay a second set of chains and expand the landing beaches. A third, reserve, ferry was ordered and modern shore facilities were also built and twin-ferry operation began in July 1932. These changes made the supplementary steamer redundant and the "Lady Beatrice" was sold. Motor traffic using the route increased rapidly after World War II, and two new ferries with a capacity of 30 cars each were introduced by 1961. A third ferry entered service in 1966 and a marshaling area was built on the Torpoint foreshore, relieving congestion in the centre of Torpoint. The landing beaches were expanded further in 1972 allowing all 3 ferries to operate simultaneously. The three ferries were refitted in the 1980s and were stretched so that they could carry approximately 50 cars. After the refit, they were named the "Tamar", "Lynher" and "Plym". These remained in service until 2005 when they were replaced by the current ferries. All three ferries, Lynher, Plymouth and Tamar were eventually sold in 2004 for recycling by the company Smedegaarden located at Esbjerg in Denmark. They had the vessels towed across the North Sea and recycled during 2005.
https://en.wikipedia.org/wiki?curid=31177
Tarot The tarot (, first known as trionfi and later as tarocchi or tarock) is a pack of playing cards, used from the mid-15th century in various parts of Europe to play games such as Italian tarocchini, French tarot and Austrian Königrufen, many of which are still played today. In the late 18th century, some tarot decks began to be used for divination via tarot card reading and cartomancy leading to custom decks developed for such occult purposes. Like common playing cards, the tarot has four suits which vary by region: French suits in Northern Europe, Latin suits in Southern Europe, and German suits in Central Europe. Each suit has 14 cards: ten pip cards numbering from one (or Ace) to ten, and four face cards (King, Queen, Knight, and Jack/Knave/Page). In addition, the tarot has a separate 21-card trump suit and a single card known as the Fool. Depending on the game, the Fool may act as the top trump or may be played to avoid following suit. These tarot cards are still used throughout much of Europe to play conventional card games without occult associations. Among English-speaking countries where these games are not played frequently, tarot cards are used primarily for novelty and divinatory purposes, usually using specially designed packs. Some occult enthusiasts claim that tarot has esoteric links to ancient Egypt, the Kabbalah, Indian Tantra, or the I Ching, though no documented evidence of such origins or of the usage of tarot for divination before the 18th century has been demonstrated to a scholarly standard. The word "Tarot" and German "Tarock" derive from the Italian "Tarocchi", the origin of which is uncertain but "taroch" was used as a synonym for foolishness in the late 15th and early 16th centuries. The decks were known exclusively as "Trionfi" during the fifteenth century. The new name first appeared in Brescia around 1502 as "Tarocho". During the 16th century, a new game played with a standard deck but sharing a very similar name (Trionfa) was quickly becoming popular. This coincided with the older game being renamed "tarocchi". In modern Italian, the singular term is "Tarocco", which, as a noun, refers to a cultivar of blood orange. Playing cards first entered Europe in the late 14th century, most likely from Mamluk Egypt. The first records date to 1367 in Berne and they appear to have spread very rapidly across the whole of Europe, as may be seen from the records, mainly of card games being banned. Little is known about the appearance and number of these cards; the only significant information being provided by a text by John of Rheinfelden in 1377 from Freiburg im Breisgau, who, in addition to other versions describes the basic pack as containing the still-current 4 suits of 13 cards, the courts usually being the King, Ober and Unter ("marshals"), although Dames and Queens were already known by then. One early pattern of playing cards that evolve was one with the suits of Batons or Clubs, Coins, Swords and Cups. These suits are still used in traditional Italian, Spanish and Portuguese playing card decks, but have also been adapted in packs used specifically for tarot divination cards that first appeared in the late 18th century. The first documented tarot packs were recorded between 1440 and 1450 in Milan, Ferrara, Florence and Bologna when additional trump cards with allegorical illustrations were added to the common four-suit pack. These new decks were called "carte da trionfi", triumph cards, and the additional cards known simply as trionfi, which became "trumps" in English. The earliest documentation of "trionfi" is found in a written statement in the court records of Florence, in 1440, regarding the transfer of two decks to Sigismondo Pandolfo Malatesta. The oldest surviving tarot cards are the 15 or so Visconti-Sforza tarot decks painted in the mid-15th century for the rulers of the Duchy of Milan. A lost tarot-like pack was commissioned by Duke Filippo Maria Visconti and described by Martiano da Tortona probably between 1418 and 1425, since the painter he mentions, Michelino da Besozzo, returned to Milan in 1418, while Martiano himself died in 1425. He described a 60-card deck with 16 cards having images of the Roman gods and suits depicting four kinds of birds. The 16 cards were regarded as "trumps" since in 1449 Jacopo Antonio Marcello recalled that the now deceased duke had invented a "novum quoddam et exquisitum triumphorum genus", or "a new and exquisite kind of triumphs". Other early decks that also showcased classical motifs include the Sola-Busca and Boiardo-Viti decks of the 1490s. In Florence, an expanded deck called "Minchiate" was used. This deck of 97 cards includes astrological symbols and the four elements, as well as traditional tarot motifs. Although a Dominican preacher inveighed against the evil inherent in cards (chiefly owing to their use in gambling) in a sermon in the 15th century, no routine condemnations of tarot were found during its early history. Because the earliest tarot cards were hand-painted, the number of the decks produced is thought to have been small. It was only after the invention of the printing press that mass production of cards became possible. The expansion of tarot outside of Italy, first to France and Switzerland, occurred during the Italian Wars. The most important tarot pattern used in these two countries was the Tarot of Marseilles of Milanese origin. The original purpose of tarot cards was to play games. A very cursory explanation of rules for a tarot-like deck is given in a manuscript by Martiano da Tortona before 1425. Vague descriptions of game play or game terminology follow for the next two centuries until the earliest known complete description of rules for a French variant in 1637. The game of tarot has many regional variations. Tarocchini has survived in Bologna and there are still others played in Piedmont and Sicily, but in Italy the game is generally less popular than elsewhere. The 18th century saw tarot's greatest revival, during which it became one of the most popular card games in Europe, played everywhere except Ireland and Britain, the Iberian peninsula, and the Ottoman Balkans. French tarot experienced a revival beginning in the 1970s and France has the strongest tarot gaming community. Regional tarot games—often known as "tarock", "tarok", or "tarokk" are widely played in central Europe within the borders of the former Austro-Hungarian empire. These were the oldest form of tarot deck to be made, being first devised in the 15th century in northern Italy. The so-called occult tarot decks are based on decks of this type. Three decks of this category are still used to play certain games: The Tarocco Siciliano is the only deck to use the so-called Portuguese suit system which uses Spanish pips but intersects them like Italian pips. Some of the trumps are different such as the lowest trump, "Miseria" (destitution). It omits the Two and Three of coins, and numerals one to four in clubs, swords and cups: it thus has 64 cards but the ace of coins is not used, being the bearer of the former stamp tax. The cards are quite small and not reversible.[9] The illustrations of French-suited tarot trumps depart considerably from the older Italian-suited design, abandoning the Renaissance allegorical motifs. With the exception of novelty decks, French-suited tarot cards are almost exclusively used for card games. The first generation of French-suited tarots depicted scenes of animals on the trumps and were thus called "Tiertarock" ('Tier' being German for 'animal') appeared around 1740. Around 1800, a greater variety of decks were produced, mostly with genre art or veduta. Current French-suited tarot decks come in these patterns: The German states used to produce a variety of 78-card Tarot packs, today, there are only two: both designs of Cego pack - Cego Adler by ASS Altenburger and Cego with genre scenes by F.X. Schmid. There are, however, cards that were and are marketed as 'Tarock' cards. These are standard 36-card German-suited decks for Bauerntarock, Württemberg Tarock and Bavarian Tarock. They are not true tarot/tarock packs, but a Bavarian or Württemberg pattern of the standard German-suited decks with only 36 cards; the pip cards ranging from 6 to 10, Under Knave ("Unter"), Over Knave ("Ober"), King, and Ace. These use Ace-Ten ranking, like Klaverjas, where Ace is the highest followed by 10, King, Ober, Unter, then 9 to 6. The heart suit is the default trump suit. The Bavarian deck is also used to play Schafkopf by excluding the Sixes. The earliest evidence of a tarot deck used for cartomancy comes from an anonymous manuscript from around 1750 which documents rudimentary divinatory meanings for the cards of the Tarocco Bolognese. The popularization of esoteric tarot started with Antoine Court and Jean-Baptiste Alliette (Etteilla) in Paris during the 1780s, using the Tarot of Marseilles. French tarot players abandoned the Marseilles tarot in favor of the Tarot Nouveau around 1900, with the result that the Marseilles pattern is now used mostly by cartomancers. Etteilla was the first to issue a tarot deck specifically designed for occult purposes around 1789. In keeping with the misplaced belief that such cards were derived from the Book of Thoth, Etteilla's tarot contained themes related to ancient Egypt. The 78-card tarot deck used by esotericists has two distinct parts: The terms "Major Arcana" and "Minor Arcana" were first used by Jean-Baptiste Pitois (also known as Paul Christian) and are never used in relation to tarot card games. Some decks exist primarily as artwork, and such art decks sometimes contain only the 22 major arcana. The three most common decks used in esoteric tarot are the Tarot of Marseilles, the Rider-Waite-Smith tarot deck, and the Thoth tarot deck. Aleister Crowley, who devised the Thoth deck along with Lady Frieda Harris, stated of the Tarot: "The origin of this pack of cards is very obscure. Some authorities seek to put it back as far as the ancient Egyptian Mysteries; others try to bring it forward as late as the fifteenth or even the sixteenth century ... [but] The only theory of ultimate interest about the Tarot is that it is an admirable symbolic picture of the Universe, based on the data of the Holy Qabalah."
https://en.wikipedia.org/wiki?curid=31178
Toyotomi Hideyoshi Hideyoshi rose from a peasant background as a retainer of the prominent lord Oda Nobunaga to become one of the most powerful men in Japan. Hideyoshi succeeded Nobunaga after the Honnō-ji Incident in 1582 and continued Nobunaga's campaign to unite Japan that led to the closing of the Sengoku period. Hideyoshi became the "de facto" leader of Japan and acquired the prestigious positions of Chancellor of the Realm and Imperial Regent by the mid-1580s. Hideyoshi launched the Japanese invasions of Korea in 1592 to initial success, but eventual military stalemate damaged his prestige before his death in 1598. Hideyoshi's young son and successor Toyotomi Hideyori was displaced by Tokugawa Ieyasu at the Battle of Sekigahara in 1600 which would lead to the founding of the Tokugawa Shogunate. Hideyoshi's rule covers most of the Azuchi–Momoyama period of Japan, partially named after his castle, Momoyama Castle. Hideyoshi left an influential and lasting legacy in Japan, including Osaka Castle, the Tokugawa class system, the restriction on the possession of weapons to the "samurai", and the construction and restoration of many temples some of which are still visible in Kyoto. Very little is known for certain about Toyotomi Hideyoshi before 1570, when he begins to appear in surviving documents and letters. His autobiography starts in 1577, but in it, Hideyoshi spoke very little about his past. According to tradition, Hideyoshi was born on 17 March 1537 in Nakamura, Owari Province (present-day Nakamura Ward, Nagoya), in the middle of the chaotic Sengoku period under the collapsed Ashikaga Shogunate. Hideyoshi had no traceable "samurai" lineage, and his father Yaemon was an "ashigaru" – a peasant employed by the "samurai" as a foot soldier. Hideyoshi had no surname, and his childhood given name was ("Bounty of the Sun") although variations exist. Yaemon died in 1543 when Hideyoshi was 7-years-old, the younger of two children, his sibling being an older sister. Many legends describe Hideyoshi being sent to study at a temple as a young man, but he rejected temple life and went in search of adventure. Under the name , he first joined the Imagawa clan as a servant to a local ruler named . Hideyoshi traveled all the way to the lands of Imagawa Yoshimoto, the "daimyō" (feudal lord) based in Suruga Province, and served there for a time, only to abscond with a sum of money entrusted to him by Matsushita Yukitsuna. In 1558, Hideyoshi became an "ashigaru" for the powerful Oda clan, the rulers of his home province of Owari, now headed by the ambitious Oda Nobunaga. Hideyoshi soon became one of Nobunaga's sandal-bearers, a position of relatively high status, and was present at the Battle of Okehazama in 1560 when Nobunaga defeated Imagawa Yoshimoto to become one of the most powerful warlords in the Sengoku period. According to his biographers, Hideyoshi supervised the repair of Kiyosu Castle, a claim described as "apocryphal", and managed the kitchen. In 1561, Hideyoshi married One, the adopted daughter of Asano Nagakatsu. Hideyoshi carried out repairs on Sunomata Castle with his younger brother Toyotomi Hidenaga, Hachisuka Masakatsu, and Maeno Nagayasu. Hideyoshi's efforts were well-received because Sunomata was in enemy territory, and according to legend constructed a fort in Sunomata overnight and discovered a secret route into Mount Inaba, after which much of the local garrison surrendered. Hideyoshi was very successful as a negotiator. In 1564, he managed to convince, mostly with liberal bribes, a number of Mino warlords to desert the Saitō clan. Hideyoshi approached many Saitō clan samurai and convinced them to submit to Nobunaga, including the Saitō clan's strategist, Takenaka Shigeharu. Nobunaga's easy victory at the Siege of Inabayama Castle in 1567 was largely due to Hideyoshi's efforts, and despite his peasant origins, Hideyoshi became one of Nobunaga's most distinguished generals, eventually taking the name . The new surname included two characters, one each from Oda's two other right-hand men, and . Hideyoshi led troops in the Battle of Anegawa in 1570 in which Oda Nobunaga allied with Tokugawa Ieyasu to lay siege to two fortresses of the Azai and Asakura clans. Hideyoshi participated in the 1573 Siege of Nagashima. In 1573, after victorious campaigns against the Azai and Asakura, Nobunaga appointed Hideyoshi "daimyō" of three districts in the northern part of Ōmi Province. Initially, Hideyoshi based at the former Azai headquarters at Odani Castle but moved to Kunitomo and renamed the city "Nagahama" in tribute to Nobunaga. Hideyoshi later moved to the port at Imahama on Lake Biwa, where he began work on Imahama Castle and took control of the nearby Kunitomo firearms factory that had been established some years previously by the Azai and Asakura. Under Hideyoshi's administration, the factory's output of firearms increased dramatically. Hideyoshi fought in the Battle of Nagashino. Nobunaga sent Hideyoshi to Himeji Castle to conquer the Chūgoku region from the Mori clan in 1576. Hideyoshi then fought in the 1577 Battle of Tedorigawa, the Siege of Miki, the Siege of Itami (1579), and the 1582 Siege of Takamatsu. After the assassinations at Honnō-ji of Oda Nobunaga and his eldest son Nobutada in 1582 at the hands of Akechi Mitsuhide, Hideyoshi, seeking vengeance for the death of his beloved lord, made peace with the Mōri clan and defeated Akechi at the Battle of Yamazaki. Subsequently, Hideyoshi was in a very strong position. He summoned the powerful daimyo to Kiyosu so that they could determine Nobunaga's heir. Oda Nobukatsu and Oda Nobutaka quarreled, causing Hideyoshi to instead choose Samboshi, Nobu's grandson. Having won the support of the other two Oda elders, Niwa Nagahide and Ikeda Tsuneoki, Hideyoshi established Hidenobu's position, as well as his own influence in the Oda clan. He distributed Nobunaga's provinces among the generals and formed a council of four generals to help govern. Tension quickly escalated between Hideyoshi and Katsuie, and at the Battle of Shizugatake in the following year, Hideyoshi destroyed Katsuie's forces. Hideyoshi had thus consolidated his own power, dealt with most of the Oda clan, and controlled 30 provinces. The famous kirishitan daimyō and samurai Dom Justo Takayama fought on his side at this epic battle. In 1582, Hideyoshi began construction of Osaka Castle. Built on the site of the temple Ishiyama Hongan-ji destroyed by Nobunaga, the castle would become the last stronghold of the Toyotomi clan after Hideyoshi's death. Nobunaga's other son, Oda Nobukatsu, remained hostile to Hideyoshi. He allied himself with Tokugawa Ieyasu, and the two sides fought at the inconclusive Battle of Komaki and Nagakute. It ultimately resulted in a stalemate, although Hideyoshi's forces were delivered a heavy blow. Finally, Hideyoshi made peace with Nobukatsu, ending the pretext for war between the Tokugawa and Hashiba clans. Hideyoshi sent Tokugawa Ieyasu his younger sister Asahi no kata and mother Ōmandokoro as hostages. Ieyasu eventually agreed to become a vassal of Hideyoshi. Like Nobunaga before him, Hideyoshi never achieved the title of "shōgun". Instead, he arranged to have himself adopted by Konoe Sakihisa, one of the noblest men belonging to the Fujiwara clan and secured a succession of high court titles including, in 1585, the prestigious position of Imperial Regent ("kampaku"). In 1586, Hideyoshi was formally given the new clan name Toyotomi (instead of Fujiwara) by the Imperial court. He built a lavish palace, the Jurakudai, in 1587 and entertained the reigning Emperor, Emperor Go-Yōzei, the following year. Afterwards, Hideyoshi subjugated Kii Province and conquered Shikoku under the Chōsokabe clan. He also took control of Etchū Province and conquered Kyūshū. In 1587, Hideyoshi banished Christian missionaries from Kyūshū to exert greater control over the "Kirishitan" "daimyōs". However, since he made much of trade with Europeans, individual Christians were overlooked unofficially. In 1588, Hideyoshi forbade ordinary peasants from owning weapons and started a sword hunt to confiscate arms. The swords were melted down to create a statue of the Buddha. This measure effectively stopped peasant revolts and ensured greater stability at the expense of freedom of the individual "daimyōs". The 1590 Siege of Odawara against the Hōjō clan in the Kantō region eliminated the last resistance to Hideyoshi's authority. His victory signified the end of the Sengoku period. During this siege, Hideyoshi offered Ieyasu the eight Hōjō-ruled provinces in the Kantō region in exchange for the submission of Ieyasu's five provinces. Ieyasu accepted this proposal. In February 1591, Hideyoshi ordered Sen no Rikyū to commit suicide, likely in one of his angry outbursts. Rikyū had been a trusted retainer and master of the tea ceremony under both Hideyoshi and Nobunaga. Under Hideyoshi's patronage, Rikyū made significant changes to the aesthetics of the tea ceremony that had a lasting influence over many aspects of Japanese culture. Even after Rikyū's death, Hideyoshi is said to have built his many construction projects based upon aesthetics promoted by Rikyū, perhaps suggesting that he regretted his actions. Following Rikyū's death, Hideyoshi turned his attention from tea ceremony to Noh, which he had been studying since becoming Imperial Regent. During his brief stay in Nagoya Castle in what is today Saga Prefecture, on Kyūshū, Hideyoshi memorized the "shite" (lead roles) parts of ten Noh plays, which he then performed, forcing various "daimyōs" to accompany him onstage as the "waki" (secondary, accompanying role). He even performed before the emperor. The stability of the Toyotomi dynasty after Hideyoshi's death was put in doubt with the death of his son Tsurumatsu in September 1591. The three-year-old was his only child. When his half-brother Hidenaga died shortly after, Hideyoshi named his nephew Hidetsugu his heir, adopting him in January 1592. Hideyoshi resigned as "kampaku" to take the title of "taikō" (retired regent). Hidetsugu succeeded him as "kampaku". With Hideyoshi's health beginning to falter, but still yearning for some accomplishment to solidify his legacy, he adopted Oda Nobunaga's dream of a Japanese conquest of China and launched the conquest of the Ming dynasty by way of Korea (at the time known as Koryu or Joseon). Hideyoshi had been communicating with the Koreans since 1587 requesting unmolested passage into China. As an ally of Ming China, the Joseon government of the time at first refused talks entirely, and in April and July 1591 also refused demands that Japanese troops be allowed to march through Korea. The government of Joseon was concerned that allowing Japanese troops to march through Korea (Joseon) would mean that masses of Ming Chinese troops would battle Hideyoshi's troops on Korean soil before they could reach China, putting Korean security at risk. In August 1591, Hideyoshi ordered preparations for an invasion of Korea to begin. In the first campaign, Hideyoshi appointed Ukita Hideie as field marshal, and had him go to the Korean peninsula in April 1592. Konishi Yukinaga occupied Seoul, which had been the capital of the Joseon dynasty of Korea, on June 19. After Seoul fell easily, Japanese commanders held a war council in June in Seoul and determined targets of subjugation called Hachidokuniwari (literally, dividing the country into eight routes) by each corps (the First Division of Konishi Yukinaga and others from Pyeongan Province, the Second Division of Katō Kiyomasa and others from Hangyong Province, the Third Division of Kuroda Nagamasa and others from Hwanghae Province, the Fourth Division of Mōri Yoshinari and others from Gangwon Province; the Fifth Division of Fukushima Masanori and others from Chungcheong Province; the Sixth Division by Kobayakawa Takakage and others from Jeolla Province, the Seventh Division by Mōri Terumoto and others from Gyeongsang Province, and the Eighth Division of Ukita Hideie and others from Gyeonggi Province). In only four months, Hideyoshi's forces had a route into Manchuria and occupied much of Korea. The Korean king Seonjo of Joseon escaped to Uiju and requested military intervention from China. In 1593, the Wanli Emperor of Ming China sent an army under general Li Rusong to block the planned Japanese invasion of China and recapture the Korean peninsula. The Ming army of 43,000 soldiers headed by Li Ru-song proceeded to attack Pyongyang. On January 7, 1593, the Ming relief forces under Li recaptured Pyongyang and surrounded Seoul, but Kobayakawa Takakage, Ukita Hideie, Tachibana Muneshige and Kikkawa Hiroie won the Battle of Byeokjegwan in the suburbs of Seoul. At the end of the first campaign, Japan's entire navy was destroyed by Admiral Yi Sun-sin of Korea whose base was located in a part of Korea the Japanese could not control. This, in effect, put an end to Japan's dream of conquering China as the Koreans simply destroyed Japan's ability to re-supply their troops who were bogged down in Pyongyang. The birth of Hideyoshi's second son in 1593, Hideyori, created a potential succession problem. To avoid it, Hideyoshi exiled his nephew and heir Hidetsugu to Mount Kōya and then ordered him to commit suicide in August 1595. Hidetsugu's family members who did not follow his example were then murdered in Kyoto, including 31 women and several children. In January 1597, Toyotomi Hideyoshi had twenty-six Christians arrested as an example to Japanese who wanted to convert to Christianity. They are known as the Twenty-six Martyrs of Japan. They included five European Franciscan missionaries, one Mexican Franciscan missionary, three Japanese Jesuits and seventeen Japanese laymen including three young boys. They were tortured, mutilated, and paraded through towns across Japan. On February 5, they were executed in Nagasaki by public crucifixion. After several years of negotiations (broken off because envoys of both sides falsely reported to their masters that the opposition had surrendered), Hideyoshi appointed Kobayakawa Hideaki to lead a renewed invasion of Korea, but their efforts on the peninsula met with less success than the first invasion. Japanese troops remained pinned down in Gyeongsang Province. In June 1598, the Japanese forces turned back several Chinese offensives in Suncheon and Sacheon, but they were unable to make further progress as the Ming army prepared for a final assault. While Hideyoshi's battle at Sacheon was a major Japanese victory, all three parties to the war were exhausted. He told his commander in Korea, "Don't let my soldiers become spirits in a foreign land." Toyotomi Hideyoshi died on September 18, 1598. He was delirious, with Sansom asserting that he was babbling of the distribution of fiefs. His last words, delivered to his closest daimyos and generals, were "‘"I depend upon you for everything. I have no other thoughts to leave behind. It is sad to part from you"". His death was kept secret by the Council of Five Elders to preserve morale, and they ordered the Japanese forces in Korea to withdraw back to Japan. Because of his failure to capture Korea, Hideyoshi's forces were unable to invade China. Rather than strengthen his position, the military expeditions left his clan's coffers and fighting strength depleted, his vassals at odds over responsibility for the failure, and the clans that were loyal to the Toyotomi name weakened. The dream of a Japanese conquest of China was put on hold indefinitely. The Tokugawa government later not only prohibited any further military expeditions to the Asian mainland but closed Japan to nearly all foreigners during the years of the Tokugawa shogunate. It was not until the late 19th century that Japan again fought a war against China through Korea, using much the same route that Hideyoshi's invasion force had used. After his death, the other members of the Council of Five Regents were unable to keep the ambitions of Tokugawa Ieyasu in check. Two of Hideyoshi's top generals, Katō Kiyomasa and Fukushima Masanori, had fought bravely during the war but returned to find the Toyotomi clan castellan Ishida Mitsunari in power. He held the generals in contempt, and they sided with Tokugawa Ieyasu. Hideyoshi's underage son and designated successor Hideyori lost the power his father once held, and Tokugawa Ieyasu was declared "shōgun" following the Battle of Sekigahara in 1600. Toyotomi Hideyoshi changed Japanese society in many ways. These include the imposition of a rigid class structure, restriction on travel, and surveys of land and production. Class reforms affected commoners and warriors. During the Sengoku period, it had become common for peasants to become warriors, or for samurai to farm due to the constant uncertainty caused by the lack of centralized government and always tentative peace. Upon taking control, Hideyoshi decreed that all peasants be disarmed completely. Conversely, he required samurai to leave the land and take up residence in the castle towns. This solidified the social class system for the next 300 years. Furthermore, he ordered comprehensive surveys and a complete census of Japan. Once this was done and all citizens were registered, he required all Japanese to stay in their respective "han" (fiefs) unless they obtained official permission to go elsewhere. This ensured order in a period when bandits still roamed the countryside and peace was still new. The land surveys formed the basis for systematic taxation. In 1590, Hideyoshi completed construction of the Osaka Castle, the largest and most formidable in all Japan, to guard the western approaches to Kyoto. In that same year, Hideyoshi banned "unfree labour" or slavery, but forms of contract and indentured labour persisted alongside the period penal codes' forced labour. Hideyoshi also influenced the material culture of Japan. He lavished time and money on the tea ceremony, collecting implements, sponsoring lavish social events, and patronizing acclaimed masters. As interest in the tea ceremony rose among the ruling class, so too did demand for fine ceramic implements, and during the course of the Korean campaigns, not only were large quantities of prized ceramic ware confiscated, many Korean artisans were forcibly relocated to Japan. Inspired by the dazzling Golden Pavilion in Kyoto, he had the Golden Tea Room constructed, which was covered with gold leaf and lined inside with red gossamer. Using this mobile innovation, he was able to practice the tea ceremony wherever he went, powerfully projecting his unrivalled power and status upon his arrival. Politically, he set up a governmental system that balanced out the most powerful Japanese warlords (or "daimyōs"). A council was created to include the most influential lords. At the same time, a regent was designated to be in command. Just before his death, Hideyoshi hoped to set up a system stable enough to survive until his son grew old enough to become the next leader. A was formed, consisting of the five most powerful "daimyōs". Following the death of Maeda Toshiie, however, Tokugawa Ieyasu began to secure alliances, including political marriages (which had been forbidden by Hideyoshi). Eventually, the pro-Toyotomi forces fought against the Tokugawa in the Battle of Sekigahara. Ieyasu won and received the title of "Seii-Tai Shōgun" two years later. Hideyoshi is commemorated at several Toyokuni Shrines scattered over Japan. Ieyasu left in place the majority of Hideyoshi's decrees and built his shogunate upon them. This ensured that Hideyoshi's cultural legacy remained. In a letter to his wife, Hideyoshi wrote: Because of his low birth with no family name, to the eventual achievement of Imperial Regent, the highest title of Imperial nobility, Toyotomi Hideyoshi had quite a few names throughout his life. At birth, he was given the name . At "genpuku", he took the name . Later, he was given the surname Hashiba and the honorary court office Chikuzen no Kami; as a result, he was styled . His surname remained Hashiba even as he was granted the new "Uji" or "sei" ( or , clan name) Toyotomi by the Emperor. The Toyotomi "Uji" was simultaneously granted to a number of Hideyoshi's chosen allies, who adopted the new "Uji" "" (Toyotomi no some, courtier of Toyotomi). The Catholic sources of the time referred to him as "emperor Taicosama" (from "taikō", a retired "kampaku" (see Sesshō and Kampaku), and the honorific "-sama"). Toyotomi Hideyoshi had been given the nickname Kozaru, meaning "little monkey", from his lord Oda Nobunaga because his facial features and skinny form resembled that of a monkey. He was also known as the "bald rat" or a "naked mole rat". He was portrayed by Lee Hyo-jung in the 2004–2005 KBS1 TV series "Immortal Admiral Yi Sun-sin". "Hyouge Mono" (, lit. "Jocular Fellow") is a Japanese manga written and illustrated by Yoshihiro Yamada. It was adapted into an anime series in 2011, and includes a fictional depiction of Toyotomi Hideyoshi's life. In the "Sengoku Basara" game series and anime, he is described as a brutally strong man that killed his own wife to kill his heart, then raised an army to conquer Japan with conscripts and forced draftees. In The 39 Clues series, Toyotomi is a member of the Tomas branch of the Cahill family, the son of Thomas Cahill.
https://en.wikipedia.org/wiki?curid=31182
Tokugawa Ieyasu He was one of the three "Great Unifiers" of Japan, along with his former lord Oda Nobunaga and Toyotomi Hideyoshi. Son of a minor daimyo, Tokugawa once lived as a hostage, on behalf of his father, under another Daimyo. He later succeeded as daimyo after his father's death, serving as vassal and general under Oda Nobunaga, building up his strength. After Oda's death, Tokugawa was briefly a rival of fellow Oda subordinate Toyotomi Hideyoshi, before declaring allegiance to Toyotomi and fighting on his behalf. Under Toyotomi, Tokugawa was relocated to the Kanto plains in eastern Japan, away from the Toyotomi power base in Osaka. He built his castle in the fishing village of Edo (now Tokyo). He was to become the most powerful daimyo and the most senior officer under the Toyotomi regime. Toyotomi preserved his strength in Toyotomi's failed attempt to conquer Korea. After Toyotomi's death, Ieyasu seized power in 1600, after the Battle of Sekigahara. He received appointment as "shōgun" in 1603, and abdicated from office in 1605, but remained in power until his death in 1616. He implemented a set of careful rules known as the "bakuhan" system, designed to keep the daimyos and samurai in check under the Tokugawa Shogunate. His given name is sometimes spelled Iyeyasu, according to the historical pronunciation of the kana character "we". Ieyasu was posthumously enshrined at Nikkō Tōshō-gū with the name . During the Muromachi period, the Matsudaira clan controlled a portion of Mikawa Province (the eastern half of modern Aichi Prefecture). Ieyasu's father, Matsudaira Hirotada, was a minor local warlord based at Okazaki Castle who controlled a portion of the Tōkaidō highway linking Kyoto with the eastern provinces. His territory was sandwiched between stronger and predatory neighbors, including the Imagawa clan based in Suruga Province to the east and the Oda clan to the west. Hirotada's main enemy was Oda Nobuhide, the father of Oda Nobunaga. Tokugawa Ieyasu was born in Okazaki Castle on the 26th day of the twelfth month of the eleventh year of Tenbun, according to the Japanese calendar. Originally named , he was the son of , the "daimyō" of Mikawa of the Matsudaira clan, and , the daughter of a neighbouring samurai lord, . His mother and father were step-siblings. They were just 17 and 15 years old, respectively, when Ieyasu was born. In the year of Ieyasu's birth, the Matsudaira clan was split. In 1543, Hirotada's uncle, Matsudaira Nobutaka defected to the Oda clan. This gave Oda Nobuhide the confidence to attack Okazaki. Soon afterwards, Hirotada's father-in-law died, and his son Mizuno Nobumoto revived the clan's traditional enmity against the Matsudaira and declared for Oda Nobuhide as well. As a result, Hirotada divorced Odai-no-kata and sent her back to her family. As both husband and wife remarried and both went on to have further children, Ieyasu eventually had 11 half-brothers and sisters. As Oda Nobuhide continued to attack Okazaki, in 1548 Hirotada turned to his powerful eastern neighbor, Imagawa Yoshimoto for assistance. Yoshimoto agreed to an alliance under the condition that Hirotada send his young heir to Sunpu Domain as a hostage. Oda Nobuhide, learned of this arrangement and had Ieyasu abducted from his entourage en route to Sunpu. Ieyasu was just five years old at the time. Nobuhide threatened to execute Ieyasu unless his father severed all ties with the Imagawa clan; however, Hirotada refused, stating that sacrificing his own son would show his seriousness in his pact with the Imagawa. Despite this refusal, Nobuhide chose not to kill Ieyasu, but instead held him as a hostage for the next three years at the Bansho-ji Temple in Nagoya, (It is said that Ieyasu grew up at the temple from the age of six, met 13 years old Nobunaga here, until Ieyasu nine years old). In 1549, when Ieyasu was 6, his father Hirotada was murdered by his own vassals, who had been bribed by the Oda clan. At about the same time, Oda Nobuhide died during an epidemic. Nobuhide's death dealt a heavy blow to the Oda clan. An army under the command of Imagawa Sessai laid siege to the castle where Oda Nobuhiro, Nobuhide's eldest son and the new head of the Oda, was living. With the castle about to fall, Sessai offered a deal to Oda Nobunaga, Nobuhide's second son. Sessai offered to give up the siege if Ieyasu was handed over to the Imagawa. Nobunaga agreed, and so Ieyasu (now nine) was taken as a hostage to Sunpu. At Sunpu, he remained a hostage, but was treated fairly well as a potentially useful future ally of the Imagawa clan until 1556 when he was 14 years old. In 1556, Ieyasu officially came of age, with Imagawa Yoshimoto presiding over his "genpuku" ceremony. Following tradition, he changed his name from Matsudaira Takechiyo to . He was also briefly allowed to visit Okazaki to pay his respects to the tomb of his father, and receive the homage of his nominal retainers, led by the "karō" Torii Tadayoshi. One year later, at the age of 15 (according to East Asian age reckoning), he married his first wife, Lady Tsukiyama, a relative of Imagawa Yoshimoto, and changed his name again to . Allowed to return to his native Mikawa, the Imagawa then ordered him to fight the Oda clan in a series of battles. Motoyasu fought his first battle in 1558 at the Siege of Terabe. The castellan of Terabe in western Mikawa, Suzuki Shigeteru, betrayed the Imagawa by defecting to Oda Nobunaga. This was nominally within Matsudaira territory, so Imagawa Yoshimoto entrusted the campaign to Ieyasu and his retainers from Okazaki. Ieyasu led the attack in person, but after taking the outer defences, grew fearful of a counterattack to the rear, so he burned the main castle and withdrew. As anticipated, the Oda forces attacked his rear lines, but Motoyasu was prepared and drove off the Oda army. He then succeeded in delivering supplies in the 1559 Siege of Odaka. Odaka was the only one of five disputed frontier forts under attack by the Oda which remained in Imagawa hands. Motoyasu launched diversionary attacks against the two neighboring forts, and when the garrisons of the other forts went to their assistance, Ieyasu's supply column was able to reach Odaka. By 1560 the leadership of the Oda clan had passed to the brilliant leader Oda Nobunaga. Imagawa Yoshimoto, leading a large army (perhaps 25,000 strong) invaded Oda clan territory. Motoyasu was assigned a separate mission to capture the stronghold of Marune. As a result, he and his men were not present at the Battle of Okehazama where Yoshimoto was killed in Nobunaga's surprise assault. With Yoshimoto dead, and the Imagawa clan in a state of confusion, Motoyasu used the opportunity to assert his independence and marched his men back into the abandoned Okazaki Castle and reclaimed his ancestral seat. Motoyasu then decided to ally with the Oda clan. A secret deal was needed because Motoyasu's wife, Lady Tsukiyama, and infant son, Nobuyasu, were held hostage in Sumpu by Imagawa Ujizane, Yoshimoto's heir. In 1561, Motoyasu openly broke with the Imagawa and captured the fortress of Kaminogō. Kaminogō was held by Udono Nagamochi. Resorting to stealth, Motoyasu forces under Hattori Hanzō attacked under cover of darkness, setting fire to the castle, and capturing two of Udono's sons, whom he used as hostages to exchange for his wife and son. In 1563 Nobuyasu was married to Nobunaga's daughter Tokuhime. In the same year in February he changed his name to Ieyasu. For the next few years Motoyasu was occupied with reforming the Matsudaira clan and pacifying Mikawa. He also strengthened his key vassals by awarding them land and castles. These vassals included: Honda Tadakatsu, Ishikawa Kazumasa, Kōriki Kiyonaga, Hattori Hanzō, Sakai Tadatsugu, and Sakakibara Yasumasa. During this period, the Matsudaira clan also faced a threat from a different source. Mikawa was a major center for the "Ikkō-ikki" movement, where peasants banded together with militant monks under the Jōdo Shinshū sect, and rejected the traditional feudal social order. Motoyasu undertook several battles to suppress this movement in his territories, including the Battle of Azukizaka. In one engagement, he was nearly killed when struck by two bullets which did not penetrate his armour. Both sides were using the new gunpowder weapons which the Portuguese had introduced to Japan just 20 years earlier. In 1567, he changed his name yet again, this time to Tokugawa Ieyasu. As he was a member of the Matsudaira clan, he claimed descent from the Seiwa Genji branch of the Minamoto clan. However, there was no proof the Matsudaira clan are descendants of Emperor Seiwa. Yet, his surname was changed with the permission of the Imperial Court, after writing a petition, and he was bestowed the courtesy title Mikawa-no-kami and the court rank of . Ieyasu remained an ally of Nobunaga and his Mikawa soldiers were part of Nobunaga's army which captured Kyoto in 1568. At the same time, Ieyasu was expanding his own territory. Ieyasu and Takeda Shingen, the head of the Takeda clan in Kai Province made an alliance for the purpose of conquering all the Imagawa territory. In 1570, Ieyasu's troops captured Yoshida Castle (modern Toyohashi), which made him master of all of Mikawa Province, and he penetrated into Tōtōmi Province. Meanwhile, Shingen's troops captured Suruga Province (including the Imagawa capital of Sunpu). Imagawa Ujizane fled to Kakegawa Castle, which Ieyasu placed under siege. Ieyasu then negotiated with Ujizane, promising that if he should surrender himself and the remainder of Tōtōmi, he would assist Ujizane in regaining Suruga. Ujizane had nothing left to lose, and Ieyasu immediately ended his alliance with Takeda, instead making a new alliance with Takeda's enemy to the north, Uesugi Kenshin of the Uesugi clan. Through these political manipulations, Ieyasu gained the support of the samurai of Tōtōmi Province. In 1570, Ieyasu established Hamamatsu as the capital of his territory, placing his son Nobuyasu in charge of Okazaki. The same year, he led 5,000 of his men to support Nobunaga at the Battle of Anegawa against the Azai and Asakura clans. In October 1571, Takeda Shingen, now allied with the Odawara Hōjō clan, attacked the Tokugawa lands in Tōtōmi. Ieyasu asked for help from Nobunaga, who sent him some 3,000 troops. Early in 1572 the two armies met at the Battle of Mikatagahara. The considerably larger Takeda army, under the expert direction of Shingen, overwhelmed Ieyasu's troops and caused heavy casualties. Despite his initial reticence, Ieyasu was convinced by one of his generals to retreat. The battle was a major defeat, but in the interests of maintaining the appearance of dignified withdrawal, Ieyasu brazenly ordered the men at his castle to light torches, sound drums, and leave the gates open, to properly receive the returning warriors. To the surprise and relief of the Tokugawa army, this spectacle made the Takeda generals suspicious of being led into a trap, so they did not besiege the castle and instead made camp for the night. This error would allow a band of Tokugawa "ninja" to raid the camp in the ensuing hours, further upsetting the already disoriented Takeda army, and ultimately resulting in Shingen's decision to call off the offensive altogether. Incidentally, Takeda Shingen would not get another chance to advance on Hamamatsu, much less Kyoto, since he would perish shortly after the Siege of Noda Castle a year later in 1573. Shingen was succeeded by his less capable son Takeda Katsuyori. In 1575, the Takeda attacked Nagashino Castle in Mikawa Province. Ieyasu appealed to Nobunaga for help and the result was that Nobunaga personally came at the head of a very large army (about 30,000 strong). The Oda-Tokugawa force of 38,000 won a great victory on June 28, 1575, at the Battle of Nagashino, though Takeda Katsuyori survived the battle and retreated back to Kai Province. For the next seven years, Ieyasu and Katsuyori fought a series of small battles, as the result of which Ieyasu's troops managed to wrest control of Suruga Province away from the Takeda clan. In 1579, Ieyasu's wife, and his heir Nobuyasu, were accused by Nobunaga of conspiring with Takeda Katsuyori to assassinate Nobunaga, whose daughter Tokuhime (1559–1636) was married to Nobuyasu. For this Ieyasu ordered his wife to be executed and forced his oldest son by her, Nobuyasu, to commit "seppuku". Ieyasu then named his third son, Tokugawa Hidetada, as heir, since his second son was adopted by another rising power: the trusted Oda clan general Toyotomi Hideyoshi, soon to be the most powerful daimyō in Japan. The end of the war with Takeda came in 1582 when a combined Oda-Tokugawa force attacked and conquered Kai Province. Takeda Katsuyori was defeated at the Battle of Tenmokuzan and then committed "seppuku". In late June 1582, Ieyasu was near Osaka and far from his own territory when he learned that Nobunaga had been killed at the Honnō-ji Temple by Akechi Mitsuhide, one of his retainers. Ieyasu managed the dangerous journey back to Mikawa. Ieyasu was mobilizing his army when he learned Hideyoshi had defeated Akechi Mitsuhide at the Battle of Yamazaki. The death of Nobunaga meant that some provinces, ruled by Nobunaga's vassals, were ripe for conquest. The leader of Kai province made the mistake of killing one of Ieyasu's aides. Ieyasu promptly invaded Kai and took control. Hōjō Ujimasa, leader of the Hōjō clan responded by sending his much larger army into Shinano and then into Kai Province. No battles were fought between Ieyasu's forces and the large Hōjō army and, after some negotiation, Ieyasu and the Hōjō agreed to a settlement which left Ieyasu in control of both Kai and Shinano Provinces, while the Hōjō took control of Kazusa Province (as well as bits of both Kai and Shinano Provinces). At the same time (1583) a war for rule over Japan was fought between Toyotomi Hideyoshi and Shibata Katsuie. Ieyasu did not take a side in this conflict, building on his reputation for both caution and wisdom. Hideyoshi defeated Katsuie at Battle of Shizugatake. With this victory, Hideyoshi became the single most powerful "daimyō" in Japan. In 1584, Ieyasu decided to support Oda Nobukatsu, the eldest surviving son and heir of Oda Nobunaga, against Hideyoshi. This was a dangerous act and could have resulted in the annihilation of the Tokugawa. Tokugawa troops took the traditional Oda stronghold of Owari; Hideyoshi responded by sending an army into Owari. The Komaki Campaign was the only time any of the great unifiers of Japan fought each other. The campaign proved indecisive and after months of fruitless marches and feints, Hideyoshi settled the war through negotiation. First he made peace with Oda Nobukatsu, and then he offered a truce to Ieyasu. The deal was made at the end of the year; as part of the terms Ieyasu's second son, Ogimaru (also known as Yuki Hideyasu) became an adopted son of Hideyoshi. Ieyasu's aide, Ishikawa Kazumasa, chose to join the pre-eminent "daimyō" and so he moved to Osaka to be with Hideyoshi. However, few other Tokugawa retainers followed this example. Hideyoshi was understandably distrustful of Ieyasu, and five years passed before they fought as allies. The Tokugawa did not participate in Hideyoshi's successful invasions of Shikoku and Kyūshū. In 1590, Hideyoshi attacked the last independent "daimyō" in Japan, Hōjō Ujimasa. The Hōjō clan ruled the eight provinces of the Kantō region in eastern Japan. Hideyoshi ordered them to submit to his authority and they refused. Ieyasu, though a friend and occasional ally of Ujimasa, joined his large force of 30,000 samurai with Hideyoshi's enormous army of some 160,000. Hideyoshi attacked several castles on the borders of the Hōjō clan with most of his army laying siege to the castle at Odawara. Hideyoshi's army captured Odawara after six months (oddly for the time period, deaths on both sides were few). During this siege, Hideyoshi offered Ieyasu a radical deal. He offered Ieyasu the eight Kantō provinces which they were about to take from the Hōjō in return for the five provinces that Ieyasu currently controlled (including Ieyasu's home province of Mikawa). Ieyasu accepted this proposal. Bowing to the overwhelming power of the Toyotomi army, the Hōjō accepted defeat, the top Hōjō leaders killed themselves and Ieyasu marched in and took control of their provinces, so ending the clan's reign of over 100 years. Ieyasu now gave up control of his five provinces (Mikawa, Tōtōmi, Suruga, Shinano, and Kai) and moved all his soldiers and vassals to the Kantō region. He himself occupied the castle town of Edo in Kantō. This was possibly the riskiest move Ieyasu ever made—to leave his home province and rely on the uncertain loyalty of the formerly Hōjō samurai in Kantō. In the end, it worked out brilliantly for Ieyasu. He reformed the Kantō provinces, controlled and pacified the Hōjō samurai and improved the underlying economic infrastructure of the lands. Also, because Kantō was somewhat isolated from the rest of Japan, Ieyasu was able to maintain a unique level of autonomy from Hideyoshi's rule. Within a few years, Ieyasu had become the second most powerful "daimyō" in Japan. There is a Japanese proverb which likely refers to this event: "Ieyasu won the Empire by retreating." In 1592, Hideyoshi invaded Korea as a prelude to his plan to attack China. The Tokugawa samurai never actually took part in this campaign, though in early 1593, Ieyasu himself was summoned to Hideyoshi's court in Nagoya (in Kyūshū, different from the similarly spelled city in Owari Province) as a military advisor and given command of a body of troops meant as reserves for the Korean campaign. He stayed in Nagoya off and on for the next five years. Despite his frequent absences, Ieyasu's sons, loyal retainers and vassals were able to control and improve Edo and the other new Tokugawa lands. In 1593, Hideyoshi fathered a son and heir, Toyotomi Hideyori. In 1598, with his health clearly failing, Hideyoshi called a meeting that would determine the Council of Five Elders, who would be responsible for ruling on behalf of his son after his death. The five that were chosen as regents (tairō) for Hideyori were Maeda Toshiie, Mōri Terumoto, Ukita Hideie, Uesugi Kagekatsu, and Ieyasu himself, who was the most powerful of the five. This change in the pre-Sekigahara power structure became pivotal as Ieyasu turned his attention towards Kansai; and at the same time, other ambitious (albeit ultimately unrealized) plans, such as the Tokugawa initiative establishing official relations with New Spain (modern-day Mexico), continued to unfold and advance. Hideyoshi, after three more months of increasing sickness, died on September 18, 1598. He was nominally succeeded by his young son Hideyori but as he was just five years old, real power was in the hands of the regents. Over the next two years Ieyasu made alliances with various "daimyōs", especially those who had no love for Hideyoshi. Happily for Ieyasu, the oldest and most respected of the regents, Toshiie Maeda, died after just one year. With the death of Toshiie in 1599, Ieyasu led an army to Fushimi and took over Osaka Castle, the residence of Hideyori. This angered the three remaining regents and plans were made on all sides for war. Opposition to Ieyasu centered around Ishida Mitsunari, a powerful "daimyō" who was not one of the regents. Mitsunari plotted Ieyasu's death and news of this plot reached some of Ieyasu's generals. They attempted to kill Mitsunari but he fled and gained protection from none other than Ieyasu himself. It is not clear why Ieyasu protected a powerful enemy from his own men but Ieyasu was a master strategist and he may have concluded that he would be better off with Mitsunari leading the enemy army rather than one of the regents, who would have more legitimacy. Nearly all of Japan's "daimyō" and samurai now split into two factions—The Western Army (Mitsunari's group) and The Eastern Army (anti-Mitsunari Group). Ieyasu supported the anti-Mitsunari Group, and formed them as his potential allies. Ieyasu's allies were the Date clan, the Mogami clan, the Satake clan and the Maeda clan. Mitsunari allied himself with the three other regents: Ukita Hideie, Mōri Terumoto, and Uesugi Kagekatsu as well as many "daimyō" from the eastern end of Honshū. In June 1600, Ieyasu and his allies moved their armies to defeat the Uesugi clan, which was accused of planning to revolt against Toyotomi administration. Before arriving at Uesugi's territory, Ieyasu received information that Mitsunari and his allies had moved their army against Ieyasu. Ieyasu held a meeting with the "daimyōs", and they agreed to follow Ieyasu. He then led the majority of his army west towards Kyoto. In late summer, Ishida's forces captured Fushimi. Ieyasu and his allies marched along the Tōkaidō, while his son Hidetada went along the Nakasendō with 38,000 soldiers. A battle against Sanada Masayuki in Shinano Province delayed Hidetada's forces, and they did not arrive in time for the main battle. This battle, fought near Sekigahara, was the biggest and one of the most important battles in Japanese feudal history. It began on October 21, 1600, with a total of 160,000 men facing each other. The Battle of Sekigahara ended with a complete Tokugawa victory. The Western bloc was crushed and over the next few days Ishida Mitsunari and many other western nobles were captured and killed. Tokugawa Ieyasu was now the "de facto" ruler of Japan. Immediately after the victory at Sekigahara, Ieyasu redistributed land to the vassals who had served him. Ieyasu left some western "daimyōs" unharmed, such as the Shimazu clan, but others were completely destroyed. Toyotomi Hideyori (the son of Hideyoshi) lost most of his territory which were under management of western "daimyōs", and he was degraded to an ordinary "daimyō", not a ruler of Japan. In later years the vassals who had pledged allegiance to Ieyasu before Sekigahara became known as the "fudai daimyō", while those who pledged allegiance to him after the battle (in other words, after his power was unquestioned) were known as "tozama daimyō". "Tozama daimyō" were considered inferior to "fudai daimyōs". On March 24, 1603, Tokugawa Ieyasu received the title of "shōgun" from Emperor Go-Yōzei. Ieyasu was 60 years old. He had outlasted all the other great men of his times: Nobunaga, Shingen, Hideyoshi, and Kenshin. As "shōgun", he used his remaining years to create and solidify the Tokugawa shogunate, which ushered in the Edo period, and was the third shogunal government (after the Kamakura (Minamoto) and the Ashikaga). He claimed descent from the Minamoto clan, by way of the Nitta clan. His descendants would marry into the Taira clan and the Fujiwara clan. The Tokugawa shogunate would rule Japan for the next 260 years. Following a well established Japanese pattern, Ieyasu abdicated his official position as "shōgun" in 1605. His successor was his son and heir, Tokugawa Hidetada. There may have been several factors that contributed to his decision, including his desire to avoid being tied up in ceremonial duties, to make it harder for his enemies to attack the real power center, and to secure a smoother succession of his son. The abdication of Ieyasu had no effect on the practical extent of his powers or his rule; but Hidetada nevertheless assumed a role as formal head of the shogunal bureaucracy. Ieyasu, acting as the , remained the effective ruler of Japan until his death. Ieyasu retired to Sunpu Castle in Sunpu, but he also supervised the building of Edo Castle, a massive construction project which lasted for the rest of Ieyasu's life. The result was the largest castle in all of Japan, the costs for building the castle being borne by all the other "daimyōs", while Ieyasu reaped all the benefits. The central donjon, or "tenshu", burned in the 1657 "Meireki" fire. Today, the Imperial Palace stands on the site of the castle. In 1611, Ieyasu, at the head of 50,000 men, visited Kyoto to witness the enthronement of Emperor Go-Mizunoo. In Kyoto, Ieyasu ordered the remodeling of the Imperial court and buildings, and forced the remaining western "daimyōs" to sign an oath of fealty to him. In 1613, he composed the , a document which put the court "daimyōs" under strict supervision, leaving them as mere ceremonial figureheads. In 1615, Ieyasu prepared the , a document setting out the future of the Tokugawa regime. As Ōgosho, Ieyasu also supervised diplomatic affairs with the Netherlands, Spain, and England. Ieyasu chose to distance Japan from European influence starting in 1609, although the shogunate did still grant preferential trading rights to the Dutch East India Company and permitted them to maintain a "factory" for trading purposes. From 1605 until his death, Ieyasu frequently consulted English shipwright and pilot, William Adams. Adams, fluent in Japanese, assisted the shogunate in negotiating trading relations, but was cited by members of the competing Jesuit and Spanish-sponsored mendicant orders as an obstacle to improved relations between Ieyasu and the Roman Catholic Church. Significant attempts to curtail the influence of Christian missionaries in Japan date to 1587 during the shogunate of Toyotomi Hideyoshi. However, in 1614, Ieyasu was sufficiently concerned about Spanish territorial ambitions that he signed a Christian Expulsion Edict. The edict banned the practice of Christianity and led to the expulsion of all foreign missionaries. Although some smaller Dutch trading operations remained in Nagasaki, this edict dramatically curtailed foreign trade and marked the end of open Christian witness in Japan until the 1870s. The immediate cause of the prohibition was the Okamoto Daihachi incident, a case of fraud involving Ieyasu's Catholic vavasor, but the shogunate was also concerned about a possible invasion by the Iberian colonial powers, which had previously occurred in the New World and the Philippines. The last remaining threat to Ieyasu's rule was Toyotomi Hideyori, the son and rightful heir to Hideyoshi. He was now a young "daimyō" living in Osaka Castle. Many samurai who opposed Ieyasu rallied around Hideyori, claiming that he was the rightful ruler of Japan. Ieyasu found fault with the opening ceremony of a temple built by Hideyori; it was as if he prayed for Ieyasu's death and the ruin of the Tokugawa clan. Ieyasu ordered Toyotomi to leave Osaka Castle, but those in the castle refused and summoned samurai to gather within the castle. Then the Tokugawa, with a huge army led by Ieyasu and "shōgun" Hidetada, laid siege to Osaka castle in what is now known as "the Winter Siege of Osaka". Eventually, Tokugawa was able to precipitate negotiations and an armistice after directed cannon fire threatened Hideyori's mother, Yodo-dono. However, once the treaty was agreed, Tokugawa filled the castle's outer moats with sand so his troops could walk across. Through this ploy, Tokugawa gained a huge tract of land through negotiation and deception that he could not through siege and combat. Ieyasu returned to Sunpu Castle once, but after Toyotomi refused another order to leave Osaka, he and his allied army of 155,000 soldiers attacked Osaka Castle again in "the Summer Siege of Osaka". Finally, in late 1615, Osaka Castle fell and nearly all the defenders were killed including Hideyori, his mother (Hideyoshi's widow, Yodo-dono), and his infant son. His wife, Senhime (a granddaughter of Ieyasu), pleaded to save Hideyori and Yodo-dono's lives. Ieyasu refused and either required them to commit ritual suicide, or killed both of them. Eventually, Senhime was sent back to Tokugawa alive. After killing two people at Kamakura, who have escaped from Osaka Castle. With the Toyotomi line finally extinguished, no threats remained to the Tokugawa clan's domination of Japan. In 1616, Ieyasu died at age 73. The cause of death is thought to have been cancer or syphilis. The first Tokugawa "shōgun" was posthumously deified with the name Tōshō Daigongen (), the "Great Gongen, Light of the East". (A "Gongen" is believed to be a buddha who has appeared on Earth in the shape of a "kami" to save sentient beings). In life, Ieyasu had expressed the wish to be deified after his death to protect his descendants from evil. His remains were buried at the Gongens' mausoleum at Kunōzan, Kunōzan Tōshō-gū (). As a common view, many people believe that "after the first anniversary of his death, his remains were reburied at Nikkō Shrine, Nikkō Tōshō-gū (). His remains are still there." Neither shrine has offered to open the graves, so the location of Ieyasu's physical remains are still a mystery. The mausoleum's architectural style became known as "gongen-zukuri", that is "gongen"-style. He was first given the Buddhist name Tosho Dai-Gongen (), then after his death it was changed to Hogo Onkokuin (). Ieyasu ruled directly as "shōgun" or indirectly as Ōgosho () during the "Keichō" era (1596–1615). Ieyasu had a number of qualities that enabled him to rise to power. He was both careful and bold—at the right times, and in the right places. Calculating and subtle, Ieyasu switched alliances when he thought he would benefit from the change. He allied with the Late Hōjō clan; then he joined Hideyoshi's army of conquest, which destroyed the Hōjō; and he himself took over their lands. In this he was like other "daimyōs" of his time. This was an era of violence, sudden death, and betrayal. He was not very well liked nor personally popular, but he was feared and he was respected for his leadership and his cunning. For example, he wisely kept his soldiers out of Hideyoshi's campaign in Korea. He was capable of great loyalty: once he allied with Oda Nobunaga, he never went against him, and both leaders profited from their long alliance. He was known for being loyal towards his personal friends and vassals, whom he rewarded, He was said to have a close friendship with his vassal Hattori Hanzō. However, he also remembered those who had wronged him in the past. It is said that Ieyasu executed a man who came into his power because he had insulted him when Ieyasu was young. Ieyasu protected many former Takeda retainers from the wrath of Oda Nobunaga, who was known to harbour a bitter grudge towards the Takeda. He managed successfully to transform many of the retainers of the Takeda, Hōjō, and Imagawa clans—all whom he had defeated himself or helped to defeat—into loyal followers. At the same time, he could be ruthless when crossed. For example, he ordered the executions of his first wife and his eldest son—a son-in-law of Oda Nobunaga; Oda was also an uncle of Hidetada's wife Oeyo. He was cruel, relentless and merciless in the elimination of Toyotomi survivors after Osaka. For days, dozens and dozens of men and women were hunted down and executed, including an eight-year-old son of Hideyori by a concubine, who was beheaded. Unlike Hideyoshi, he did not harbor any desires to conquer outside Japan—he only wanted to bring order and an end to open warfare, and to rule Japan. While at first tolerant of Christianity, his attitude changed after 1613 and the executions of Christians sharply increased. Ieyasu's favorite pastime was falconry. He regarded it as excellent training for a warrior. "When you go into the country hawking, you learn to understand the military spirit and also the hard life of the lower classes. You exercise your muscles and train your limbs. You have any amount of walking and running and become quite indifferent to heat and cold, and so you are little likely to suffer from any illness.". Ieyasu swam often; even late in his life he is reported to have swum in the moat of Edo Castle. Later in life he took to scholarship and religion, patronizing scholars like Hayashi Razan. Two of his famous quotes: Life is like unto a long journey with a heavy burden. Let thy step be slow and steady, that thou stumble not. Persuade thyself that imperfection and inconvenience are the lot of natural mortals, and there will be no room for discontent, neither for despair. When ambitious desires arise in thy heart, recall the days of extremity thou hast passed through. Forbearance is the root of all quietness and assurance forever. Look upon the wrath of thy enemy. If thou only knowest what it is to conquer, and knowest not what it is to be defeated; woe unto thee, it will fare ill with thee. Find fault with thyself rather than with others. The strong manly ones in life are those who understand the meaning of the word patience. Patience means restraining one's inclinations. There are seven emotions: joy, anger, anxiety, adoration, grief, fear, and hate, and if a man does not give way to these he can be called patient. I am not as strong as I might be, but I have long known and practiced patience. And if my descendants wish to be as I am, they must study patience. He said that he fought, as a warrior or a general, in 90 battles. He was interested in various kenjutsu skills, was a patron of the Yagyū Shinkage-ryū school, and also had them as his personal sword instructors. In James Clavell's historical-novel "Shōgun", Tokugawa served as basis for the character of "Toranaga". Toranaga was portrayed by Toshiro Mifune in the 1980 TV mini-series adaptation. "" is a Japanese "manga" written and illustrated by Yoshihiro Yamada. It was adapted into an anime series in 2011, and includes a fictional depiction of Tokugawa's life... In "Sengoku Basara" game and anime series, he was shown with Honda Tadakatsu. In earlier games, he was armed with spears and led countless warriors, in later ones, he discards the spear and fights with his fists and wants Japan united under the force of bonds. Tokugawa is the leader of Japan in Sid Meier's Civilization IV. He is an aggressive and organized leader with an emphasis on mercantilism. Among the many conspiracies surrounding the Honnō-ji Incident is Ieyasu's role in the event. Historically, Ieyasu was away from his lord at the time and, when he heard that Nobunaga was in danger, he wanted to rush to his lord's rescue in spite of the small number of attendants with him. However, Tadakatsu advised for his lord to avoid the risk and urged for a quick retreat to Mikawa. Masanari led the way through Iga and they returned home by boat. However, skeptics think otherwise. While they usually accept the historically known facts about Ieyasu's actions during Mitsuhide's betrayal, theorists tend to pay more attention to the events before. Ever since Ieyasu lost his wife and son due to Nobunaga's orders, they reason, he held a secret resentment against his lord. Generally, there is some belief that he privately goaded Mitsuhide to take action when the two warlords were together in Azuchi Castle. Together, they planned when to attack and went their separate ways. When the deed was done, Ieyasu turned a blind eye to Mitsuhide's schemes and fled the scene to feign innocence. A variation of the concept states that Ieyasu was well aware of Mitsuhide's feelings regarding Nobunaga and simply chose to do nothing for his own benefit. is a myth that has been circulating since the Edo period. It is believed to have arisen due to historical records of Ieyasu's "sudden change of behavior" with some of his closest colleagues. The idea was made more popular in modern times by the historians, Tokutomi Sohō and Yasutsugu Shigeno. The general outline of the legend is that after the Battle of Okehazama, Motoyasu (Ieyasu) was ready to face the world as a changed man. According to Hayashi Razan, the last line was meant quite literally. Before Motoyasu could make his new face known to the world, he was replaced by a completely different man named Sarata Jiro Saburo Motonobu (Sakai Jōkei). Variations include that the switch actually occurred much earlier in Motoyasu's life when he was being a hostage. Motonobu went in Motoyasu's stead and was considered a more suitable "heir". After Motonobu replaced him, Motoyasu fled and lived a hermit's life. Another version states that Ieyasu was actually killed during the Battle of Sekigahara or the Osaka Campaign. When he was killed by Sanada Nobushige during the latter conflict, it is said that he was replaced by Ogasawara Hidemasa who became the "Ieyasu" from then on. While prevalent in fiction, historians are unsure whether or not the myth holds any merit. His dubious personality traits during these specific time frames have been mostly blamed on stress and personal strain.
https://en.wikipedia.org/wiki?curid=31183
Tonne The tonne ( or ; symbol: t) is a non-SI metric unit of mass equal to 1,000 kilograms. It is commonly referred to as a metric ton in the United States. It is equivalent to approximately pounds, or approximately 0.984 long tons (UK). The official SI unit is the megagram (symbol: Mg), a less common way to express the same mass. The tonne is derived from the mass of one cubic metre of pure water; at 4 °C one thousand litres of pure water has an absolute mass of one tonne. The BIPM symbol for the tonne is 't', adopted at the same time as the unit in 1879. Its use is also official for the metric ton in the United States, having been adopted by the United States National Institute of Standards and Technology (NIST). It is a symbol, not an abbreviation, and should not be followed by a period. Use of majuscule and minuscule letter case is significant, and use of other letter combinations is not permitted and would lead to ambiguity. For example, 'T', 'MT', 'mT', 'Mt', 'mt' are the SI symbols for the tesla, megatesla, millitesla, megatonne (one teragram), and millitonne (one kilogram) respectively. If describing TNT equivalent units of energy, one megatonne of TNT is equivalent to approximately 4.184 petajoules. In French and English, "tonne" is the correct spelling. It is usually pronounced the same as ton (), but the final "e" can also be pronounced, i.e. "tunnie" (). In Australia, it is also pronounced . In the United States, "metric ton" is the name for this unit used and recommended by NIST; an unqualified mention of a "ton" almost invariably refers to a short ton of , and "tonne" is rarely used in speech or writing. Both terms are acceptable in Canadian usage. Before metrication in the UK, the unit used for most purposes was the Imperial ton of 2,240 pounds avoirdupois or 20 hundredweight (usually referred to as the long ton in the US), equivalent to approximately 1,016 kg, differing by about 1.6% from the tonne. The UK Weights and Measures Act 1985 explicitly excluded from use for trade certain imperial units, including the ton, unless the item being sold or the weighing equipment being used was weighed or certified prior to 1 December 1980, and even then only if the buyer was made aware that the weight of the item was measured in imperial units. "Ton" and "tonne" are both derived from a Germanic word in general use in the North Sea area since the Middle Ages (cf. Old English and Old Frisian "tunne", Old High German and Medieval Latin "tunna", German and French "tonne") to designate a large cask, or "tun". A full tun, standing about a metre high, could easily weigh a tonne. An English tun (an old wine cask volume measurement equivalent to approximately 954 litres) of wine has a relative mass of approximately 954  kg if full of pure water, a little less for wine. The spelling "tonne" pre-dates the introduction of the SI in 1960; it has been used with this meaning in France since 1842, when there were no metric prefixes for multiples of 106 and above, and is now used as the standard spelling for the metric mass measurement in most English-speaking countries. In the United States, the unit was originally referred to using the French words "millier" or "tonneau", but these terms are now obsolete. The Imperial and US customary units comparable to the tonne are both spelled "ton" in English, though they differ in mass. One tonne is equivalent to: For multiples of the tonne, it is more usual to speak of thousands or millions of tonnes. Kilotonne, megatonne, and gigatonne are more usually used for the energy of nuclear explosions and other events in equivalent mass of TNT, often loosely as approximate figures. When used in this context, there is little need to distinguish between metric and other tons, and the unit is spelt either as "ton" or "tonne" with the relevant prefix attached. *The equivalent units columns use the short scale large-number naming system currently used in most English-language countries, e.g. 1 billion = 1,000 million = 1,000,000,000. †Values in the equivalent short and long tons columns are rounded to five significant figures. See Conversions for exact values. ǂThough non-standard, the symbol "kt" is also used for knot, a unit of speed for aircraft and sea-going vessels, and should not be confused with kilotonne. A metric ton unit (mtu) can mean 10 kg (approximately 22 lns) within metal (e.g. tungsten, manganese) trading, particularly within the US. It traditionally referred to a metric ton of ore containing 1% (i.e. 10 kg) of metal. The following excerpt from a mining geology textbook describes its usage in the particular case of tungsten: "Tungsten concentrates are usually traded in metric tonne units (originally designating one tonne of ore containing 1% of WO3, today used to measure WO3 quantities in 10 kg units. One metric tonne unit (mtu) of tungsten (VI) contains 7.93 kilograms of tungsten." (Walter L Pohl, "Economic Geology: Principles and Practices", English edition, 2011, p 183.) Tungsten is also known as wolfram and has the atomic symbol W. In the case of uranium, the acronym "MTU" is sometimes considered to be "metric ton of uranium", meaning 1,000 kg. A gigatonne is a unit of mass often used by the coal mining industry to assess and define the extent of a coal reserve. The "tonne of trinitrotoluene (TNT)" is used as a proxy for energy, usually of explosions (TNT is a common high explosive). Prefixes are used: kiloton(ne), megaton(ne), gigaton(ne), especially for expressing nuclear weapon yield, based on a specific combustion energy of TNT of about 4.2 MJ/kg (or one thermochemical calorie per milligram). Hence, 1 t TNT = approx. 4.2 GJ, 1 kt TNT = approx. 4.2 TJ, 1 Mt TNT = approx. 4.2 PJ. The SI unit of energy is the joule. Assuming that a TNT explosion releases 1,000 small (thermochemical) calories per gram (approx. 4.2 kJ/g), one tonne of TNT is approx. equivalent to 4.2 gigajoules. In the petroleum industry the tonne of oil equivalent (toe) is a unit of energy: the amount of energy released by burning one tonne of crude oil, approx, 42 GJ. There are several slightly different definitions. This is ten times as much as a tonne of TNT because atmospheric oxygen is used. Like the gram and the kilogram, the tonne gave rise to a (now obsolete) force unit of the same name, the tonne-force, equivalent to about 9.8 kilonewtons: a unit also often called simply "tonne" or "metric ton" without identifying it as a unit of force. In contrast to the tonne as a mass unit, the tonne-force or metric ton-force is not acceptable for use with SI, partly because it is not an exact multiple of the SI unit of force, the newton.
https://en.wikipedia.org/wiki?curid=31185
Neutron activation analysis Neutron activation analysis (NAA) is the nuclear process used for determining the concentrations of elements in a vast amount of materials. NAA allows discrete sampling of elements as it disregards the chemical form of a sample, and focuses solely on its nucleus. The method is based on neutron activation and therefore requires a source of neutrons. The sample is bombarded with neutrons, causing the elements to form radioactive isotopes. The radioactive emissions and radioactive decay paths for each element are well known. Using this information, it is possible to study spectra of the emissions of the radioactive sample, and determine the concentrations of the elements within it. A particular advantage of this technique is that it does not destroy the sample, and thus has been used for analysis of works of art and historical artifacts. NAA can also be used to determine the activity of a radioactive sample. If NAA is conducted directly on irradiated samples it is termed Instrumental Neutron Activation Analysis (INAA). In some cases irradiated samples are subjected to chemical separation to remove interfering species or to concentrate the radioisotope of interest, this technique is known as Radiochemical Neutron Activation Analysis (RNAA). NAA can perform non-destructive analyses on solids, liquids, suspensions, slurries, and gases with no or minimal preparation. Due to the penetrating nature of incident neutrons and resultant gamma rays, the technique provides a true bulk analysis. As different radioisotopes have different half-lives, counting can be delayed to allow interfering species to decay eliminating interference. Until the introduction of ICP-AES and PIXE, NAA was the standard analytical method for performing multi-element analyses with minimum detection limits in the sub-ppm range. Accuracy of NAA is in the region of 5%, and relative precision is often better than 0.1%. There are two noteworthy drawbacks to the use of NAA; even though the technique is essentially non-destructive, the irradiated sample will remain radioactive for many years after the initial analysis, requiring handling and disposal protocols for low-level to medium-level radioactive material; also, the number of suitable activation nuclear reactors is declining; with a lack of irradiation facilities, the technique has declined in popularity and become more expensive. Neutron activation analysis is a sensitive multi-element analytical technique used for both qualitative and quantitative analysis of major, minor, trace and rare elements. NAA was discovered in 1936 by Hevesy and Levi, who found that samples containing certain rare earth elements became highly radioactive after exposure to a source of neutrons. This observation led to the use of induced radioactivity for the identification of elements. NAA is significantly different from other spectroscopic analytical techniques in that it is based not on electronic transitions but on nuclear transitions. To carry out an NAA analysis, the specimen is placed into a suitable irradiation facility and bombarded with neutrons. This creates artificial radioisotopes of the elements present. Following irradiation, the artificial radioisotopes decay with emission of particles or, more importantly gamma rays, which are characteristic of the element from which they were emitted. For the NAA procedure to be successful, the specimen or sample must be selected carefully. In many cases small objects can be irradiated and analysed intact without the need of sampling. But, more commonly, a small sample is taken, usually by drilling in an inconspicuous place. About 50 mg (one-twentieth of a gram) is a sufficient sample, so damage to the object is minimised. It is often good practice to remove two samples using two different drill bits made of different materials. This will reveal any contamination of the sample from the drill bit material itself. The sample is then encapsulated in a vial made of either high purity linear polyethylene or quartz. These sample vials come in many shapes and sizes to accommodate many specimen types. The sample and a standard are then packaged and irradiated in a suitable reactor at a constant, known neutron flux. A typical reactor used for activation uses uranium fission, providing a high neutron flux and the highest available sensitivities for most elements. The neutron flux from such a reactor is in the order of 1012 neutrons cm−2 s−1. The type of neutrons generated are of relatively low kinetic energy (KE), typically less than 0.5 eV. These neutrons are termed thermal neutrons. Upon irradiation, a thermal neutron interacts with the target nucleus via a non-elastic collision, causing neutron capture. This collision forms a compound nucleus which is in an excited state. The excitation energy within the compound nucleus is formed from the binding energy of the thermal neutron with the target nucleus. This excited state is unfavourable and the compound nucleus will almost instantaneously de-excite (transmutate) into a more stable configuration through the emission of a prompt particle and one or more characteristic prompt gamma photons. In most cases, this more stable configuration yields a radioactive nucleus. The newly formed radioactive nucleus now decays by the emission of both particles and one or more characteristic delayed gamma photons. This decay process is at a much slower rate than the initial de-excitation and is dependent on the unique half-life of the radioactive nucleus. These unique half-lives are dependent upon the particular radioactive species and can range from fractions of a second to several years. Once irradiated, the sample is left for a specific decay period, then placed into a detector, which will measure the nuclear decay according to either the emitted particles, or more commonly, the emitted gamma rays. NAA can vary according to a number of experimental parameters. The kinetic energy of the neutrons used for irradiation will be a major experimental parameter. The above description is of activation by slow neutrons, slow neutrons are fully moderated within the reactor and have KE <0.5 eV. Medium KE neutrons may also be used for activation, these neutrons have been only partially moderated and have KE of 0.5 eV to 0.5 MeV, and are termed epithermal neutrons. Activation with epithermal neutrons is known as Epithermal NAA (ENAA). High KE neutrons are sometimes used for activation, these neutrons are unmoderated and consist of primary fission neutrons. High KE or fast neutrons have a KE >0.5 MeV. Activation with fast neutrons is termed Fast NAA (FNAA). Another major experimental parameter is whether nuclear decay products (gamma rays or particles) are measured during neutron irradiation (prompt gamma), or at some time after irradiation (delayed gamma, DGNAA). PGNAA is generally performed by using a neutron stream tapped off the nuclear reactor via a beam port. Neutron fluxes from beam ports are the order of 106 times weaker than inside a reactor. This is somewhat compensated for by placing the detector very close to the sample reducing the loss in sensitivity due to low flux. PGNAA is generally applied to elements with extremely high neutron capture cross-sections; elements which decay too rapidly to be measured by DGNAA; elements that produce only stable isotopes; or elements with weak decay gamma ray intensities. PGNAA is characterised by short irradiation times and short decay times, often in the order of seconds and minutes. DGNAA is applicable to the vast majority of elements that form artificial radioisotopes. DG analyses are often performed over days, weeks or even months. This improves sensitivity for long-lived radionuclides as it allows short-lived radionuclide to decay, effectively eliminating interference. DGNAA is characterised by long irradiation times and long decay times, often in the order of hours, weeks or longer. a range of different sources can be used: Some reactors are used for the neutron irradiation of samples for radioisotope production for a range of purposes. The sample can be placed in an irradiation container which is then placed in the reactor; if epithermal neutrons are required for the irradiation then cadmium can be used to filter out the thermal neutrons. A relatively simple Farnsworth–Hirsch fusor can be used to generate neutrons for NAA experiments. The advantages of this kind of apparatus is that it is compact, often benchtop-sized, and that it can simply be turned off and on. A disadvantage is that this type of source will not produce the neutron flux that can be obtained using a reactor. For many workers in the field a reactor is an item which is too expensive, instead it is common to use a neutron source which uses a combination of an alpha emitter and beryllium. These sources tend to be much weaker than reactors. These can be used to create pulses of neutrons, they have been used for some activation work where the decay of the target isotope is very rapid. For instance in oil wells. There are a number of detector types and configurations used in NAA. Most are designed to detect the emitted gamma radiation. The most common types of gamma detectors encountered in NAA are the gas ionisation type, scintillation type and the semiconductor type. Of these the scintillation and semiconductor type are the most widely employed. There are two detector configurations utilised, they are the planar detector, used for PGNAA and the well detector, used for DGNAA. The planar detector has a flat, large collection surface area and can be placed close to the sample. The well detector ‘surrounds’ the sample with a large collection surface area. Scintillation-type detectors use a radiation-sensitive crystal, most commonly thallium-doped sodium iodide (NaI(Tl)), which emits light when struck by gamma photons. These detectors have excellent sensitivity and stability, and a reasonable resolution. Semiconductor detectors utilise the semiconducting element germanium. The germanium is processed to form a p-i-n (positive-intrinsic-negative) diode, and when cooled to ~77 K by liquid nitrogen to reduce dark current and detector noise, produces a signal which is proportional to the photon energy of the incoming radiation. There are two types of germanium detector, the lithium-drifted germanium or Ge(Li) (pronounced ‘jelly’), and the high-purity germanium or HPGe. The semiconducting element silicon may also be used but germanium is preferred, as its higher atomic number makes it more efficient at stopping and detecting high energy gamma rays. Both Ge(Li) and HPGe detectors have excellent sensitivity and resolution, but Ge(Li) detectors are unstable at room temperature, with the lithium drifting into the intrinsic region ruining the detector. The development of undrifted high purity germanium has overcome this problem. Particle detectors can also be used to detect the emission of alpha (α) and beta (β) particles which often accompany the emission of a gamma photon but are less favourable, as these particles are only emitted from the surface of the sample and are often absorbed or attenuated by atmospheric gases requiring expensive vacuum conditions to be effectively detected. Gamma rays, however, are not absorbed or attenuated by atmospheric gases, and can also escape from deep within the sample with minimal absorption. NAA can detect up to 74 elements depending upon the experimental procedure, with minimum detection limits ranging from 0.1 to 1x106 ng g−1 depending on element under investigation. Heavier elements have larger nuclei, therefore they have a larger neutron capture cross-section and are more likely to be activated. Some nuclei can capture a number of neutrons and remain relatively stable, not undergoing transmutation or decay for many months or even years. Other nuclei decay instantaneously or form only stable isotopes and can only be identified by PGNAA. Neutron Activation Analysis has a wide variety of applications including within the fields of archaeology, soil science, geology, forensics, and the semiconductor industry. Forensically, hairs subjected to a detailed forensic neutron analysis to determine whether they had sourced from the same individuals was first used in the trial of John Norman Collins. Archaeologists use NAA in order to determine the elements that comprise certain artifacts. This technique is used because it is nondestructive and it can relate an artifact to its source by its chemical signature. This method has proven to be very successful at determining trade routes, particularly for obsidian, with the ability of NAA to distinguish between chemical compositions. In agricultural processes, the movement of fertilizers and pesticides is influenced by surface and subsurface movement as it infiltrates the water supplies. In order to track the distribution of the fertilizers and pesticides, bromide ions in various forms are used as tracers that move freely with the flow of water while having minimal interaction with the soil. Neutron activation analysis is used to measure bromide so that extraction is not necessary for analysis. NAA is used in geology to aid in researching the processes that formed the rocks through the analysis of the rare earth elements and trace elements. It also assists in locating ore deposits and tracking certain elements. Neutron activation analysis is also used to create standards in the semiconductor industry. Semiconductors require a high level of purity, with contamination significantly reducing the quality of the semiconductor. NAA is used to detect trace impurities and establish contamination standards, because it involves limited sample handling and high sensitivity.
https://en.wikipedia.org/wiki?curid=21933
Nondeterministic Turing machine In theoretical computer science, a nondeterministic Turing machine (NTM) is a theoretical model of computation whose governing rules specify more than one possible action when in some given situations. That is, an NTM's next state is "not" completely determined by its action and the current symbol it sees (unlike a deterministic Turing machine). NTMs are sometimes used in thought experiments to examine the abilities and limitations of computers. One of the most important open problems in theoretical computer science is the P vs. NP problem, which (among other equivalent formulations) concerns the question of how difficult it is to simulate nondeterministic computation with a deterministic computer. In essence, a Turing machine is imagined to be a simple computer that reads and writes symbols one at a time on an endless tape by strictly following a set of rules. It determines what action it should perform next according to its internal "state" and "what symbol it currently sees". An example of one of a Turing Machine's rules might thus be: "If you are in state 2 and you see an 'A', change it to 'B', move left, and change to state 3." In a deterministic Turing machine (DTM), the set of rules prescribes at most one action to be performed for any given situation. A deterministic Turing machine has a "transition function" that, for a given state and symbol under the tape head, specifies three things: For example, an X on the tape in state 3 might make the DTM write a Y on the tape, move the head one position to the right, and switch to state 5. In contrast to a deterministic Turing machine, in a nondeterministic Turing machine (NTM) the set of rules may prescribe more than one action to be performed for any given situation. For example, an X on the tape in state 3 might allow the NTM to: or How does the NTM "know" which of these actions it should take? There are two ways of looking at it. One is to say that the machine is the "luckiest possible guesser"; it always picks a transition that eventually leads to an accepting state, if there is such a transition. The other is to imagine that the machine "branches" into many copies, each of which follows one of the possible transitions. Whereas a DTM has a single "computation path" that it follows, an NTM has a "computation tree". If at least one branch of the tree halts with an "accept" condition, the NTM accepts the input. A nondeterministic Turing machine can be formally defined as a 6-tuple formula_1, where The difference with a standard (deterministic) Turing machine is that, for deterministic Turing machines, the transition relation is a function rather than just a relation. Configurations and the "yields" relation on configurations, which describes the possible actions of the Turing machine given any possible contents of the tape, are as for standard Turing machines, except that the "yields" relation is no longer single-valued. (If the machine is deterministic, the possible computations are all prefixes of a single, possibly infinite, path.) The input for an NTM is provided in the same manner as for a deterministic Turing machine: the machine is started in the configuration in which the tape head is on the first character of the string (if any), and the tape is all blank otherwise. An NTM accepts an input string if and only if "at least one" of the possible computational paths starting from that string puts the machine into an accepting state. When simulating the many branching paths of an NTM on a deterministic machine, we can stop the entire simulation as soon as "any" branch reaches an accepting state. As a mathematical construction used primarily in proofs, there are a variety of minor variations on the definition of an NTM, but these variations all accept equivalent languages. The head movement in the output of the transition relation is often encoded numerically instead of using letters to represent moving the head Left (-1), Stationary (0), and Right (+1); giving a transition function output of formula_11. It is common to omit the stationary (0) output, and instead insert the transitive closure of any desired stationary transitions. Some authors add an explicit "reject" state, which causes the NTM to halt without accepting. This definition still retains the asymmetry that "any" nondeterministic branch can accept, but "every" branch must reject for the string to be rejected. Any computational problem that can be solved by a DTM can also be solved by a NTM, and vice versa. However, it is believed that in general the time complexity may not be the same. NTMs include DTMs as special cases, so every computation that can be carried out by a DTM can also be carried out by the equivalent NTM. It might seem that NTMs are more powerful than DTMs, since they can allow trees of possible computations arising from the same initial configuration, accepting a string if any one branch in the tree accepts it. However, it is possible to simulate NTMs with DTMs, and in fact this can be done in more than one way. One approach is to use a DTM of which the configurations represent multiple configurations of the NTM, and the DTM's operation consists of visiting each of them in turn, executing a single step at each visit, and spawning new configurations whenever the transition relation defines multiple continuations. Another construction simulates NTMs with 3-tape DTMs, of which the first tape always holds the original input string, the second is used to simulate a particular computation of the NTM, and the third encodes a path in the NTM's computation tree. The 3-tape DTMs are easily simulated with a normal single-tape DTM. In the second construction, the constructed DTM effectively performs a breadth-first search of the NTM's computation tree, visiting all possible computations of the NTM in order of increasing length until it finds an accepting one. Therefore, the length of an accepting computation of the DTM is, in general, exponential in the length of the shortest accepting computation of the NTM. This is believed to be a general property of simulations of NTMs by DTMs. The P = NP problem, the most famous unresolved question in computer science, concerns one case of this issue: whether or not every problem solvable by a NTM in polynomial time is necessarily also solvable by a DTM in polynomial time. An NTM has the property of bounded nondeterminism. That is, if an NTM always halts on a given input tape "T" then it halts in a bounded number of steps, and therefore can only have a bounded number of possible configurations. Because quantum computers use quantum bits, which can be in superpositions of states, rather than conventional bits, there is sometimes a misconception that quantum computers are NTMs. However, it is believed by experts (but has not been proven) that the power of quantum computers is, in fact, incomparable to that of NTMs; that is, problems likely exist that an NTM could efficiently solve that a quantum computer cannot and vice versa. In particular, it is likely that NP-complete problems are solvable by NTMs but not by quantum computers in polynomial time. Intuitively speaking, while a quantum computer can indeed be in a superposition state corresponding to all possible computational branches having been executed at the same time (similar to an NTM), the final measurement will collapse the quantum computer into a randomly selected branch. This branch then does not, in general, represent the sought-for solution, unlike the NTM, which is allowed to pick the right solution among the exponentially many branches.
https://en.wikipedia.org/wiki?curid=21935
Nitrogen narcosis Narcosis while diving (also known as nitrogen narcosis, inert gas narcosis, raptures of the deep, Martini effect) is a reversible alteration in consciousness that occurs while diving at depth. It is caused by the anesthetic effect of certain gases at high pressure. The Greek word (narkōsis), "the act of making numb", is derived from (narkē), "numbness, torpor", a term used by Homer and Hippocrates. Narcosis produces a state similar to drunkenness (alcohol intoxication), or nitrous oxide inhalation. It can occur during shallow dives, but does not usually become noticeable at depths less than . Except for helium and probably neon, all gases that can be breathed have a narcotic effect, although widely varying in degree. The effect is consistently greater for gases with a higher lipid solubility, and there is good evidence that the two properties are mechanistically related. As depth increases, the mental impairment may become hazardous. Divers can learn to cope with some of the effects of narcosis, but it is impossible to develop a tolerance. Narcosis affects all divers, although susceptibility varies widely among individuals and from dive to dive. Narcosis may be completely reversed in a few minutes by ascending to a shallower depth, with no long-term effects. Thus narcosis while diving in open water rarely develops into a serious problem as long as the divers are aware of its symptoms, and are able to ascend to manage it. Diving much beyond is generally considered outside the scope of recreational diving. In order to dive at greater depths, as narcosis and oxygen toxicity become critical risk factors, specialist training is required in the use of various helium-containing gas mixtures such as trimix or heliox. These mixtures prevent narcosis by replacing some or all of the inert fraction of the breathing gas with non-narcotic helium. Narcosis results from breathing gases under elevated pressure, and may be classified by the principal gas involved. The noble gases, except helium and probably neon, as well as nitrogen, oxygen and hydrogen cause a decrement in mental function, but their effect on psychomotor function (processes affecting the coordination of sensory or cognitive processes and motor activity) varies widely. The effect of carbon dioxide is a consistent diminution of mental and psychomotor function. The noble gases argon, krypton, and xenon are more narcotic than nitrogen at a given pressure, and xenon has so much anesthetic activity that it is a usable anesthetic at 80% concentration and normal atmospheric pressure. Xenon has historically been too expensive to be used very much in practice, but it has been successfully used for surgical operations, and xenon anesthesia systems are still being proposed and designed. Due to its perception-altering effects, the onset of narcosis may be hard to recognize. At its most benign, narcosis results in relief of anxiety – a feeling of tranquility and mastery of the environment. These effects are essentially identical to various concentrations of nitrous oxide. They also resemble (though not as closely) the effects of alcohol or cannabis and the familiar benzodiazepine drugs such as diazepam and alprazolam. Such effects are not harmful unless they cause some immediate danger to go unrecognized and unaddressed. Once stabilized, the effects generally remain the same at a given depth, only worsening if the diver ventures deeper. The most dangerous aspects of narcosis are the impairment of judgement, multi-tasking and coordination, and the loss of decision-making ability and focus. Other effects include vertigo and visual or auditory disturbances. The syndrome may cause exhilaration, giddiness, extreme anxiety, depression, or paranoia, depending on the individual diver and the diver's medical or personal history. When more serious, the diver may feel overconfident, disregarding normal safe diving practices. Slowed mental activity, as indicated by increased reaction time and increased errors in cognitive function, are effects which increase the risk of a diver mismanaging an incident. Narcosis reduces both the perception of cold discomfort and shivering and thereby affects the production of body heat and consequently allows a faster drop in the core temperature in cold water, with reduced awareness of the developing problem. The relation of depth to narcosis is sometimes informally known as "Martini's law", the idea that narcosis results in the feeling of one martini for every below depth. Professional divers use such a calculation only as a rough guide to give new divers a metaphor, comparing a situation they may be more familiar with. Reported signs and symptoms are summarized against typical depths in meters and feet of sea water in the following table, closely adapted from "Deeper into Diving" by Lippman and Mitchell: The cause of narcosis is related to the increased solubility of gases in body tissues, as a result of the elevated pressures at depth (Henry's law). Modern theories have suggested that inert gases dissolving in the lipid bilayer of cell membranes cause narcosis. More recently, researchers have been looking at neurotransmitter receptor protein mechanisms as a possible cause of narcosis. The breathing gas mix entering the diver's lungs will have the same pressure as the surrounding water, known as the ambient pressure. After any change of depth, the pressure of gases in the blood passing through the brain catches up with ambient pressure within a minute or two, which results in a delayed narcotic effect after descending to a new depth. Rapid compression potentiates narcosis owing to carbon dioxide retention. A divers' cognition may be affected on dives as shallow as , but the changes are not usually noticeable. There is no reliable method to predict the depth at which narcosis becomes noticeable, or the severity of the effect on an individual diver, as it may vary from dive to dive even on the same day. Significant impairment due to narcosis is an increasing risk below depths of about , corresponding to an ambient pressure of about . Most sport scuba training organizations recommend depths of no more than because of the risk of narcosis. When breathing air at depths of  – an ambient pressure of about  – narcosis in most divers leads to hallucinations, loss of memory, and unconsciousness. A number of divers have died in attempts to set air depth records below . Because of these incidents, "Guinness World Records" no longer reports on this figure. Narcosis has been compared with altitude sickness regarding its variability of onset (though not its symptoms); its effects depend on many factors, with variations between individuals. Thermal cold, stress, heavy work, fatigue, and carbon dioxide retention all increase the risk and severity of narcosis. Carbon dioxide has a high narcotic potential and also causes increased blood flow to the brain, increasing the effects of other gases. Increased risk of narcosis results from increasing the amount of carbon dioxide retained through heavy exercise, shallow or skip breathing, or because of poor gas exchange in the lungs. Narcosis is known to be additive to even minimal alcohol intoxication. Other sedative and analgesic drugs, such as opiate narcotics and benzodiazepines, add to narcosis. The precise mechanism is not well understood, but it appears to be the direct effect of gas dissolving into nerve membranes and causing temporary disruption in nerve transmissions. While the effect was first observed with air, other gases including argon, krypton and hydrogen cause very similar effects at higher than atmospheric pressure. Some of these effects may be due to antagonism at NMDA receptors and potentiation of GABAA receptors, similar to the mechanism of nonpolar anesthetics such diethyl ether or ethylene. However, their reproduction by the very chemically inactive gas argon makes them unlikely to be a strictly chemical bonding to receptors in the usual sense of a chemical bond. An indirect physical effect – such as a change in membrane volume – would therefore be needed to affect the ligand-gated ion channels of nerve cells. Trudell "et al." have suggested non-chemical binding due to the attractive van der Waals force between proteins and inert gases. Similar to the mechanism of ethanol's effect, the increase of gas dissolved in nerve cell membranes may cause altered ion permeability properties of the neural cells' lipid bilayers. The partial pressure of a gas required to cause a measured degree of impairment correlates well with the lipid solubility of the gas: the greater the solubility, the less partial pressure is needed. An early theory, the Meyer-Overton hypothesis, suggested that narcosis happens when the gas penetrates the lipids of the brain's nerve cells, causing direct mechanical interference with the transmission of signals from one nerve cell to another. More recently, specific types of chemically gated receptors in nerve cells have been identified as being involved with anesthesia and narcosis. However, the basic and most general underlying idea, that nerve transmission is altered in many diffuse areas of the brain as a result of gas molecules dissolved in the nerve cells' fatty membranes, remains largely unchallenged. The management of narcosis is simply to ascend to shallower depths; the effects then disappear within minutes. In the event of complications or other conditions being present, ascending is always the correct initial response. Should problems remain, then it is necessary to abort the dive. The decompression schedule can still be followed unless other conditions require emergency assistance. The symptoms of narcosis may be caused by other factors during a dive: ear problems causing disorientation or nausea; early signs of oxygen toxicity causing visual disturbances; or hypothermia causing rapid breathing and shivering. Nevertheless, the presence of any of these symptoms should imply narcosis. Alleviation of the effects upon ascending to a shallower depth will confirm the diagnosis. Given the setting, other likely conditions do not produce reversible effects. In the rare event of misdiagnosis when another condition is causing the symptoms, the initial management – ascending closer to the surface – is still essential. The most straightforward way to avoid nitrogen narcosis is for a diver to limit the depth of dives. Since narcosis becomes more severe as depth increases, a diver keeping to shallower depths can avoid serious narcosis. Most recreational dive schools will only certify basic divers to depths of , and at these depths narcosis does not present a significant risk. Further training is normally required for certification up to on air, and this training should include a discussion of narcosis, its effects, and cure. Some diver training agencies offer specialized training to prepare recreational divers to go to depths of , often consisting of further theory and some practice in deep dives under close supervision. Scuba organizations that train for diving beyond recreational depths, may forbid diving with gases that cause too much narcosis at depth in the average diver, and strongly encourage the use of other breathing gas mixes containing helium in place of some or all of the nitrogen in air – such as trimix and heliox – because helium has no narcotic effect. The use of these gases forms part of technical diving and requires further training and certification. While the individual diver cannot predict exactly at what depth the onset of narcosis will occur on a given day, the first symptoms of narcosis for any given diver are often more predictable and personal. For example, one diver may have trouble with eye focus (close accommodation for middle-aged divers), another may experience feelings of euphoria, and another feelings of claustrophobia. Some divers report that they have hearing changes, and that the sound their exhaled bubbles make becomes different. Specialist training may help divers to identify these personal onset signs, which may then be used as a signal to ascend to avoid the narcosis, although severe narcosis may interfere with the judgement necessary to take preventive action. Deep dives should be made only after a gradual training to test the individual diver's sensitivity to increasing depths, with careful supervision and logging of reactions. Scientific evidence does not show that a diver can train to overcome any measure of narcosis at a given depth or become tolerant of it. Equivalent narcotic depth (END) is a commonly used way of expressing the narcotic effect of different breathing gases. The National Oceanic and Atmospheric Administration (NOAA) Diving Manual now states that oxygen and nitrogen should be considered equally narcotic. Standard tables, based on relative lipid solubilities, list conversion factors for narcotic effect of other gases. For example, hydrogen at a given pressure has a narcotic effect equivalent to nitrogen at 0.55 times that pressure, so in principle it should be usable at more than twice the depth. Argon, however, has 2.33 times the narcotic effect of nitrogen, and is a poor choice as a breathing gas for diving (it is used as a drysuit inflation gas, owing to its low thermal conductivity). Some gases have other dangerous effects when breathed at pressure; for example, high-pressure oxygen can lead to oxygen toxicity. Although helium is the least intoxicating of the breathing gases, at greater depths it can cause high pressure nervous syndrome, a still mysterious but apparently unrelated phenomenon. Inert gas narcosis is only one factor influencing the choice of gas mixture; the risks of decompression sickness and oxygen toxicity, cost, and other factors are also important. Because of similar and additive effects, divers should avoid sedating medications and drugs, such as cannabis and alcohol before any dive. A hangover, combined with the reduced physical capacity that goes with it, makes nitrogen narcosis more likely. Experts recommend total abstinence from alcohol for at least 12 hours before diving, and longer for other drugs. Narcosis is potentially one of the most dangerous conditions to affect the scuba diver below about . Except for occasional amnesia of events at depth, the effects of narcosis are entirely removed on ascent and therefore pose no problem in themselves, even for repeated, chronic or acute exposure. Nevertheless, the severity of narcosis is unpredictable and it can be fatal while diving, as the result of illogical behavior in a dangerous environment. Tests have shown that all divers are affected by nitrogen narcosis, though some experience lesser effects than others. Even though it is possible that some divers can manage better than others because of learning to cope with the subjective impairment, the underlying behavioral effects remain. These effects are particularly dangerous because a diver may feel they are not experiencing narcosis, yet still be affected by it. French researcher Victor T. Junod was the first to describe symptoms of narcosis in 1834, noting "the functions of the brain are activated, imagination is lively, thoughts have a peculiar charm and, in some persons, symptoms of intoxication are present." Junod suggested that narcosis resulted from pressure causing increased blood flow and hence stimulating nerve centers. Walter Moxon (1836–1886), a prominent Victorian physician, hypothesized in 1881 that pressure forced blood to inaccessible parts of the body and the stagnant blood then resulted in emotional changes. The first report of anesthetic potency being related to lipid solubility was published by Hans H. Meyer in 1899, entitled "Zur Theorie der Alkoholnarkose". Two years later a similar theory was published independently by Charles Ernest Overton. What became known as the Meyer-Overton Hypothesis may be illustrated by a graph comparing narcotic potency with solubility in oil. In 1939, Albert R. Behnke and O. D. Yarborough demonstrated that gases other than nitrogen also could cause narcosis. For an inert gas the narcotic potency was found to be proportional to its lipid solubility. As hydrogen has only 0.55 the solubility of nitrogen, deep diving experiments using hydrox were conducted by Arne Zetterström between 1943 and 1945. Jacques-Yves Cousteau in 1953 famously described it as "l’ivresse des grandes profondeurs" or the "rapture of the deep". Further research into the possible mechanisms of narcosis by anesthetic action led to the "minimum alveolar concentration" concept in 1965. This measures the relative concentration of different gases required to prevent motor response in 50% of subjects in response to stimulus, and shows similar results for anesthetic potency as the measurements of lipid solubility. The (NOAA) Diving Manual was revised to recommend treating oxygen as if it were as narcotic as nitrogen, following research by Christian J. Lambertsen "et al." in 1977 and 1978.
https://en.wikipedia.org/wiki?curid=21937
Neoproterozoic The Neoproterozoic Era is the unit of geologic time from . It is the last era of the Precambrian Supereon and the Proterozoic Eon; it is subdivided into the Tonian, Cryogenian, and Ediacaran Periods. It is preceded by the Mesoproterozoic era and succeeded by the Paleozoic era of the Phanerozoic eon. The most severe glaciation known in the geologic record occurred during the Cryogenian, when ice sheets may have reached the equator and formed a "Snowball Earth". The earliest fossils of complex multicellular life are found in the Ediacaran period. These organisms make up the Ediacaran biota, including the oldest definitive animals in the fossil record. According to Rino and co-workers, the sum of the continental crust formed in the Pan-African orogeny and the Grenville orogeny makes the Neoproterozoic the period of Earth's history that has produced most continental crust. At the onset of the Neoproterozoic the supercontinent Rodinia, which had assembled during the late Mesoproterozoic, straddled the equator. During the Tonian, rifting commenced which broke Rodinia into a number of individual land masses. Possibly as a consequence of the low-latitude position of most continents, several large-scale glacial events occurred during the Neoproterozoic Era including the Sturtian and Marinoan glaciations of the Cryogenian Period. These glaciations are believed to have been so severe that there were ice sheets at the equator—a state known as the "Snowball Earth". Neoproterozoic time is subdivided into the Tonian (1000–720 Ma), Cryogenian (720–635 Ma) and Ediacaran (635–541 Ma) periods. In the regional timescale of Russia, the Tonian and Cryogenian correspond to the Late Riphean; the Ediacaran corresponds to the Early to middle Vendian. Russian geologists divide the Neoproterozoic of Siberia into the Mayanian (from 1000 to 850 Ma) followed by the Baikalian (from 850 to 650 Ma, loosely equivalent to the Cryogenian). The idea of the Neoproterozoic Era was introduced in the 1960s. Nineteenth-century paleontologists set the start of multicellular life at the first appearance of hard-shelled arthropods called trilobites and archeocyathid sponges at the beginning of the Cambrian Period. In the early 20th century, paleontologists started finding fossils of multicellular animals that predated the Cambrian. A complex fauna was found in South West Africa in the 1920s but was inaccurately dated. Another fauna was found in South Australia in the 1940s, but it was not thoroughly examined until the late 1950s. Other possible early animal fossils were found in Russia, England, Canada, and elsewhere (see Ediacaran biota). Some were determined to be pseudofossils, but others were revealed to be members of rather complex biotas that remain poorly understood. At least 25 regions worldwide have yielded metazoan fossils older than the classical Precambrian–Cambrian boundary (which is currently dated at ). A few of the early animals appear possibly to be ancestors of modern animals. Most fall into ambiguous groups of frond-like organisms; discoids that might be holdfasts for stalked organisms ("medusoids"); mattress-like forms; small calcareous tubes; and armored animals of unknown provenance. These were most commonly known as Vendian biota until the formal naming of the Period, and are currently known as Ediacaran Period biota. Most were soft bodied. The relationships, if any, to modern forms are obscure. Some paleontologists relate many or most of these forms to modern animals. Others acknowledge a few possible or even likely relationships but feel that most of the Ediacaran forms are representatives of unknown animal types. In addition to Ediacaran biota, two other types of biota were discovered in China (the Doushantuo Formation and Hainan Formation). The nomenclature for the terminal period of the Neoproterozoic Era has been unstable. Russian and Nordic geologists referred to the last period of the Neoproterozoic as the Vendian, while Chinese geologists referred to it as the Sinian, and most Australians and North Americans used the name Ediacaran. However, in 2004, the International Union of Geological Sciences ratified the Ediacaran Period to be a geological age of the Neoproterozoic, ranging from to million years ago. The Ediacaran Period boundaries are the only Precambrian boundaries defined by biologic Global Boundary Stratotype Section and Points, rather than the absolute Global Standard Stratigraphic Ages.
https://en.wikipedia.org/wiki?curid=21938
National Security Agency The National Security Agency (NSA) is a national-level intelligence agency of the United States Department of Defense, under the authority of the Director of National Intelligence. The NSA is responsible for global monitoring, collection, and processing of information and data for foreign and domestic intelligence and counterintelligence purposes, specializing in a discipline known as signals intelligence (SIGINT). The NSA is also tasked with the protection of U.S. communications networks and information systems. The NSA relies on a variety of measures to accomplish its mission, the majority of which are clandestine. Originating as a unit to decipher coded communications in World War II, it was officially formed as the NSA by President Harry S. Truman in 1952. Since then, it has become the largest of the U.S. intelligence organizations in terms of personnel and budget. The NSA currently conducts worldwide mass data collection and has been known to physically bug electronic systems as one method to this end. The NSA is also alleged to have been behind such attack software as Stuxnet, which severely damaged Iran's nuclear program. The NSA, alongside the Central Intelligence Agency (CIA), maintains a physical presence in many countries across the globe; the CIA/NSA joint Special Collection Service (a highly classified intelligence team) inserts eavesdropping devices in high value targets (such as presidential palaces or embassies). SCS collection tactics allegedly encompass "close surveillance, burglary, wiretapping, [and] breaking and entering". Unlike the CIA and the Defense Intelligence Agency (DIA), both of which specialize primarily in foreign human espionage, the NSA does not publicly conduct human-source intelligence gathering. The NSA is entrusted with providing assistance to, and the coordination of, SIGINT elements for other government organizations – which are prevented by law from engaging in such activities on their own. As part of these responsibilities, the agency has a co-located organization called the Central Security Service (CSS), which facilitates cooperation between the NSA and other U.S. defense cryptanalysis components. To further ensure streamlined communication between the signals intelligence community divisions, the NSA Director simultaneously serves as the Commander of the United States Cyber Command and as Chief of the Central Security Service. The NSA's actions have been a matter of political controversy on several occasions, including its spying on anti–Vietnam War leaders and the agency's participation in economic espionage. In 2013, the NSA had many of its secret surveillance programs revealed to the public by Edward Snowden, a former NSA contractor. According to the leaked documents, the NSA intercepts and stores the communications of over a billion people worldwide, including United States citizens. The documents also revealed the NSA tracks hundreds of millions of people's movements using cellphones' metadata. Internationally, research has pointed to the NSA's ability to surveil the domestic Internet traffic of foreign countries through "boomerang routing". The origins of the National Security Agency can be traced back to April 28, 1917, three weeks after the U.S. Congress declared war on Germany in World War I. A code and cipher decryption unit was established as the Cable and Telegraph Section which was also known as the Cipher Bureau. It was headquartered in Washington, D.C. and was part of the war effort under the executive branch without direct Congressional authorization. During the course of the war it was relocated in the army's organizational chart several times. On July 5, 1917, Herbert O. Yardley was assigned to head the unit. At that point, the unit consisted of Yardley and two civilian clerks. It absorbed the navy's Cryptanalysis functions in July 1918. World War I ended on November 11, 1918, and the army cryptographic section of Military Intelligence (MI-8) moved to New York City on May 20, 1919, where it continued intelligence activities as the Code Compilation Company under the direction of Yardley. After the disbandment of the U.S. Army cryptographic section of military intelligence, known as MI-8, in 1919, the U.S. government created the Cipher Bureau, also known as Black Chamber. The Black Chamber was the United States' first peacetime cryptanalytic organization. Jointly funded by the Army and the State Department, the Cipher Bureau was disguised as a New York City commercial code company; it actually produced and sold such codes for business use. Its true mission, however, was to break the communications (chiefly diplomatic) of other nations. Its most notable known success was at the Washington Naval Conference, during which it aided American negotiators considerably by providing them with the decrypted traffic of many of the conference delegations, most notably the Japanese. The Black Chamber successfully persuaded Western Union, the largest U.S. telegram company at the time, as well as several other communications companies to illegally give the Black Chamber access to cable traffic of foreign embassies and consulates. Soon, these companies publicly discontinued their collaboration. Despite the Chamber's initial successes, it was shut down in 1929 by U.S. Secretary of State Henry L. Stimson, who defended his decision by stating, "Gentlemen do not read each other's mail". During World War II, the Signal Intelligence Service (SIS) was created to intercept and decipher the communications of the Axis powers. When the war ended, the SIS was reorganized as the Army Security Agency (ASA), and it was placed under the leadership of the Director of Military Intelligence. On May 20, 1949, all cryptologic activities were centralized under a national organization called the Armed Forces Security Agency (AFSA). This organization was originally established within the U.S. Department of Defense under the command of the Joint Chiefs of Staff. The AFSA was tasked to direct Department of Defense communications and electronic intelligence activities, except those of U.S. military intelligence units. However, the AFSA was unable to centralize communications intelligence and failed to coordinate with civilian agencies that shared its interests such as the Department of State, Central Intelligence Agency (CIA) and the Federal Bureau of Investigation (FBI). In December 1951, President Harry S. Truman ordered a panel to investigate how AFSA had failed to achieve its goals. The results of the investigation led to improvements and its redesignation as the National Security Agency. The National Security Council issued a memorandum of October 24, 1952, that revised National Security Council Intelligence Directive (NSCID) 9. On the same day, Truman issued a second memorandum that called for the establishment of the NSA. The actual establishment of the NSA was done by a November 4 memo by Robert A. Lovett, the Secretary of Defense, changing the name of the AFSA to the NSA, and making the new agency responsible for all communications intelligence. Since President Truman's memo was a classified document, the existence of the NSA was not known to the public at that time. Due to its ultra-secrecy the U.S. intelligence community referred to the NSA as "No Such Agency". In the 1960s, the NSA played a key role in expanding U.S. commitment to the Vietnam War by providing evidence of a North Vietnamese attack on the American destroyer during the Gulf of Tonkin incident. A secret operation, code-named "MINARET", was set up by the NSA to monitor the phone communications of Senators Frank Church and Howard Baker, as well as major civil rights leaders, including Martin Luther King, Jr., and prominent U.S. journalists and athletes who criticized the Vietnam War. However, the project turned out to be controversial, and an internal review by the NSA concluded that its Minaret program was "disreputable if not outright illegal". The NSA mounted a major effort to secure tactical communications among U.S. forces during the war with mixed success. The NESTOR family of compatible secure voice systems it developed was widely deployed during the Vietnam War, with about 30,000 NESTOR sets produced. However a variety of technical and operational problems limited their use, allowing the North Vietnamese to exploit and intercept U.S. communications. In the aftermath of the Watergate scandal, a congressional hearing in 1975 led by Senator Frank Church revealed that the NSA, in collaboration with Britain's SIGINT intelligence agency Government Communications Headquarters (GCHQ), had routinely intercepted the international communications of prominent anti-Vietnam war leaders such as Jane Fonda and Dr. Benjamin Spock. The Agency tracked these individuals in a secret filing system that was destroyed in 1974. Following the resignation of President Richard Nixon, there were several investigations of suspected misuse of FBI, CIA and NSA facilities. Senator Frank Church uncovered previously unknown activity, such as a CIA plot (ordered by the administration of President John F. Kennedy) to assassinate Fidel Castro. The investigation also uncovered NSA's wiretaps on targeted U.S. citizens. After the Church Committee hearings, the Foreign Intelligence Surveillance Act of 1978 was passed into law. This was designed to limit the practice of mass surveillance in the United States. In 1986, the NSA intercepted the communications of the Libyan government during the immediate aftermath of the Berlin discotheque bombing. The White House asserted that the NSA interception had provided "irrefutable" evidence that Libya was behind the bombing, which U.S. President Ronald Reagan cited as a justification for the 1986 United States bombing of Libya. In 1999, a multi-year investigation by the European Parliament highlighted the NSA's role in economic espionage in a report entitled 'Development of Surveillance Technology and Risk of Abuse of Economic Information'. That year, the NSA founded the NSA Hall of Honor, a memorial at the National Cryptologic Museum in Fort Meade, Maryland. The memorial is a, "tribute to the pioneers and heroes who have made significant and long-lasting contributions to American cryptology". NSA employees must be retired for more than fifteen years to qualify for the memorial. NSA's infrastructure deteriorated in the 1990s as defense budget cuts resulted in maintenance deferrals. On January 24, 2000, NSA headquarters suffered a total network outage for three days caused by an overloaded network. Incoming traffic was successfully stored on agency servers, but it could not be directed and processed. The agency carried out emergency repairs at a cost of $3 million to get the system running again. (Some incoming traffic was also directed instead to Britain's GCHQ for the time being.) Director Michael Hayden called the outage a "wake-up call" for the need to invest in the agency's infrastructure. In the 1990s the defensive arm of the NSA—the Information Assurance Directorate (IAD)—started working more openly; the first public technical talk by an NSA scientist at a major cryptography conference was J. Solinas' presentation on efficient Elliptic Curve Cryptography algorithms at Crypto 1997. The IAD's cooperative approach to academia and industry culminated in its support for a transparent process for replacing the outdated Data Encryption Standard (DES) by an Advanced Encryption Standard (AES). Cybersecurity policy expert Susan Landau attributes the NSA's harmonious collaboration with industry and academia in the selection of the AES in 2000—and the Agency's support for the choice of a strong encryption algorithm designed by Europeans rather than by Americans—to Brian Snow, who was the Technical Director of IAD and represented the NSA as cochairman of the Technical Working Group for the AES competition, and Michael Jacobs, who headed IAD at the time. After the terrorist attacks of September 11, 2001, the NSA believed that it had public support for a dramatic expansion of its surveillance activities. According to Neal Koblitz and Alfred Menezes, the period when the NSA was a trusted partner with academia and industry in the development of cryptographic standards started to come to an end when, as part of the change in the NSA in the post-September 11 era, Snow was replaced as Technical Director, Jacobs retired, and IAD could no longer effectively oppose proposed actions by the offensive arm of the NSA. In the aftermath of the September 11 attacks, the NSA created new IT systems to deal with the flood of information from new technologies like the Internet and cellphones. ThinThread contained advanced data mining capabilities. It also had a "privacy mechanism"; surveillance was stored encrypted; decryption required a warrant. The research done under this program may have contributed to the technology used in later systems. ThinThread was cancelled when Michael Hayden chose Trailblazer, which did not include ThinThread's privacy system. Trailblazer Project ramped up in 2002 and was worked on by Science Applications International Corporation (SAIC), Boeing, Computer Sciences Corporation, IBM, and Litton Industries. Some NSA whistleblowers complained internally about major problems surrounding Trailblazer. This led to investigations by Congress and the NSA and DoD Inspectors General. The project was cancelled in early 2004. Turbulence started in 2005. It was developed in small, inexpensive "test" pieces, rather than one grand plan like Trailblazer. It also included offensive cyber-warfare capabilities, like injecting malware into remote computers. Congress criticized Turbulence in 2007 for having similar bureaucratic problems as Trailblazer. It was to be a realization of information processing at higher speeds in cyberspace. The massive extent of the NSA's spying, both foreign and domestic, was revealed to the public in a series of detailed disclosures of internal NSA documents beginning in June 2013. Most of the disclosures were leaked by former NSA contractor Edward Snowden. NSA's eavesdropping mission includes radio broadcasting, both from various organizations and individuals, the Internet, telephone calls, and other intercepted forms of communication. Its secure communications mission includes military, diplomatic, and all other sensitive, confidential or secret government communications. According to a 2010 article in "The Washington Post", "[e]very day, collection systems at the National Security Agency intercept and store 1.7 billion e-mails, phone calls and other types of communications. The NSA sorts a fraction of those into 70 separate databases." Because of its listening task, NSA/CSS has been heavily involved in cryptanalytic research, continuing the work of predecessor agencies which had broken many World War II codes and ciphers (see, for instance, Purple, Venona project, and JN-25). In 2004, NSA Central Security Service and the National Cyber Security Division of the Department of Homeland Security (DHS) agreed to expand NSA Centers of Academic Excellence in Information Assurance Education Program. As part of the National Security Presidential Directive 54/Homeland Security Presidential Directive 23 (NSPD 54), signed on January 8, 2008, by President Bush, the NSA became the lead agency to monitor and protect all of the federal government's computer networks from cyber-terrorism. Operations by the National Security Agency can be divided in three types: "Echelon" was created in the incubator of the Cold War. Today it is a legacy system, and several NSA stations are closing. NSA/CSS, in combination with the equivalent agencies in the United Kingdom (Government Communications Headquarters), Canada (Communications Security Establishment), Australia (Australian Signals Directorate), and New Zealand (Government Communications Security Bureau), otherwise known as the UKUSA group, was reported to be in command of the operation of the so-called ECHELON system. Its capabilities were suspected to include the ability to monitor a large proportion of the world's transmitted civilian telephone, fax and data traffic. During the early 1970s, the first of what became more than eight large satellite communications dishes were installed at Menwith Hill. Investigative journalist Duncan Campbell reported in 1988 on the "ECHELON" surveillance program, an extension of the UKUSA Agreement on global signals intelligence SIGINT, and detailed how the eavesdropping operations worked. On November 3, 1999 the BBC reported that they had confirmation from the Australian Government of the existence of a powerful "global spying network" code-named Echelon, that could "eavesdrop on every single phone call, fax or e-mail, anywhere on the planet" with Britain and the United States as the chief protagonists. They confirmed that Menwith Hill was "linked directly to the headquarters of the US National Security Agency (NSA) at Fort Meade in Maryland". NSA's United States Signals Intelligence Directive 18 (USSID 18) strictly prohibited the interception or collection of information about "... U.S. persons, entities, corporations or organizations..." without explicit written legal permission from the United States Attorney General when the subject is located abroad, or the Foreign Intelligence Surveillance Court when within U.S. borders. Alleged Echelon-related activities, including its use for motives other than national security, including political and industrial espionage, received criticism from countries outside the UKUSA alliance. The NSA was also involved in planning to blackmail people with "SEXINT", intelligence gained about a potential target's sexual activity and preferences. Those targeted had not committed any apparent crime nor were they charged with one. In order to support its facial recognition program, the NSA is intercepting "millions of images per day". The Real Time Regional Gateway is a data collection program introduced in 2005 in Iraq by NSA during the Iraq War that consisted of gathering all electronic communication, storing it, then searching and otherwise analyzing it. It was effective in providing information about Iraqi insurgents who had eluded less comprehensive techniques. This "collect it all" strategy introduced by NSA director, Keith B. Alexander, is believed by Glenn Greenwald of "The Guardian" to be the model for the comprehensive worldwide mass archiving of communications which NSA is engaged in as of 2013. A dedicated unit of the NSA locates targets for the CIA for extrajudicial assassination in the Middle East. The NSA has also spied extensively on the European Union, the United Nations and numerous governments including allies and trading partners in Europe, South America and Asia. In June 2015, WikiLeaks published documents showing that NSA spied on French companies. In July 2015, WikiLeaks published documents showing that NSA spied on federal German ministries since the 1990s. Even Germany's Chancellor Angela Merkel's cellphones and phone of her predecessors had been intercepted. Edward Snowden revealed in June 2013 that between February 8 and March 8, 2013, the NSA collected about 124.8 billion telephone data items and 97.1 billion computer data items throughout the world, as was displayed in charts from an internal NSA tool codenamed Boundless Informant. Initially, it was reported that some of these data reflected eavesdropping on citizens in countries like Germany, Spain and France, but later on, it became clear that those data were collected by European agencies during military missions abroad and were subsequently shared with NSA. In 2013, reporters uncovered a secret memo that claims the NSA created and pushed for the adoption of the Dual EC DRBG encryption standard that contained built-in vulnerabilities in 2006 to the United States National Institute of Standards and Technology (NIST), and the International Organization for Standardization (aka ISO). This memo appears to give credence to previous speculation by cryptographers at Microsoft Research. Edward Snowden claims that the NSA often bypasses encryption altogether by lifting information before it is encrypted or after it is decrypted. XKeyscore rules (as specified in a file xkeyscorerules100.txt, sourced by German TV stations NDR and WDR, who claim to have excerpts from its source code) reveal that the NSA tracks users of privacy-enhancing software tools, including Tor; an anonymous email service provided by the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in Cambridge, Massachusetts; and readers of the "Linux Journal". Linus Torvalds, the founder of Linux kernel, joked during a LinuxCon keynote on September 18, 2013, that the NSA, who are the founder of SELinux, wanted a backdoor in the kernel. However, later, Linus' father, a Member of the European Parliament (MEP), revealed that the NSA actually did this. IBM Notes was the first widely adopted software product to use public key cryptography for client–server and server–server authentication and for encryption of data. Until US laws regulating encryption were changed in 2000, IBM and Lotus were prohibited from exporting versions of Notes that supported symmetric encryption keys that were longer than 40 bits. In 1997, Lotus negotiated an agreement with the NSA that allowed export of a version that supported stronger keys with 64 bits, but 24 of the bits were encrypted with a special key and included in the message to provide a "workload reduction factor" for the NSA. This strengthened the protection for users of Notes outside the US against private-sector industrial espionage, but not against spying by the US government. While it is assumed that foreign transmissions terminating in the U.S. (such as a non-U.S. citizen accessing a U.S. website) subject non-U.S. citizens to NSA surveillance, recent research into boomerang routing has raised new concerns about the NSA's ability to surveil the domestic Internet traffic of foreign countries. Boomerang routing occurs when an Internet transmission that originates and terminates in a single country transits another. Research at the University of Toronto has suggested that approximately 25% of Canadian domestic traffic may be subject to NSA surveillance activities as a result of the boomerang routing of Canadian Internet service providers. A document included in NSA files released with Glenn Greenwald's book "No Place to Hide" details how the agency's Tailored Access Operations (TAO) and other NSA units gain access to hardware. They intercept routers, servers and other network hardware being shipped to organizations targeted for surveillance and install covert implant firmware onto them before they are delivered. This was described by an NSA manager as "some of the most productive operations in TAO because they preposition access points into hard target networks around the world." Computers seized by the NSA due to interdiction are often modified with a physical device known as Cottonmouth. Cottonmouth is a device that can be inserted in the USB port of a computer in order to establish remote access to the targeted machine. According to NSA's Tailored Access Operations (TAO) group implant catalog, after implanting Cottonmouth, the NSA can establish a network bridge "that allows the NSA to load exploit software onto modified computers as well as allowing the NSA to relay commands and data between hardware and software implants." NSA's mission, as set forth in Executive Order 12333 in 1981, is to collect information that constitutes "foreign intelligence or counterintelligence" while "not" "acquiring information concerning the domestic activities of United States persons". NSA has declared that it relies on the FBI to collect information on foreign intelligence activities within the borders of the United States, while confining its own activities within the United States to the embassies and missions of foreign nations. The appearance of a 'Domestic Surveillance Directorate' of the NSA was soon exposed as a hoax in 2013. NSA's domestic surveillance activities are limited by the requirements imposed by the Fourth Amendment to the U.S. Constitution. The Foreign Intelligence Surveillance Court for example held in October 2011, citing multiple Supreme Court precedents, that the Fourth Amendment prohibitions against unreasonable searches and seizures applies to the contents of all communications, whatever the means, because "a person's private communications are akin to personal papers." However, these protections do not apply to non-U.S. persons located outside of U.S. borders, so the NSA's foreign surveillance efforts are subject to far fewer limitations under U.S. law. The specific requirements for domestic surveillance operations are contained in the Foreign Intelligence Surveillance Act of 1978 (FISA), which does not extend protection to non-U.S. citizens located outside of U.S. territory. George W. Bush, president during the 9/11 terrorist attacks, approved the Patriot Act shortly after the attacks to take anti-terrorist security measures. Title 1, 2, and 9 specifically authorized measures that would be taken by the NSA. These titles granted enhanced domestic security against terrorism, surveillance procedures, and improved intelligence, respectively. On March 10, 2004, there was a debate between President Bush and White House Counsel Alberto Gonzales, Attorney General John Ashcroft, and Acting Attorney General James Comey. The Attorneys General were unsure if the NSA's programs could be considered constitutional. They threatened to resign over the matter, but ultimately the NSA's programs continued. On March 11, 2004, President Bush signed a new authorization for mass surveillance of Internet records, in addition to the surveillance of phone records. This allowed the president to be able to override laws such as the Foreign Intelligence Surveillance Act, which protected civilians from mass surveillance. In addition to this, President Bush also signed that the measures of mass surveillance were also retroactively in place. Under the PRISM program, which started in 2007, NSA gathers Internet communications from foreign targets from nine major U.S. Internet-based communication service providers: Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube and Apple. Data gathered include email, video and voice chat, videos, photos, VoIP chats such as Skype, and file transfers. Former NSA director General Keith Alexander claimed that in September 2009 the NSA prevented Najibullah Zazi and his friends from carrying out a terrorist attack. However, this claim has been debunked and no evidence has been presented demonstrating that the NSA has ever been instrumental in preventing a terrorist attack. Besides the more traditional ways of eavesdropping in order to collect signals intelligence, NSA is also engaged in hacking computers, smartphones and their networks. These operations are conducted by the Tailored Access Operations (TAO) division, which has been active since at least circa 1998. According to the "Foreign Policy" magazine, "... the Office of Tailored Access Operations, or TAO, has successfully penetrated Chinese computer and telecommunications systems for almost 15 years, generating some of the best and most reliable intelligence information about what is going on inside the People's Republic of China." In an interview with "Wired" magazine, Edward Snowden said the Tailored Access Operations division accidentally caused Syria's internet blackout in 2012. The NSA is led by the Director of the National Security Agency (DIRNSA), who also serves as Chief of the Central Security Service (CHCSS) and Commander of the United States Cyber Command (USCYBERCOM) and is the highest-ranking military official of these organizations. He is assisted by a Deputy Director, who is the highest-ranking civilian within the NSA/CSS. NSA also has an Inspector General, head of the Office of the Inspector General (OIG), a General Counsel, head of the Office of the General Counsel (OGC) and a Director of Compliance, who is head of the Office of the Director of Compliance (ODOC). Unlike other intelligence organizations such as CIA or DIA, NSA has always been particularly reticent concerning its internal organizational structure. As of the mid-1990s, the National Security Agency was organized into five Directorates: Each of these directorates consisted of several groups or elements, designated by a letter. There were for example the A Group, which was responsible for all SIGINT operations against the Soviet Union and Eastern Europe, and G Group, which was responsible for SIGINT related to all non-communist countries. These groups were divided in units designated by an additional number, like unit A5 for breaking Soviet codes, and G6, being the office for the Middle East, North Africa, Cuba, Central and South America. , NSA has about a dozen directorates, which are designated by a letter, although not all of them are publicly known. The directorates are divided in divisions and units starting with the letter of the parent directorate, followed by a number for the division, the sub-unit or a sub-sub-unit. The main elements of the organizational structure of the NSA are: In the year 2000, a leadership team was formed, consisting of the Director, the Deputy Director and the Directors of the Signals Intelligence (SID), the Information Assurance (IAD) and the Technical Directorate (TD). The chiefs of other main NSA divisions became associate directors of the senior leadership team. After president George W. Bush initiated the President's Surveillance Program (PSP) in 2001, the NSA created a 24-hour Metadata Analysis Center (MAC), followed in 2004 by the Advanced Analysis Division (AAD), with the mission of analyzing content, Internet metadata and telephone metadata. Both units were part of the Signals Intelligence Directorate. A 2016 proposal would combine the Signals Intelligence Directorate with Information Assurance Directorate into Directorate of Operations. NSANet stands for National Security Agency Network and is the official NSA intranet. It is a classified network, for information up to the level of TS/SCI to support the use and sharing of intelligence data between NSA and the signals intelligence agencies of the four other nations of the Five Eyes partnership. The management of NSANet has been delegated to the Central Security Service Texas (CSSTEXAS). NSANet is a highly secured computer network consisting of fiber-optic and satellite communication channels which are almost completely separated from the public Internet. The network allows NSA personnel and civilian and military intelligence analysts anywhere in the world to have access to the agency's systems and databases. This access is tightly controlled and monitored. For example, every keystroke is logged, activities are audited at random and downloading and printing of documents from NSANet are recorded. In 1998, NSANet, along with NIPRNET and SIPRNET, had "significant problems with poor search capabilities, unorganized data and old information". In 2004, the network was reported to have used over twenty commercial off-the-shelf operating systems. Some universities that do highly sensitive research are allowed to connect to it. The thousands of Top Secret internal NSA documents that were taken by Edward Snowden in 2013 were stored in "a file-sharing location on the NSA's intranet site"; so, they could easily be read online by NSA personnel. Everyone with a TS/SCI-clearance had access to these documents. As a system administrator, Snowden was responsible for moving accidentally misplaced highly sensitive documents to safer storage locations. The NSA maintains at least two watch centers: The number of NSA employees is officially classified but there are several sources providing estimates. In 1961, NSA had 59,000 military and civilian employees, which grew to 93,067 in 1969, of which 19,300 worked at the headquarters at Fort Meade. In the early 1980s NSA had roughly 50,000 military and civilian personnel. By 1989 this number had grown again to 75,000, of which 25,000 worked at the NSA headquarters. Between 1990 and 1995 the NSA's budget and workforce were cut by one third, which led to a substantial loss of experience. In 2012, the NSA said more than 30,000 employees worked at Fort Meade and other facilities. In 2012, John C. Inglis, the deputy director, said that the total number of NSA employees is "somewhere between 37,000 and one billion" as a joke, and stated that the agency is "probably the biggest employer of introverts." In 2013 "Der Spiegel" stated that the NSA had 40,000 employees. More widely, it has been described as the world's largest single employer of mathematicians. Some NSA employees form part of the workforce of the National Reconnaissance Office (NRO), the agency that provides the NSA with satellite signals intelligence. As of 2013 about 1,000 system administrators work for the NSA. The NSA received criticism early on in 1960 after two agents had defected to the Soviet Union. Investigations by the House Un-American Activities Committee and a special subcommittee of the United States House Committee on Armed Services revealed severe cases of ignorance in personnel security regulations, prompting the former personnel director and the director of security to step down and leading to the adoption of stricter security practices. Nonetheless, security breaches reoccurred only a year later when in an issue of "Izvestia" of July 23, 1963, a former NSA employee published several cryptologic secrets. The very same day, an NSA clerk-messenger committed suicide as ongoing investigations disclosed that he had sold secret information to the Soviets on a regular basis. The reluctance of Congressional houses to look into these affairs had prompted a journalist to write, "If a similar series of tragic blunders occurred in any ordinary agency of Government an aroused public would insist that those responsible be officially censured, demoted, or fired." David Kahn criticized the NSA's tactics of concealing its doings as smug and the Congress' blind faith in the agency's right-doing as shortsighted, and pointed out the necessity of surveillance by the Congress to prevent abuse of power. Edward Snowden's leaking of the existence of PRISM in 2013 caused the NSA to institute a "two-man rule", where two system administrators are required to be present when one accesses certain sensitive information. Snowden claims he suggested such a rule in 2009. The NSA conducts polygraph tests of employees. For new employees, the tests are meant to discover enemy spies who are applying to the NSA and to uncover any information that could make an applicant pliant to coercion. As part of the latter, historically "EPQs" or "embarrassing personal questions" about sexual behavior had been included in the NSA polygraph. The NSA also conducts five-year periodic reinvestigation polygraphs of employees, focusing on counterintelligence programs. In addition the NSA conducts periodic polygraph investigations in order to find spies and leakers; those who refuse to take them may receive "termination of employment", according to a 1982 memorandum from the director of NSA. There are also "special access examination" polygraphs for employees who wish to work in highly sensitive areas, and those polygraphs cover counterintelligence questions and some questions about behavior. NSA's brochure states that the average test length is between two and four hours. A 1983 report of the Office of Technology Assessment stated that "It appears that the NSA [National Security Agency] (and possibly CIA) use the polygraph not to determine deception or truthfulness per se, but as a technique of interrogation to encourage admissions." Sometimes applicants in the polygraph process confess to committing felonies such as murder, rape, and selling of illegal drugs. Between 1974 and 1979, of the 20,511 job applicants who took polygraph tests, 695 (3.4%) confessed to previous felony crimes; almost all of those crimes had been undetected. In 2010 the NSA produced a video explaining its polygraph process. The video, ten minutes long, is titled "The Truth About the Polygraph" and was posted to the Web site of the Defense Security Service. Jeff Stein of "The Washington Post" said that the video portrays "various applicants, or actors playing them—it's not clear—describing everything bad they had heard about the test, the implication being that none of it is true." AntiPolygraph.org argues that the NSA-produced video omits some information about the polygraph process; it produced a video responding to the NSA video. George Maschke, the founder of the Web site, accused the NSA polygraph video of being "Orwellian". After Edward Snowden revealed his identity in 2013, the NSA began requiring polygraphing of employees once per quarter. The number of exemptions from legal requirements has been criticized. When in 1964 the Congress was hearing a bill giving the director of the NSA the power to fire at will any employee, "The Washington Post" wrote: "This is the very definition of arbitrariness. It means that an employee could be discharged and disgraced on the basis of anonymous allegations without the slightest opportunity to defend himself." Yet, the bill was accepted by an overwhelming majority. Also, every person hired to a job in the US after 2007, at any private organization, state or federal government agency, "must" be reported to the New Hire Registry, ostensibly to look for child support evaders, "except" that employees of an intelligence agency may be excluded from reporting if the director deems it necessary for national security reasons. When the agency was first established, its headquarters and cryptographic center were in the Naval Security Station in Washington, D.C. The COMINT functions were located in Arlington Hall in Northern Virginia, which served as the headquarters of the U.S. Army's cryptographic operations. Because the Soviet Union had detonated a nuclear bomb and because the facilities were crowded, the federal government wanted to move several agencies, including the AFSA/NSA. A planning committee considered Fort Knox, but Fort Meade, Maryland, was ultimately chosen as NSA headquarters because it was far enough away from Washington, D.C. in case of a nuclear strike and was close enough so its employees would not have to move their families. Construction of additional buildings began after the agency occupied buildings at Fort Meade in the late 1950s, which they soon outgrew. In 1963 the new headquarters building, nine stories tall, opened. NSA workers referred to the building as the "Headquarters Building" and since the NSA management occupied the top floor, workers used "Ninth Floor" to refer to their leaders. COMSEC remained in Washington, D.C., until its new building was completed in 1968. In September 1986, the Operations 2A and 2B buildings, both copper-shielded to prevent eavesdropping, opened with a dedication by President Ronald Reagan. The four NSA buildings became known as the "Big Four." The NSA director moved to 2B when it opened. Headquarters for the National Security Agency is located at in Fort George G. Meade, Maryland, although it is separate from other compounds and agencies that are based within this same military installation. Fort Meade is about southwest of Baltimore, and northeast of Washington, D.C. The NSA has two dedicated exits off Baltimore–Washington Parkway. The Eastbound exit from the Parkway (heading toward Baltimore) is open to the public and provides employee access to its main campus and public access to the National Cryptology Museum. The Westbound side exit, (heading toward Washington) is labeled "NSA Employees Only". The exit may only be used by people with the proper clearances, and security vehicles parked along the road guard the entrance. NSA is the largest employer in the state of Maryland, and two-thirds of its personnel work at Fort Meade. Built on of Fort Meade's , the site has 1,300 buildings and an estimated 18,000 parking spaces. The main NSA headquarters and operations building is what James Bamford, author of "Body of Secrets", describes as "a modern boxy structure" that appears similar to "any stylish office building." The building is covered with one-way dark glass, which is lined with copper shielding in order to prevent espionage by trapping in signals and sounds. It contains , or more than , of floor space; Bamford said that the U.S. Capitol "could easily fit inside it four times over." The facility has over 100 watchposts, one of them being the visitor control center, a two-story area that serves as the entrance. At the entrance, a white pentagonal structure, visitor badges are issued to visitors and security clearances of employees are checked. The visitor center includes a painting of the NSA seal. The OPS2A building, the tallest building in the NSA complex and the location of much of the agency's operations directorate, is accessible from the visitor center. Bamford described it as a "dark glass Rubik's Cube". The facility's "red corridor" houses non-security operations such as concessions and the drug store. The name refers to the "red badge" which is worn by someone without a security clearance. The NSA headquarters includes a cafeteria, a credit union, ticket counters for airlines and entertainment, a barbershop, and a bank. NSA headquarters has its own post office, fire department, and police force. The employees at the NSA headquarters reside in various places in the Baltimore-Washington area, including Annapolis, Baltimore, and Columbia in Maryland and the District of Columbia, including the Georgetown community. The NSA maintains a shuttle service from the Odenton station of MARC to its Visitor Control Center and has done so since 2005. Following a major power outage in 2000, in 2003 and in follow-ups through 2007, "The Baltimore Sun" reported that the NSA was at risk of electrical overload because of insufficient internal electrical infrastructure at Fort Meade to support the amount of equipment being installed. This problem was apparently recognized in the 1990s but not made a priority, and "now the agency's ability to keep its operations going is threatened." On August 6, 2006, "The Baltimore Sun" reported that the NSA had completely maxed out the grid, and that Baltimore Gas & Electric (BGE, now Constellation Energy) was unable to sell them any more power. NSA decided to move some of its operations to a new satellite facility. BGE provided NSA with 65 to 75 megawatts at Fort Meade in 2007, and expected that an increase of 10 to 15 megawatts would be needed later that year. In 2011, the NSA was Maryland's largest consumer of power. In 2007, as BGE's largest customer, NSA bought as much electricity as Annapolis, the capital city of Maryland. One estimate put the potential for power consumption by the new Utah Data Center at 40 million per year. In 1995, "The Baltimore Sun" reported that the NSA is the owner of the single largest group of supercomputers. NSA held a groundbreaking ceremony at Fort Meade in May 2013 for its High Performance Computing Center 2, expected to open in 2016. Called Site M, the center has a 150 megawatt power substation, 14 administrative buildings and 10 parking garages. It cost 3.2 billion and covers . The center is and initially uses 60 megawatts of electricity. Increments II and III are expected to be completed by 2030, and would quadruple the space, covering with 60 buildings and 40 parking garages. Defense contractors are also establishing or expanding cybersecurity facilities near the NSA and around the Washington metropolitan area. The DoD Computer Security Center was founded in 1981 and renamed the National Computer Security Center (NCSC) in 1985. NCSC was responsible for computer security throughout the federal government. NCSC was part of NSA, and during the late 1980s and the 1990s, NSA and NCSC published Trusted Computer System Evaluation Criteria in a six-foot high Rainbow Series of books that detailed trusted computing and network platform specifications. The Rainbow books were replaced by the Common Criteria, however, in the early 2000s. As of 2012, NSA collected intelligence from four geostationary satellites. Satellite receivers were at Roaring Creek Station in Catawissa, Pennsylvania and Salt Creek Station in Arbuckle, California. It operated ten to twenty taps on U.S. telecom switches. NSA had installations in several U.S. states and from them observed intercepts from Europe, the Middle East, North Africa, Latin America, and Asia. NSA had facilities at Friendship Annex (FANX) in Linthicum, Maryland, which is a 20 to 25-minute drive from Fort Meade; the Aerospace Data Facility at Buckley Air Force Base in Aurora outside Denver, Colorado; NSA Texas in the Texas Cryptology Center at Lackland Air Force Base in San Antonio, Texas; NSA Georgia at Fort Gordon in Augusta, Georgia; NSA Hawaii in Honolulu; the Multiprogram Research Facility in Oak Ridge, Tennessee, and elsewhere. On January 6, 2011, a groundbreaking ceremony was held to begin construction on NSA's first Comprehensive National Cyber-security Initiative (CNCI) Data Center, known as the "Utah Data Center" for short. The $1.5B data center is being built at Camp Williams, Utah, located south of Salt Lake City, and will help support the agency's National Cyber-security Initiative. It is expected to be operational by September 2013. In 2009, to protect its assets and access more electricity, NSA sought to decentralize and expand its existing facilities in Fort Meade and Menwith Hill, the latter expansion expected to be completed by 2015. The "Yakima Herald-Republic" cited Bamford, saying that many of NSA's bases for its Echelon program were a legacy system, using outdated, 1990s technology. In 2004, NSA closed its operations at Bad Aibling Station (Field Station 81) in Bad Aibling, Germany. In 2012, NSA began to move some of its operations at Yakima Research Station, Yakima Training Center, in Washington state to Colorado, planning to leave Yakima closed. As of 2013, NSA also intended to close operations at Sugar Grove, West Virginia. Following the signing in 1946–1956 of the UKUSA Agreement between the United States, United Kingdom, Canada, Australia and New Zealand, who then cooperated on signals intelligence and ECHELON, NSA stations were built at GCHQ Bude in Morwenstow, United Kingdom; Geraldton, Pine Gap and Shoal Bay, Australia; Leitrim and Ottawa, Ontario, Canada; Misawa, Japan; and Waihopai and Tangimoana, New Zealand. NSA operates RAF Menwith Hill in North Yorkshire, United Kingdom, which was, according to BBC News in 2007, the largest electronic monitoring station in the world. Planned in 1954, and opened in 1960, the base covered in 1999. The agency's European Cryptologic Center (ECC), with 240 employees in 2011, is headquartered at a US military compound in Griesheim, near Frankfurt in Germany. A 2011 NSA report indicates that the ECC is responsible for the "largest analysis and productivity in Europe" and focuses on various priorities, including Africa, Europe, the Middle East and counterterrorism operations. In 2013, a new Consolidated Intelligence Center, also to be used by NSA, is being built at the headquarters of the United States Army Europe in Wiesbaden, Germany. NSA's partnership with Bundesnachrichtendienst (BND), the German foreign intelligence service, was confirmed by BND president Gerhard Schindler. Thailand is a "3rd party partner" of the NSA along with nine other nations. These are non-English-speaking countries that have made security agreements for the exchange of SIGINT raw material and end product reports. Thailand is the site of at least two US SIGINT collection stations. One is at the US Embassy in Bangkok, a joint NSA-CIA Special Collection Service (SCS) unit. It presumably eavesdrops on foreign embassies, governmental communications, and other targets of opportunity. The second installation is a FORNSAT (foreign satellite interception) station in the Thai city of Khon Kaen. It is codenamed INDRA, but has also been referred to as LEMONWOOD. The station is approximately 40 ha (100 acres) in size and consists of a large 3,700–4,600 m2 (40,000–50,000 ft2) operations building on the west side of the ops compound and four radome-enclosed parabolic antennas. Possibly two of the radome-enclosed antennas are used for SATCOM intercept and two antennas used for relaying the intercepted material back to NSA. There is also a PUSHER-type circularly-disposed antenna array (CDAA) just north of the ops compound. NSA activated Khon Kaen in October 1979. Its mission was to eavesdrop on the radio traffic of Chinese army and air force units in southern China, especially in and around the city of Kunming in Yunnan Province. Back in the late 1970s the base consisted only of a small CDAA antenna array that was remote-controlled via satellite from the NSA listening post at Kunia, Hawaii, and a small force of civilian contractors from Bendix Field Engineering Corp. whose job it was to keep the antenna array and satellite relay facilities up and running 24/7. According to the papers of the late General William Odom, the INDRA facility was upgraded in 1986 with a new British-made PUSHER CDAA antenna as part of an overall upgrade of NSA and Thai SIGINT facilities whose objective was to spy on the neighboring communist nations of Vietnam, Laos, and Cambodia. The base apparently fell into disrepair in the 1990s as China and Vietnam became more friendly towards the US, and by 2002 archived satellite imagery showed that the PUSHER CDAA antenna had been torn down, perhaps indicating that the base had been closed. At some point in the period since 9/11, the Khon Kaen base was reactivated and expanded to include a sizeable SATCOM intercept mission. It is likely that the NSA presence at Khon Kaen is relatively small, and that most of the work is done by civilian contractors. NSA has been involved in debates about public policy, both indirectly as a behind-the-scenes adviser to other departments, and directly during and after Vice Admiral Bobby Ray Inman's directorship. NSA was a major player in the debates of the 1990s regarding the export of cryptography in the United States. Restrictions on export were reduced but not eliminated in 1996. Its secure government communications work has involved the NSA in numerous technology areas, including the design of specialized communications hardware and software, production of dedicated semiconductors (at the Ft. Meade chip fabrication plant), and advanced cryptography research. For 50 years, NSA designed and built most of its computer equipment in-house, but from the 1990s until about 2003 (when the U.S. Congress curtailed the practice), the agency contracted with the private sector in the fields of research and equipment. NSA was embroiled in some minor controversy concerning its involvement in the creation of the Data Encryption Standard (DES), a standard and public block cipher algorithm used by the U.S. government and banking community. During the development of DES by IBM in the 1970s, NSA recommended changes to some details of the design. There was suspicion that these changes had weakened the algorithm sufficiently to enable the agency to eavesdrop if required, including speculation that a critical component—the so-called S-boxes—had been altered to insert a "backdoor" and that the reduction in key length might have made it feasible for NSA to discover DES keys using massive computing power. It has since been observed that the S-boxes in DES are particularly resilient against differential cryptanalysis, a technique which was not publicly discovered until the late 1980s but known to the IBM DES team. The involvement of NSA in selecting a successor to Data Encryption Standard (DES), the Advanced Encryption Standard (AES), was limited to hardware performance testing (see AES competition). NSA has subsequently certified AES for protection of classified information when used in NSA-approved systems. The NSA is responsible for the encryption-related components in these legacy systems: The NSA oversees encryption in following systems which are in use today: The NSA has specified Suite A and Suite B cryptographic algorithm suites to be used in U.S. government systems; the Suite B algorithms are a subset of those previously specified by NIST and are expected to serve for most information protection purposes, while the Suite A algorithms are secret and are intended for especially high levels of protection. The widely used SHA-1 and SHA-2 hash functions were designed by NSA. SHA-1 is a slight modification of the weaker SHA-0 algorithm, also designed by NSA in 1993. This small modification was suggested by NSA two years later, with no justification other than the fact that it provides additional security. An attack for SHA-0 that does not apply to the revised algorithm was indeed found between 1998 and 2005 by academic cryptographers. Because of weaknesses and key length restrictions in SHA-1, NIST deprecates its use for digital signatures, and approves only the newer SHA-2 algorithms for such applications from 2013 on. A new hash standard, SHA-3, has recently been selected through the competition concluded October 2, 2012 with the selection of Keccak as the algorithm. The process to select SHA-3 was similar to the one held in choosing the AES, but some doubts have been cast over it, since fundamental modifications have been made to Keccak in order to turn it into a standard. These changes potentially undermine the cryptanalysis performed during the competition and reduce the security levels of the algorithm. NSA promoted the inclusion of a random number generator called Dual EC DRBG in the U.S. National Institute of Standards and Technology's 2007 guidelines. This led to speculation of a backdoor which would allow NSA access to data encrypted by systems using that pseudo random number generator. This is now deemed to be plausible based on the fact that output of next iterations of PRNG can provably be determined if relation between two internal elliptic curve points is known. Both NIST and RSA are now officially recommending against the use of this PRNG. Because of concerns that widespread use of strong cryptography would hamper government use of wiretaps, NSA proposed the concept of key escrow in 1993 and introduced the Clipper chip that would offer stronger protection than DES but would allow access to encrypted data by authorized law enforcement officials. The proposal was strongly opposed and key escrow requirements ultimately went nowhere. However, NSA's Fortezza hardware-based encryption cards, created for the Clipper project, are still used within government, and NSA ultimately declassified and published the design of the Skipjack cipher used on the cards. Perfect Citizen is a program to perform vulnerability assessment by the NSA on U.S. critical infrastructure. It was originally reported to be a program to develop a system of sensors to detect cyber attacks on critical infrastructure computer networks in both the private and public sector through a network monitoring system named "Einstein". It is funded by the Comprehensive National Cybersecurity Initiative and thus far Raytheon has received a contract for up to $100 million for the initial stage. NSA has invested many millions of dollars in academic research under grant code prefix "MDA904", resulting in over 3,000 papers NSA/CSS has, at times, attempted to restrict the publication of academic research into cryptography; for example, the Khufu and Khafre block ciphers were voluntarily withheld in response to an NSA request to do so. In response to a FOIA lawsuit, in 2013 the NSA released the 643-page research paper titled, "Untangling the Web: A Guide to Internet Research," written and compiled by NSA employees to assist other NSA workers in searching for information of interest to the agency on the public Internet. NSA has the ability to file for a patent from the U.S. Patent and Trademark Office under gag order. Unlike normal patents, these are not revealed to the public and do not expire. However, if the Patent Office receives an application for an identical patent from a third party, they will reveal NSA's patent and officially grant it to NSA for the full term on that date. One of NSA's published patents describes a method of geographically locating an individual computer site in an Internet-like network, based on the latency of multiple network connections. Although no public patent exists, NSA is reported to have used a similar locating technology called trilateralization that allows real-time tracking of an individual's location, including altitude from ground level, using data obtained from cellphone towers. The heraldic insignia of NSA consists of an eagle inside a circle, grasping a key in its talons. The eagle represents the agency's national mission. Its breast features a shield with bands of red and white, taken from the Great Seal of the United States and representing Congress. The key is taken from the emblem of Saint Peter and represents security. When the NSA was created, the agency had no emblem and used that of the Department of Defense. The agency adopted its first of two emblems in 1963. The current NSA insignia has been in use since 1965, when then-Director, LTG Marshall S. Carter (USA) ordered the creation of a device to represent the agency. The NSA's flag consists of the agency's seal on a light blue background. Crews associated with NSA missions have been involved in a number of dangerous and deadly situations. The USS "Liberty" incident in 1967 and USS "Pueblo" incident in 1968 are examples of the losses endured during the Cold War. The National Security Agency/Central Security Service Cryptologic Memorial honors and remembers the fallen personnel, both military and civilian, of these intelligence missions. It is made of black granite, and has 171 names carved into it, It is located at NSA headquarters. A tradition of declassifying the stories of the fallen was begun in 2001. In the United States, at least since 2001, there has been legal controversy over what signal intelligence can be used for and how much freedom the National Security Agency has to use signal intelligence. In 2015, the government made slight changes in how it uses and collects certain types of data, specifically phone records. The government was not analyzing the phone records as of early 2019. On December 16, 2005, "The New York Times" reported that, under White House pressure and with an executive order from President George W. Bush, the National Security Agency, in an attempt to thwart terrorism, had been tapping phone calls made to persons outside the country, without obtaining warrants from the United States Foreign Intelligence Surveillance Court, a secret court created for that purpose under the Foreign Intelligence Surveillance Act (FISA). One such surveillance program, authorized by the U.S. Signals Intelligence Directive 18 of President George Bush, was the Highlander Project undertaken for the National Security Agency by the U.S. Army 513th Military Intelligence Brigade. NSA relayed telephone (including cell phone) conversations obtained from ground, airborne, and satellite monitoring stations to various U.S. Army Signal Intelligence Officers, including the 201st Military Intelligence Battalion. Conversations of citizens of the U.S. were intercepted, along with those of other nations. Proponents of the surveillance program claim that the President has executive authority to order such action, arguing that laws such as FISA are overridden by the President's Constitutional powers. In addition, some argued that FISA was implicitly overridden by a subsequent statute, the Authorization for Use of Military Force, although the Supreme Court's ruling in "Hamdan v. Rumsfeld" deprecates this view. In the August 2006 case "ACLU v. NSA", U.S. District Court Judge Anna Diggs Taylor concluded that NSA's warrantless surveillance program was both illegal and unconstitutional. On July 6, 2007, the 6th Circuit Court of Appeals vacated the decision on the grounds that the ACLU lacked standing to bring the suit. On January 17, 2006, the Center for Constitutional Rights filed a lawsuit, CCR v. Bush, against the George W. Bush Presidency. The lawsuit challenged the National Security Agency's (NSA's) surveillance of people within the U.S., including the interception of CCR emails without securing a warrant first. In September 2008, the Electronic Frontier Foundation (EFF) filed a class action lawsuit against the NSA and several high-ranking officials of the Bush administration, charging an "illegal and unconstitutional program of dragnet communications surveillance," based on documentation provided by former AT&T technician Mark Klein. As a result of the USA Freedom Act passed by Congress in June 2015, the NSA had to shut down its bulk phone surveillance program on November 29 of the same year. The USA Freedom Act forbids the NSA to collect metadata and content of phone calls unless it has a warrant for terrorism investigation. In that case the agency has to ask the telecom companies for the record, which will only be kept for six months. In May 2008, Mark Klein, a former AT&T employee, alleged that his company had cooperated with NSA in installing Narus hardware to replace the FBI Carnivore program, to monitor network communications including traffic between U.S. citizens. NSA was reported in 2008 to use its computing capability to analyze "transactional" data that it regularly acquires from other government agencies, which gather it under their own jurisdictional authorities. As part of this effort, NSA now monitors huge volumes of records of domestic email data, web addresses from Internet searches, bank transfers, credit-card transactions, travel records, and telephone data, according to current and former intelligence officials interviewed by "The Wall Street Journal". The sender, recipient, and subject line of emails can be included, but the content of the messages or of phone calls are not. A 2013 advisory group for the Obama administration, seeking to reform NSA spying programs following the revelations of documents released by Edward J. Snowden. mentioned in 'Recommendation 30' on page 37, "...that the National Security Council staff should manage an interagency process to review on a regular basis the activities of the US Government regarding attacks that exploit a previously unknown vulnerability in a computer application." Retired cyber security expert Richard A. Clarke was a group member and stated on April 11 that NSA had no advance knowledge of Heartbleed. In August 2013 it was revealed that a 2005 IRS training document showed that NSA intelligence intercepts and wiretaps, both foreign and domestic, were being supplied to the Drug Enforcement Administration (DEA) and Internal Revenue Service (IRS) and were illegally used to launch criminal investigations of US citizens. Law enforcement agents were directed to conceal how the investigations began and recreate an apparently legal investigative trail by re-obtaining the same evidence by other means. In the months leading to April 2009, the NSA intercepted the communications of U.S. citizens, including a Congressman, although the Justice Department believed that the interception was unintentional. The Justice Department then took action to correct the issues and bring the program into compliance with existing laws. United States Attorney General Eric Holder resumed the program according to his understanding of the Foreign Intelligence Surveillance Act amendment of 2008, without explaining what had occurred. Polls conducted in June 2013 found divided results among Americans regarding NSA's secret data collection. Rasmussen Reports found that 59% of Americans disapprove, Gallup found that 53% disapprove, and Pew found that 56% are in favor of NSA data collection. On April 25, 2013, the NSA obtained a court order requiring Verizon's Business Network Services to provide metadata on all calls in its system to the NSA "on an ongoing daily basis" for a three-month period, as reported by "The Guardian" on June 6, 2013. This information includes "the numbers of both parties on a call ... location data, call duration, unique identifiers, and the time and duration of all calls" but not "[t]he contents of the conversation itself". The order relies on the so-called "business records" provision of the Patriot Act. In August 2013, following the Snowden leaks, new details about the NSA's data mining activity were revealed. Reportedly, the majority of emails into or out of the United States are captured at "selected communications links" and automatically analyzed for keywords or other "selectors". Emails that do not match are deleted. The utility of such a massive metadata collection in preventing terrorist attacks is disputed. Many studies reveal the dragnet like system to be ineffective. One such report, released by the New America Foundation concluded that after an analysis of 225 terrorism cases, the NSA "had no discernible impact on preventing acts of terrorism." Defenders of the program said that while metadata alone cannot provide all the information necessary to prevent an attack, it assures the ability to "connect the dots" between suspect foreign numbers and domestic numbers with a speed only the NSA's software is capable of. One benefit of this is quickly being able to determine the difference between suspicious activity and real threats. As an example, NSA director General Keith B. Alexander mentioned at the annual Cybersecurity Summit in 2013, that metadata analysis of domestic phone call records after the Boston Marathon bombing helped determine that rumors of a follow-up attack in New York were baseless. In addition to doubts about its effectiveness, many people argue that the collection of metadata is an unconstitutional invasion of privacy. , the collection process remains legal and grounded in the ruling from Smith v. Maryland (1979). A prominent opponent of the data collection and its legality is U.S. District Judge Richard J. Leon, who issued a report in 2013 in which he stated: "I cannot imagine a more 'indiscriminate' and 'arbitrary invasion' than this systematic and high tech collection and retention of personal data on virtually every single citizen for purposes of querying and analyzing it without prior judicial approval...Surely, such a program infringes on 'that degree of privacy' that the founders enshrined in the Fourth Amendment". As of May 7, 2015, the U.S. Court of Appeals for the Second Circuit ruled that the interpretation of Section 215 of the Patriot Act was wrong and that the NSA program that has been collecting Americans' phone records in bulk is illegal. It stated that Section 215 cannot be clearly interpreted to allow government to collect national phone data and, as a result, expired on June 1, 2015. This ruling "is the first time a higher-level court in the regular judicial system has reviewed the N.S.A. phone records program." The replacement law known as the USA Freedom Act, which will enable the NSA to continue to have bulk access to citizens' metadata but with the stipulation that the data will now be stored by the companies themselves. This change will not have any effect on other Agency procedures - outside of metadata collection - which have purportedly challenged Americans' Fourth Amendment rights;, including Upstream collection, a mass of techniques used by the Agency to collect and store American's data/communications directly from the Internet backbone. Under the Upstream program, the NSA paid telecommunications companies between 9 and 95 million dollars in order to collect data from them. While companies such as Google and Yahoo! claim that they do not provide "direct access" from their servers to the NSA unless under a court order, the NSA had access to emails, phone calls and cellular data users. Under this new ruling, telecommunications companies maintain bulk user metadata on their servers for at least 18 months, to be provided upon request to the NSA. This ruling made the mass storage of specific phone records at NSA datacenters illegal, but it did not rule on Section 215's constitutionality. In a declassified document it was revealed that 17,835 phone lines were on an improperly permitted "alert list" from 2006 to 2009 in breach of compliance, which tagged these phone lines for daily monitoring. Eleven percent of these monitored phone lines met the agency's legal standard for "reasonably articulable suspicion" (RAS). The NSA tracks the locations of hundreds of millions of cellphones per day, allowing it to map people's movements and relationships in detail. The NSA has been reported to have access to all communications made via Google, Microsoft, Facebook, Yahoo, YouTube, AOL, Skype, Apple and Paltalk, and collects hundreds of millions of contact lists from personal email and instant messaging accounts each year. It has also managed to weaken much of the encryption used on the Internet (by collaborating with, coercing or otherwise infiltrating numerous technology companies to leave "backdoors" into their systems), so that the majority of encryption is inadvertently vulnerable to different forms of attack. Domestically, the NSA has been proven to collect and store metadata records of phone calls, including over 120 million US Verizon subscribers, as well as intercept vast amounts of communications via the internet (Upstream). The government's legal standing had been to rely on a secret interpretation of the Patriot Act whereby the entirety of US communications may be considered "relevant" to a terrorism investigation if it is expected that even a tiny minority may relate to terrorism. The NSA also supplies foreign intercepts to the DEA, IRS and other law enforcement agencies, who use these to initiate criminal investigations. Federal agents are then instructed to "recreate" the investigative trail via parallel construction. The NSA also spies on influential Muslims to obtain information that could be used to discredit them, such as their use of pornography. The targets, both domestic and abroad, are not suspected of any crime but hold religious or political views deemed "radical" by the NSA. According to a report in "The Washington Post" in July 2014, relying on information provided by Snowden, 90% of those placed under surveillance in the U.S. are ordinary Americans, and are not the intended targets. The newspaper said it had examined documents including emails, text messages, and online accounts that support the claim. Despite White House claims that these programs have congressional oversight, many members of Congress were unaware of the existence of these NSA programs or the secret interpretation of the Patriot Act, and have consistently been denied access to basic information about them. The United States Foreign Intelligence Surveillance Court, the secret court charged with regulating the NSA's activities is, according to its chief judge, incapable of investigating or verifying how often the NSA breaks even its own secret rules. It has since been reported that the NSA violated its own rules on data access thousands of times a year, many of these violations involving large-scale data interceptions. NSA officers have even used data intercepts to spy on love interests; "most of the NSA violations were self-reported, and each instance resulted in administrative action of termination."
https://en.wikipedia.org/wiki?curid=21939
Nervous system The nervous system is a highly complex part of an animal that coordinates its actions and sensory information by transmitting signals to and from different parts of its body. The nervous system detects environmental changes that impact the body, then works in tandem with the endocrine system to respond to such events. Nervous tissue first arose in wormlike organisms about 550 to 600 million years ago. In vertebrates it consists of two main parts, the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS consists of the brain and spinal cord. The PNS consists mainly of nerves, which are enclosed bundles of the long fibers or axons, that connect the CNS to every other part of the body. Nerves that transmit signals from the brain are called "motor" or "efferent" nerves, while those nerves that transmit information from the body to the CNS are called "sensory" or "afferent". Spinal nerves serve both functions and are called "mixed" nerves. The PNS is divided into three separate subsystems, the somatic, autonomic, and enteric nervous systems. Somatic nerves mediate voluntary movement. The autonomic nervous system is further subdivided into the sympathetic and the parasympathetic nervous systems. The sympathetic nervous system is activated in cases of emergencies to mobilize energy, while the parasympathetic nervous system is activated when organisms are in a relaxed state. The enteric nervous system functions to control the gastrointestinal system. Both autonomic and enteric nervous systems function involuntarily. Nerves that exit from the cranium are called cranial nerves while those exiting from the spinal cord are called spinal nerves. At the cellular level, the nervous system is defined by the presence of a special type of cell, called the neuron, also known as a "nerve cell". Neurons have special structures that allow them to send signals rapidly and precisely to other cells. They send these signals in the form of electrochemical waves traveling along thin fibers called axons, which cause chemicals called neurotransmitters to be released at junctions called synapses. A cell that receives a synaptic signal from a neuron may be excited, inhibited, or otherwise modulated. The connections between neurons can form neural pathways, neural circuits, and larger networks that generate an organism's perception of the world and determine its behavior. Along with neurons, the nervous system contains other specialized cells called glial cells (or simply glia), which provide structural and metabolic support. Nervous systems are found in most multicellular animals, but vary greatly in complexity. The only multicellular animals that have no nervous system at all are sponges, placozoans, and mesozoans, which have very simple body plans. The nervous systems of the radially symmetric organisms ctenophores (comb jellies) and cnidarians (which include anemones, hydras, corals and jellyfish) consist of a diffuse nerve net. All other animal species, with the exception of a few types of worm, have a nervous system containing a brain, a central cord (or two cords running in parallel), and nerves radiating from the brain and central cord. The size of the nervous system ranges from a few hundred cells in the simplest worms, to around 300 billion cells in African elephants. The central nervous system functions to send signals from one cell to others, or from one part of the body to others and to receive feedback. Malfunction of the nervous system can occur as a result of genetic defects, physical damage due to trauma or toxicity, infection or simply of ageing. The medical specialty of neurology studies disorders of the nervous system and looks for interventions that can prevent or treat them. In the peripheral nervous system, the most common problem is the failure of nerve conduction, which can be due to different causes including diabetic neuropathy and demyelinating disorders such as multiple sclerosis and amyotrophic lateral sclerosis. Neuroscience is the field of science that focuses on the study of the nervous system. The nervous system derives its name from nerves, which are cylindrical bundles of fibers (the axons of neurons), that emanate from the brain and spinal cord, and branch repeatedly to innervate every part of the body. Nerves are large enough to have been recognized by the ancient Egyptians, Greeks, and Romans, but their internal structure was not understood until it became possible to examine them using a microscope. The author Michael Nikoletseas wrote: "It is difficult to believe that until approximately year 1900 it was not known that neurons are the basic units of the brain (Santiago Ramón y Cajal). Equally surprising is the fact that the concept of chemical transmission in the brain was not known until around 1930 (Henry Hallett Dale and Otto Loewi). We began to understand the basic electrical phenomenon that neurons use in order to communicate among themselves, the action potential, in the 1950s (Alan Lloyd Hodgkin, Andrew Huxley and John Eccles). It was in the 1960s that we became aware of how basic neuronal networks code stimuli and thus basic concepts are possible (David H. Hubel and Torsten Wiesel). The molecular revolution swept across US universities in the 1980s. It was in the 1990s that molecular mechanisms of behavioral phenomena became widely known (Eric Richard Kandel)." A microscopic examination shows that nerves consist primarily of axons, along with different membranes that wrap around them and segregate them into fascicles. The neurons that give rise to nerves do not lie entirely within the nerves themselves—their cell bodies reside within the brain, spinal cord, or peripheral ganglia. All animals more advanced than sponges have nervous systems. However, even sponges, unicellular animals, and non-animals such as slime molds have cell-to-cell signalling mechanisms that are precursors to those of neurons. In radially symmetric animals such as the jellyfish and hydra, the nervous system consists of a nerve net, a diffuse network of isolated cells. In bilaterian animals, which make up the great majority of existing species, the nervous system has a common structure that originated early in the Ediacaran period, over 550 million years ago. The nervous system contains two main categories or types of cells: neurons and glial cells. The nervous system is defined by the presence of a special type of cell—the neuron (sometimes called "neurone" or "nerve cell"). Neurons can be distinguished from other cells in a number of ways, but their most fundamental property is that they communicate with other cells via synapses, which are membrane-to-membrane junctions containing molecular machinery that allows rapid transmission of signals, either electrical or chemical. Many types of neuron possess an axon, a protoplasmic protrusion that can extend to distant parts of the body and make thousands of synaptic contacts; axons typically extend throughout the body in bundles called nerves. Even in the nervous system of a single species such as humans, hundreds of different types of neurons exist, with a wide variety of morphologies and functions. These include sensory neurons that transmute physical stimuli such as light and sound into neural signals, and motor neurons that transmute neural signals into activation of muscles or glands; however in many species the great majority of neurons participate in the formation of centralized structures (the brain and ganglia) and they receive all of their input from other neurons and send their output to other neurons. Glial cells (named from the Greek for "glue") are non-neuronal cells that provide support and nutrition, maintain homeostasis, form myelin, and participate in signal transmission in the nervous system. In the human brain, it is estimated that the total number of glia roughly equals the number of neurons, although the proportions vary in different brain areas. Among the most important functions of glial cells are to support neurons and hold them in place; to supply nutrients to neurons; to insulate neurons electrically; to destroy pathogens and remove dead neurons; and to provide guidance cues directing the axons of neurons to their targets. A very important type of glial cell (oligodendrocytes in the central nervous system, and Schwann cells in the peripheral nervous system) generates layers of a fatty substance called myelin that wraps around axons and provides electrical insulation which allows them to transmit action potentials much more rapidly and efficiently. Recent findings indicate that glial cells, such as microglia and astrocytes, serve as important resident immune cells within the central nervous system. The nervous system of vertebrates (including humans) is divided into the central nervous system (CNS) and the peripheral nervous system (PNS). The (CNS) is the major division, and consists of the brain and the spinal cord. The spinal canal contains the spinal cord, while the cranial cavity contains the brain. The CNS is enclosed and protected by the meninges, a three-layered system of membranes, including a tough, leathery outer layer called the dura mater. The brain is also protected by the skull, and the spinal cord by the vertebrae. The peripheral nervous system (PNS) is a collective term for the nervous system structures that do not lie within the CNS. The large majority of the axon bundles called nerves are considered to belong to the PNS, even when the cell bodies of the neurons to which they belong reside within the brain or spinal cord. The PNS is divided into somatic and visceral parts. The somatic part consists of the nerves that innervate the skin, joints, and muscles. The cell bodies of somatic sensory neurons lie in dorsal root ganglia of the spinal cord. The visceral part, also known as the autonomic nervous system, contains neurons that innervate the internal organs, blood vessels, and glands. The autonomic nervous system itself consists of two parts: the sympathetic nervous system and the parasympathetic nervous system. Some authors also include sensory neurons whose cell bodies lie in the periphery (for senses such as hearing) as part of the PNS; others, however, omit them. The vertebrate nervous system can also be divided into areas called gray matter and white matter. Gray matter (which is only gray in preserved tissue, and is better described as pink or light brown in living tissue) contains a high proportion of cell bodies of neurons. White matter is composed mainly of myelinated axons, and takes its color from the myelin. White matter includes all of the nerves, and much of the interior of the brain and spinal cord. Gray matter is found in clusters of neurons in the brain and spinal cord, and in cortical layers that line their surfaces. There is an anatomical convention that a cluster of neurons in the brain or spinal cord is called a nucleus, whereas a cluster of neurons in the periphery is called a ganglion. There are, however, a few exceptions to this rule, notably including the part of the forebrain called the basal ganglia. Sponges have no cells connected to each other by synaptic junctions, that is, no neurons, and therefore no nervous system. They do, however, have homologs of many genes that play key roles in synaptic function. Recent studies have shown that sponge cells express a group of proteins that cluster together to form a structure resembling a postsynaptic density (the signal-receiving part of a synapse). However, the function of this structure is currently unclear. Although sponge cells do not show synaptic transmission, they do communicate with each other via calcium waves and other impulses, which mediate some simple actions such as whole-body contraction. Jellyfish, comb jellies, and related animals have diffuse nerve nets rather than a central nervous system. In most jellyfish the nerve net is spread more or less evenly across the body; in comb jellies it is concentrated near the mouth. The nerve nets consist of sensory neurons, which pick up chemical, tactile, and visual signals; motor neurons, which can activate contractions of the body wall; and intermediate neurons, which detect patterns of activity in the sensory neurons and, in response, send signals to groups of motor neurons. In some cases groups of intermediate neurons are clustered into discrete ganglia. The development of the nervous system in radiata is relatively unstructured. Unlike bilaterians, radiata only have two primordial cell layers, endoderm and ectoderm. Neurons are generated from a special set of ectodermal precursor cells, which also serve as precursors for every other ectodermal cell type. The vast majority of existing animals are bilaterians, meaning animals with left and right sides that are approximate mirror images of each other. All bilateria are thought to have descended from a common wormlike ancestor that appeared in the Ediacaran period, 550–600 million years ago. The fundamental bilaterian body form is a tube with a hollow gut cavity running from mouth to anus, and a nerve cord with an enlargement (a "ganglion") for each body segment, with an especially large ganglion at the front, called the "brain". Even mammals, including humans, show the segmented bilaterian body plan at the level of the nervous system. The spinal cord contains a series of segmental ganglia, each giving rise to motor and sensory nerves that innervate a portion of the body surface and underlying musculature. On the limbs, the layout of the innervation pattern is complex, but on the trunk it gives rise to a series of narrow bands. The top three segments belong to the brain, giving rise to the forebrain, midbrain, and hindbrain. Bilaterians can be divided, based on events that occur very early in embryonic development, into two groups (superphyla) called protostomes and deuterostomes. Deuterostomes include vertebrates as well as echinoderms, hemichordates (mainly acorn worms), and Xenoturbellidans. Protostomes, the more diverse group, include arthropods, molluscs, and numerous types of worms. There is a basic difference between the two groups in the placement of the nervous system within the body: protostomes possess a nerve cord on the ventral (usually bottom) side of the body, whereas in deuterostomes the nerve cord is on the dorsal (usually top) side. In fact, numerous aspects of the body are inverted between the two groups, including the expression patterns of several genes that show dorsal-to-ventral gradients. Most anatomists now consider that the bodies of protostomes and deuterostomes are "flipped over" with respect to each other, a hypothesis that was first proposed by Geoffroy Saint-Hilaire for insects in comparison to vertebrates. Thus insects, for example, have nerve cords that run along the ventral midline of the body, while all vertebrates have spinal cords that run along the dorsal midline. Worms are the simplest bilaterian animals, and reveal the basic structure of the bilaterian nervous system in the most straightforward way. As an example, earthworms have dual nerve cords running along the length of the body and merging at the tail and the mouth. These nerve cords are connected by transverse nerves like the rungs of a ladder. These transverse nerves help coordinate the two sides of the animal. Two ganglia at the head (the "nerve ring") end function similar to a simple brain. Photoreceptors on the animal's eyespots provide sensory information on light and dark. The nervous system of one very small roundworm, the nematode "Caenorhabditis elegans", has been completely mapped out in a connectome including its synapses. Every neuron and its cellular lineage has been recorded and most, if not all, of the neural connections are known. In this species, the nervous system is sexually dimorphic; the nervous systems of the two sexes, males and female hermaphrodites, have different numbers of neurons and groups of neurons that perform sex-specific functions. In "C. elegans", males have exactly 383 neurons, while hermaphrodites have exactly 302 neurons. Arthropods, such as insects and crustaceans, have a nervous system made up of a series of ganglia, connected by a ventral nerve cord made up of two parallel connectives running along the length of the belly. Typically, each body segment has one ganglion on each side, though some ganglia are fused to form the brain and other large ganglia. The head segment contains the brain, also known as the supraesophageal ganglion. In the insect nervous system, the brain is anatomically divided into the protocerebrum, deutocerebrum, and tritocerebrum. Immediately behind the brain is the subesophageal ganglion, which is composed of three pairs of fused ganglia. It controls the mouthparts, the salivary glands and certain muscles. Many arthropods have well-developed sensory organs, including compound eyes for vision and antennae for olfaction and pheromone sensation. The sensory information from these organs is processed by the brain. In insects, many neurons have cell bodies that are positioned at the edge of the brain and are electrically passive—the cell bodies serve only to provide metabolic support and do not participate in signalling. A protoplasmic fiber runs from the cell body and branches profusely, with some parts transmitting signals and other parts receiving signals. Thus, most parts of the insect brain have passive cell bodies arranged around the periphery, while the neural signal processing takes place in a tangle of protoplasmic fibers called neuropil, in the interior. A neuron is called "identified" if it has properties that distinguish it from every other neuron in the same animal—properties such as location, neurotransmitter, gene expression pattern, and connectivity—and if every individual organism belonging to the same species has one and only one neuron with the same set of properties. In vertebrate nervous systems very few neurons are "identified" in this sense—in humans, there are believed to be none—but in simpler nervous systems, some or all neurons may be thus unique. In the roundworm "C. elegans", whose nervous system is the most thoroughly described of any animal's, every neuron in the body is uniquely identifiable, with the same location and the same connections in every individual worm. One notable consequence of this fact is that the form of the "C. elegans" nervous system is completely specified by the genome, with no experience-dependent plasticity. The brains of many molluscs and insects also contain substantial numbers of identified neurons. In vertebrates, the best known identified neurons are the gigantic Mauthner cells of fish. Every fish has two Mauthner cells, in the bottom part of the brainstem, one on the left side and one on the right. Each Mauthner cell has an axon that crosses over, innervating neurons at the same brain level and then travelling down through the spinal cord, making numerous connections as it goes. The synapses generated by a Mauthner cell are so powerful that a single action potential gives rise to a major behavioral response: within milliseconds the fish curves its body into a C-shape, then straightens, thereby propelling itself rapidly forward. Functionally this is a fast escape response, triggered most easily by a strong sound wave or pressure wave impinging on the lateral line organ of the fish. Mauthner cells are not the only identified neurons in fish—there are about 20 more types, including pairs of "Mauthner cell analogs" in each spinal segmental nucleus. Although a Mauthner cell is capable of bringing about an escape response individually, in the context of ordinary behavior other types of cells usually contribute to shaping the amplitude and direction of the response. Mauthner cells have been described as command neurons. A command neuron is a special type of identified neuron, defined as a neuron that is capable of driving a specific behavior individually. Such neurons appear most commonly in the fast escape systems of various species—the squid giant axon and squid giant synapse, used for pioneering experiments in neurophysiology because of their enormous size, both participate in the fast escape circuit of the squid. The concept of a command neuron has, however, become controversial, because of studies showing that some neurons that initially appeared to fit the description were really only capable of evoking a response in a limited set of circumstances. At the most basic level, the function of the nervous system is to send signals from one cell to others, or from one part of the body to others. There are multiple ways that a cell can send signals to other cells. One is by releasing chemicals called hormones into the internal circulation, so that they can diffuse to distant sites. In contrast to this "broadcast" mode of signaling, the nervous system provides "point-to-point" signals—neurons project their axons to specific target areas and make synaptic connections with specific target cells. Thus, neural signaling is capable of a much higher level of specificity than hormonal signaling. It is also much faster: the fastest nerve signals travel at speeds that exceed 100 meters per second. At a more integrative level, the primary function of the nervous system is to control the body. It does this by extracting information from the environment using sensory receptors, sending signals that encode this information into the central nervous system, processing the information to determine an appropriate response, and sending output signals to muscles or glands to activate the response. The evolution of a complex nervous system has made it possible for various animal species to have advanced perception abilities such as vision, complex social interactions, rapid coordination of organ systems, and integrated processing of concurrent signals. In humans, the sophistication of the nervous system makes it possible to have language, abstract representation of concepts, transmission of culture, and many other features of human society that would not exist without the human brain. Most neurons send signals via their axons, although some types are capable of dendrite-to-dendrite communication. (In fact, the types of neurons called amacrine cells have no axons, and communicate only via their dendrites.) Neural signals propagate along an axon in the form of electrochemical waves called action potentials, which produce cell-to-cell signals at points where axon terminals make synaptic contact with other cells. Synapses may be electrical or chemical. Electrical synapses make direct electrical connections between neurons, but chemical synapses are much more common, and much more diverse in function. At a chemical synapse, the cell that sends signals is called presynaptic, and the cell that receives signals is called postsynaptic. Both the presynaptic and postsynaptic areas are full of molecular machinery that carries out the signalling process. The presynaptic area contains large numbers of tiny spherical vessels called synaptic vesicles, packed with neurotransmitter chemicals. When the presynaptic terminal is electrically stimulated, an array of molecules embedded in the membrane are activated, and cause the contents of the vesicles to be released into the narrow space between the presynaptic and postsynaptic membranes, called the synaptic cleft. The neurotransmitter then binds to receptors embedded in the postsynaptic membrane, causing them to enter an activated state. Depending on the type of receptor, the resulting effect on the postsynaptic cell may be excitatory, inhibitory, or modulatory in more complex ways. For example, release of the neurotransmitter acetylcholine at a synaptic contact between a motor neuron and a muscle cell induces rapid contraction of the muscle cell. The entire synaptic transmission process takes only a fraction of a millisecond, although the effects on the postsynaptic cell may last much longer (even indefinitely, in cases where the synaptic signal leads to the formation of a memory trace). There are literally hundreds of different types of synapses. In fact, there are over a hundred known neurotransmitters, and many of them have multiple types of receptors. Many synapses use more than one neurotransmitter—a common arrangement is for a synapse to use one fast-acting small-molecule neurotransmitter such as glutamate or GABA, along with one or more peptide neurotransmitters that play slower-acting modulatory roles. Molecular neuroscientists generally divide receptors into two broad groups: chemically gated ion channels and second messenger systems. When a chemically gated ion channel is activated, it forms a passage that allows specific types of ions to flow across the membrane. Depending on the type of ion, the effect on the target cell may be excitatory or inhibitory. When a second messenger system is activated, it starts a cascade of molecular interactions inside the target cell, which may ultimately produce a wide variety of complex effects, such as increasing or decreasing the sensitivity of the cell to stimuli, or even altering gene transcription. According to a rule called Dale's principle, which has only a few known exceptions, a neuron releases the same neurotransmitters at all of its synapses. This does not mean, though, that a neuron exerts the same effect on all of its targets, because the effect of a synapse depends not on the neurotransmitter, but on the receptors that it activates. Because different targets can (and frequently do) use different types of receptors, it is possible for a neuron to have excitatory effects on one set of target cells, inhibitory effects on others, and complex modulatory effects on others still. Nevertheless, it happens that the two most widely used neurotransmitters, glutamate and GABA, each have largely consistent effects. Glutamate has several widely occurring types of receptors, but all of them are excitatory or modulatory. Similarly, GABA has several widely occurring receptor types, but all of them are inhibitory. Because of this consistency, glutamatergic cells are frequently referred to as "excitatory neurons", and GABAergic cells as "inhibitory neurons". Strictly speaking, this is an abuse of terminology—it is the receptors that are excitatory and inhibitory, not the neurons—but it is commonly seen even in scholarly publications. One very important subset of synapses are capable of forming memory traces by means of long-lasting activity-dependent changes in synaptic strength. The best-known form of neural memory is a process called long-term potentiation (abbreviated LTP), which operates at synapses that use the neurotransmitter glutamate acting on a special type of receptor known as the NMDA receptor. The NMDA receptor has an "associative" property: if the two cells involved in the synapse are both activated at approximately the same time, a channel opens that permits calcium to flow into the target cell. The calcium entry initiates a second messenger cascade that ultimately leads to an increase in the number of glutamate receptors in the target cell, thereby increasing the effective strength of the synapse. This change in strength can last for weeks or longer. Since the discovery of LTP in 1973, many other types of synaptic memory traces have been found, involving increases or decreases in synaptic strength that are induced by varying conditions, and last for variable periods of time. The reward system, that reinforces desired behaviour for example, depends on a variant form of LTP that is conditioned on an extra input coming from a reward-signalling pathway that uses dopamine as neurotransmitter. All these forms of synaptic modifiability, taken collectively, give rise to neural plasticity, that is, to a capability for the nervous system to adapt itself to variations in the environment. The basic neuronal function of sending signals to other cells includes a capability for neurons to exchange signals with each other. Networks formed by interconnected groups of neurons are capable of a wide variety of functions, including feature detection, pattern generation and timing, and there are seen to be countless types of information processing possible. Warren McCulloch and Walter Pitts showed in 1943 that even artificial neural networks formed from a greatly simplified mathematical abstraction of a neuron are capable of universal computation. Historically, for many years the predominant view of the function of the nervous system was as a stimulus-response associator. In this conception, neural processing begins with stimuli that activate sensory neurons, producing signals that propagate through chains of connections in the spinal cord and brain, giving rise eventually to activation of motor neurons and thereby to muscle contraction, i.e., to overt responses. Descartes believed that all of the behaviors of animals, and most of the behaviors of humans, could be explained in terms of stimulus-response circuits, although he also believed that higher cognitive functions such as language were not capable of being explained mechanistically. Charles Sherrington, in his influential 1906 book "The Integrative Action of the Nervous System", developed the concept of stimulus-response mechanisms in much more detail, and Behaviorism, the school of thought that dominated Psychology through the middle of the 20th century, attempted to explain every aspect of human behavior in stimulus-response terms. However, experimental studies of electrophysiology, beginning in the early 20th century and reaching high productivity by the 1940s, showed that the nervous system contains many mechanisms for maintaining cell excitability and generating patterns of activity intrinsically, without requiring an external stimulus. Neurons were found to be capable of producing regular sequences of action potentials, or sequences of bursts, even in complete isolation. When intrinsically active neurons are connected to each other in complex circuits, the possibilities for generating intricate temporal patterns become far more extensive. A modern conception views the function of the nervous system partly in terms of stimulus-response chains, and partly in terms of intrinsically generated activity patterns—both types of activity interact with each other to generate the full repertoire of behavior. The simplest type of neural circuit is a reflex arc, which begins with a sensory input and ends with a motor output, passing through a sequence of neurons connected in series. This can be shown in the "withdrawal reflex" causing a hand to jerk back after a hot stove is touched. The circuit begins with sensory receptors in the skin that are activated by harmful levels of heat: a special type of molecular structure embedded in the membrane causes heat to change the electrical field across the membrane. If the change in electrical potential is large enough to pass the given threshold, it evokes an action potential, which is transmitted along the axon of the receptor cell, into the spinal cord. There the axon makes excitatory synaptic contacts with other cells, some of which project (send axonal output) to the same region of the spinal cord, others projecting into the brain. One target is a set of spinal interneurons that project to motor neurons controlling the arm muscles. The interneurons excite the motor neurons, and if the excitation is strong enough, some of the motor neurons generate action potentials, which travel down their axons to the point where they make excitatory synaptic contacts with muscle cells. The excitatory signals induce contraction of the muscle cells, which causes the joint angles in the arm to change, pulling the arm away. In reality, this straightforward schema is subject to numerous complications. Although for the simplest reflexes there are short neural paths from sensory neuron to motor neuron, there are also other nearby neurons that participate in the circuit and modulate the response. Furthermore, there are projections from the brain to the spinal cord that are capable of enhancing or inhibiting the reflex. Although the simplest reflexes may be mediated by circuits lying entirely within the spinal cord, more complex responses rely on signal processing in the brain. For example, when an object in the periphery of the visual field moves, and a person looks toward it many stages of signal processing are initiated. The initial sensory response, in the retina of the eye, and the final motor response, in the oculomotor nuclei of the brain stem, are not all that different from those in a simple reflex, but the intermediate stages are completely different. Instead of a one or two step chain of processing, the visual signals pass through perhaps a dozen stages of integration, involving the thalamus, cerebral cortex, basal ganglia, superior colliculus, cerebellum, and several brainstem nuclei. These areas perform signal-processing functions that include feature detection, perceptual analysis, memory recall, decision-making, and motor planning. Feature detection is the ability to extract biologically relevant information from combinations of sensory signals. In the visual system, for example, sensory receptors in the retina of the eye are only individually capable of detecting "points of light" in the outside world. Second-level visual neurons receive input from groups of primary receptors, higher-level neurons receive input from groups of second-level neurons, and so on, forming a hierarchy of processing stages. At each stage, important information is extracted from the signal ensemble and unimportant information is discarded. By the end of the process, input signals representing "points of light" have been transformed into a neural representation of objects in the surrounding world and their properties. The most sophisticated sensory processing occurs inside the brain, but complex feature extraction also takes place in the spinal cord and in peripheral sensory organs such as the retina. Although stimulus-response mechanisms are the easiest to understand, the nervous system is also capable of controlling the body in ways that do not require an external stimulus, by means of internally generated rhythms of activity. Because of the variety of voltage-sensitive ion channels that can be embedded in the membrane of a neuron, many types of neurons are capable, even in isolation, of generating rhythmic sequences of action potentials, or rhythmic alternations between high-rate bursting and quiescence. When neurons that are intrinsically rhythmic are connected to each other by excitatory or inhibitory synapses, the resulting networks are capable of a wide variety of dynamical behaviors, including attractor dynamics, periodicity, and even chaos. A network of neurons that uses its internal structure to generate temporally structured output, without requiring a corresponding temporally structured stimulus, is called a central pattern generator. Internal pattern generation operates on a wide range of time scales, from milliseconds to hours or longer. One of the most important types of temporal pattern is circadian rhythmicity—that is, rhythmicity with a period of approximately 24 hours. All animals that have been studied show circadian fluctuations in neural activity, which control circadian alternations in behavior such as the sleep-wake cycle. Experimental studies dating from the 1990s have shown that circadian rhythms are generated by a "genetic clock" consisting of a special set of genes whose expression level rises and falls over the course of the day. Animals as diverse as insects and vertebrates share a similar genetic clock system. The circadian clock is influenced by light but continues to operate even when light levels are held constant and no other external time-of-day cues are available. The clock genes are expressed in many parts of the nervous system as well as many peripheral organs, but in mammals, all of these "tissue clocks" are kept in synchrony by signals that emanate from a master timekeeper in a tiny part of the brain called the suprachiasmatic nucleus. A mirror neuron is a neuron that fires both when an animal acts and when the animal observes the same action performed by another. Thus, the neuron "mirrors" the behavior of the other, as though the observer were itself acting. Such neurons have been directly observed in primate species. Birds have been shown to have imitative resonance behaviors and neurological evidence suggests the presence of some form of mirroring system. In humans, brain activity consistent with that of mirror neurons has been found in the premotor cortex, the supplementary motor area, the primary somatosensory cortex and the inferior parietal cortex. The function of the mirror system is a subject of much speculation. Many researchers in cognitive neuroscience and cognitive psychology consider that this system provides the physiological mechanism for the perception/action coupling (see the common coding theory). They argue that mirror neurons may be important for understanding the actions of other people, and for learning new skills by imitation. Some researchers also speculate that mirror systems may simulate observed actions, and thus contribute to theory of mind skills, while others relate mirror neurons to language abilities. However, to date, no widely accepted neural or computational models have been put forward to describe how mirror neuron activity supports cognitive functions such as imitation. There are neuroscientists who caution that the claims being made for the role of mirror neurons are not supported by adequate research. In vertebrates, landmarks of embryonic neural development include the birth and differentiation of neurons from stem cell precursors, the migration of immature neurons from their birthplaces in the embryo to their final positions, outgrowth of axons from neurons and guidance of the motile growth cone through the embryo towards postsynaptic partners, the generation of synapses between these axons and their postsynaptic partners, and finally the lifelong changes in synapses which are thought to underlie learning and memory. All bilaterian animals at an early stage of development form a gastrula, which is polarized, with one end called the animal pole and the other the vegetal pole. The gastrula has the shape of a disk with three layers of cells, an inner layer called the endoderm, which gives rise to the lining of most internal organs, a middle layer called the mesoderm, which gives rise to the bones and muscles, and an outer layer called the ectoderm, which gives rise to the skin and nervous system. In vertebrates, the first sign of the nervous system is the appearance of a thin strip of cells along the center of the back, called the neural plate. The inner portion of the neural plate (along the midline) is destined to become the central nervous system (CNS), the outer portion the peripheral nervous system (PNS). As development proceeds, a fold called the neural groove appears along the midline. This fold deepens, and then closes up at the top. At this point the future CNS appears as a cylindrical structure called the neural tube, whereas the future PNS appears as two strips of tissue called the neural crest, running lengthwise above the neural tube. The sequence of stages from neural plate to neural tube and neural crest is known as neurulation. In the early 20th century, a set of famous experiments by Hans Spemann and Hilde Mangold showed that the formation of nervous tissue is "induced" by signals from a group of mesodermal cells called the "organizer region". For decades, though, the nature of neural induction defeated every attempt to figure it out, until finally it was resolved by genetic approaches in the 1990s. Induction of neural tissue requires inhibition of the gene for a so-called bone morphogenetic protein, or BMP. Specifically the protein BMP4 appears to be involved. Two proteins called Noggin and Chordin, both secreted by the mesoderm, are capable of inhibiting BMP4 and thereby inducing ectoderm to turn into neural tissue. It appears that a similar molecular mechanism is involved for widely disparate types of animals, including arthropods as well as vertebrates. In some animals, however, another type of molecule called Fibroblast Growth Factor or FGF may also play an important role in induction. Induction of neural tissues causes formation of neural precursor cells, called neuroblasts. In drosophila, neuroblasts divide asymmetrically, so that one product is a "ganglion mother cell" (GMC), and the other is a neuroblast. A GMC divides once, to give rise to either a pair of neurons or a pair of glial cells. In all, a neuroblast is capable of generating an indefinite number of neurons or glia. As shown in a 2008 study, one factor common to all bilateral organisms (including humans) is a family of secreted signaling molecules called neurotrophins which regulate the growth and survival of neurons. Zhu et al. identified DNT1, the first neurotrophin found in flies. DNT1 shares structural similarity with all known neurotrophins and is a key factor in the fate of neurons in Drosophila. Because neurotrophins have now been identified in both vertebrate and invertebrates, this evidence suggests that neurotrophins were present in an ancestor common to bilateral organisms and may represent a common mechanism for nervous system formation. The central nervous system is protected by major physical and chemical barriers. Physically, the brain and spinal cord are surrounded by tough meningeal membranes, and enclosed in the bones of the skull and vertebral column, which combine to form a strong physical shield. Chemically, the brain and spinal cord are isolated by the blood–brain barrier, which prevents most types of chemicals from moving from the bloodstream into the interior of the CNS. These protections make the CNS less susceptible in many ways than the PNS; the flip side, however, is that damage to the CNS tends to have more serious consequences. Although nerves tend to lie deep under the skin except in a few places such as the ulnar nerve near the elbow joint, they are still relatively exposed to physical damage, which can cause pain, loss of sensation, or loss of muscle control. Damage to nerves can also be caused by swelling or bruises at places where a nerve passes through a tight bony channel, as happens in carpal tunnel syndrome. If a nerve is completely transected, it will often regenerate, but for long nerves this process may take months to complete. In addition to physical damage, peripheral neuropathy may be caused by many other medical problems, including genetic conditions, metabolic conditions such as diabetes, inflammatory conditions such as Guillain–Barré syndrome, vitamin deficiency, infectious diseases such as leprosy or shingles, or poisoning by toxins such as heavy metals. Many cases have no cause that can be identified, and are referred to as idiopathic. It is also possible for nerves to lose function temporarily, resulting in numbness as stiffness—common causes include mechanical pressure, a drop in temperature, or chemical interactions with local anesthetic drugs such as lidocaine. Physical damage to the spinal cord may result in loss of sensation or movement. If an injury to the spine produces nothing worse than swelling, the symptoms may be transient, but if nerve fibers in the spine are actually destroyed, the loss of function is usually permanent. Experimental studies have shown that spinal nerve fibers attempt to regrow in the same way as nerve fibers, but in the spinal cord, tissue destruction usually produces scar tissue that cannot be penetrated by the regrowing nerves.
https://en.wikipedia.org/wiki?curid=21944
Nutcracker A nutcracker is a tool designed to open nuts by cracking their shells. There are many designs, including levers, screws, and ratchets. A well-known type portrays a person whose mouth forms the jaws of the nutcracker, though many of these are meant for decoration. Nuts were historically opened using a hammer and anvil, often made of stone. Some nuts such as walnuts can also be opened by hand, by holding the nut in the palm of the hand and applying pressure with the other palm or thumb, or using another nut. Manufacturers produce modern functional nutcrackers usually somewhat resembling pliers, but with the pivot point at the end beyond the nut, rather than in the middle. These are also used for cracking the shells of crab and lobster to make the meat inside available for eating. Hinged lever nutcrackers, often called a "pair of nutcrackers", may date back to Ancient Greece. By the 14th century in Europe, nutcrackers were documented in England, including in the "Canterbury Tales", and in France. The lever design may derive from blacksmiths' pincers. Materials included metals such as silver, cast-iron and bronze, and wood including boxwood, especially those from France and Italy. More rarely, porcelain was used. Many of the wooden carved nutcrackers were in the form of people and animals. During the Victorian era, fruit and nuts were presented at dinner and ornate and often silver-plated nutcrackers were produced to accompany them on the dinner table. Nuts have long been a popular choice for desserts, particularly throughout Europe. The nutcrackers were placed on dining tables to serve as a fun and entertaining center of conversation while diners awaited their final course. At one time, nutcrackers were actually made of metals such as brass, and it was not until the 1800s in Germany that the popularity of wooden ones began to spread. The late 19th century saw two shifts in nutcracker production: the rise in figurative and decorative designs, particularly from the Alps where they were sold as souvenirs, and a switch to industrial manufacture, including availability in mail-order catalogues, rather than artisan production. After the 1960s, the availability of pre-shelled nuts led to a decline in ownership of nutcrackers and a fall in the tradition of nuts being put in children's Christmas stockings. In the 17th century, screw nutcrackers were introduced that applied more gradual pressure to the shell, some like a vise. The spring-jointed nutcracker was patented by Henry Quackenbush in 1913. A ratchet design, similar to a car jack, that gradually increases pressure on the shell to avoid damaging the kernel inside is used by the Crackerjack, patented in 1947 by Cuthbert Leslie Rimes of Morley, Leeds and exhibited at the Festival of Britain. Unshelled nuts are still popular in China, where a key device is inserted into the crack in walnuts, pecans, and macadamias and twisted to open the shell. Nutcrackers in the form of wood carvings of a soldier, knight, king, or other profession have existed since at least the 15th century. Figurative nutcrackers are a good luck symbol in Germany, and a folktale recounts that a puppet-maker won a nutcracking challenge by creating a doll with a mouth for a lever to crack the nuts. These nutcrackers portray a person with a large mouth which the operator opens by lifting a lever in the back of the figurine. Originally one could insert a nut in the big-toothed mouth, press down and thereby crack the nut. Modern nutcrackers in this style serve mostly for decoration, mainly at Christmas time, a season of which they have long been a traditional symbol. The ballet "The Nutcracker" derives its name from this festive holiday decoration. The carving of nutcrackers— as well as of religious figures and of cribs— developed as a cottage industry in forested rural areas of Germany. The most famous nutcracker carvings come from Sonneberg in Thuringia (also a center of dollmaking) and as part of the industry of wooden toymaking in the Ore Mountains. Wood-carving usually provided the only income for the people living there. Today the travel industry supplements their income by bringing visitors to the remote areas. Carvings by famous names like Junghanel, Klaus Mertens, Karl, Olaf Kolbe, Petersen, Christian Ulbricht and especially the Steinbach nutcrackers have become collectors' items. Decorative nutcrackers became popular in the United States after the Second World War, following the first US production of "The Nutcracker" ballet in 1940 and the exposure of US soldiers to the dolls during the war. In the United States, few of the decorative nutcrackers are now functional, though expensive working designs are still available. Many of the woodworkers in Germany were in Erzgebirge, in the Soviet zone after the end of the war, and they mass-produced poorly-made designs for the US market. With the increase in pre-shelled nuts, the need for functionality was also lessened. After the 1980s, Chinese and Taiwanese imports that copied the traditional German designs took over. The recreated "Bavarian village" of Leavenworth, Washington, features a nutcracker museum. Many other materials also serve to make decorated nutcrackers, such as porcelain, silver, and brass; the museum displays samples. The United States Postal Service (USPS) issued four stamps in October 2008 with custom-made nutcrackers made by Richmond, Virginia artist Glenn Crider. Some artists, among them the multi-instrumentalist Mike Oldfield, have used the sound nutcrackers make in music. Many animals shell nuts to eat them, including using tools. The Capuchin monkey is a fine example. Parrots use their beaks as natural nutcrackers, in much the same way smaller birds crack seeds. In this case, the pivot point stands opposite the nut, at the jaw.
https://en.wikipedia.org/wiki?curid=21946
Nicolai Abildgaard Nicolai Abraham Abildgaard (September 11, 1743 – June 4, 1809) was a Danish neoclassical and royal history painter, sculptor, architect, and professor of painting, mythology, and anatomy at the New Royal Danish Academy of Art in Copenhagen, Denmark. Many of his works were in the royal Christiansborg Palace (some destroyed by fire 1794), Fredensborg Palace, and Levetzau Palace at Amalienborg. Nicolai Abraham Abildgaard was born in Copenhagen, Denmark, as the son of Anne Margrethe (née Bastholm) and Søren Abildgaard, a noted antiquarian draughtsman. Abildgaard was trained by a painting master before he joined the Royal Danish Academy of Art ("Det Kongelige Danske Kunstakademi") in Copenhagen, where he studied under the guidance of Johan Edvard Mandelberg and Johannes Wiedewelt. He won a series of medallions at the Academy for his brilliance from 1764 to 1767. The Large Gold Medallion from the Academy won in 1767 included a travel stipend, which he waited five years to receive. He assisted Professor Johan Mandelberg of the Academy as an apprentice around 1769 and for painting decorations for the royal palace at Fredensborg. These paintings are classical, influenced by French classical artists such as Claude Lorrain and Nicolas Poussin. Mandelberg had studied in Paris under François Boucher. Although artists of that time usually journeyed to Paris for further studies, Abildgaard chose to travel to Rome, where he stayed from 1772 to 1777. He took a side trip to Naples in 1776 with Jens Juel. His ambitions focused in the genre of history painting. While in Rome, he studied Annibale Carracci's frescoes at the Palazzo Farnese and the paintings of Rafael, Titian, and Michelangelo. In addition he studied various other artistic disciplines (sculpture, architecture, decoration, wall paintings) and developed his knowledge of mythology, antiquities, anatomy, and perspective. In the company of Swedish sculptor Johan Tobias Sergel and painter Johann Heinrich Füssli, he began to move away from the classicism he had learned at the Academy. He developed an appreciation for the literature of Shakespeare, Homer, and Ossian (the putative Gaelic poet). He worked with themes from Greek as well as Norse mythology, which placed him at the forefront of Nordic romanticism. He left Rome in June 1777 with the hope of becoming professor at the Academy in Copenhagen. He stopped for a stay in Paris and arrived in Denmark in December of the same year. In 1778, soon after joining the Academy, he was appointed to a professorship. He taught mythology and anatomy in addition to painting of the neoclassical style. Beyond his position at the Academy, he was very productive as an artist from 1777 to 1794. He produced not only monumental works, but also smaller pieces such as vignettes and illustrations. He designed Old Norse costumes. He illustrated the works of Socrates and Ossian. Additionally he did some sculpting, etching, and authoring. He was interested in all manners of mythological, biblical, and literary allusion. He taught some famous painters, including Asmus Jacob Carstens, sculptor Bertel Thorvaldsen, and painters J. L. Lund and Christoffer Wilhelm Eckersberg. After his death, Lund and Eckersberg went on to become his successors as Academy professors. Eckersberg, referred to as the ""Father of Danish painting,"" went on to lay the foundation for the period of art known as the Golden Age of Danish Painting, as professor at the same Academy. As royal historical painter, Abildgaard was commissioned around 1780 by the Danish government to paint large monumental pieces, a history of Denmark, to decorate the entirety of the Knights' Room ("Riddersal)" at Christiansborg Palace. It was a prestigious and lucrative assignment. The paintings combined historical depictions with allegorical and mythological elements that glorified and flattered the government. The door pieces depicted, in allegory, four historical periods in Europe's history. Abilgaard used pictorial allegory like ideograms, to communicate ideas and transmit messages through symbols to a refined public who was initiated into this form of symbology. Abildgaard's professor Johan Edvard Mandelberg supplied the decorations to the room. He made a failed attempt to be elected to the post of Academy Director in 1787 and was unanimously elected to the post two years later, serving as director during the period 1789–1791. He had the reputation for being a tyrant and for taking as many of the academy's monumental assignments as possible for himself. Abilgaard was also known as a religious freethinker and an advocate of political reform. In spite of his service to (and in his artwork the glorification of) the government, he was hardly a great supporter of the monarchy or of the state church. He supported the emancipation of the farmers and participated in the collection of monies for the Freedom Monument ("Frihedsstøtten") in 1792. He contributed a design for the monument, as well as for two of the reliefs at its base. He got caught into controversies at the end of the 18th century because of his controversial statements and satirical drawings. He was inspired by the French Revolution, and in 1789–1790 he tried to incorporate these revolutionary ideals into the Knights' Room at Christiansborg Palace. However, the King rejected his designs. His showdowns with the establishment culminated in 1794, when his allegorical painting "Jupiter Weighs the Fate of Mankind" ("Jupiter vejer menneskenes skæbne") was exhibited at the Salon. He was politically isolated and cut out of the public debate by censors. The fire at Christiansborg Palace, in February 1794, also had a dampening effect on his career, for seven of the ten monumental paintings of the grandiose project were destroyed in that accident. The project was stopped and so were his earnings. However, after that devastating fire accident, he started getting decorative assignments and also got the opportunity to practice as an architect. He decorated the Levetzau Palace (now known as Christian VIII's Palace) at Amalienborg (1794–1798), recently occupied home of King Christian VII of Denmark's half-brother Frederik. His protégé Bertel Thorvaldsen headed the sculptural efforts. He also planned for rebuilding the Christiansborg Palace, but he could not get the assignment. At the start of the 19th century, his interest in painting was restored when he painted four scenes from Terence's comedy "Andria." In 1804 he received a commission for a series of painting for the throne room in the new palace, but disagreements between the artist and the crown prince put a halt to this project. He continued, however, to provide the court with designs for furniture and room decorations. He was once again selected to serve as the Academy's director from 1801 until his death. Abildgaard married Anna Maria Oxholm (1762-1822) in 1781 . His second marriage in 1803 was to Juliane Marie Ottesen (1777-1848). He had two sons and a daughter from the marriage. He died at Frederiksdal in 1809. Nicolai Abraham Abildgaard is buried in Copenhagen's Assistens Cemetery. Though Nicolai Abildgaard won immense fame in his own generation and helped lead the way to the period of art known as the Golden Age of Danish Painting, his works are scarcely known outside of Denmark. His style was classical, though with a romantic trend. According to the "Encyclopædia Britannica Eleventh Edition", "he was a cold theorist, inspired not by nature but by art. He had a keen sense of color. As a technical painter, he attained remarkable success, his tone being very harmonious and even, but the effect to a foreigner's eye is rarely interesting." A portrait of him painted by Jens Juel was made into a medallion by his friend Johan Tobias Sergel. August Vilhelm Saabye sculpted a statue of him in 1868, based on contemporary portraits.
https://en.wikipedia.org/wiki?curid=21949
Khyber Pakhtunkhwa Khyber Pakhtunkhwa (often abbreviated KP or KPK) ( ; ), formerly known as the North-West Frontier Province (NWFP) (), is one of the four administrative provinces of Pakistan, located in the northwestern region of the country along the International border with Afghanistan. It was previously known as the North-West Frontier Province until 2010 when the name was changed to Khyber Pakhtunkhwa by the 18th Amendment to Pakistan's Constitution and is known colloquially by various other names. Khyber Pakhtunkhwa is the third-largest province of Pakistan by the size of both population and economy, though it is geographically the smallest of four. Within Pakistan, Khyber Pakhtunkhwa shares a border with Punjab, Balochistan, Azad Kashmir, Gilgit-Baltistan and Islamabad. It is home to 17.9% of Pakistan's total population, with the majority of the province's inhabitants being Pashtuns and Hindko speakers. The province is the site of the ancient kingdom Gandhara, including the ruins of its capital Pushkalavati near modern-day Charsadda. Once a stronghold of Buddhism, the history of the region was characterized by frequent invasions by various empires due to its geographical proximity to the Khyber Pass. On 2 March 2017, the Government of Pakistan considered a proposal to merge the Federally Administered Tribal Areas (FATA) with Khyber Pakhtunkhwa, and to repeal the Frontier Crimes Regulations, which were applicable to the tribal areas at the time. However, some political parties opposed the merger, and called for the tribal areas to instead become a separate province of Pakistan. On 24 May 2018, the National Assembly of Pakistan voted in favour of an amendment to the Constitution of Pakistan to merge the Federally Administered Tribal Areas with Khyber Pakhtunkhwa province. The Khyber Pakhtunkhwa Assembly then approved the historic FATA-KP merger bill on 28 May 2018 making FATA officially part of Khyber Pakhtunkhwa, which was then signed by President Mamnoon Hussain, completing the process of this historic merger. "Khyber Pakhtunkhwa" means the "Khyber side of the land of the Pashtuns, where the word "Pakhtunkhwa" means "Land of the Pashtuns", while according to some scholars, it refers to "Pashtun culture and society". When the British established it as a province, they called it "North West Frontier Province" (abbreviated as NWFP) due to its relative location being in north west of their Indian Empire. After the creation of Pakistan, Pakistan continued with this name but a Pashtun nationalist party, Awami National Party demanded that the province name be changed to "Pakhtunkhwa". Their logic behind that demand was that Punjabi people, Sindhi people and Balochi people have their provinces named after their ethnicities but that is not the case for Pashtun people. Pakistan Muslim League was against that name since it was too similar to Bacha Khan's demand of a separate nation of Pashtunistan. PML-N wanted to name the province something other than which does not carry Pashtun identity in it as they argued that there were other minor ethnicities living in the province especially Hindkowans who spoke Hindko, thus the word "Khyber" was introduced with the name because it is the name of a major pass which connects Pakistan to Afghanistan. During the times of Indus Valley Civilization (3300 BCE – 1300 BCE) the modern Khyber Pakhtunkhwa's Khyber Pass, through Hindu Kush provided a route to other neighboring regions and was used by merchants on trade excursions. From 1500 BCE, Indo-Aryan peoples started to enter in the region(of modern-day Iran, Pakistan, Afghanistan, North India) after having passed Khyber Pass. The Gandharan civilization, which reached its zenith between the sixth and first centuries BCE, and which features prominently in the Hindu epic poem, the Mahabharatha, had one of its cores over the modern Khyber Pakhtunkhwa province. Vedic texts refer to the area as the province of Pushkalavati. The area was once known to be a great center of learning. At around 516 BCE., Darius Hystaspes sent Scylax, a Greek seaman from Karyanda, to explore the course of the Indus river. Darius Hystaspes subsequently subdued the races dwelling west of the Indus and north of Kabul. Gandhara was incorporated into the Persian Empire as one of its far easternmost satrapy system of government. The "satrapy" of Gandhara is recorded to have sent troops for Xerxes' invasion of Greece in 480 BCE. In the spring of 327 BCE, Alexander the Great crossed the Indian Caucasus (Hindu Kush) and advanced to Nicaea, where Omphis, king of Taxila and other chiefs joined him. Alexander then dispatched part of his force through the valley of the Kabul River, while he himself advanced into modern Khyber Pakhtunkhwa's Bajaur and Swat regions with his troops. Having defeated the Aspasians, from whom he took 40,000 prisoners and 230,000 oxen, Alexander crossed the Gouraios (Panjkora River) and entered into the territory of the Assakenoi – also in modern-day Khyber Pakhtunkhwa. Alexander then made Embolima (thought to be the region of Amb in Khyber Pakhtunkhwa) his base. The ancient region of Peukelaotis (modern Hashtnagar, north-west of Peshawar) submitted to the Greek invasion, leading to Nicanor, a Macedonian, being appointed satrap of the country west of the Indus, which includes the modern Khyber Pakhtunkhwa province. After Alexander's death in 323 BCE, Porus obtained possession of the region but was murdered by Eudemus in 317 BCE. Eudemus then left the region, and with his departure, Macedonian power collapsed. Sandrocottus (Chandragupta), the founder of the Mauryan dynasty, then declared himself master of the province. His grandson, Ashoka, made Buddhism the dominant religion in ancient Gandhara. After Ashoka's death the Mauryan empire collapse, just as in the west the Seleucid power was rising. The Greek princes of neighboring Bactria (in modern Afghanistan) took advantage of the power vacuum to declare their independence. The Bactrian kingdoms were then attacked from the west by the Parthians and from the north (about 139 BCE) by the Sakas, a Central Asian tribe. Local Greek rulers still exercised a feeble and precarious power along the borderland, but the last vestige of Greek dominion was extinguished by the arrival of the Yueh-chi. The Yueh-Chi were a race of nomads that were themselves forced southwards out of Central Asia by the nomadic Xiongnu people. The Kushan clan of the Yuek Chi seized vast swathes of territory under the rule of Kujula Kadphises. His successors, Vima Takto and Vima Kadphises, conquered the north-western portion of the Indian subcontinent. Vima Kadphises was then succeeded by his son, the legendary Buddhist king Kanishka, who himself was succeeded by Huvishka, and Vasudeva I. After the Saffarids had left in Kabul, the Hindu Shahis had once again been placed into power. The restored Hindu Shahi kingdom was founded by the Brahmin minister Kallar in 843 CE. Kallar had moved the capital into Udabandhapura in modern-day Khyber Pakhtunkhwa from Kabul. Trade had flourished and many gems, textiles, perfumes, and other goods had been exported West. Coins minted by the Shahis have been found all over the Indian subcontinent. The Shahis had built Hindu temples with many idols, all of which were later looted by invaders. The ruins of these temples can be found at Nandana, Malot, Siv Ganga, and Ketas, as well as across the west bank of the Indus river. At its height King Jayapala, the rule of the Shahi kingdom had extended to Kabul from the West, Bajaur to the North, Multan to the South, and the present day India-Pakistan border to the East. Jayapala saw a danger from the rise to power of the Ghaznavids and invaded their capital city of Ghazni both in the reign of Sebuktigin and in that of his son Mahmud. This had initiated the Muslim Ghaznavid and Hindu Shahi struggles. Sebuktigin, however, defeated him and forced Jayapala to pay an indemnity. Eventually, Jayapala refused payment and took to war once more. The Shahis were decisively defeated by Mahmud of Ghazni after the defeat of Jayapala at the Battle of Peshawar on 27 November 1001. Over time, Mahmud of Ghazni had pushed further into the subcontinent, as far as east as modern day Agra. During his campaigns, many Hindu temples and Buddhist monasteries had been looted and destroyed, as well as many people being converted to Islam. Following the collapse of Ghaznavid rule, local Pashtuns of the Delhi Sultanate controlled the region. Several Turkic and Pashtun dynasties ruled from Delhi, having shifted their capital from Lahore to Delhi. Several Muslim dynasties ruled modern Khyber Pakhtunkhwa during the Delhi Sultanate period: the Mamluk dynasty (1206–90), the Khalji dynasty (1290–1320), the Tughlaq dynasty (1320–1413), the Sayyid dynasty (1414–51), and the Lodi dynasty (1451–1526). Tanoli tribe of Ghilji confederation from Ghazni Afghanistan came with Sabuktagin and settled in the mountainous area of Hazara called Tanawal (Amb). Yusufzai Pashtun tribes from the Kabul and Jalalabad valleys began migrating to the Valley of Peshawar beginning in the 15th century, and displaced the Swatis of bhittani confederation ( a predominant Pashtun tribe of Hazara div ) and Dilazak Pashtun tribes across the Indus River to Hazara Division. Mughal suzerainty over the Khyber Pakhtunkhwa region was partially established after Babar, the founder of the Mughal Empire, invaded the region in 1505 CE via the Khyber Pass. The Mughal Empire noted the importance of the region as a weak point in their empire's defenses, and determined to hold Peshawar and Kabul at all cost against any threats from the Uzbek "Shaybanids". He was forced to retreat westwards to Kabul but returned to defeat the Lodis in July 1526, when he captured Peshawar from Daulat Khan Lodi, though the region was never considered to be fully subjugated to the Mughals. Under the reign of Babar's son, Humayun, a direct Mughal rule was briefly challenged with the rise of the Pashtun Emperor, Sher Shah Suri, who began construction of the famous Grand Trunk Road – which links Kabul, Afghanistan with Chittagong, Bangladesh over 2000 miles to the east. Later, local rulers once again pledged loyalty to the Mughal emperor. Yusufzai tribes rose against Mughals during the Yusufzai Revolt of 1667, and engaged in pitched-battles with Mughal battalions in Peshawar and Attock. Afridi tribes resisted Aurangzeb rule during the Afridi Revolt of the 1670s. The Afridis massacred a Mughal battalion in the Khyber Pass in 1672 and shut the pass to lucrative trade routes. Following another massacre in the winter of 1673, Mughal armies led by Emperor Aurangzeb himself regained control of the entire area in 1674, and enticed tribal leaders with various awards in order to end the rebellion. Referred to as the "Father of Pashto Literature" and hailing from the city of Akora Khattak, the warrior-poet Khushal Khan Khattak actively participated in revolt against the Mughals and became renowned for his poems that celebrated the rebellious Pashtun warriors. On 18 November 1738, Peshawar was captured from the Mughal governor Nawab Nasir Khan by the Afsharid armies during the Persian invasion of the Mughal Empire under Nader Shah. The area fell subsequently under the rule of Ahmad Shah Durrani, founder of the Afghan Durrani Empire, following a grand nine-day long assembly of leaders, known as the "loya jirga". In 1749, the Mughal ruler was induced to cede Sindh, the Punjab region and the important trans Indus River to Ahmad Shah in order to save his capital from Afghan attack.In short order, the powerful army brought under its control the Tajik, Hazara, Uzbek, Turkmen, and other tribes of northern Afghanistan. Ahmad Shah invaded the remnants of the Mughal Empire a third time, and then a fourth, consolidating control over the Kashmir and Punjab regions, with Lahore being governed by Afghans. He sacked Delhi in 1757 but permitted the Mughal dynasty to remain in nominal control of the city as long as the ruler acknowledged Ahmad Shah's suzerainty over Punjab, Sindh, and Kashmir. Leaving his second son Timur Shah to safeguard his interests, Ahmad Shah left India to return to Afghanistan. Their rule was interrupted by a brief invasion of the Hindu Marathas, ruled over the region following the 1758 Battle of Peshawar for eleven months till early 1759 when the Durrani rule was re-established. Under the reign of Timur Shah, the Mughal practice of using Kabul as a summer capital and Peshawar as a winter capital was reintroduced, Peshawar's Bala Hissar Fort served as the residence of Durrani kings during their winter stay in Peshawar. Mahmud Shah Durrani became king, and quickly sought to seize Peshawar from his half-brother, Shah Shujah Durrani. Shah Shujah was then himself proclaimed king in 1803, and recaptured Peshawar while Mahmud Shah was imprisoned at Bala Hissar fort until his eventual escape. In 1809, the British sent an emissary to the court of Shah Shujah in Peshawar, marking the first diplomatic meeting between the British and Afghans. Mahmud Shah allied himself with the "Barakzai" Pashtuns, and amassed an army in 1809, and captured Peshawar from his half-brother, Shah Shujah, establishing Mahmud Shah's second reign, which lasted under 1818. Ranjit Singh invaded Peshawar in 1818 but soon lost it to the Afghans. Following the Sikh victory against Azim Khan, half-brother of Emir Dost Mohammad Khan, at the Battle of Nowshera in March 1823, Ranjit Singh captured the Peshawar Valley. An 1835 attempt by Dost Muhammad Khan to re-occupy Peshawar failed when his army declined to engage in combat with the Dal Khalsa. Dost Muhammad Khan's son, Mohammad Akbar Khan engaged with Sikh forces the Battle of Jamrud of 1837, and failed to recapture it. During Sikh rule, an Italian named Paolo Avitabile was appointed an administrator of Peshawar, and is remembered for having unleashed a reign of fear there. The city's famous Mahabat Khan, built in 1630 in the Jeweler's Bazaar, was badly damaged and desecrated by the Sikhs, who also rebuilt the Bala Hissar fort during their occupation of Peshawar. British East India Company defeated the Sikhs during the Second Anglo-Sikh War in 1849, and incorporated small parts of the region into the Province of Punjab. While Peshawar was the site of a small revolt against British during the Mutiny of 1857, local Pashtun tribes throughout the region generally remained neutral or supportive of the British as they detested the Sikhs, in contrast to other parts of British India which rose up in revolt against the British. However, British control of parts of the region was routinely challenged by Wazir tribesmen in Waziristan and other Pashtun tribes, who resisted any foreign occupation until Pakistan was created. By the late 19th century, the official boundaries of Khyber Pakhtunkhwa region still had not been defined as the region was still claimed by the Kingdom of Afghanistan. It was only in 1893 The British demarcated the boundary with Afghanistan under a treaty agreed to by the Afghan king, Abdur Rahman Khan, following the Second Anglo-Afghan War. In 1901, the North-West Frontier Province was formally created by the British administration on the British side of the Durand Line, although the princely states of Swat, Dir, Chitral, and Amb were allowed to maintain their autonomy under the terms of maintaining friendly ties with the British. As the British war effort during World War One demanded the reallocation of resources from British India to the European war fronts, some tribesmen from Afghanistan crossed the Durand Line in 1917 to attack British posts in an attempt to gain territory and weaken the legitimacy of the border. The validity of the Durand Line, however, was re-affirmed in 1919 by the Afghan government with the signing of the Treaty of Rawalpindi, which ended the Third Anglo-Afghan War – a war in which Waziri tribesmen allied themselves with the forces of Afghanistan's King Amanullah in their resistance to British rule. The Wazirs and other tribes, taking advantage of instability on the frontier, continued to resist British occupation until 1920 – even after Afghanistan had signed a peace treaty with the British. British campaigns to subdue tribesmen along the Durand Line, as well as three Anglo-Afghan wars, made travel between Afghanistan and the densely populated heartlands of Khyber Pakhtunkhwa increasingly difficult. The two regions were largely isolated from one another from the start of the Second Anglo-Afghan War in 1878 until the start of World War II in 1939 when conflict along the Afghan frontier largely dissipated. Concurrently, the British continued their large public works projects in the region, and extended the Great Indian Peninsula Railway into the region, which connected the modern Khyber Pakhtunkhwa region to the plains of India to the east. Other projects, such as the Attock Bridge, Islamia College University, Khyber Railway, and establishment of cantonments in Peshawar, Kohat, Mardan, and Nowshera further cemented British rule in the region. During this period, North-West Frontier Province was a "scene of repeated outrages on Hindus." During the independence period there was a Congress-led ministry in the province, which was led by secular Pashtun leaders, including Bacha Khan, who preferred joining India instead of Pakistan. The secular Pashtun leadership was also of the view that if joining India was not an option then they should espouse the cause of an independent ethnic Pashtun state rather than Pakistan. The secular stance of Bacha Khan had driven a wedge between the ulama of the otherwise pro-Congress (and pro-Indian unity) Jamiat Ulema Hind (JUH) and Bacha Khan's Khudai Khidmatgars. The directives of the ulama in the province began to take on communal tones. The ulama saw the Hindus in the province as a 'threat' to Muslims. Accusations of molesting Muslim women were levelled at Hindu shopkeepers in Nowshera, a town where anti-Hindu sermons were delivered by maulvis. Tensions also rose in 1936 over the abduction of a Hindu girl in Bannu. British Indian court ruled against the marriage of a Hindu-converted Muslim girl at Bannu, after the girl's family filed a case of abduction and forced conversion. The ruling was based on the fact that the girl was a minor and was asked to make her decision of conversion and marriage after she reaches the age of majority, till then she was asked to live with a third party. The verdict 'enraged' the Muslims - especially the Pashtun tribesmen. The Dawar Maliks and mullahs left the Tochi far the Khaisora Valley to the south to rouse the Torikhel Wazir. The enraged tribesmen mustered two large lashkars 10,000 strong and battled the Bannu Brigade, with heavy casualties on both sides. Widespread lawlessness erupted as tribesmen blocked roads, overran outposts and ambushed convoys. The British retaliated by sending two columns converging in the Khaisora river valley. They suppressed the agitation by imposing fines and by destroying the houses of the ringleaders, including that of Haji Mirzali Khan (Faqir of Ipi). However, the pyrrhic nature of the victory and the subsequent withdrawal of the troops was credited by the Wazirs to be a manifestation of the power of Mirzali Khan. He succeeded in inducing a semblance of tribal unity, as the British noticed with dismay, among various sections of Tori Khel Wazirs, the Mahsud and the Bettani. He cemented his position as a religious leader by declaring a Jihad against the British. This move also helped rally support from Pashtun tribesmen across the border. Such controversies stirred up anti-Hindu sentiments amongst the province's Muslim population. By 1947 the majority of the ulama in the province began supporting the Muslim League's idea of Pakistan. In June 1947, Mirzali Khan (Faqir of Ipi), Bacha Khan, and other Khudai Khidmatgars declared the Bannu Resolution, demanding that the Pashtuns be given a choice to have an independent state of Pashtunistan composing all Pashtun majority territories of British India, instead of being made to join the new state of Pakistan. However, the British Raj refused to comply with the demand of this resolution, as their departure from the region required regions under their control to choose either to join India or Pakistan, with no third option. By 1947 Pashtun nationalists were advocating for a united India, and no prominent voices advocated for a union with Afghanistan. Immediately prior to the 1947 Partition of India, the British held a referendum in the NWFP to allow voters to choose between joining India or Pakistan. The polling began on 6 July 1947 and the referendum results were made public on 20 July 1947. According to the official results, there were 572,798 registered voters, out of which 289,244 (99.02%) votes were cast in favour of Pakistan, while 2,874 (0.98%) were cast in favour of India. The Muslim League declared the results as valid since over half of all eligible voters backed merger with Pakistan. The then Chief Minister Dr. Khan Sahib, along with his brother Bacha Khan and the Khudai Khidmatgars, boycotted the referendum, citing that it did not have the options of the NWFP becoming independent or joining Afghanistan. Their appeal for boycott had an effect, as according to an estimate, the total turnout for the referendum was 15% lower than the total turnout in the 1946 elections, although over half of all eligible voters backed merger with Pakistan. Bacha Khan pledged allegiance to the new state of Pakistan in 1947, and thereafter abandoned his goals of an independent Pashtunistan and a united India in favour of supporting increased autonomy for the NWFP under Pakistani rule. He was subsequently arrested by Pakistan several times for his opposition to strong centralized rule. He later claimed that "Pashtunistan was never a reality". The idea of Pashtunistan never helped Pashtuns and it only caused suffering for them. He further claimed that the "successive governments of Afghanistan only exploited the idea for their own political goals". After the creation of Pakistan in 1947, Afghanistan was the sole member of the United Nations to vote against Pakistan's accession to the UN because of Kabul's claim to the Pashtun territories on the Pakistani side of the Durand Line. Afghanistan's Loya Jirga of 1949 declared the Durand Line invalid, which led to border tensions with Pakistan, and decades of mistrust between the two states. Afghan governments have also periodically refused to recognize Pakistan's inheritance of British treaties regarding the region. As had been agreed to by the Afghan government following the Second Anglo-Afghan War and after the treaty ending Third Anglo-Afghan War, no option was available to cede the territory to the Afghans, even though Afghanistan continued to claim the entire region as it was part of the Durrani Empire prior the conquest of the region by the Sikhs in 1818. In 1950, Afghan-backed separatists in the Waziristan region declared the independence of Pashtunistan as an independent nation o dr the entirety of the NWFP. A Pashtun tribal jirga, held in Razmak, Waziristan, appointed Mirzali Khan as the President of the National Assembly for Pashtunistan. His popularity among the people of Waziristan declined over the years. He died a natural death in 1960 in Gurwek, Waziristan. Growing participation of Pashtuns in the Pakistani government, however, resulted in the erosion of the support for the secessionist Pashtunistan movement by the end of the 1960s. All the princely states within the boundaries of the NWFP were allowed to maintain certain autonomy following independence in 1947, but In 1969, the autonomous princely states of Swat, Dir, Chitral, and Amb were fully merged into the province. For travelers, the area remained relatively peaceful in the 1960s and '70s. It was the usual route on the Hippie trail overland from Europe to India, with buses running from Kabul to Peshawar. While waiting to cross at the border visitors were however cautioned not to stray from the main road. As a result of the Soviet invasion of Afghanistan in 1979, over five million Afghan refugees poured into Pakistan, mostly choosing to reside in the NWFP (, nearly 3 million remained). The North-West Frontier Province became a base for the Afghan resistance fighters and the Deobandi ulama of the province played a significant role in the Afghan 'jihad', with Madrasa Haqqaniyya becoming a prominent organisational and networking base for the anti-Soviet Afghan fighters. The province remained heavily influenced by events in Afghanistan thereafter. The 1989–1992 Civil war in Afghanistan following the withdrawal of Soviet forces led to the rise of the Afghan Taliban, which had emerged in the border region between Afghanistan, Balochistan, and FATA as a formidable political force. In 2010, the province was renamed "Khyber Pakhtunkhwa." Protests arose among the local Hindkowan, Chitrali, Kohistani and Kalash populations over the name change, as they began to demand their own provinces. The Hindkowans, Kohistanis and Chitralis are last remains of ancient Gandhari people and they jointly protested for preservation of their culture. Seven people were killed and 100 injured in protests on 11April 2011. The Awami National Party sought to rename the province "Pakhtunkhwa", which translates to "Land of Pashtuns" in the Pashto language. The name change was largely opposed by non-Pashtuns, and by political parties such as the Pakistan Muslim League-N, who draw much of their support from non-Pashtun regions of the province, and by the Islamist Muttahida Majlis-e-Amal coalition. Khyber Pakhtunkhwa has been a site of militancy and terrorism that started after the attacks of 11 September 2001, and intensified when the Pakistani Taliban began an attempt to seize power in Pakistan starting in 2004. Armed conflict began in 2004, when tensions, rooted in the Pakistan Army's search for al-Qaeda fighters in Pakistan's mountainous Waziristan area (in the Federally Administered Tribal Areas), escalated into armed resistance. Fighting is ongoing between the Pakistani Army and armed militant groups such as the Tehrik-i-Taliban Pakistan (TTP), Jundallah, Lashkar-e-Islam (LeI), Tehreek-e-Nafaz-e-Shariat-e-Mohammadi (TNSM), al-Qaeda, and elements of organized crime have led to the deaths of over 50,000 Pakistanis since the country joined the U.S-led War on Terror, with Khyber Pakhtunkhwa being the site of most of the conflict. Khyber Pakhtunkhwa is also the main theater for Pakistan's Zarb-e-Azb operation – a broad military campaign against militants located in the province, and neighboring FATA. By 2014, casualty rates in the country as a whole dropped by 40% as compared to 2011–2013, with even greater drops noted in Khyber Pakhtunkhwa, despite the province being the site of a large massacre of schoolchildren by terrorists in December 2014. Khyber Pakhtunkhwa sits primarily on the Iranian plateau and comprises the junction where the slopes of the Hindu Kush mountains on the Eurasian plate give way to the Indus-watered hills approaching South Asia. This situation has led to seismic activity in the past. The famous Khyber Pass links the province to Afghanistan, while the Kohalla Bridge in Circle Bakote Abbottabad is a major crossing point over the Jhelum River in the east. Geographically the province could be divided into two zones: the northern one extending from the ranges of the Hindu Kush to the borders of Peshawar basin and the southern one extending from Peshawar to the Derajat basin. The northern zone is cold and snowy in winters with heavy rainfall and pleasant summers with the exception of Peshawar basin, which is hot in summer and cold in winter. It has moderate rainfall. The southern zone is arid with hot summers and relatively cold winters and scanty rainfall. The Sheikh Badin Hills, a spur of clay and sandstone hills that stretch east from the Sulaiman Mountains to the Indus River, separates Dera Ismail Khan District from the "Marwat" plains of the Lakki Marwat. The highest peak in the range is the limestone Sheikh Badin Mountain, which is protected by the Sheikh Badin National Park. Near the Indus River, terminus of the Sheikh Badin Hills is a spur of limestone hills known as the "Kafir Kot" hills, where the ancient Hindu complex of Kafir Kot is located. The major rivers that criss-cross the province are the Kabul, Swat, Chitral, Kunar, Siran, Panjkora, Bara, Kurram, Dor, Haroo, Gomal and Zhob. Its snow-capped peaks and lush green valleys of unusual beauty have enormous potential for tourism. The climate of Khyber Pakhtunkhwa varies immensely for a region of its size, encompassing most of the many climate types found in Pakistan. The province stretching southwards from the Baroghil Pass in the Hindu Kush covers almost six degrees of latitude; it is mainly a mountainous region. Dera Ismail Khan is one of the hottest places in South Asia while in the mountains to the north the weather is mild in the summer and intensely cold in the winter. The air is generally very dry; consequently, the daily and annual range of temperature is quite large. Rainfall also varies widely. Although large parts of Khyber Pakhtunkhwa are typically dry, the province also contains the wettest parts of Pakistan in its eastern fringe specially in monsoon season from mid June to mid September. Chitral District lies completely sheltered from the monsoon that controls the weather in eastern Pakistan, owing to its relatively westerly location and the shielding effect of the Nanga Parbat massif. In many ways, Chitral District has more in common regarding climate with Central Asia than South Asia. The winters are generally cold even in the valleys, and heavy snow during the winter blocks passes and isolates the region. In the valleys, however, summers can be hotter than on the windward side of the mountains due to lower cloud cover: Chitral can reach frequently during this period. However, the humidity is extremely low during these hot spells and, as a result the summer climate is less torrid than in the rest of the Indian subcontinent. Most precipitation falls as thunderstorms or snow during winter and spring, so that the climate at the lowest elevations is classed as Mediterranean ("Csa"), continental Mediterranean ("Dsa") or semi-arid ("BSk"). Summers are extremely dry in the north of Chitral district and receive only a little rain in the south around Drosh. At elevations above , as much as a third of the snow which feeds the large Karakoram and Hindukush glaciers comes from the monsoon since these elevations are too high to be shielded from its moisture. On the southern flanks of Nanga Parbat and in Upper and Lower Dir Districts, rainfall is much heavier than further north because moist winds from the Arabian Sea are able to penetrate the region. When they collide with the mountain slopes, winter depressions provide heavy precipitation. The monsoon, although short, is generally powerful. As a result, the southern slopes of Khyber Pakhtunkhwa are the wettest part of Pakistan. Annual rainfall ranges from around in the most sheltered areas to as much as in parts of Abbottabad and Mansehra Districts. This region's climate is classed at lower elevations as humid subtropical ("Cfa" in the west; "Cwa" in the east); whilst at higher elevations with a southerly aspect, it becomes classed as humid continental ("Dfb"). However, accurate data for altitudes above are practically nonexistent here, in Chitral, or in the south of the province. The seasonality of rainfall in central Khyber Pakhtunkhwa shows very marked gradients from east to west. At Dir, March remains the wettest month due to frequent frontal cloud-bands, whereas in Hazara more than half the rainfall comes from the monsoon. This creates a unique situation characterized by a bimodal rainfall regime, which extends into the southern part of the province described below. Since cold air from the Siberian High loses its chilling capacity upon crossing the vast Karakoram and Himalaya ranges, winters in central Khyber Pakhtunkhwa are somewhat milder than in Chitral. Snow remains very frequent at high altitudes but rarely lasts long on the ground in the major towns and agricultural valleys. Outside of winter, temperatures in central Khyber Pakhtunkhwa are not so hot as in Chitral. Significantly higher humidity when the monsoon is active means that heat discomfort can be greater. However, even during the most humid periods the high altitudes typically allow for some relief from the heat overnight. As one moves further away from the foothills of the Himalaya and Karakoram ranges, the climate changes from the humid subtropical climate of the foothills to the typically arid climate of Sindh, Balochistan and southern Punjab. As in central Khyber Pakhtunkhwa, the seasonality of precipitation shows a very sharp gradient from west to east, but the whole region very rarely receives significant monsoon rainfall. Even at high elevations, annual rainfall is less than and in some places as little as . Temperatures in southern Khyber Pakhtunkhwa are extremely hot: Dera Ismail Khan in the southernmost district of the province is known as one of the hottest places in the world with temperatures known to have reached . In the cooler months, nights can be cold and frosts remain frequent; snow is very rare, and daytime temperatures remain comfortably warm with abundant sunshine. There are about 29 National Parks in Pakistan and about 18 in Khyber Pakhtunkhwa. The province of Khyber Pakhtunkhwa had a population of 35.53 million at the time of the 2017 Census of Pakistan. The largest ethnic group is the Pashtun, who historically have been living in the areas for centuries. Around 1.5 million Afghan refugees also remain in the province, the majority of whom are Pashtuns followed by Tajiks, Hazaras, Gujjar and other smaller groups. Despite having lived in the province for over two decades, they are registered as citizens of Afghanistan. The Pashtuns of Khyber Pakhtunkhwa observe tribal code of conduct called "Pakhtunwali" which has four high value components called "nang" (honor), "badal" (revenge), "melmastiya" (hospitality) and "nanawata" (rights to refuge). Urdu, being the national and official language, serves as a lingua franca for inter-ethnic communications, and sometime Pashto and Urdu are the second and third languages among communities which speak other ethnic languages. In 2011 the provincial government approved in principle the introduction of the five regional languages of Pashto, Hindko, Saraiki, Khowar and Kohistani as compulsory subjects for the schools in the areas where they are spoken. The majority of the residents of the Khyber Pakhtunkhwa overwhelmingly follows and professes the Sunni principles of Islam while the small followers of Shia principles of Islam are found among the Isma'ilis in the Chitral district. The tribe of Kalasha in southern Chitral still retain an ancient form of Hinduism mixed with Animism. There are very small numbers of residents who are the adherents of Roman Catholicism denomination of Christianity, Hinduism and Sikhism. The Provincial Assembly is a unicameral legislature, which consists of 145 members elected to serve for a constitutionally bounded term of five years. Historically, the province perceived to be a stronghold of the Awami National Party (ANP); a pro-Russian, by procommunist, left-wing and nationalist party. Since the 1970s, the Pakistan Peoples Party (PPP) also enjoyed considerable support in the province due to its socialist agenda. Khyber Pakhtunkhwa was thought to be another leftist region of the country after Sindh. After the nationwide general elections held in 2002, a plurality voting swing in the province elected one of Pakistan's only religiously-based provincial governments led by the ultra-conservative Muttahida Majlis-e-Amal (MMA) during the administration of President Pervez Musharraf. The American involvement in neighboring Afghanistan contributed towards the electoral victory of the Islamic coalition led by Jamaat-e-Islami Pakistan (JeI) whose social policies made the province a ground-swell of anti-Americanism. The electoral victory of MMA was also in context of guided democracy in the Musharraff administration that barred the mainstream political parties, the leftist Pakistan Peoples Party and the centre-right Pakistan Muslim League (N) (PML(N)), whose chairmen and presidents having been barred from participation in the elections. Policy enforcement of a range of social restrictions, though the implementation of strict Shariah was introduced by the Muttahida Majlis-e-Amal government but the law was never fully enacted due to objections of the Governor of Khyber Pakhtunkhwa backed by the Musharraff administration. Restrictions on public musical performances were introduced, as well as a ban prohibiting music to be played in public places as part of the "Prohibition of Dancing and Music Bill, 2005" – which led to the creation of a thriving underground music scene in Peshawar. The Islamist government also attempted to enforce compulsory "hijab" on women, and wished to enforce gender segregation in the province's educational institutions. The coalition further tried to prohibit male doctors from performing ultrasounds on women, and tried to close the province's cinemas. In 2005, the coalition successfully passed the "Prohibition of Use of Women in Photograph Bill, 2005," leading to the removal of all public advertisements that featured women. At the height of Taliban insurgency in Pakistan, the religious coalition lost its grip in the general elections held in 2008, and the religious coalition was swept out of power by the leftist Awami National Party which also witnessed the resignation of President Musharraf in 2008. The ANP government eventually led the initiatives to repeal the major Islamist's social programs, with the backing of the federal government led by PPP in Islamabad. Public disapproval of ANP's leftist program integrated in civil administration with the sounded allegations of corruption as well as popular opposition against religious program promoted by the MMA swiftly shifted the province's leniency towards the right-wing spectrum led by the PTI in 2012. In 2013, the provincial politics shifted towards the right wing, national conservatism when the PTI, led by Imran Khan, was able to form the minority government in coalition with the JeI; the province now serves as the stronghold of the rightist PTI and is perceived as right-wing spectrum of the country. In non-Pashtun areas, such as Abbottabad, and Hazara Division, the PML(N), the centre-right party, enjoys considerable public support over economical and public policy issues and has a substantial vote bank. The executive branch of the Kyber Pakhtunkhwa is led by the Chief Minister elected by popular vote in the Provincial assembly while the Governor, a ceremonial figure representing the federal government in Islamabad, is appointed from the necessary advice of the Prime Minister of Pakistan by the President of Pakistan. The provincial cabinet is then appointed by the Chief Minister who takes the Oath of office from the Governor. In matters of civil administration, the Chief Secretary assists the Chief Minister on executing its right to ensure the writ of the government and the constitution. The Peshawar High Court is the province's highest court of law whose judges are appointed by the approval of the Supreme Judicial Council in Islamabad, interpreting the laws and overturn those they find unconstitutional. Khyber Pakhtunkhwa is divided into seven Divisions – Bannu, Dera Ismail Khan, Hazara, Kohat, Malakand, Mardan, and Peshawar. Each division is split up into anywhere between two and nine districts, and there are 35 districts in the entire province. Below you can find a list showing each district ordered by alphabetical order. A full list showing different characteristics of each district, such as their population, area, and a map showing their location can be found at the main article. Peshawar is the capital and largest city of Khyber Pakhtunkhwa. The city is the most populous and comprises more than one-eighth of the province's population and Bannu NA35 is the largest NA Seat of the province. Khyber Pakhtunkhwa has the third largest provincial economy in Pakistan. Khyber Pakhtunkhwa's share of Pakistan's GDP has historically comprised 10.5%, although the province accounts for 11.9% of Pakistan's total population. The part of the economy that Khyber Pakhtunkhwa dominates is forestry, where its share has historically ranged from a low of 34.9% to a high of 81%, giving an average of 61.56%. Currently, Khyber Pakhtunkhwa accounts for 10% of Pakistan's GDP, 20% of Pakistan's mining output and, since 1972, it has seen its economy grow in size by 3.6 times. Agriculture remains important and the main cash crops include wheat, maize, tobacco (in Swabi), rice, sugar beets, as well as fruits are grown in the province. Some manufacturing and high tech investments in Peshawar has helped improve job prospects for many locals, while trade in the province involves nearly every product. The bazaars in the province are renowned throughout Pakistan. Unemployment has been reduced due to establishment of industrial zones. Workshops throughout the province support the manufacture of small arms and weapons. The province accounts for at least 78% of the marble production in Pakistan. The Sharmai Hydropower Project is a proposed power generation project located in Upper Dir District of Khyber Pakhtunkhwa on the Panjkora River with an installed capacity of 150MW. The project feasibility study was carried out by Japanese consulting company Nippon Koei. The Awami National Party sought to rename the province "Pakhtunkhwa", which translates to "Land of Pakhtuns" in the Pashto language. This was opposed by some of the non-Pashtuns, and especially by parties such as the Pakistan Muslim League-N (PML-N) and Muttahida Majlis-e-Amal (MMA). The PML-N derives its support in the province from primarily non-Pashtun Hazara regions. In 2010 the announcement that the province would have a new name led to a wave of protests in the Hazara region. On 15 April 2010 Pakistan's senate officially named the province "Khyber Pakhtunkhwa" with 80 senators in favour and 12 opposed. The MMA, who until the elections of 2008 had a majority in the Khyber Pakhtunkhwa government, had proposed "Afghania" as a compromise name. After the 2008 general election, the Awami National Party formed a coalition provincial government with the Pakistan Peoples Party. The Awami National Party has its strongholds in the Pashtun areas of Pakistan, particularly in the Peshawar valley, while Karachi in Sindh has one of the largest Pashtun populations in the world—around 7 million by some estimates. In the 2008 election, the ANP won two Sindh assembly seats in Karachi. The Awami National Parbeen instrumental in fighting the Taliban. In the 2013 general election Pakistan Tehreek-e-Insaf won a majority in the provincial assembly and has now formed their government in coalition with Jamaat-e-Islami Pakistan. The following is a list of some of the major NGOs working in Khyber Pakhtunkhwa: Hindko and Pashto folk music are popular in Khyber Pakhtunkhwa and have a rich tradition going back hundreds of years. The main instruments are the rubab, mangey and harmonium. Khowar folk music is popular in Chitral and northern Swat. The tunes of Khowar music are very different from those of Pashto, and the main instrument is the Chitrali sitar. A form of band music composed of clarinets (Surnai) and drums is popular in Chitral. It is played at polo matches and dances. The same form of band music is played in the neighbouring Northern Areas. Sources: This is a chart of the education market of Khyber Pakhtunkhwa estimated by the government in 1998. Khyber Pakhtunkhwa (KPK) province has 9 government medical colleges Cricket is the main sport played in Khyber Pakhtunkhwa. It has produced world-class sportsmen like Shahid Afridi, Younis Khan, Fakhar Zaman and Umar Gul. Besides producing cricket players, Khyber Pakhtunkhwa has the honour of being the birthplace of many world-class squash players, including greats like Hashim Khan, Qamar Zaman, Jahangir Khan and Jansher Khan.
https://en.wikipedia.org/wiki?curid=21950
Naiad (moon) Naiad , also known as Neptune III, is the innermost satellite of Neptune, named after the naiads of Greek legend. Naiad was discovered sometime before mid-September 1989 from the images taken by the "Voyager 2" probe. The last moon to be discovered during the flyby, it was designated S/1989 N 6. The discovery was announced on 29 September 1989, in the IAU Circular No. 4867, and mentions "25 frames taken over 11 days", implying a discovery date of sometime before September 18. The name was given on 16 September 1991. Naiad is irregularly shaped. It is likely that it is a rubble pile re-accreted from fragments of Neptune's original satellites, which were smashed up by perturbations from Triton soon after that moon's capture into a very eccentric initial orbit. Naiad is in a 73:69 orbital resonance with the next outward moon, Thalassa, in a "dance of avoidance". As it orbits Neptune, the more inclined Naiad successively passes Thalassa twice from above and then twice from below, in a cycle that repeats every ~21.5 Earth days. The two moons are about 3540 km apart when they pass each other. Although their orbital radii differ by only 1850 km, Naiad swings ~2800 km above or below Thalassa's orbital plane at closest approach. Thus this resonance, like many such orbital correlations, serves to stabilize the orbits by maximizing separation at conjunction. However, the role of orbital inclination in maintaining this avoidance in a case where eccentricities are minimal is unusual. Since the "Voyager 2" flyby, the Neptune system has been extensively studied from ground-based observatories and the Hubble Space Telescope as well. In 2002–03 the Keck telescope observed the system using adaptive optics and detected easily the largest four inner satellites. Thalassa was found with some image processing, but Naiad was not located. Hubble has the ability to detect all the known satellites and possible new satellites even dimmer than those found by "Voyager 2". On October 8, 2013 the SETI Institute announced that Naiad had been located in archived Hubble imagery from 2004. The suspicion that the loss of positioning was due to considerable errors in Naiad's ephemeris proved correct as Naiad was ultimately located 80 degrees from its expected position.
https://en.wikipedia.org/wiki?curid=21952
Nilo-Saharan languages The Nilo-Saharan languages are a proposed family of African languages spoken by some 50–60 million people, mainly in the upper parts of the Chari and Nile rivers, including historic Nubia, north of where the two tributaries of the Nile meet. The languages extend through 17 nations in the northern half of Africa: from Algeria to Benin in the west; from Libya to the Democratic Republic of the Congo in the centre; and from Egypt to Tanzania in the east. As indicated by its hyphenated name, Nilo-Saharan is a family of the African interior, including the greater Nile Basin and the Central Sahara Desert. Eight of its proposed constituent divisions (excluding Kunama, Kuliak, and Songhay) are found in the modern two nations of Sudan and South Sudan, through which the Nile River flows. In his book "The Languages of Africa" (1963), Joseph Greenberg named the group and argued it was a genetic family. It contains the languages which are not included in the Niger–Congo, Afroasiatic or Khoisan groups. Although some linguists have referred to the phylum as "Greenberg's wastebasket," into which he placed all the otherwise unaffiliated non-click languages of Africa, specialists in the field have accepted its reality since Greenberg's classification. Its supporters accept that it is a challenging proposal to demonstrate but contend that it looks more promising the more work is done. Some of the constituent groups of Nilo-Saharan are estimated to predate the African neolithic. Thus, the unity of Eastern Sudanic is estimated to date to at least the 5th millennium BC. Nilo-Saharan genetic unity would necessarily be much older still and date to the late Upper Paleolithic. This larger classification system is not accepted by all linguists, however. "Glottolog" (2013), for example, a publication of the Max Planck Institute in Germany, does not recognise the unity of the Nilo-Saharan family or even of the Eastern Sudanic branch; Georgiy Starostin (2016) likewise does not accept a relationship between the branches of Nilo-Saharan, though he leaves open the possibility that some of them may prove to be related to each other once the necessary reconstructive work is done. The constituent families of Nilo-Saharan are quite diverse. One characteristic feature is a tripartite singulative–collective–plurative number system, which Blench (2010) believes is a result of a noun-classifier system in the protolanguage. The distribution of the families may reflect ancient water courses in a green Sahara during the Neolithic Subpluvial, when the desert was more habitable than it is today. Within the Nilo-Saharan languages are a number of languages with at least a million speakers (most data from SIL's "Ethnologue" 16 (2009)). In descending order: Some other important Nilo-Saharan languages under 1 million speakers: The total for all speakers of Nilo-Saharan languages according to "Ethnologue" 16 is 38–39 million people. However, the data spans a range from ca. 1980 to 2005, with a weighted median at ca. 1990. Given population growth rates, the figure in 2010 might be half again higher, or about 60 million. The Saharan family (which includes Kanuri, Kanembu, the Tebu languages, and Zaghawa) was recognized by Heinrich Barth in 1853, the Nilotic languages by Karl Richard Lepsius in 1880, the various constituent branches of Central Sudanic (but not the connection between them) by Friedrich Müller in 1889, and the Maban family by Maurice Gaudefroy-Demombynes in 1907. The first inklings of a wider family came in 1912, when Diedrich Westermann included three of the (still independent) Central Sudanic families within Nilotic in a proposal he called "Niloto-Sudanic"; this expanded Nilotic was in turn linked to Nubian, Kunama, and possibly Berta, essentially Greenberg's Macro-Sudanic (Chari–Nile) proposal of 1954. In 1920 G. W. Murray fleshed out the Eastern Sudanic languages when he grouped Nilotic, Nubian, Nera, Gaam, and Kunama. Carlo Conti Rossini made similar proposals in 1926, and in 1935 Westermann added Murle. In 1940 A. N. Tucker published evidence linking five of the six branches of Central Sudanic alongside his more explicit proposal for East Sudanic. In 1950 Greenberg retained Eastern Sudanic and Central Sudanic as separate families, but accepted Westermann's conclusions of four decades earlier in 1954 when he linked them together as "Macro-Sudanic" (later "Chari–Nile", from the Chari and Nile Watersheds). Greenberg's later contribution came in 1963, when he tied Chari–Nile to Songhai, Saharan, Maban, Fur, and Koman-Gumuz and coined the current name "Nilo-Saharan" for the resulting family. Lionel Bender noted that Chari–Nile was a historical artifact of the discovery of the family and did not reflect an exclusive relationship between these languages, and the group has been abandoned, with its constituents becoming primary branches of Nilo-Saharan—or, equivalently, Chari–Nile and Nilo-Saharan have merged, with the name "Nilo-Saharan" retained. When it was realized that the Kadu languages were not Niger–Congo, they were commonly assumed to therefore be Nilo-Saharan, but this remains somewhat controversial. Progress has been made since Greenberg established the plausibility of the family. Koman and Gumuz remain poorly attested and are difficult to work with, while arguments continue over the inclusion of Songhai. Blench (2010) believes that the distribution of Nilo-Saharan reflects the waterways of the wet Sahara 12,000 years ago, and that the protolanguage had noun classifiers, which today are reflected in a diverse range of prefixes, suffixes, and number marking. Dimmendaal (2008) notes that Greenberg (1963) based his conclusion on strong evidence and that the proposal as a whole has become more convincing in the decades since. Mikkola (1999) reviewed Greenberg's evidence and found it convincing. Roger Blench notes morphological similarities in all putative branches, which leads him to believe that the family is likely to be valid. Koman and Gumuz are poorly known and have been difficult to evaluate until recently. Songhay is markedly divergent, in part due to massive influence from the Mande languages . Also problematic are the Kuliak languages, which are spoken by hunter-gatherers and appear to retain a non-Nilo-Saharan core; Blench believes they may have been similar to Hadza or Dahalo and shifted incompletely to Nilo-Saharan. Anbessa Tefera and Peter Unseth consider the poorly attested Shabo language to be Nilo-Saharan, though unclassified within the family due to lack of data; Dimmendaal and Blench consider it to be a language isolate on current evidence. Proposals have sometimes been made to add Mande (usually included in Niger–Congo), largely due to its many noteworthy similarities with Songhay rather than with Nilo-Saharan as a whole, however this relationship is more likely due to a close relationship between Songhay and Mande many thousands of years ago in the early days of Nilo-Saharan, so the relationship is probably more one of ancient contact than a genetic link . The extinct Meroitic language of ancient Kush has been accepted by linguists such as Rille, Dimmendaal, and Blench as Nilo-Saharan, though others argue for an Afroasiatic affiliation. It is poorly attested. There is little doubt that the constituent families of Nilo-Saharan—of which only Eastern Sudanic and Central Sudanic show much internal diversity—are valid groups. However, there have been several conflicting classifications in grouping them together. Each of the proposed higher-order groups has been rejected by other researchers: Greenberg's Chari–Nile by Bender and Blench, and Bender's Core Nilo-Saharan by Dimmendaal and Blench. What remains are eight (Dimmendaal) to twelve (Bender) constituent families of no consensus arrangement. Joseph Greenberg, in "The Languages of Africa", set up the family with the following branches. The Chari–Nile core are the connections that had been suggested by previous researchers. Gumuz was not recognized as distinct from neighboring Koman; it was separated out (forming "Komuz") by Bender (1989). Lionel Bender came up with a classification which expanded upon and revised that of Greenberg. He considered Fur and Maban to constitute a Fur–Maban branch, added Kadu to Nilo-Saharan, removed Kuliak from Eastern Sudanic, removed Gumuz from Koman (but left it as a sister node), and chose to posit Kunama as an independent branch of the family. By 1991 he had added more detail to the tree, dividing Chari–Nile into nested clades, including a Core group in which Berta was considered divergent, and coordinating Fur–Maban as a sister clade to Chari–Nile. Bender revised his model of Nilo-Saharan again in 1996, at which point he split Koman and Gumuz into completely separate branches of Core Nilo-Saharan. Christopher Ehret came up with a novel classification of Nilo-Saharan as a preliminary part of his then-ongoing research into the macrofamily. His evidence for the classification was not fully published until much later (see Ehret 2001 below), and so it did not attain the same level of acclaim as competing proposals, namely those of Bender and Blench. By 2000 Bender had entirely abandoned the Chari–Nile and Komuz branches. He also added Kunama back to the "Satellite–Core" group and simplified the subdivisions therein. He retracted the inclusion of Shabo, stating that it could not yet be adequately classified but might prove to be Nilo-Saharan once sufficient research has been done. This tentative and somewhat conservative classification held as a sort of standard for the next decade. Ehret's updated classification was published in his book "A Historical–Comparative Reconstruction of Nilo-Saharan" (2001). This model is notable in that it consists of two primary branches: Gumuz–Koman, and a "Sudanic" group containing the rest of the families (see "Sudanic languages § Nilo-Saharan" for more detail). Also, unusually, Songhay is well-nested within a core group and coordinate with Maban in a "Western Sahelian" clade, and Kadu is not included in Nilo-Saharan. Note that "Koman" in this classification is equivalent to Komuz, i.e. a family with Gumuz and Koman as primary branches, and Ehret renames the traditional Koman group as "Western Koman". With a better understanding of Nilo-Saharan classifiers, and the affixes or number marking they have developed into in various branches, Blench believes that all of the families postulated as Nilo-Saharan belong together. He proposes the following tentative internal classification, with Songhai closest to Saharan, a relationship that had not previously been suggested: ? Mimi of Decorse By 2015, and again in 2017, Blench had refined the subclassification of this model, linking Maban with Fur, Kadu with Eastern Sudanic, and Kuliak with the node that contained them, for the following structure: Georgiy Starostin (2016), using lexicostatistics based on Swadesh lists, is more inclusive than Glottolog, and in addition finds probable and possible links between the families that will require reconstruction of the protolanguages for confirmation. In addition to the families listed in "Glottolog" (previous section), Starostin considers the following to be established: A relationship of Nyima with Nubian, Nara, and Tama (NNT) is considered "highly likely" and close enough that proper comparative work should be able to demonstrate the connection if it's valid, though it would fall outside NNT proper (see Eastern Sudanic languages). Other units that are "highly likely" to eventually prove to be valid families are: In summary, at this level of certainty, "Nilo-Saharan" constitutes ten distinct and separate language families: Eastern Sudanic, Central Sudanic – Kadu, Maba–Kunama, Komuz, Saharan, Songhai, Kuliak, Fur, Berta, and Shabo. Possible further "deep" connections, which cannot be evaluated until the proper comparative work on the constituent branches has been completed, are: There are faint suggestions that Eastern and Central Sudanic may be related (essentially the old Chari–Nile clade), though that possibility is "unexplorable under current conditions" and could be complicated if Niger–Congo were added to the comparison. Starostin finds no evidence that the Komuz, Kuliak, Saharan, Songhai, or Shabo languages are related to any of the other Nilo-Saharan languages. Mimi-D and Meroitic were not considered, though Starostin had previously proposed that Mimi-D was also an isolate despite its slight similarity to Central Sudanic. In a follow up study published in 2017, Starostin reiterated his previous points as well as explicitly accepting a genetic relationship between Macro-East Sudanic and Macro-Central Sudanic. Starostin names this proposal "Macro-Sudanic" Gerrit J. Dimmendaal suggests the following subclassification of Nilo-Saharan: Dimmendaal et al. consider the evidence for the inclusion of Kadu and Songhay too weak to draw any conclusions at present, whereas whereas there is some evidence that Koman and Gumuz belong together and may be Nilo-Saharan. The large Northeastern division is based on several typological markers: In summarizing the literature to date, Hammarström et al. in "Glottolog" do not accept that the following families are demonstrably related with current research: Proposals for the external relationships of Nilo-Saharan typically center on Niger–Congo: Gregersen (1972) grouped the two together as "Kongo–Saharan". However, Blench (2011) proposed that the similarities between Niger–Congo and Nilo-Saharan (specifically Atlantic–Congo and Central Sudanic) are due to contact, with the noun-class system of Niger–Congo developed from, or elaborated on the model of, the noun classifiers of Central Sudanic. Nilo-Saharan languages present great differences, being a highly diversified group. It has proven difficult to reconstruct many aspects of Proto-Nilo-Saharan. Two very different reconstructions of the proto-language have been proposed by Lionel Bender and Christopher Ehret. The consonant system reconstructed by Bender for Proto-Nilo-Saharan is: The phonemes correspond to coronal plosives, the phonetic details are difficult to specify, but clearly, they remain distinct from and supported by many phonetic correspondences (another author, C. Ehret, reconstructs for the coronal area the sound and which perhaps are closer to the phonetic detail of , see infra) Bender gave a list of about 350 cognates and discussed in depth the grouping and the phonological system proposed by Ch. Ehret. Blench (2000) compares both systems (Bender's and Ehret's) and prefers the former because it is more secure and is based in more reliable data. For example, Bender points out that there is a set of phonemes including implosives /*ɓ, *ɗ, *ʄ, *ɠ/, ejectives /*pʼ, *tʼ, (*sʼ), *cʼ, *kʼ/ and prenasal constants /*mb, *nd, (*nt), *ñɟ, *ŋg/, but it seems that they can be reconstructed only for core groups (E, I, J, L) and the collateral group (C, D, F, G, H), but not for Proto-Nilo-Saharan. Christopher Ehret used a less clear methodology and proposed a maximalist phonemic system: Ehret's maximalist system has been criticized by Bender and Blench. These authors state that the correspondences used by Ehret are not very clear and because of this many of the sounds in the table may only be allophonic variations. Dimmendaal (2016) cites the following morphological elements as stable across Nilo-Saharan: Sample basic vocabulary in different Nilo-Saharan branches: "Note": In table cells with slashes, the singular form is given before the slash, while the plural form follows the slash.
https://en.wikipedia.org/wiki?curid=21953
Nuclear pore A nuclear pore is a part of a large complex of proteins, known as a nuclear pore complex that spans the nuclear envelope, which is the double membrane surrounding the eukaryotic cell nucleus. There are approximately 1,000 nuclear pore complexes (NPCs) in the nuclear envelope of a vertebrate cell, but it varies depending on cell type and the stage in the life cycle. The human nuclear pore complex (hNPC) is a 110 megadalton (MDa) structure. The proteins that make up the nuclear pore complex are known as nucleoporins; each NPC contains at least 456 individual protein molecules and is composed of 34 distinct nucleoporin proteins.About half of the nucleoporins typically contain solenoid protein domains—either an alpha solenoid or a beta-propeller fold, or in some cases both as separate structural domains. The other half show structural characteristics typical of "natively unfolded" or intrinsically disordered proteins, i.e. they are highly flexible proteins that lack ordered tertiary structure. These disordered proteins are the "FG" nucleoporins, so called because their amino-acid sequence contains many phenylalanine—glycine repeats. Nuclear pore complexes allow the transport of molecules across the nuclear envelope. This transport includes RNA and ribosomal proteins moving from nucleus to the cytoplasm and proteins (such as DNA polymerase and lamins), carbohydrates, signaling molecules and lipids moving into the nucleus. It is notable that the "nuclear pore complex" (NPC) can actively conduct 1000 translocations per complex per second. Although smaller molecules simply diffuse through the pores, larger molecules may be recognized by specific signal sequences and then be diffused with the help of nucleoporins into or out of the nucleus. It has been recently shown that these nucleoporins have specific evolutionary conserved features encoded in their sequences that provide insight into how they regulate the transport of molecules through the nuclear pore. Nucleoporin-mediated transport is not directly energy requiring, but depends on concentration gradients associated with the RAN cycle. Each of the eight protein subunits surrounding the actual pore (the outer ring) projects a spoke-shaped protein over the pore channel. The center of the pore often appears to contain a plug-like structure. It is yet unknown whether this corresponds to an actual plug or is merely cargo caught in transit. The entire nuclear pore complex has a diameter of about 120 nanometers in vertebrates. The diameter of the channel ranges from 5.2 nanometers in humans to 10.7 nm in the frog "Xenopus laevis", with a depth of roughly 45 nm. mRNA, which is single-stranded, has a thickness of about 0.5 to 1 nm. The molecular mass of the mammalian NPC is about 124 megadaltons (MDa) and it contains approximately 30 different protein components, each in multiple copies. In contrast, the yeast "Saccharomyces cerevisiae" is smaller, with a mass of only 66 MDa. Small particles (< ~30-60 kDa) are able to pass through the nuclear pore complex by passive diffusion. Larger particles are also able to diffuse passively through the large diameter of the pore, at rates that decrease gradually with molecular weight. Efficient passage through the complex requires several protein factors, and in particular, nuclear transport receptors that bind to cargo molecules and mediate their translocation across the NPC, either into the nucleus (importins) or out of it (exportins). The largest family of nuclear transport receptors are karyopherins, which includes dozens of both importins and exportins; this family is further subdivided to the karyopherin-α and the karyopherin-β subfamilies. Other nuclear transport receptors include NTF2 and some NTF2-like proteins. Three models have been suggested to explain the translocation mechanism: Any cargo with a "nuclear localization signal" (NLS) exposed will be destined for quick and efficient transport through the pore. Several NLS sequences are known, generally containing a conserved sequence with basic residues such as PKKKRKV. Any material with an NLS will be taken up by importins to the nucleus. The classical scheme of NLS-protein importation begins with Importin-α first binding to the NLS sequence, which then acts as a bridge for Importin-β to attach. The importinβ—importinα—cargo complex is then directed towards the nuclear pore and diffuses through it. Once the complex is in the nucleus, RanGTP binds to Importin-β and displaces it from the complex. Then the "cellular apoptosis susceptibility protein" (CAS), an exportin which in the nucleus is bound to RanGTP, displaces Importin-α from the cargo. The NLS-protein is thus free in the nucleoplasm. The Importinβ-RanGTP and Importinα-CAS-RanGTP complex diffuses back to the cytoplasm where GTPs are hydrolyzed to GDP leading to the release of Importinβ and Importinα which become available for a new NLS-protein import round. Although cargo passes through the pore with the assistance of chaperone proteins, the translocation through the pore itself is not energy-dependent. However, the whole import cycle needs the hydrolysis of 2 GTPs and is thus energy-dependent and has to be considered as active transport. The import cycle is powered by the nucleo-cytoplasmic RanGTP gradient. This gradient arises from the exclusive nuclear localization of RanGEFs, proteins that exchange GDP to GTP on Ran molecules. Thus there is an elevated RanGTP concentration in the nucleus compared to the cytoplasm. Some molecules or macromolecular complexes need to be exported from the nucleus to the cytoplasm, as do ribosome subunits and messenger RNAs. Thus there is an export mechanism similar to the import mechanism. In the classical export scheme, proteins with a "nuclear export sequence" (NES) can bind in the nucleus to form a heterotrimeric complex with an exportin and RanGTP (for example the exportin CRM1). The complex can then diffuse to the cytoplasm where GTP is hydrolysed and the NES-protein is released. CRM1-RanGDP diffuses back to the nucleus where GDP is exchanged to GTP by RanGEFs. This process is also energy dependent as it consumes one GTP. Export with the exportin CRM1 can be inhibited by Leptomycin B. There are different export pathways through the NPC for each RNA class that exists. RNA export is also signal mediated (NES); the NES is in RNA-binding proteins (except for tRNA which has no adapter). It is notable that all viral RNAs and cellular RNAs (tRNA, rRNA, U snRNA, microRNA) except mRNA are dependent on RanGTP. Conserved mRNA export factors are necessary for mRNA nuclear export. Export factors are Mex67/Tap (large subunit) and Mtr2/p15 (small subunit). In higher eukaryotes, mRNA export is thought to be dependent on splicing which in turn recruits a protein complex, TREX, to spliced messages. TREX functions as an adapter for TAP, which is a very poor RNA binding protein. However, there are alternative mRNA export pathways that do not rely on splicing for specialized messages such as histones. Recent work also suggest an interplay between splicing-dependent export and one of these alternative mRNA export pathways for secretory and mitochondrial transcripts. As the NPC controls access to the genome, it is essential that it exists in large amounts in stages of the cell cycle where plenty of transcription is necessary. For example, cycling mammalian and yeast cells double the amount of NPC in the nucleus between the G1 and G2 phase of the cell cycle, and oocytes accumulate large numbers of NPCs to prepare for the rapid mitosis that exists in the early stages of development. Interphase cells must also keep up a level of NPC generation to keep the levels of NPC in the cell constant as some may get damaged. Some cells can even increase the NPC numbers due to increased transcriptional demand. There are several theories as to how NPCs are assembled. As the immunodepletion of certain protein complexes, such as the Nup 107–160 complex, leads to the formation of poreless nuclei, it seems likely that the Nup complexes are involved in fusing the outer membrane of the nuclear envelope with the inner and not that the fusing of the membrane begins the formation of the pore. There are several ways that this could lead to the formation of the full NPC. During mitosis the NPC appears to disassemble in stages. Peripheral nucleoporins such as the Nup 153 Nup 98 and Nup 214 disassociate from the NPC. The rest, which can be considered a scaffold proteins remain stable, as cylindrical ring complexes within the nuclear envelope. This disassembly of the NPC peripheral groups is largely thought to be phosphate driven, as several of these nucleoporins are phosphorylated during the stages of mitosis. However, the enzyme involved in the phosphorylation is unknown in vivo. In metazoans (which undergo open mitosis) the NE degrades quickly after the loss of the peripheral Nups. The reason for this may be due to the change in the NPC’s architecture. This change may make the NPC more permeable to enzymes involved in the degradation of the NE such as cytoplasmic tubulin, as well as allowing the entry of key mitotic regulator proteins. In organisms that undergo a semi-open mitosis such as the filamentous fungus "Aspergillus nidulans", 14 out of the 30 nucleoporins disassemble from the core scaffold structure, driven by the activation of the NIMA and Cdk1 kinases that phosphorylate nucleoporins and open nuclear pores thereby widening the nuclear pore and allowing the entry of mitotic regulators. It was shown, in fungi that undergo closed mitosis (where the nucleus does not disassemble), that the change of the permeability barrier of the NE was due to changes within the NPC and is what allows the entry of mitotic regulators. In Aspergillus nidulans NPC composition appears to be effected by the mitotic kinase NIMA, possibly by phosphorylating the nucleoporins Nup98 and Gle2/Rae1. This remodelling seems to allow the protein complex cdc2/cyclinB to enter the nucleus as well as many other proteins, such as soluble tubulin. The NPC scaffold remains intact throughout the whole closed mitosis. This seems to preserve the integrity of the NE.
https://en.wikipedia.org/wiki?curid=21957
Nucleolus The nucleolus (, plural: nucleoli ) is the largest structure in the nucleus of eukaryotic cells. It is best known as the site of ribosome biogenesis. Nucleoli also participate in the formation of signal recognition particles and play a role in the cell's response to stress. Nucleoli are made of proteins, DNA and RNA and form around specific chromosomal regions called nucleolar organizing regions. Malfunction of nucleoli can be the cause of several human conditions called "nucleolopathies" and the nucleolus is being investigated as a target for cancer chemotherapy. The nucleolus was identified by bright-field microscopy during the 1830s. Little was known about the function of the nucleolus until 1964, when a study of nucleoli by John Gurdon and Donald Brown in the African clawed frog "Xenopus laevis" generated increasing interest in the function and detailed structure of the nucleolus. They found that 25% of the frog eggs had no nucleolus and that such eggs were not capable of life. Half of the eggs had one nucleolus and 25% had two. They concluded that the nucleolus had a function necessary for life. In 1966 Max L. Birnstiel and collaborators showed via nucleic acid hybridization experiments that DNA within nucleoli code for ribosomal RNA. Three major components of the nucleolus are recognized: the fibrillar center (FC), the dense fibrillar component (DFC), and the granular component (GC). Transcription of the rDNA occurs in the FC. The DFC contains the protein fibrillarin, which is important in rRNA processing. The GC contains the protein nucleophosmin, (B23 in the external image) which is also involved in ribosome biogenesis. However, it has been proposed that this particular organization is only observed in higher eukaryotes and that it evolved from a bipartite organization with the transition from anamniotes to amniotes. Reflecting the substantial increase in the DNA intergenic region, an original fibrillar component would have separated into the FC and the DFC. Another structure identified within many nucleoli (particularly in plants) is a clear area in the center of the structure referred to as a nucleolar vacuole. Nucleoli of various plant species have been shown to have very high concentrations of iron in contrast to human and animal cell nucleoli. The nucleolus ultrastructure can be seen through an electron microscope, while the organization and dynamics can be studied through fluorescent protein tagging and fluorescent recovery after photobleaching (FRAP). Antibodies against the PAF49 protein can also be used as a marker for the nucleolus in immunofluorescence experiments. Although usually only one or two nucleoli can be seen, a diploid human cell has ten nucleolus organizer regions (NORs) and could have more nucleoli. Most often multiple NORs participate in each nucleolus. In ribosome biogenesis, two of the three eukaryotic RNA polymerases (pol I and III) are required, and these function in a coordinated manner. In an initial stage, the rRNA genes are transcribed as a single unit within the nucleolus by RNA polymerase I. In order for this transcription to occur, several pol I-associated factors and DNA-specific trans-acting factors are required. In yeast, the most important are: UAF (upstream activating factor), TBP (TATA-box binding protein), and core binding factor (CBF)) which bind promoter elements and form the preinitiation complex (PIC), which is in turn recognized by RNA pol. In humans, a similar PIC is assembled with SL1, the promoter selectivity factor (composed of TBP and TBP-associated factors, or TAFs), transcription initiation factors, and UBF (upstream binding factor). RNA polymerase I transcribes most rRNA transcripts 28S, 18S, and 5.8S) but the 5S rRNA subunit (component of the 60S ribosomal subunit) is transcribed by RNA polymerase III. Transcription of rRNA yields a long precursor molecule (45S pre-rRNA) which still contains the ITS and ETS. Further processing is needed to generate the 18S RNA, 5.8S and 28S RNA molecules. In eukaryotes, the RNA-modifying enzymes are brought to their respective recognition sites by interaction with guide RNAs, which bind these specific sequences. These guide RNAs belong to the class of small nucleolar RNAs (snoRNAs) which are complexed with proteins and exist as small-nucleolar-ribonucleoproteins (snoRNPs). Once the rRNA subunits are processed, they are ready to be assembled into larger ribosomal subunits. However, an additional rRNA molecule, the 5S rRNA, is also necessary. In yeast, the 5S rDNA sequence is localized in the intergenic spacer and is transcribed in the nucleolus by RNA pol. In higher eukaryotes and plants, the situation is more complex, for the 5S DNA sequence lies outside the Nucleolus Organiser Region (NOR) and is transcribed by RNA pol III in the nucleoplasm, after which it finds its way into the nucleolus to participate in the ribosome assembly. This assembly not only involves the rRNA, but ribosomal proteins as well. The genes encoding these r-proteins are transcribed by pol II in the nucleoplasm by a "conventional" pathway of protein synthesis (transcription, pre-mRNA processing, nuclear export of mature mRNA and translation on cytoplasmic ribosomes). The mature r-proteins are then imported into the nucleus and finally the nucleolus. Association and maturation of rRNA and r-proteins result in the formation of the 40S (small) and 60S (large) subunits of the complete ribosome. These are exported through the nuclear pore complexes to the cytoplasm, where they remain free or become associated with the endoplasmic reticulum, forming rough endoplasmic reticulum (RER). In human endometrial cells, a network of nucleolar channels is sometimes formed. The origin and function of this network has not yet been clearly identified. In addition to its role in ribosomal biogenesis, the nucleolus is known to capture and immobilize proteins, a process known as nucleolar detention. Proteins that are detained in the nucleolus are unable to diffuse and to interact with their binding partners. Targets of this post-translational regulatory mechanism include VHL, PML, MDM2, POLD1, RelA, HAND1 and hTERT, among many others. It is now known that long noncoding RNAs originating from intergenic regions of the nucleolus are responsible for this phenomenon.
https://en.wikipedia.org/wiki?curid=21958
Nucleon In chemistry and physics, a nucleon is either a proton or a neutron, considered in its role as a component of an atomic nucleus. The number of nucleons in a nucleus defines an isotope's mass number (nucleon number). Until the 1960s, nucleons were thought to be elementary particles, not made up of smaller parts. Now they are known to be composite particles, made of three quarks bound together by the so-called strong interaction. The interaction between two or more nucleons is called internucleon interaction or nuclear force, which is also ultimately caused by the strong interaction. (Before the discovery of quarks, the term "strong interaction" referred to just internucleon interactions.) Nucleons sit at the boundary where particle physics and nuclear physics overlap. Particle physics, particularly quantum chromodynamics, provides the fundamental equations that explain the properties of quarks and of the strong interaction. These equations explain quantitatively how quarks can bind together into protons and neutrons (and all the other hadrons). However, when multiple nucleons are assembled into an atomic nucleus (nuclide), these fundamental equations become too difficult to solve directly (see lattice QCD). Instead, nuclides are studied within nuclear physics, which studies nucleons and their interactions by approximations and models, such as the nuclear shell model. These models can successfully explain nuclide properties, as for example, whether or not a particular nuclide undergoes radioactive decay. The proton and neutron are in a scheme of categories being at once fermions, hadrons and baryons. The proton carries a positive net charge and the neutron carries a zero net charge; the proton's mass is only about 0.13% less than the neutron's. Thus, they can be viewed as two states of the same nucleon, and together form an isospin doublet (). In isospin space, neutrons can be transformed into protons via SU(2) symmetries, and vice versa. These nucleons are acted upon equally by the strong interaction, which is invariant under rotation in isospin space. According to the Noether theorem, isospin is conserved with respect to the strong interaction. Protons and neutrons are best known in their role as nucleons, i.e., as the components of atomic nuclei, but they also exist as free particles. Free neutrons are unstable, with a half-life of around 13 minutes, but they have important applications (see neutron radiation and neutron scattering). Protons not bound to other nucleons are the nuclei of hydrogen atoms when bound with an electron or — if not bound to anything — are ions or cosmic rays. Both the proton and the neutron are composite particles, meaning each is composed of smaller parts, namely three quarks each; although once thought to be so, neither is an elementary particle. A proton is composed of two up quarks and one down quark, while the neutron has one up quark and two down quarks. Quarks are held together by the strong force, or equivalently, by gluons, which mediate the strong force at the quark level. An up quark has electric charge  "e", and a down quark has charge  "e", so the summed electric charges of proton and neutron are +"e" and 0, respectively. Thus, the neutron has a charge of 0 (zero), and therefore is electrically neutral; indeed, the term "neutron" comes from the fact that a neutron is electrically neutral. The masses of the proton and neutron are quite similar: The proton is or , while the neutron is or . The neutron is roughly 0.13% heavier. The similarity in mass can be explained roughly by the slight difference in masses of up quarks and down quarks composing the nucleons. However, a detailed explanation remains an unsolved problem in particle physics. The spin of the nucleon is , which means they are fermions and, like electrons, are subject to the Pauli exclusion principle: No more than one nucleon, e.g. in an atomic nucleus, may occupy the same quantum state. The isospin and spin quantum numbers of the nucleon have two states each, resulting in four combinations in total. An alpha particle is composed of four nucleons occupying all four combinations, namely it has two protons (having opposite spin) and two neutrons (also having opposite spin) and its net nuclear spin is zero. In larger nuclei constituent nucleons, to avoid Pauli exclusion, are compelled to have relative motion which may also contribute to nuclear spin via the orbital quantum number. They spread out into nuclear shells analogous to electron shells known from chemistry. The magnetic moment of a proton, denoted , is (where μ represents the atomic-scale unit of measure called the "nuclear magneton"). The magnetic moment of a neutron is = . These parameters are also important in NMR / MRI scanning. A neutron in free state is an unstable particle, with a half-life around ten minutes. It undergoes decay (a type of radioactive decay) by turning into a proton while emitting an electron and an electron antineutrino. (See the Neutron article for more discussion of neutron decay.) A proton by itself is thought to be stable, or at least its lifetime is too long to measure. This is an important discussion in particle physics, (see Proton decay). Inside a nucleus, on the other hand, combined protons and neutrons (nucleons) can be stable or unstable depending on the nuclide, or nuclear species. Inside some nuclides, a neutron can turn into a proton (producing other particles) as described above; the reverse can happen inside other nuclides, where a proton turns into a neutron (producing other particles) through decay, or electron capture. And inside still other nuclides, both protons and neutrons are stable and do not change form. Both nucleons have corresponding antiparticles: the antiproton and the antineutron, which have the same mass and opposite charge as the proton and neutron respectively, and they interact in the same way. (This is generally believed to be "exactly" true, due to CPT symmetry. If there is a difference, it is too small to measure in all experiments to date.) In particular, antinucleons can bind into an "antinucleus". So far, scientists have created antideuterium and antihelium-3 nuclei. The masses of the proton and neutron are known with far greater precision in atomic mass units (u) than in MeV/c2, due to the relatively poorly known value of the elementary charge. The conversion factor used is 1 u = MeV/c2. The masses of their antiparticles are assumed to be identical, and no experiments have refuted this to date. Current experiments show any percent difference between the masses of the proton and antiproton must be less than and the difference between the neutron and antineutron masses is on the order of MeV/c2. † "The P11(939) nucleon represents the excited state of a normal proton or neutron, for example, within the nucleus of an atom. Such particles are usually stable within the nucleus, i.e. Lithium-6." In the quark model with SU(2) flavour, the two nucleons are part of the ground state doublet. The proton has quark content of "uud", and the neutron, "udd". In SU(3) flavour, they are part of the ground state octet (8) of spin baryons, known as the Eightfold way. The other members of this octet are the hyperons strange isotriplet , , the and the strange isodoublet . One can extend this multiplet in SU(4) flavour (with the inclusion of the charm quark) to the ground state 20-plet, or to SU(6) flavour (with the inclusion of the top and bottom quarks) to the ground state 56-plet. The article on isospin provides an explicit expression for the nucleon wave functions in terms of the quark flavour eigenstates. Although it is known that the nucleon is made from three quarks, , it is not known how to solve the equations of motion for quantum chromodynamics. Thus, the study of the low-energy properties of the nucleon are performed by means of models. The only first-principles approach available is to attempt to solve the equations of QCD numerically, using lattice QCD. This requires complicated algorithms and very powerful supercomputers. However, several analytic models also exist: The Skyrmion models the nucleon as a topological soliton in a non-linear SU(2) pion field. The topological stability of the Skyrmion is interpreted as the conservation of baryon number, that is, the non-decay of the nucleon. The local topological winding number density is identified with the local baryon number density of the nucleon. With the pion isospin vector field oriented in the shape of a hedgehog space, the model is readily solvable, and is thus sometimes called the "hedgehog model". The hedgehog model is able to predict low-energy parameters, such as the nucleon mass, radius and axial coupling constant, to approximately 30% of experimental values. The "MIT bag model" confines three non-interacting quarks to a spherical cavity, with the boundary condition that the quark vector current vanish on the boundary. The non-interacting treatment of the quarks is justified by appealing to the idea of asymptotic freedom, whereas the hard boundary condition is justified by quark confinement. Mathematically, the model vaguely resembles that of a radar cavity, with solutions to the Dirac equation standing in for solutions to the Maxwell equations and the vanishing vector current boundary condition standing for the conducting metal walls of the radar cavity. If the radius of the bag is set to the radius of the nucleon, the bag model predicts a nucleon mass that is within 30% of the actual mass. Although the basic bag model does not provide a pion-mediated interaction, it describes excellently the nucleon-nucleon forces through the 6 quark bag s-channel mechanism using the P matrix. The "chiral bag model" merges the "MIT bag model" and the "Skyrmion model". In this model, a hole is punched out of the middle of the Skyrmion, and replaced with a bag model. The boundary condition is provided by the requirement of continuity of the axial vector current across the bag boundary. Very curiously, the missing part of the topological winding number (the baryon number) of the hole punched into the Skyrmion is exactly made up by the non-zero vacuum expectation value (or spectral asymmetry) of the quark fields inside the bag. , this remarkable trade-off between topology and the spectrum of an operator does not have any grounding or explanation in the mathematical theory of Hilbert spaces and their relationship to geometry. Several other properties of the chiral bag are notable: It provides a better fit to the low energy nucleon properties, to within 5–10%, and these are almost completely independent of the chiral bag radius (as long as the radius is less than the nucleon radius). This independence of radius is referred to as the "Cheshire Cat principle", after the fading of Lewis Carroll's Cheshire Cat to just its smile. It is expected that a first-principles solution of the equations of QCD will demonstrate a similar duality of quark-pion descriptions.
https://en.wikipedia.org/wiki?curid=21961
Nicotinamide Nicotinamide (NAM) is a form of vitamin B3 found in food and used as a dietary supplement and medication. As a supplement, it is used by mouth to prevent and treat pellagra (niacin deficiency). While nicotinic acid (niacin) may be used for this purpose, nicotinamide has the benefit of not causing skin flushing. As a cream, it is used to treat acne. Side effects are minimal. At high doses liver problems may occur. Normal amounts are safe for use during pregnancy. Nicotinamide is in the vitamin B family of medications, specifically the vitamin B3 complex. It is an amide of nicotinic acid. Foods that contain nicotinamide include yeast, meat, milk, and green vegetables. Nicotinamide was discovered between 1935 and 1937. It is on the World Health Organization's List of Essential Medicines, the safest and most effective medicines needed in a health system. Nicotinamide is available as a generic medication and over the counter. In the United Kingdom a 60 g tube costs the NHS about £7.10. Commercially, nicotinamide is made from either nicotinic acid or nicotinonitrile. In a number of countries grains have nicotinamide added to them. Nicotinamide is the preferred treatment for pellagra, caused by niacin deficiency. While niacin may be used, nicotinamide has the benefit of not causing skin flushing. Nicotinamide cream is used as a treatment for acne. It has anti-inflammatory actions, which may benefit people with inflammatory skin conditions. Nicotinamide increases the biosynthesis of ceramides in human keratinocytes in vitro and improves the epidermal permeability barrier in vivo. The application of 2% topical nicotinamide for 2 and 4 weeks has been found to be effective in lowering the sebum excretion rate. Nicotinamide has been shown to prevent "Cutibacterium acnes"-induced activation of toll-like receptor 2, which ultimately results in the down-regulation of pro-inflammatory interleukin-8 production. Nicotinamide at doses of 500 to 1000 mg a day decreases the risk of skin cancers, other than melanoma, in those at high risk. Nicotinamide has minimal side effects. At high doses liver problems may occur. Normal doses are safe during pregnancy. The structure of nicotinamide consists of a pyridine ring to which a primary amide group is attached in the "meta" position. It is an amide of nicotinic acid. As an aromatic compound, it undergoes electrophilic substitution reactions and transformations of its two functional groups. Examples of these reactions reported in "Organic Syntheses" include the preparation of 2-chloronicotinonitrile by a two-step process via the "N"-oxide, from nicotinonitrile by reaction with phosphorus pentoxide, and from 3-aminopyridine by reaction with a solution of sodium hypobromite, prepared "in situ" from bromine and sodium hydroxide. The hydrolysis of nicotinonitrile is catalysed by the enzyme nitrile hydratase from "Rhodococcus rhodochrous" J1, producing 3500 tons per annum of nicotinamide for use in animal feed. The enzyme allows for a more selective synthesis as further hydrolysis of the amide to nicotinic acid is avoided. Nicotinamide can also be made from nicotinic acid. According to "Ullmann's Encyclopedia of Industrial Chemistry", worldwide 31,000 tons of nicotinamide were sold in 2014. Nicotinamide, as a part of the coenzyme nicotinamide adenine dinucleotide (NADH / NAD+) is crucial to life. In cells, nicotinamide is incorporated into NAD+ and nicotinamide adenine dinucleotide phosphate (NADP+). NAD+ and NADP+ are coenzymes in a wide variety of enzymatic oxidation-reduction reactions, most notably glycolysis, the citric acid cycle, and the electron transport chain. If humans ingest nicotinamide, it will likely undergo a series of reactions that transform it into NAD, which can then undergo a transformation to form NADP+. This method of creation of NAD+ is called a salvage pathway. However, the human body can produce NAD+ from the amino acid tryptophan and niacin without our ingestion of nicotinamide. NAD+ acts as an electron carrier that helps with the interconversion of energy between nutrients and the cell's energy currency, adenosine triphosphate (ATP). In oxidation-reduction reactions, the active part of the coenzyme is the nicotinamide. In NAD+, the nitrogen in the aromatic nicotinamide ring is covalently bonded to adenine dinucleotide. The formal charge on the nitrogen is stabilized by the shared electrons of the other carbon atoms in the aromatic ring. When a hydride atom is added onto NAD+ to form NADH, the molecule loses its aromaticity, and therefore a good amount of stability. This higher energy product later releases its energy with the release of a hydride, and in the case of the electron transport chain, it assists in forming adenosine triphosphate. When one mole of NADH is oxidized, 158.2 kJ of energy will be released. Nicotinamide occurs as a component of a variety of biological systems, including within the vitamin B family and specifically the vitamin B3 complex. It is also a critically important part of the structures of NADH and NAD+, where the "N"-substituted aromatic ring in the oxidised NAD+ form undergoes reduction with hydride attack to form NADH. The NADPH/NADP+ structures have the same ring, and are involved in similar biochemical reactions. Nicotinamide occurs in trace amounts mainly in meat, fish, nuts, and mushrooms, as well as to a lesser extent in some vegetables. It is commonly added to cereals and other foods. Many multivitamins contain 20–30 mg of vitamin B3 and it is also available in higher doses. A 2015 trial found nicotinamide to reduce the rate of new nonmelanoma skin cancers and actinic keratoses in a group of people at high risk for the conditions. Nicotinamide has been investigated for many additional disorders, including treatment of bullous pemphigoid nonmelanoma skin cancers. Niacinamide may be beneficial in treating psoriasis. There is tentative evidence for a potential role of nicotinamide in treating acne, rosacea, autoimmune blistering disorders, ageing skin, and atopic dermatitis. Niacinamide also inhibits poly(ADP-ribose) polymerases (PARP-1), enzymes involved in the rejoining of DNA strand breaks induced by radiation or chemotherapy. ARCON (accelerated radiotherapy plus carbogen inhalation and nicotinamide) has been studied in cancer.
https://en.wikipedia.org/wiki?curid=21968
Virtual Boy The Virtual Boy is a 32-bit tabletop portable video game console developed and manufactured by Nintendo. Released in 1995, it was marketed as the first console capable of displaying stereoscopic "3D" graphics. The player uses the console like a head-mounted display, placing their head against the eyepiece to see a red monochrome display. The games use a parallax effect to create the illusion of depth. Sales failed to meet targets, and by early 1996, Nintendo ceased distribution and game development, having released only 22 games for the system. Development of the Virtual Boy lasted four years and began under the project name VR32. Nintendo entered a licensing agreement to use a "3D" LED eyepiece technology developed by US company Reflection Technology. It also built a factory in China to be used only for Virtual Boy manufacturing. Over the course of development, the console technology was downscaled due to high costs and potential health concerns, and an increasing amount of resources were reallocated to the development of the Nintendo 64, Nintendo's next home console. The Virtual Boy was pushed to market in an unfinished state in 1995 to focus on the Nintendo 64. The Virtual Boy was panned by critics and was a commercial failure, even after repeated price drops. Its failure has been attributed to its high price, monochrome display, unimpressive "3D" effect, lack of true portability, and health concerns. Stereoscopic technology in video game consoles reemerged in later years to more success, including Nintendo's 3DS handheld console. The Virtual Boy is Nintendo's second-lowest-selling platform after the 64DD. Since 1985, a red LED eyepiece display technology called Scanned Linear Array was developed by Massachusetts-based Reflection Technology, Inc. (RTI). The company produced a "3D" stereoscopic head-tracking prototype called the Private Eye, featuring a tank game. Seeking funding and partnerships by which to develop it into a commercial technology, RTI demonstrated Private Eye to the consumer electronics market, including Mattel and Hasbro. Sega declined the technology, due to its single-color display and concerns about motion sickness. Nintendo enthusiastically received the Private Eye, as led by Gunpei Yokoi, the general manager of Nintendo's R&D1 and the inventor of the Game & Watch and Game Boy handheld consoles. He saw this as a unique technology that competitors would find difficult to emulate. Additionally, the resulting game console was intended to enhance Nintendo's reputation as an innovator and to "encourage more creativity" in games. Codenaming the project "VR32", Nintendo entered into an exclusive agreement with Reflection Technology, Inc. to license the technology for its displays. While Nintendo's Research & Development 3 division (R&D3) was focused on developing the Nintendo 64, the other two engineering units were free to experiment with new product ideas. Spending four years in development and eventually building a dedicated manufacturing plant in China, Nintendo worked to turn its VR32 vision into an affordable and health-conscious console design. Yokoi retained RTI's choice of red LED because it was the cheapest, and because unlike a backlit LCD, its perfect blackness could achieve a more immersive sense of infinite depth. RTI and Nintendo said a color LCD system would have been prohibitively expensive, retailing for more than . A color LCD system was also said to have caused "jumpy images in tests". With ongoing concerns about motion sickness, the risk of developing lazy eye conditions in young children, and Japan's new Product Liability Act of 1995, Nintendo eliminated the head tracking functionality and converted its headmounted goggle design into a stationary, heavy, precision steel-shielded, tabletop form factor conformant to the recommendation of the Schepens Eye Research Institute. Several technology demonstrations were used to show the Virtual Boy's capabilities. "Driving Demo" is one of the more advanced demos; its 30-second clip shows a first-person view of driving by road signs and palm trees. This demo was shown at E3 and CES in 1995. The startup screen of the Virtual Boy prototype was shown at Shoshinkai in 1994. A "very confident" projection of "sales in Japan of 3 million hardware units and 14 million software units by March of 1996" was given to the press. The demo of what would have been a "Star Fox" game showed an Arwing doing various spins and motions. Cinematic camera angles were a key element, as they are in "Star Fox 2". It was shown at E3 and CES in 1995. As a result of increasing competition for internal resources alongside the flagship Nintendo 64, Virtual Boy software development proceeded without the company's full attention. According to David Sheff's book "Game Over", the increasingly reluctant Yokoi never actually intended for the increasingly downscaled console to be released in its final form. However, Nintendo pushed the Virtual Boy to market so that it could focus development resources on the Nintendo 64. "The New York Times" previewed the Virtual Boy on November 13, 1994. The console was officially announced via press release the next day, November 14. Nintendo promised that Virtual Boy would "totally immerse players into their own private universe." Initial press releases and interviews about the system focused on its technological capabilities, avoiding discussion of the actual games that would be released. The system was formally unveiled the next day at Nintendo's Shoshinkai Show. Nintendo of America showed the Virtual Boy at the Consumer Electronics Show on January 6, 1995. Even with cost-saving measures in place, Nintendo priced the Virtual Boy at a relatively high (the equivalent of approximately US $300 in 2018). Though slightly less expensive and significantly less powerful than a home console, this was considerably more costly than the Game Boy handheld. With seemingly more advanced graphics than Game Boy, the Virtual Boy was not intended to replace the handheld in Nintendo's product line, as use of the Virtual Boy requires a steady surface and completely blocks the player's peripheral vision. "Design News" described the Virtual Boy as the logical evolution of the View-Master 3D image viewer. The Virtual Boy was released on in Japan and on in North America with the launch titles "Mario's Tennis", "Red Alarm", "Teleroboxer", and "Galactic Pinball". It was not released in PAL markets. In North America, Nintendo shipped "Mario's Tennis" with every Virtual Boy sold, as a pack-in game. Nintendo had initially projected sales of 3 million consoles and 14 million games. The system arrived later than other 32-bit systems from Sony, Panasonic, and Sega, but at a lower price. At the system's release, Nintendo of America projected hardware sales of 1.5 million units and software sales numbering 2.5 million by the end of the year. Nintendo had shipped 350,000 units of the Virtual Boy by December 1995, around three and a half months after its North American release. The system made number 5 on "GamePro"s "Top 10 Worst Selling Consoles of All Time" list in 2007. The Virtual Boy had a short market timespan following its disappointing sales. The last official title to be released for the Virtual Boy was "3D Tetris", released on March 22, 1996. Nintendo announced more titles for the system at the Electronic Entertainment Expo in 1996, but these games were never released. The Virtual Boy was discontinued in late 1995 in Japan and early 1996 in North America. Nintendo discontinued the system without fanfare, avoiding an official press release. According to data that Nintendo provided to "Famitsu" after the system's cancellation, 770,000 Virtual Boy units were sold worldwide, including 140,000 in Japan. Nintendo extensively advertised the Virtual Boy and claimed to have spent on early promotional activities. Advertising promoted the system as a paradigm shift from past consoles; some pieces used cavemen to indicate a historical evolution, while others utilized psychedelic imagery. Nintendo targeted an older audience with advertisements for the Virtual Boy, shifting away from the traditional child-focused approach it had employed in the past. Nintendo portrayed the system as a type of virtual reality, as its name indicates. Nintendo also focused on the technological aspects of the new console in its press releases, neglecting to detail specific games. Confronted with the challenge of showing "3-dimensional" gameplay on 2-dimensional advertisements, the company partnered with Blockbuster and NBC in a coordinated effort. A $5 million campaign promoted NBC's fall lineup alongside the Virtual Boy. American viewers were encouraged via television advertisements on NBC to rent the console for US$10 at a local Blockbuster. This made it affordable for a large number of gamers to try the system, and produced 750,000 rentals. Upon returning the unit, renters received a coupon for $10 off the sale of a Virtual Boy from any store. 3,000 Blockbuster locations were included in the promotion, which consisted of sweepstakes with prizes including trips to see the taping of NBC shows. The popular rental system proved harmful to the Virtual Boy's long-term success, allowing gamers to see just how un-immersive the console was. By mid-1996, Blockbuster was selling its Virtual Boy units at $50 each. Taken as a whole, the marketing campaign was commonly thought of as a failure. The central processing unit is a 32-bit RISC chip, making the Virtual Boy Nintendo's first 32-bit system. The Virtual Boy system uses a pair of 1×224 linear arrays (one per eye) and rapidly scans the array across the eye's field of view using flat oscillating mirrors. These mirrors vibrate back and forth at a very high speed, thus the mechanical humming noise from inside the unit. Each Virtual Boy game cartridge has a yes/no option to automatically pause every 15–30 minutes so that the player may take a break before any injuries come to the eyes. One speaker per ear provides the player with audio. The Virtual Boy is the first video game console that was supposed to be capable of displaying stereoscopic "3D" graphics, marketed as a form of virtual reality. Whereas most video games use monocular cues to achieve the illusion of three dimensions on a two-dimensional screen, the Virtual Boy creates an illusion of depth through the effect known as parallax. Like using a head-mounted display, the user looks into an eyepiece made of neoprene on the front of the machine, and then an eyeglass-style projector allows viewing of the monochromatic (in this case, red) image. The Virtual Boy uses an oscillating mirror to transform a single line of LED-based pixels into a full field of pixels. Nintendo claimed that a color display would have made the system too expensive and resulted in "jumpy" images, so the company opted for a monochrome display. The display has a frame rate of 50.273487773488 Hz. To achieve a color display, Nintendo would have used a combination of red, green, and blue LEDs. At the time, blue LEDs were still considerably expensive and would in turn raise the price of the final product. This in combination with the other drawbacks helped influence Nintendo's decision to release the Virtual Boy as a monochrome device. The Virtual Boy was meant to be used sitting down at a table, although Nintendo said it would release a harness for players to use while standing. One of the unique features of the controller is the extendable power supply that slides onto the back. It houses the six AA batteries required to power the system. This can be substituted with a wall adapter, though a "slide-on" attachment is required for the switchout. Once the slide-on adapter is installed, a power adapter can be attached to provide constant power. The Virtual Boy, being a system with a heavy emphasis on three-dimensional movement, needed a controller that could operate along a Z-axis. The Virtual Boy's controller was an attempt to implement dual digital "D-pads" to control elements in the aforementioned "3D" environment. The controller itself is shaped like an "M" (like a Nintendo 64 controller). One holds onto either side of the controller and the part that dips down in the middle contains the battery pack. In more traditional 2-dimensional games, the two directional pads are interchangeable. For others with a more 3D environment, like "Red Alarm", "3D Tetris", or "Teleroboxer", each pad controls a different feature. The symmetry of the controller also allows left-handed gamers to reverse the controls (like the Atari Lynx). During development, Nintendo promised the ability to link systems for competitive play. A Virtual Boy link cable was being worked on at Nintendo as late as the third quarter of 1996. The system's EXT (extension) port, located on the underside of the system below the controller port, was never officially supported since no "official" multiplayer games were ever published. Although "Waterworld" and "Faceball" were intended to use the EXT port for multiplayer play, the multiplayer features in the former were removed and the latter game was canceled. Later a reproduction link cable was made. Nintendo initially showcased three games for the Virtual Boy. It planned to release three titles at launch, and two or three per month thereafter. Given the system's short lifespan, only 22 games were released. Of them, 19 games were released in the Japanese market, while 14 were released in North America. Third party support was extremely limited compared to previous Nintendo platforms. According to Gunpei Yokoi, Nintendo president Hiroshi Yamauchi had dictated that only a select few third party developers be shown the Virtual Boy hardware before its formal unveiling, to limit the risk of poor-quality software appearing on the system. When asked if Virtual Boy games were going to be available for download on the Virtual Console for the Nintendo 3DS, Nintendo of America President Reggie Fils-Aime said he could not answer, as he was unfamiliar with the platform. He noted that, given his lack of familiarity, he would be hard-pressed to make the case for the inclusion of the games on the Virtual Console. The hobbyist community at "Planet Virtual Boy" has developed Virtual Boy software. Two previously unreleased games, "Bound High" and "Niko-Chan Battle" (the Japanese version of "Faceball") were released. The Virtual Boy was overwhelmingly panned by critics and was a commercial failure. The Virtual Boy failed for several reasons, among them "its high price, the discomfort caused by play [...] and what was widely judged to have been a poorly handled marketing campaign." Gamers who previewed the system at the Shoshinkai show in 1994 complained that the "Mario" demo was not realistic enough, was not in full color, and didn't allow for "tracking" (the movement of the image when the player turns his or her head). In the lead editorial of "Electronic Gaming Monthly" following the show, Ed Semrad predicted that the Virtual Boy would have poor launch sales due to the monochrome screen, lack of true portability, unimpressive lineup of games seen at the Shoshinkai show, and the price, which he argued was as low as it could get given the hardware but still too expensive for the experience the system offered. "Next Generation"s editors were also dubious of the Virtual Boy's prospects when they left the show, and concluded their article on the system by commenting, "But who will buy it? It's not portable, it's awkward to use, it's 100% antisocial (unlike multiplayer SNES/Genesis games), it's too expensive and - most importantly - the 'VR' (i.e. 3D effect) doesn't add to the game at all: it's just a novelty." Following its release, reviews of the Virtual Boy tended to praise its novelty but questioned its ultimate purpose and longtime viability. "The Los Angeles Times" described the gameplay as being "at once familiar and strange." The column praised the quality of motion and immersive graphics but considered the hardware itself tedious to use and non-portable. A later column by the same reviewer found the system to be somewhat asocial, although it held out hope for the console's future. Reviewing the system shortly after its North American launch, "Next Generation" said, "Unusual and innovative, the Virtual Boy can be seen as a gamble in the same way that the Game Boy was, but it's a lot harder to see the VB succeeding to the same world-conquering extent that the Game Boy did." They elaborated that while the sharp display and unique 3D effect are impressive, aspects such as the monochrome display and potential vision damage to young gamers severely limit the system's appeal. They added that the software library was decent, but failed to capitalize on Nintendo's best-selling franchises ("Zelda" and "Metroid" games were absent, and the "Mario" games were not in the same style as the series's most successful installments) and lacked a system seller to compare with the Game Boy's "Tetris". While Nintendo had promised a virtual reality experience, the monochrome display limits the Virtual Boy's potential for immersion. Reviewers often considered the 3-dimensional features a gimmick, added to games that were essentially 2- or even 1-dimensional. "The Washington Post" felt that, even when a game gives the impression of 3-dimensionality, it suffers from "hollow vector graphics." Yokoi, the system's inventor, said the system did best with action and puzzle games, although those types of games provided only minimal immersion. Multiple critics lamented the absence of head-tracking in the Virtual Boy hardware. Critics found that, as a result, players were unable to immerse themselves in the game worlds of Virtual Boy games. Instead, they interacted with the fictional worlds in the manner of any traditional 2-dimensional game (that is, via a controller). Boyer said the console "struggles to merge the two distinct media forms of home consoles and virtual reality devices." While the device employed virtual reality techniques, it did so via the traditional home console. No feedback from the body was incorporated into gameplay. Many reviewers complained of painful and frustrating physiological symptoms when playing the Virtual Boy. Bill Frischling, writing for "The Washington Post", experienced "dizziness, nausea and headaches." Reviewers attributed the problems to both the monochromatic display and uncomfortable ergonomics. Several prominent scientists concluded that the long-term side effects could be more serious, and articles published in magazines such as "Electronic Engineering Times" and CMP Media's "TechWeb" speculated that using any immersive headset such as the Virtual Boy could cause sickness, flashbacks, and even permanent brain damage. Nintendo, in the years after Virtual Boy's demise, has been frank about its failure. Howard Lincoln, chairman of Nintendo of America, said flatly that the Virtual Boy "just failed." According to "Game Over", Nintendo laid the blame for the machine's faults directly on its creator, Gunpei Yokoi. The commercial failure of the Virtual Boy was said by members of the video game press to be a contributing factor to Yokoi's withdrawal from Nintendo, although he had planned to retire years prior and finished another more successful project for the company, the Game Boy Pocket, which was released shortly before his departure. According to his Nintendo and Koto colleague Yoshihiro Taki, Yokoi had originally decided to retire at age 50 to do as he pleased but had simply delayed it. Nintendo held that Yokoi's departure was "absolutely coincidental" to the market performance of any Nintendo hardware. "The New York Times" maintained that Yokoi kept a close relationship with Nintendo. After leaving Nintendo, Yokoi founded his own company, Koto, and collaborated with Bandai to create the WonderSwan, a handheld system competing with the Game Boy. The commercial failure of the Virtual Boy reportedly did little to alter Nintendo's development approach and focus on innovation. While the console itself is said to have failed in many regards, its focus on peripherals and haptic technology reemerged in later years. Because Nintendo shipped fewer than 800,000 Virtual Boy units worldwide, it is considered a valuable collector's item. The original inventor, Reflection Technology, Inc., was reportedly financially "devastated" by the Virtual Boy's performance, with dwindling operations by 1997. With the launch of the Nintendo 3DS console in 2011, Nintendo released a handheld gaming console with autostereoscopic "3D" visuals; meaning that the console produces the desired depth effects without any special glasses and is portable. In the period leading up to the release of the Nintendo 3DS, Shigeru Miyamoto discussed his view of the issues with the Virtual Boy. One was the actual use of the three-dimensional effects; while it was designed to render wireframe graphics, the effects are generally used to separate two-dimensional games into different planes separated by depth. Further, Miyamoto stated that the graphics are not as appealing, and while developing the Nintendo 64, had ruled out the use of wireframe graphics as too sparse to draw player characters. Finally, he stated that he perceived the Virtual Boy as a novelty that should not have used the Nintendo license so prominently. In February 2016, Tatsumi Kimishima stated that Nintendo was "looking into" virtual reality but also explained that it would take more time and effort for them to assess the technology, and in a February 2017 interview with Nikkei, he stated that the company was "studying" VR, and would add it to the Nintendo Switch once it is figured out how users can play for long durations without any issues. Nintendo introduced a VR accessory for the Switch as part of Labo, a line of player-assembled cardboard toys leveraging the console's hardware and Joy-Con controllers. In this case, the console is used as a display for the headset, similarly to VR viewers for smartphones (such as Cardboard). Nintendo has referenced the Virtual Boy in other games, such as "Tomodachi Life"—where a trailer for the life simulation game included a scene of several Mii characters worshiping the console. In "Luigi's Mansion 3", Luigi uses a device by Professor E. Gadd known as the "Virtual Boo" to access maps and other information in-game (succeeding the use of devices referencing the Game Boy Color and first-generation Nintendo DS in previous installments). This interface is rendered in the console's red and black color scheme, while E. Gadd is shown to be optimistic that the device would "fly off the shelves".
https://en.wikipedia.org/wiki?curid=21970